

Opinion: Hooray, hooray for FPGA

DiGiCo’s technical director, John Stadius, explains why his R&D team have long championed the use of FPGA technology

Since the launch of the first of its SD range of digital mixing consoles, DiGiCo’s technical director, John Stadius, and his R&D team have championed the use of FPGA technology over the available alternatives. He explains what sent him down this path – and why he is confident it’s still the right one for audio mixing

We used Analog Devices SHARC DSPs in our first DiGiCo digital console, the D5 Live, when the company was formed in 2002. We’d been using them since 1996 in the post-production consoles made by Soundtracs, the company bought and built on by DiGiCo, so it was a technology we were very familiar with. It was certainly the right one to continue with at that point, but we were already investigating different processing options and deciding what would be the right thing for DiGiCo’s products in the future.

We decided on FPGAs (field-programmable gate array chips), which, combined with our proprietary Stealth Digital Processing (DiGiCo’s first use of a single large-scale FPGA for audio processing, and another significant development for us), became the core of the SD range of consoles and remains the heart of the audio in our new S21 console.

Today there are three technologies commonly used to process audio: DSP, FPGA and, more recently, Intel or similar X86 processors such as the i7. They all do a similar job, so why did DiGiCo choose the FPGA approach over the other two, and why is it still the best approach?

I’ve already mentioned that DSP has been around the longest. FPGAs suitable for use as audio processors became available at the start of the 21st century, around the same time as the first iterations of the i7 style of processor. Over the years, all three technologies have progressed: SHARC DSPs from Analog Devices are now on their fourth generation, and the i7 is currently on its fifth generation, soon to be sixth. FPGA vendors have made similar progress in hardware, and the tools for compiling designs onto the devices have also become more efficient.

On their own, DSP and Intel chips are similar in the way they process audio: ie one thing at a time. Using multiple DSPs means a lot of the work can be done in parallel, in a similar way to FPGA, but the audio engine becomes very complicated and large – 40 DSP chips take up a lot of space and are often spread over multiple printed circuit boards with interconnections. This can make it less reliable than a single PCB design with one or two chips, such as our Stealth engine. It will also require a lot more power than a single FPGA, creating a lot more heat, which can create other design risks and challenges.
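To make the scaling argument concrete, here is a rough back-of-envelope sketch. All the numbers (channel count, operations per channel, DSP-slice count, clock rate) are illustrative assumptions for the sake of arithmetic, not DiGiCo specifications.

```python
# Illustrative sketch: why sequential per-sample processing struggles to
# scale with channel count, while parallel FPGA hardware does not.
# All figures below are assumed for illustration only.

SAMPLE_RATE = 96_000        # samples per second
CHANNELS = 128              # mixing channels
OPS_PER_CHANNEL = 400       # multiply-accumulates per sample (EQ, dynamics, mix)

# A sequential processor must execute every operation one after another,
# so the required throughput grows linearly with channel count:
sequential_macs = SAMPLE_RATE * CHANNELS * OPS_PER_CHANNEL
print(f"Sequential load: {sequential_macs / 1e9:.1f} GMAC/s on one core")

# An FPGA instead dedicates hardware multipliers to the work. Assuming a
# device with 512 DSP slices, each completing one MAC per cycle at 200 MHz:
DSP_SLICES = 512
CLOCK_HZ = 200_000_000
fpga_macs = DSP_SLICES * CLOCK_HZ
print(f"FPGA capacity: {fpga_macs / 1e9:.1f} GMAC/s across parallel slices")
```

With these assumed figures, one sequential core would need roughly 4.9 GMAC/s sustained, while the parallel fabric offers over 100 GMAC/s of headroom; the point is not the exact numbers but that the sequential load scales with channels while the parallel capacity is fixed in hardware.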

So what about the Intel approach? The i7 was designed for desktop PC-type applications, so it doesn’t have the flexible I/O functionality for interfacing to audio devices. The I/O is pretty much limited to PCIe, Ethernet and USB, making interfacing to standard (non-network) audio interfaces complex. For example, to create a MADI port, you may need a special PCIe interface from the CPU to a dedicated MADI block. This is expensive and requires a lot of hardware and special driver software. With FPGAs, you just connect their pins to a simple buffer chip and you have a MADI I/O. Simplicity in a design like this often means more long-term reliability and lower latency.

The Intel i7 and similar processors are not easy to scale down, as the complexity of their I/O remains the same for all sizes of audio engine. They also have a much higher power consumption than the equivalent FPGA design, which means an active cooling system is normally required, as well as an operating system to make them work, and that can be very time-consuming and expensive to develop. You could use a third-party OS but, again, that’s an additional cost and complication.

In the demanding live audio world, boot time is also an important factor. When you use an FPGA, the device can be up and passing audio in a second or two. This is particularly important after a power cut. X86 processors require the BIOS to boot first, followed by the operating system, before finally allowing the audio to flow. This is simply too long when you need to get up and running fast.

What’s more, the FPGA approach requires only a single PCB, which can sit within the console surface and share the same power supplies, reducing the risk of external connection failure.

Our designs have to stand the test of time so our users and clients can get a return on their investment. In contrast, Intel processors generally have a limited product life cycle; they change models every few years. This means that a console using them will have to have its hardware frequently updated or risk being left behind. Conversely, although FPGA devices also continue to evolve, their manufacturers will supply current versions for between 10 and 20 years, and those devices still benefit from enhancements to the development tools. Our product upgrade programme shows how effective this is, with more features added over the lifetime of our products.

It’s often the case that the engine control software in a PC-based console will be running on one of the cores of the CPU in the audio engine. This normally runs on a non-real-time operating system, such as Windows or Linux. If the control software crashes and has to be rebooted, the audio will be lost. The only way around this is to have two processors running separately, again increasing complexity and cost. Designing one processor for audio and application control with no fail-safe is like having all your eggs in one basket.

It took us five years to make FPGA work exactly how we wanted it to, but it has so many advantages: scalability; a very fast boot time, delivering almost instant audio; future-proofing, with designs that can migrate from one generation to another using common design tools and don’t require an operating system to run; lower latency; low power consumption; and a much simpler audio engine to manufacture.

So if they are so good, why doesn’t everyone use them? The simple answer is the initial development time. DiGiCo took around five years to develop its first FPGA-based product, and it requires a special skillset to achieve this. Programming in a high-level language for X86/i7 processors does get a product to market quicker but, as we’ve seen, it has disadvantages.

One console manufacturer implied recently, when discussing their new X86-based product, that it was simply unfeasible to deliver the number of channels and processing power that a console would require using standard DSP or FPGA.

Maybe for them – but Calrec, part of the Audiotonix Group, has been doing more than their quoted channel number for years by using a low number of FPGAs in extremely demanding applications.

There is no doubt that using FPGAs has allowed us to dramatically expand the capabilities of our entire SD range without making any changes to the basic hardware of the product: all upgrades and expansions have been achieved solely through firmware and software. This is all down to the use of FPGAs, and is the simple reason why we will continue to use them for the foreseeable future.