Web Bonus Audio Semiconductors 2008

by Frank Wells.

Each February, Pro Sound News queries audio design engineers about their semiconductor usage. These building blocks of modern audio electronics impact an audio product's performance, quality and value, as we learn from our designers' replies. What follows are the full replies from the engineers responding to this year’s survey. Answering our questions this year are:

Name: Bret Costin / Paul Messick
Title: Director of Engineering / Vice President of Engineering
Company: M-Audio

Name: Ed Meitner
Title: VP Engineering
Company: emm labs inc.

Name: Tom Duffy
Title: Engineering Manager
Company: TEAC Corp / TASCAM Division

Name: Bruno Putzeys
Title: Chief Engineer R&D
Company: Hypex Electronics

Name: Tony Rodrigues
Title: Vice President, Technology and Business Development
Company: The Stanton Group

Name: Nathan O’Neill
Title: Director of Engineering
Company: Loud Technologies, Inc. (aka Mackie Designs)

Name: John Siau
Title: VP
Company: Benchmark Media Systems, Inc.

Name: Ian Dennis
Title: Co-owner/co-CEO
Company: Prism Sound

Analog
1. Analog components, general:
a) For analog design, have you discovered any new chips recently?

Ed Meitner: Op amps got a lot better.

Costin/Messick: No

Tony Rodrigues: Burr Brown Analog Volume control

Nathan O’Neill: We’ve investigated several analog ICs for mic-pres from TI, digital control of analog gain (TI and Wolfson), and general audio switching (ADI) – but none in the last 12 months that are new to the market.

John Siau: I now use the LM4562 op-amp in circuits where I need low noise and low THD & IMD while driving low-impedance loads. In many applications the LM4562 fills the gap between two of my favorite op-amps: the AD797 and the NE5532. All three op-amps have outstanding noise performance as well as very low IMD. All of these op-amps have low THD at high audio frequencies, while most op-amps exhibit a rise in THD with frequency. The NE5532 is an old design but it is still one of the very best audio op-amps (for the reasons outlined above).

Ian Dennis: Like everyone else, I’m forever drooling over new analog components with a view to improving signal-path performance. Any improvements in cost or power are a bonus for me.

Ian Dennis: Most of the recent improvements in OPAs seem to be focussed on low-voltage and single-rail devices, but for high-end applications it can be hard to reap the benefits with a consequently low signal level. Of course, if you're dealing with pro signal levels in and out, you can't use low-voltage devices in those positions anyway – and if you need the high-voltage rails, you may as well use them throughout if you can. If you only need consumer levels (or for things like converter buffers), take a look at some of the new low-voltage parts: the good ones maintain excellent performance close to the rails, and you can save a lot of power if you can drop your analog rail voltages (power savings are always interesting to me – not, sad to say, for green reasons but because they give you more power to use elsewhere!) Check out the Maxim MAX4475 family of single-rail OPAs: they have good noise and distortion performance, without the input cross-over problems which make some of their counterparts unfeasible in non-inverting configurations with low gains.

Among the new high-voltage OPAs, the OPA211 from TI is impressive, with 1.1nV/rtHz input noise, fabulous linearity – in fact excellent performance in almost every department. I wondered if this might be the holy grail: the perfect OPA for every season, until I remembered that such a thing Does Not Exist! Anyhow, I’m eagerly awaiting the dual version. Another pretty useful device is the OPA228 (also from TI, duals available) – it’s been around for a while now, but I’ve only just begun to use it.
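
To put figures like 1.1nV/rtHz in context, here is a rough back-of-the-envelope sketch in Python showing what a flat input noise density implies over a 20 kHz audio bandwidth. The AD797 and NE5532 densities below are datasheet-typical values quoted from memory, so treat them as assumptions rather than measurements.

import math

def total_noise_vrms(density_nv_rthz, bandwidth_hz=20_000.0):
    # Integrate a flat (white) input noise density over the audio band.
    return density_nv_rthz * 1e-9 * math.sqrt(bandwidth_hz)

def vrms_to_dbu(vrms):
    # 0 dBu is defined as 0.7746 Vrms.
    return 20.0 * math.log10(vrms / 0.7746)

for part, density in [("OPA211", 1.1), ("AD797", 0.9), ("NE5532", 5.0)]:
    vn = total_noise_vrms(density)
    print(f"{part}: {density} nV/rtHz -> {vn * 1e9:.0f} nVrms ({vrms_to_dbu(vn):.1f} dBu) over 20 kHz")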

b) Are you using different components than you have traditionally?
Ed Meitner: No.

Costin/Messick: There has been little change.

Bruno Putzeys: Most of my design work uses discrete parts. The performance evolution of power MOSFETs is about as staggering as that of analogue chips.

Nathan O’Neill: Since we haven’t found any analog parts that meet our performance or pricing needs in an IC form, we have been concentrating on developing our own discrete analog designs, some of which use hybrid analog/digital techniques for control.

John Siau: The LM4562 has replaced some NE5532s and some AD797s. All of our components are SMT (surface mount) and these are decreasing in size and increasing in precision. Metal film SMT resistors are now readily available in 0.1% tolerance and we use these in our precision differential amplifier circuits.

Ian Dennis: Actually, I haven’t adopted many new signal-path components recently. I think this is because I’m basically a Luddite. I’m never keen to change a proven circuit block without a clear performance advantage, and that obviously gets more and more difficult as technology rolls on. If there’s some other temptation such as lower power or cost, I’m paranoid to ensure that performance isn’t compromised in some unexpected way.

c) If so, what are they, and in what areas are you seeing improvement (ease of design, performance aspects, etc.)?
Nathan O’Neill: Mainly performance aspects – lower noise transistors, lower on-resistance switches.

John Siau: Small component sizes allow compact layouts of critical circuits. These compact layouts are less susceptible to EMI. The improved tolerances allow smaller ranges on trim controls, and this leads to improved long-term stability.
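
Siau's point about tolerances is easy to quantify: in a four-resistor differential amplifier, worst-case common-mode rejection is set by resistor matching. Here is a minimal Python sketch using the standard worst-case approximation CMRR = (1 + G)/(4t), with illustrative values rather than any figures from Benchmark's actual circuits.

import math

def worst_case_cmrr_db(gain, tolerance):
    # Worst-case CMRR of a four-resistor difference amplifier when it is
    # limited purely by resistor mismatch (standard approximation).
    return 20.0 * math.log10((1.0 + gain) / (4.0 * tolerance))

for tol in (0.01, 0.001, 0.0001):  # 1%, 0.1%, 0.01% resistors, unity gain
    print(f"{tol * 100:g}% parts: ~{worst_case_cmrr_db(1.0, tol):.0f} dB worst-case CMRR")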

2. Modular Analog Components:
a) Are you using any of the modular analog building blocks—mic pres, VCAs, dynamics engines and balanced drivers and receivers on a chip, for instance?

Ed Meitner: We do not use any.

Costin/Messick: Yes.

Tony Rodrigues: See Above.

Nathan O’Neill: We do use VCAs and RMS detectors as part of compressor circuits and for multiple channel VCA control… but in general we prefer the design flexibility of discrete circuits, which often give better performance too.

John Siau: We have looked at the new balanced receivers but do not use them because they do not match the performance of our precision balanced receivers. For performance reasons, we build our own microphone preamplifier.

Ian Dennis: I'm not sniffy about using prefabricated blocks if they provide genuinely startling performance. And for some analog functions they can do this: there are certain advantages that a monolithic part has over a discrete solution, such as temperature consistency across component groups, laser-trimming etc. On the other hand, it's not too often that a prefab block is truly special. Looking at the wider market, I think that the improved quality of prefabs may well narrow the gap between the real high-end and the rest: the designers at places like TI and Analog Devices really know what they're doing, and very often their $5 chips can give much better results than an elaborate, organically-grown discrete design. But wait, what am I saying? May the lord forgive me… In the meantime, take a walk around the PGA2500 (TI again) if you haven't already. It puts a lot of well-respected discrete mic pre front-ends in the shade. On the other hand, I remain unconvinced by the latest line driver/receiver offerings; for some reason, their dynamic performance still falls short of a decent discrete solution.

b) If so, elaborate how and why.
Costin/Messick: We have been using new modular mic pres to reduce cost. We have evaluated VCA’s for various applications.

Tom Duffy: I am not directly involved with Analog circuit design, but my colleagues report that several companies, including “THAT”, approach us with their new designs. For us as a company that manufactures a wide variety of products, picking a whole new set of parts for each new design is counter-productive, as we lose economies of scale and end up with odd numbers of dead stock for less “popular” parts. There has to be a very strong reason to go with a new part where an existing design, with its known issues, would suffice.
When an analog components company can provide an evaluation kit, that certainly makes it easier to jury-rig it in place of an existing part and compare performance directly.

Bruno Putzeys: No. I seem to be hopping between two extremes. One is where I need the absolute best performance possible and then I can still outdo chips using discretes. The other is where I need rock bottom cost and then it’s either very simple discrete circuits or 5532 wherever it fits…

Tony Rodrigues: The chip allowed for multi-channel volume control instead of a traditional ganged potentiometer, and allowed for better balance of the outputs versus a traditional analog pot.

Nathan O’Neill: RMS detectors in particular are fairly intensive in terms of discrete design, and the chips from THAT Corporation give us real-estate savings and have good performance for our needs. VCAs follow the same pattern in terms of real-estate savings and function.
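
For readers unfamiliar with what an RMS detector does in a compressor sidechain, the concept is simple even though the discrete implementation isn't: square the signal, average it with a chosen time constant, then take the square root. A toy digital sketch in Python follows; it is purely illustrative and has nothing to do with the internals of THAT's analog parts.

import math

def rms_detector(samples, fs=48_000.0, time_constant_s=0.010):
    # One-pole averaged RMS level, computed sample by sample.
    alpha = 1.0 - math.exp(-1.0 / (fs * time_constant_s))
    mean_square, levels = 0.0, []
    for x in samples:
        mean_square += alpha * (x * x - mean_square)
        levels.append(math.sqrt(mean_square))
    return levels

# A full-scale 1 kHz sine should settle near 1/sqrt(2), i.e. about 0.707.
tone = [math.sin(2 * math.pi * 1000 * n / 48_000) for n in range(4800)]
print(round(rms_detector(tone)[-1], 3))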

4. Component Sourcing:
a) Are there components that you have traditionally used (analog or digital ICs or other semiconductors, ancillary components other than chips) that are becoming hard to source because of the EU's RoHS initiative or other reasons?

Ed Meitner: Not so far.

Costin/Messick: Yes, Vactrols commonly used in compressors have become impossible to source.

Bruno Putzeys: No problem with RoHS so far. Worst headaches so far have proved to be semiconductors that are made “obsolete” without a replacement. Usually linked to fabs phasing out processes.

Tony Rodrigues: No, most everything is RoHS now.

Nathan O’Neill: Most analog and digital parts are OK; however, the Vac-Tec parts commonly used in some simple limiters contain cadmium and are therefore not RoHS compliant, nor are gallium arsenide infrared-emitting diodes, which we don't use.

John Siau: RoHS has not been a problem, but we have seen recurring production problems with the AD797 and the BUF634.

Ian Dennis: Our main problem with RoHS wasn't so much the temporary shortages and accompanying price-hikes which heralded the 'upgrade' of parts to RoHS compliance. Much worse for us was the total withdrawal of several minority-interest older parts which the manufacturers were happy enough to carry on making for a handful of small customers like us, until it came to the point of investing in a RoHS version.

b) If so, how are you responding to the shortfall?
Costin/Messick: Discontinued selling the product in the EU, evaluating other approaches to compression.

Tom Duffy: TASCAM completed the changeover to fully RoHS-compliant manufacturing for most products well before the cutoff date, and only a couple of products had a gap in production as we switched over 100%. Certainly some components caused more severe problems, resulting in end-of-lifing some products. The most obvious example would be tape deck heads, which traditionally contain hunks of lead.

Nathan O’Neill: We chose to redesign those circuits with a different limiter topology.

John Siau: We maintain a large safety buffer on any critical components that are only available from a single source.

Digital
5. What criteria determine your selection of A/D and D/A conversion parts?

Costin/Messick: Price, performance, power consumption. Occasionally there are clock mode or digital interface requirements to consider.

Tom Duffy: For the majority of our products, price per channel is the most important criterion. Across the market as a whole, we are seeing the cachet of "24-bit" quality being lost. Many audio products on display at the NAMM show have the ubiquitous USB port for audio, and it only takes a little digging to find they are using a 16-bit/48kHz USB converter. 24-bit and high sample rates no longer matter to a sizeable subset of our customers, even though it pains me to say it.

Bruno Putzeys: Within the boundaries of cost: all of them. I'm very picky about the distortion spectrum. THD on most modern parts is acceptable, but the good ones have no discernible spuriae at -20 dBFS (beyond a 2nd or a 3rd perhaps) and a fairly benign spectrum at full scale too. In performance-critical applications I still do discrete converters.

Tony Rodrigues: Performance first, then architecture, then price.

Nathan O’Neill: Typically we look at dynamic range, signal-to-noise ratio and THD+N, all at a range of sample frequencies. We also look at control options (simple products require parallel/hardware control), as well as group delay for latency-critical applications such as live-sound products and recording products. And of course we look at price vs performance.

John Siau: In order of descending importance: IMD, THD, filter characteristics, dynamic range. Unfortunately the industry has been on a race for improved dynamic range and high sample rates while ignoring more important aspects of performance.

Ian Dennis: I tend to use off-the-shelf converters in particular ways, with various off-chip enhancements, which mean that I’m really only concerned with certain aspects of converter performance – and recent offerings have been disappointing in that particular regard. I generally use the ‘flagship’ parts from Cirrus, AKM or Analog Devices, depending on the situation. I can’t seem to get away with using lower power or cheaper parts, or parts with a higher channel count, because the dynamic performance of the converter front and back ends is always quite a vital factor.
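
The spectral criteria Putzeys and Siau describe are straightforward to check on the bench: capture a loopback recording of a sine tone and read the harmonics off an FFT. Here is a simplified Python/NumPy sketch with a synthetic test signal; it is illustrative only, and a real measurement would need careful windowing, coherent averaging and level calibration.

import numpy as np

def thd_db(signal, fs, f0, n_harmonics=5):
    # Ratio of harmonic energy to the fundamental, in dB.
    spectrum = np.abs(np.fft.rfft(signal * np.hanning(len(signal))))
    freqs = np.fft.rfftfreq(len(signal), 1.0 / fs)

    def peak_near(f):
        k = int(np.argmin(np.abs(freqs - f)))
        return spectrum[max(k - 2, 0):k + 3].max()  # tolerate window spread

    fund = peak_near(f0)
    harm = np.sqrt(sum(peak_near(h * f0) ** 2 for h in range(2, n_harmonics + 1)))
    return 20.0 * np.log10(harm / fund)

fs, f0, n = 96_000, 1_000, 1 << 16
t = np.arange(n) / fs
test = np.sin(2 * np.pi * f0 * t) + 1e-4 * np.sin(2 * np.pi * 2 * f0 * t)
print(round(thd_db(test, fs, f0), 1))  # close to -80 dB for the -80 dB 2nd harmonic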

6. New converter parts:
a) Have you adopted any new conversion parts this year?

Ed Meitner: No

Costin/Messick: Nothing too revolutionary, just lower-cost, more modern versions of parts we’ve been using for years.

Bruno Putzeys: Not “new” really. Designed in WM8718 (cost, cost, cost), PCM4201 (cost etc, surprisingly well-performing little bugger) and AK4395 (best bang for the buck ever).

Tony Rodrigues: Yes, AK4396, AK4620.

Nathan O’Neill: Yes.

Ian Dennis: No recent changes here for me.

b) If so, which and why?

Tony Rodrigues: Great sound, great specs

Tom Duffy: Not aware of any new parts being selected. Conversely, we changed away from a potential new part to an existing part in one design, because of lead time and cost issues. That infuriates the converter manufacturers, when they have the goods and the incentive to be a major supplier for us, but they are stymied by their own distributors who don't stock samples or put huge mark-ups on them.

Nathan O’Neill: We have designed in a new high-performance codec from Cirrus Logic (CS4272), which gives us excellent dynamic range and very low converter latency. We have also begun using some wide dynamic range and low latency 8-channel ADCs and DACs (CS5368 and CS4385) in an effort to fit more channels into the same PCB real-estate. Lastly, we have adopted the AKM AK4396 as our standard 2-channel DAC since it offers affordable 120dB dynamic range and has very good out-of-band noise performance too.

John Siau: No, we are currently using the parts which deliver the lowest distortion, and some of the newer parts would be a step backward.

7. Sample Rate Conversion:
a) Are you using SRC parts in your designs?

Ed Meitner: No

Costin/Messick: No.

Bruno Putzeys: Yes, but only to solve actual sampling rate issues. Not as a “general synchronisation problem solver”.

Tony Rodrigues: Yes, AK4122

Nathan O’Neill: Yes.

John Siau: Yes

b) If so, what parts are you using, where and why?

Nathan O’Neill: We use the TI SRC4190 in a few products, which offers 128dB dynamic range, -125dB THD+N, and supports a wide range of clocking configurations (from 128x Fs to 512x Fs). This part is also pin-compatible with the AD1895, which allows for second-sourcing if certain restrictions are placed on the design in terms of clocking configurations. We generally prefer components with a second source as it makes obsolescence issues easier to handle.

Tom Duffy: The Cirrus CS8420 has been a very cost-effective and powerful solution for our mixer products. It is very tricky to program, and when different software teams were tasked with integrating it into a new product, we often saw the same mistakes being made. Even if something better came along, the investment in software engineering time would be huge to get to the same stable point again.

Bruno Putzeys: The TI parts. The SRC4192 has an unbeatable ratio estimator, at least for moderate input jitter (for large input jitter the performance jumps from excellent to unacceptable in one go). Wish the filters were up there too though.

John Siau: We use the AD1896 because of its low PLL corner frequency and unsurpassed jitter attenuation. There are some competing SRC designs that fail to attenuate low-frequency jitter.

Ian Dennis: I’ve been wondering recently whether the SRC dragon has finally been slain. The new breed of integrated SRCs are probably plenty good enough, at least in situations where they can be operated with well-behaved clocks. I think that the market-resistance to SRCs is now mostly historical rather than technical. My current favourite is the SRC4192 (I don’t work for TI, by the way).
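
Putzeys' "ratio estimator" and Siau's low PLL corner frequency are two views of the same job: an asynchronous SRC has to measure the ratio of two unrelated sample clocks and then filter that measurement very slowly, so that incoming jitter never reaches the output. Here is a toy sketch of the idea in Python; it bears no resemblance to the actual internals of the SRC4192 or AD1896, and the corner frequency and update rate are invented for illustration.

import math
import random

def ratio_estimator(measured_ratios, corner_hz=1.0, update_rate_hz=1_000.0):
    # Heavily low-pass filter noisy per-block clock-ratio measurements.
    # The very low corner frequency is what attenuates input jitter.
    alpha = 1.0 - math.exp(-2.0 * math.pi * corner_hz / update_rate_hz)
    estimate = measured_ratios[0]
    for r in measured_ratios:
        estimate += alpha * (r - estimate)
    return estimate

nominal = 44_100 / 48_000
noisy = [nominal * (1.0 + random.uniform(-2e-4, 2e-4)) for _ in range(5_000)]
print(f"nominal {nominal:.6f}, smoothed estimate {ratio_estimator(noisy):.6f}")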

8. Formats such as USB, Ethernet, AES50 and FireWire are being increasingly used for audio purposes.
a) Are you incorporating new protocols into your designs?

Costin/Messick: Yes.

Bruno Putzeys: 1394

Tony Rodrigues: Yes

Nathan O’Neill: Yes, we use USB1.0 and 2.0, Firewire (400Mbps and 800Mbps), as well as standard 100BaseT Ethernet (TCP/IP).

John Siau: We added our own 24-bit 96-kHz USB interface that operates without the installation of any drivers. The driverless interface requires no installation or configuration and does not suffer from the reliability problems caused by drivers.

Ian Dennis: We’ve used USB, FireWire and AES50 in various products, and things haven’t always gone smoothly – perhaps because we were a little too early in each case. Our problems were mainly to do with reliability of interface and driver operation, particularly that of guaranteeing reliable low-latency operation within a multi-tasking computer. There’s also the question of clock recovery in relation to these interfaces, although it perhaps shouldn’t be so much of a problem in high-end products which should contain industrial-strength clock recovery anyway. We’re working on second-generation products with USB2 and FireWire now, and things are a lot easier second time around. We’re using BridgeCo’s FireWire devices with our own firmware enhancements, and a homegrown solution for USB2.

b) If so, what devices are you using for interface?

Ed Meitner: Currently we are using Burr Brown PCM2902

Costin/Messick: USB, USB2 and Firewire.

Bruno Putzeys: DICE

Tony Rodrigues: OXFORD 971

Nathan O’Neill: We use devices from TI for our USB solutions and Oxford Semiconductor, Bridgeco, and a proprietary high-performance design with a TI Link/Phy part for Firewire (IEEE1394). For Ethernet we have begun to use the ADI Blackfin which is well suited for control with Ethernet and USB.

John Siau: We use a TAS1020B running custom software.

c) Is the current crop of interface devices adequate to the task?

Ed Meitner: Not really; we require a USB audio input device that supports 96 kHz/24-bit or better.

Costin/Messick: USB and FireWire, yes, but USB2 needs a less costly, more integrated solution.

Bruno Putzeys: Rather

Tony Rodrigues: Yes

Nathan O’Neill: For today, yes. We are investigating several new platforms at the moment for future development in 2009 and beyond.

John Siau: Most USB interfaces are limited to 16-bits and 48-kHz or they require custom drivers. Our solution operates at high resolution without special drivers.
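
The arithmetic behind a class-compliant (driverless) high-resolution stream is worth seeing: a full-speed USB isochronous endpoint can move at most 1,023 bytes per 1 ms frame, and 24-bit/96 kHz stereo needs well under that. A quick Python sketch of the payload math follows; it counts audio data only and ignores descriptor and protocol overhead.

def payload_bytes_per_ms(sample_rate_hz, bits_per_sample, channels):
    # Audio payload needed in each 1 ms USB full-speed frame.
    return sample_rate_hz * (bits_per_sample // 8) * channels / 1_000.0

for rate, bits in ((48_000, 16), (96_000, 24)):
    p = payload_bytes_per_ms(rate, bits, 2)
    print(f"{bits}-bit/{rate // 1000} kHz stereo: {p:.0f} bytes/frame "
          f"(fits in a 1023-byte isochronous packet: {p <= 1023})")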

d) Are there any tasks currently difficult or awkward to design, where you would like to see dedicated devices?

Ed Meitner: No

Costin/Messick: Dedicated PCIe audio-specific interface chips would be useful; other options such as bridge chips are not necessarily cost-effective for many applications.

Tom Duffy: TASCAM's track record with USB audio speaks for itself; we continue to push what can be done here. USB Audio 2.0 finally creates a standard for high-resolution audio, but I don't see it being widely supported, either by the interface manufacturers or the OS companies. Proprietary protocols rule the USB land, and will continue to do so. Even with "standard" protocols, we've been bitten by non-backwards-compatible OS upgrades that break compatibility with existing products.

Audio over Ethernet and AES50 are not consumer-friendly technologies; they require a certain level of sophistication to understand and deploy. Burdening a product with the license costs that some of these protocols carry pushes it well out of reach even for curious semi-professionals, so I see a place for a more common protocol to take over.

TASCAM was an early adopter of the TCAT DICE-2 chipset for FireWire audio, and we've never given up hope that this can provide a solution for both consumer-level and professional-level products. With both Microsoft and Apple "moving the goal posts" constantly, driver stability has been a persistent issue, and getting to a stable starting point has been tough for everyone on this platform. There are many hairs that turned grey prematurely over this, I'm sure.

Too many of these protocols or devices for interfacing to these protocols have a fatal flaw, whether it be performance, documentation or price. As a company, you can only innovate on top of a strong foundation, so making the decision to invest engineering time into completely understanding a technology isn’t made lightly.

Bruno Putzeys: No need for me.

Tony Rodrigues: No

Nathan O’Neill: AES50 appears to be a costly interface to develop. A dedicated, low-cost (sub $5) USB2.0 audio solution that provides low-latency would be a good IC to have access to.

John Siau: Our USB project was a major undertaking. It would have been nice if we could have dropped in a high-resolution USB IC, but none exist.

9. With some of the formats listed in Q8, devices are powered from the interface bus, or, for other reasons such as portability, lower-voltage supplies are employed than in traditional audio designs.
a) Does this low-voltage approach present challenges in the analog circuitry preceding or following conversion?

Ed Meitner: No; all of our analog audio circuits operate at typical supply voltages.

Costin/Messick: With analog following conversion, we’re generating fairly hot signals, even in a low-voltage environment. With analog before conversion, we usually have the option to attenuate the signals down to the converter’s required range, requiring little power.

Bruno Putzeys: Not using bus or battery power, I use normal voltages for the analogue.

Tony Rodrigues: Sometimes; we still give the user an external power supply in case the interface supply is too noisy.

Nathan O’Neill: Not to us. We have developed highly efficient DC to DC converters designed to produce low-noise “traditional” audio rails of +/-15V as well as 48V phantom power, allowing us, for example, to bring our high-headroom Onyx mic-pre technology into our FireWire interfaces, all powered over the FireWire bus.

John Siau: Low voltage components definitely add complexity to our power supply designs. It is not uncommon to have 5 separate power supply voltages in our newer products. We do not use bus power.

Ian Dennis: We’ve never made anything which has been powered off the bus. Our analogue, conversion and processing load tends to be quite heavy, certainly too heavy for USB even for a low channel-count device. We’ve always felt that it was better not to compromise these just so we could use bus power. On FireWire, we’ve only made multi-channel boxes as yet, and even though FireWire can give you more power than USB, it’s still nowhere near enough for them.

b) Do these challenges mean performance compromises or present restrictions in analog performance?

Ed Meitner: No

Costin/Messick: Yes.

Tony Rodrigues: No, just add to product cost.

Nathan O’Neill: The only real restriction of bus-powering is the amount of power you can draw over the bus – this effectively limits the number of analog channels or microprocessors that you can attempt to power over the bus. For devices that exceed the current draw, we turn off bus-powering and require the user to use a wall-outlet to power the device.

John Siau: We do not compromise audio performance; we still use +/-18 volt supplies for analog circuits and maintain separate supplies for the digital circuits.

c) If applicable, what techniques are you employing inside your products to raise these smaller voltages to higher supply rails?

Ed Meitner: Multiple supply voltages.

Costin/Messick: We’re using switchers in products that can support the cost. In lower cost situations we have been using rail-to-rail op amps.

Tom Duffy: I don’t have direct knowledge here.

Bruno Putzeys: Two transistors and a centre-tapped inductor.

Tony Rodrigues: Buck regulators, DC/DC converters.

Nathan O’Neill: Proprietary DC-DC converter technology.

John Siau: Converter ICs typically operate from a single 5V supply. This low supply voltage limits the voltage swing of the audio signal at the converter pins and dictates the use of very low noise amplifiers at the inputs and outputs of these converters.
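
Siau's point can be put in numbers. Assuming, purely for illustration, a converter with a 2 Vrms full-scale analog level and a 120 dB dynamic-range target over a 20 kHz bandwidth (these are example figures, not Benchmark's), the total noise budget for the converter plus its buffer amplifiers works out as follows:

import math

def noise_budget(full_scale_vrms, dynamic_range_db, bandwidth_hz=20_000.0):
    # Total output noise allowed if the stated dynamic range is to survive,
    # plus the equivalent flat-band noise density.
    noise_vrms = full_scale_vrms * 10.0 ** (-dynamic_range_db / 20.0)
    return noise_vrms, noise_vrms / math.sqrt(bandwidth_hz)

vn, density = noise_budget(2.0, 120.0)
print(f"{vn * 1e6:.1f} uVrms total, ~{density * 1e9:.1f} nV/rtHz equivalent density")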

10. DSP Processing:
a) If you are involved in DSP design, why have you chosen the components you use?

Costin/Messick: Cost, obviously, followed closely by performance and power. We primarily use a multi-core DSP that is tailored to audio applications.

Bruno Putzeys: Quick turn-around time, low programming effort

Nathan O’Neill: We use the SHARC family of DSPs from Analog Devices for all our major professional DSP products, mainly due to the large library of algorithms we have developed over the years to take advantage of both the 40-bit floating-point capability of the SHARC and its easy-to-use algebraic assembly. It also supports 64-bit fixed-point double precision, which exceeds that of the Freescale 56xxx family, for very high-precision algorithms.

John Siau: We have always used FPGAs for DSP and have never used dedicated DSP processors.

b) Do you see advantages in one family of processors over another?

Ed Meitner: We don't use off the shelf DSPs. We currently implement all of our DSP algorithms in an FPGA.

Costin/Messick: Yes, especially in fixed-point architecture processors. Reusability is paramount for us, so we tend to stay in a given family of DSPs.

Bruno Putzeys: Largely a word length and MIPS affair.

Nathan O’Neill: Both ADI and TI have broadly equivalent offerings in terms of floating-point support and precision… the main advantage for ADI is our large library of existing algorithms. But C compilers are getting more efficient, so this advantage is diminishing over time.

c) Do you see advantages in the design tool sets available for programming particular families of DSP components?

Ed Meitner: N/A

Costin/Messick: This is more applicable to general DSPs, and especially floating-point DSPs, but not so much to the fixed-point versions we currently use.

Tom Duffy: In the world of DSPs, the number one factor is familiarity. Learning how to program and integrate a different DSP architecture into a product is an order of magnitude more difficult than you could imagine. However much advantage we may see in other architectures, engineering time that could be spent innovating new algorithms or products on a familiar platform would instead be spent just getting basic things working on the new one. For value-based designs (which is actually both the low end and the high end), whatever anyone says, ending up using just 20% of a DSP's resources and thinking you're done just means you never explored what that DSP could really do when pushed.

Bruno Putzeys: In low-volume applications the easy availability of tools becomes a significant factor in choosing chips.

Nathan O’Neill: Several companies have recently provided 'audio toolkits' to allow companies to develop DSP algorithms more quickly; however, we have generally found that our existing algorithms provide better performance and are more easily customized for product-specific audio applications.

11. Have you experimented or employed FPGA processors for DSP tasks (and if you are employing them, why)?

Ed Meitner: Yes, we do; see number 10 above. I suspect that the designer who implemented the DSP functions in the FPGA did so because he was more familiar with FPGAs than with DSPs.

Costin/Messick: No.

Tom Duffy: The X-48 hard disk recorder, although based on a PC motherboard, relies heavily on a large FPGA to do the one task that a PC CPU is really weak at – integer-to-floating-point conversion. Without this, we'd never get the performance we needed to play back 48 tracks at 96kHz and punch in on all tracks reliably. FPGAs in our mixers also handle conversion between digital streaming formats and crosspoint switching, and incorporate simple level-ramping elements so that routing changes are always click-less.

Bruno Putzeys: Simple processes like sigma-delta modulation, upsampling and decimation are a lot cheaper to run on an FPGA, especially if you need many channels.
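
Putzeys' example of a "simple process" is worth unpacking: a first-order sigma-delta modulator is essentially one accumulator and one comparator per channel, which is why it maps so cheaply onto FPGA fabric. A textbook-form Python sketch follows; it is illustrative only, since real modulators are higher order and need dithering and stability measures.

def first_order_sigma_delta(samples):
    # One-bit, first-order sigma-delta modulator: integrate the error
    # between the input and the fed-back quantised output.
    integrator, bits = 0.0, []
    for x in samples:           # x expected in [-1.0, +1.0]
        out = 1.0 if integrator >= 0.0 else -1.0
        integrator += x - out
        bits.append(out)
    return bits

# A DC input of 0.25 should give roughly 62.5% ones in the bitstream.
stream = first_order_sigma_delta([0.25] * 10_000)
print(sum(1 for b in stream if b > 0) / len(stream))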

Tony Rodrigues: No

Nathan O’Neill: Yes, we have our proprietary ‘Mix-Engine’ developed in Xilinx FPGAs, which are very good at performing many channels of matrix mixing.

John Siau: We have been using FPGAs for DSP since 1995. We currently have a large library of functions that we can incorporate into our products without adding much cost. These functions include mixing, metering, signal routing, digital audio interfacing (AES, ADAT, S/MUX, TDIF and more), digital filtering, PLL filters, control systems, and user interfaces.
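
The mixing function both O'Neill and Siau mention is, per output sample, just a matrix multiply, which is exactly the kind of wide parallel multiply-accumulate an FPGA handles well. A minimal Python sketch of one sample of matrix mixing (small, purely illustrative values):

def matrix_mix(gains, inputs):
    # Every output bus is a weighted sum of every input channel.
    return [sum(g * x for g, x in zip(row, inputs)) for row in gains]

# Four input channels routed to two buses with per-crosspoint gains.
gains = [[1.0, 0.5, 0.0, 0.0],
         [0.0, 0.5, 1.0, 0.7]]
print(matrix_mix(gains, [0.1, 0.2, -0.1, 0.3]))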

Ian Dennis: We’ve mostly used a mixture of FPGA and dedicated DSP devices for signal processing, mainly because we have to process signals other than the PCM signal path (including clock processing) which don’t lend themselves to a conventional DSP approach. However, recent advances in the speed and capacity of FPGAs, and particularly in the sophistication of their development tools, have meant that we’re increasingly able to lose the DSP device altogether.

12. Native Processing:
a) Are there advances in native processing that now allow you to perform DSP tasks formerly relegated to dedicated hardware inside a computer's CPU?

Ed Meitner: Don't know.

Costin/Messick: Yes, many tasks are now handled via plug-ins running on the host.

Bruno Putzeys: I don’t write plug-ins. My only use of native processing is file-based processing as a development step for DSP algorithms.

Tony Rodrigues: No

Nathan O’Neill: The main advantage of native processing is that, as the speed of CPUs increases, you can program in 'C' and develop algorithms very quickly, and for DAW applications it makes perfect sense to apply EQ, dynamics and other FX in the computer if the audio is already there.

Ian Dennis: In many of our products we have to process audio both in dedicated hardware and in a host computer. However, the split is usually dictated by overall system requirements. In terms of favourite processors, we simply never have that luxury: PC software has to run properly on both Intel and AMD platforms; Mac software on both Intel and PowerPC; so we can’t really produce processor-specific software.

b) Do you favor a particular brand of CPUs, and if so, which and why?

Ed Meitner: We currently don't use native processors.

Costin/Messick: Because of our broad market, we tend to avoid development that is too specific to any particular CPU.

Tom Duffy: Combining Convolution algorithms into GigaStudio3.0 opened up a whole new world of sampling opportunities. It is taking time to get the sample library creators into this mindset to take advantage of these new abilities, but I hope we’ll see new “instruments” that combine samples and convolution in ways that are not just for recreating more accurate existing instruments, but other sounds that can’t be created physically.
With Apple moving onto the Intel platform, the SSE instruction set and libraries such as the Intel Signal Processing library became cross platform de-facto standards.

Bruno Putzeys: Intel. I have one on my desk and I know how to write assembly code for it.

Nathan O’Neill: No.

13. Are there advances in DSP technologies that you are particularly excited about?

Ed Meitner: I am very interested in DSP processing in FPGAs because of the parallelism that one can achieve within an FPGA.

Costin/Messick: Higher performance at lower cost.

Tom Duffy: The general increase in performance that means we no longer have to make a trade off in “how many of these chips would we need to create the product we envision”. Unfortunately, the chips at the high performance/low cost end of the curve usually end up removing something essential, but someday we’ll hit the jackpot and get a DSP that we can rely on for many generations of product.

Bruno Putzeys: There's a lot of interesting new algorithm work out there but it's outside my scope.

Tony Rodrigues: General MIPS going up and cost going down allows us to use DSPs in places where we never would have used them before.

Nathan O’Neill: The cost per MIPS keeps coming down, which is good for us and our users.

Ian Dennis: Most of our DSP efforts recently have been directed at processing non-signal-path elements. DSP can bring performance advantages to all sorts of things outside the digital signal path, allowing improvements in clocking, power supplies etc.

General
14. What devices are you employing for system control—do you integrate control into DSP or other processes or are you using dedicated hardware for human interface, i.e. button commands and parameter display?

Ed Meitner: We are currently using a soft core processor implemented on an FPGA for system control. The processor only handles user interface. The DSP portion on the FPGA is a separate entity.

Costin/Messick: Generally no. We usually have a separate MCU for UI and control functions.

Tom Duffy: We always consider integrating control into the main processor, but it can make it more difficult to partition the engineering work amongst a team, where everyone becomes gated, waiting for a new design to start working. Simple distributed parts often make designing a product easier.
I’m also intrigued by the concept of embedding a CPU for system control inside an FPGA. Once a design requires an FPGA, it’s only a small step to throw out the dedicated CPU and do it virtually.

Bruno Putzeys: All applications I've done so far have used either a PC for the HID or a PIC and a handful of LEDs. I always run out of GPIOs on DSP chips.

Tony Rodrigues: Integrated into main system processor or DSP.

Nathan O’Neill: We use GUIs for several DSP-based mixers and interfaces, but we have also had a history of tactile control for users, which we continue to develop in our control surfaces and digital mixers. Display technology, particularly Organic LEDs, offers some interesting possibilities in the near future.

John Siau: In our products, all user interface functions are implemented in an FPGA or a CPLD.

Ian Dennis: For user-interface and control functions we’ve tended to use whichever embedded processor has been to hand in each particular box. These have included DSPs, FPGAs, ARMs, not forgetting of course the trusty 8051 microcontroller – which these days you can lose in a tiny corner of your FPGA if you like!

15. What advice would you give a consumer who is trying to intelligently assess a purchasing decision—as a designer, do you have any guidance to share?

Ed Meitner: Ultimately, let your ears decide.

Costin/Messick: Don't commit to a design until your price, performance, form, and functional requirements are well known.

Tom Duffy: As much as everyone wants to live on the bleeding edge of technology, updating to the latest drivers, jumping on products that spout cool claims, etc., the reality is that you never know what was fluff and what was the truth until it's too late.
Don't buy a product until you know it's going to work in your system. Ideally you'd see a post in a forum from someone with your exact same config, but that's probably unlikely. Products are sometimes designed to be throw-away, i.e. when they break, it's cheaper to buy a new one than repair it, and some are designed to last. You have to make the decision about which makes more sense for you, and judge the product you are looking at with the same criterion.

Bruno Putzeys: Home in on manufacturers who share very specific and detailed information about the performance aspects you find most important. If you need audio performance, skip any product that simply says “good design” in the leaflet. The ones that have really good specs will be proud to publish them.

Tony Rodrigues: Try to get to a product’s real specs, not the Hype specs.

Nathan O’Neill: Look at the audio specs, but also try to listen before you buy, as some companies list only converter specs for dynamic range and do not include the analog front-end and ‘system’ measurements. You generally do get what you pay for, and for audio professionals, the sound of the equipment should really be paramount.

John Siau: When purchasing, focus on overall performance, features, reliability, and company reputation. Do not get overly focused on a particular component. Good components alone cannot produce a good product. Good components need to be applied by competent Engineers and products must be manufactured with strict attention to quality.

Ian Dennis: This is a really difficult question. It’s tempting for a knowledgeable consumer to form opinions based on all sorts of technical facts (and non-facts) propagated about this box or that. Unfortunately, the implications of this nonetheless-fascinating techno-froth are increasingly difficult to chase down – modern digital audio contraptions are haughty and inscrutable creatures. So I suppose that the proof of the pudding remains where it’s always been. I’d recommend starting by short-listing products which meet your functional requirements, and auditioning as many as you’ve got the time and patience for – whilst trying to ignore all marketing input. Knock it down to two or three, and live with them for a while. Use them in real situations: you’ll need a little while to really find out how something sounds and, in any case, with a little time there may be functional issues which come to light, and you may learn something about reliability.

16. Any parting thoughts on ICs and their application? Trends you’d like to comment on?

Ed Meitner: None

Tom Duffy: The advances made in FPGAs over the last few years are impressive, and we've yet to see that technology really trickle into a wide variety of audio products. That will happen, and I hope we'll see an explosion in products that differentiate on cool technology that is possible because of these FPGAs, rather than the same chipset thrown into a box with a handful of peripherals and converters.

Bruno Putzeys: Yes. I’d like to see data sheets once again written by engineers for engineers, not by the sales dept for the purchase dept. I once blew my top reading through a data sheet of a “100MHz” op amp. The fine print: it’s a 70MHz op amp, but in unity gain the falling phase margin will produce sufficient Q at the end of the bandwidth to stretch the -3dB point to 100MHz. This is a product for engineers for chrissake! And then all the fancy trademarks on the digital chips. Seriously, whether a headphone amp has “on/off pop reduction circuitry” or “Pop-Guard ™” makes no difference to me, except for the tone of my nerves upon stumbling on an utterance like that.
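
Putzeys' complaint is easy to reproduce with a generic two-pole op-amp model: give it a 70 MHz gain-bandwidth product, place a second pole nearby, close the loop at unity gain, and the resulting closed-loop Q pushes the -3 dB point well past 70 MHz. Here is a rough numerical sketch in Python with NumPy; the DC gain and pole positions are invented for illustration and not taken from any real part.

import numpy as np

a0, gbw, f2 = 1e5, 70e6, 100e6           # assumed DC gain, GBW, second pole
f = np.logspace(6, 8.5, 20_000)           # 1 MHz to ~316 MHz sweep
a = a0 / ((1 + 1j * f / (gbw / a0)) * (1 + 1j * f / f2))   # open-loop gain
t_db = 20 * np.log10(np.abs(a / (1 + a)))                  # unity-gain closed loop

peak_db = t_db.max()
f_3db = f[np.argmax(t_db < t_db[0] - 3.0)]
print(f"closed-loop peaking {peak_db:.2f} dB, -3 dB at {f_3db / 1e6:.0f} MHz "
      f"from a {gbw / 1e6:.0f} MHz gain-bandwidth product")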

Tony Rodrigues: Audio conversion is so good now, it’s pretty hard to mess it up. Analog design is still what separates the boys from the men in many designs.

Nathan O’Neill: I think we have seen decent progress in recent years in terms of converter performance and DSP performance; however, I feel that analog ICs in particular have lagged behind their digital brethren. I would particularly like to see ICs developed that are aimed at the 'digital-control-over-analog' problem, which still has no simple solution, although many companies have developed their own means of performing these functions. Sometimes it's just better to keep things in analog, and digital control of these functions would be a good area for IC manufacturers to look at.

Ian Dennis: I’m quite interested in SMPS technology at the moment. There has been a recent quantum leap in the sophistication of switch-mode power supply controllers, which now offer the possibility of much less hostile switching designs. I’ve only recently started using switch-mode PSUs, mainly through the pressure to get more and more functionality and channels in a box. I was never really happy with them before – the requirements of high-quality audio mean that conducted and radiated emissions from switching power supplies become an audio problem way before they ever become a legal problem (even in sample-synchronous designs), and the concerns of the SMPS component manufacturers stopped way short of ours. But new parts from NXP, ST, Fairchild and others could finally allow high-end audio to embrace this useful technology.