7. What are the criteria you use for selecting DSP devices? How have these criteria changed over time?
O’Malley and Vanwulpen: We have only used one recently, for the Big Ben, and not at all for audio. Our main criteria are:
• external parts needed (cost again, and noise)
• raw DSP power
McGrath: Yes, the criteria have changed. Floating point is a must-have. Ten or fifteen years ago everyone pretty much used the Motorola 56000, which is a 24-bit fixed-point processor. It took great care to maintain acceptable dynamic range and distortion even though it was in the digital domain. Once you selected a particular part from a manufacturer you were pretty much locked into that chip family, because code portability was pretty near impossible. The lack of portability was due to the fact that you were programming these chips in assembly language, and the fact that the noise/distortion issues were slightly different from one chip family to another. If you were pulling any tricks to get extended precision (or whatever), those tricks would never port easily from, say, Motorola to TI (anyway, no one ever used TI for pro audio).
What’s really changed today is that modern chips come with great C compilers (ones that actually give you performance that’s pretty much as good as hand-tuned assembly code). Once you have IEEE floating-point based C code, you are in a position to pick whatever processor you like for the product. We’ve supported all the major DSP makers at Lake (both in our pro products and in the consumer technologies we license under the Dolby banner). The advances in DSP architecture and power make porting code much easier. We no longer have to spend a long time pushing code around to get maximum performance out of a chip, because either the tools do a lot of the work for you, or the chip has so much excess power that you can afford to be inefficient (trading off design effort versus DSP MIPS, where the MIPS are almost free).
McTigue: Scalability, preservation of software investments, road maps for products. This hasn’t really changed. It is becoming apparent that the FPGA solution to DSP is a viable alternative, particularly in light of the need to provide both PCM and real DSD processing.
Massenburg: Well, DSP is another story. More and more we feel we must treat DSP as generic, and we’ve chosen to let the time-line for the evolution of native processing be the beacon for what we’re developing. About the best you can say about Motorola, whose management has completely fallen asleep on really interesting new silicon and has not come forward with much of anything for audio, is that they’re releasing new chips, each a little better than the last. I’m not alone in counting them out of the market. And no one will miss the idiotic, arcane hacking in Motorola 56k assembler.
Weiss: Selection criteria are software compatibility with older DSPs, wordlength, speed, price, connectivity for multiprocessor designs (I/O bandwidth), and the expected lifetime of the chip (i.e. how long it will be available). These are the main criteria, yesterday, today and tomorrow.
8. Are there particular benefits to the latest generation of DSP chips that have corollary benefits for the end user?
O’Malley and Vanwulpen: Not in our realm, per se, but the major players are all coming out with DSPs that have a ton of audio-specific features. These will allow manufacturers to do more with one part and use fewer additional parts. These devices (and others as well, not only DSPs) usually seem driven by home theatre and desktop audio applications; on the other hand, such mass-market devices can often be used quite nicely in a pro product (at least a purely digital one).
Massenburg: Other than speed, I don’t think so.
Weiss: The latest generation is faster per buck, and thus the algorithms can potentially be more elaborate (better, with more features).
9. How much does the selection of processor affect the sound of a product that incorporates DSP? How about processing depth and rate (for the layman): fixed vs. floating point math, double precision, oversampling/up-conversion and so on?
O’Malley and Vanwulpen: It affects it much less than many people (DSP vendors, DAW vendors) would want you to believe. Many DSP engineers prefer floating point; it is certainly nice when the pressure is on, as it requires less thinking, at first sight. In fact, if you get down to the details, you can end up losing precision. Optimizing an algorithm for the best possible results requires the effort of thinking it all the way through, regardless of the architecture. There are many misconceptions about this. For example, regular IEEE floats have a 24-bit mantissa, and since audio is usually represented between +1 and -1, that ends up being the same as a regular 24-bit signed integer (fixed point). If you were to divide by 8 and then later multiply by 8 again without precautions, you lose 3 bits, just as well with floats as with fixed.
The advantage of floats in my opinion is that it’s easier to get an idea going, it’s easier to not make dumb mistakes. To get it to optimal quality it is the same amount of work.
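To make the divide-by-8-then-multiply-by-8 example above concrete, here is a minimal sketch of the 3-bit loss in fixed point (the sample value is hypothetical, chosen with its 3 low bits set). In a fixed-point pipeline, dividing by 8 is a right shift that simply discards the 3 least-significant bits:

```python
# Hypothetical 24-bit fixed-point sample word; the value is illustrative only.
sample = 0b0000_0000_0101_0101_0101_0111  # arbitrary 24-bit word, 3 LSBs set

attenuated = sample >> 3   # divide by 8: the 3 least-significant bits are discarded
restored = attenuated << 3 # multiply by 8: the discarded bits cannot come back

lost = sample - restored   # up to 7 LSBs (i.e. 3 bits) of irrecoverable error
print(f"lost {lost} LSBs of the original sample")
```

The "precautions" the speakers mention amount to carrying extra headroom bits (or double precision) through the intermediate stage so the shifted-out bits survive.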
Double precision is obviously nicer than single precision and so are higher fixed-point bit depths. On top of that, the sheer increases in speed are nice as well, allowing one to do some juggling around in cases where the precision isn’t enough yet.
Oversampling, in my opinion, is not used enough yet. Although we can’t hear 40 kHz (well, some out there probably claim they can), if you connect analogue gear in a chain, those frequencies will affect the behavior of subsequent boxes; that is one of a gazillion differences between DSP emulations and the real deal, in my opinion at least. Besides that, there is currently often a lot of up/downsampling going on between processes (i.e. plug-ins) that isn’t needed either, in many cases.
Dennis: The use of different DSP devices shouldn’t affect sound quality. It is straightforward to define the number, type and necessary precision of operations required to produce the desired result, and these requirements can be tweaked in simulation or after implementation. They could, within reason, be achieved with any DSP device (or, increasingly, in FPGAs). However, shortcomings of native wordlength and speed can result in compromises if there is insufficient money, space or power to include a sufficiency of the chosen device. Much used to be made of the fixed-point versus floating-point debate. Purist audio is inherently a fixed-point world – we don’t stop being interested in small sounds just because there are big sounds around. But, of course, that doesn’t stop floating point being OK if the mantissa is long enough. The problem used to be that the standard 24-bit mantissa was nowhere near enough! On the other hand, if you want to compromise, floating point is very useful since it makes the best use of limited bits. One of the most refreshing things about the recent trend of ‘going native’ – moving the audio processing out of dedicated DSPs into desktop PCs and Macs – is that you can overkill the precision easily without throughput penalty. Similarly, the increasing use of FPGAs for audio processing allows precision to be tuned within different parts of an algorithm. I see this as an increasingly attractive solution – it certainly will be if DSD catches on!
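Dennis’s point about small sounds coexisting with big ones can be sketched with single-precision arithmetic (the signal levels are hypothetical; Python’s standard `struct` module is used here to round a value through IEEE single precision and its 24-bit mantissa):

```python
import struct

def as_float32(x: float) -> float:
    """Round a Python double through IEEE single precision (24-bit mantissa)."""
    return struct.unpack("f", struct.pack("f", x))[0]

loud = 1.0    # a full-scale component
quiet = 1e-8  # a very quiet component, far below the mantissa step at full scale

# In single precision, the quiet signal vanishes into the loud one...
assert as_float32(loud + quiet) == loud
# ...while double precision (53-bit mantissa) still retains it.
assert (loud + quiet) != loud
```

This is why a 24-bit mantissa was "nowhere near enough" for purist mixing, and why going native with double precision removes the compromise.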
Kraemer: Processor selection should not affect sound. Either an audio signal processing algorithm can be properly ported to a DSP or it can’t. If the algorithm is ported properly, the behavior should be identical on any architecture. In general, the biggest factor that affects sound is bit depth. A minimum of 24-bit fixed-point internal processing is required to maintain high quality throughout the processing stages of an audio algorithm. Floating point just makes it easier.
Massenburg: The processor taken alone? Exactly zero, outside of power.
Weiss: The engineer says “it depends”…
A faster DSP can achieve better results for some algorithms. Simple ones do not profit from speed (e.g. a gain multiplication can’t be made better with speed). Others, like processing in the frequency domain, certainly can be made better if more DSP power is available.
Of course, even with very fast / very precise DSPs, most algorithms are still compromised. But the compromises get smaller and smaller with more modern DSPs. One also has to differentiate between the algorithm and the algorithm implementation. Both affect the quality of a product. DSP technology helps in the implementation department, but the algorithm itself is another issue. Of course, one can change algorithms in order to fit certain DSP architectures (the fixed/floating point discussion). This is where the DSP engineer has to bring in his know-how.
The fixed vs. floating point discussion is meaningless without knowing the algorithm and the exact number formats at hand. The same applies to the precision and upsampling issues. For example, upsampling can be very useful when it comes to non-linear transfer functions, but is not important for a simple gain change via a fader.
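Why upsampling matters specifically for non-linear transfer functions can be shown with some back-of-the-envelope arithmetic (the frequencies below are hypothetical): squaring a sine doubles its frequency, and at a base rate of 44.1 kHz the doubled component can land above Nyquist and fold back as an inharmonic alias, whereas at a higher internal rate it stays representable and can be filtered off before downsampling.

```python
fs = 44100.0       # hypothetical base sample rate
f_in = 15000.0     # hypothetical input sine frequency
f_harm = 2 * f_in  # a squaring nonlinearity creates a component at 2*f_in (30 kHz)

nyquist = fs / 2       # 22050 Hz: the 30 kHz component cannot be represented
f_alias = fs - f_harm  # it folds back to 14100 Hz, an unrelated audible artifact
print(f"aliased component lands at {f_alias:.0f} Hz")

# At 4x oversampling (176.4 kHz) the same harmonic sits well below Nyquist,
# so it can be low-pass filtered away before returning to 44.1 kHz.
assert f_harm < (4 * fs) / 2
```

A plain gain change is linear, creates no new frequencies, and therefore gains nothing from the higher rate, which is exactly Weiss’s fader example.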
10: Any new processing components that you are jazzed about?
O’Malley and Vanwulpen: All the ARM, PowerPC, XScale and other such parts. I also like the newer TI C6x parts (although most audio folks hate them because of a scare I think is unfounded, and one I had to overcome as well); the new SHARCs are nice as well. In FPGA land, the combos of FPGAs and processors in one package seem very interesting, allowing one to take a core and, for example, add custom-designed logic to it. It’s a bit like digital Lego these days.
Massenburg: I like the new generation of 64-bit architectures, specifically the AMD and IBM parts.
Weiss: As we are in the Analog Devices SHARC camp, we look at the new low-cost TigerSHARCs for incredible raw DSP power, the new low-cost SIMD SHARCs with built-in audio algorithms, and the Blackfin family, which is fast enough for DSD-wide (PCM-narrow) processing.
To give an idea: four of the latest TigerSHARCs have the same processing power as the DSP core that was used for the DISQ mixing system back in the nineties, when it was fully populated with 128 AT&T DSPs…
McGrath: We are currently working on designs for all the major DSP makers, so we get to look closely at all the latest and greatest chips as they come out. We’ve seen exciting new developments from Cirrus, Fujitsu, Texas Instruments, Analog Devices, Motorola and others. And, by the way, it’s no longer considered taboo to use TI chips in pro audio today, which is a big turn-around from where they were 10 years ago.
van der Mee: Chip manufacturers are more and more into developing “black boxes”: signal goes in, signal goes out. The functionality of the chip is usually not released because of competition, which makes it very restrictive and difficult for designers with a limited budget to be creative.
Also, the fast pace of development of new chips, and hence older ones becoming obsolete, makes it very hard to maintain vintage equipment. A product developed today can very likely not be fixed in roughly ten years, when one of its “black box” parts dies.
O’Malley and Vanwulpen: The future of packaging might become a problem for more and more small companies, of which there are plenty in pro audio. ICs tend to be more and more targeted at big corporations, often requiring large investments in tools and so forth. Then again, as has happened in the past, competition will increase and certain standards will become more prevalent, forcing manufacturers to discount their software in order to lure potential customers into using their chips.
The open source movement (on all fronts, not just those related to digital design) is also quite encouraging. There are so many people in remote places making a difference in their spare time. I can think of multiple times when open source software has benefited us. And as the amount of information out there increases, the benefits to the community at large (and to smaller companies like ours) will only increase.