Beyond the Digital Basics

Emulating analog circuitry performance with digital processing has certainly come a long way in the past couple of decades. I remember when the AT&T DisQ project was initiated in the ’90s: an SSL or Neve console served as a control surface for a DSP array that was pretending to be the console, but there was no pretending to model the sound of the console. Transfer functions were measured with regard to the feel of the console controls and the gain structure.

Engineers familiar with a desk could grab, say, an EQ knob and get the expected result in terms of the raw parameters: boost or cut, Q, and frequency of operation. No attempt was made to model any character-inducing aspects of the mapped device (function mapped, not modeled). Indeed, it was beyond the experience, if not also the abilities, of the genius-level programmer coding the system to model anything beyond knob positions. It was just as certainly beyond the DSP resources of the processing core to handle the complexity of sonic modeling in anything near real time.

A popular internet forum topic over the last year has begun with someone stating, “All digital EQ sounds the same.” Within a narrow window, they are correct: there are standard formulas for minimum-phase EQ (EQ with the same mathematical character as analog EQ, sans any analog circuitry artifacts), and simple EQ is done the same way across a wide range of digital processing engines. Certain aspects of digital EQ, such as the processing of near-Nyquist frequencies, are enhanced in some engines by techniques such as up-sampling. Then there’s a whole swath of additional techniques using linear-phase equalization, creating equalization effects that are only possible with digital processing. With enough processing power and time (particularly at low frequencies), most any EQ curve one desires can be created. EQ of that sort does not have the traditional sound we are used to associating with EQ; FIR versus IIR filters represent a new toolset that digital brings to end users.
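Those “standard formulas” for minimum-phase EQ are widely published. As an illustrative sketch (following the commonly cited RBJ Audio EQ Cookbook peaking-filter form, not any particular product’s code; the function names and parameters here are my own), this is roughly what sits at the core of most digital parametric EQs:

```python
import math
import cmath

def peaking_eq_coeffs(fs, f0, gain_db, q):
    """Minimum-phase peaking biquad (RBJ Audio EQ Cookbook form).

    fs: sample rate in Hz, f0: center frequency, gain_db: boost/cut,
    q: bandwidth control. Returns (b, a) with a[0] normalized to 1.
    """
    amp = 10 ** (gain_db / 40)              # amplitude factor; peak gain is amp**2
    w0 = 2 * math.pi * f0 / fs
    alpha = math.sin(w0) / (2 * q)
    b = [1 + alpha * amp, -2 * math.cos(w0), 1 - alpha * amp]
    a = [1 + alpha / amp, -2 * math.cos(w0), 1 - alpha / amp]
    b = [x / a[0] for x in b]               # normalize so a[0] == 1
    a = [x / a[0] for x in a]
    return b, a

def biquad(samples, b, a):
    """Direct Form I filtering of a sample sequence."""
    out, x1, x2, y1, y2 = [], 0.0, 0.0, 0.0, 0.0
    for x in samples:
        y = b[0] * x + b[1] * x1 + b[2] * x2 - a[1] * y1 - a[2] * y2
        x2, x1 = x1, x
        y2, y1 = y1, y
        out.append(y)
    return out

def magnitude_db(b, a, f, fs):
    """Magnitude response in dB at frequency f."""
    z = cmath.exp(-2j * math.pi * f / fs)
    h = (b[0] + b[1] * z + b[2] * z * z) / (a[0] + a[1] * z + a[2] * z * z)
    return 20 * math.log10(abs(h))
```

Run the same parameters through any two engines built on this math and the coefficients, and therefore the sound, come out identical; for example, `peaking_eq_coeffs(48000, 1000, 6.0, 1.0)` yields a filter whose response at 1 kHz measures 6 dB in either.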

Given that similar processing is typically applied, where do we find the differences among digital equalizers applying minimum-phase EQ? Two primary areas of performance are in play: the human interface (how easily and familiarly you can get a desired effect from the controls) and the introduction of artifacts (sonic effects that emulate certain euphonic aspects of analog processors). It is in the latter area that the most progress has been made over the past two decades, as clever programmers embellish the core EQ mathematics with modeling techniques, modeling a target device down to the component level, or add processing elements to achieve a particular sonic signature. That’s where today’s magic happens.
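To make the “additional processing elements” idea concrete, here is a deliberately simple sketch, not any vendor’s actual method: following the clean EQ math with a gentle tanh waveshaper adds the low-order odd harmonics that analog gain stages produce as they are pushed. The `drive` parameter is an invented knob for this illustration.

```python
import math

def soft_saturate(sample, drive=1.5):
    """Gentle tanh waveshaper, one hypothetical 'character' stage.

    Normalized for unity gain at low levels: quiet signals pass
    nearly untouched, while peaks are progressively compressed
    toward tanh(drive)/drive, generating odd harmonics.
    """
    return math.tanh(drive * sample) / drive

# A quiet signal is essentially unchanged; a hot one is rounded off.
quiet = soft_saturate(0.01)   # ~0.01, linear region
hot = soft_saturate(0.95)     # noticeably below 0.95
```

Chain a stage like this (or a more elaborate component-level model) after an otherwise textbook biquad and two EQs with identical curves no longer sound the same, which is precisely the point.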