Frank Wells
email@example.com

While looking back at some archival material on the web, I was struck by how little concern we now give to some of what were Major Issues just a few short years ago. When digital audio was young, discussions raged at length over a myriad of details that we just don’t dwell on in these more enlightened times. Early digital got its justifiable share of black eyes because the technology and its implementation did not totally fulfill the promises of the long-known but unrealized theory (forgive me if I get a bit geeky here).
Early analog-to-digital converters were rarely as linear as was theoretically possible, even with only 16-bit digital storage. Converters had “a sound” to them, and conversion artifacts were sufficiently audible that engineers went to great lengths to avoid multiple conversion passes within a signal chain. Early anti-aliasing filters were analog, with artifacts that intruded into the digitized audio.
The importance of accurate, stable digital clocking was poorly understood in the beginning. Clocks that strayed from rock-solid stability were all too common; we eventually learned to quantify that instability as jitter. While video engineers understood the concepts of house sync, audio system engineers often did not.
The concept of sample rate conversion was introduced to resolve sources using disparate clocks and sample rates into synchronous data streams, and to allow the use of higher sampling rates during recording and production than would be used for mass distribution. SRC was another much-maligned process in its infancy.
Once conversion at longer word lengths became possible, the already hot topic of dither got even hotter as the evils of truncation became evident.
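To make the truncation-versus-dither point concrete, here is a minimal Python sketch — my own illustration, not any particular converter’s implementation — of reducing 24-bit samples to a 16-bit grid, first by bare truncation and then with the textbook triangular-PDF (TPDF) dither:

```python
import random

def truncate_to_16bit(sample_24: int) -> int:
    """Word-length reduction by truncation: simply drop the low 8 bits.
    Low-level signal detail collapses into correlated quantization
    distortion -- the 'evils of truncation'."""
    return (sample_24 >> 8) << 8  # still at 24-bit scale, but 16-bit steps

def tpdf_dither_to_16bit(sample_24: int, rng=random) -> int:
    """Word-length reduction with TPDF dither: add noise with a triangular
    PDF spanning +/- 1 LSB of the 16-bit target (two uniform draws summed),
    then round. Distortion is exchanged for a benign, constant noise floor,
    and sub-LSB detail survives on average."""
    lsb = 1 << 8  # one 16-bit LSB expressed at 24-bit scale
    noise = rng.uniform(-lsb / 2, lsb / 2) + rng.uniform(-lsb / 2, lsb / 2)
    return int(round((sample_24 + noise) / lsb)) * lsb

# A signal sitting below one 16-bit LSB: truncation erases it outright,
# while the average of many dithered passes recovers its level.
rng = random.Random(1)
quiet = 100  # about 0.4 LSB at the 16-bit scale
print(truncate_to_16bit(quiet))  # 0 -- the signal is simply gone
mean = sum(tpdf_dither_to_16bit(quiet, rng) for _ in range(10_000)) / 10_000
print(round(mean))  # close to 100 -- the level survives as noise-borne detail
```

The averaging at the end is the whole argument in miniature: with proper dither, information below the last bit is carried in the statistics of the noise rather than destroyed.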
Latency became a hot topic in those early years: with regard to mixing monitor and channel paths (no more SuperCue for you!), to artist sensitivity and, far too late, to interchannel time discrepancies (the latter is one of the biggest factors in digital mixing getting a bad rap, in my opinion).
When synchronizing multiple recorders, digital introduced new issues of time code and clock lock. The need for time code that was synchronous to the word clock was simply overlooked by many manufacturers in the early days of digital audio.
Now, 24-bit sampling is the norm and converter linearity at least approaches the realm of residual analog component noise. Oversampling allowed the use of unobtrusive digital anti-aliasing filters. We’ve learned how to build low-jitter clocks and to clean up signals with effective, transparent reclocking. We rediscovered transmission line theory, applying techniques that RF engineers already understood to the delivery of audio as high-speed data. Sample rate conversion is sophisticated and largely transparent. High-resolution sampling rates can be employed effectively.
Good engineering practice still mandates avoiding unnecessary AD/DA conversions and sample rate conversions, but converters are now good enough that few individuals are overly concerned with an extra stage of analog in the middle of their signal flow. Converter chips are monolithic, requiring very few external components to operate at performance levels unheard of 20 years ago, and they are relatively inexpensive. The analog componentry in front of and behind the converters is now the largest factor determining sonic performance.
Latency is still discussed, but as a parameter for consideration rather than as a major issue. DAWs now auto-resolve interchannel time alignment discrepancies.
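The mechanism behind that auto-resolution — plugin delay compensation — is simple at heart: pad every channel out to the latency of the longest processing path, so all paths arrive at the mix bus together. A toy sketch (channel names and latencies are hypothetical, in samples):

```python
def delay_compensation(channel_latencies: dict[str, int]) -> dict[str, int]:
    """Given the total reported processing latency per channel (in samples),
    return the extra delay to insert on each channel so that every path
    arrives at the mix bus at the same time."""
    longest = max(channel_latencies.values())
    return {ch: longest - lat for ch, lat in channel_latencies.items()}

# The heavily processed bus sets the pace; lighter channels get padded.
print(delay_compensation({"vocal": 0, "drum_bus": 512, "guitar": 64}))
# {'vocal': 512, 'drum_bus': 0, 'guitar': 448}
```

The whole mix lands later by the longest channel’s latency, which is exactly the trade the early digital mixers failed to make automatically — and why interchannel discrepancies earned digital mixing its bad rap.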
The youngsters among you, if you even read this far, are probably saying, “What’s the big deal?” But we’ve arrived where we have with digital audio by way of some hard lessons learned. I, for one, am happy to have other things to talk about.