Craig Anderton

The late John Simonton, PAIA Electronics’ founder, was a visionary—and I don’t use that word often. He foresaw the possibilities of combining computers and music very clearly, as exemplified in his writing during the ’70s, followed up by his 1980 book, Friendly Stories about Computers/Synthesizers. He basically nailed everything we have today simply by extrapolating into the future elements that were just starting to appear. So when I hear people say that DAWs have pretty much come as far as they can—what could we possibly add to what we have?—I can only think they’re not extrapolating properly.
Take touch, which is in its infancy. Companies like Smithson-Martin, Slate, Cakewalk, Yamaha, and many others are integrating touch into the studio. Yet the future isn’t necessarily about copying hardware mixers, but re-thinking the concept of mixing around humans—not sheet metal.
A major limitation with DAWs is they provide a window to a mix, not the mix itself. Sure, you can show and hide modules, and change channel strip widths. But consider a mixing process where you’re more like a conductor in front of a symphony orchestra. Maybe Line 6’s M20d didn’t get as much traction as the company hoped, but it pursued the right direction by removing the hardware layer between the music and the mix—you created a virtual real-life soundstage instead of a virtual sheet-metal mixer.
Throw more touch into the equation, and you could build a mix that’s based more on live performance thinking. Track grouping would be more like a brass or string section. Players take center stage for a solo, which changes their levels in relation to the other players. You wouldn’t move an amp sim into an abstract slot in a mixer, but set it up on your virtual stage and plug your player’s instrument into it.
The object wouldn’t be novelty or cuteness, but using color, visual representations, spatial positioning, and touch control to create a “right brain”-friendly environment that’s closer to the original process of music creation: live performance.
Why live performance? Extrapolate another trend...the end of making money from recorded music. The concept of freezing music in time for later playback is new compared to centuries of music being a real-time event in front of a limited number of people. At seminars, I often ask, “If you could only play live or only in the studio for the rest of your life, which would you choose?” Almost everyone chooses live performance. The process of recording and mixing music needs to return to that paradigm, even if the performance is virtual.
Artificial intelligence will also affect us. We already have crude examples throughout our lives—like when you rate a Netflix movie, and related suggestions appear. Few would deny the value of collaboration, but what if that collaboration were with a computer that knew the rules of harmony—or could suggest a chord progression for a bridge based on what you’d already played for the verse? Some will think this is blasphemy that leads to canned music, but just as a songwriting partner can listen to what you have and say “let’s try this for the second verse,” so could a machine that knew what you liked in the past and how you structured songs. But take this one step further: If you liked, for example, my approach to music, I could offer you my “algorithms” to make suggestions on what you might do based on what I might do. The object wouldn’t be to replace humans, but inspire them.
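To make this less abstract: one minimal way such a machine collaborator could start is by counting which chords tend to follow which in progressions it has already heard, then suggesting a weighted-random next chord. This is only an illustrative sketch, not any product’s actual method; the progression data and function names here are hypothetical.

```python
import random
from collections import defaultdict

# Hypothetical "songs you've written before" -- toy training data.
PROGRESSIONS = [
    ["C", "Am", "F", "G"],
    ["C", "F", "G", "C"],
    ["Am", "F", "C", "G"],
    ["C", "Am", "Dm", "G"],
]

def build_transitions(progressions):
    """Count how often each chord follows another (a first-order Markov model)."""
    transitions = defaultdict(lambda: defaultdict(int))
    for prog in progressions:
        for current, following in zip(prog, prog[1:]):
            transitions[current][following] += 1
    return transitions

def suggest_next(transitions, chord):
    """Suggest a plausible next chord, weighted by how often it was used before."""
    options = transitions.get(chord)
    if not options:
        return None  # the model has never seen this chord
    chords = list(options)
    weights = [options[c] for c in chords]
    return random.choices(chords, weights=weights, k=1)[0]

transitions = build_transitions(PROGRESSIONS)
print(suggest_next(transitions, "Am"))  # "F" or "Dm", weighted toward "F"
```

A real version would obviously need far more context (key, rhythm, song structure, your personal taste), but even this toy shows the shape of the idea: the machine proposes, the human disposes.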
DAWs could also be more complete environments. Which would be a more useful add-on—another compressor, or a rhyming dictionary? Another reverb, or the ability to store photos in a track showing how you miked the instrument? And what about a help menu for creative blocks? Images and colors stimulate the right side of your brain, so a help file that generated cool kaleidoscopic pictures could actually help promote a more creative frame of mind.
Today’s DAWs are excellent at emulating a classic hardware recording studio, and in that sense, I can see why some people think we’ve gone as far as we can—once you emulate the gear for a million-dollar studio “in the box,” where do you go from there? To me, the answer is clear: You bring more of the real world of art and creativity into the box.
Author/musician Craig Anderton has given seminars on technology and the arts in 38 states, 10 countries, and in three languages. Check out his latest music videos at youtube.com/thecraiganderton.