We interact with artificial intelligence daily, through Siri, Alexa, Spotify, Google and other platforms. These are useful tools, keeping us organized, informed and entertained. But as AI increasingly pops up in the world of music and pro audio, some alarmists wonder if machines are coming for our jobs.
The simple answer is no, not right now, anyway. If anything, these next-gen technologies are a boon to the business, offering time-saving tools that relieve users of tedious tasks, freeing them to focus on their creativity. And if you don’t like the results, you can tweak them or simply turn the tools off.
The human voice is a target for some of these new technologies. Synchro Arts’ VocALign, for instance, automatically analyzes two tracks (they can be instruments or sound effects, not just voices) to perfectly align their timing, significantly reducing dialog re-recording and overdub time in the studio. Audionamix’s IDC: Instant Dialogue Cleaner, also powered by AI, automatically detects and separates speech, no matter the surrounding content.
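At its core, automatic alignment begins with estimating the time offset between two recordings. Here is a minimal sketch of that first step using cross-correlation; this is a toy illustration of the general idea, not Synchro Arts’ actual algorithm (VocALign performs fine-grained time-stretching, not a single shift), and the function name and test signal are invented for the example:

```python
import numpy as np

def estimate_offset(reference: np.ndarray, dub: np.ndarray) -> int:
    """Estimate the lag (in samples) that best aligns `dub` to `reference`
    via full cross-correlation; a positive result means the dub starts late."""
    corr = np.correlate(dub, reference, mode="full")
    # Recenter the peak index so that 0 means "already aligned."
    return int(np.argmax(corr)) - (len(reference) - 1)

# Toy example: the "dub" is the reference delayed by 5 samples.
ref = np.sin(np.linspace(0, 8 * np.pi, 200))
dub = np.concatenate([np.zeros(5), ref])[:200]
offset = estimate_offset(ref, dub)
# Shift the dub back by the estimated offset to line it up.
aligned = np.roll(dub, -offset)
```

In practice a production tool would estimate offsets over many short windows and warp the audio between them, rather than apply one global shift.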
Feed these algorithms examples against which the software can make informed decisions, and you move into the territory of machine learning. Take iZotope’s Track Assistant in Neutron, Master Assistant in Ozone and Vocal Assistant in Nectar Elements. Using machine learning, these tools recognize certain elements and characteristics in recorded audio and intelligently apply processing.
AI tools don’t have to be complex. Vocal Rider from Waves, introduced nearly a decade ago, simply keeps a vocal within a target level range in a mix. Introduced some years later, LM-Correct 2 from Nugen Audio automatically corrects track loudness to meet global broadcast standards.
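The underlying idea of a gain rider is straightforward: measure the short-term level, then nudge the gain toward whatever would hit the target, with a cap on how fast it may move so the result doesn’t pump. A toy Python sketch of that logic follows; it illustrates the concept only, not Waves’ implementation, and every name and parameter value here is invented:

```python
import math

def ride_gain(samples, target_rms=0.2, block=1024, max_db_step=1.0):
    """Very simplified gain rider: per block, nudge the gain toward the value
    that would bring the block's RMS to the target, limiting how fast the
    gain may change between blocks."""
    out, gain = [], 1.0
    step = 10 ** (max_db_step / 20)  # max gain-ratio change per block
    for start in range(0, len(samples), block):
        chunk = samples[start:start + block]
        rms = math.sqrt(sum(s * s for s in chunk) / len(chunk))
        if rms > 1e-6:  # skip near-silence: hold the current gain
            desired = target_rms / rms
            gain = min(gain * step, max(gain / step, desired))
        out.extend(s * gain for s in chunk)
    return out
```

Loudness-correction tools like LM-Correct 2 work to a stricter spec: broadcast delivery is measured in integrated loudness (LUFS) per standards such as EBU R 128, not simple RMS as in this sketch.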
On the music-making side, Regroover from Accusonus, hosted within Ableton Live or other apps, burrows into audio loops and separates them into layers, picking out a melody or a specific percussion part so they can be remixed and personalized, not just rhythmically sliced and diced. Back in 2014, the developer introduced Drumatom, a standalone application that analyzes a multitrack drum recording for microphone bleed, which can then be reduced.
All that said, there is a potential dark side to this new tech. In early 2018, Chinese megacompany Baidu unveiled the latest iteration of its Deep Voice project, a neural network-powered platform that can authentically replicate a voice based on just a few seconds of speech. The potential for fakery is obvious.
Mastering engineers are likely keeping an eye on AI platforms that could threaten their livelihoods. Landr and CloudBounce, as examples, offer cloud-based AI record mastering services that appear to be gaining some traction at the low-budget end of the market.
Even music composition, a most human endeavor, faces competition from machines. A variety of production music platforms, including Amper Music, Xhail and Jukedeck, offer AI-generated soundtracks for videos, commercials, films and television based on a user’s specifications.
Sony’s Flow-Machines project in France is especially advanced, as demonstrated by composer Benoît Carré, who created “Daddy’s Car” in the style of the Beatles using the machine-learning platform. More recently, in a world first, French collective SKYGGE released an entire album, Hello World, that used Flow-Machines as an inspirational and collaborative tool to generate sections of songs, melodies, and instrument and vocal tracks.
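Style-imitation systems of this kind are far more sophisticated, but the family of ideas can be hinted at with a first-order Markov chain over notes: learn which note tends to follow which in a corpus, then walk those transitions to produce new material in a similar style. This toy sketch is not Sony’s system; the function names and the training melody are invented for illustration:

```python
import random
from collections import defaultdict

def train(melody):
    """Count note-to-note transitions in a training melody."""
    table = defaultdict(list)
    for a, b in zip(melody, melody[1:]):
        table[a].append(b)
    return table

def generate(table, start, length, rng=None):
    """Walk the transition table to produce a new melody in a similar style."""
    rng = rng or random.Random(0)
    note, out = start, [start]
    for _ in range(length - 1):
        choices = table.get(note)
        if not choices:  # dead end: restart from the seed note
            choices = [start]
        note = rng.choice(choices)
        out.append(note)
    return out

# Toy training melody (invented for illustration).
melody = ["C", "D", "E", "D", "C", "E", "G", "E", "C"]
new_tune = generate(train(melody), "C", 16)
```

A chain like this only ever reproduces local note pairs; research systems layer constraints on top (structure, harmony, style targets) to get coherent whole songs.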
The robots are not here to take your job just yet. But it’s not too early to start a conversation regarding prospective issues such as rights, royalties and regulations.
If they do take over, well, at least we’ll have more leisure time. “Alexa, play ‘Despacito.’”