Composer Andrew Gross observes, “It’s no longer a question of when AI will start generating business and competing with human composers; it’s happening now.”

Los Angeles, CA—Are robots coming to take your job? If you’re a songwriter or a composer, they’re already here.

For years, many questioned whether AI, a field that dates to the 1950s, could ever crack the code of creativity, long considered a uniquely human capability. But machine learning has made such significant advances in the last several years that we should no longer ask whether AI can make art. Rather, when presented with a photograph, a painting or a piece of music, we must ask: Was this made by a human or a machine? And the answer is not always obvious.

Panel moderator Andrew Gross

As composer Andrew Gross observes, “It’s no longer a question of when AI will start generating business and competing with human composers; it’s happening now.”

During the fifth annual PMA Production Music Conference, to be held Sept. 26-28 at the Loews Hollywood Hotel in Hollywood, CA, Gross will moderate “AI-Generated Music: How Will This Impact the Future of Production Music?” The panel will explore the current state of AI-generated music for sync licensing, whether for film, television, video games or other visual media.

But as much as the panel will focus on the technologies, workflows and business models, there will also be some discussion of how AI-generated music might be regulated. “Because there’s nothing in place at the moment to stop it,” says Gross, who is also a PMA (Production Music Association) board member. “There are no laws—there are just ethical arguments and conversations that we can all have.”

Online AI music platforms generally operate in a similar way: a user selects parameters such as genre, length, tempo and instrumentation, and the web application generates an audio file. Where the services tend to differ is in the method used to create the music and in how the composition’s copyright is assigned.
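
To make that workflow concrete, here is a minimal sketch, in Python, of how a client might submit those parameters to such a service and save the resulting audio file. The endpoint, field names and response format are hypothetical assumptions for illustration, not any particular vendor’s actual API.

```python
import requests  # third-party HTTP client

# Hypothetical endpoint and parameter names, chosen for illustration only;
# real platforms such as Amper or Jukedeck define their own interfaces.
GENERATE_URL = "https://example-ai-music-service.com/api/v1/generate"

request_body = {
    "genre": "cinematic",       # overall style
    "length_seconds": 90,       # duration of the cue
    "tempo_bpm": 120,           # beats per minute
    "instrumentation": ["strings", "piano", "percussion"],
}

# Submit the brief and (assuming the service returns rendered audio) save it.
response = requests.post(GENERATE_URL, json=request_body, timeout=120)
response.raise_for_status()

with open("generated_track.wav", "wb") as f:
    f.write(response.content)
```

From the user’s point of view, the creative decisions are reduced to that handful of fields; everything else happens on the vendor’s servers.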

Amper Music offers fully customizable music. (Amper CEO and co-founder Drew Silverstein, an award-winning composer, producer and songwriter for visual media, will be on Gross’ panel.) Gross has tried Amper’s system: “I was able to upload a video, make a start point and an end point, and identify anything I wanted to hit, like a cut. I would say it was high-school level at best, but it was impressive in that it had the right idea. It’s just that it was using sounds that you might find in GarageBand or Logic.”

Amper in action

At the other end of the spectrum is Xhail, represented on the panel by CEO and founder Mick Kiely, who has over 30 years of experience in music composition and live performance. The company creates custom AI-generated pieces using human-generated loops and stems. “I’ve heard some of the results; it’s pretty impressive,” says Gross. “They have a very tight set of specs that they send to their composers. The quality is quite good because they’re well-produced loops that humans created.”

Amper Music is perhaps the highest-profile AI music platform, but it was not the first to market. That honor appears to go to Jukedeck in the UK, which to date has reportedly generated over half a million tracks for customers worldwide, including Coca-Cola and Google (which has its own AI-generated music and art research project, Magenta).

Founded by Cambridge University graduates, Jukedeck is working toward total AI music synthesis, not just machine-generated compositions, using artificial neural networks. The company’s R&D team posted an eight-bar piano piece on its blog in late 2016, noting, “To the best of our knowledge, it’s the first time a computer has written and produced a complete ‘song,’ from start to finish, using purely machine learning-driven techniques.”
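
For readers curious what “purely machine learning-driven techniques” can look like in miniature, the sketch below shows a toy next-note model in PyTorch that generates a melody one MIDI pitch at a time. It is an illustrative assumption about the general approach (a neural network trained to predict the next note), not a description of Jukedeck’s actual system.

```python
import torch
import torch.nn as nn

# Toy next-note model in the spirit of neural sequence generation;
# an illustrative sketch, not any company's production system.
VOCAB_SIZE = 128  # MIDI pitch range 0..127


class NoteLSTM(nn.Module):
    def __init__(self, vocab_size=VOCAB_SIZE, embed_dim=32, hidden_dim=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, vocab_size)

    def forward(self, notes, state=None):
        x = self.embed(notes)             # (batch, seq, embed_dim)
        out, state = self.lstm(x, state)  # (batch, seq, hidden_dim)
        return self.head(out), state      # logits over the next note


def sample_melody(model, start_note=60, length=16):
    """Autoregressively sample a melody, one MIDI note at a time."""
    model.eval()
    notes = [start_note]
    state = None
    with torch.no_grad():
        for _ in range(length - 1):
            inp = torch.tensor([[notes[-1]]])       # feed the last note back in
            logits, state = model(inp, state)
            probs = torch.softmax(logits[0, -1], dim=-1)
            notes.append(int(torch.multinomial(probs, 1)))
    return notes


# An untrained model emits random notes; training on a corpus of melodies
# is what would make the output musically coherent.
print(sample_melody(NoteLSTM()))
```

The point of the toy is the autoregressive loop: each sampled note becomes the context for the next one, which is the basic mechanic behind note-by-note neural composition.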

In one possible dystopian future, machines could make composers and songwriters redundant. But Gross believes that AI music software developers view composers and musicians as future partners. “The better the musicians, the better the composers they’re able to hire to program the AI, the better the AI will be able to write and produce better quality music,” he says.

There is already evidence that these AI-generated music technologies are useful collaborative tools. One YouTuber, Taryn Southern, added her own lyrics and vocal melodies to an AI-generated Amper composition to create “Break Free,” a video—which also uses AI-generated imagery—that is approaching two million views.

The most ambitious—or audacious, perhaps—example of AI-generated music is “Daddy’s Car.” In 2016, researchers at Sony Computer Science Laboratories used the company’s Flow Machines technology to create a pop song in the style of the Beatles, in collaboration with composer Benoît Carré. A more recent project by SKYGGE, a collective of French creatives, used Flow Machines as an inspirational and collaborative tool to generate sections of songs, melodies and instrument and vocal tracks to produce an entire album, engineered and mixed by humans, entitled Hello World (it can be heard on Spotify, YouTube and elsewhere).

One gray area regarding AI music, says Gross, is the issue of rights and publishing. Where these services offer royalty-free music, he says, “A company can purchase this music and there’s no composer attached to it. They own all rights to it, both master and publishing, in perpetuity.”

Potentially, “That company would have the right to attach a name as a composer and their own publishing entity, to collect royalties should it air on television, in the U.S. and worldwide,” or monetize it on YouTube, he says. “A CEO could say, My family and I are the composers and we’re going to start a publishing company. They could register with BMI and ASCAP and collect royalties just like anyone else.”

Xhail takes a different approach, Gross observes. “They register those copyrights. Xhail in effect becomes the owner and publisher of that piece of music.” Kiely has noted in past interviews that sync and royalty revenue is shared with contributing musicians and composers, with writers retaining 100 percent of their writer’s share and Xhail owning 100 percent of the publishing.

AI-enhanced audio software from iZotope, Nugen Audio and Waves might eventually impact the bottom line for mixers. And just as AI services like Landr and CloudBounce could take business away from lower-cost mastering engineers, AI music platforms may affect revenues for composers working on low budget sync licensing projects.

But Gross can also see an upside: “As a composer, I can see it being a great tool. You use it as inspiration. I like that.”

The “AI-Generated Music” panel is scheduled to begin at 4:20 p.m. on Sept. 28 at the Production Music Conference. The PMC schedule is online.

Production Music Association • pmamusic.com

2018 Production Music Conference • pmc.pmamusic.com