
Road-Testing AI-Generated Production Music

By Steve Harvey

Santa Monica, CA—In a study conducted earlier this year, audio effectiveness measurement platform Veritonic quizzed consumers to determine whether they could tell the difference between music created by a human and that generated by an AI. Spoiler alert: they couldn’t.

“It’s the man-versus-machine debate,” says Scott Simonelli, CEO of Veritonic, which conducted the analysis for its client, AI music creation company Amper Music. Veritonic, founded four years ago, works with a diverse range of brands, he says, including Brighthouse Financial, E-Trade, Visa, Pepsi, Pandora and NPR.

Simonelli explains, “We’re trying to look at this and say, Amper is creating audio based on its technology and AI. Is it possible to quantify if that audio is different than audio created by humans? Can we put statistically significant data behind that? Is it performing on par or better than music written by humans?”


Panelists were presented with two versions of a video—one with stock music, the other with Amper’s AI-generated music—from brands including USA Today, Discovery, NASA and MLB. Through its proprietary platform, Veritonic tracked panelists’ emotional responses, predisposition to watch videos from the selected brands and preferences for the music without telling them in advance that any of it was created using AI, to eliminate bias.

“For the source material, we had a team look through relevant and noteworthy content that had music but didn’t also have dialogue or other audio on top of it,” says Amper Music CEO Drew Silverstein. The team then wrote a description of the stock music in each piece of selected content to feed into Amper’s engine, which is driven by criteria such as genre, mood, instrumentation and tempo.
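The written descriptions the team fed into the engine can be pictured as small structured requests built from the criteria the article names: genre, mood, instrumentation and tempo. The sketch below is purely illustrative; the field and function names are assumptions, not Amper's actual interface.

```python
from dataclasses import dataclass, field

@dataclass
class MusicRequest:
    # Criteria the article says drive the engine
    genre: str
    mood: str
    instrumentation: list = field(default_factory=list)
    tempo_bpm: int = 120
    length_seconds: float = 30.0  # tailored to the video's length

def describe(req: MusicRequest) -> str:
    """Render the request as a human-readable brief, like the team's written descriptions."""
    instruments = ", ".join(req.instrumentation) or "engine's choice"
    return (f"{req.length_seconds:.0f}s {req.genre} cue, {req.mood} mood, "
            f"{req.tempo_bpm} BPM, featuring {instruments}")

req = MusicRequest(genre="orchestral", mood="uplifting",
                   instrumentation=["strings", "brass"], tempo_bpm=96,
                   length_seconds=60)
print(describe(req))
```

The point of the structure is that every request is explicit and repeatable, which is what lets the team give the engine exactly "one shot" per video.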

“We input that into Amper and had it create a piece of music from scratch that followed the descriptors and was tailored to the length of the video. We took that output and married it to the video. At that point, Amper’s role in the process was complete,” he says.

While Amper will create a unique piece of music from scratch with every request, the team accepted the first attempt. “We gave it one shot,” says Silverstein. “So much of Amper’s premise is that the experience of working with the platform should be faster and more economical than an alternative process. For that to matter, the music must be equally good, if not better.” The object of Veritonic’s research was to test that premise, he says.


There are two steps to Amper’s process, says Silverstein: composition and performance. “On the composition side, we’re building data sets that describe music on a music theory level, an emotional level and a genre level. We use those data sets to compose a piece of music that reflects what those genre and mood inputs are.”
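As a toy illustration of the composition step, the sketch below maps a mood input to symbolic note choices. It stands in for Amper's far richer music-theory, emotion and genre data sets; the scales, names and structure here are assumptions for illustration only.

```python
import random

# Toy "data set": per-mood scale choices, a stand-in for Amper's
# music-theory/emotion/genre data sets (purely illustrative).
MOOD_SCALES = {
    "uplifting": [0, 2, 4, 7, 9],    # major pentatonic intervals
    "somber":    [0, 3, 5, 7, 10],   # minor pentatonic intervals
}

def compose(mood: str, bars: int, seed: int = 0) -> list:
    """Step one (composition): choose pitches from the mood's scale.

    Returns symbolic note events (MIDI note numbers), not audio --
    turning these into sound is the separate performance step.
    """
    rng = random.Random(seed)
    scale = MOOD_SCALES[mood]
    root = 60  # middle C
    return [root + rng.choice(scale) for _ in range(bars * 4)]

notes = compose("uplifting", bars=2)
print(notes)
```

The output is a score-like intermediate representation, which is why a second, performance step is needed at all.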

That data is turned into audio using Amper’s human-performed instrument sample library. “We’ve now got one of the world’s largest sample libraries, almost 5,000 instruments strong. We use those audio samples to transform a 60-second piece of music into an audio file that might contain 100,000 audio clips that we hope sounds like a professional recording,” he says.
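The performance step, turning symbolic music into audio by placing many sampled clips on a timeline and summing them, can be caricatured in a few lines. Here sine tones stand in for real instrument samples; everything in this sketch is an assumption, not Amper's renderer.

```python
import math

SAMPLE_RATE = 8000  # low rate keeps the toy fast

def sample_clip(midi_note: int, dur: float = 0.25) -> list:
    """Fake 'sampled instrument' clip: a sine tone at the note's pitch."""
    freq = 440.0 * 2 ** ((midi_note - 69) / 12)
    n = int(SAMPLE_RATE * dur)
    return [math.sin(2 * math.pi * freq * i / SAMPLE_RATE) for i in range(n)]

def render(notes, note_spacing=0.25) -> list:
    """Step two (performance): mix each note's clip onto a shared
    timeline at its start offset -- a miniature of layering ~100,000
    clips into a 60-second file."""
    total = int(SAMPLE_RATE * note_spacing * len(notes)) + SAMPLE_RATE
    out = [0.0] * total
    for i, note in enumerate(notes):
        start = int(SAMPLE_RATE * note_spacing * i)
        for j, s in enumerate(sample_clip(note)):
            out[start + j] += s
    return out

audio = render([60, 64, 67])  # a C major arpeggio
print(len(audio))
```

Because the clips are recordings of human performances, the mixing step is where, as Silverstein puts it, the human element of the playing carries through into the finished audio.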

Amper has been building that library since the company was founded five years ago. “It’s more or less an around-the-clock effort, costing millions and millions of dollars,” says Silverstein, who hopes to eventually sample every instrument played in every way possible. “We haven’t yet hit the more obscure parts of the music canon, but what’s helpful is that when we capture brilliant musicians performing on instruments in an elite way, that human element of the performance is translated into the music.”

Rating the music against Veritonic’s survey attributes such as “authentic,” “inspiring,” “likeable,” “modern,” “optimistic” and “unique,” panelists scored it remarkably evenly. They rated Amper’s music and the stock music accompanying the videos similarly across emotional response attributes and purchase intent, and showed no strong preference for either type of music.

When panelists were subsequently asked whether any of the music was AI-generated, the percentage who believed the human-produced stock music was machine-made was about the same as the percentage who believed Amper’s music was. Those two percentages combined were roughly equal to the percentage of respondents who believed both tracks were machine-made, and to the percentage who responded that neither was AI-generated.


“When I read the data as even,” says Silverstein, “I interpreted that as people guessing. It means that it’s so unclear which is which that really they’re just flipping a coin.”

“I wish I could tell you our data was always that flat,” says Simonelli. “Not one thing was an outlier. That’s compelling. I can’t think of a study we’ve done where things were this tightly scored.”

But an even score was precisely what was hoped for. “The reality is that if one were better or worse, I would be worried,” Simonelli says. The time to worry, he explains, would have been if the AI-generated music had underperformed, if it had been perceived as “off” or weird, in the same way that AI-generated faces can seem not quite right (the “uncanny valley” effect). But Veritonic’s panelists uniformly found Amper’s AI music indistinguishable from its human-made counterpart. “It’s the best outcome you could have hoped for,” he says.

Public perception of AI-generated music appears to be shifting. Early attempts at AI music could be described as clunky at best, but improvements in AI technology and the data behind the music-making algorithms now appear to be delivering an authenticity that is more widely appreciated.


“The thing that was delightfully surprising from my perspective was the question we didn’t necessarily know was going to be asked: If you knew the music was composed by a machine, would it change your perception of the brand? Most people said no,” says Silverstein.

Of those who thought the use of AI music would affect their perception, more said it would make them think positively of the brand than negatively, Silverstein reports. “Especially given the conversations that have happened publicly, it’s lovely to see that. The default perception is starting to evolve. People are starting to realize this can be a positive thing.”
