Little Ferry, NJ (January 3, 2014)—Eventide has released a trial version of the Mood plug-in, which analyzes key, spectral content, tempo, dynamics and additional musical aspects and compares them against a database built from the responses of people who listened to and rated pop songs.
Mood displays, in real time, the relative intensity of four emotions – angry, calm, happy and sad. These intensities are output as MIDI and OSC values that could be used, for example, to control the brightness and color of lights on stage or in a dance club.
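To illustrate the idea, here is a minimal sketch of how four emotion intensities might be turned into MIDI control-change values for a lighting rig. The emotion names come from the release; the CC numbers, function name and 0.0–1.0 intensity scale are assumptions for illustration, not Eventide's actual mapping.

```python
# Hypothetical mapping from Mood-style emotion intensities to MIDI CC values.
# CC assignments below are illustrative, not taken from the plug-in.

EMOTIONS = ("angry", "calm", "happy", "sad")

# One assumed controller number per emotion.
CC_FOR_EMOTION = {"angry": 20, "calm": 21, "happy": 22, "sad": 23}

def to_midi_cc(intensities):
    """Convert normalized intensities (0.0-1.0) to 7-bit MIDI CC values.

    Returns a list of (cc_number, value) pairs, one per emotion,
    with out-of-range intensities clamped before scaling to 0-127.
    """
    messages = []
    for emotion in EMOTIONS:
        level = max(0.0, min(1.0, intensities.get(emotion, 0.0)))
        messages.append((CC_FOR_EMOTION[emotion], round(level * 127)))
    return messages

print(to_midi_cc({"angry": 0.1, "calm": 0.9, "happy": 0.5, "sad": 0.0}))
```

A lighting controller listening for those CC numbers could then map, say, "angry" to red saturation and "calm" to overall brightness.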
Training is done by asking people to listen to examples of songs that make them “feel” a certain way and having them judge the degree of each emotion. The algorithm then analyzes these rated songs to determine those characteristics involved in eliciting specific emotions. This process creates the ‘descriptors’ that can then be used to analyze a new submission/song.
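The training step described above can be sketched in simplified form: each rated song contributes its acoustic features, weighted by how strongly listeners judged the emotion, and the weighted average becomes that emotion's descriptor. This is an illustrative sketch only; the feature names, weighting scheme and `build_descriptor` function are assumptions, not Eventide's published algorithm.

```python
# Illustrative sketch of deriving a per-emotion "descriptor" from
# listener ratings. Not Eventide's actual method: here the descriptor
# is simply a rating-weighted average of each song's acoustic features.

def build_descriptor(rated_songs):
    """rated_songs: list of (features, rating) pairs, where features is a
    dict of acoustic measurements and rating is a 0.0-1.0 emotion score.

    Returns a dict giving the rating-weighted mean of every feature,
    or an empty dict when there is no rating weight at all.
    """
    totals, weight = {}, 0.0
    for features, rating in rated_songs:
        weight += rating
        for name, value in features.items():
            totals[name] = totals.get(name, 0.0) + rating * value
    return {name: total / weight for name, total in totals.items()} if weight else {}

# Hypothetical "happy" descriptor from two rated songs.
happy = build_descriptor([
    ({"tempo": 128.0, "brightness": 0.8}, 0.9),
    ({"tempo": 90.0, "brightness": 0.4}, 0.3),
])
print(happy)
```

A new song's features could then be compared against each emotion's descriptor to estimate the four intensities the plug-in displays.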
“Mood is a bit whimsical and no doubt some will question why we bothered to create the plug-in. The fact is that audio analysis is at the heart of what we do, and we were curious to explore the possibility of using signal analysis to map musical content to emotion,” said Eventide’s Tony Agnello.
To date, Mood has only been trained on ‘pop’ songs. Solo voice, solo instruments, jazz and classical music will not yield meaningful results. However, training is ongoing as Eventide continues to develop results for other genres.