Josh McDermott delivers the keynote address at the opening ceremonies for the 135th AES Convention
New York, NY (October 18, 2013)—Whether you hear the constant stream of rain or the crackling of a fire, the connection between the sound, the ear and the brain allows you to determine exactly what you are hearing. Josh McDermott, a perceptual scientist studying sound, hearing and music in the Department of Brain and Cognitive Sciences at MIT, is particularly interested in understanding how we identify these varied sounds.
As part of Thursday’s opening ceremonies for the 135th AES Convention in New York City, McDermott delivered his keynote address, “Understanding Audition via Sound Synthesis,” which outlined his experiments and research on this topic.
“Everyday human listening is quite a stunning computational feat,” McDermott said. “The listener is interested in what happened in the world that made that sound.”
Looking at a sound wave alone, we can't determine what made the sound, McDermott explained. Instead, when the sound wave travels into the ear, the information is transferred to and processed in the brain. To examine this process, McDermott and his colleagues took clips of well-known sound textures, that is, sounds built from many overlapping layers, and reduced each one to the basic statistics of its component frequencies. When we hear rainfall, for example, we are actually listening to a multitude of simultaneous sounds that together create the familiar sound of rain. McDermott analyzed the frequency statistics of these textured sounds and used them to reproduce synthetic versions, testing whether those statistics alone are what the brain needs to distinguish rainfall from another sound. However, McDermott said his research showed that basic texture statistics are not the whole story.
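To make the idea concrete, here is a minimal sketch of what "reducing a sound to statistics" can look like. This is an illustration only, not McDermott's actual pipeline, which uses a cochlear-model filterbank and matches a much richer set of statistics through iterative synthesis; the band edges and the choice of mean, variance and skewness below are assumptions for demonstration.

```python
# Illustrative sketch: split a signal into frequency bands and summarize
# each band's amplitude envelope with a few simple statistics.
# (McDermott's real method uses a cochlear filterbank and many more
# statistics; the bands and statistics here are arbitrary examples.)
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def band_envelope_stats(signal, sample_rate,
                        bands=((100, 400), (400, 1600), (1600, 6400))):
    """Return (mean, variance, skewness) of each band's amplitude envelope."""
    stats = []
    for lo, hi in bands:
        # Bandpass filter the signal into one frequency band.
        sos = butter(4, [lo, hi], btype="bandpass", fs=sample_rate,
                     output="sos")
        band = sosfiltfilt(sos, signal)
        # The Hilbert transform gives the band's amplitude envelope.
        env = np.abs(hilbert(band))
        m, v = env.mean(), env.var()
        skew = ((env - m) ** 3).mean() / (v ** 1.5 + 1e-12)
        stats.append((m, v, skew))
    return stats

# Noise is a rough stand-in for a texture like rain: one second at 44.1 kHz.
rng = np.random.default_rng(0)
noise = rng.standard_normal(44100)
for band, (m, v, s) in zip(("low", "mid", "high"),
                           band_envelope_stats(noise, 44100)):
    print(f"{band}: mean={m:.3f} var={v:.3f} skew={s:.3f}")
```

A synthesis experiment like McDermott's would then run the process in reverse: shape a new random signal until its band statistics match these, and ask listeners whether the result still sounds like rain.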
He also discovered that we can identify a shorter sound more easily because of the noise's distinct start and end. In a second experiment, he took three similar-sounding samples, two of them from the same source, and asked a group of listeners to pick out the one that differed from the other two. With short samples, the group identified the odd one out more accurately than when they listened to the same three sounds for a longer period of time.
“The brain is using detail in the sound to measure the statistics, and throwing the detail away,” McDermott concluded. “Texture statistics may be all that we retain, while we lose access to the individual raindrops.”
To learn more about McDermott’s research, visit his webpage at mcdermottlab.mit.edu.