Professors Develop New Sound Synthesis Model for VR Audio Effects

The 12-tone Mother Chord is at the heart of a new approach to modal sound effects in VR.

Stanford, CA (August 9, 2019)—Austrian composer Fritz Heinrich Klein’s creation of the 12-tone Mother Chord (Mutterakkord) in 1921, which contains one instance of each interval in an octave, has inspired a new approach to the synthesis of modal sound effects in virtual reality.

Unlike television or motion pictures, where sound effects are generally synchronized to picture during post production, VR is unscripted and unpredictable, making the generation of realistic, accurately localized sounds within the environment a significant challenge. But a paper presented at the recent ACM SIGGRAPH 2019 conference on computer graphics and interactive techniques by Doug James, a Stanford University professor of computer science with a courtesy appointment in music, and his graduate student collaborator Jui-Hsien Wang offers a solution: a method that synthesizes realistic sound models with far greater efficiency than current algorithms.

Previous algorithms have relied on the work of Hermann von Helmholtz, the 19th-century German scientist who, among many other contributions, developed an equation describing the propagation of sound. Scientists have built 3D sound algorithms on his work to synthesize audio that, by changing volume and direction relative to the listener, appears realistic. But these modeling algorithms also rely on the boundary element method (BEM), a numerical technique for solving the underlying integral equations that demands costly computational power and time.
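For reference, the Helmholtz equation has a compact standard form: it governs a time-harmonic acoustic pressure field p at wavenumber k = ω/c, where c is the speed of sound.

```latex
% Homogeneous Helmholtz equation for a time-harmonic acoustic
% pressure field p(x), with wavenumber k = \omega / c:
\nabla^2 p(\mathbf{x}) + k^2 \, p(\mathbf{x}) = 0
```

Classical pipelines solve a separate boundary-value problem of this kind for every vibration mode of an object, which is what makes the precomputation so expensive.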

Rather than use the Helmholtz equation and BEM, the researchers turned instead to Klein’s Mother Chord, first used in his 1921 chamber orchestra composition Die Maschine, which harmoniously combines many distinct tones into a single sound. They named their algorithm KleinPAT in his honor. “Our KleinPAT algorithm optimally arranges different modal tones of a vibrating 3D object into chords, which are then played together by a time-domain vector wavesolver in order to efficiently estimate all acoustic transfer fields,” they write in the introduction to their paper.
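To make the chord idea concrete, here is a minimal Python sketch of one way to pack modal frequencies into well-separated chords so that several modes can share a single simulation and still be told apart afterward. It is illustrative only: KleinPAT’s conflation is an optimal arrangement described in the paper, and the function name and min_gap threshold here are invented for the example.

```python
# Illustrative sketch (not the paper's algorithm): greedily pack modal
# frequencies into "chords" whose tones are pairwise separated by at
# least `min_gap` Hz, so each chord can be simulated in one time-domain
# run and its modes later separated in the frequency domain.

def conflate_modes(freqs_hz, min_gap=50.0):
    """Partition a list of modal frequencies into well-separated chords."""
    chords = []  # each chord is a list of frequencies
    for f in sorted(freqs_hz):
        # Place the mode in the first chord where it keeps its distance
        # from every tone already assigned to that chord.
        for chord in chords:
            if all(abs(f - g) >= min_gap for g in chord):
                chord.append(f)
                break
        else:
            chords.append([f])  # no existing chord fits; start a new one
    return chords

if __name__ == "__main__":
    modes = [440.0, 445.0, 880.0, 1320.0, 1323.0, 2000.0]
    for i, chord in enumerate(conflate_modes(modes)):
        print(f"chord {i}: {chord}")
```

The fewer chords the packing produces, the fewer time-domain simulations are needed, which is where the speedup over one-solve-per-mode pipelines comes from.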

The abstract for the paper, “KleinPAT: Optimal Mode Conflation for Time-Domain Precomputation of Acoustic Transfer,” sums up the scientists’ work.

“We propose a new modal sound synthesis method that rapidly estimates all acoustic transfer fields of a linear modal vibration model, and greatly reduces preprocessing costs. Instead of performing a separate frequency-domain Helmholtz radiation analysis for each mode, our method partitions vibration modes into chords using optimal mode conflation, then performs a single time-domain wave simulation for each chord. We then perform transfer deconflation on each chord’s time-domain radiation field using a specialized QR solver, and thereby extract the frequency-domain transfer functions of each mode. The precomputed transfer functions are represented for fast far-field evaluation, e.g., using multipole expansions.


“In this paper, we propose to use a single scalar-valued Far-field Acoustic Transfer (FFAT) cube map. We describe a GPU-accelerated vector wavesolver that achieves high-throughput acoustic transfer computation at accuracy sufficient for sound synthesis. Our implementation, KleinPAT, can achieve hundred- to thousand-fold speedups compared to existing Helmholtz-based transfer solvers, thereby enabling large-scale generation of modal sound models for audio-visual applications.”
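The deconflation step can be pictured as a linear least-squares fit: each chord’s recorded signal is a sum of tones at known modal frequencies, and solving for their amplitudes separates the modes again. The sketch below is a simplified, hypothetical stand-in for the paper’s specialized QR solver; any least-squares routine illustrates the principle.

```python
# Illustrative sketch (not the paper's solver): recover per-mode complex
# amplitudes from one "chord" recording by linear least squares.
import numpy as np

def deconflate(signal, freqs_hz, sample_rate):
    """Fit complex amplitudes of known-frequency tones to a chord signal.

    Returns one complex amplitude per mode; its magnitude and phase play
    the role of a (single-point) acoustic transfer value for that mode.
    """
    t = np.arange(len(signal)) / sample_rate
    # Design matrix: one complex exponential column per modal frequency.
    A = np.exp(2j * np.pi * np.outer(t, freqs_hz))
    amps, *_ = np.linalg.lstsq(A, signal.astype(complex), rcond=None)
    return amps

if __name__ == "__main__":
    sr, freqs = 44_100, [440.0, 880.0, 1320.0]
    t = np.arange(4096) / sr
    chord = sum(a * np.cos(2 * np.pi * f * t)
                for a, f in zip([1.0, 0.5, 0.25], freqs))
    # Each cosine projects onto its positive-frequency exponential with
    # half its amplitude, so this prints roughly [0.5, 0.25, 0.125].
    print(np.abs(deconflate(chord, freqs, sr)))
```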
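The FFAT cube map mentioned in the abstract stores, per mode, a scalar far-field transfer value on the six faces of a cube indexed by listener direction. The data layout and lookup below are hypothetical, intended only to suggest how cheap a runtime query of such a representation can be.

```python
# Illustrative sketch (hypothetical layout, not KleinPAT's): sample a
# per-mode scalar cube map of shape (6, res, res) along a direction.
import numpy as np

def ffat_lookup(cubemap, direction):
    """Nearest-texel lookup of a transfer magnitude by listener direction."""
    d = np.asarray(direction, dtype=float)
    d /= np.linalg.norm(d)
    axis = int(np.argmax(np.abs(d)))           # dominant axis picks the face
    face = 2 * axis + (0 if d[axis] > 0 else 1)
    a, b = [i for i in range(3) if i != axis]
    # Project onto the face plane, then map [-1, 1] to texel coordinates.
    u = d[a] / abs(d[axis])
    v = d[b] / abs(d[axis])
    res = cubemap.shape[1]
    i = min(int((u + 1) / 2 * res), res - 1)
    j = min(int((v + 1) / 2 * res), res - 1)
    return cubemap[face, i, j]

if __name__ == "__main__":
    cubemap = np.random.rand(6, 32, 32)        # stand-in transfer magnitudes
    print(ffat_lookup(cubemap, [0.2, 0.9, -0.3]))
```

Scaling each mode’s oscillator by a value looked up this way is what lets a game or VR engine render direction-dependent sound at interactive rates.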

As James comments in an article in the latest issue of Stanford Engineering magazine, “We think this is a game changer for interactive environments.”

Stanford University School of Engineering • http://engineering.stanford.edu