This article originally appeared in the November 2018 issue of Pro Sound News. Innovations is a monthly column in which different pro audio manufacturers are invited to discuss the thought process behind creating their products of note.
The history of our technology is a journey of searching for the true reproduction of sound.
Back when Helmut Oellers, the father of our founding idea and Head of R&D, spent his days solving acoustical issues in planetariums, he began looking for an audio reproduction solution superior to conventional approaches. The goal: achieving correct localization and reproduction of the original signal for everyone in the audience, thereby creating a truly genuine immersive audio experience.
It all started with the recognition that the classical audio reproduction procedures we are all so familiar with do not truly reproduce the original sound. To this day, the status quo in audio reproduction, whether for so-called “immersive” environments or not, is based on the use of phantom sources. A phantom source is an artificial perception of the source location in our brain, a psychoacoustic effect caused by the level and time differences with which signals arrive at our two ears. In these systems, all attributes of the original source (direct sound, first reflections and timbre) are captured in one signal and divided between the speakers (e.g. stereo). As a result, none of the important first strong reflections, which the human auditory system uses for localization, are reconstructed at their correct starting points. The replicated sound sources are therefore unreal: they do not exist like this in nature and do not share the behavior of the original signal.
Because they are an illusion, phantom sources create unstable imaging and localization, and their effect works only in a small area of the audience: the sweet spot. Once a listener moves outside this area or changes perspective, the imaging and localization become distorted and incorrect. The illusion breaks down, and the listener can clearly identify the speaker itself as the source of all audio.
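The level differences behind a phantom source can be made concrete with a standard stereo panning law. The short Python sketch below is purely illustrative (a textbook sine panning law, not Holoplot code): it computes the left/right gains that place a phantom image at a chosen azimuth between two speakers, and its docstring notes why the image only holds in the sweet spot.

```python
import math

def constant_power_pan(azimuth_deg, spread_deg=30.0):
    """Gains placing a phantom source between speakers at +/-spread_deg.

    Implements the classic sine panning law,
        sin(azimuth) / sin(spread) = (gL - gR) / (gL + gR),
    normalized to constant power. The phantom image only holds for a
    listener equidistant from both speakers (the sweet spot); off-axis,
    the nearer speaker dominates and the illusion collapses.
    """
    ratio = math.sin(math.radians(azimuth_deg)) / math.sin(math.radians(spread_deg))
    left = (1.0 + ratio) / 2.0
    right = (1.0 - ratio) / 2.0
    norm = math.hypot(left, right)  # constant-power normalization
    return left / norm, right / norm
```

Panning fully left (azimuth equal to the speaker spread) yields gains of (1, 0); a centered source gets equal gains, yet the image sits in the middle only for a listener on the symmetry axis.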
In practice, this may seem a minor problem when an acoustical image is replicated only on the frontal horizontal listening plane for a fixed listener position. However, with the industry and consumers demanding more immersive, intriguing and realistic experiences, that is seldom the case in current projects. The desire to create spatial audio is clearly on the rise today, and for Holoplot, which started in 2011, it was the initial spark.
Helmut’s, and hence Holoplot’s, approach to finding a better solution started with wave field synthesis (WFS) as the basis of design. WFS is a spatial audio reproduction procedure that does not depend on psychoacoustic phantom sources. Instead, it creates a physical copy of the original wavefront using a large number of smaller elementary waves, following the Huygens principle. This allows all sound sources and their reflections to be recreated at their correct locations. Realizing this principle delivers several advantages. Firstly, the sweet spot is eliminated: localization of sound sources is correct and stable, independent of the listener’s position relative to the source. Secondly, virtual sources can be synthesized in the room, close to the listeners. Proximity, a parameter previously unavailable in audio reproduction, is thereby introduced and delivers incredible effects. Thirdly, and contrary to conventional procedures, direct waves, reflections and reverberation can be carried in separate, unique signals and thus treated independently of each other. This gives incredible new flexibility in creating sound experiences.
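The Huygens idea above can be sketched in a few lines: each driver in an array acts as an elementary source, fed with the signal delayed by the travel time from the virtual source to that driver, so the elementary waves superpose into a copy of the original wavefront. The following is a deliberately simplified illustration with an assumed line-array geometry; real WFS driving functions (and Holoplot's processing) are considerably more involved.

```python
import math

SPEED_OF_SOUND = 343.0  # m/s at room temperature

def wfs_driving_params(virtual_source, driver_positions):
    """Per-driver (delay, gain) pairs approximating a virtual point source.

    Each driver re-radiates the source signal with the propagation delay
    from the virtual source to the driver, so the many elementary waves
    sum to the original wavefront in front of the array. The 1/sqrt(r)
    amplitude taper is a common textbook simplification.
    """
    sx, sy = virtual_source
    params = []
    for dx, dy in driver_positions:
        r = math.hypot(dx - sx, dy - sy)   # source-to-driver distance
        delay = r / SPEED_OF_SOUND         # seconds
        gain = 1.0 / math.sqrt(max(r, 1e-3))
        params.append((delay, gain))
    return params

# 16 drivers spaced 10 cm apart along a line; virtual source 2 m behind it
drivers = [(0.1 * i, 0.0) for i in range(16)]
params = wfs_driving_params((0.75, -2.0), drivers)
```

The drivers nearest the virtual source fire first and loudest, exactly as the true wavefront would reach them first; drivers toward the ends of the array follow with growing delay and falling gain.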
During the development of our wave field synthesis system, we were able to learn from past attempts to design a commercial WFS system. Despite the proven theoretical concept, previous classical WFS systems never attained major commercial success, for reasons often rooted in their environmental and processing requirements. Classical WFS systems mostly prescribed a circular array of speakers around the listener, ideally dry room acoustics and a significant amount of processing power. Their application was therefore limited to very specific immersive installations, with broader application not being feasible. With our developments, we took on the challenge of filling that gap and creating a flexible, practical product. Perhaps unsurprisingly, the development process for such a research-dependent product is non-linear, scattered with side steps, loops and setbacks. When paving completely new paths, choosing the right one is a challenging and often non-obvious task.
When I joined Holoplot full-time in 2016, after being involved for several years, the company was still in the prototype phase with a rather open target market. The product concept was based on a fully modular, two-dimensional matrix system consisting of our own proprietary speaker modules, processing core and software. Depending on the application, our speaker modules can be combined into single or multiple wall structures. The system works mostly centrally, from one or a few points of origin, rather than distributing many speakers around a space. Much as in reality, it can often actively use the playback room through targeted reflections to create the desired sound field. This stands in stark contrast to classical WFS approaches and makes the system far more scalable and applicable. Various algorithmic and computational adaptations also reduce the processing requirements. Finally, the combination of proprietary hardware and software gives us significantly more control: we can create more efficient audio wavefronts with very precise two-axis beamforming, resulting in very constant level over distance and multi-content capability, which makes the system suitable for uses beyond the typical WFS applications.
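The idea of two-axis beamforming from a flat matrix can be illustrated with plain delay-and-sum steering: delaying each driver by its projection onto the desired direction makes all driver signals add in phase along that direction, both horizontally and vertically. The sketch below is an assumed, simplified model with a hypothetical 4x4 grid, not Holoplot's actual algorithm.

```python
import math

SPEED_OF_SOUND = 343.0  # m/s

def steering_delays(positions, azimuth_deg, elevation_deg):
    """Delay (seconds) per driver to steer a planar array's beam.

    positions: (x, y) driver coordinates in the array plane, in metres
    (x horizontal, y vertical). Classic delay-and-sum steering in two
    axes: each driver is delayed by its projection onto the steering
    direction, referenced so every delay stays non-negative.
    """
    az = math.radians(azimuth_deg)
    el = math.radians(elevation_deg)
    ux = math.sin(az) * math.cos(el)   # steering component along x
    uy = math.sin(el)                  # steering component along y
    proj = [x * ux + y * uy for x, y in positions]
    ref = max(proj)                    # latest driver gets zero delay
    return [(ref - p) / SPEED_OF_SOUND for p in proj]

# Hypothetical 4x4 grid of drivers on a 10 cm pitch
grid = [(0.1 * i, 0.1 * j) for i in range(4) for j in range(4)]
broadside = steering_delays(grid, 0.0, 0.0)   # straight ahead: no delays
steered = steering_delays(grid, 30.0, 10.0)   # 30 deg right, 10 deg up
```

Steering straight ahead needs no delays at all; any off-axis direction produces a plane of delays across the grid, tilting the emitted wavefront in both axes at once.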
Together with the team, we defined our scope for the future by taking a new perspective on the product’s capabilities and identifying its potential for applications beyond immersive audio. As we demonstrated in test installations at the Frankfurt Central Station, control over wavefront propagation delivers excellent audio quality even in the acoustically worst environments. In this case, that meant Speech Transmission Index (STI) values above 0.8 at a distance of 175 m, with only -7 dB over the whole plane. Raising the bar for speech intelligibility and quality in general, though, is not only a topic for the most challenging environments, but for all applications.
In February 2018, we released our first product series, called Orion. The series focuses mainly on speech as well as creative applications. The capabilities it offers planners, audio engineers and creatives are vast, as they gain access to, and control over, audio in ways they haven’t had before. Whether it’s creating an incredibly homogeneous sound field over a large area, delivering the best speech intelligibility, distributing multiple content streams in a single space or building an incredible immersive installation, Holoplot can be a very powerful, flexible and easy-to-use solution.
We are proud of our innovation and will continue developing more, to really change how audio is reproduced and eventually experienced.
Roman Sick is the CEO of Holoplot.
Holoplot • www.holoplot.com