LEVERAGING Auditory Perception (Part 2): The HOW
Clearly, sound is amazing; so how do we use it?
1. First, let’s record sound the way it is meant to be heard.
I mentioned that our sense of hearing is fully spherical and has a wide spatial range. This is because it is binaural: it uses both ears together to localize sounds in time and space, track motion, and more.
We’ve actually been able to record binaural audio for about a century, but until recently there weren’t widespread, practical applications that demanded it. Binaural (aka 3D, 360°, ambisonic) audio is captured using special microphones, often a dummy head with a microphone in each ear, that preserve where sounds are located in space. The end result is a robust recording that reflects the way the world naturally sounds.
So why is binaural audio useful? Because it adds dimensionality to sound and can convey far more information, a much clearer picture of a scene. It is surround sound in the truest sense.
How can we use binaural audio? Because we hear sounds all around us, including outside our visual field, sound serves as an excellent cue that draws our attention to some visual target. Using sound as a signaling mechanism could be a very effective way to look at large datasets where you don’t know where to start. It could be used in VR movies to tell a coherent story while also granting the viewer freedom to look around in any direction and enjoy the whole scene.
You can listen to binaural audio through a regular pair of headphones; check out this barbershop sample created by QSound Labs.
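To build intuition for what those microphones capture, here is a minimal Python sketch (assuming numpy is installed) that fakes the two core binaural cues, interaural time difference and interaural level difference, by delaying and attenuating one headphone channel. Real binaural rendering uses full head-related transfer functions; the ~0.6 ms delay and 0.6 gain here are illustrative assumptions.

```python
# Minimal sketch: place a mono tone toward the listener's right ear using
# an interaural time difference (ITD) and level difference (ILD).
import wave
import numpy as np

RATE = 44100
t = np.linspace(0, 1.0, RATE, endpoint=False)
mono = 0.5 * np.sin(2 * np.pi * 440 * t)              # 1 s, 440 Hz tone

itd = int(0.0006 * RATE)                              # ~0.6 ms, near the max human ITD
left = np.concatenate([np.zeros(itd), mono])[:len(mono)] * 0.6  # far ear: later, quieter
right = mono                                          # near ear: earlier, louder

pcm = (np.stack([left, right], axis=1) * 32767).astype(np.int16)
with wave.open("panned_right.wav", "wb") as f:
    f.setnchannels(2)                                 # stereo
    f.setsampwidth(2)                                 # 16-bit samples
    f.setframerate(RATE)
    f.writeframes(pcm.tobytes())
```

Played over headphones, the tone should sit noticeably to the right.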
2. Let’s Use Sound to Perceptualize Data
Perhaps the most direct and meaningful way in which we can take advantage of sound today is via data sonification — the representation of data as sound.
“Sonification is defined as the use of sound to perceptualise data and convey information. It is an alternative and complement to visualisation, and proved to be particularly useful in the analysis of highly complex and multi-dimensional data sets.”
As the world becomes more data-driven, we are growing obsessed with data visualization: graphical ways to depict objects, trends, and relationships. Sound as data, however, is vastly underutilized, and it shouldn’t be. Our ears are highly sensitive to changes in timing, pitch, timbre, frequency, amplitude, and spatial location, making them trainable and well suited to pattern recognition. In many ways, audition is even superior to visual perception.
In short, auditory perception has advantages in information resolution, learning, memory, and pattern recognition that make it an excellent method for processing and understanding data, especially when coupled with visualization techniques.
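As a concrete illustration, here is a minimal parameter-mapping sonification sketch in Python (the data values and the 220 to 880 Hz pitch range are made-up assumptions): each data point becomes a short tone whose pitch rises with the value, so an upward trend is heard as a rising melody.

```python
# Minimal parameter-mapping sonification: one short tone per data point,
# pitch proportional to the value.
import wave
import numpy as np

RATE = 44100
data = [3.1, 3.4, 2.9, 4.8, 5.2, 5.0, 6.7, 6.9]       # hypothetical measurements

lo, hi = min(data), max(data)
tones = []
for v in data:
    freq = 220 + (v - lo) / (hi - lo) * (880 - 220)   # map value into 220-880 Hz
    t = np.linspace(0, 0.25, int(RATE * 0.25), endpoint=False)
    tone = 0.4 * np.sin(2 * np.pi * freq * t)
    tones.append(tone * np.hanning(len(tone)))        # fade in/out to avoid clicks

pcm = (np.concatenate(tones) * 32767).astype(np.int16)
with wave.open("sonified.wav", "wb") as f:
    f.setnchannels(1)
    f.setsampwidth(2)
    f.setframerate(RATE)
    f.writeframes(pcm.tobytes())
```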
These features make sonification a highly robust technique full of potential. The applications are limitless:
In her TEDx talk, Lily Asquith discusses listening to photons from the Higgs boson in the Large Hadron Collider. Solar scientists at NASA have found that sonification lets them sift through data at least ten times faster, and it is leading to novel discoveries that might otherwise have gone undetected.
One of my favorite applications is in neuroscience: listening to neurons. In electrophysiology, neuroscientists use electrodes to record electrical activity directly from individual neurons and from populations of neurons. By hooking the oscilloscope up to a speaker, you can hear neurons spiking and even identify the type of firing pattern. I remember one of my professors at Brown, Monica Linden, telling us in her Learning and Memory class: “It’ll be hard for you to distinguish the neuron from the background noise because you’re not used to hearing it; your ears don’t know what to listen for. But do this for a couple of weeks and it’ll stand out so clearly, be so obvious, that you won’t be able to not hear it.” Our ears actually train themselves to recognize patterns, as any musician will attest.
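A toy version of this is easy to sketch: render a (hypothetical, made-up) list of spike times as brief clicks, roughly what the lab speaker does with the amplified electrode signal. Bursts become audible stutters.

```python
# Minimal audification sketch: one brief click per spike time.
import wave
import numpy as np

RATE = 44100
spike_times = [0.05, 0.12, 0.13, 0.14, 0.50, 0.52, 0.90]  # seconds; hypothetical bursty pattern

audio = np.zeros(RATE)                                     # 1 s of silence
click = np.sin(2 * np.pi * 3000 * np.linspace(0, 0.002, int(RATE * 0.002)))
for ts in spike_times:
    i = int(ts * RATE)
    audio[i:i + len(click)] += 0.8 * click                 # overlay a 2 ms click

pcm = (np.clip(audio, -1, 1) * 32767).astype(np.int16)
with wave.open("spikes.wav", "wb") as f:
    f.setnchannels(1)
    f.setsampwidth(2)
    f.setframerate(RATE)
    f.writeframes(pcm.tobytes())
```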
But sonification doesn’t just matter to data scientists. We can use sonification as a source of auditory feedback. Großhauser and Hermann describe a drill that emits sounds to guide the user to a perfectly perpendicular angle every time. Our ears are so good at rhythm and timing that they can even help elite athletes synchronize their motions: recent studies on elite rowers have shown marked improvements in boat speed when the rowers were given real-time acoustic feedback on their strokes.
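Here is a toy sketch of that kind of feedback loop, with my own made-up mapping rather than Großhauser and Hermann’s actual design: deviation from perpendicular becomes a click rate, Geiger-counter style, so the user simply steers toward silence.

```python
# Hypothetical mapping from angular error to an audible click rate:
# 0 clicks/s means the drill is perfectly perpendicular.
def clicks_per_second(angle_deg: float, target_deg: float = 90.0) -> float:
    """Map deviation from the target angle to a repetition rate."""
    error = abs(angle_deg - target_deg)
    return min(error * 2.0, 20.0)       # 2 clicks/s per degree, capped at 20/s

for angle in (90.0, 91.5, 85.0, 70.0):
    print(f"{angle:5.1f} deg -> {clicks_per_second(angle):4.1f} clicks/s")
```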
In fact, we already experience sonification on a daily basis. Think about a kettle boiling or a doorbell ringing: information outside our visual field that is brought to our attention sonically. Alarm clocks are another great example: we close our eyes when we sleep, so we use sound to let us know it is 7 a.m. Other sensory modalities could transmit this wake-up signal as well: touch, if someone gently shakes us awake, or smell, in the form of pungent salts. (Taste would conceivably work too, but doesn’t seem like an appealing option, partly because our mouths are closed when we sleep.)
Ultimately, sonification is another way to deliver information that doesn’t further saturate the visual scene. Our digital displays already overload us with visual stimuli — auditory perception can be used in a complementary fashion to impart a more engaging and effective experience.
For this to happen, design is key. The aesthetics of sound are very important, and effective sonification efforts will require collaboration/input from scientists, musicians, psychologists, and more. According to Paul Vickers at the University of Northumbria, in Britain, progress is being made: “We’re seeing people working in the scientific domains engaging with philosophers of music and psychologists, and thinking about how we listen in the real world to inform how we design sonifications.” (src)
In the future, what other sensory modalities might we use? Certainly tactile representations of data are on their way. But is there a gustation of data arriving anytime soon? What about data olfaction, or data proprioception? Could we create and use new senses entirely?
Read Part 1 of this post here (Ear vs Eye)
Research for this essay came mostly from The Sonification Handbook, fully available here, and the articles and papers linked throughout. Special thanks to Dr. Dave Warner for his help and inspiration for many of these ideas.