Spatial, Music & Hearables: The Next Frontier In Health & Wellness


06/05/2022

This article is Part 3 of a 3 Part Series on Spatial Audio and Well-being.

In the first article, Spatial Audio Trend Sparks New Opportunities In Digital Health, we explored some of the history, nomenclature and evolving trends behind spatial audio.

In Part 2, 3 Ways Spatial Audio Can Transform The Future Of Digital Health, we dove deeper into the potential health and wellness implications of spatial audio on the human brain and our physiology. We looked at some of the latest research, the key drivers behind the trends, and the potential impact on future audio content and products.

In Part 3, Spatial, Music & Hearables: The Next Frontier In Health And Wellness, we explore how the integration of immersive audio, personalized sound, and the Hearables Revolution is redefining the future of music as precision medicine.

What might this “musical” New World Order mean to you?

Imagine yourself wearing your brand-new pair of fashionable Ear Buddies comfortably tucked away as you go about your day. Now imagine that within these tiny devices lives a turbocharged computer, a high-fidelity hearing enhancement system and a sophisticated set of biometric sensors specifically designed to monitor and improve your health, performance and quality of life.


Through these stylish tech companions, you can select your favorite apps and personalize feature sets to filter out background noise, enhance audio fidelity, customize your personal hearing profile, track physical and emotional biomarkers, and measure the real-time impact auditory input has on your performance, productivity, and well-being.

Not to worry, you will still be able to make hands-free phone calls, update songs on your Power Playlist, order the Amplified book on Audible, or flirt with your virtual assistant—all while cooking dinner for the kids.

Welcome to the Amplified Future

You are entering a world in which millions of people will benefit from precision and personalized medicine; new populations will have affordable access to quality digital health resources; and health science research will move from the lab to the wild—using real-time biometric tracking on the individual via the human ear.

The lines separating neuroscience, wellness and entertainment are becoming blurred; digital health products, wellness-tech platforms and content creators are racing to add new features to more accurately measure and improve your well-being. Wearables are migrating to the ears, and anyone with a smartphone or smartwatch and a pair of sensor-rich hearables will be equipped to optimize their own health and performance in personal, meaningful and measurable ways.

This revolutionary marriage of music and exponential technology promises to transport us beyond entertainment by boldly redefining music as an integrated healing modality. New supercomputers are taking position in and around the ears to disrupt industries in much the same way Steve Jobs transformed the way we live by disrupting the mobile phone, computer and music industries with the introduction of Apple’s iPod, iTunes and iPhone nearly two decades ago.

Music + Health = A Formula for Success and Happiness

Music—the same timeless exponential technology that enabled Jobs to pull Apple from near bankruptcy to the trillion-dollar brand it is today—is becoming a game-changer for products and programs looking to win in the highly competitive digital health marketplace.

In September 2021, Universal Music Group was spun off from former parent Vivendi to go public with an initial valuation of $54 billion USD (representing a 39% increase in share value on day one of trading). As the market value for music itself steadily increases in the streaming Decade of Sound, investors are lining up to buy music catalogs for as much as 30 times their average annual royalties.

Meanwhile, the demand for wellness and functional music (focus, chill, fitness, meditation, sleep, etc.) has been steadily rising. Content providers large and small are racing to get their slice of that digital music and health revenue pie, not to mention a coveted slot on those increasingly popular wellness playlists.

Beyond the obvious growth in streaming revenues from functional music playlists, leveraging music to enter a much bigger market (digital health and wellness) has captured the attention of many industry visionaries, including Apple’s CEO Tim Cook. With over $7 trillion in health spending per year, the health and wellness markets dwarf even the most profitable growth opportunities for smartphone hardware. Cook himself has suggested that health will ultimately be Apple’s greatest contribution to mankind. Just as Jobs realized twenty years ago, music could be the power play to catalyze that vision.

Whether Apple successfully connects the dots first or not, scientifically validated and emotionally engaging therapeutics using music and sound for the user’s well-being will soon be a new standard—available to anyone, anytime. Products and content that provide the best user experience (UX)—drawing upon decades of experience in wellness, entertainment and technology—will dominate in an exponential and high-demand global market for music and digital health. Their formula for success in the Amplified Future will combine the measurable efficacy and compliance that is core to health and wellness with the scalability, engagement and stickiness that keeps the heart beating for leading music and audio brands.

5 Keys to Enter the Amplified Future

While the new Spatial spotlight (covered in Parts 1 and 2 of this series) has brought the industry’s attention to one piece of the audio puzzle connecting sound, music and human health, there are a few more hurdles to overcome before we can create the aforementioned utopian future.

Following are 5 key areas that need to be addressed:

1. Deep Personalization

2. Whole System Integration

3. Nano-Sensors & Biomarker Tracking

4. Microprocessors & Dedicated OS

5. Scientifically Validated, Popular & Responsive Content

1. Deep Personalization

In a July 2021 Abbey Road Red interview, Dolby’s Chief Scientist Poppy Crum said reaching the next plateau in personalized audio technology for hearables and immersive audio would require Personalized Head Related Transfer Function (P-HRTF) and Empathetic Technologies. Let’s start with Personalized (P-) HRTF.

In Part 2 of this series, we introduced the benefit and current use of HRTF* to create a spatial audio experience through headphones and earbuds (hearables). HRTF is an algorithm applied to the audio signal that tricks the brain into perceiving sound as surrounding us. It is a tech-based attempt to simulate what we might experience in a real-world 3D environment.

*For a fun and simplified explanation of HRTF, check out this blog from immersive audio lab HEAR360.

While the current integration of HRTF into hearables and spatial audio technologies has created significant advances (and industry buzz), it is a generalized and limited approach to an extremely complex and highly personal experience: hearing. To move from that “generalized” approach to a truly personalized and far more natural sonic experience, each unique listener will need a more custom-fitted, personalized HRTF (P-HRTF).
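To make the idea concrete, here is a minimal sketch of how an HRTF is commonly applied in software: a mono source is convolved with a left and a right head-related impulse response (HRIR) to produce a binaural signal. The impulse responses below are fabricated placeholders for illustration; a real renderer would load measured HRIRs for each source direction rather than the random filters used here.

```python
# Minimal binaural-rendering sketch: convolve a mono source with left/right
# head-related impulse responses (HRIRs). The HRIRs below are fabricated
# placeholders; real systems load measured responses per source direction.
import numpy as np
from scipy.signal import fftconvolve

def render_binaural(mono, hrir_left, hrir_right):
    """Convolve a mono signal with left/right HRIRs to get a stereo signal."""
    left = fftconvolve(mono, hrir_left, mode="full")
    right = fftconvolve(mono, hrir_right, mode="full")
    stereo = np.stack([left, right], axis=1)
    # Normalize to avoid clipping if written out as fixed-point audio.
    return stereo / np.max(np.abs(stereo))

# Toy example: a 1-second 440 Hz tone and invented 256-tap impulse responses.
fs = 48_000
t = np.arange(fs) / fs
tone = 0.5 * np.sin(2 * np.pi * 440 * t)
hrir_l = np.random.default_rng(0).normal(size=256) * np.exp(-np.arange(256) / 32)
hrir_r = np.roll(hrir_l, 8)  # crude stand-in for an interaural time delay
binaural = render_binaural(tone, hrir_l, hrir_r)
```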


While it might sound like a concern for tech geeks only, this higher level of personalization is going to be critical for us to move from generalized sound enhancement to more precise and personalized approaches to leveraging music and sound for health and wellness. Our ears—a critical portal for our safety, pleasure and well-being—are highly sensitive, individual and complex.

According to founding director of the Spatial Sound Institute (SSI) Paul Oomen, sound has a deeply rooted relationship with our physiology. This relationship, unique to sound and to each individual, impacts function in multiple areas of the human brain and body. Hearing also affects balance, posture and motor functions. The inner ear is the first sensory organ to develop during the embryonic phase in the womb, influencing the early development of the brain and parasympathetic nervous system. Sound and music can be used to bring the body into balance, or homeostasis, and to provide relief from stress and diseases.

Creating a P-HRTF

There are currently two primary approaches being used to create a P-HRTF.

The first approach is in the lab. By surrounding the subject with a sophisticated array of cameras, microphones and nano-sensors, a technician can measure the way a person hears and experiences the culmination of the hundreds of reflections that affect sound in space before it enters the inner ear. Unfortunately, this is not the scalable and simple solution that audio hardware manufacturers are looking for, regardless of its potential value to the listener.

Another approach is two-pronged: An individual can use local sensors and software built into their smartphone or hearable to take a less comprehensive, yet personalized, measurement on themselves. The information captured can then be compared via AI to an extensive database of complex results gathered from a large and diverse group of individuals captured and tagged in the lab (as above). The result is a much more accurate approximation of an individual’s P-HRTF.
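As a hedged sketch of how that database matching might work: a user's coarse, self-captured features are compared against lab-measured subjects, and the closest lab-grade profiles are blended into an estimate. Every array below (the feature vectors, the database, the profiles) is invented for illustration; the actual pipelines used by vendors are not public.

```python
# Illustrative only: approximate a personalized HRTF by blending the k most
# similar lab-measured profiles. All data here is synthetic.
import numpy as np

def approximate_p_hrtf(user_features, lab_features, lab_profiles, k=3):
    """Blend the k nearest lab-measured profiles, weighted by similarity."""
    dists = np.linalg.norm(lab_features - user_features, axis=1)
    nearest = np.argsort(dists)[:k]
    weights = 1.0 / (dists[nearest] + 1e-9)
    weights /= weights.sum()
    # Weighted average of the k closest lab-grade HRTF filter sets.
    return np.tensordot(weights, lab_profiles[nearest], axes=1)

# Toy data: 500 lab subjects, 12 coarse features each (e.g., ear geometry
# estimates), each paired with a (directions x taps) HRTF filter set.
rng = np.random.default_rng(1)
lab_features = rng.normal(size=(500, 12))
lab_profiles = rng.normal(size=(500, 64, 256))
user_features = rng.normal(size=12)
p_hrtf_estimate = approximate_p_hrtf(user_features, lab_features, lab_profiles)
```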

The immersive audio team at VisiSonics, in association with researchers at the University of Maryland, has been working to build more comprehensive research databases from individuals in the lab, apply advanced cloud AI computing, and design personal hearing profile capture applications on existing mobile devices. According to CEO and cofounder Ramani Duraiswami, VisiSonics already has the largest privately held collection of lab-captured P-HRTFs. They have also developed a smartphone app, combined P-HRTF measurements with advanced personal hearing profiles, and designed a process for integrating it all into a game engine.

2. Whole System Integration (Whole Body Listening)

Just as P-HRTFs help address the highly unique nature of an individual’s listening experience in relation to space and sound entering through the ears, Empathetic Technologies (the other missing piece according to Crum) provide another way to help us experience sound and vibrations. Many practitioners in the field, like integrative health physician and cofounder of Bio-Self Technologies Stefan Chmelik, refer to the use of these empathetic applications as vibroacoustic therapies (VAT). Audio-tech startup Subpac has been working with the application of low-frequency vibroacoustic experiences across a number of industries, including wellness. Beyond giving us a more complete picture of how we experience sound, these applications can help improve healing outcomes, cultivating what Oomen referred to as homeostasis.

Empathetic technologies help us address other important elements of the natural listening experience in the real world—be that walking through Manhattan, hiking in a rain forest or attending a live concert. They help address the broader range of sonic and vibrational stimuli that our senses and brain expect from a natural environment, something far more complex than we are able to capture in the current evolution of headphones or earbuds.

Capturing this level of the natural sound experience involves using the whole body as the listening instrument. Capturing vibroacoustic signals is part of a more complete experience designed to address the distinct physiology of the individual. In addition to helping account for the hundreds of reflections that come off the body before arriving at our ears (and that constantly change with physical gestures and movement), developers of empathetic technology explore the ways each of us experiences sound and vibration across the entire body, including through the largest organ—our skin.

Oomen, who has been exploring complex sound treatments for depression and trauma, refers to these reflections as “sound cues.” Each audio signal involves hundreds of natural sound cues that contribute to the warmth and depth (and sense of presence) our systems thrive on. This vast and dynamic array of audio impressions contributes to our ability to feel—not just hear—sound and its highly complex and often more popular counterpart—music.

New Tech for a Better UX?

Although the opportunity is enormous, industry leaders still face a steep and winding path toward a more seamless UX, leveraging the ultimate capacity of music and sound as precision medicine. In the interim, a more integrated approach using empathetic technology and whole-body listening may come from the inclusion of vibroacoustic transmitters and wearables located on key points in the body and/or hearables that include bone conduction and other ways to capture sound cues outside of the ear itself.
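As a rough illustration of that interim approach, the sketch below splits a music track into a full-band signal for the ears and a low-passed "feel" channel that could drive a wearable vibroacoustic transducer. The cutoff frequency and filter choice are arbitrary assumptions for illustration, not a reference to any product's actual design.

```python
# Hypothetical sketch: derive a low-frequency "feel" channel from a track for a
# vibroacoustic driver, leaving the full-band signal for the ears.
import numpy as np
from scipy.signal import butter, sosfilt

def split_vibro_channel(audio, fs, cutoff_hz=120.0):
    """Return (ear_signal, body_signal): the original audio plus a low-passed
    copy intended for a wearable vibroacoustic transducer."""
    sos = butter(4, cutoff_hz, btype="lowpass", fs=fs, output="sos")
    body = sosfilt(sos, audio)
    return audio, body

# Toy two-second "track": a 55 Hz bass line plus an 880 Hz melody tone.
fs = 44_100
t = np.arange(fs * 2) / fs
track = 0.4 * np.sin(2 * np.pi * 55 * t) + 0.2 * np.sin(2 * np.pi * 880 * t)
ears, body = split_vibro_channel(track, fs)
```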

Some of the more purpose-driven trials in this area have been created to help those with more severe challenges in their sensory capacity. Mick Ebeling’s Not Impossible Labs built vibroacoustic wearables to help the deaf experience live music. The engineers at HEAR360 leveraged the bone-conduction audio and head-tracking features of the Bose Frames to help the blind better navigate through sound and vibration. Microsoft added the Bose Frames to the list of devices supported by its Soundscape app, which provides audio cues to help people with sight limitations move safely through busy cities. It is not unlikely that the forthcoming Apple Glasses will take advantage of similar features to enhance their next-generation music, health and wearables ecosystem.

3. Biometric Nano-Sensors & Biomarker Tracking

To use hearables (ear computers) as the operating center for personalized and precision health, leveraging the power of sound and music, three other important considerations come into play (a rough sketch of how they might fit together follows this list):

A) The ability to have an array of very low-powered, micro-scaled sensors in and around the ear that can accurately collect biometric vitals in real time.

B) A powerful AI data processor that can tag the data with relevant biomarkers and correlate them with other historical and behavioral data unique to the individual.

C) A massive database of labeled biomarkers from a highly diverse range of individuals with related conditions that can be referenced and applied in real time.
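To ground those three pieces, here is a toy sketch of how samples from in-ear sensors might be tagged with simple biomarkers against a personal baseline. The sensor fields, thresholds and labels are hypothetical, chosen only to show the shape of such a pipeline.

```python
# Illustrative only: tag raw in-ear sensor samples with coarse biomarker labels
# relative to an individual's baseline. Fields and thresholds are invented.
from dataclasses import dataclass
from statistics import mean

@dataclass
class SensorSample:
    heart_rate_bpm: float
    core_temp_c: float
    timestamp_s: float

def tag_biomarkers(sample: SensorSample, resting_hr: float) -> dict:
    """Attach simple biomarker flags to a raw sample (toy thresholds)."""
    return {
        "elevated_hr": sample.heart_rate_bpm > resting_hr * 1.25,
        "fever_risk": sample.core_temp_c >= 38.0,
    }

# Toy stream of samples plus a personal baseline heart rate.
baseline_hr = 62.0
stream = [SensorSample(64, 36.7, 0.0), SensorSample(92, 36.9, 60.0)]
tagged = [(s.timestamp_s, tag_biomarkers(s, baseline_hr)) for s in stream]
avg_hr = mean(s.heart_rate_bpm for s in stream)
```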

Probably the key area where biomarker tracking databases have reached sufficient scale to be useful is the voice. That said, most voice-analytics research and spending has been directed at natural language processing (NLP). Unfortunately, regardless of how much health information we can capture from an individual’s voice, the vast majority of the enormous voice databases captured daily by Siri, Alexa and Google Assistant currently lack biometric identifiers (biomarkers); the data is therefore of little use to an individual or practitioner seeking to track various health states. Because highly robust and easily applicable biometric data systems may represent the next gold rush, expect to see a lot more invested in biomarker tracking going forward.
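For a sense of what voice analysis might start from, the sketch below computes two generic frame-level acoustic features (RMS energy and zero-crossing rate) from a voice signal. These are standard signal-processing features, not validated clinical biomarkers; mapping them to any health state would require labeled data and rigorous validation.

```python
# Hypothetical sketch: generic acoustic features from a mono voice recording.
# These are ordinary DSP features, not validated health biomarkers.
import numpy as np

def voice_features(signal, fs, frame_ms=25):
    """Frame-level RMS energy and zero-crossing rate, averaged over the clip."""
    frame = int(fs * frame_ms / 1000)
    n = len(signal) // frame
    frames = signal[: n * frame].reshape(n, frame)
    rms = np.sqrt(np.mean(frames ** 2, axis=1))
    zcr = np.mean(np.abs(np.diff(np.sign(frames), axis=1)) > 0, axis=1)
    return {"rms_mean": float(rms.mean()), "zcr_mean": float(zcr.mean())}

# Stand-in audio: three seconds of noise in place of a real recording.
fs = 16_000
voice = np.random.default_rng(2).normal(scale=0.1, size=fs * 3)
features = voice_features(voice, fs)
```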

In the future, biometric data captured through nano-sensors in the ear and on the body will also be connected to behavioral data (including movement and online activities, such as consumer and social behavior) and environmental data (recorded via sound and image) to provide a more comprehensive and personalized collection of physical and emotional health metrics to the individual. Sound will be a core component of capturing not only voice and environment, but also audio signals from the human body that indicate levels of stress, homeostasis and disease. Berlin-based startup PeakProfiling is already successfully applying AI sound analytics to accurately measure the health and lifespan of expensive manufacturing technology and is positioning to do the same in health.

4. Custom Silicon (chips) & EarOS

The radical disruption and acceleration of more powerful personal computers has been made possible by the advancement of new nano-processors and the evolution of more advanced (and much smaller) dedicated silicon chips. It was over fifteen years ago when Nvidia disrupted the computer and entertainment industries by creating the world’s most powerful video and graphics processing chips for the highly demanding gaming sector.

Last year Apple broke free of its dependency on chip manufacturing leader Intel by building faster and more powerful M-series chips, with processors designed to work across Apple’s hardware and native software ecosystem. Apple now has experience building distinct chip sets and operating systems (OS) for each of its primary product lines: personal computers, the iPhone, the iPad and the Apple Watch.

According to Juli Clover’s February 15 article titled Apple Silicon: The Complete Guide, the M series represents Apple’s first system-on-a-chip (SoC) design, integrating several different components, including the CPU, GPU, unified memory architecture (RAM), Neural Engine, Secure Enclave, SSD controller, image signal processor (ISP), encode/decode engines, Thunderbolt controller with USB 4 support and more—all of which power the different features in the Mac.

The third generation of the M chips (2023) is expected to be smaller and more powerful. It is likely just a matter of timing and company priorities before Apple makes its move from the current AirPods line into more sensor-rich hearables. Such a move would make sense as Apple strategically advances toward a longer-range play in health and wellness. To help enable that play, we could see the emergence of a dedicated chip set and OS for its hearables, giving future designs of the AirPods and Glasses many of the stand-alone features now available in the iPhone and Watch, with an even greater array of audio and sensor capabilities.

In the meantime, one Silicon Valley startup—made up of former engineers from Dolby, Knowles, Apple, Sennheiser and Qualcomm—doesn’t want other manufacturers to have to wait. In a plan to deliver what CEO/Founder Gary Spittle refers to as “Android for the ear”, the team at Sonical is developing its own EarOS and a dedicated silicon chip specifically for hearables. If they are successful in their mission, dozens of smaller hearables manufacturers, as well as individual users, could select which features and app sets they want to include in their new hearables products, in the same way one currently chooses which apps to put on a laptop or smartphone.
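Purely as an illustration of that "Android for the ear" idea, the sketch below models a hearable runtime where feature modules can be installed and chained like apps. The API is invented for this article and does not represent Sonical's actual EarOS or any published interface.

```python
# Invented, illustrative API: a toy registry of selectable audio feature
# modules chained together on a shared hearable runtime.
from typing import Callable, Dict

class HearableRuntime:
    """Toy runtime that lets a user install and chain audio feature modules."""
    def __init__(self) -> None:
        self.modules: Dict[str, Callable[[bytes], bytes]] = {}

    def install(self, name: str, process: Callable[[bytes], bytes]) -> None:
        self.modules[name] = process

    def run_chain(self, audio_block: bytes) -> bytes:
        # Pass each audio block through every installed module in order.
        for process in self.modules.values():
            audio_block = process(audio_block)
        return audio_block

runtime = HearableRuntime()
runtime.install("noise_filter", lambda block: block)     # placeholder DSP
runtime.install("hearing_profile", lambda block: block)  # placeholder DSP
out = runtime.run_chain(b"\x00" * 256)
```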

Sonical’s move is not the first advancement into silicon directed toward the functionality of hearables. Huawei’s Kirin A1 and Apple’s H1 SiP (system in a package) chipsets are earlier examples. The demand for SiPs has already grown as manufacturers reduce device size, add more sensors and try to manage connectivity, audio and power from a single chipset. If a developer like Sonical can accelerate this process with a next-level chip, in the way that Nvidia or Apple did for the computer, they could save hearables hardware manufacturers from spending millions of dollars creating new product updates every time they want to change a feature or function to meet the latest tech offerings and ever-changing market demands. Such an opportunity could give a much-needed competitive edge to small and midsize companies in the audio and wellness space.

5. Scientifically Validated, Popular & Responsive Content

Having all of this new technology for capturing, measuring and delivering health and wellness solutions via audio is of little use without the right content. It’s a little like a doctor who is better at diagnosing a patient than at prescribing an effective remedy.

The future wellness remedies we are proposing here, like the technology behind them, can be delivered and amplified through sound, as well as through sound’s more complex counterpart, music. Once consumers have a UX that can seamlessly and accurately track how music and audio content impact their personal OS—their own brain and body—they will start demanding greater efficacy from those sonic remedies. Content providers will have to deliver that efficacy and validation as part of their wellness programming and functional music solutions. More powerful consumer wellness technologies will enter the market, requiring a new generation of content.

To succeed in the highly competitive digital wellness business, audio content will need to deliver the scientific validation, engagement and wellness outcomes that music consumers are seeking. Among a stack of future music and audio content features, deeply personal, responsive and generative components that adjust in real time to the individual’s needs—be that for exercise, recovery, or focus—will continue to grow in popularity.
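As a simple illustration of "responsive" content, the sketch below maps a real-time heart-rate reading and a stated goal (exercise, recovery, focus) to a suggested music tempo. The mapping and numbers are invented for illustration and are not a validated therapeutic protocol.

```python
# Illustrative only: map a live heart-rate reading and a listening goal to a
# suggested music tempo. Thresholds and offsets are invented, not clinical.
def target_tempo_bpm(heart_rate_bpm: float, goal: str) -> float:
    """Suggest a track tempo from current heart rate and the user's goal."""
    if goal == "exercise":
        return min(heart_rate_bpm + 20, 180)  # nudge upward, cap the tempo
    if goal == "recovery":
        return max(heart_rate_bpm - 25, 55)   # guide the listener downward
    if goal == "focus":
        return 72.0                           # steady, moderate pulse
    return heart_rate_bpm                     # default: match current state

# Example: tempo suggestions as a listener winds down after a workout.
suggestions = [target_tempo_bpm(hr, "recovery") for hr in (95, 80, 68)]
```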

But popularity itself will also be key to taking the lead, especially in the highly competitive and sometimes hard-to-differentiate functional and wellness music genres. A well-produced song or program by a favorite artist that also proves itself to be therapeutic, and that offers future-facing, responsive and personalized features, will more than likely rise above a glut of nameless soundscapes and low-quality productions. In nearly all robust studies of music and well-being, music preference is one of the leading factors in perceived efficacy. Much as healing and quality of life improve when we are surrounded by the familiar faces of family and friends, fans will continue seeking their favorite music creators in their personalized medicine chest.

This scientifically and socially validated reality may help explain why, during the great mental health pandemic of the past two years (one that is far from over), both wellness music playlists and nostalgic popular artist catalogs skyrocketed in value. It is the successful marriage of these two worlds that will catalyze the next evolution of music for health and wellness. It is also the intelligent integration of music and sound into other wellness modalities and platforms that will amplify their efficacy, retention and engagement.

Competition is also escalating in the musical race to dominate digital wellness. From Apple’s 2018 acquisition of the global music service company Platoon, to Calm’s development of its own artist-centric music label, to Headspace making John Legend Head of Music, to Peloton’s push into major artist partnerships for its programming—these moves are just a few examples of companies positioning to play in the next Super Bowl of music and wellness.

A Simpler Solution for a Complex System

In addition to addressing the challenges and opportunities listed above, the ultimate user experience will have to be seamless, highly engaging, measurable, uniquely personal, and musically charged. Imagine the power of that amplified UX in a world where content creators, distributors, manufacturers and technology developers operate with the optimal well-being of the user as a primary objective. While we might not provide a cure for hunger, by unleashing new opportunities at the intersection of music, health and human potential, we can raise the bar for human flourishing in the Amplified Future.
