Artificial intelligence can do a lot of things, but can it resolve mental health issues? With AI-assembled rhythms increasingly being used for self-care, Nyshka Chandran takes a deep listen.
Sound therapy is practised to free the mind from the daily deluge of thoughts and anxieties. Music, chanting and the ancient Egyptian technique of toning, i.e., manipulating vowels with breath and voice, are all considered restorative because they induce emotional vulnerability and reflection.
Several start-ups are now using AI-generated soundscapes of ambient, downtempo and chill-out beats in hopes of having the same impact as sound therapy on issues like depression, anxiety and dementia. In this context, AI-generated means that AI systems are manipulating, processing and mixing together recordings made by human artists, not creating original sounds. It's a fast-growing sector that investors are paying attention to, especially as more governments embrace the idea of smart cities – an urban planning model built around automated healthcare and public services.
Berlin-based Endel has an AI system that produces bespoke soundscapes to help people focus, relax and sleep. It has a massive catalogue of melodies, instruments, noises and other sonic textures made by Endel's composers and collaborators, who include Laraaji, Plastikman and James Blake. The app takes a variety of input data from users, including location, time of day, weather and heart rate, to assess each individual's circadian rhythm. Based on that input, an algorithm selects the appropriate stems (i.e. audio sources) from the company's vault, then splices and synthesizes those stems with effects and manipulations, explained CEO Oleg Stavitsky. The results are adaptive soundscapes that respond to real-time changes. Currently, users open the app, browse a library of soundscapes designed for specific scenarios (sleep, hibernation, deep focus) and activities (yoga, commuting, chores), then make a selection. Down the line, Endel hopes to select a soundscape automatically based on a user's calendar and other inputs. (Disclosure: part of this story was written while using Endel.)
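The pipeline described above – context signals in, stems out, mixing parameters attached – can be sketched in a few lines of toy code. Everything here is hypothetical: the stem names, the mode heuristic and the mixing parameters are illustrative stand-ins, not Endel's actual catalogue or algorithm.

```python
import random

# Hypothetical stem catalogue; the real service draws on recordings
# made by human composers and collaborators.
STEM_LIBRARY = {
    "focus": ["pad_a", "piano_loop", "soft_pulse"],
    "relax": ["warm_drone", "field_rain", "slow_keys"],
    "sleep": ["deep_drone", "low_hum", "breath_pad"],
}

def pick_mode(hour: int, heart_rate: int) -> str:
    """Map simple input signals to a soundscape mode (toy heuristic)."""
    if hour >= 22 or hour < 6:
        return "sleep"
    if heart_rate > 90:
        return "relax"
    return "focus"

def build_soundscape(hour: int, heart_rate: int, n_stems: int = 2) -> dict:
    """Select stems for the current context and attach mixing parameters."""
    mode = pick_mode(hour, heart_rate)
    stems = random.sample(STEM_LIBRARY[mode], k=n_stems)
    # In a real system these parameters would drive effects and
    # manipulations applied to the stems in real time.
    return {"mode": mode, "stems": stems, "reverb": 0.4, "tempo_bpm": 60}

print(build_soundscape(hour=23, heart_rate=70))
```

The point of the sketch is the shape of the system, not its sophistication: sensor-like inputs narrow the catalogue, and randomness plus live parameters keep the output adaptive rather than fixed.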
Wavepaths, a UK company that makes generative music for psychedelic therapy, operates on a similar model. Generative music, an early offshoot of ambient music conceived by Brian Eno (who is part of Wavepaths), refers to any system that can produce endless, unrepeatable music on its own. Wavepaths gets artists such as Jon Hopkins, Greg Haines, Robert Rich and Eno himself to record themselves in action, building an ever-expanding library of content. Eno has said that he'd like to bring generative music specialists Philip Glass, Steve Reich, Terry Riley and Aphex Twin, among others, on board. Wavepaths' AI then dips into this pool of experimental nuggets, mixing pieces together to form generative music that the company describes as procedurally generated rather than AI-generated. The tunes are designed to accompany MDMA, psilocybin, DMT, ketamine, acid and other hallucinogenic drugs that are slowly gaining traction as treatments for disorders like PTSD, depression, anxiety and alcoholism. Before a session, caregivers enter the necessary information into the Wavepaths app: the substance being administered, the method of consumption, the dosage, the patient's personality traits, the intended theme for the session and other factors. Wavepaths then chooses the music, which can be adjusted in real time to suit the patient's emotional state.
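Eno's definition – a system that produces endless, unrepeatable music on its own – is simple enough to demonstrate in miniature. The sketch below is a generic generative toy, not Wavepaths' method: a random walk over a pentatonic scale yields a stream of notes that never has to repeat exactly, yet always stays musically coherent.

```python
import random

# A pentatonic scale, a common choice for ambient textures because
# any combination of its notes sounds consonant.
SCALE = ["C", "D", "E", "G", "A"]

def generate(seed: int, length: int) -> list[str]:
    """Random-walk over the scale: each note moves at most one step
    from the previous one, so the stream wanders without big jumps."""
    rng = random.Random(seed)
    idx = rng.randrange(len(SCALE))
    notes = []
    for _ in range(length):
        notes.append(SCALE[idx])
        idx = max(0, min(len(SCALE) - 1, idx + rng.choice([-1, 0, 1])))
    return notes

print(generate(seed=1, length=8))
```

A few rules plus a source of randomness is the whole trick: the output can run indefinitely, and no two seeds trace the same path.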
Judging by traction alone, these platforms are doing well. Endel, which raised $15 million in a second round of venture capital financing earlier this year, has over a million active users. Wavepaths is currently used by hundreds of legal clinics in over 30 countries and raised $4.5 million in its initial seed investment round last year. Brain.fm is another player in the field. Its algorithmic system selects from a catalogue of human-composed melodies, harmonies and chord progressions, arranges these elements over various timescales, then adds features designed to shape brain activity, such as spatialization and salience reduction, with the aim of relieving ADHD and insomnia and improving overall mental performance. It has clocked over two million downloads. AI-made music is clearly reaching people, but there's still not enough data to assess its effectiveness as a mental health treatment. Silicon Valley financiers, however, seem convinced – global mental health tech investment soared 139 percent year-on-year in 2021, according to CB Insights.
The problem with these AI-mixed creations is that most of the actual music tends to sound the same. Shimmering synth lines, cascading waves of keys, layered melodies with a cocooning effect and lo-fi beats are common characteristics, usually repeated with subtle variations over time. Call it white noise or elevator music; the vibe is overwhelmingly generic, with a distinct lack of imagination, narrative or depth. This is where the distinction between music and soundscapes comes into play. The terms are often used interchangeably, but the difference matters, particularly where healing is concerned. "The soundscape corresponds to an emotional colouring of the perceptual environment," explained Dr. Michael Frishkopf, Director of Graduate Studies at the University of Alberta's Music Department. A soundscape can be defined as "a steady state often with some randomness, or phase shifting", while music "features ebbs and flows in a trajectory that stirs emotion in a very different manner and tends to demand attention through melodic sequences, harmonic progressions, modulations in key, shifts in texture and timbre, fluctuations in tempo or rhythm," he continued.
As AI technology continues to evolve, experts are asking whether it can ever replace the human element in sound therapy. For now, the answer is a resounding no. As Brain.fm noted in their white paper, "we have found no substitute for the talent of brilliant musicians in laying the foundation for a new piece of music." Lyz Cooper, Director at The British Academy of Sound Therapy, echoed those sentiments. She's currently working on AI-driven music as part of a new venture called LifeSonics. As opposed to Endel's and Wavepaths' models, this system adapts the AI sounds to make them more therapeutic – a reverse take on generative music production. Cooper is on the fence about purely AI-generated music being as curative as regular tunes, noting that current software "cannot yet replicate the same quality of music that arises from years of experience, emotion and intuition that a human uses to create therapeutic music."
Ultimately, the emergence of AI in sound wellness boils down to the novelty factor. AI is being explored as a solution to a multitude of societal problems, from food insecurity to car crashes, so it seems like only a matter of time before the disruptive technology is applied to all areas of life. Going forward, it remains to be seen whether the hype leads to consistent results. "As long as the companies creating this technology are honest and open about the capabilities that the technology actually has, rather than whitewashing people and capitalising on a gimmick, it will be interesting to see how the field develops," Cooper summed up.