AI music firm Endel has found a new frontier for its sleep tracks: Amazon Music. The Berlin-based startup has worked with the streaming service to create an eight-hour playlist called ‘Sleep Science’, which launches today.
The playlist starts with Endel’s reworking of a track by Kx5, the joint project of electronic artists Kaskade and deadmau5, followed by a ‘soundscape’ created using the startup’s Endel Pacific AI engine.
This is the kind of music that is already produced for Endel’s own apps, including an Alexa skill for Amazon devices. While the company has released albums of its AI-generated music before, via a distribution deal with WMG’s ADA, this is its first direct partnership with a streaming service.
However, Endel and Amazon are not strangers on a corporate level. The latter’s Alexa Fund was one of Endel’s first investors, in 2018, when the startup joined its Alexa Accelerator.
Today’s news follows Endel’s collaborations with artists including Grimes, Plastikman and James Blake, as well as funding rounds of $5m in 2020 and $15m in 2022.
The announcement also comes at an interesting time. Universal Music Group boss Sir Lucian Grainge recently criticised “lower-quality functional content… generic music that lacks a meaningful artistic context, is less expensive for the platform to license or, in some cases, has been commissioned directly by the platform.”
What will Grainge make of news that a startup has created an eight-hour playlist of functional music for Amazon Music? To be clear, though, Endel would firmly disagree with any suggestion that ‘Sleep Science’ is low-quality or generic.
We know that because Music Ally interviewed Endel CEO Oleg Stavitsky ahead of the playlist’s launch, to talk neuroscience, creative AIs and, of course, to get his response to the UMG chief’s recent views. An edited version of that conversation follows.
Music Ally: What’s the backstory to this deal? Endel and Amazon go back a way already…
Oleg Stavitsky: Amazon invested in Endel back in 2018, and followed on in subsequent rounds. We’ve been talking for a very long time: I’ve been going to Seattle at least a few times a year for conversations with different parts of Amazon.
This one just felt so natural. I met Steve Boom [Amazon Music boss] who looped me in with Ryan Redington [VP] and Stephen Brower [global co-lead of artist relations] and they immediately grasped what we’re trying to do here.
They loved that it is not a threat to music artists. This partnership actually started with a collaboration last summer, with an Endel and Laraaji [a US-based artist] album.
One track on that album came out as an Amazon Original, and was on the cover of their meditation playlist for a while. Everyone loved it, and Stephen is an ambient music nerd just as I am, so at some point I said ‘Look, I think we can do so much more here…’
MA: In the announcement of ‘Sleep Science’ you’re very clear about the importance of that second word: science. A lot of people still think of sleep music as basically just gentle piano with wave sounds. Can we talk about the science behind what you’re doing?
OS: Yeah, when you think about sleep music you think about relaxing piano songs because that’s the way these playlists are traditionally designed. An editor at a streaming service throws together some tracks – ‘this sounds relaxing, like something people could sleep to!’ – and that’s the science behind 99% of these playlists.
That isn’t how sleep science works. Very early on in Endel’s life, we started working with a bunch of sleep scientists, including Dr Roy Raymann from SleepScore Labs.
He used to work at Apple: he designed their bedtime functionality, actually. He walked us through the stages of sleep, and the different sounds that need to be there in those different phases.
You start with the wind-down phase, a couple of hours before you get into bed, when your parasympathetic nervous system normally gets activated, lowering your blood pressure and basically preparing you for sleep.
MA: Would this be the time when people are messing that up by staring at their smartphones instead?
OS: Yes! In reality we check our inboxes, stare at blue-light screens, and all of this prevents your parasympathetic system from being activated. If you’re having trouble falling asleep in bed, the problem started a couple of hours earlier.
In the wind-down phase we play so-called parasympathetic tones: relaxing sounds, very mild, not a lot of musical variability. You just need to gradually start listening to things that naturally relax your muscles.
Then there’s the sleep onset phase, the quality of which is determined by how quickly you fall asleep. If you listen to a recognisable melodic pattern that kicks in every time you’re in bed, it creates almost a Pavlovian reflex: a few days in, your brain will hear that same sleep jingle and go ‘it’s time to sleep’.
So here we start playing various musical phrases mixed with natural sounds, coloured noises, stuff that helps you fall asleep naturally. In the sleep-onset phase of the playlist you’ll hear brown noise, white noise, a little bit of melody still going, but it’s very subtle. Within half an hour all of the musical stuff disappears, and hopefully you are asleep by that time.
This is where the sleep phase kicks in, where we have a combination of various coloured noises providing so-called ‘sound masking’, which shields you from sounds that might wake you up. And then you have the wake-up phase.
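Endel hasn’t published how its engine synthesises these sounds, but the ‘coloured’ noises Stavitsky mentions are a standard signal-processing idea: the colour describes how quickly a noise’s energy falls off as frequency rises. Purely as an illustrative sketch – generic DSP, not Endel’s code – here is how the three colours he refers to could be generated in Python with NumPy:

```python
# Illustrative only: how 'coloured' noises differ in spectral shape.
# Generic signal processing, not Endel's engine.
import numpy as np

SAMPLE_RATE = 44_100  # samples per second


def white_noise(seconds: float) -> np.ndarray:
    """Flat spectrum: equal energy at every frequency."""
    return np.random.normal(0.0, 1.0, int(seconds * SAMPLE_RATE))


def brown_noise(seconds: float) -> np.ndarray:
    """1/f^2 spectrum: integrate white noise, then rescale to [-1, 1]."""
    walk = np.cumsum(white_noise(seconds))
    return walk / np.max(np.abs(walk))


def pink_noise(seconds: float) -> np.ndarray:
    """1/f spectrum: shape white noise in the frequency domain."""
    n = int(seconds * SAMPLE_RATE)
    spectrum = np.fft.rfft(np.random.normal(0.0, 1.0, n))
    freqs = np.fft.rfftfreq(n, d=1.0 / SAMPLE_RATE)
    freqs[0] = freqs[1]                 # avoid division by zero at DC
    shaped = spectrum / np.sqrt(freqs)  # power falls off as 1/f
    signal = np.fft.irfft(shaped, n)
    return signal / np.max(np.abs(signal))
```

The steeper the roll-off, the deeper the rumble, which is why brown noise – the colour with the least high-frequency content – is a popular choice for masking sudden sounds during the sleep phase.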
MA: So these phases are what the ‘Sleep Science’ playlist moves through in eight hours?
OS: Yes. This is what happens when you listen to a sleep soundscape in our app, and what we did with the playlist is replicate that experience, including some clever sound design to make it flow with no interruptions between the tracks.
I think there are 130 tracks in it, and if you look at the track names you’ll see we’ve split it into albums, and those albums are named after the different phases.
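Stavitsky doesn’t detail the ‘clever sound design’ that removes the interruptions between those 130 tracks, but a common generic technique for seamless joins is an equal-power crossfade. A minimal sketch, with hypothetical track arrays, in the same NumPy style as above:

```python
# Illustrative only: an equal-power crossfade between two audio arrays.
# A generic technique, not Endel's actual sound design.
import numpy as np

SAMPLE_RATE = 44_100


def equal_power_crossfade(a: np.ndarray, b: np.ndarray,
                          fade_seconds: float = 5.0) -> np.ndarray:
    """Overlap the end of track `a` with the start of track `b`.

    Cosine/sine fade curves satisfy cos^2 + sin^2 = 1, so the combined
    power stays constant and the join is effectively inaudible.
    """
    n = int(fade_seconds * SAMPLE_RATE)
    t = np.linspace(0.0, np.pi / 2.0, n)
    overlap = a[-n:] * np.cos(t) + b[:n] * np.sin(t)
    return np.concatenate([a[:-n], overlap, b[n:]])

# Hypothetical usage: chain every track in the playlist into one stream.
#   from functools import reduce
#   full_mix = reduce(equal_power_crossfade, tracks)
```

Chained across all of the playlist’s tracks, a transition like this would render the eight hours as one continuous stream.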
MA: What is the appeal of working with a streaming service like Amazon Music? Is it that they can put this playlist in front of the people who they know already listen to sleep playlists?
OS: Exactly. Amazon is such a powerful partner for this because they have Alexa. A lot of Amazon Music consumption happens on Alexa, and we also have an Endel Alexa skill that generates roughly 300,000 hours of listening time every month.
That’s a lot! Overall across all our apps there are more than three million hours of listening time a month, so 10% of that is Alexa. And on Alexa, Endel is 100% a sleep machine. People have Alexa devices in their bedrooms, and it’s so easy: ‘Hey Alexa, I want to go to sleep’.
MA: So this Amazon Music partnership is sitting alongside your own apps and Endel as a consumer product?
OS: There’s this underlying, proprietary, patented technology that powers everything we do, and today there are two business spheres. One is the D2C consumer ecosystem of apps, which we are super, super focused on. Strategically that’s very important to us.
Then there are the DSPs, and our presence on them. The reason we’re doing this is that we see this functional sound market as huge, and growing. Not a lot of people talk about it.
Well, UMG’s CEO Lucian just sent that email to all his employees in which he specifically pointed out things like white noise, which to me is all part of this functional sound market. It has been doubling in size every year.
My thinking is that if hundreds of millions of people are searching for this type of music on Spotify, Apple Music, Amazon Music and so on, we should be there. They should be interacting with our content.
MA: We should talk about Lucian Grainge’s comments. What’s your take on his views on functional music and how it impacts the music industry and artists?
OS: His comments frankly couldn’t have come at a better point in time! They paved the way for the release of this playlist. But this Kx5 track [at the start of the ‘Sleep Science’ playlist] should hopefully demonstrate how we can work with the music labels.
Yes, we can generate eight hours of sleep sound without any artist input, but it doesn’t have to be like that. We have always commissioned artists like Grimes, Miguel and James Blake, who created stems specifically for us following our scientific guidelines.
The difference with the Kx5 release is that our technology has evolved to a point where we don’t need the artist to specifically prepare stems for us. We can turn any content – any album, any song – into a functional soundscape.
MA: But this is still done with their permission as a partnership? It’s just that you can use their existing music rather than require them to make bespoke stems for your system to use?
OS: Right. The future I’m imagining – and hopefully one that Lucian Grainge and the other music labels are going to see – is that this is a big opportunity for them. Generative AI has evolved to a point where it can process pre-existing stems and export them as functional soundscapes.
So you might imagine that a new Taylor Swift album is coming out, and it exists in the form of an album, and you listen to it in the form of an album. But then there is a companion functional soundscape of it, almost released as a b-side.
You can sleep to this album, work to it, focus… It extends the universe of the album, and by extension the music universe of that artist.
We are not trying to train our model on all of the content that is available out there, and steal these artists’ styles. We’re saying hey, you can use our technology to create functional soundscape versions of your work, and we will gladly collaborate with you.
It’s the harder way. You have to deal with the legal side of it, which is insanely complex – and labels are not making it easy for us! But we take that hard road, because we think it’s the right road.
MA: What’s your take on how AI music is currently being perceived within the music industry?
OS: I may be wrong here, but personally I don’t think anyone wants to listen to AI music, at least at this point. People still want to listen to their favourite artists, their favourite genres.
They don’t want to listen to AI music. It’s bland! It’s not good enough at this point. It’s good enough to soundtrack your YouTube video or your podcast, but certainly not good enough to replace music artists.
It still has a long way to go. The only niche where it can truly be applied is functional sound, and AI alone is not enough here: you need to go very deep into the neuroscience of sound.
We’ve been doing this for five years now, but we are still bringing artists into the mix. We want the end result to have a human touch to it.
MA: Does being able to use pre-existing stems increase the number of artists you can work with in this way?
OS: It is definitely cool: it broadens the choice of artists we can work with, but they do still need to give permission for the stems to be fed into our machine, and they still have to approve the end result. We are not feeding people’s music into the machine without telling them we’re doing this!
It also allows us to start thinking about creator tools, because we could open our technology to anyone at this point. If we can produce a soundscape by just taking a few stems and throwing them into the machine, why not open it out to all of the artists out there? All of the bedroom producers?
They could do whatever they feel like with the end results: download it, mould it, tweak it. This is such an exciting time for me, frankly.
MA: If more artists are involved at the start and end of the process, could this change perceptions of AI-generated music as lower quality?
OS: Take this with a grain of salt: it’s my personal opinion, and I don’t consider myself to be a technical expert on AI in the way that our CTO is! But the thing with AI is that the output is only as good as the input.
You need high-quality data sets to train your models on, and this is where we come to a fork. Most of the AI music models were trained on just stock music, or stems that were created by a bunch of session musicians basically.
We have seen what AI did for graphic design. There was massive outrage when people recognised their style in the output of some of the big systems, because basically those systems took all of the visual design and art out there and fed it into their machines.
Fortunately, you cannot do that with music, because it belongs to someone. The music industry has been clear on that: ‘If you train your model on our content, we’re going to come after you’.
So, in order for AI music to become as good as the actual music that we all love, it needs to be trained on actual [commercial] music, and for that, you need to take that hard road and collaborate with music labels.
You need to talk to musicians and get their stems to train your models. Otherwise your output will still sound like stock music.
MA: So you’re going to need licensing deals. What progress have you seen being made on what those deals might look like?
OS: There’s no framework for licensing music for AI at this point. We’re inventing it as we go, so to speak. I wouldn’t even say a framework is being put in place: there are all sorts of agreements, from full buyout to licensing models, and everything in between.
We are in active conversations with all of the big three music labels. And we’ve worked with their artists: Miguel is on RCA, that’s Sony Music. James Blake is on Republic, that’s Universal.
There are a lot of really smart people in these companies, and they are excited. That comes from the artists too. The reason everyone was excited about the James Blake project was because James himself was excited.
Everyone’s figuring out how to deal with this. My hope is that the music labels recognise the opportunity here, and that their excitement about this opportunity outweighs the fear.