Oleg Stavitsky is co-founder and CEO of Endel, a sound wellness company that utilizes generative AI and science-backed research.
Today, when the music industry cautiously expresses enthusiasm for AI, it focuses primarily on AI as a new tool that producers and artists can use in the studio. The rest of generative AI’s capabilities are seen as a threat, or as RIAA President Mitch Glazier put it in a recent op-ed, as “empty knock offs and chatbot impersonations,” from which true, human-made music must be protected.
Meanwhile, the U.S. Copyright Office has put forward new requirements for registering copyright in AI-generated content: it will consider the extent to which a human had creative control over the work’s expression and “actually formed” the traditional elements of authorship. Both perspectives draw a hard line between human creativity and AI activity, rather than looking at ways the two can collaborate and co-create.
While generative AI poses a real threat to some areas of the music business, this stark AI-generated vs. human-made approach denies the existence of a whole new category that is emerging as we speak: deeply human music created in collaboration with an AI. This third way solves several problems looming over AI and copyright at the moment, in that both the inputs (the stems or other human-crafted sonic elements) and the outputs (the human-driven result) are copyrightable under current regulations.
Yet this third way also suggests a new, perhaps radical reality we need to embrace: AI engineers can be songwriters, much as producers or audio engineers are recognized for their powerful artistry.
A recent example of this third way is James Blake’s “Wind Down” album. It was created by feeding Blake’s original stems (the individual instrument or vocal tracks that make up a recording) into an AI that used them to generate a scientifically engineered sleep soundscape. That AI ended up being billed alongside Blake as a primary artist.
I would argue this is justified: This AI didn’t impersonate James Blake. It used stems that contained his trademark sound — his musical DNA — processed them, stretched them, applied filters, and produced a sleep soundscape according to a neuroscientific framework. Years of work and tens of millions of dollars went into creating this framework and technology. Should the contributions of the engineers who built this AI be recognized in the resulting work? I say, “Yes.”
The engineers behind this and other AI models possess a rare mix of skills and gifts. They understand music theory, they often produce music themselves, and they’re also brilliant coders. They create systems that, when fed human-made stems, produce new, moving works of art. Why do we recognize a producer who contributed a brilliant bass line to a song as a songwriter, but deny these highly creative engineers a credit?
I believe this calls for a new co-writing category that recognizes work created in collaboration with AI. The role of the music artist here is still very important, but it’s different. The artist actively crafts the sounds that feed the AI, then molds, shapes, adds, subtracts, and signs off on the final result, much as artists have done for decades with remixes. The artist listens, reflects, curates, and decides in relationship with the AI.
AI is not a mere tool here; it’s a full-blown active collaborator. And the way it works with human-made source material is defined by the groundbreaking engineers behind this technology. They deserve recognition — and their human creativity is copyrightable.
I firmly believe that in the next few years, we will see software engineers taking the stage at the Grammys to receive their awards, standing shoulder to shoulder with the artists alongside whom they created their award-winning music.
This is how you truly protect human creativity: by including and recognizing the creative humans powering it, not by denying and excluding them, whether they are training a model or drafting lines of code.