Press Clipping
02/23/2023
Article
INTELLIGENCE TEST: TECHNO-PLAGIARISM AND THE BATTLE FOR MUSIC IN A NEW AGE OF AI

It is extraordinarily rare that a fiercely independent musician and a senior executive at the biggest music rights company in the world would find themselves fighting the same corner, albeit for tellingly different reasons. But both have now entered the battle for music in a new age of AI.

In January, Nick Cave devoted an edition of his Red Hand Files newsletter to responding to a fan who sent him an example of lyrics written “in the style of Nick Cave” that were created by OpenAI’s chatbot, ChatGPT. Cave was, it’s fair to say, not best pleased, referring to the results as “replication as travesty” and “a kind of burlesque”, calling the human process of songwriting a “blood and guts business” that a piece of technology will never understand and, as such, never come close to replicating.

The following month, Michael Nash, EVP and chief digital officer at Universal Music Group, wrote an op-ed for Music Business Worldwide where he outlined his worries about “artificial intelligence and the perils of plunder”. His concern was about AI-produced tracks that constitute “outright fakes” or a “flood of imitations”, that are “diluting the market”, meaning creators (and, therefore, labels/publishers) are finding their rights violated and are having to fight even harder for attention in the streaming world.

Nash critiqued the current “investment frenzy” around generative AI, calling it a “calamity for artists” and paralleling its potentially destructive impact with that of Napster at the turn of the millennium. He said AI developers must “respect artistry” and work with artists and copyright holders to ensure that generative AI music is not a new form of techno-plagiarism. He also called on policymakers to ensure legislation here can “buttress copyright and promote innovation” in a way that supports tech startups but “without doing harm to artists and their rights”.

This last point was arguably a reference to the UK government, which had, mere days earlier, scrapped chunks of a proposed copyright law amendment that would have allowed AI developers to exploit (i.e. “train” AI on) existing copyrighted works without the permission of the creator or copyright owner. (The amendments did, however, offer exemptions for text and data mining purposes using AI.)

UK Music welcomed the government’s backpedaling, saying the original plans would have “caused huge potential damage to a world-leading UK sector”.

Copy control: the new shape and the new sound of plagiarism?
At the heart of the uproar about generative AI and music is the argument that if you feed existing songs/recordings into an AI tool to “train” it, the end result will effectively be straight plagiarism, a crude rehash or a glorified interpolation. Like a twist on the GIGO maxim in computer programming, the thesis is: copyrighted material in, copyrighted material out.

As music lawyer Stacey Haber (co-founder of Web3 Music Rights Group and consultant at DMG Clearances) says, “While there are a finite amount of notes, and new music is being written every second, the instant you take someone else’s recording and/or composition as your starting point, the original owners become the final product joint owners. The only way for an AI programmer to own the final recording is to play random notes in the program.”

“While there are a finite amount of notes, and new music is being written every second, the instant you take someone else’s recording and/or composition as your starting point, the original owners become the final product joint owners.”
– Stacey Haber, music lawyer
She compares the situation today to the early days of sampling, as there is a lot of feeling around in the dark and very little in the way of substantive legal precedent being set. “I keep shouting it from the rooftops and saying it out loud on panels,” she says, “but everyone else seems to be saying, ‘Wait and see,’ ‘Ooh it’s a problem,’ ‘Wild Wild West’ et cetera. Drives me mad.”

She is firm that “copyright laws exist and no new tech will change that until legislators change those laws”, so the music and tech sectors need to find ways to work together that are net positive for everyone. Policing it all will be, she accepts, an administrative horror show if programmers do not keep detailed records of what music was drawn on in their experiments.

Barry Scannell, AI law specialist at William Fry LLP, says there are parallels in the visual arts world where AI is used to generate images based on existing (and copyrighted) ones.

He asks, “The question is: when AI companies are using these data sets to train AI models, are they making reproductions of the images? In many cases the answer is yes, and in many cases, those images are subject to copyright, which means that, generally speaking, authorisation is required in order to make reproductions.”

Different territories answer these questions differently in their legislation.

“In the US, the open question is whether Fair Use permits this activity,” he notes. “In the likes of the EU, Japan and other jurisdictions, there are exceptions to the reproduction right in the context of text and data mining, which permits making reproductions in text and data mining operations in certain cases.”

The pace of change enters warp speed as both the law and business risk falling dangerously behind
A number of things have happened in swift succession this year that make these debates more pertinent and even more pressing, setting out in sharp relief what is at stake while underscoring just how much a legal/licensing/economic consensus is needed.

At the start of the year, the Drayk.it site launched, letting users input a topic and generating a Drake-like song around a minute in length in response. (The site is currently inactive, with an “RIP” sign suggesting that some forthright calls from lawyers may have been received.)

Right at the end of January, Google revealed it had developed a text-to-music AI tool (called MusicLM) that is capable of “generating high-fidelity music from text description”, but said it was not opening the technology to the public. For now.

In the paper Google published about the technology, it stressed that the AI was trained on around 280,000 hours of music from the Free Music Archive dataset, but alluded to copyright concerns that could prevent it from being unleashed on any music in the world.

“We acknowledge the risk of potential misappropriation of creative content associated to the use-case,” it said. It added, however, that future iterations of the technology could focus on lyric generation “along with improvement of text conditioning and vocal quality”.

Then in February, David Guetta posted a snippet of a track he had made by getting an undisclosed AI site to generate lyrics in the style of Eminem and then feeding those lyrics into another AI site (also undisclosed) to replicate the rapper’s voice. Guetta insisted the track would never be officially released, but praised the technology behind it. He said he believed “the future of music is in AI” but only “as a tool”.

He added, “I think really AI might define new musical styles. I believe that every new music style comes from a new technology.”

Days after the Guetta video went viral, Amazon announced it had created an eight-hour Sleep Science playlist alongside AI music startup Endel. It opens with a reworking of a track by Kx5, the joint project of Kaskade and deadmau5.

Endel CEO Oleg Stavitsky spoke to Music Ally about the project, saying this is far beyond the “lower-quality functional content” that Universal Music Group’s Lucian Grainge warned was swamping DSPs and stealing market share from “real” musicians.

Stavitsky stressed this is “not a threat to music artists” and that Endel already works with a number of artists on commissions, naming Grimes, Miguel and James Blake as standout examples.

“[O]ur technology has evolved to a point where we don’t need the artist to specifically prepare stems for us,” he said. “We can turn any content – any album, any song – into a functional soundscape.”

Then he said something that would undoubtedly make copyright lawyers sit bolt upright: “Generative AI has evolved to a point where it can process pre-existing stems and export them as functional soundscapes.”

He used the interview to send a message to labels that he wanted to work with them rather than against them (although he added that copyright holders are still making things, in his view, unnecessarily complex for companies like his).

“You have to deal with the legal side of it, which is insanely complex – and labels are not making it easy for us!” he said, in part passing the buck to copyright owners. “But we take that hard road, because we think it’s the right road […] There’s no framework for licensing music for AI at this point. We’re inventing it as we go, so to speak. I wouldn’t even say a framework is being put in place: there are all sorts of agreements, from full buyout to licensing models, and everything in between.”

Authorship in AI contested
A new challenge has just emerged around AI and authorship, relating specifically to images but with potentially huge implications for music. A graphic novel by Kris Kashtanova was created in part using the AI tool Midjourney. She wrote to the US Copyright Office (USCO) in September 2022 asking for a copyright registration covering the images. The USCO has just responded, ruling that the images, on their own, cannot be granted copyright protection. It agreed that Kashtanova is unquestionably the author of “the Work’s text as well as the selection, coordination, and arrangement of the Work’s written and visual elements”. It added, however, that the Midjourney images “are not the product of human authorship”. That means, in isolation, they are not protected.

Scannell wrote about the decision in a LinkedIn post, calling it “a significant blow to the IP rights of creators who use generative AI” and saying he does not agree with the USCO’s rationale, which effectively strips out the creative role of the artist in the process.

“I think Midjourney creations are a lot more predictable than a Jackson Pollock splatter of paint, and I believe that a great amount of human skill and creativity goes into AI generated art,” Scannell proposes. “This decision potentially has major implications for US creative industries, from music to art to gaming, as it calls into question whether works which utilise (even in part) #AI technology can be protected by #copyright.”

He says the decision will hopefully be overturned and that this should not be the end of the matter.

What the case does reveal is the emergence of a complex legal and creative debate about authorship with regard to AI-generated artistic works. The implications for music are obvious as, potentially, AI-powered works are released to the public without a recognised author. Without an author, the logic follows, there is no copyright protection and no royalties chain.

While record labels and publishers are rightly concerned about AI music being produced that is “trained” on existing copyrights, a wider concern will now be about AI music created without any formal creative “source” and the complete eradication of the author.

Is a dystopia a necessary stop on the way to a utopia?
Perhaps, as Haber notes, we are still deep in the stage of uncertainty and upheaval, and we need to pass through the chaos first to arrive at greater clarity, a situation that will only become more tumultuous if the tech companies operating here do not get to grips with copyright.

“In reality there will be many, many techies trying to claim the rights because they don’t understand music and copyright,” she says. “And there will be the chancers that think they won’t get caught. They will. They always do. Just like in sampling. It will be like every new tech where there’ll be much fighting, many lawsuits and the levelling out of rights and income. I only predict nightmare scenarios because that’s what’s happened previously. It’s not necessary and can be avoided. People just have to do the right thing from the start.”

Scannell is clear that the music industry and the tech industry must agree on a set of standards around generative AI music. “The industry needs to come together in the same way it eventually did in response to internet piracy, to agree on common mechanisms for protecting creators’ rights,” he argues.

This does not, he believes, require the drafting of new copyright laws (“or to give AI legal personhood”) because “existing copyright laws offer the protections needed by creators”.

The answer lies, he believes, in new licensing models.

“Some of the solutions, such as new types of licences, could mean new revenue streams for creators […] The industry needs to come together to agree on common standards and approaches under existing legal frameworks.”
– Barry Scannell, William Fry LLP
“Many of the solutions will be contractual, in terms and conditions of websites, in licensing agreements et cetera whereby rights will be reserved in relation to AI training sets,” he says. “Some of the solutions, such as new types of licences, could mean new revenue streams for creators […] The industry needs to come together to agree on common standards and approaches under existing legal frameworks. It’s also important not to leave behind independent creators, publishers and labels from this process, who may not necessarily have the legal or financial capital to undertake the necessary steps. The industry needs to collaborate and this may be a slow process.”

Warner Music Group has made its stance on all of this clear: it will work with companies in this space, but it and its artists will be expecting financial remuneration if their music is used to train AIs.

Speaking at the recent NY:LON conference in London, Oana Ruxandra (chief digital officer and EVP of business development at Warner Music Group) said, “The industry needs to be paid and our artists need to be paid based on the AIs that are learning off their music.”

It was a point reiterated by Robert Kyncl in February at his first earnings call as Warner’s new CEO.

“AI is probably one of the most transformative things that humanity has ever seen, it has so many different implications,” he said, but added he was deeply concerned about “the craft of artists and songwriters being diluted or replaced by AI-generated content”. As such, he was insistent that music companies start working closely with AI companies immediately to ensure their copyrights are protected and respected.

No one in the music business – not outwardly, anyway – is automatically clutching their pearls when the very notion of AI-generated music is raised. They insist that copyright be honoured, but they are also realists about it. Nowhere is this better illustrated than in what David Israelite, head of the National Music Publishers’ Association, said when recently addressing the Association of Independent Music Publishers in the US.

“[W]hat I’m hoping is that, as an industry, we approach these AI issues with the mindset of this is not necessarily bad,” he said. “It doesn’t matter anyway, because we’re not going to control it. Instead, what are the opportunities? And how do we engage with it in a productive way, so we don’t look back and say, ‘It took us 20 years to figure out how to deal with AI like we did with digital music’?”

Rights owners are fully aware they cannot dictate terms entirely to next-generation digital companies – something they have had to get slowly accustomed to since the very end of the 1990s – but they are equally fully aware that safeguarding rights has to define how they act and move here.

No one has collapsed into outright dystopian thinking here (so far, at least) or started to worry that the machines are going to rise up and take over, just like HAL 9000 at the end of 2001: A Space Odyssey.

HAL 9000 did, however, slowly malfunction while singing ‘Daisy Bell’, electronically chewing it into disturbing new shapes. A bit like generative music AI tools today. It is worth pointing out that ‘Daisy Bell’ was written by Harry Dacre (aka Frank Dean), who died in 1922. By 2001, the song was in the public domain.

At least that was one less thing to worry about. See? Dystopias aren’t all bad.