AI Video and Music Are Getting Complicated for Platforms Like YouTube

You can do more and more with AI every day... including getting embroiled in a copyright law scandal. Let's take a deep dive into the new legal precedents being set around AI and copyright and what that means for creatives and brands.

It’s easier than ever to use AI to create anything – from images, to videos, to famous voices. And that’s put platforms like YouTube in a sticky situation. With little legal precedent to go on, platforms are creating their own rules for AI use, and that has implications for creators and brands.

Google, which owns YouTube, is currently grappling with how to handle videos that use AI to mimic the voices of famous singers. As reported by The Verge, the issue arose when a creator called Ghostwriter977 created a track, “Heart on My Sleeve,” using the AI-generated voices of Drake and The Weeknd.

Universal Music Group represents both artists and was not happy their voices had been used in this way. They pressured Apple and Spotify to remove the track, but at YouTube it was more complicated. YouTube typically only removes content when there’s a clear copyright violation. Tracks, music videos, and lyrics are all clearly copyrighted. But voices? Those aren’t copyrightable. There was never a reason for them to be.

Until now, perhaps.

With no clear legal path forward, YouTube was put in a tough position. The platform relies on friendly relationships with music labels, but how far does that goodwill stretch?

In the end, the track was removed for containing a copyrighted sample, but YouTube still had to satisfy the label’s demands. With that in mind, YouTube announced a deal with Universal Music Group to “develop an AI framework to help us work toward our common goals.”

This means that YouTube is going to start working with music labels to create their own rules. Meanwhile, Google (which, as you’ll recall, owns YouTube) is still freely scraping the internet to train its own AI applications.

So how did we get here?

Interest in generative AI has exploded over the last year, and along with it have come concerns about copyright.

For example, if an AI image generator was trained on copyrighted art, who owns the final product? This came up in December 2022 with the Lensa AI trend, an app where users upload photos of themselves and receive AI-generated portraits in return. It was certainly cool to try out, but it raised ethical concerns from artists whose work helped train the AI. Should they be compensated? Was the whole app just art theft?

Bring in AI video and music, and it gets even stickier.

AI can be used to learn a singer’s voice or an actor’s likeness and then create a computer-generated version. OpenAI’s Jukebox project has been used to create “deepfakes” of the voices of artists like Frank Sinatra, Katy Perry, and Elvis.

Here’s “Sinatra” singing “Toxic” by Britney Spears, for example.

Again, voices aren’t covered by copyright law — although the music used to train AI applications certainly is.

Beyond music, this also comes up in the publishing world. There are concerns that AI will be used to learn a writer’s unique style and voice to generate new content the author had nothing to do with. Authors have spoken about AI clauses being slipped into publishing contracts, asking for the rights to train AI on their work. While there’s no proof that big publishing companies are using AI to impersonate writers (yet), author Jane Friedman found books she’d never written being sold under her name on Amazon. She suspects AI was used to write them.

In all these cases, copyrighted materials are being used to generate new content — the question is, who owns that new content, and who should collect any revenue it generates?

Artists are fighting back

Lacking a legal framework, artists are banding together to demand fair and equitable use of their work and likenesses.

In publishing, more than 10,000 authors, including names like Margaret Atwood and James Patterson, signed an open letter to AI companies like OpenAI and Microsoft calling for them to obtain consent before using their work to train AI. They also asked for compensation.

According to the Authors Guild, the organization that published the open letter, it “emphasizes that generative AI technologies heavily rely on authors’ language, stories, style, and ideas. Millions of copyrighted books, articles, essays, and poetry serve as the foundation for AI systems, yet authors have not received any compensation for their contributions.”

AI has also been a major sticking point in the ongoing SAG-AFTRA strike taking place in Hollywood. One of the points striking members are negotiating around is a proposal from the Alliance of Motion Picture and Television Producers (AMPTP) on AI use.

AMPTP wants studios to be able to use an actor’s digital likeness, with their consent. SAG-AFTRA’s concern is that studios could pay an actor for a single day’s work, scan their likeness, then use that likeness in perpetuity without having to further compensate the actor.

Fran Drescher, president of SAG-AFTRA, said, “artificial intelligence poses an existential threat to creative professions, and all actors and performers deserve contract language that protects them from having their identity and talent exploited without consent and pay.”

Waiting for the law to catch up

The US legal system has been slow to catch up to the lightning speed of AI innovation, although there has been some movement.

In August 2023, a DC District Court judge ruled that AI-generated art cannot be copyrighted. The case grew out of a lawsuit from Stephen Thaler, who generated an image using an AI algorithm he’d built. He tried to copyright the image but was repeatedly rejected by the US Copyright Office.

The judge in the case wrote in his decision that copyright is not granted to works created “absent any guiding human hand,” adding that “human authorship is a bedrock requirement of copyright.”

And that brings us back to Google. Although voices are not subject to copyright, that was decided before the advent of AI-generated voices. It remains possible that we’ll see a shake-up in copyright law as generative AI takes over.

In the meantime, Google has to make its own decisions about what will be allowed to be published on YouTube. The platform shared in a blog post that it was approaching the problem with three principles in mind:

  1. AI is here, and we will embrace it responsibly together with our music partners.
  2. AI is ushering in a new age of creative expression, but it must include appropriate protections and unlock opportunities for music partners who decide to participate.
  3. We've built an industry-leading trust and safety organization and content policies. We will scale those to meet the challenges of AI.

They also said they’re still developing what these new policies will be and how they’ll impact monetization. What is clear is that Google plans to work out those policies with input from the music industry, which has a vested interest in monetizing the artists they represent.

How creators and brands can protect themselves

So, where does this all leave creators and brands?

For those who make original work, it’s important to band together with fellow creatives, as we’ve seen with authors and actors. The time is now to create precedents of how AI will be used moving forward.

If you’re a creator who uses AI-generated content in your work, you need to be aware that you’re wading into contested territory. Do not assume that you own the copyright of work you’ve made using generative AI. And while you may be free to post an AI-generated voice on YouTube now, it’s possible that in the future the monetization will be taken away by a music label. DMCA strikes are serious business on video platforms, and you don’t want to run afoul of the rules.

For brands, it’s a similar story. Work under the assumption that the rules can change at any time. It’s also important to weigh the ethical concerns — is it worth burning relationships with creatives to use AI-generated content?

It’s also important to investigate how the AI tool you’re working with handles source materials. Kapwing’s AI video generator, for example, uses royalty-free images, so there’s no copyright issue with the visuals. And our AI script generator, as another example, uses an LLM to generate text, but it hasn’t been trained to mimic any particular creator’s voice.

Use AI ethically

Generative AI comes along with some fascinating legal issues, but that doesn’t mean you should be hesitant about AI as a whole.

Whether you know it or not, you’re already encountering AI all the time on social media, and you may be proactively using it with the help of apps like ChatGPT. While AI can pose risks, including hallucinations and the spread of misinformation, it can also be incredibly empowering for creatives, whether that’s automating tedious tasks or brainstorming your next big idea.

AI is fundamentally reshaping not only technology, but art.

It’s a sure bet that it’s here to stay and that many artists will flourish with its help. What’s important is keeping on top of both legal rulings and how platforms are incorporating AI into their policies.