YouTube will provide a feature allowing users to flag songs created by AI that imitate the voices of real artists.

Record companies will be able to request removal of content that mimics an artist’s ‘unique singing or rapping’.

YouTube is rolling out new rules that let music companies request the removal of songs that use AI-generated imitations of singers’ voices.

YouTube is introducing a tool that lets music labels and distributors flag content that copies an artist’s voice. The move is a response to the surge in fake songs produced with generative AI, technology that can create text, images, and even voices that seem convincingly real.

One example is “Heart on My Sleeve”, a track featuring AI-generated vocals imitating Drake and the Weeknd. It was pulled from other music platforms after the record company Universal Music Group said its use of generative AI broke the rules, but the song can still be found on YouTube.

In a blog post, the Google-owned platform said it will first trial the new controls with a select group of record companies and independent labels before rolling them out more widely. YouTube said this group has also been involved in unspecified “early AI music experiments” that use generative AI tools to produce material.

In addition, an update to YouTube’s privacy complaint process will let users file complaints about deepfakes.

The platform said it will allow requests for the takedown of AI-generated or other synthetic or manipulated content that appears to depict an identifiable person, including their face or voice, though factors such as whether the content is parody, or whether it depicts a public figure, will be taken into account.

Not all flagged content will be removed, however; requests will be evaluated against a range of considerations, wrote Jennifer Flannery O’Connor and Emily Moxley, two YouTube vice-presidents of product management, in the blog post.

Creators, in turn, will be required to disclose when they have made realistic-looking “manipulated or synthetic” content, including AI-generated material, and will have the option to flag synthetic content when it is uploaded. Repeated failure to comply could lead to content being removed or advertising payments being suspended.


YouTube’s new AI rules are especially strict for content touching on sensitive topics such as elections, ongoing conflicts, public health crises, or public officials.

If a video is marked as AI-generated, that information will appear in its description; content dealing with sensitive topics will carry a more prominent label. Any AI-made content that breaks existing rules, such as a violent video created solely to shock, will be taken down.

Last week Meta, the owner of Facebook and Instagram, announced that political advertisers on its platforms must disclose when they have used AI in ads. In the name of transparency, advertisers will have to tell viewers whenever an image, video, or audio clip has been used to make it appear that someone said or did something they did not.

In the UK, the government is concerned that deepfakes, fabricated videos or audio clips, could distort the information people receive and erode trust, especially during elections. Just last week, fake audio clips circulated on social media purporting to show the mayor of London, Sadiq Khan, making remarks about Armistice Day that he never actually made.
