YouTube fully embraces AI-generated content through a series of new tools to assist artists in video creation, as well as measures to protect viewers from misleading content.
In a post published last Tuesday (14th November) on the company’s official blog, YouTube announced it will be introducing new labels for content created or altered with AI tools. Given how realistic AI-generated content can now look, such videos have the potential to seriously mislead viewers, prompting the platform to take action to transparently inform them when these tools have been used.
If a creator discloses to YouTube that an upload is AI-generated, a new label will be added to the video’s description. For more ‘sensitive topics’ – a concept the platform currently leaves undefined – the label can be added directly to the video player. Creators who fail to disclose the AI nature of a video risk not only the content’s removal but also their own suspension from YouTube’s Partner Program.
Of course, a video can still be removed even if its creator discloses AI usage and a label is correctly applied. This is the case for videos that violate the Community Guidelines, for example by showing extremely violent or dangerous content.
Nonetheless, the platform reassures creators and artists that it’ll work closely alongside them when rolling out these updates, ensuring they’re fully aware of the new requirements.
In the future, YouTube will also allow requests, through its Privacy Complaint Process, for the removal of AI-generated content that simulates an ‘identifiable individual’, such as someone’s voice or appearance.
A removal request, however, does not automatically mean a takedown, as that depends on a variety of other factors. For instance, parodies and satire of public figures are one type of content the platform will protect when evaluating these requests.
What’s Up For the Music Industry?
At the very end of YouTube’s announcement is a short but important piece of information for its music partners. To appease critics who lost sleep over ‘Fake Drake’ and similar deepfakes, the platform states that some of its music partners are authorised to request the removal of AI-generated music content ‘that mimics an artist’s unique singing or rapping voice’.
For now, only labels and distributors representing artists who have chosen to take part in the platform’s ‘AI music experiments’ will have that option, although YouTube promises to expand this access in the future.
As Music Business Worldwide points out, this could be just the first of many future announcements regarding more advanced forms of automatic identification technology for AI-generated content. Earlier this month, Believe disclosed that it has developed software capable of detecting deepfakes with 98% accuracy, confirming that highly effective identification technologies of this kind can be built and adopted.
These updates follow the first steps already taken by YouTube last August, when it jointly announced the launch of the Music AI Incubator with Universal – the so-called ‘AI music experiments’ that Tuesday’s announcement mentioned. The incubator brings together some of the Group’s artists, songwriters and producers to develop an ‘artist-centric approach to generative AI’. The launch coincided with YouTube’s announcement of its AI Music Principles – three fundamentals showcasing its commitment to collaborating with the music industry on future innovations in the AI space.
But what seemed to be merely platform improvements to protect viewers from the dangers of AI has proven to be much more than that. On 16th November, YouTube announced its new Dream Track for Shorts and a new set of Music AI tools developed in collaboration with Google DeepMind.
Starting this month, Alec Benjamin, Charlie Puth, Charli XCX, Demi Lovato, John Legend, Papoose, Sia, T-Pain, and Troye Sivan will be the only artists – for now – who allow users to create an original 30-second soundtrack using their likeness by typing an idea into a creation prompt. A new music generation model called Lyria takes care of the rest, producing the soundtrack with the chosen artist’s AI-generated voice, which users can then use in their Shorts.
The Music AI Incubator, which we hadn’t heard from since its creation, has now assumed a major role in testing new AI tools to be later implemented on YouTube, aiming to assist artists’ creative process. One example is a tool that enables musicians to create professional-sounding audio tracks simply by entering short text prompts and uploading humming recordings.
Amid so many updates, one thing is certain: many more are to come, whether in the form of widely released, ready-to-use AI music tools or further protective measures for viewers. Watch this space…