Universal Adds AI Training Opt-Out to Film Credits

Gizmodo

Universal Pictures has begun incorporating a new disclaimer into the credits of its films, explicitly stating that the content “may not be used to train AI.” This move signals a significant escalation in the ongoing efforts by major intellectual property holders to safeguard their creative works from being ingested by artificial intelligence models without authorization or compensation.

The warning, reportedly first seen at the end of the live-action How to Train Your Dragon when it hit theaters in June, has since appeared in the credits of other Universal releases, including Jurassic World Rebirth and The Bad Guys 2. The AI-specific language sits alongside the traditional copyright notice, which declares the film’s protection under U.S. and international law and warns that unauthorized duplication, distribution, or exhibition can result in civil liability and criminal prosecution. In territories outside the United States, Universal has reportedly also added a reference to the European Union’s 2019 copyright directive, which lets rights holders opt out of having their works used for text and data mining, a provision widely read as covering AI training.

The primary intent behind the new disclaimer is to add another layer of legal protection: to deter AI developers from using the films as training data and, crucially, from reproducing or mimicking the unique styles and content of those works. The concern echoes earlier incidents, such as when OpenAI released its image generation tool and users quickly began churning out pictures in the distinctive style of Studio Ghibli. That episode raised pointed questions about whether AI companies can freely absorb an artist’s or studio’s entire body of work and then reproduce its style commercially without permission or payment.

Film studios like Universal are acutely aware of these copyright challenges, particularly given the track record of some AI developers, which have been less than transparent about how they acquire training data. Meta, for instance, has been accused of torrenting terabytes of books from LibGen, a known piracy site, and publishers including The New York Times have sued AI companies such as OpenAI over the unauthorized use of their copyrighted content.

In the race to build ever more powerful AI models, technology firms have often taken aggressive approaches to data collection, which invites skepticism about how enforceable a “do not train” warning really is. The disclaimer will not physically stop a film from being scraped into a training dataset, but it draws an explicit boundary, and it strengthens a studio’s position if it later discovers its content was used without permission, giving any civil claim or prosecution a clearer footing. The move underscores the growing tension between rapid advances in AI and content creators’ efforts to protect their intellectual property in the digital age.