AI Industry Warns of Ruin from Largest Copyright Class Action
The artificial intelligence industry is currently locked in a high-stakes legal battle, urging an appeals court to intervene in what is being described as the largest copyright class action ever certified. At the heart of the dispute is a lawsuit brought by three authors against Anthropic, a leading AI company, which industry groups warn could “financially ruin” the entire sector if up to seven million claimants ultimately join the litigation and force a settlement.
Last week, Anthropic formally petitioned the court for leave to appeal the class certification, contending that the district court judge, William Alsup, failed to conduct a “rigorous analysis” of the potential class. Anthropic claims Judge Alsup relied instead on his “50 years” of experience, thereby rushing a certification that could expose the company to “hundreds of billions of dollars in potential damages liability” within a mere four months. With each of the millions of works at issue potentially carrying statutory damages of up to $150,000, Anthropic argues that such extreme financial pressure could compel it to settle, forgoing its right to raise valid defenses for its AI training practices. This, the company warns, would set an alarming precedent for other generative AI firms facing similar lawsuits over the use of copyrighted materials for training.
In a recent court filing, major industry bodies, including the Consumer Technology Association and the Computer and Communications Industry Association, threw their weight behind Anthropic. They cautioned the appeals court that the “erroneous class certification” poses “immense harm not only to a single AI company, but to the entire fledgling AI industry and to America’s global technological competitiveness.” These groups argue that allowing such sweeping copyright class actions in AI training cases would leave critical copyright questions unresolved, emboldening claimants and chilling vital investment in AI development. They emphasize that the technology industry, poised to shape the global economy, simply “cannot withstand such devastating litigation,” warning that the United States’ leadership in AI could falter if excessive damages stifle innovation.
Intriguingly, the industry groups are not alone in their concerns about the class action’s structure. Advocates representing authors, including the Authors Alliance, the Electronic Frontier Foundation, the American Library Association, the Association of Research Libraries, and Public Knowledge, have also backed Anthropic’s appeal, albeit for different reasons. They contend that copyright suits are generally ill-suited for class actions because each individual author must independently prove ownership of their work, a notoriously complex task, as the Google Books litigation demonstrated.
In the Anthropic case, these author advocates criticized Judge Alsup for what they described as a superficial assessment of the seven million books involved. They allege the judge conducted “almost no meaningful inquiry into who the actual members are likely to be,” failing to analyze the types of books, their authors, applicable licenses, or the diverse interests of rightsholders. Despite “decades of research, multiple bills in Congress, and numerous studies from the US Copyright Office” highlighting the challenges of determining rights across a vast number of books, the district court appeared to assume authors and publishers could easily “work out the best way to recover” damages.
However, the reality is far more intricate. Issues abound, such as defunct publishers complicating ownership, rightsholders possessing only a fraction of a work, or the challenge of dealing with deceased authors whose literary estates have split rights. The problem is compounded by “orphan works,” where identifying rightsholders is virtually impossible. Critics warn that if the class action proceeds, the court could face “hundreds of mini-trials” to resolve these complex ownership questions.
Furthermore, these groups argue, the proposed notification scheme for potential claimants is deeply flawed because it requires claimants to notify other potential rightsholders themselves. This overlooks the staggering cost, $34.5 million, that Google committed to establishing a “Books Rights Registry” for payouts in a previous large-scale case involving authors. The court’s suggestion that authors could simply “opt out” if they disagreed with the class action is also deemed insufficient: many may never learn about the lawsuit at all, compromising fundamental fairness and due process for absent class members. The potential for conflict between authors and publishers, who may hold differing views on AI litigation, further complicates an already tangled situation.
Ultimately, advocates on both sides argue that “there is no realistic pathway to resolving these issues in a common way,” despite the district court identifying a common question in Anthropic’s downloading of books. They warn that pursuing this path risks forcing settlements that leave critical questions about AI training on copyrighted materials unresolved, casting a persistent cloud of uncertainty over the industry. This case, they conclude, is of “exceptional importance,” addressing the legality of using copyrighted works for a “transformative technology used by hundreds of millions of researchers, authors, and others.” They fear that the district court’s “rushed decision to certify the class represents a ‘death knell’ scenario that will mean important issues affecting the rights of millions of authors with respect to AI will never be adequately resolved.”