A U.S. federal judge has ruled in Anthropic's favour over its use of copyrighted books to train its AI models, holding that such use falls under the fair use exception even without explicit permission from the authors. The decision comes at a time when the role of AI in society is under close watch by regulators and lawmakers, even as many in the tech community push for lighter oversight.
Judge William Alsup compared the process to an aspiring writer learning from established literature, stressing that Anthropic's language models aren't designed to mimic existing works but to generate new, original content. The judgement comes in a lawsuit brought by several authors who accused Anthropic of using their work without consent to build its chatbot, Claude.
According to Alsup, the AI’s output is highly transformative, fitting neatly within the fair use doctrine intended to foster creativity and progress. However, he made it clear that while the training process is acceptable, storing seven million pirated books in a central library crosses the line into copyright infringement.
The fair use doctrine permits limited use of copyrighted material without the rights holder's permission, particularly for transformative purposes, and invoking it has become common practice among tech companies developing AI. Nonetheless, the issue remains contentious; some worry that such practices could ultimately hurt creators if AI leads to mass-produced imitations rather than genuine artistic expression.
The plaintiffs—including figures like Andrea Bartz, Charles Graeber, and Kirk Wallace Johnson—have labelled Anthropic’s actions as “large‑scale theft” of intellectual property. While the ruling is a win for AI developers on the fair use front, the company still faces a trial in December over the allegations related to its central library of pirated works.