A U.S. federal court has backed Meta, ruling that its use of copyrighted works by Sarah Silverman and other authors to train AI models qualifies as fair use. But this is no blanket endorsement: the decision also makes clear that, in other circumstances, the same practices could cross legal lines.
Judge Vince Chhabria based his decision on the lack of solid evidence that Meta's use harmed the market for the authors' works. The ruling follows a similar decision earlier this week in favour of Anthropic, which used books to train its Claude model, though that case still faces further scrutiny over allegations involving pirated digital books.
Back in 2023, Silverman and several other authors took legal action against Meta for using their works without negotiating licences. Chhabria deemed Meta's use of the texts to develop the Llama language model "highly transformative", stressing that the absence of clear market harm was critical to his decision.
Looking ahead, the judge hinted that AI developers might soon find themselves needing to secure licences from creators to steer clear of copyright issues. He warned, “In many circumstances, it will be illegal to copy copyright‑protected works to train generative AI models without permission.”
The decision also pushes back on the notion that stricter copyright enforcement would stifle AI progress. With the industry expected to generate substantial revenue, there appears to be room to compensate content creators where licences are required.
It's important to note that the court has yet to resolve another claim alleging that Meta unlawfully distributed the authors' works via torrenting. Nor does the ruling grant Meta a blanket licence; rather, it reflects that the plaintiffs failed to provide the concrete evidence needed to prove market harm.