Anthropic agrees to pay authors for use of work to train chatbots
In another win for creators, Kate Knibbs reports at Wired:
Anthropic will pay at least $3,000 for each copyrighted work that it pirated. The company downloaded unauthorized copies of books in early efforts to gather training data for its AI tools.
This is the first class action settlement centered on AI and copyright in the United States, and the outcome may shape how regulators and creative industries approach the legal debate over generative AI and intellectual property. According to the settlement agreement, the class action will apply to approximately 500,000 works, but that number may go up once the list of pirated materials is finalized. For every additional work, the artificial intelligence company will pay an extra $3,000. Plaintiffs plan to deliver a final list of works to the court by October …
The lawsuit, which was originally filed in 2024 in the US District Court for the Northern District of California, was part of a larger ongoing wave of copyright litigation brought against tech companies over the data they used to train artificial intelligence programs. Authors Andrea Bartz, Kirk Wallace Johnson, and Charles Graeber alleged that Anthropic trained its large language models on their work without permission, violating copyright law.
“Anthropic Agrees to Pay Authors at Least $1.5 Billion in AI Copyright Settlement,” September 5, 2025

Anthropic does not admit wrongdoing, but the settlement is a significant victory for authors nonetheless. AI companies simply ignored copyright, drawing on book piracy websites to use hundreds of thousands of works without notice, let alone permission.
Some would have us believe that the bots are so clever they thought all that stuff up themselves. As if.
Note: This follows a decision in July. See “Can AI legally be trained using all the books in the world?”: Judge rules in support of chatbot Claude — except when training materials are pirated.
Anthropic partly won and partly lost its fair use defense arguments. Following statutes and precedents, the court made three main rulings.
First, Anthropic’s purchasing of books and tearing them apart to make and store digital copies of the text is considered fair use. That conclusion stems from the observation that Anthropic did what anyone may do: buy a book and make a copy for storage, discarding the physical book the copy replaces.
Second, using copies of books stored in the central digital library to train Claude is a fair use. Doing so is akin to using a book to teach other people its contents. Such use can also fall under Section 107, which treats copies used for “teaching, scholarship, or research” as non-infringing.
On the third issue, however, Anthropic fared poorly. The pirated digital book copies sank Anthropic on a fundamental point. Declared the court: “The downloaded pirated copies used to build a central library were not justified by a fair use. Every factor points against fair use.” Training Claude on book copies would be a fair use, but obtaining those copies by piracy infringed the authors’ copyrights. – Richard W. Stevens, July 3, 2025
There will be many more battles, but this ruling means that being an original creator remains a viable way to make a living.
You may also wish to read: Court: Satire is not illegal, even in California. The Babylon Bee won but the battle is hardly over. Many governments have begun to see the internet as something that they should try to run … into the ground.