Huge win for Anthropic — while they still exist
https://www.youtube.com/watch?v=14Tr930opuI&list=UU9rJrMVgcXTfa8xuMnbhAEA - video
https://pivottoai.libsyn.com/20250907-anthropic-ai-pays-off-authors-with-just-15-billion - podcast
time: 4 min 29 sec
I’m gonna start by quoting the class’s pretty decent summary, which goes a little heavy on the self-back-patting:
The stage is precisely the one we discussed previously on Awful in the context of Kadrey v. Meta. The class was aware that Kadrey is an obvious obstacle to succeeding at trial, especially given how Authors Guild v. Google (the Google Books case) turned out:
Anthropic has agreed to delete its copies of the pirated works. This should suggest to folks that the typical model-training firm does not delete its datasets.
All in all, I think this is a fairly healthy settlement for all involved. I do think the resulting incentive for model-trainers is not what anybody wants, though: the Google Books precedent still stands and Kadrey wasn't revisited, so model-trainers now need only buy second-hand books at market price and digitize them, just as Google has been doing for decades. At worst, this is a business opportunity for a sort of large private library that has pre-digitized its holdings and sells access for model training. Authors lose in the long run; class members will get around $3,000 USD each in this payout, but under the first-sale doctrine in the USA, second-hand sales carry no royalties.