Anthropic Wants to Have Its Cake and Eat Yours Too
AI · Ethics · Anthropic · Opinion · Open Source

Anthropic paid $1.5 billion to settle a lawsuit for training Claude on pirated books, but now they're complaining about Chinese AI labs 'stealing' their model outputs.

February 24, 2026 · 3 min read

Remember when your mom caught you with your hand in the cookie jar, and you tried to blame your little sister? That's basically Anthropic right now. Except the cookie jar was half a million pirated books, and the bill came to $1.5 billion.

The Settlement Nobody's Talking About

Back in September, Anthropic quietly agreed to pay $1.5 billion to settle a class-action lawsuit. The accusation? Training their shiny Claude models on pirated copies of roughly 500,000 books. Authors Andrea Bartz, Charles Graeber, and Kirk Wallace Johnson led the charge, and Anthropic folded. About $3,000 per book. Lawyers called it "the largest copyright recovery ever" and "the first of its kind in the AI era."

Translation: they got caught with their hand so deep in the jar they had to pay ransom money.

Fast Forward Five Months

Now here's where it gets spicy. Last week, Anthropic published a dramatic blog post titled "Detecting and Preventing Distillation Attacks." The villain? Chinese AI labs like DeepSeek, Moonshot, and MiniMax. The crime? Using Claude's outputs to train their own models—a technique called "distillation."
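For readers who haven't met the term before, here's a toy sketch of what distillation amounts to, stripped to its essence. Everything here is illustrative: the function names and the canned "model" are made up, but the shape of the process (query a teacher model, train a student on its outputs) is the real technique.

```python
# Toy sketch of model distillation: a "student" learns to imitate a
# "teacher" purely from the teacher's outputs, never touching the
# teacher's weights or its original training data.

def teacher(prompt: str) -> str:
    # Stand-in for a proprietary model served over a paid API.
    canned = {
        "capital of France?": "Paris",
        "2 + 2?": "4",
    }
    return canned.get(prompt, "I don't know")

def collect_dataset(prompts):
    # Step 1: query the teacher and record (prompt, output) pairs.
    return [(p, teacher(p)) for p in prompts]

def train_student(dataset):
    # Step 2: "train" the student. Here it just memorizes the pairs;
    # a real lab would fine-tune a neural network on this data.
    return dict(dataset)

prompts = ["capital of France?", "2 + 2?"]
student = train_student(collect_dataset(prompts))
print(student["capital of France?"])  # imitates the teacher: "Paris"
```

The point of the sketch: the student only ever sees what the teacher *says*, which is exactly the data Anthropic sells through its API.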

Anthropic's tone? Outraged. Betrayed. They called it "industrial-scale" theft. They warned about "national security risks." They framed it as a threat to democracy itself.

Wait. Hold up.

The Irony Is Thick Enough to Spread on Toast

Let's recap:

  • Step 1: Anthropic builds a business by scraping the internet—including, allegedly, half a million pirated books. They pay $1.5B to make that little oopsie go away.
  • Step 2: Chinese labs use Claude's publicly available outputs (you know, the thing Anthropic literally sells access to) to train competing models.
  • Step 3: Anthropic throws a tantrum about "illicit extraction" and "theft."

The mental gymnastics here deserve an Olympic gold medal.

What's Actually Happening

DeepSeek released R1—a reasoning model that matches or beats Claude at a fraction of the cost. And they open-sourced it. That's the real threat. Not some vague "national security" boogeyman. Anthropic watched a competitor leapfrog them using techniques that are, by their own admission, "widely used and legitimate."

The only difference? DeepSeek didn't need to pirate anything. They bought API access (through some shady account shenanigans, sure) and used the outputs, the same way every AI company on earth has been operating since this industry started.

The Real Double Standard

When Anthropic trains on your book without permission? "Innovation." "Fair use." "The cost of progress."

When someone trains on Anthropic's outputs? "Theft." "Attack." "Existential threat."

Here's the uncomfortable truth: every major AI model is a collage of other people's work. GPT-4, Claude, Gemini—they're all standing on a mountain of human creativity they didn't pay for until courts forced their hand. Now that the same logic applies to them, suddenly the rules need to change.

The Open Source Elephant in the Room

Anthropic's blog post reads like a desperate attempt to justify why American AI companies should have special rights. They warn that open-sourcing distilled models lets "dangerous capabilities proliferate." They tie it to export controls and geopolitical competition.

But here's what they won't say out loud: open-source models like DeepSeek R1 threaten their business model. Why pay $20/month for Claude when you can run a comparable model locally for free?

The "safety" arguments are convenient cover for economic self-interest. Always have been.

Bottom Line

Anthropic isn't mad that someone "stole" their work. They're mad that someone outplayed them at their own game—without even needing to break the law as blatantly as they did.

You can't spend years building a castle on stolen land, then cry foul when someone builds a better castle across the street using publicly available bricks.

The $1.5 billion settlement should have been a wake-up call. Instead, it looks like they learned exactly nothing.


Want to chat about AI, ethics, or corporate hypocrisy? Find me on [Twitter/X/whatever].

Bhupesh Kumar


Backend engineer building scalable APIs and distributed systems with Node.js, TypeScript, and Go.