KartavyaDesk

Anthropic is accusing Chinese firms of data theft. But what about its own track record?


By Shobhankita Reddy

Anthropic has accused three Chinese AI laboratories – DeepSeek, MoonShot and MiniMax – of launching “industrial-scale campaigns” aimed at reverse engineering its frontier capabilities in agentic reasoning, tool use and coding. While the company does not offer commercial access to its model, Claude, in China or to subsidiaries of Chinese companies anywhere else, it accuses these firms of using proxy services to circumvent access restrictions and of deploying coordinated, targeted prompts, collectively amounting to 16 million exchanges, aimed at retrieving data about its models at scale.

Even granting that these actions violated the company’s terms of service, the episode highlights an uncomfortable double standard. Western AI firms crawled the web to train their models, terming this “standard industry practice” at a time when consent was still ambiguous and before any infrastructure existed to support opt-out mechanisms. Anthropic’s own track record in this regard is dubious. In June last year, the company was sued by Reddit for unauthorised scraping of its content and for allegedly bypassing the paid, legitimate data-licensing channels that Reddit maintains with Google and OpenAI. In September 2025, Anthropic agreed to a $1.5 billion settlement, among the largest in copyright history to date, to resolve a lawsuit alleging that it had trained Claude on pirated books from datasets like LibGen. Crying foul today, when something similar is being done to it, is less a principled defence of fairness than a reflection of shifting norms to secure its technological advantage.

Additionally, Anthropic’s allegations against China carry national security undertones.
It claims that illicit distillation – training a less capable model on the outputs of a stronger one – strips away the safety guardrails that prevent state and non-state actors from using AI for offensive cyber operations or for developing weapons of mass destruction. The company further claims that open-sourcing such models only multiplies this risk. Some of these concerns are real, as evidenced by reports alleging that DeepSeek has been used by the Chinese military. On Anthropic’s part, however, this is posturing at best, particularly in light of its decision to drop its safety pledge amid pressure from the US military.

Anthropic’s ongoing tussle with the Pentagon reveals the fragile balancing act the company has been attempting: serving as a defence supplier in the US while maintaining self-defined safety guardrails. While defence deals are lucrative for their long-term, guaranteed revenues and political capital, Anthropic’s pursuit of military contracts also appears motivated by the reputational moat it stands to gain from being the military’s trusted partner – credibility it can leverage with customers in highly regulated sectors such as healthcare and financial services. The company won a $200 million contract with the DoD last year and is officially part of the US defence supply chain, but its stance on safeguards against fully autonomous weapons targeting and US domestic surveillance had strained this relationship. The Pentagon pushed back against the company’s stated redlines, held that commercial AI should be available for “all lawful purposes” and threatened to designate the company a “supply chain risk”. Aligning with the military’s demands may clarify Anthropic’s political stance, but it leaves open questions about the risk profile of Anthropic’s models vis-à-vis their distilled versions.
More importantly, the episode highlights the inevitability of AI diffusion. If a model’s outputs enable replication through capability inference and observable behaviour, the question is not so much one of legal excludability as of how long any technology leadership can last.

This runs counter to the Anthropic CEO’s view. Calling AI a technology in its adolescent phase and capable of massive harm, he has compared allowing the sale of advanced AI chips to China to selling nuclear weapons to North Korea. The nuclear analogy assumes that tech-denial regimes focused narrowly on a few inputs can meaningfully cap global risk. However, as the company’s own experience shows, AI is a general-purpose technology, and input-based controls on it are prone to being leaky. Fissile material is controlled and traceable through coordinated, multilateral frameworks; the mathematical models and algorithms underpinning AI are not. Restricting their flow is increasingly difficult in a world where knowledge circulates through open research ecosystems, talent mobility, and rapid technology adoption driven by economic incentives.

Still, episodes like this indicate that Chinese capabilities, despite their open-source credibility and widespread adoption, lag behind those of their closed-source US peers and continue to play catch-up. In the US, this weakness will be used to conflate business competition with national security and to make the case for stronger export restrictions on China.
India has often been inadvertently caught in the middle of these bilateral frictions, as seen in the now-rescinded AI diffusion rules. These developments are therefore worth tracking as signals of a changing technology-geopolitics landscape for the rest of us.

The writer is a researcher in technology geopolitics at the Takshashila Institution, Bangalore. Views are personal

AI-assisted content, editorially reviewed by Kartavya Desk Staff.

