Anthropic Alleges Chinese AI Firms Illegally Extracted Data from Claude Platform

According to The Wall Street Journal, Anthropic claims that DeepSeek, Moonshot AI, and MiniMax created fake accounts and queried its Claude model more than 16 million times to harvest outputs for training purposes.

Fact Check
Multiple high-authority news outlets and official company communications consistently report that Anthropic publicly accused three Chinese artificial intelligence firms, named as DeepSeek, Moonshot AI, and MiniMax, of illegal or improper data extraction from its Claude platform via so-called 'distillation attacks.' Anthropic's official blog post describes the alleged activity in detail, including mass prompting, fraudulent accounts, and large-scale harvesting of Claude outputs. Independent major publications, including The New York Times, The Wall Street Journal, Fortune, and the South China Morning Post, corroborate the allegations with details about their scale, the actors involved, and the surrounding context. Reporting is consistent across sources, with no substantive contradictions, and cites both Anthropic's statements and supporting evidence. Although the accusations have not been adjudicated in court, the fact that Anthropic made them is well documented and directly supported by primary sources, making it highly probable that the statement about Anthropic alleging such conduct is true.

Terms & Concepts
  • Claude: An advanced AI chatbot developed by Anthropic, designed for conversation and task assistance.
  • Data siphoning: Unauthorized extraction or copying of data from a digital platform or service.
  • DeepSeek, Moonshot AI, and MiniMax: China-based artificial intelligence companies accused by Anthropic of using fraudulent accounts to access data.