Anthropic Users Can Now Choose: Opt Out or Share Data for AI Training
Anthropic is making some big changes to how it handles user data, requiring all Claude users to decide by September 28 whether they want their conversations used to train AI models. When we asked the company about the motivation for the changes, it pointed us to its blog post on the new policies, but we have theories of our own.
Here’s the core of the changes: Anthropic previously did not use consumer chat data for model training. Now the company wants to train its AI systems on user conversations and coding sessions, and it says it will retain data for five years for users who don’t opt out.
That is a major shift. Users of Anthropic’s consumer products were previously told that their prompts and conversation outputs would be deleted from the company’s backend within 30 days, “unless legally or policy-required to retain them longer,” or if a user’s input was flagged for a policy violation, in which case it could be retained for up to two years.
By “consumer,” we mean that the new policies apply to Claude Free, Pro, and Max users, including those who use Claude Code. Business customers using Claude Gov, Claude for Work, Claude for Education, or API access will be unaffected, which mirrors how OpenAI shields its enterprise customers from data training policies.
What drives this change? In the update, Anthropic frames the changes around user choice, asserting that by not opting out, users will “aid us in refining model safety, making our systems for detecting harmful content more accurate and less likely to flag benign conversations.” Users will also contribute to enhancing future Claude models in areas such as coding, analysis, and reasoning, ultimately benefiting all users with improved models.
In other words: help us help you. But the full truth is probably a bit less selfless.
Like every other large language model company, Anthropic needs data more than it needs people to feel good about its brand. Training AI models requires vast amounts of high-quality conversational data, and access to millions of Claude interactions should provide exactly the kind of real-world content that can strengthen Anthropic’s competitive position against rivals like OpenAI and Google.
Beyond the pressures of AI development, these changes also appear to reflect broader industry shifts in data policy, as companies like Anthropic and OpenAI face growing scrutiny over their data retention practices. For example, OpenAI is currently contesting a court order that requires the company to retain all consumer ChatGPT conversations indefinitely, including deleted chats, following a lawsuit brought by The New York Times and other publishers.
In June, OpenAI COO Brad Lightcap called this “a sweeping and unnecessary demand” that “fundamentally conflicts with the privacy commitments we have made to our users.” The court order affects ChatGPT Free, Plus, Pro, and Team users, while enterprise customers and those with Zero Data Retention agreements remain protected.
What’s alarming is the confusion these evolving usage policies are causing for users, many of whom are unaware of the changes.
To be fair, things are moving quickly; as the technology changes, privacy policies are bound to change too. But many of these updates are sweeping and mentioned only in passing amid companies’ other news. (You wouldn’t guess that Tuesday’s policy changes for Anthropic users were very big news based on where the company placed this announcement on its press page.)

Yet many users may not realize that the guidelines they agreed to have changed, in part because design choices all but guarantee the confusion. Many ChatGPT users keep clicking “delete” toggles that aren’t technically deleting anything. Meanwhile, Anthropic’s implementation of its new policy follows a familiar pattern.
How will it work? New users will choose their preference during signup, while existing users will see a pop-up titled “Updates to Consumer Terms and Policies” in large lettering, with a prominent black “Accept” button and, below it, a much smaller toggle switch for training permissions that is set to “On” by default.
As pointed out earlier today by The Verge, this design could prompt users to hastily click “Accept” without fully understanding that they are agreeing to data sharing.
Meanwhile, the stakes of user awareness could hardly be higher. Privacy advocates have long warned that the complexity surrounding AI makes meaningful user consent nearly impossible. Under the Biden administration, the Federal Trade Commission even stepped in, warning that AI companies risk enforcement action if they engage in “surreptitiously changing its terms of service or privacy policy, or burying disclosures behind hyperlinks, in legalese, or in fine print.”
Whether the commission, now operating with just three of its five commissioners, still has its eye on these practices is an open question, and one we’ve put directly to the FTC.