Anthropic is changing how it uses Claude users’ data, asking them to decide by 28 September whether their interactions can be included in future AI training. The update introduces new rules on data retention and consent, giving individuals the option to opt out if they do not want their chats used.
What is changing
Until now, Anthropic has not used consumer chat data to train its models. Under the update, the company plans to include chat interactions and coding sessions from users who do not opt out, and data from those accounts can be stored for up to five years. This is a sharp change from the earlier policy, under which prompts and outputs were automatically deleted after 30 days unless flagged for policy violations or retained for legal reasons.
The update applies to Claude Free, Pro and Max users, and also covers Claude Code. However, enterprise customers using Claude Gov, Claude for Work, Claude for Education, or API access will not be affected, mirroring a similar approach taken by OpenAI in shielding business customers.
Why the shift matters
Anthropic says the change supports model improvement, claiming that shared user data will help build safer, more accurate systems by strengthening skills in coding, analysis and reasoning. The company frames the move as a way for users to contribute to stronger models.
However, industry analysts note that the update also reflects competition among AI companies. Training advanced systems requires access to large volumes of real-world interactions, and Claude user data could give Anthropic an edge over rivals such as OpenAI and Google.
Concerns around consent
The policy has raised questions about transparency. New users will see the choice at sign-up, but existing users face a pop-up with a prominent “Accept” button, while the opt-out toggle appears smaller and is switched on by default. Critics warn that this design could push many people to agree without realizing it.
Privacy experts argue that when policies are buried in complex language or hidden in fine print, genuine consent is difficult to achieve. Regulators, including the US Federal Trade Commission, have already warned AI companies against changing data policies without clear disclosure.