OpenAI has secured court approval to stop preserving most deleted ChatGPT conversations, ending a controversial legal mandate that required indefinite retention of user data. The resolution comes through a joint agreement between OpenAI and news organizations led by The New York Times, which sought the data as evidence in their copyright infringement lawsuit. While the AI company can now resume normal deletion practices for most users, monitoring continues for accounts linked to domains flagged by plaintiffs.
Legal Battle Over User Data Preservation
The data preservation order originated from a lawsuit filed by The New York Times and other media plaintiffs alleging systematic copyright infringement by OpenAI’s AI models. The news organizations argued that users attempting to bypass paywalls might rely on temporary or deleted chats to do so, making preservation of those logs crucial for evidence collection. US Magistrate Judge Ona Wang initially granted the preservation request, compelling OpenAI to retain “all output log data that would otherwise be deleted” despite the company’s privacy objections.
OpenAI fought the order vigorously, citing user privacy concerns and defending its data handling policies. The company’s opposition proved unsuccessful, and by July 2025, news plaintiffs began examining the preserved ChatGPT outputs. Several ChatGPT users attempted to intervene in the case, arguing their privacy interests were being violated, but courts consistently denied their petitions, ruling they lacked standing as non-parties to the litigation. The Electronic Frontier Foundation had warned about the broader implications for digital privacy in such preservation orders.
Partial Resolution with Ongoing Monitoring
Thursday’s court order approved a compromise that allows OpenAI to resume normal data deletion practices for the majority of ChatGPT users effective September 26. However, the agreement establishes a continuing monitoring regime for specific accounts. OpenAI must preserve deleted and temporary chats from users whose domains have been flagged by news organizations during their initial data examination. The arrangement creates a mechanism for expanding surveillance as plaintiffs identify additional domains of interest.
Under the approved framework, all previously preserved chats remain accessible to news plaintiffs for their copyright investigation. The monitoring system operates through an automated process where OpenAI’s systems identify accounts associated with flagged domains and exempt them from normal deletion protocols. This targeted approach represents a middle ground between complete data preservation and unrestricted deletion, though it raises questions about user notification and consent for ongoing surveillance.
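The exemption mechanism described above can be illustrated with a brief sketch. Everything here is hypothetical: the flagged-domain list, the retention window, and the function name are illustrative assumptions, since the court filings do not disclose how OpenAI's systems actually implement the check.

```python
from datetime import datetime, timedelta

# Hypothetical placeholder domains; the actual plaintiff-flagged list is not public.
FLAGGED_DOMAINS = {"example-newsroom.com", "example-publisher.org"}

# Assumed normal retention window for deleted chats (illustrative, not confirmed).
RETENTION_WINDOW = timedelta(days=30)

def should_preserve(account_email: str, deleted_at: datetime, now: datetime) -> bool:
    """Decide whether a deleted chat is exempt from normal purging.

    Chats tied to accounts on flagged domains are preserved indefinitely
    while monitoring continues; all other chats follow the normal window.
    """
    domain = account_email.rsplit("@", 1)[-1].lower()
    if domain in FLAGGED_DOMAINS:
        return True  # flagged account: exempt from deletion
    return now - deleted_at < RETENTION_WINDOW  # otherwise purge after the window
```

In this sketch, expanding the monitoring regime as plaintiffs identify new domains of interest amounts to adding entries to the flagged set, which matches the article's description of a mechanism that can grow over time.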
Broader Legal Context and Industry Impact
The data preservation dispute occurs within a larger legal battle that could reshape AI development and content distribution. News organizations allege that ChatGPT and similar AI tools threaten their business models by reproducing copyrighted content without compensation. They’ve documented instances where ChatGPT attributes false information to reputable publications, potentially damaging their credibility. The lawsuit joins multiple high-profile copyright cases testing the boundaries of fair use in AI training.
Microsoft’s recent motion to exclude its Copilot AI from the litigation highlights the expanding scope of the legal challenges facing AI companies. The software giant argues its product operates differently from ChatGPT, though both systems rely on similar training data and architectures. Industry analysts note that the outcome could influence how intellectual property frameworks adapt to generative AI technologies, with potential global implications for content creation and distribution.
Financial and Insurance Pressures Mount
Beyond the immediate legal skirmishes, OpenAI faces growing pressure from insurance providers concerned about mounting litigation risks. Multiple sources confirm that insurers are increasingly reluctant to provide comprehensive coverage for AI products with pending lawsuits that could result in multibillion-dollar liabilities. This financial pressure may force OpenAI toward settlement regardless of the legal merits of its position, as insurance market dynamics create additional incentives for resolution.
The insurance industry has become cautious about AI liability following several high-profile claims. A recent report from the International Risk Management Institute noted that AI-related lawsuits have increased 340% since 2022, causing insurers to reevaluate coverage terms and premiums. For OpenAI, which relies on insurance to manage operational risks, these market conditions could prove as consequential as the legal arguments themselves, potentially accelerating settlement discussions.