This doesn't seem especially newsworthy. Oral arguments are set for OpenAI itself to oppose the preservation order that has everyone so (understandably) up in arms. Seems unlikely that two motions from random ChatGPT users were going to determine the outcome in advance of that.
The judge clearly does not care about this issue, so arguing before her seems pointless. What recourse do OpenAI and its users have?
If anyone is under the impression OpenAI isn't saving every character typed into the chats and every bit sent to the API, I would implore them to look at the current board members.
I find it really strange how many people are outraged or shocked about this.
I have to assume they are simply unaware that this exact same preservation of your data happens in every other service you use constantly, other than those that are completely E2EE, like Signal chats.
Gmail is preserving your emails and documents. Your cell provider is preserving your texts and call histories. Reddit is preserving your posts and DMs. Xitter is preserving your posts and DMs.
This is not to make a judgement about whether or not this should be considered acceptable, but it is the de facto state of online services.
To me, this is akin to Google saying it doesn't want to comply with a court order because it would be a privacy invasion. I feel like OpenAI framed the issue as a privacy conversation, and some journalists are going along with it without questioning the source or its current privacy policy regarding data retention and data sharing with affiliates, vendors, etc.
It takes 30 seconds to save the privacy policy, upload it to an LLM, and ask it questions; it quickly becomes clear that their privacy policy already allows them to hold onto data indefinitely as is.
Even if OpenAI and other LLM providers were prohibited by law from retaining the data (the opposite of this forced retention), no one should trust them to comply.
If you want to input sensitive data into an LLM, do so locally.
We (various human societies) do need to deal with this new ability to surveil every aspect of our lives. There are clear and obvious future benefits: medicine and epidemiology will have enormous reservoirs of data to draw on, entire new fields of mass behavioural psychology will come into being (I call it Massive Open Online Psychology, or MOOP), and we might even find governments able to use minute-by-minute data on their citizens to, you know, provide services to the ones they miss…
But all of this assumes a legal framework we can trust - and I don’t think this comes into being piecemeal with judges.
My personal take is that data which would not exist, or would be different, without the activity of a natural human must belong to that human, and that it can only be held in trust without explicit payment to that human if it is used in the best interests of that human (something something criminal matters notwithstanding).
Blathering on a bit, I know, but I think "in the best interests of the user/citizen" is a really high and valuable bar, and the default that data created or enabled by my activities belongs to me really forces data companies to think.
I'd be interested in some thoughts.
Previous discussion:
OpenAI slams court order to save all ChatGPT logs, including deleted chats
The "judge" here is actually a magistrate whose term expires in less than a year.[1]
Last time I saw such weak decision-making from a magistrate I was pleased to see they were not renewed, and I hope the same for this individual.
[1] https://nysd.uscourts.gov/sites/default/files/2025-06/Public...
No judge can block any kind of mass surveillance program, and such programs have been ongoing for more than a decade now. This is a joke and completely irrelevant. OpenAI, just like every other corporation, is storing as much as it can to profile you and your bitstream.
After reading the actual Order, it appears the plaintiffs filed an application with the Court expressing concern that the defendant, OpenAI, was potentially destroying evidence; to prevent a spoliation claim (relief due to destruction of evidence), the Judge ordered OpenAI to stop destroying anything (i.e., to preserve everything).
A person not a party to the action then filed an application to intervene in the lawsuit because the Judge's Preservation Order constituted a breach of the terms of his contract with OpenAI regarding his use of OpenAI's product - more specifically that the Intervenor entered into usage of OpenAI's product upon the agreement that OpenAI would not preserve any portion of Intervenor's communication with the OpenAI product.
The problem, as I see it, is that the Judge did not address the issue that her Order constituted a breach of Intervenor's contractual interests. That suggests to me that Intervenor did not expressly state that he held contractual rights that the Court's Order was violating. I would think the next step would be to file an Order to Show Cause directly against the Magistrate Judge claiming the Magistrate's Order constitutes an unconstitutional government taking of property without Due Process.
It's crazy how much I hate every single top level take in this thread.
Real human beings' actual work is allegedly being abused to commit fraud at a massive scale, robbing those artists of the ability to sustain themselves. Your false perception of intimacy while asking the computer oracle to write you smut does not trump the fair and just discovery process.
I remember back in the day, when I made the mistake of using Facebook and iPhones, that a) Facebook never actually deleted anything and b) iMessage was also not deleting anything (both were merely hiding it).
This is why, in this part of the world, we have the GDPR, and it would be amazing to see OpenAI receive penalties of billions of euros, while at the same time a) the EU will receive more money to spend, and b) the US apparatus will grow stronger because it will know everything about everyone (the very few things it didn't already know via the FAANGs).
Lately I have been thinking that "they" play chess with our lives, and we are sleepwalking into either a Brave New World (for the elites) and/or a 1984/Animal Farm for the rest. To give a more pleasant analogy, the humans in WALL-E; or a darker one, the humans in the Matrix.
Fighting Microsoft to get the illusion of privacy is the modern-day tilting at windmills.
Your honor, it appears somebody has obtained your Netflix video rental history and has been asking ChatGPT about it.
OpenAI already saves all chats
Even though this doesn't apply to enterprise customers, I'm just waiting for European customers to wake up and realize that ChatGPT isn't compatible with the GDPR today. And if the court suddenly decides that enterprise customers should also be part of the preservation order it'll be a big hit for OpenAI.
Is there any proof that ChatGPT was deleting chats? I would think they would keep them to use as training data.
"Creating mass surveillance program harming all ChatGPT users" is just taking the lawyers' words out of their mouths at face value. Totally ridiculous. And of course it's going to lead to extreme skepticism from the crowd here when it's put forward that way. Another way to describe this: "the legal discovery process during a lawsuit continues on as it normally would in any other case."
"Judge creates mass surveillance program; denies it."
""Proposed Intervenor does not explain how a court’s document retention order that directs the preservation, segregation, and retention of certain privately held data by a private company for the limited purposes of litigation is, or could be, a 'nationwide mass surveillance program,'" Wang wrote. "It is not. The judiciary is not a law enforcement agency.""
This is a horrible view of privacy.
This gives judges unlimited ability to violate people's privacy rights while stating that they are not law enforcement.
For example, if the New York Times sues because people using a NoScript add-in are bypassing its paywall, can a judge require that the add-in collect and retain all sites visited by all its users, and then say it's OK because the judiciary is not a law enforcement agency?
> Judge denies creating “mass surveillance program” harming all ChatGPT users
What a horribly worded title.
A judge rejected the creation of a mass surveillance program?
A judge denied that creating a mass surveillance program harms all ChatGPT users?
A judge denied that she created a mass surveillance program, and its creation (in the opinion of the columnist) harms all ChatGPT users?
The judge's act of denying resulted in the creation of a mass surveillance program?
The fact that a judge denied what she did harms all ChatGPT users?
(After reading the article, it's apparently the third one.)
"However, McSherry warned that "it's only a matter of time before law enforcement and private litigants start going to OpenAI to try to get chat histories/records about users for all sorts of purposes, just as they do already for search histories, social media posts, etc.""
If this is a concern, isn't the best course of action for McSherry to stop using ChatGPT?
We have read this sort of "advice" countless times in HN comments relating to the use of software/websites controlled by so-called "tech" companies.
Something like, "If you are concerned about [e.g., privacy, whatever], then do not use it. Most users do not care."
Don't use _____.
This is a common refrain in HN comment threads.
"OpenAI will have a chance to defend panicked users on June 26, when Wang hears oral arguments over the ChatGPT maker's concerns about the preservation order."
"Some users appear to be questioning how hard OpenAI will fight. In particular, Hunt is worried that OpenAI may not prioritize defending users' privacy if other concerns-like "financial costs of the case, desire for a quick resolution, and avoiding reputational damage"-are deemed more important, his filing said."
"Intervening ChatGPT users had tried to argue that, at minimum, OpenAI should have been required to directly notify users that their deleted and anonymous chats were being retained. Hunt suggested that it would have stopped him from inputting sensitive data sooner."
Any OpenAI argument that invokes "user privacy" is doing so only as an attempt to protect OpenAI from potentially incriminating discovery. OpenAI will argue for its own interests.
Maybe it's the quotes selected for the article, but it seems like the judge simply doesn't get the objections. And the reasoning is really strange:
"Even if the Court were to entertain such questions, they would only work to unduly delay the resolution of the legal questions actually at issue."
So because the lawsuit pertains to copyright, we can ignore possible constitutional issues because it'll make things take longer?
Also, rejecting something out of hand simply because a lawyer didn't draft it seems really antithetical to what a judge should be doing. There is no requirement that filings be drafted by a lawyer.