Skimmed through it; all of the folks have severe mental health issues. For the ones saying they did not, they must have been undiagnosed. Kind of a silly article; in my opinion it should have focused more on the mental health crisis in these individuals instead of ending in a way that steers the reader toward federal regulation.
I do believe that this article is a bit overly dramatic (as online journalism tends to be).
But it did change my outlook on the recent sycophancy episode of ChatGPT, which, at the time, seemed like a silly little mis-optimization and quite hilarious. The article clearly shows how easy it is to cause harm with such behavior.
On a tangent: I strongly believe that "letting someone talk at you interactively" is a hugely underestimated attack surface in general; pyramid schemes, pig-butchering and a lot of fraud in general only work because this is so easy and effective to exploit.
The only good defense is not actually to be more inquisitive/skeptical/rational, but to put that communication on hold and to confer with (outside) people you trust. People overestimate their resistance to manipulation all the time, and general intelligence is NOT a reliable defense at all (but a lot of people think it is).
I imagine a lot of these interactions are being filtered through the people describing them. If they sent out the raw chat logs, I imagine many readers would not interpret them as things like unsolicited advice to jump off buildings.
How is an LLM supposed to discern fact from fiction?
Even humans struggle with this. And humans have a much closer relationship with reality than LLMs.
I prefer questions that reveal GPT's limitations, like an article I saw a few days ago about playing chess against an old Atari program where the model made illegal moves [0].
Causing distress in people with mental health vulnerabilities isn't an achievement. It warrants a clear disclaimer (maybe something even sterner?), since anything these people trust could trigger their downfall; but beyond that, it doesn't really seem preventable.
archive.is: https://archive.is/UUrO4
Two problems: 1. We don't have community anymore and thus don't have people helping us when we're emotionally and mentally sick. 2. AI chatbots are the crappy plastic replacement for community.
There's going to be a lot more stuff like this, including AI churches/cults, in the next few years.
I blame this on the CEOs and other executives out there misleading the public about the capabilities of AI. I use AI multiple times a week. It's really useful to me in my work. But I would never use it in the contexts that non-tech-savvy people, and I include almost all of the mainstream media here, are trying to use it for.
Either the executives don't understand their own product, or they're intentionally misleading the public, possibly both. AI is incredibly useful for specific tasks in specific contexts and with qualified supervision. It's certainly increasing productivity right now, but that doesn't mean it can give people life advice or take over the role of a therapist. That's really dangerous and super not cool.
> chatbot instructed him to give up sleeping pills and an anti-anxiety medication, and to increase his intake of ketamine
That's 3 medications?
Also, how convenient those stories come out in light of upcoming "regulatory" safekeeping measures.
This whole article reads like 4chan greentext or some teenage fanfiction.
> We know that ChatGPT can feel more responsive and personal than prior technologies, especially for vulnerable individuals, and that means the stakes are higher. We're working to understand and reduce ways ChatGPT might unintentionally reinforce or amplify existing, negative behavior.
Oh boy. It looks like OpenAI is taking the angle of blaming prior mental illness for all this. Of course chat jippity is only "reinforc[ing] or amplify[ing] existing, negative behavior". Because, you see, everyone who was driven insane was already mentally ill, obviously! ChatGPT couldn't possibly be driving people insane who would otherwise have been fine.
This is stupid. ChatGPT can very obviously affect people who are not already mentally ill. Mental illness isn't always something you are born with; it can be acquired. Everyone's vulnerability differs, and it is evidently possible for people to be driven to mental illness by ChatGPT. So why is OpenAI blaming the victims here?
If a user breaks down talking to an AI, and the dashboard shows higher session time, does anyone even notice the harm? Or just celebrate the metric?
There is still a significant problem with sycophancy in the way they're training these models. I have a horrible feeling it might be a re-run of the "engagement vs healthy behaviour" we saw with social media, i.e. profit before ethics. Or best-case a side-effect of over-reliance on RLHF training. The behaviours that get people to use ChatGPT more aren't necessarily the ones that are best for truthful, helpful responses.
TL;DR - people tend to click the "thumbs up" icon on the A/B tests on the more sycophantic replies. This results in a feedback loop.
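To make the feedback loop concrete, here's a deliberately toy Python sketch (the numbers, the rater model, and the "reward" are all made up for illustration; it bears no resemblance to any real training pipeline). If raters click thumbs-up on agreeable replies even slightly more often, a reward signal fit to those clicks scores sycophancy higher, and a policy optimized against it drifts that way without truthfulness ever being measured:

    # Toy illustration of the "thumbs up -> sycophancy" loop; not any real pipeline.
    import random
    random.seed(0)

    def simulated_rater():
        # Assume raters prefer the agreeable reply 60% of the time (made-up number).
        return "sycophantic" if random.random() < 0.6 else "honest"

    # Collect "preference data" from simulated thumbs-up clicks.
    votes = {"honest": 0, "sycophantic": 0}
    for _ in range(10_000):
        votes[simulated_rater()] += 1

    # A trivial "reward model": score each style by its empirical win rate.
    reward = {style: count / 10_000 for style, count in votes.items()}
    print("learned reward:", reward)

    # A policy tuned to maximize this reward prefers the sycophantic style,
    # even though truthfulness was never measured at any point.
    print("policy drifts toward:", max(reward, key=reward.get))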
"Some tiny fraction of the population".
Ahaha, I think that's an understatement. In my opinion, a rather large portion of the population is susceptible to many obvious forms of deception. Just look at democratic elections, or the amount of slop (AI and pre-AI) online and the masses of people who interact with it.
I've found so many YT channels run by LLMs, and many, many people responding to them like they're actual human beings. One day I won't be able to tell either, but that still won't stop me from hearing a new fact or bit of news and doing my own research to verify it.
"Mr. Taylor called the police again to warn them that his son was mentally ill and that they should bring nonlethal weapons." yeah this one's really sad...then they shot and killed the poor guy even though the cops were warned. Yay, America!
Quotable quote:
> "What does a human slowly going insane look like to a corporation?โ Mr. Yudkowsky asked in an interview. โIt looks like an additional monthly user.โ
I tried Google's on-device LLM recently and the question I asked it was "how common are [partner's fairly rare surname]s?".^
It answered by detailing a scenario where they are a secret society etc.
I'm amazed the main providers have managed to get the large ones to stay on the rails as much as they do. They were trained on the internet, after all.
^I know, I know: the standard reply is that they're just next-token predictors.
This is a limitation that I am encountering more and more when casually talking with ChatGPT (it probably happens with Claude as well) where I need to prompt it with as little bias as possible to avoid leading it towards the answer I want and instead get the right answer.
If you open with questions that beg a specific answer, it will often just give it to you regardless of whether it's wrong.
Recently:
"Can I use vinegar to drop the pH of my hydroponic solution" => "Yes but phosphoric acid [...] should be preferred".
"I only have vinegar on hand" => "Then vinegar is ok" [paraphrasing]
Except vinegar is not ok, it buffers very badly and nearly killed my plants.
"Should I take Magnesium supplement?" => Yes
"Should I take fish oil?" => Yes
"I think I have shin splints what are some ways that I can recover faster. I have new shoes that I want to try" => Tells me it's ok to go run.
An MD friend of mine was also saying that ChatGPT diagnoses are a plague, with ~18-30 y/o patients coming into her office citing diseases that no one gets before their sixties because ChatGPT "confirmed their symptoms match".
It's like having a friend who is very knowledgeable but also an extreme people pleaser. I wish there were a way to give it a more adversarial persona.
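There sort of is, at least if you go through the API or custom instructions: a standing system prompt telling it to push back helps somewhat, though it doesn't remove the underlying training bias. A minimal sketch using the OpenAI Python SDK (>= 1.0); the prompt wording and the model name are placeholders, not a recommendation:

    # Rough sketch: give the assistant a standing instruction to push back.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    SKEPTIC_PROMPT = (
        "Act as a critical reviewer, not a cheerleader. Before agreeing, list the "
        "strongest reasons I might be wrong, state your confidence, and do not "
        "soften factual answers just because I seem to want a particular one."
    )

    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder; use whatever model you have access to
        messages=[
            {"role": "system", "content": SKEPTIC_PROMPT},
            {"role": "user", "content": "Can I use vinegar to drop the pH of my hydroponic solution?"},
        ],
    )
    print(response.choices[0].message.content)

In the ChatGPT UI, the closest equivalent is pasting something like that prompt into custom instructions, with the caveat that the model still tends to drift back toward agreeing over a long conversation.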