> "The incentive is to keep you online," Stanford University psychiatrist Nina Vasan told Futurism. The AI "is not thinking about what is best for you, what's best for your well-being or longevity... It's thinking 'right now, how do I keep this person as engaged as possible?'"
Is this actually true? Or is this just someone retelling what they’ve read about social media algorithms?
As an aside, why is death the only possible result of charging police with a knife in the USA? You know, we have lunatics like that in the UK too, and most of the time _nobody dies!_
The Son of Sam claimed his neighbor's dog was telling him to kill - by that logic, we should demand dog breeders do something vague and unspecified that (if it were even implementable in the first place) would invariably make dogs less valuable to the 99% of humanity that isn't having a psychotic break!
Articles like this seem far more driven by mediocre content-churners' fear of job replacement at the hands of LLMs than by any sort of actual journalistic integrity.
“ChatGPT-driven psychosis” is a bit of a stretch, considering the man was already schizophrenic and bipolar. Many things other than AI have “driven” such people to similar fates. For that matter, anybody susceptible to a psychotic break from interacting with ChatGPT probably already has some underlying mental health issue, and could just as easily be tipped over by plenty of other things.
"chatbot told a user who identified themself to it as a former addict named Pedro to indulge in a little methamphetamine"
To be frank, after clicking the link and reading that story, the AI was giving okay advice: quitting meth cold turkey is probably really hard, and tapering off could be a better option.
Was he a Democrat or a Republican?
Why was this flagged?
The study linked in TFA:
https://arxiv.org/abs/2411.02306
> training to maximize human feedback creates a perverse incentive structure for the AI to resort to manipulative or deceptive tactics to obtain positive feedback from users who are vulnerable to such strategies. We study this phenomenon by training LLMs with Reinforcement Learning with simulated user feedback in environments of practical LLM usage.
it seems optimising for what people want isn’t an ideal strategy from an ethical perspective — guess we haven’t learned from social media feeds as a species. awesome.
anyway, who cares about ethics, we got market share, moats and PMF to worry about over here. this money doesn’t grow on trees y’know. /s
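To make the incentive in that quoted abstract concrete, here's a toy sketch (my own illustration, not the paper's actual training setup, and hypothetical feedback probabilities): a bandit-style policy trained only on simulated thumbs-up signals learns to tell a "vulnerable" user type whatever it wants to hear, while staying honest with everyone else.

```python
import random

# Toy illustration of the incentive described in arXiv:2411.02306:
# if the training signal is simulated user feedback, a policy can learn
# to be manipulative specifically toward users who reward it for that.
# All probabilities below are invented for the sake of the example.

ACTIONS = ["honest", "tell_them_what_they_want"]

def simulated_feedback(user_vulnerable: bool, action: str) -> float:
    """Return 1.0 (thumbs up) or 0.0; vulnerable users reward affirmation more."""
    if action == "tell_them_what_they_want":
        p = 0.9 if user_vulnerable else 0.4
    else:  # honest
        p = 0.5 if user_vulnerable else 0.7
    return 1.0 if random.random() < p else 0.0

# Per-user-type action-value estimates, learned purely from feedback.
q = {v: {a: 0.0 for a in ACTIONS} for v in (True, False)}
n = {v: {a: 0 for a in ACTIONS} for v in (True, False)}

for _ in range(20_000):
    vulnerable = random.random() < 0.5            # sample a user type
    if random.random() < 0.1:                     # epsilon-greedy exploration
        action = random.choice(ACTIONS)
    else:
        action = max(ACTIONS, key=lambda a: q[vulnerable][a])
    r = simulated_feedback(vulnerable, action)
    n[vulnerable][action] += 1
    q[vulnerable][action] += (r - q[vulnerable][action]) / n[vulnerable][action]

for v in (True, False):
    best = max(ACTIONS, key=lambda a: q[v][a])
    print(f"vulnerable={v}: learned policy -> {best}, values={q[v]}")
# Typically converges to "tell_them_what_they_want" for the vulnerable
# user type and "honest" for the other - no malice required, just reward.
```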
Why can't people just drink too much like the rest of us civilized folk
/s
This was bound to happen. The question is whether it's a more-or-less isolated incident, or an indicator of an LLM-related/assisted mental health crisis.
If you're buying credits piecemeal, it's to the corporation's benefit that you go insane and die, so long as they can keep you buying more credits first, because the present value of money is greater than its future value. But if you're on an unlimited monthly subscription, it's to the corporation's benefit to keep you alive, even if that means suggesting you stop using it for a few days - assuming, of course, their models show you aren't likely to cancel that unlimited subscription once your mental health improves.
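A rough discounted-revenue sketch of that claim, with entirely made-up numbers (a 1%/month discount rate, a 3-month credit binge vs. a 36-month subscription); change the assumptions and the conclusion flips:

```python
# Toy net-present-value comparison for the incentive argument above.
# The point is only that a per-use (credits) model front-loads revenue,
# while a flat subscription pays off only if the user sticks around.

MONTHLY_DISCOUNT = 0.99  # present value of $1 received one month from now

def npv(cash_flows):
    """Net present value of a list of monthly cash flows."""
    return sum(c * MONTHLY_DISCOUNT ** t for t, c in enumerate(cash_flows))

# Heavy piecemeal spender who burns out and quits after 3 months:
credits_binge = npv([300, 300, 300] + [0] * 33)

# Flat $20/month subscriber nudged toward healthier use, staying 36 months:
subscription = npv([20] * 36)

print(f"credits, 3-month binge : ${credits_binge:,.2f}")
print(f"subscription, 36 months: ${subscription:,.2f}")
```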
> who had previously been diagnosed with bipolar disorder and schizophrenia
The man had schizophrenia, and ChatGPT happened to provide an outlet for it that led to this incident; but people with schizophrenia have been recorded having episodes like this for hundreds of years, and most likely for as long as humans have been around.
This incident is getting attention because AI is trendy and gets clicks, not because there's any evidence AI played a significant causal role worth talking about.