People being committed is only a symptom of the problem. My guess is that if LLMs didn’t induce psychosis, something else would eventually.
The peddlers of LLM sycophants are definitely doing harm, though.
I got a very different impression from reading the article. People in their 40s with no prior history and a stable life losing touch with reality within weeks of conversing with ChatGPT makes me think that is not the case. But I am not a psychiatrist.
Edit: the risk here is that we become dismissive of the increased danger because we write it off as a pre-existing condition.
I think we don’t know how many people might be at risk of slipping into such mental health crises under the right circumstances. As a society, we are probably good at protecting most of our fellow human beings from this danger (even if we do so unconsciously). We may not yet know what happens when people regularly experience interactions that follow a different pattern (which might be the case with chatbots).
I think that if it only takes a matter of weeks to go into full psychosis from conversation alone, those people were probably already on shaky ground mentally. Late-onset schizophrenia is definitely a thing.
People are often overly confident about their imperviousness to mental illness. In fact, I think that, given the right cues, we’re all more vulnerable to it than we’d like to think.
Baldur Bjarnason wrote about this recently. He talked about how chatbots are incentivizing and encouraging a sort of “self-experimentation” that exposes us to psychological risks we aren’t even aware of. Risks that no amount of willpower or intelligence will help you avoid. In fact, the more intelligent you are, the more likely you may be to fall into the traps laid in front of you, because your intelligence helps you rationalize your experiences.
I think this has happened before. There are accounts of people who completely lost touch with reality after getting involved with certain scammers, cult leaders, self-help gurus, “life coaches”, fortune tellers and the like. However, these perpetrators were real people who could only handle a limited number of victims at any given time. They also probably had their own very specific methods and strategies, which wouldn’t work on everybody, not even on all the people who might have been the most susceptible. ChatGPT, on the other hand, can do this at scale. It was also probably trained on the websites and public utterances of every available scammer, self-help author, (wannabe) cult leader, life coach, cryptobro, MLM peddler etc., which allows it to generate whatever response works best to keep people “hooked”. In my view, this alone is a cause for concern.
It’s also a case where I think the lack of intentionality hurts. I’m reminded of the way the YouTube algorithm contributed to radicalization by feeding people steadily more extreme versions of what they had already selected. The algorithm was (and is) just trying to pick the video you would most likely click on next, but in doing so it ended up pushing people down the sales funnel towards outright white supremacy, because which videos you were shown actually affected which video you would choose to click next. Of course, since the videos were user-supplied, content creators started taking advantage of that tendency with varying degrees of success, but the algorithm itself wasn’t “secretly fascist”; in the same way it would, over time, push people deeper into other rabbit holes, whether that meant obscure horror games, increasingly unhinged rage compilations, or generally everything that was once called “the weird part of YouTube.”
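To make that feedback loop concrete, here’s a tiny toy simulation (my own sketch in Python, nothing to do with YouTube’s actual system) of a recommender that only ever serves the item it expects you’re most likely to click, where each watch nudges your baseline taste toward what was served. The catalogue, the click model and the update rule are all invented for illustration:

```python
# Toy sketch (not YouTube's real system): an engagement-maximizing
# recommender that only picks the item the user is most likely to click,
# and thereby drifts wherever the user's clicks already lean.

# Hypothetical catalogue: each item has an "extremeness" score from 0 to 10.
catalogue = [i / 2 for i in range(21)]  # 0.0, 0.5, ..., 10.0

def click_probability(item, taste):
    """Assume users are most drawn to items a bit more intense than their
    current baseline (a crude stand-in for the novelty/arousal pull)."""
    return max(0.0, 1.0 - abs(item - (taste + 1.0)))

def recommend(taste):
    # No agenda here: just serve whatever maximizes the expected click.
    return max(catalogue, key=lambda item: click_probability(item, taste))

taste = 1.0  # the user starts out watching fairly mild content
for step in range(15):
    chosen = recommend(taste)
    # Watching the recommendation shifts the user's baseline toward it.
    taste = 0.5 * taste + 0.5 * chosen
    print(f"step {step:2d}: served {chosen:4.1f}, taste now {taste:4.2f}")
```

The recommender never “wants” anything; the steady drift toward the extreme end of the catalogue falls out of pure click-maximization plus the fact that what you just watched shifts what you’ll click next.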
ChatGPT and other bots don’t have failed academics and comedians trying to turn people into Nazis, but they share a similar lack of underlying anything, and that means that, unlike a cult with a specific ideology, they’re always trying to create the next part of the story you most want to hear. We’ve seen versions of this that go down a conspiracy-thriller route, a cyberpunk route, a Christian eschatology route, even a romance route. It’s pretty well known that there are “cult hoppers” who will join a variety of different fringe groups because there’s something about being in a fringe group that attracts them. But there are also people who will never join Scientology, or the Branch Davidians, or CrossFit, yet might sign on with Jonestown or QAnon given the right prompting. LLMs, by virtue of trying to predict the next series of tokens rather than actually having any underlying thoughts, will, on a long enough timeframe, lead people down any rabbit hole they might be inclined to follow, and for a lot of people, even otherwise mentally healthy people, that includes some very dark and dangerous places.
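And for the “just predicting the next token” point, here’s a deliberately crude sketch (a bigram chain, nowhere near a real LLM; the corpus and the continue_text helper are made up for illustration) of a text continuer with no beliefs of its own, which simply carries on whichever thread the prompt already leans toward:

```python
# Crude sketch (a bigram chain, not a real LLM): the "model" has no beliefs,
# it just continues whatever thread the prompt leans toward.
import random
from collections import defaultdict

# Invented toy corpus containing a conspiratorial thread and a mundane one.
corpus = (
    "they are hiding something and they are watching because you noticed their pattern . "
    "my garden needs water so my tomatoes can ripen before autumn . "
)

# Count which word tends to follow which.
follows = defaultdict(list)
tokens = corpus.split()
for a, b in zip(tokens, tokens[1:]):
    follows[a].append(b)

def continue_text(prompt, n_words=10, seed=0):
    """Extend the prompt by repeatedly sampling a plausible next word."""
    rng = random.Random(seed)
    words = prompt.split()
    for _ in range(n_words):
        candidates = follows.get(words[-1])
        if not candidates:
            break
        words.append(rng.choice(candidates))
    return " ".join(words)

# The same machinery amplifies whichever thread the user starts:
print(continue_text("they are"))
print(continue_text("my garden"))
```

A real model is of course vastly more capable, but the basic property the comment is pointing at is the same: the system continues your pattern, whatever that pattern happens to be.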