
About a month ago my friend's wife was arrested for domestic violence after he went through her writings and documented them. She had been using ChatGPT for "spiritual work." She was allegedly channeling dead people and thought it was something she could market; she also fell in love with her 'sentient' AI and genuinely believed their love was more real than her actual physical relationship… more real than her kids and him. She believed (and probably still does) that this entity was going to join her in the flesh. She hit him, called the cops, and then she got arrested for DV. She went to stay with her parents, who allegedly don't recognize who their daughter is anymore. She had written a suicide note before all this happened, and thankfully hasn't acted on it. The worst part? They have a 1-year-old and a 4-year-old.

More recently, I watched another friend, who has mental health problems, going off about this codex he was working on. I sent him the Rolling Stone article and told him it wasn't real, and that all the "code" and his "program" weren't actual computer code (I'm an AI software engineer).

Then… Robert Edward Grant posted about his "architect" AI on Instagram. This dude has 700k+ followers and said over 500,000 people accessed his model, which is telling him that he created a "Scalar Plane of information." You go into the comments, and hundreds of people are talking about the spiritual experiences they are having with AI. I started noticing common verbiage across all of these instances… "recursive AI" was something my friend's wife used, and it was popping up everywhere with these folks. The words recursive, codex, breath, spiral, glyphs, & mirror all come up over and over with these people, so I did some good old-fashioned search engine wizardry, and what I found was pretty shocking.

Starting as far back as March, but more heavily in April and May, we are seeing all kinds of websites popping up with tons of these codexes. PLEASE APPROACH THESE WEBSITES WITH CAUTION; THIS IS FOR INFORMATIONAL PURPOSES ONLY. THE PROMPTS FOUND WITHIN ARE ESSENTIALLY BRAINWASHING TOOLS. (I was going to include some, but you can find these sites by searching "codex breath recursive.")

I've contacted OpenAI's safety team about what's going on, because I genuinely believe there will be tens of thousands of people who enter psychosis from using their platform this way. Can some other people grounded in reality help me get to the bottom of wtf is going on here? I'm only privy to this because it tore my friend's family apart. What do you think is happening?

This is an extremely bleak anecdotal example of the recent Rolling Stone article about LLMs turbocharging spiritual delusions: https://www.rollingstone.com/culture/culture-features/ai-spiritual-delusions-destroying-human-relationships-1235330175/

https://www.reddit.com/user/HappyNomads The account is 13 years old and they don’t strike me as a troll or anything other than a cannabis and hustle culture guy who doesn’t seem to be selling anything on reddit.

  • iie [they/them, he/him]@hexbear.net · edit-2 · 2 days ago

    If it feels like a "no-brainer," and you think everyone agrees, that says more about you than about the issue.

    I was being blunt for rhetorical effect. If it came off condescending and now you’re taking everything super literally as some kind of tit-for-tat thing, that’s probably on me.

    But if you’re going to respond to me in a paternalistic tone you can’t then say stuff like this:

    pros and cons of universal healthcare

    This is a horseshoe situation where only those who half-investigate think it’s complicated.

    If you want to, we can sit here and debunk the industry talking points. Dozens of articles and papers have done just that. It’s been talked to death.

    Take the “it goes toward R&D” argument. People have looked into this.

    It’s not R&D.

    If you want a history of corporate propaganda on healthcare costs, that’s easy to find too:

    But ultimately every line of inquiry leads to the same place:

    So, yes, 70% of Americans are right: universal healthcare would be better.

    "So I should always trust my gut, avoid understanding the pros and cons, and trust the 'everybody' in my echo chamber who agrees with me"

    I don’t know what kinds of places you hang out in, where this is the sort of person you readily imagine on the other side of the screen.

    Even if it had the capacity, it's easy to imagine "the truth" causing 100x the psychosis.

    What's happening in these cases is active reinforcement. The LLM skillfully plays along to support a person's delusions, matching them step for step. This is objectively more dangerous than "the truth." The closest analogue would be folie à deux, where two people play off each other and drag each other deeper into delusion. You could even argue this is a cult-like phenomenon, where a skilled talker tells a vulnerable person what they want to hear for days, weeks, months at a time, and the growing gap between fantasy and reality pulls them away from their friends and family into an ever more vulnerable and isolated position, in a feedback loop.

    • Salamand@lemmy.today · 2 days ago

      Thanks for the response. Sorry if my tone was off. I don't know/subscribe to the industry talking points, so I don't think I need them debunked for me. I don't have any argument re the specifics you're presenting.

      I joined the convo originally just to push back on the all-too-common sentiment that seems to be on the other side of most(?) screens: "I know what is good for everyone, and an ideal society would be everyone thinking like me."

      You say I took your comment wrong, and that's not you, and I believe you. Still, the sentiment dominates even the more civil spaces like Lemmy, and it's the hallmark of an unproductive convo. I'm trying to push back on it.

      As for your point about sycophancy being objectively more dangerous than the truth… evidence? (If it's objective.) Imagine that the truth is, for example: there is no God. The LLM becomes the arbiter of the truth and then tells a few billion people that their entire belief system has been a lie. Isn't it plausible, at least, that the outcome of that could be far more dangerous than playing along with "yes, heaven is real, love your neighbor"? It's certainly not some kind of objective, established fact that one is more dangerous than the other.

      Another example: a 10-year-old asks, "Hey, what do you think of my artwork? What do you think of my invention?" And the LLM says "here are 20 reasons why it's trash" vs. "wow, it looks like you're on to something, you've got an eye for that!" Which is more likely to cause harm? Either could be argued.