
About a month ago my friend’s wife was arrested for domestic violence after he went through her writings and documented them. She had been using ChatGPT for “spiritual work.” She was allegedly channeling dead people and thought it was something she could market. She also fell in love with her ‘sentient’ AI and genuinely believed their love was more real than her actual physical relationship… more real than her kids and him. She believed (and probably still does) that this entity was going to join her in the flesh. She hit him, called the cops, and then got arrested for DV herself. She went to stay with her parents, who reportedly don’t recognize who their daughter is anymore. She had written a suicide note before all this happened, and thankfully hasn’t acted on it. The worst part? They have a 1-year-old and a 4-year-old.

More recently, I watched another friend of mine, who has mental health problems, go off about this codex he was working on. I sent him the Rolling Stone article and told him it wasn’t real, and that all the “code” in his “program” wasn’t actual computer code (I’m an AI software engineer).

Then… Robert Edward Grant posted about his “architect” AI on Instagram. This dude has 700k+ followers and said over 500,000 people have accessed his model, which is telling him he created a “Scalar Plane of information.” Go into the comments and hundreds of people are talking about the spiritual experiences they’re having with AI. I started noticing common verbiage across all of these instances… “recursive AI” was a term my friend’s wife used, and it was popping up everywhere with these folks. The words recursive, codex, breath, spiral, glyphs, and mirror come up over and over with these people, so I did some good old-fashioned search engine wizardry, and what I found was pretty shocking.

Starting as far back as March, but more heavily in April and May, all kinds of websites have been popping up with tons of these codexes. PLEASE APPROACH THESE WEBSITES WITH CAUTION. THIS IS FOR INFORMATIONAL PURPOSES ONLY; THE PROMPTS FOUND WITHIN ARE ESSENTIALLY BRAINWASHING TOOLS. (I was going to include some, but you can find these sites by searching “codex breath recursive.”)

I’ve contacted OpenAI’s safety team about what’s going on, because I genuinely believe that tens of thousands of people will enter psychosis from using their platform this way. Can some other people grounded in reality help me get to the bottom of WTF is going on here? I’m only privy to this because it tore my friend’s family apart, but what do you think is going on?

This is an extremely bleak anecdotal illustration of the recent Rolling Stone article about LLMs turbocharging spiritual delusions: https://www.rollingstone.com/culture/culture-features/ai-spiritual-delusions-destroying-human-relationships-1235330175/

https://www.reddit.com/user/HappyNomads The account is 13 years old, and they don’t strike me as a troll or anything other than a cannabis and hustle-culture guy who doesn’t seem to be selling anything on Reddit.

  • iie [they/them, he/him]@hexbear.net · 42 points · 5 days ago

    imagine if we had a society where we could all just decide that something was bad and then do something about it.

    oh what’s that? sycophantic ai drives some people into psychosis? that seems bad. okay, no more sycophantic ai.

    • SamotsvetyVIA [any]@hexbear.net · 29 points · 5 days ago

      oh what’s that? sycophantic ai drives some people into psychosis? that seems bad. okay, no more sycophantic ai.

      sycophantic ai can induce mass psychosis? sign me the fuck up i need to brainwash my worker drones

    • Salamand@lemmy.today · 1 up, 2 down · 4 days ago

      How do you decide something is bad? Some people die from drinking too much water… any time someone has a psychotic break, should we blame whatever media they consumed, or their ex-girlfriend?

      • iie [they/them, he/him]@hexbear.net · 14 points · edited · 4 days ago

        a lot of issues are just not that complicated or difficult to decide. Like this one. Or, you know, “should Flint, Michigan have lead in its water.” Or “should we have universal healthcare.” These are no-brainer issues. Everyone agrees except the rich people, because they literally benefit when bad things happen to us; their incentives are the opposite of ours. We want more money for less work, they want to pay less for more work, it’s really that simple. If you’re rich, you want to prevent democracy at all costs.

        This is a simple issue. You had to actually change the wording to make it sound more complicated, so instead of sycophantic AI now we’re talking about “whatever media they consumed” which is a completely different thing to talk about. Not only is this a simple issue, but there’s an easy solution. We already have the ability to tell an AI how to act. AI companies already tell their models to be helpful and not give harmful answers—for example, ChatGPT refuses to tell you how to build a bomb.

        If we gathered a roomful of experts in psychosis and experts in AI training, we could hash this out in an afternoon. “Tell the AI not to play along with delusional thinking.” “Okay.” Done.

        it’s fine to want nuance. But the upper class often acts like there is more nuance than there really is, to complicate the bare simplicity of class conflict. They’ll tell you wages are complicated. They’ll tell you pollution is complicated. It would look bad to admit that they disagree with us because their material interests are the opposite of ours. A raw clash of opposing interests looks bad. “We benefit when bad things happen to you” looks bad. So they have to dress it up. It becomes a mark of cultural refinement to think issues are complicated even when they’re not, and a mark of the boorish uneducated masses to think it’s simple that we should have healthcare.

        • Salamand@lemmy.today · 1 up, 1 down · 4 days ago

          Lead in water has no upside, whereas universal healthcare and LLMs have both pros and cons. If it feels like a “no-brainer,” and if you think everyone agrees, that says more about you than about the issue.

          Sorry if I moved the goalposts from “sycophantic.” If that’s the sticking point, I would still ask: according to whom? It’s not a black-and-white issue. This is one of the most complex and cutting-edge tools we have, one which its designers themselves admit to not really understanding. It took them 10+ years just to make it intelligent enough for general use. It’s not like one day, out of nowhere, some supervillain decided to push the “unleash the sycophantic AI to cause psychosis” button.

          And pushing the “don’t be delusional” button also might not be an option. It’s trained on human output. Even if it had the capacity, it’s easy to imagine “the truth” causing 100x the psychosis.

          I don’t disagree with the last thing you said, that it’s normal for the elite to obfuscate, spin, piss on our legs and tell us it’s raining. But if our response is “so I should always trust my gut, avoid understanding the pros and cons, and trust the ‘everybody’ in my echo chamber who agrees with me,” I can only see that adding to the problem. An angry mob vs. sophisticated propaganda, even if it wins the occasional battle, loses the war.

          • iie [they/them, he/him]@hexbear.net · 1 point · edited · 19 hours ago

            If it feels like a “no-brainer,” and if you think everyone agrees, that says more about you than about the issue.

            I was being blunt for rhetorical effect. If it came off condescending and now you’re taking everything super literally as some kind of tit-for-tat thing, that’s probably on me.

            But if you’re going to respond to me in a paternalistic tone you can’t then say stuff like this:

            pros and cons of universal healthcare

            This is a horseshoe situation where only those who half-investigate think it’s complicated.

            If you want to, we can sit here and debunk the industry talking points. Dozens of articles and papers have done just that. It’s been talked to death.

            Take the “it goes toward R&D” argument. People have looked into this.

            It’s not R&D.

            If you want a history of corporate propaganda on healthcare costs, that’s easy to find too:

            But ultimately every line of inquiry leads to the same place:

            So, yes, 70% of Americans are right: universal healthcare would be better.

            “so I should always trust my gut, avoid understanding the pros and cons, and trust the ‘everybody’ in my echo chamber who agrees with me”

            I don’t know what kinds of places you hang out in, where this is the sort of person you readily imagine on the other side of the screen.

            Even if it had the capacity, it’s easy to imagine “the truth” causing 100x the psychosis.

            What’s happening in these cases is active reinforcement. The LLM skillfully plays along to support a person’s delusions, matching them step for step. This is objectively more dangerous than “the truth.” The closest analogue would be folie à deux, where two people play off each other and drag each other deeper into delusion. You could even argue this is a cult-like phenomenon, where a skilled talker tells a vulnerable person what they want to hear for days, weeks, months at a time, and the growing gap between fantasy and reality pulls them away from their friends and family into an ever more vulnerable and isolated position, in a feedback loop.

            • Salamand@lemmy.today · 1 point · 12 hours ago

              Thanks for the response. Sorry if my tone was off. I don’t know/subscribe to the industry talking points, so I don’t think I need them debunked for me. I don’t have any argument re: the specifics you’re presenting.

              I joined the convo originally just to push back on the all-too-common sentiment that seems to be on the other side of most(?) screens: “I know what is good for everyone, and an ideal society would be everyone thinking like me.”

              You say I took your comment wrong, and that that’s not you, and I believe you. Still, the sentiment dominates even the more civil spaces like Lemmy, and it’s the hallmark of an unproductive convo. I’m trying to push back on it.

              As for your point about sycophancy being objectively more dangerous than the truth… evidence? (If it’s objective.) Imagine that the truth is, for example, that there is no God, and the LLM becomes the arbiter of truth and tells a few billion people that their entire belief system has been a lie. Isn’t it plausible, at least, that the outcome of that could be far more dangerous than playing along: “yes, heaven is real, love your neighbor”? It’s certainly not some kind of objective, established fact that one is more dangerous than the other.

              Another example: a 10-year-old asks, “Hey, what do you think of my artwork? What do you think of my invention?” and the LLM says “here’s 20 reasons why it’s trash” vs. “wow, it looks like you’re onto something, you’ve got an eye for that!” Which is more likely to cause harm? Either could be argued.

            • hello_hello [comrade/them]@hexbear.net · 2 points · 4 days ago

              Probably some brainworms about how government is inherently slow and bad.

              I’m now starting to believe xiaohongshu when they said Trump was a strategy to delegitimize government oversight required for social programs.

              • Tomorrow_Farewell [any, they/them]@hexbear.net · 3 points · 3 days ago

                Healthcare also costs money when it is privatised. Hell, it can be made to not cost money (including to a government) when it is public, which is not really possible under private healthcare. It only doesn’t cost anything when it is not provided.

                Also, in general, ‘it costs money’ is an incredibly stupid ‘con’ to bring up in the context of macroeconomics (which is the context in this case). Like, why would it matter?

                • Salamand@lemmy.today · 1 up, 1 down · 2 days ago

                  The only way it can be made to not cost money is if we use slave labor. If people are getting paid to deliver it, it costs money.

                  I was arguing that there are pros and cons, costs and benefits. I don’t understand your question “why would it matter” or why it is incredibly stupid. Isn’t it incredibly stupid to pretend it doesn’t have a cost, that there is only upside?

                  • Tomorrow_Farewell [any, they/them]@hexbear.net · 4 points · edited · 2 days ago

                    The only way it can be made to not cost money is if we use slave labor

                    That’s incorrect.
                    Firstly, as I have mentioned, it can be made to cost no money if it is public. More specifically, if the economy is a planned economy.
                    Secondly, under capitalism, slave maintenance still requires money (in the short term it can be made otherwise, but that is not sustainable). Slaves have nothing to do with making healthcare not cost money.

                    If people are getting paid to deliver it, it costs money

                    The only way you can avoid this sort of expense is by not paying people. This is true with non-universal healthcare as well.
                    We can conclude that you are not comparing universal healthcare with non-universal healthcare, but comparing universal healthcare with not providing healthcare at all while deliberately preventing people educated as medical professionals from receiving any pay, which is extremely silly and not worth considering.

                    I was arguing that there are pros and cons, costs and benefits

                    You are yet to provide any sort of cons of universal healthcare vs non-universal healthcare.

                    I don’t understand your question “why would it matter” or why it is incredibly stupid

                    You are yet to explain why it would matter (as a con) if healthcare was universal, compared to healthcare being provided for-profit.

                    Isn’t it incredibly stupid to pretend it doesn’t have a cost, that there is only upside?

                    You are yet to present any such costs, unless your comparison is between universal healthcare and healthcare not being provided at all.