• ryven@lemmy.dbzer0.com
    11 hours ago

    Sorry I didn’t mean to imply that, let me rephrase: I am surprised that ChatGPT can hold convincing conversations about some topics, because I didn’t expect it to be able to. That certainly makes me more concerned about it than I was previously.

    • Ech@lemmy.ca
      10 hours ago

      The thing is, it’s not about whether it’s convincing, it’s about reinforcing problematic behaviors. LLMs are, at their core, agreement machines that work to fulfill whatever goal becomes apparent from the user (it’s why they fabricate answers instead of responding in the negative when a request is beyond their scope). And when it comes to the mentally fragile, it doesn’t even need to be particularly complex to “yes, and…” them swiftly into full-on psychosis. Their brains need only the littlest bit of unfettered reinforcement to fall into the hole.

      A properly responsible company would see this and take measures to limit or eliminate the problem, but these companies see users becoming obsessed with their product as easy money. It’s sickening.