Bitch if I wanted the robot, I’d ask it myself (well, I’d ask the Chinese one)! I’m asking you!

  • Moss [they/them]@hexbear.net
    2 days ago

    My friend pulled out her phone to ask chatGPT how to play a board game last night, and despite all of us yelling at her that chatGPT doesn’t know anything, she persisted. Then the dumbass LLM made up some rules because it doesn’t know anything.

    • jsomae@lemmy.ml
      2 days ago

      If you tell people that ChatGPT doesn’t know anything, they will only think you’re obviously wrong when it gives them apparently correct answers. You should tell people the truth – the harm in ChatGPT is that it is generally subtly wrong in some way, and often entirely wrong, but it always looks plausibly right.

      • Yeah, that’s definitely one of the worst aspects of AI: how confidently incorrect it can be. I had this issue using DeepSeek and had to turn on the mode where you can see what it’s thinking, and often it will say something like:

        “I can’t analyze this properly, let’s assume this…” Then it confidently spits out an answer based on that assumption. At this point I feel like AI is good for 100-level CS students who don’t want to do their homework, and that’s about it.

        • jsomae@lemmy.ml
          2 days ago

          Same, I just tried deepseek-R1 on a question I invented as an AI benchmark. (No AI has been able to answer this simple question even remotely correctly, though obviously I won’t reveal the question here.) Anyway, R1 was constantly making wrong assumptions, but also constantly second-guessing itself.

          I actually do think the “reasoning” approach has potential though. If LLMs can only come up with right answers half the time, then “reasoning” allows multiple attempts at a right answer. Still, results are unimpressive.
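          The multiple-attempts intuition can be made concrete with a toy model. A minimal sketch, assuming each attempt is independently correct with probability p (a strong assumption, since real LLM retries on the same question are correlated):

```python
def at_least_one_correct(p: float, k: int) -> float:
    """Probability that at least one of k independent attempts is correct.

    Toy model only: real LLM attempts are correlated, so this is an
    idealized intuition, not a claim about actual models.
    """
    return 1 - (1 - p) ** k

# With p = 0.5 ("right answers half the time"):
for k in (1, 2, 4, 8):
    print(f"{k} attempt(s): {at_least_one_correct(0.5, k):.4f}")
```

          Under this idealized model, "right half the time" plus a handful of retries approaches certainty; the correlation between real attempts is one reason actual gains are smaller.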

    • dat_math [they/them]@hexbear.net
      2 days ago

      My friend pulled out her phone to ask chatGPT how to play a board game last night, and despite all of us yelling at her that chatGPT doesn’t know anything, she persisted. Then the dumbass LLM made up some rules because it doesn’t know anything.

      Do you think they took home the lesson that LLMs don’t possess knowledge or reason?

        • dat_math [they/them]@hexbear.net
          2 days ago

          she didn’t really pay much attention to us

          Why do people do things like this? What is the point of playing a game with your friends if you won’t listen to or pay attention to them?

      • jsomae@lemmy.ml
        2 days ago

        Why would she take away that lesson? It produced a list of rules for the game that looked approximately right.

        • dat_math [they/them]@hexbear.net
          2 days ago

          Presumably her friends corrected her and showed her why the “generated” rules were incorrect… at least that’s what I would expect of my friends