Bitch if I wanted the robot, I’d ask it myself (well, I’d ask the Chinese one)! I’m asking you!

  • jsomae@lemmy.ml · 2 days ago

    If you tell people that ChatGPT doesn’t know anything, they will just conclude you’re obviously wrong the moment it gives them an apparently correct answer. Tell people the truth instead – the harm in ChatGPT is that it is usually subtly wrong in some way, and often entirely wrong, but it always looks plausibly right.

    • Yeah, that’s definitely one of the worst aspects of AI: how confidently incorrect it can be. I ran into this using DeepSeek and had to turn on the mode where you can see what it’s thinking. Often it will say something like:

      “I can’t analyze this properly, so let’s assume this…” and then confidently spit out an answer based on that assumption. At this point I feel like AI is good for 100-level CS students who don’t want to do their homework, and that’s about it.

      • jsomae@lemmy.ml · 1 day ago

        Same, I just tried DeepSeek-R1 on a question I invented as an AI benchmark. (No AI has answered this simple question even remotely correctly, though obviously I won’t reveal the question here.) Anyway, R1 kept making wrong assumptions, but it also kept second-guessing itself.

        I actually do think the “reasoning” approach has potential, though. If an LLM only comes up with the right answer half the time, then “reasoning” gives it multiple attempts at a right answer (see the sketch below). Still, the results so far are unimpressive.
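
        A rough back-of-the-envelope sketch of that point, assuming each attempt is an independent coin flip with a fixed per-attempt success rate p (a simplification: real LLM samples aren’t independent, and the numbers here are illustrative, not measured):

        ```python
        # Hypothetical model: each attempt is right with probability p,
        # independently. Then P(at least one correct in k attempts) = 1 - (1 - p)^k.
        p = 0.5  # assumed per-attempt success rate, per the comment above

        for k in (1, 2, 4, 8):
            print(f"{k} attempts -> P(at least one right) = {1 - (1 - p) ** k:.3f}")
        ```

        Of course, the extra attempts only pay off if something – a verifier, a human, or the “reasoning” loop itself – can recognize the right answer among them.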