starting out[0] with “I was surprised by the following results” and it just goes further down almost-but-not-quite Getting It Avenue

close, but certainly no cigar

choice quotes:

Why is it impressive that a model trained on internet text full of random facts happens to have a lot of random facts memorized? … why does that in any way indicate intelligence or creativity?

That’s a good point.

you don’t fucking say

I have a website (TrackingAI.org) that already administers a political survey to AIs every day. So I could easily give the AIs a real intelligence test, and track that over time, too.

really, how?

As I started manually giving AIs IQ tests

oh.

Then it proceeds to mis-identify every single one of the 6 answer options, leading it to pick the wrong answer. There seems to be little rhyme or reason to its misidentifications

if this fuckwit had even the slightest fucking understanding of how these things work, the reason would be glaringly obvious

there’s plenty more, so remember to practice stretching before you start your eyerolls

  • jonhendry@awful.systems · 9 months ago

    “On my Substack I am doing non-ideological, data-driven reporting!”

    I don’t believe the son of pro-gun propagandist John Lott is capable of doing non-ideological reporting.

  • Amoeba_Girl@awful.systems · 9 months ago

    Another way of putting it: Out of 196 questions, ChatGPT-4 got about 5 more correct answers than a random guesser would (39 vs. 34.23).

    What are the odds of that?

    I’m too lazy to look through the tests he’s administering, but IQ tests like the WAIS have vocabulary questions, which yes you would expect an LLM to be better at than random chance.
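
    (if you want to put an actual number on “what are the odds”, here’s a minimal sketch; it assumes the quoted 34.23 expected-correct figure implies a uniform guess rate across all 196 questions, which may not match the actual per-question option counts)

    ```python
    # how surprising is 39/196 when chance alone predicts 34.23 correct?
    from scipy.stats import binom

    n = 196               # questions (quoted figure)
    expected = 34.23      # a random guesser's expected score (quoted figure)
    p = expected / n      # assumed uniform per-question guess rate
    observed = 39         # ChatGPT-4's score (quoted figure)

    # one-sided p-value: probability a pure guesser scores 39 or better
    print(binom.sf(observed - 1, n, p))   # roughly 0.21
    ```

    a pure guesser matches or beats 39 about one time in five, so those five extra correct answers sit comfortably inside the noise.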

    I’ve surely said it before, but when you see the sort of thinking on display by Mr Max Truth here, is it any wonder that rationalists are impressed with ChatGPT’s reasoning faculties.

    • Amoeba_Girl@awful.systems · 9 months ago

      I asked ChatGPT-4 if cars in roundabouts in Ireland go clockwise or counterclockwise. It got it wrong. When I told it that, it apologized and gave the right answer. But then I trickily called it out on its right answer, and it apologized again and reverted to the wrong answer. Fundamentally, it knows that the Irish drive on the left side of the road, but it doesn’t understand how to apply that to a roundabout to find the circular direction.

      lol you fucking idiot

      • self@awful.systems · 9 months ago

        this coin I’m flipping fundamentally knows everything about how the Irish drive, but it only seems to feel like giving me the right answer approximately half the time

        this reminds me of very early in my programming career, when I discovered that an NPC I programmed to randomly either move forward or turn left every 10 seconds was surprisingly good at solving simple labyrinths. I used to instantiate like 100 of them and see which ones would win (or “fight” by colliding with each other, or escape the labyrinth by stacking on top of other instances). you’re telling me now I was a handful of incredibly stupid blog posts away from being a renowned AI researcher?
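
        (a toy sketch of that NPC, heavily simplified; the maze layout, turn probability, and step budget here are all made up for illustration)

        ```python
        import random

        # made-up toy labyrinth: S is the spawn point, E the exit, # are walls
        MAZE = [
            "#######",
            "#S....#",
            "#.###.#",
            "#...#.#",
            "###.#.#",
            "#.....#",
            "#####E#",
        ]
        # headings as (row, col) deltas, ordered so +1 is a left turn:
        # east -> north -> west -> south
        HEADINGS = [(0, 1), (-1, 0), (0, -1), (1, 0)]

        def find(ch):
            for r, row in enumerate(MAZE):
                if ch in row:
                    return (r, row.index(ch))

        def walk(max_steps=5000):
            pos, goal, heading = find("S"), find("E"), 0
            for step in range(max_steps):
                if random.random() < 0.5:
                    heading = (heading + 1) % 4      # turn left
                else:
                    dr, dc = HEADINGS[heading]
                    nxt = (pos[0] + dr, pos[1] + dc)
                    if MAZE[nxt[0]][nxt[1]] != "#":  # walls just block the move
                        pos = nxt
                if pos == goal:
                    return step                      # escaped!
            return None                              # still wandering, poor thing

        escaped = sum(walk() is not None for _ in range(100))
        print(f"{escaped}/100 NPCs escaped the labyrinth")
        ```

        how many escape says more about the step budget than about intelligence, which is rather the point.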

        • swlabr@awful.systems · 9 months ago

          I used to instantiate like 100 of them and see which ones would win (or “fight” by colliding with each other

          The basilisk will not take kindly to your desecration of AGI for sport.

  • mozz@mbin.grits.dev · 9 months ago

    I miss the days when GPT would make an explicit point within a decent fraction of its answers that it was only a large language model, and not a general-purpose intelligence, because those are two very, very different (if very similar-seeming to initial human perception) things.

    It seems the inexorable tide of misperception, which that disclaimer was a futile attempt to forestall, has come in.

    • froztbyte@awful.systems (OP) · 9 months ago

      that probably got “strategically removed” for uhhhhhhh *checks notes* financial reasons

      (read: almost certainly some execs made the call to get that nuked, because it didn’t fit the narrative they’re trying to sell)

      • Sonori@beehaw.org · 9 months ago

        What, you’re telling me that a company run by the crypto shill behind worldcoin might be bending their technology to create the appearance of progress and inflate their own value? Say it ain’t so.

    • 200fifty@awful.systems · 9 months ago

      I feel like it was all over from the moment they made it talk in first person. No one had any illusions that Inferkit or NovelAI were general intelligences, because it was obvious that they were just language models autocompleting a sentence you typed in.

    • froztbyte@awful.systems (OP) · 9 months ago

      (idly: I didn’t immediately notice whether this is one of the quantified clueless, so not sure if it should be on sneerclub instead. but that domain set me wondering basically immediately)

  • Deborah@hachyderm.io · 9 months ago

    One of the saddest parts here is that there is almost an interesting research direction for people who are truly interested in machine intelligence. “the third cell should likely have a shape with 2 layers within a square” – if you are a person who insists on reading generative AI as “reasoning”, then that wrong answer is a jumping-off point into how humans see the image composition as dependent on shapes, and GPT reasons based on something more important to a computer, namely, layers.

    • Deborah@hachyderm.io · 9 months ago

      But nobody who’s really interested in machine intelligence thinks generative text constitutes reasoning, so instead you just have fuckwits giving IQ tests to their autocomplete engine and *not even seeing the thing that’s interesting.*

  • swlabr@awful.systems · 9 months ago

    On their Substack the author claims they are “doing non-ideological […] reporting”, which means they are definitely doing ideological reporting. Let’s see…

    The next most recent post, titled “the dawn of woke ai”, says pretty much what you’d guess from the title. It also features the AI-rendered POC nazis from a bit back as evidence of “woke”ness… Fun!