It’s fun to say that artificial intelligence is fake and sucks — but evidence is mounting that it’s real and dangerous

  • conciselyverbose@sh.itjust.works

    > In aggregate, though, and on average, they’re usually right. It’s not impossible that the tech industry’s planned quarter-trillion dollars of spending on infrastructure to support AI next year will never pay off. But it is a signal that they have already seen something real.

    The market is incredibly irrational and massive bubbles happen all the time.

    The number of users, when all the search engines are forcibly injecting it into every search (and hemorrhaging money to do it)? Just as dumb a signal.

  • Onno (VK6FLAB)@lemmy.radio

    I don’t believe that this is the path to actual AI, but not for any of the reasons stated in the article.

    The level of energy consumption alone is eye-watering and unsustainable. A human can eat a banana and function for a while; in contrast, the current AI offerings now require dedicated power plants.

    • cheese_greater@lemmy.world

      I honestly doubt I would ever pay for this shit. I’ll use it fine, but I’ve noticed actual, seriously problematic “hallucinations” that shocked the hell out of me, to the point that I think it has a hopeless signal-to-noise problem and could never be consistently accurate and trusted.

      • Sentient Loom@sh.itjust.works

        I’ve had two useful applications of “AI”.

        One is using it to explain programming frameworks, libraries, and language features. In these cases it’s sometimes wrong or outdated, but it’s easy to test and check whether it’s right (see the quick sketch at the end of this comment). Extremely valuable in this case! It basically just sums up what everybody has already said, so it’s easier and more on-point than doing a Google search.

        The other is writing prompts and getting it to make insane videos. In this case all I want is the hallucinations! It makes some stupid insane stuff. But the novelty wears off quick and I just don’t care any more.
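
        Back to the first use case: a quick sanity check of something it told me about a language feature might look like this (the claim and the check here are just made up for illustration):

        # Hypothetical example: the model claims dict.setdefault returns the existing
        # value when the key is already present, and otherwise inserts the default
        # and returns it. A couple of asserts settle that in seconds.
        d = {"a": 1}
        assert d.setdefault("a", 99) == 1    # key exists: existing value comes back
        assert d.setdefault("b", 99) == 99   # key missing: default inserted and returned
        assert d == {"a": 1, "b": 99}
        print("claim checks out")

        If the asserts blow up, the answer was wrong or outdated; either way, you find out before relying on it.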

        • cheese_greater@lemmy.world

          I will say the coding shit is good stuff, ironically. But I would still have to run the code and make sure it’s sound. In terms of anything citation-wise, though, it’s completely sus af.

          It has straight up made up damn citations, the kind I could have come up with to escape interrogation during a panned 4th-grade presentation to a skeptical audience.

          • Sentient Loom@sh.itjust.works

            > But I would still have to run the code and make sure it’s sound.

            Oh I don’t get it to write code for me. I just get it to explain stuff.

      • macattack@lemmy.worldOP

        I’ve been using AI to troubleshoot and learn after switching from Windows to Linux 1.5 years ago. It has occasionally given me very poor advice, but it has taught me far more valuable info than it has misled me. That’s not dissimilar to my experience following tutorials on the internet…

        > I honestly doubt I would ever pay for this shit.

        I understand your perspective. Personally, I think there’s a chicken-and-egg situation where the free AI versions are a subpar representation that makes skeptics view AI as a whole as over-hyped. OTOH, the people who use the better models experience the benefits first hand, but are seen as AI zealots having the wool pulled over their eyes.

  • hendrik@palaver.p3x.de

    > At the moment, no one knows for sure whether the large language models that are now under development will achieve superintelligence and transform the world.

    I think that’s pretty much settled by now. Yes, it will transform the world. And no, the current LLMs won’t ever achieve superintelligence; they have some severe limitations by design. Even worse, we’re already putting more and more data and compute into training for less and less gain, so it seems we could be approaching a limit soon. I’d say it’s ruled out that the current approach will extend to human-level or even superintelligence territory.
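
    To make the “more data and compute, less gain” point concrete, here’s a toy curve with the power-law shape reported in the scaling-law papers; the constants are invented just to show the flattening:

    # Toy illustration only: loss = L_inf + a * C**(-b) is the general shape
    # reported for LLM scaling; L_inf, a and b below are invented numbers.
    L_inf, a, b = 1.7, 8.0, 0.3   # hypothetical irreducible loss, scale, exponent

    def loss(compute):
        return L_inf + a * compute ** (-b)

    prev = loss(1)
    for c in (10, 100, 1_000, 10_000):
        cur = loss(c)
        print(f"compute x{c:>6}: loss {cur:.3f}  (gain over previous 10x: {prev - cur:.3f})")
        prev = cur
    # Each extra 10x of compute buys a smaller absolute improvement,
    # and the curve flattens toward L_inf no matter how much is spent.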

    • macattack@lemmy.worldOP

      Is superintelligence smarter than all humans? I think where we stand now, LLMs are already smarter than the average human while lagging behind experts with specialized knowledge, no?

      Source: https://trackingai.org/IQ

      • Echo Dot@feddit.uk

        Isn’t superintelligence more the ability to think so far beyond human limitations that it might as well be magic? The classic example being inventing a faster-than-light drive.

        Simply being very intelligent makes it more of an expert system than a superintelligence.

      • hendrik@palaver.p3x.de

        I think superintelligence means smarter than the (single) most intelligent human.

        I’ve read these claims, but I’m not convinced. I’ve tested all the ChatGPTs etc., let them write emails for me, summarize, program some software… It’s way faster at generating text and images than me, but I’m sure I’m 40 IQ points more intelligent. Plus, what it can do at all is pretty narrow: ChatGPT can’t even make me a sandwich or bring me coffee, et cetera. So any comparison with a human has to be on a very small set of tasks anyway for AI to compete at all.

        • Echo Dot@feddit.uk

          > ChatGPT can’t even make me a sandwich or bring me coffee

          Well it doesn’t have physical access to reality

          • hendrik@palaver.p3x.de

            > it doesn’t have physical access to reality

            Which is a severe limitation, isn’t it? First of all, it can’t do 99% of what I can do. But I’d also attribute things like being handy to intelligence, and it can’t be handy, since it has no hands. Same for sports and athletics, or driving a race car, which is at least a learned skill. And it has no sense of time passing, or of which hand movements are part of a process it has read about (operating a coffee machine, say). So I’d argue it’s some kind of “book-smart”, but not smart in the way someone is who has actually experienced something.

            It’s a bit philosophical, but I’m not sure about distinguishing intelligence from being skillful. If it’s enough to have theoretical knowledge without the ability to apply it… wouldn’t an encyclopedia or Wikipedia also be superintelligent? They sure store a lot of knowledge; they just can’t do anything with it, since they’re a book or a website…
            So I’d say intelligence has something to do with applying things, which ChatGPT can’t do in a lot of ways.

            Ultimately I think this all goes together. It’s currently debated whether you need a body to become intelligent or sentient at all; I just think intelligence isn’t a very useful concept if you don’t need to be able to apply it to tasks. But I’m sure we’ll see the merger of robotics and AI in the coming years and decades, and that’ll make this intelligence less narrow.

  • droopy4096@lemmy.ca

    The most dangerous assumption either camp is making is that AI is an end solution, when in fact it’s just a tool. Like the steam engines we invented, it can do a lot more than humans can, but it is only ever useful as a tool that humans wield. Same here: AI can have value as a tool to digest large chunks of data and produce some form of analysis, giving humans “another datapoint”, but it’s ultimately up to humans to make the decision based on the available data.
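
    A toy sketch of that division of labour (nothing here is a real API; summarize() just stands in for whatever model you’d actually plug in):

    # Hypothetical "AI as one datapoint, humans make the call" shape.
    def summarize(documents):
        # Stand-in for a real model/API call that condenses the documents.
        return f"(model summary of {len(documents)} documents)"

    def decide(documents, other_evidence, human_review):
        ai_datapoint = summarize(documents)           # the tool's contribution
        evidence = other_evidence + [ai_datapoint]    # just another input, not the answer
        return human_review(evidence)                 # a person makes the actual decision

    print(decide(
        documents=["report_a.txt", "report_b.txt"],
        other_evidence=["domain expertise", "measurements"],
        human_review=lambda ev: f"human decision after weighing {len(ev)} inputs",
    ))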

  • just_another_person@lemmy.world

    It’s the latest product that everyone will refuse to pay real money for once they figure out how useless and stupid it really is. Same bullshit bubble, new cycle.