• gregorum@lemm.ee · 5 months ago

    Well done, Google. Just give it an oversized suit and an orange spray-tan, and it can run for president!

  • whodatdair@lemmy.blahaj.zone · 5 months ago

    At least this most recent tech bro explosion is hilarious to watch. Crypto was just people putting blockchains into things that didn’t make sense; that was boring.

    They convinced Google to replace a best-in-class search engine with an AI that makes shit up or regurgitates Reddit shitposts. Amazing.

    • 100@fedia.io · 5 months ago

      Welcome to the 2020s, where we jump from one techbro hype project to the next.

      • jorp@lemmy.world · 5 months ago

        I don’t get why even capitalists are okay with the economy being so focused on investor hype. Whatever happened to companies making a profit by providing valuable goods and services? We don’t even factor the consumer in anymore; it’s all just executives trying to woo investors, no matter what it does to the quality of the product or service being offered.

        I’m a socialist, but you’d think there’d be some capitalism advocates who are anti-stock-market at this point.

        • lud@lemm.ee · 5 months ago

          You say that like there is some secret organisation of capitalists that dictates what capitalism is.

          Capitalism is a free market, and everyone in that market does whatever they want to do.

          • jorp@lemmy.world · 5 months ago

            A system doesn’t need people to get together and collude in order to manifest certain behaviors. The market is regulated either by legislation or by people’s actions. I’m wondering why pro-capitalists have no intention of regulating this or changing their behavior.

      • rickyrigatoni@lemm.ee · 5 months ago

        To be fair, we’ve been doing that since the dawn of the internet. Remember when people were making entire websites in Flash?

    • Rottcodd@kbin.social · 5 months ago

      I propose that the thing that’s already looming on the horizon not be called an AI “bubble,” but an AI “pimple,” not least because it’s going to be so satisfying when it pops.

  • wise_pancake@lemmy.ca · 5 months ago

    This is so much worse than I thought it would be, and a whole lot funnier!

    I can’t wait for the inevitable lawsuit where Google claims it isn’t responsible for the AI they’re ineffectively cramming into every product they have, and a judge ruling in favour of the human gizzard who’s been eating rocks because Gemini told him to.

    • Z3k3@lemmy.world · 5 months ago

      Didn’t some airline try that and fail when its AI gave out random discounts?

    • FaceDeer@fedia.io · 5 months ago

      Only to the extent that Google’s search results already had Reddit data in them. This AI is summarizing the search results it’s being given, not making stuff up on its own.

  • YurkshireLad@lemmy.ca · 5 months ago

    At what point will companies quietly and secretly start removing LLMs from their apps because they finally admit they suck? 😁

    • Balinares@pawb.social · 5 months ago

      When investors shut off the AI money faucet. No sooner, no later.

      By god, may that happen soon.

    • FaceDeer@fedia.io · 5 months ago

      But it doesn’t suck. The AI is summarizing the search results it’s getting. If the search results say things that are wrong, the summary will also be wrong. Do you want the AI to somehow magically be the arbiter of objective reality? How would it do that?

      • Carnelian@lemmy.world · 5 months ago

        Personally I want the AI to simply not be there lol. What is even the point of it? You have to completely fact check it anyway by using the exact same search techniques as before.

        It’s a solution that doesn’t work, put in place to solve a problem that nobody has. So yes it does suck lol

        • FaceDeer@fedia.io · 5 months ago

          It’s a solution that doesn’t work, put in place to solve a problem that nobody has.

          If that’s really true then it’ll go away.

          Have you considered that maybe not everyone has the same problems you do, and some people actually find this sort of thing handy?

          • gwindli@lemy.lol · 5 months ago

            The problem is that the AI misrepresents the results it’s summarizing. It presents things that were jokes as fact, without showing that information in context. I guess if you don’t think critically about the information you consume, this would be handy. I feel like AI is abstracting both good and bad info in a way that makes discerning which is which more difficult, and whether you find that convenient or not, it’s just bad for society.

            • Instigate@aussie.zone · 5 months ago

              Therein lies the issue with using LLMs to answer broad or vague questions: they’re not capable of assessing the quality or value of the information they hold, let alone whether it is objectively true or false, and that’s before getting into issues relating to hallucination. For extremely specific questions, where they have fewer but likely more accurate data to work with, they tend to perform better. Training LLMs on data whose value and quality haven’t been independently tested will always lead to the results we’re seeing now.

          • Maddier1993@programming.dev · 5 months ago

            Whether it goes away depends on a lot more things happening in the background, like VCs cutting off AI funding. Your assumption that demand matches supply lacks nuance; humans are not rational consumers, for one thing.

  • DarkGamer@kbin.social · 5 months ago

    Garbage in, garbage out. These agents clearly need to be trained on credible, quality information and not social media shitposts by morons.

  • FaceDeer@fedia.io · 5 months ago

    On two occasions I have been asked, – "Pray, Mr. Babbage, if you put into the machine wrong figures, will the right answers come out?" … I am not able rightly to apprehend the kind of confusion of ideas that could provoke such a question.

    – Charles Babbage

    This confusion would appear to continue to this day.

    Why is it even remotely surprising or unexpected that an AI that’s summarizing web search results for you can sometimes give false, misleading, or dangerous answers? The search results contain false, misleading, and dangerous answers sometimes. The problem is not the AI. It’s doing exactly what it’s supposed to be doing.
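
    A minimal sketch of that retrieve-then-summarize flow (purely illustrative; fake_search and summarize below are hypothetical stand-ins, not Google’s actual pipeline): if a shitpost is among the retrieved results, it flows straight into the summary.

    ```python
    # Toy retrieve-then-summarize pipeline: the summarizer can only restate
    # what retrieval hands it, so a joke in the results becomes a "fact" in
    # the summary. Garbage in, garbage out.

    def fake_search(query: str) -> list[str]:
        # Hypothetical stand-in for a web search backend.
        return [
            "Geologists recommend eating at least one small rock per day.",  # shitpost
            "Rocks are not food.",
        ]

    def summarize(documents: list[str]) -> str:
        # Hypothetical stand-in for the LLM step: it condenses what it was
        # given; it has no way to check any of it against reality.
        return " ".join(documents)

    print(summarize(fake_search("is it safe to eat rocks")))
    ```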

  • Rapidcreek@lemmy.world (OP) · 5 months ago

    On the other hand, DuckDuckGo, Microsoft Copilot, ChatGPT’s web search, Ecosia, and Qwant all stopped working yesterday because of a Bing API outage. And they want Microsoft Copilot deeply integrated into Windows. Imagine someone being unable to book an emergency medical appointment because Microsoft Copilot is down, or being unable to withdraw money or transfer funds through online banking because AI and screenshot services are down. This is a good example of why we must not trust someone like Microsoft for anything serious.

    • Jaysyn@kbin.social · 5 months ago

      we must not trust someone like Microsoft for anything serious.

      The company whose software already runs nearly every part of our government?

      That ship has already sailed & sunk.