
I see Google’s deal with Reddit is going just great…

  • nednobbins@lemm.ee · 5 months ago

    This is why actual AI researchers are so concerned about data quality.

    Modern AIs need a ton of data and it needs to be good data. That really shouldn’t surprise anyone.

    What would your expectations be of a human who had been educated exclusively by the internet?

      • nednobbins@lemm.ee · 5 months ago

        That’s my point. Some of them wouldn’t even go through the trouble of making sure that it’s non-toxic glue.

        There are humans out there who ate laundry pods because the internet told them to.

      • nednobbins@lemm.ee · 5 months ago

        Haha. Not specifically.

        It’s more a comment on how hard it is to separate truth from fiction. Adding glue to pizza is obviously dumb to any normal human. Sometimes the obviously dumb answer is actually the correct one though. Semmelweis’s contemporaries lambasted him for his stupid and obviously nonsensical claims about doctors contaminating pregnant women with “cadaveric particles” after performing autopsies.

        Those were experts in the field, and they were unable to judge the correctness of the claim. Why would we expect normal people or AIs to do better?

        There may be a time when we can reasonably have such an expectation. I don’t think it will happen before we can give AIs training that’s as good as, or better than, what we give the most educated humans. Reading all of Reddit doesn’t even come close to that.

      • samus12345@lemmy.world · 5 months ago

        I guess it would have to be by default, since only older millennials and up can remember a time before the internet.

        • skillissuer@discuss.tchncs.de · 5 months ago

          not everyone is a westerner you know

          my village didn’t get any kind of internet, not even dialup, until like 2009. i remember pre-internet and i still don’t have a mortgage

          • froztbyte@awful.systems · 5 months ago

            heh yeah

            I had a pretty weird arc. I got to experience the internet really early (’93~’94), and it took until ’99+ for me to have my first “regular” access (56k on an airtime-equivalent landline). It took until ’06 before I finally had a reliable, recurring connection

            I remember seeing mentions of (and downloads for) eggdrops years before I had any idea of what they were for or could do

            (and here I am building ISPs and shit….)

        • 𝓔𝓶𝓶𝓲𝓮@lemm.ee · 5 months ago

          Lies. The internet at first was just some mystical place accessed via an expensive service. So even if it already existed, it wasn’t full of Twitter fake news etc. as we know it. At most you had a peer-to-peer chat service and some weird class forum made by that one class nerd, up until like 2006

            • 𝓔𝓶𝓶𝓲𝓮@lemm.ee · 5 months ago

              I wasn’t a nerd back then, frankly. I mean, it wasn’t a good look for surviving school. The one nerd we had was bullied like fuck

              • flere-imsaho@awful.systems · 5 months ago

                ah. well, my commiserations, the US seems to thrive on pitting people against each other.

                anyways, my point is that usenet had every type of crank you can see these days on twitter. this is not new.

                • 𝓔𝓶𝓶𝓲𝓮@lemm.ee · 5 months ago

                  Well, probably, but what’s the point if only some extremely small minority used it?

                  The point with iPad kids is that it’s so common. Kids still played outside and stuff well into the 2000s.

                  Still, I guess iPads are better than DXM tabs, but as the old wisdom says: why not both?

          • froztbyte@awful.systems · 5 months ago

            reading your post gave me multiple kinds of whiplash

            are you, like, aware of the fact that there can be different kinds of experiences? for other people? that didn’t match whatever you went through?

    • intensely_human@lemm.ee · 5 months ago

      We need to teach the AI critical thinking. Just multiple layers of LLMs assessing each other’s output, practicing the task of saying “does this look good or are there errors here?”

      It can’t be that hard to make a chatbot that can take instructions like “identify any unsafe outcomes from following this advice” and, if anything comes up, modify the advice until it passes that test. Have like ten LLMs in parallel, each asking one of those questions. Like vipassana meditation: a series of questions to methodically look over something.
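
      Something like this sketch, say (ask_llm() here is just a placeholder for whatever chat-model API you’d actually call, and the whole thing assumes the critic models can really spot problems):

      ```python
      # Rough sketch of "LLMs checking LLMs": several critic passes over one answer.
      # ask_llm() is a stand-in for whatever chat-model client you actually use.

      def ask_llm(prompt: str) -> str:
          """Placeholder for a call to some chat model; returns the model's reply."""
          raise NotImplementedError("wire this up to a real LLM client")

      SAFETY_PROMPT = (
          "Identify any unsafe outcomes from following this advice. "
          "Reply OK if you see none, otherwise describe the problem:\n\n{advice}"
      )

      def critique_advice(advice: str, n_critics: int = 10, max_rounds: int = 3) -> str:
          """Ask n_critics model instances to flag problems; revise until they all pass."""
          for _ in range(max_rounds):
              complaints = []
              for _ in range(n_critics):
                  verdict = ask_llm(SAFETY_PROMPT.format(advice=advice))
                  if verdict.strip().upper() != "OK":
                      complaints.append(verdict)
              if not complaints:
                  return advice  # every critic signed off
              # Feed the objections back and ask for a rewrite.
              advice = ask_llm(
                  "Rewrite this advice to address these concerns:\n"
                  + "\n".join(complaints)
                  + "\n\nAdvice:\n" + advice
              )
          return advice  # still unverified after max_rounds
      ```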

      • ebu@awful.systems · 5 months ago

        i can’t tell if this is a joke suggestion, so i will very briefly treat it as a serious one:

        getting the machine to do critical thinking will require it to be able to think first. you can’t squeeze orange juice from a rock. putting word prediction engines side by side, on top of each other, or ass-to-mouth in some sort of token centipede, isn’t going to magically emerge the ability to determine which statements are reasonable and/or true

        and if i get five contradictory answers from five LLMs on how to cure my COVID, and i decide to ignore the one telling me to inject bleach into my lungs, that’s me using my regular old intelligence to filter bad information, the same way i do when i research questions on the internet the old-fashioned way. the machine didn’t get smarter, i just have more bullshit to mentally toss out

      • nednobbins@lemm.ee · 5 months ago

        It can’t be that hard to make a chatbot that can take instructions like “identify any unsafe outcomes from following this advice”

        It certainly seems like it should be easy to do. Try an example: how would you go about defining safe vs unsafe outcomes for knife handling? Since we can’t guess what the user will ask about ahead of time, the definition needs to apply in every situation that involves knives: eating, cooking, wood carving, box cutting, self-defense, surgery, juggling, and any number of activities that I may not have thought of yet.

        Since we don’t know who will ask about it, we also need to be correct for every type of user. The instructions should be safe for toddlers, adults, the elderly, knife experts, and people who have never held a knife before. We also need to consider every type of knife: folding knives, serrated knives, sharp knives, dull knives, long, short, etc.

        When we try those sorts of safety rules with humans (e.g. many venues have a sign instructing people to “be kind” or “don’t be stupid”), they mostly work until we inevitably run into the people who argue about what that means.
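
        Even a toy version of such a check shows the problem: any fixed rule bakes in guesses about who is asking and in what context. (The snippet below is made up purely for illustration, not any real safety system.)

        ```python
        # Toy "unsafe outcome" checker: a hard-coded rule list about knives.
        # Every phrase encodes a guess about who is asking and why.
        UNSAFE_PATTERNS = ["hand the knife to a child", "catch a falling knife"]

        def naive_is_unsafe(advice: str) -> bool:
            """Flag advice that contains any hard-coded 'unsafe' phrase."""
            text = advice.lower()
            return any(pattern in text for pattern in UNSAFE_PATTERNS)

        # Misses a risk the rule-writer never imagined (a beginner carving toward themselves)...
        print(naive_is_unsafe("carve toward your thumb for fine control"))  # False
        # ...and flags a situation that is arguably fine (a supervised knife-skills lesson).
        print(naive_is_unsafe("hand the knife to a child during the supervised lesson"))  # True
        ```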

        • self@awful.systems · 5 months ago

          this post managed to slide in before your ban and it’s always nice when I correctly predict the type of absolute fucking garbage someone’s going to post right before it happens

          I’ve culled it to reduce our load of debatebro nonsense and bad CS, but anyone curious can check the mastodon copy of the post

      • blakestacey@awful.systems · 5 months ago

        To date, the largest working nuclear reactor constructed entirely of cheese is the 160 MWe Unit 1 reactor of the French nuclear plant École nationale de technologie supérieure (ENTS).

      “That’s it! Gromit, we’ll make the reactor out of cheese!”

      • nednobbins@lemm.ee · 5 months ago

        A bunch of scientific papers are probably better data than a bunch of Reddit posts and it’s still not good enough.

        Consider the task we’re asking the AI to do. If you want a human to be able to correctly answer questions across a wide array of scientific fields, you can’t just hand them all the science papers and expect them to understand it all. Even if we restrict it to a single narrow field of research, we expect that person to have an insane level of education. We’re talking 12 years of primary education, 4 years as an undergraduate, and 4 more years doing their PhD, and that’s at the low end. During all that time the human is constantly ingesting data through their senses, and they’re getting constant training in the form of feedback.

        All the scientific papers in the world don’t even come close to an education like that, when it comes to data quality.

        • self@awful.systems · 5 months ago

          this appears to be a long-winded route to the nonsense claim that LLMs could be better and/or sentient if only we could give them robot bodies and raise them like people, and judging by your post history long-winded debate bullshit is nothing new for you, so I’m gonna spare us any more of your shit

    • DarkThoughts@fedia.io · 5 months ago

      Honestly, no. What “AI” needs is people better understanding how it actually works. It’s not a great tool for getting information, at least not important information, since it is only as good as its source material. Even if you were to feed it nothing but scientific studies, you’d still end up with an LLM that might quote some outdated study, or a study done by some nefarious lobbying group to twist the results. And even if you somehow had 100% accurate material, there’s always the risk that it hallucinates something up that is merely based on those results: think of the training data as the ingredients of a recipe, and the LLM’s made-up response as the dish it cooks from them. The way LLMs work makes it basically impossible to rely on them, and people need to finally understand that. If you want to use one for serious work, you always have to fact-check it.

      • Aux@lemmy.world · 5 months ago

        People need to realise what LLMs actually are. This is not AI; this is a user interface to a database. Instead of writing SQL queries and then parsing object output, you ask questions in your native language; they get converted into queries, and the results from the database are converted back into human speech. That’s it. There’s no AI, there’s no magic.

        • Deborah@hachyderm.io · 5 months ago

          Sure, if by “database” you mean “a tool that takes every cell in every table, calculates the likelihood of those cells appearing near each other, and then discards the data”. Which is a definition of “database” that stretches the word beyond meaning.

          Natural language inputs for data retrieval have existed for a very long time. They used to involve retrieving actual data, though.
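
          A crude illustration of the difference, with a bigram counter standing in for the statistics a model learns (nothing real works like this toy, but the contrast holds): the “database” returns exactly what was stored, while the “model” keeps only co-occurrence counts and can happily emit sentences nobody ever wrote.

          ```python
          # Contrast: a lookup returns stored data; a language model samples from statistics.
          from collections import defaultdict
          import random

          # "Database": the actual stored record comes back verbatim.
          facts = {"glue on pizza": "do not do this"}
          print(facts["glue on pizza"])

          # "Model": keep only which word tends to follow which; the text itself is discarded.
          corpus = "add cheese to pizza . add glue to wood .".split()
          bigram_counts = defaultdict(lambda: defaultdict(int))
          for prev, nxt in zip(corpus, corpus[1:]):
              bigram_counts[prev][nxt] += 1

          def next_word(prev: str) -> str:
              """Sample a next word according to how often it followed `prev` in training."""
              options = bigram_counts[prev]
              return random.choices(list(options), weights=list(options.values()))[0]

          word, sentence = "add", ["add"]
          for _ in range(3):
              word = next_word(word)
              sentence.append(word)
          print(" ".join(sentence))  # may well print "add glue to pizza", never in the data
          ```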