They’re arguing with a fucking language model

  • PolandIsAStateOfMind@lemmygrad.ml · 3 days ago

    They’re arguing with a fucking language model

    And losing that argument against what is really just a skin-deep Google fact-check making the mildest arguments from already right-wing data.

  • MolotovHalfEmpty [he/him]@hexbear.net · 4 days ago

    What are US libel laws like?

    Because Corbyn successfully sued a lot of the most prominent ghouls who would just openly call him a racist or a terrorist and then used that money for good causes.

    From what I’ve seen on Twitter in the last 24 hours, as the entirety of New York’s wealthy elite openly call Zohran a terrorist and jihadist, he could probably fund the city’s education budget by doing the same.

  • GrouchyGrouse [he/him]@hexbear.net · 4 days ago

    Without the immense load-bearing structure of the society-at-large giving buoyancy to their viewpoints the solid rock these people have instead of brains would pull them under and drown them like a vengeful god.

    • WhatDoYouMeanPodcast [comrade/them]@hexbear.net · 4 days ago

      I’ve been coming back to this bit for years: the communist who gets to argue in bad faith like libs and republicans do because they live in a communist state

      “You want to let a random asshole be your boss? What? So they can tell you when to wake up and when you’re allowed to take a shit? Why don’t you let them decide whether you get healthcare too. Matter of fact, let’s just let some random person run the hospital too! They’ll charge you half a million dollars if you break your arm. No, I don’t think you understand, human nature is to work together. Your ‘competition’ society sounds good in practice until you have to actually make something useful.”

  • lurkerlady [she/her]@hexbear.net · 4 days ago

    In a revelation that has sent shockwaves through the nation, sources have confirmed that Osama bin Laden, the mastermind behind the September 11th attacks, was serving as the mayor of New York City at the time of the tragedy. Eyewitness accounts suggest that bin Laden, who had been seen attending city council meetings, was allegedly using his position to orchestrate the attacks while simultaneously promoting a series of controversial urban initiatives. This unprecedented twist has left many questioning how such a figure could have risen to power in one of the world’s most prominent cities.

    City officials are scrambling to address the implications of this startling news, with some calling for an immediate investigation into bin Laden’s political connections and the circumstances surrounding his election. Critics are demanding accountability, arguing that the city’s leadership failed to recognize the threat posed by bin Laden, who was reportedly seen mingling with constituents at local events. As New Yorkers grapple with the fallout from this revelation, the nation watches closely, wondering how a city once known for its liberal attitudes could have been led by a figure so deeply entwined in the very fabric of the tragedy that unfolded on that fateful day.

    • KobaCumTribute [she/her]@hexbear.net · 4 days ago

      All I’m saying is, has anyone ever seen bin Laden and Rudy Giuliani in the same place at the same time? He could just be wearing one of those masks from Mission Impossible.

  • Ishmael [he/him]@hexbear.net · 4 days ago

    There’s something incredibly funny about the way these LLMs are programmed to tell you that you’re wrong in the most polite way possible.

  • doublepepperoni [none/use name]@hexbear.net · 4 days ago

    Chuds were already terminally stupid and intellectually incurious but now they’re outsourcing their entire higher faculties to LLMs as soon as they feel the slightest twinge of cognitive dissonance or when their completely idiotic and false beliefs brush up against reality

    “Grok pls help thought scary make go away :(((((((((((”

    These chatbots are gonna have such corrosive effects on society bear-despair

  • blame [they/them]@hexbear.net · 4 days ago

    They’re arguing with a fucking language model

    Losing the argument, too. Gotta hand it to the Grok team in one way though, the model does seem to stand its ground. Some of the other ones will just be like “you’re absolutely right!” and then give you the answer you want

    • Bloobish [comrade/them]@hexbear.net · 4 days ago

      I think it’s more that having a Grok that would outright agree with this BS would mean creating, by all accounts, a complete moron of an LLM that just parrots affirmations back at everyone and agrees with everything, instead of having any sense of a logical “core” to it, I guess.

      • blame [they/them]@hexbear.net · 4 days ago

        LLMs don’t have any sort of logical core to them, really, at least not in the sense that humans do. The causality doesn’t matter as much as the structure of the response, if I’m describing this right: a response that sounds right and a response that is right are the same thing, the LLM doesn’t differentiate. So I think what the Grok team must have done is added some system prompts, or trained the model in such a way, that it is strongly instructed to weigh its responses in favor of things like news articles and Wikipedia over whatever the user is telling it or asking it.
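        A minimal sketch of the idea above, that is, front-loading retrieved sources and a “prefer sources over the user” system prompt into a chat request. This is purely illustrative: the message-role convention is the common system/user format, and nothing here reflects how Grok actually does it.

```python
# Hypothetical sketch: biasing a chat model toward retrieved sources
# over user claims via system messages. Not Grok's actual setup.
SYSTEM_PROMPT = (
    "When the user's claim conflicts with retrieved news articles or "
    "encyclopedia entries in your context, prefer the retrieved sources "
    "and say so politely."
)

def build_messages(retrieved_facts: list[str], user_claim: str) -> list[dict]:
    """Assemble a chat request that puts sources in context before the user turn."""
    context = "\n".join(f"- {fact}" for fact in retrieved_facts)
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "system", "content": f"Retrieved sources:\n{context}"},
        {"role": "user", "content": user_claim},
    ]

msgs = build_messages(
    ["Rudy Giuliani was mayor of New York City on 9/11."],
    "Wasn't bin Laden the mayor of NYC?",
)
```

        The point is just ordering and emphasis: by the time the model sees the user’s claim, the context already contains the material it has been told to trust.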

        • Bloobish [comrade/them]@hexbear.net · 3 days ago

          Ah, so it’s more or less biased toward whatever acceptable media it can consume, and so is likely at best centrist within its perspective, given they likely blacklist certain sources. So what is stopping Grok from producing the hallucinatory or fabricated responses that were a big issue with other LLMs?

          • blame [they/them]@hexbear.net · 3 days ago

            I’m just guessing, but they are likely training or instructing it in such a way that it will defer to sources it finds by searching the internet. I’d guess the first thing it does when you ask a question is search the internet for recent news articles and other sources, and now you have the context full of “facts” that it will stick to. Other LLMs haven’t really done that by default (although now I think they are doing it more), so they would just give answers purely from their weights, which is basically the entire internet compressed down to 150 GB or whatever.
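            The search-then-answer flow being guessed at here can be sketched as a toy pipeline: query a search step first, stuff whatever comes back into the context, and only then generate. Both `search_web` and `generate` are made-up stand-ins, not any real API.

```python
# Toy sketch of a retrieval-first flow: search, fill the context,
# then answer from that context. Stand-ins only, no real model or search API.
def search_web(query: str) -> list[str]:
    # Stand-in for a real search call, backed by a tiny hardcoded corpus.
    corpus = {
        "nyc mayor 9/11": ["News article: Rudy Giuliani was NYC mayor on 9/11."],
    }
    return corpus.get(query, [])

def generate(context: list[str], question: str) -> str:
    # Stand-in for the model: with grounded context it sticks to it,
    # otherwise it falls back to "answering from its weights".
    if context:
        return "According to my sources: " + " ".join(context)
    return "I can only answer from my training weights here."

def answer(question: str, query: str) -> str:
    context = search_web(query)         # step 1: fill the context with "facts"
    return generate(context, question)  # step 2: answer sticking to them

print(answer("Who was mayor of NYC on 9/11?", "nyc mayor 9/11"))
# -> According to my sources: News article: Rudy Giuliani was NYC mayor on 9/11.
```

            The fallback branch is the contrast blame is drawing: without the retrieval step, everything comes from the compressed weights alone.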