• sketelon@eviltoast.org · +57/−2 · 7 days ago

    Really? The guy behind the company called “Open” AI, which has contributed the least to the open source AI communities while constantly making grand claims and telling us we’re not ready to see what he’s got. We’re supposed to stop taking that guy’s word?

    Wow, thanks journalists, what would we do without you.

    • 5dh@lemmy.zip · +16 · 6 days ago

      Should your disappointment here really be pointed at the journalists?

      • Jtotheb@lemmy.world · +18/−1 · 6 days ago

        Which group of people uncritically magnified his voice and others like it for years? Tech journalism builds the legacies of people like Musk, Bankman-Fried and Altman.

    • MouseKeyboard@ttrpg.network · +4 · 5 days ago

      People talk a lot about the genericisation of brand names, but the branding of generic terms like this really annoys me.

      I’ll use the example I first noticed. A few years ago, the Conservative government was under criticism for the minimum wage being well under a living wage. In response, they brought in the National Living Wage, which was an increase to the minimum wage, but still under the actual living wage. However, because of the branding, it makes criticising it for not meeting the actual living wage more difficult, as you have to explain the difference between the two, and as the saying goes, “if you’re explaining, you’re losing”.

  • sartalon@lemmy.world · +80/−4 · 7 days ago

    When that major drama unfolded with him getting booted and then re-hired, it was super fucking obvious that it was all about the money, the data, and the salesmanship. He is nothing but a fucking tech-bro. Part Theranos, part Musk, part SBF, part (whatever that pharma asshat was), and all fucking douchebag.

    AI is fucking snake oil and an excuse to scrape every bit of data like it’s collecting every skin cell dropping off of you.

    • Rogers@lemmy.ml · +31/−4 · edited · 7 days ago

      I’d agree with the first part, but to say all AI is snake oil is just untrue and out of touch. There are a lot of companies that slap “AI” on literally anything, and I can see how that is snake oil.

      But real, innovative AI, everything from protein folding to robotics, is here to stay, good or bad. It’s already too valuable for governments to ignore. And AI is improving at a rate that I think most are underestimating (faster than Moore’s law).

      • kaffiene@lemmy.world · +6 · 6 days ago

        I think part of the difficulty with these discussions is that people mean all sorts of different things by “AI”. Much of the current usage is that AI = LLMs, which changes the debate quite a lot.

        • Rogers@lemmy.ml · +2/−1 · 6 days ago

          No doubt LLMs are not the end-all be-all. That said, especially after seeing what the next-gen “thinking” models like o1 from ClosedAI OpenAI can do, even LLMs are going to get absurdly good. And they are getting faster and cheaper at a rate beyond my best optimistic guess of two years ago; hell, even six months ago.

          Even if all progress stopped tomorrow on the software side the benefits from purpose built silicon for them would make them even cheaper and faster. And that purpose built hardware is coming very soon.

          Open models are about 4-6 months behind in quality, but probably a lot closer (if not ahead) for small ~7B models that can be run locally on low- to mid-range consumer hardware.

          • kaffiene@lemmy.world · +5 · 6 days ago

            I don’t doubt they’ll get faster. What I wonder is whether they’ll ever stop being so inaccurate. I feel like that’s a structural feature of the model.

            • keegomatic@lemmy.world · +2/−1 · 6 days ago

              May I ask how you’ve used LLMs so far? Because I hear that type of complaint from a lot of people who have tried to use them mainly to get answers to things, or maybe more broadly to replace their search engine, which is not what they’re best suited for, in my opinion.

                • keegomatic@lemmy.world · +1 · 5 days ago

                  Personally, I’ve found that LLMs are best as discussion partners, to put it in the broadest terms possible. They do well for things you would use a human discussion partner for IRL.

                  • “I’ve written this thing. Criticize it as if you were the recipient/judge of that thing. How could it be improved?” (Then address its criticisms in your thing… it’s surprisingly good at revealing ways to make your “thing” better, in my experience)
                  • “I have this personal problem.” (Tell it to keep responses short. Have a natural conversation with it. This is best done spoken out loud if you are using ChatGPT; prevents you from overthinking responses, and forces you to keep the conversation moving. Takes fifteen minutes or more but you will end up with some good advice related to your situation nearly every time. I’ve used this to work out several things internally much better than just thinking on my own. A therapist would be better, but this is surprisingly good.)
                  • I’ve also had it be useful for various reasons to tell it to play a character as I describe, and then speak to the character in a pretend scenario to work out something related. Use your imagination for how this might be helpful to you. In this case, tell it to not ask you so many questions, and to only ask questions when the character would truly want to ask a question. Helps keep it more normal; otherwise (in the case of ChatGPT which I’m most familiar with) it will always end every response with a question. Often that’s useful, like in the previous example, but in this case it is not.
                  • etc.

                  For anything but criticism of something written, I find that the “spoken conversation” features are most useful. I use it a lot in the car during my commute.

                  For what it’s worth, in case this makes it sound like I’m a writer and my examples are only writing-related, I’m actually not a writer. I’m a software engineer. The first example can apply to writing an application or a proposal or whatever. Second is basically just therapy. Third is more abstract, and often about indirect self-improvement. There are plenty more things that are good for discussion partners, though. I’m sure anyone reading can come up with a few themselves.

    • stringere@sh.itjust.works · +21 · 7 days ago

      Martin Shkreli is the scumbag’s name you’re looking for.

      From Wikipedia: he was convicted of financial crimes, sentenced to seven years in federal prison, released on parole after roughly six and a half years in 2022, and fined over 70 million dollars.

    • rottingleaf@lemmy.world · +10/−3 · 7 days ago

      It’s not snake oil. It is a way to brute force some problems which it wasn’t possible to brute force before.

      And also it’s very useful for mass surveillance and war.

  • MehBlah@lemmy.world · +7 · 5 days ago

    I don’t trust any of these types. If you haven’t noticed by now, morally decent people are never in charge of any large organization. The type of personality suited to clawing its way to the top usually lacks any real moral compass beyond whatever advances its pursuit of power.

  • aesthelete@lemmy.world · +18 · edited · 7 days ago

    It’s beyond time to stop believing and parroting whatever claim would make your source the most money as though it were literally true, without verifying any of it.

  • ivanafterall@lemmy.world · +9/−1 · 6 days ago

    You shouldn’t judge people on appearances.

    … but, I mean, come OOON… he looks like a reanimated Madame Tussaud’s sculpture. Like someone said, “Give me a Wish.com Mark Zuckerberg… but not so vivacious this time.” And he’s the CEO of an AI-related company.

    • u_u@lemmy.dbzer0.com · +2 · 7 days ago

      Applicable to everyone really, especially those who want to sell you something that sounds too good to be true.

  • LainTrain@lemmy.dbzer0.com · +5 · 7 days ago

    I’ll keep my open source generative models and will be happy to watch this bozo and his cultists and the artbros all eat shit all year-round.

  • OutrageousUmpire@lemmy.world · +2/−24 · 7 days ago

    but for now, his approach is textbook Silicon Valley mythmaking

    The difference is that in this case it is not hype—it is reality. It’s not a myth, it is happening right now. We are chugging inevitably down the track to the most dramatic discovery in human history. And Altman’s views on solving the climate crisis, disease, nuclear fusion… they are all within reach. If anything we need to increase our speed to get us there ASAP.

    • rottingleaf@lemmy.world · +9 · edited · 7 days ago

      Tell me honestly: are you a bot, or do you sincerely believe this shit? And based on what qualifications and experience?

      Gunpowder, electricity, combustion engines, universal electronic computers, rocketry, lasers, plastics: none of these arrived as a single dramatic change. It was all a slow, iterative process of fuzzy transitions and evolution.

      Yet those made pretty fundamental impacts. Sam Altman’s company, by contrast, is using fuckloads of data to calculate some predictive coefficients, and the rest of its product could be built by students.

      It’s just real-life power brokers flexing their muscles at bending the tech industry with the usual means: capturing resources and using them to assert control. There were no such resources in the beginning, and then datasets turned into something like oil.

      Generally in computing (where a computer is a universal machine), everyone able to program can do a lot of things. That kind of equality is rather inconvenient for the real-life bosses who can call in airstrikes and deal in oil tankers.

      There was the smart and slow way of killing that via gradual oligopolization, but everyone can see how poorly that works. Some people slowly move to better things, and some were fine with TV telling them how to live; they don’t even need the Internet. All these technologies are still kinda modular and even transparent. And despite what many people think, both the idealistic left and the idealistic right build technologies for the same ultimate goal, so the Fediverse is good, Nostr is good, and everything that functions is good.

      So that works, but human societies are actually developing some kind of immunity to centralized, bot-poisoned platforms.

      To preserve the stability of today’s elites (which I’d say are by now pretty international), you need something qualitatively different: a machine that is almost universal in solving tasks but doesn’t give the user transparency. That’s their “AI”. And those enormous datasets and that computing power are the biggest advantage such people have over us. So they are using that advantage. That’s the kind of solution they can build and we can’t.

      Alongside that, there’s a lot of AI hype being raised to try to replace normal computing with something reliant on those centralized supply chains. Hardware production was more distributed before the last couple of decades; now there are a few well-controllable centers. They simply want to do the same with consumer software, because if consumers don’t need something, they won’t have it when they do see a need.

      All that aside, today’s kinds of mass surveillance can’t be done without something like that “AI”. There simply won’t be enough people to exert sufficient control.

      So there are a few notable traits of this approach converging on the same interest.

      It’s basically a project to conserve elites. The new generation of thieves and bureaucrats wants to become the new aristocracy.

      • daddy32@lemmy.world · +6 · 7 days ago

        You’re right. This is just “SaaS”, “cloud APIs” approach turned to 11 - making some thing unavailable to everyone unless they agree to agree with any conditions you come up in the future. For example, if Github Copilot becomes genuinely and uniquely very useful, that’s bad for the software development industry over the entire world: it means that every single software dev company will have to pay “tax” to Microsoft.