• Pilgrim@beehaw.org
    2 months ago

    Could it be because they’re centering technology that’s inherently unreliable? GPT is just a guessing machine

  • Optional@lemmy.world
    2 months ago

    Is it because it doesn’t work and can’t work for the always-fictional magical Candyland of Tomorrow where the workers are all AI, and all the companies that fired everyone because they blew their money on magic beans are going to crash horribly in an inferno of enshittification?

    Huh.

    • AVHeiro@awful.systems
      2 months ago

      Don’t sleep on “they have identified neither a problem nor a solution” to build a project around, even if the technology worked the way they want it to.

  • fubarx@lemmy.ml
    2 months ago

    That cold shiver running down your spine when you start on a project and realize how much data you need to train the models, then realize you have no idea how to get any of it, and you still have to go into production before the big trade show/analyst report deadline. Mwah.

  • mirrorwitch@awful.systems
    2 months ago

    I really hate that I can already picture the LinkedIn types going, “80% of AI projects fail—here’s why you need to sleep in the office seven days a week to reach the 20%.” I mean, Y Combinator used to take pride in the fact that over 90% of startups fail. Do you have what it takes to be a ten-percenter??