• hanke@feddit.nu · 2 days ago
    1. You can’t have unbiased AI without unbiased training data.
    2. You can’t have unbiased training data without unbiased humans.
    3. Unbiased humans don’t exist.
    • Delphia@lemmy.world · 2 days ago

      The best use case I can think of for “A.I.” is an absolute PRIVACY NIGHTMARE (so set that aside for a moment), but I think it’s the absolute best example.

      Traffic and traffic lights. Give every set of lights cameras that track licence plates and cross-reference home addresses and travel times for regular trips, for literally every vehicle on the road. Add variable speed limit signs on major roads and an unbiased “A.I.” whose one goal is to make everyone’s regular trips as short as possible by controlling everything.

      If you can make 1,000,000 cars run their trips 5% more efficiently, that’s like 50,000 cars’ worth of emissions. Not to mention real-world time savings for people.
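      A quick back-of-the-envelope check of that arithmetic (all numbers are the hypotheticals from the comment above, not real traffic data):

      ```python
      # Hypothetical figures from the comment: 1,000,000 cars, 5% efficiency gain.
      cars = 1_000_000
      efficiency_gain = 0.05

      # A 5% saving spread across all cars is roughly equivalent, in emissions,
      # to removing cars * 0.05 vehicles from the road entirely.
      equivalent_cars_removed = int(cars * efficiency_gain)
      print(equivalent_cars_removed)  # 50000
      ```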

    • Zacryon@feddit.org · 2 days ago

      If you want AI agents that benefit humanity, you need biased training data and/or a bias-inducing training process, e.g. an objective like “improve humanity in an ethical manner” (don’t pin me down on that, it’s just a simple example).

      For example, even choosing a real environment over a tailored simulated one is already a bias in the training data, even though you want to deploy the AI agent in a real setting. That’s what you want: bias can be beneficial. The same goes for ethical reasoning. An AI agent won’t know what ethics are, or which ethics are commonly preferred, if you don’t introduce such a bias.

    • Jerkface (any/all)@lemmy.ca · 2 days ago

      Show your work. Point 1 especially seems suspect, since many AIs are not trained on content like you’re imagining, but rather train themselves through experimentation and adversarial networks.

        • Jerkface (any/all)@lemmy.ca · 2 days ago

          Yes, and? If you write a bad fitness function, you get an AI that doesn’t do what you want. You’re just saying, human-written software can have bugs.
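          A toy sketch of what a “bad fitness function” looks like in practice (purely hypothetical code, not any specific system): the goal is to evolve a number close to 10, but the buggy fitness rewards magnitude instead of closeness, so the optimizer faithfully succeeds at the wrong thing.

          ```python
          import random

          def buggy_fitness(x):
              return abs(x)        # bug: rewards being far from zero, not near 10

          def good_fitness(x):
              return -abs(x - 10)  # rewards closeness to the actual target

          def evolve(fitness, generations=200, pop_size=20):
              """Minimal evolutionary search: keep the fitter half each
              generation, refill with mutated copies of the survivors."""
              population = [random.uniform(-1, 1) for _ in range(pop_size)]
              for _ in range(generations):
                  population.sort(key=fitness, reverse=True)
                  survivors = population[: pop_size // 2]
                  population = survivors + [s + random.gauss(0, 1) for s in survivors]
              return max(population, key=fitness)

          random.seed(0)
          print(evolve(buggy_fitness))  # drifts far from 10: the optimizer worked,
          print(evolve(good_fitness))   # the spec was the bug; this one lands near 10
          ```

          The optimizer itself is identical in both runs; only the fitness function changes, which is the point: the “AI” does exactly what the human-written objective says, bugs and all.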

          • xthexder@l.sw0.com · 2 days ago

            You’re just saying, human-written software can have bugs.

            That’s pretty much exactly the point they’re making. Humans create the training data. Humans aren’t perfect, and therefore the AI training data cannot be perfect. The AI will always make mistakes and have biases as long as it’s being trained on human data.