The Pentagon has its eye on the leading AI company, which this week softened its ban on military use.

  • annehathway12@kbin.social · 4 months ago

    It’s interesting to note OpenAI’s decision to soften its ban on using ChatGPT for “military and warfare” applications.

  • Fedizen@lemmy.world · 8 months ago

    I can’t wait until we find out AI trained on military secrets is leaking military secrets.

  • kromem@lemmy.world · 8 months ago

    Literally no one is reading the article.

    The terms still prohibit use to cause harm.

    The change is that a general ban on military use has been removed in favor of a generalized ban on harm.

    So for example, the Army could use it to do their accounting, but not to generate a disinformation campaign against a hostile nation.

    If anyone had actually read the article, we could have a productive conversation: whether any military usage is truly harmless, the usefulness of a military ban in a world where so much military labor is outsourced to private corporations that could ‘launder’ terms compliance, or the general inability of terms of use to preemptively prevent harmful use at all.

    Instead, we have people taking the headline only and discussing AI being put in charge of nukes.

    Lemmy seems to care a lot more about debating straw-man arguments about how terrible AI is than about engaging with reality.

  • mechoman444@lemmy.world · 8 months ago

    If you guys think that AI hasn’t already been in use in various militaries, including America’s, y’all are living in la-la land.

  • funkforager@sh.itjust.works · 8 months ago

    Remember when OpenAI was a nonprofit first and foremost, and we were supposed to trust they would make AI for good and not evil? Feels like it was only Thanksgiving…

    • wooki@lemmynsfw.com · 8 months ago

      I wouldn’t be too worried. They’ve just made an overglorified word predictor and a blender of people’s art.

          • pinkdrunkenelephants@lemmy.world · 8 months ago

            And that totally justifies having a robot that does it so efficiently it lets people deepfake things that are hard to disprove, robbing people of their ability to discern what is real and what is not.

            • wooki@lemmynsfw.com · 8 months ago

              Again, not new. Stop grandstanding it as a new effect. Media outlets have been doing this since the dawn of journalism. The scientific process was created to combat it, political standards to help reduce it, and laws to make it financially unattractive. The fact remains: it’s not new.

              The only thing that is new is the financial gain from the hype of abusing the word AI, and the media not calling it out. But hey, here we are back at the start. It’s not new.

              • pinkdrunkenelephants@lemmy.world · 8 months ago

                And that totally makes it okay for you to use an LLM to do so far more effectively and far more efficiently, destroying humanity’s ability to discern reality.

              • pinkdrunkenelephants@lemmy.world · 8 months ago

                Nope, not deepfakes that convincing.

                Keep lying to yourself though. Keep convincing yourself it’s worthwhile to destroy the world you claim to love just so you can keep your shiny new toy. Keep trying to tell yourself it’s not going to harm everyone else around you and that you’re still a good person.

                  • afraid_of_zombies@lemmy.world · 8 months ago

                  Right, all those people eating fucking horse dewormer were perfectly rational before.

                  Oh noes AI is going to destroy us all.

    • kromem@lemmy.world · 8 months ago

      That would count as harm and be disallowed by the current policy.

      But a military application of using GPT to identify and filter misinformation would not be harmful: it would have been prevented by the previous policy prohibiting any military use, but is allowed under the current policy.

      Of course, it gets murkier if the military application of identifying misinformation later ends up with a drone strike on the misinformer. In theory they could submit a usage description of “identify misinformation” which appears to do no harm, but then take the identifications to cause harm.

      Which is part of why a broad ban on military use may have been more prudent than a ban only on harmful military usage.