• themeatbridge@lemmy.world · 19 days ago

    Turing tests are a framework, not a set of specific questions. The framework assumes the interrogator is human, and the machine passes when its responses are indistinguishable from a human’s. What the questions are doesn’t matter, and it doesn’t matter whether the answers are right or wrong. If the human interrogator cannot tell the difference between the human and the machine, the machine has passed the test.
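
    Just to make that framework concrete, here’s a rough Python sketch of the setup. Every name in it (run_trial, ask_human, ask_machine, judge_guess) is a hypothetical stand-in rather than any real API; the point is that the protocol only measures whether the judge can tell the two respondents apart.

    ```python
    import random

    def run_trial(questions, ask_human, ask_machine, judge_guess):
        """One blind trial: the judge sees two unlabeled transcripts and
        must say which respondent is the machine."""
        respondents = {"A": ask_human, "B": ask_machine}
        # Randomly swap the labels so the judge can't rely on position.
        if random.random() < 0.5:
            respondents = {"A": ask_machine, "B": ask_human}

        # What the questions are doesn't matter, and nobody checks whether
        # the answers are "right"; only whether the transcripts can be
        # told apart.
        transcripts = {
            label: [(q, respond(q)) for q in questions]
            for label, respond in respondents.items()
        }

        guess = judge_guess(transcripts)   # judge returns "A" or "B"
        truth = "A" if respondents["A"] is ask_machine else "B"
        return guess == truth              # True = the machine was caught

    def machine_passes(caught_flags):
        # The machine passes when judges do no better than chance at
        # spotting it, i.e. its answers are indistinguishable from a human's.
        return sum(caught_flags) / len(caught_flags) <= 0.5
    ```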

      • Blue_Morpho@lemmy.world · 19 days ago

        That’s part of how Turing tests are done. Years ago a blind Turing test was done on chatbots and humans to see if people could tell the difference.

        One human participant was classified as a bot because they happened to be a Shakespeare expert. When Shakespeare came up in conversation by chance, the judges decided that no one could really be that knowledgeable and marked the person as a machine.

      • Hamartiogonic@sopuli.xyz · 19 days ago
        It’s a highly subjective test method. Depending on who is doing the judging, the accuracy could be all over the place.

      • themeatbridge@lemmy.world · 19 days ago
        Well, that’s sorta the point. Do machines think? They have knowledge and logic, but not insight or creativity. But do humans have those things? Or are we just really advanced pattern recognition machines? Turing tests demonstrated that it is really our imperfections that make us recognizable as humans. And if machines can be better at distinguishing between humans and machines, what is the virtue of “thinking”? Why is that better than “computing”?

    • Tar_Alcaran@sh.itjust.works · 19 days ago
      A machine passed the Turing test as far back as the 60s. It’s not hard to do, since you can constrain the scope of the test as much as you want.

  • palebluethought@lemmy.world · 19 days ago
    There are no specific questions defining a Turing test. It’s just generally “can the average person tell the difference between this bot and a real person?” It doesn’t go any deeper than that.

    It’s also not actually some kind of “definitive” test of consciousness, the way it’s depicted in pop culture. Turing proposed it more or less as a thought experiment in his 1950 paper, as a way to sidestep the fuzzy question “can machines think?”, not as a rigorous scientific benchmark. It doesn’t have any particular scientific significance. It just makes for splashy headlines because it’s a thing a lot of people have heard of.

    • JackGreenEarth@lemm.ee · 19 days ago

      It’s not a test for artificial consciousness; you can’t test for consciousness at all. It’s a test for humanlike AI.

    • kbal@fedia.io · 19 days ago

      Turing may not have specified it, but the only way such a test is at all meaningful is if the person administering it has some expertise. Since the 1960s there have been computers that can sometimes fool an average person who doesn’t know what to look for.

    • ccunning@lemmy.world · 19 days ago

      “can the average person tell the difference between this bot and a real person?”

      It appears the technologists’ strategy is to simply lower the average…