To clarify: I’m not talking about the popular conception of the Turing test as something like the Voight-Kampff test, meant to catch rogue AIs—but Turing’s original test, meant for AI designers to evaluate their own machines. In particular, I’m assuming the designers know their machine well enough to distinguish between a true inability and a feigned one (or to construct the test in a way that motivates the machine to make a genuine attempt).

And examples of such human inabilities might be learning a language that violates the structural patterns of natural human languages (a sketch follows), or engaging in reflexive group behavior the way starling flocks or fish schools do.
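
As a concrete sketch of that first example: the "impossible language" rules from the psycholinguistics literature (e.g., Andrea Moro's experiments) define regularities by linear word position rather than hierarchical structure, and humans do not acquire them the way they acquire natural grammars. The toy rule below is my own illustration, not taken from any study:

```python
# Hypothetical "impossible language" rule: the negation marker "ka" must
# always be the 4th word of the sentence, counting linearly, regardless
# of the sentence's syntactic structure. Position-counting rules like
# this are unattested in natural human languages.

def negate_impossible(sentence: str) -> str:
    """Insert 'ka' so that it lands in 4th linear position."""
    words = sentence.split()
    cut = min(3, len(words))  # clamp for sentences shorter than 3 words
    return " ".join(words[:cut] + ["ka"] + words[cut:])

print(negate_impossible("the dog chased the cat across the yard"))
# -> the dog chased ka the cat across the yard
```

A machine that acquired a rule like this effortlessly would, on the proposed test, be failing to "fail like a human."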

  • AbouBenAdhem@lemmy.worldOP

    In the original Turing test, the black box isn’t the machine; it’s the human. The test is to see whether a (known) machine is an accurate model of an unknown system.

    While the tester is blind as to which is which, the experimenter knows the construction of the machine and can presumably tell if it’s artificially constraining itself. When I say “the inability to act otherwise”, I’m assuming the experimenter can distinguish a true inability from an induced one (even if the tester can’t).

    • masterspace@lemmy.ca

      While the tester is blind as to which is which, the experimenter knows the construction of the machine and can presumably tell if it’s artificially constraining itself.

      In the case of intelligences and neural networks, that is not so straightforward. The humans and machines behind the curtain have to be motivated to try to replicate a human, or the test fails, whether that’s because a human control is being unhelpful or because the machine isn’t bothering to imitate one.

      • AbouBenAdhem@lemmy.worldOP

        The humans and machines that are behind the curtain have to be motivated to try and replicate a human

        In a Turing test, yes. What I’m suggesting is to change the motivation: to see whether the machine fails like a human even when it’s motivated not to.
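
        A minimal sketch of what that inverted protocol might look like (everything here is hypothetical; `ask_model` and `score` stand in for whatever interface and grading the experimenter actually has):

        ```python
        # Inverted-motivation trial: instead of asking the machine to
        # imitate a human, explicitly motivate it to succeed at tasks
        # humans cannot do, then check whether its failures still track
        # human failures.

        from typing import Callable, List

        def inverted_turing_trial(
            ask_model: Callable[[str], str],    # hypothetical model interface
            tasks: List[str],                   # tasks known to exceed human ability
            human_error_rates: List[float],     # measured human failure rate per task
            score: Callable[[str, int], float], # hypothetical scorer in [0, 1]
        ) -> bool:
            """True if the machine fails like a human even when told not to."""
            preamble = (
                "Do your absolute best on this task. Do NOT imitate human "
                "limitations; succeed if you possibly can.\n\n"
            )
            machine_error_rates = [
                1.0 - score(ask_model(preamble + t), i)
                for i, t in enumerate(tasks)
            ]
            # Crude similarity criterion: mean absolute gap between profiles.
            gap = sum(abs(m - h)
                      for m, h in zip(machine_error_rates, human_error_rates))
            return gap / len(tasks) < 0.1  # threshold is an arbitrary choice
        ```

        The point of the preamble is to remove the "imitate a human" incentive entirely; if the machine’s error profile still matches the human one, the inability looks genuine rather than performed.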