To clarify: I’m not talking about the popular conception of the Turing test as something like the Voight-Kampff test, meant to catch rogue AIs, but rather Turing’s original test, meant for AI designers to evaluate their own machines. In particular, I’m assuming the designers know their machine well enough to distinguish a true inability from a feigned one (or to construct the test in a way that motivates the machine to make a genuine attempt).
And examples of human inabilities might include learning a language that violates the patterns of natural human languages, or engaging in reflexive group behavior the way starling flocks or fish schools do.


You’ve edited this comment at least 3 times since I replied, each time with more random shit that doesn’t make any sense. You just keep thumbing thru a thesaurus and replacing words with bigger words you clearly don’t understand.
This is probably why your posts/comments don’t make sense. Stop trying to sound intelligent and focus on communicating your point. But I don’t have the patience to ever try to explain anything to you again.
Best of luck.