Next: My Theory of Mind Up: Computationalism is Dead; Now Previous: What is Computationalism?

The Turing Test

Now for some remarks on TT.

My What Robots Can and Can't Be [6] is a sustained, rigorous case for the two-part view that

    1. robots cannot be conscious persons; and yet
    2. (TT-BUILD) it is humanly possible to build robots capable of passing the Turing Test.
Most of this case is devoted to demonstrating the first of these two propositions. The part of it concerned with substantiating the second is, as I readily admit in the monograph in question, inchoate. A forthcoming book of mine, Literary Creativity, Story Generation, and Artificial Intelligence [2], is devoted in earnest to bolstering TT-BUILD. This book features a storytelling agent, BRUTUS, able to (among other things) ``understand" and manipulate the concept of betrayal.

Actually, TT-BUILD is short for a thesis that makes crucial reference not only to the Turing Test (TT) but to what has been called [6] the Turing Test sequence. There are obviously tests easier to pass than TT, but which still capture Turing's empiricism. For example, we might index TT to some particular domain -- golf, say -- and stipulate that questions moving significantly beyond this domain aren't fair game. We might insist that the syntax of queries, and the syntax of responses, conform to a certain range of patterns. Or we might decide to modify the scoring that is part of Turing's original proposal -- so that, for example, if a system can fool the judge 25% of the time, rather than the full 50%, it is judged to have passed.
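For concreteness, the relaxed-scoring variant can be put operationally. Here is a minimal Python sketch (the function name, the per-round bookkeeping, and the sample numbers are illustrative assumptions, not anything from the paper):

```python
def passes_modified_tt(fooled_flags, threshold=0.25):
    """Did the system fool the judge often enough to pass?

    fooled_flags: one boolean per interrogation round, True if the judge
    mistook the machine for the human that round.
    threshold: fraction of rounds required to pass -- roughly 0.5 on
    Turing's original proposal, 0.25 on the relaxed variant.
    """
    if not fooled_flags:
        return False
    return sum(fooled_flags) / len(fooled_flags) >= threshold

# A system that fools the judge in 3 of 10 rounds passes at the 25%
# bar but fails at the original 50% bar.
print(passes_modified_tt([True] * 3 + [False] * 7))        # True
print(passes_modified_tt([True] * 3 + [False] * 7, 0.5))   # False
```

The point of the sketch is only that such scoring modifications are trivial parameter changes, which is why a whole family of easier-than-TT tests is available.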

It's also easy enough to devise tests more stringent than TT. The most famous of these tests is Stevan Harnad's [13] intriguing ``Total Turing Test," which we can baptize as TTT. In this test not only is the linguistic behavior of a robot (or android) scrutinized, but its sensorimotor conduct is evaluated as well. In order to pass TTT, a robot must not only have the capacity to be an articulate (to use one of Harnad's terms) pen-pal; it must also look human. In the same paper, Harnad considers more stringent tests than TTT, for example tests in which the judge is allowed to ``look under the hood," that is, to look at the physical stuff out of which the synthetic contestant is made. Other thinkers, for example Peter Kugel [14], have proposed tests so demanding that they cannot, as a matter of mathematics, be passed by Turing Machines. And so on. Let's suppose, accordingly, that there is indeed some sequence

    T_1, T_2, T_3, ..., T_n, ...

which underlies the relevant modification of TT-BUILD, namely,

It is humanly possible to build robots (or androids, etc.) capable of ascending (perhaps all of) the Turing Test sequence.

I think TT-BUILD is true, and I hope to provide increasingly powerful evidence for this proposition as the years unfold. But I think a claim of the form `if x passes T_i then x is conscious,' that is, TT_i, is false. I have two types of arguments for this position; each is based on a series of thought-experiments. The first series begins with a gedankenexperiment in which an excruciatingly simple LISP-based random sentence generator serendipitously passes TT. Would we want to say, in such a situation, that as luck would literally have it, a new consciousness has popped into existence? I would think not. After all, such a sentence generator is easy enough for a novice in AI to write -- an observation that underlies this argument:

    (1) If TT_TT is true, then any system that passes TT is thereby conscious.
    (2) It is possible for a simple random sentence generator -- one a novice in AI could write -- to pass TT by sheer serendipity.
    (3) Such a serendipitously successful generator would not be conscious.
    (4) Therefore, TT_TT is false.

This argument is specified and defended in [5], and is then extended in that same paper so as to refute TT_TTT and beyond. The extension allowing for the refutation of TT_TTT, i.e., a refutation of

    If x passes TTT, then x is conscious,

is easily produced: the trick is to combine the random sentence generator (from above) with a random ``behavior" generator, i.e., a program that randomly outputs not only text, but also commands to effectors. Now imagine that this composite computer program, when hooked to some real effectors, serendipitously passes Harnad's Total Turing Test. Would anyone want to say that the system in question is conscious? I doubt it. But then it follows that the conditional in question has been overthrown, since its antecedent is true while its consequent isn't.
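The composite program just described is, again, novice-level code. Here is a minimal Python sketch (the paper imagines LISP; the word lists, effector-command names, and the 50/50 split are purely illustrative assumptions): it randomly emits either a canned-grammar sentence or a command to an effector, with nothing remotely like understanding anywhere in the loop.

```python
import random

# Illustrative materials for the thought-experiment; any novice-level
# word lists and command names would do.
NOUNS = ["judge", "robot", "pen-pal", "pitcher"]
VERBS = ["greets", "doubts", "imitates", "answers"]
EFFECTOR_COMMANDS = ["raise_arm", "step_forward", "turn_head", "swing_bat"]

def random_sentence(rng):
    """The 'lucky' random sentence generator of the original argument."""
    return f"The {rng.choice(NOUNS)} {rng.choice(VERBS)} the {rng.choice(NOUNS)}."

def composite_step(rng):
    """One tick of the composite program: randomly output either text
    or a command to an effector."""
    if rng.random() < 0.5:
        return ("say", random_sentence(rng))
    return ("do", rng.choice(EFFECTOR_COMMANDS))

rng = random.Random(0)
for _ in range(5):
    print(composite_step(rng))
```

That such a program could, by sheer luck, produce TTT-passing behavior over some finite interrogation is all the serendipity argument needs; the code itself is the sort of thing mastered in the first weeks of an introductory AI course.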

Of course, I'm assuming that the conditional in TT_i is stronger than a material conditional, for if this conditional is merely material, then since no machine can currently pass the test in question (= TT), the thesis is (vacuously) true. What Turing had in mind was no doubt something a bit stronger -- probably something like

    Necessarily: if x passes TT, then x is conscious.

(I consider this modal version of the thesis, and a number of other possibilities, in [5].) On the assumption that this sort of stronger thesis is what's at issue, I move directly to the fact that Harnad has produced a more sophisticated thesis to accompany his Total Turing Test, one he specified near the end of a long debate between the two of us, viz.,

TT_TTT-H
If x is TTT-indistinguishable from someone with a mind, one has no non-arbitrary reason for doubting that x has a mind when told that x is a machine.

Unfortunately, this thesis is false -- given that Harnad accepts a modalized interpretation of the conditional (which he does), as well as my argument from serendipity (and Searle's Chinese Room Argument [20] against the original Turing Test). In order to see this, begin by supposing that x is TTT-indistinguishable from someone with a mind. To make this a bit more vivid, suppose that roboticist Smith escorts around with him a robot that is TTT-indistinguishable from someone with a mind, but suppose that Smith cloaks this robot in some way (with synthetic skin that passes for the real thing, artificial eyes that look human, etc. -- use your imagination) and refers to it as `Willie.' So, you carry out conversations with Willie for years; and, since we're dealing here with the Total Turing Test, you also play baseball (and the like) with Willie for years. And then one day Smith says, ``Aha! Watch this!" And he tears off the skin over Willie's forehead, and therein sits a computer busy humming along.

So far so good. How do we get the falsity of TT_TTT-H's consequent built into the thought-experiment? No problem. Let's begin by reminding ourselves of what we need. We need the Willie scenario to include the falsity of

    One has no non-arbitrary reason for doubting that Willie has a mind when told that Willie is a machine.

Let's instantiate this to me; that seems harmless enough. So we need the scenario to include the falsity of

    Selmer has no non-arbitrary reason for doubting that Willie has a mind when told that Willie is a machine.

Here's how it works: Selmer says to Smith: ``Look, Willie is still just a bunch of symbols swimming around inside a box. There's no person in the picture. It's easy enough to show that there's no principled connection between the symbols and sensors/effectors working nicely and there being a person in the picture, for consider a situation (alluded to above) wherein Willie's program is a composite program: a lucky random sentence generator and a lucky program for processing information received via sensors and a lucky program for processing information sent to effectors -- all code that's based on technology students master in the first few weeks of Intro to AI." (It's important to note here that (i) all I need to overthrow Harnad's TT_TTT-H is a non-arbitrary reason for denying that Willie is conscious; I don't need a compelling reason. And (ii) Harnad agrees that TT_TT is shown to be false by the original version of the argument from serendipity.)

In the second type of attack on the Turing Test, I argue that zombies, contra Dennett, are logically -- and, in fact, physically -- possible, and that this fact suffices to refute certain versions of computationalism. The initial thought-experiment in the series of them that underlies this attack is provided by Searle in his The Rediscovery of the Mind.

I move now to a brief look at the rest of my theory of mind.


Selmer Bringsjord
Tue May 21 00:31:50 EDT 1996