
The Tough Question

Notice first that we now have a specification of Q1 for each member of Block's quartet. For example, we have

Q1$_{\mathrm{S}}$
Why did evolution bother to give us S-consciousness?

This would work similarly for the other breeds of consciousness, giving us Q1$_{\mathrm{M}}$, Q1$_{\mathrm{A}}$, and Q1$_{\mathrm{P}}$. Next, we direct your attention to a variant of Q1, viz.,

Q2
Why might an AI engineer try to give her artifact consciousness?

as well as the corresponding Q2$_{\mathrm{X}}$, with the subscript set to a member of Block's quartet, and `consciousness' therein changed to `X-consciousness.' Now, it seems to us that the following principle is true.

P1
If Q2$_{\mathrm{X}}$ is easily answerable, then so is Q1$_{\mathrm{X}}$.

The rationale behind P1 is straightforward: if the AI engineer has a good reason for giving (or seeking to give) her robot consciousness (a reason, we assume, that relates to the practical matter of making the robot productive; engineers who want to give robots consciousness for merely emotional reasons are of no interest to us8), then there is no reason why evolution couldn't have given us consciousness for much the same reason.

The interesting thing is that Q2$_{\mathrm{S}}$, Q2$_{\mathrm{M}}$, and Q2$_{\mathrm{A}}$ do appear to be easily answerable. Here are encapsulations of the sorts of answers we have in mind.

For Q2$_{\mathrm{M}}$:
It should be wholly uncontroversial that robots could be well served by a capacity to have higher-order thoughts about the thoughts of other robots and humans. For example, a robot working in a factory could exploit beliefs about the beliefs of the humans it's working with. Perhaps the robot needs to move its effectors on the basis of what it believes the human believes she sees in front of her at the moment. For that matter, it's not hard to see that it could be advantageous for a robot to have beliefs about what humans believe about what the robot believes -- and so on. Furthermore, it's easy to see that robots could benefit from (correct) beliefs about their own inner states. (John Pollock [Pollock, 1995] provides a wonderful discussion of the utility of such robotic beliefs.) And for certain applications they could capitalize on a capacity for beliefs about their beliefs about their inner states. (To pick an arbitrary case, on the way to open a combination-locked door a robot may need to believe that it knows that the combination is stored in its memory.7)
For Q2$_{\mathrm{S}}$:
Even a casual look at the sub-field of planning within AI reveals that a sophisticated robot will need to have a concept of itself, and a way of referring to itself. In other words, a clever robot must have the ability to formulate and reason over de se beliefs. In fact, it's easy enough to adapt our diner case from above to make the present point: suppose that a robot is charged with the task of making sure that patrons leave the Collar City Diner in time for the establishment to close down properly for the night. If, when scanning the diner with its cameras, the robot spots a robot that appears to be simply loitering near closing time (in a future in which the presence of robots is commonplace), it will certainly be helpful if the employed robot can come to realize that the loitering robot is just itself seen in a mirror. (A schematic sketch of such a de se check, together with the nested beliefs of the previous answer, appears just after these answers.)
For Q2$_{\mathrm{A}}$:
This question can be answered effortlessly. After all, A-consciousness can pretty much be identified with information processing. Any reason a roboticist might have for building a robot capable of reasoning and communicating will be a reason for building a robot with A-consciousness. For this reason, page after page of standard textbooks contains answers to Q2$_{\mathrm{A}}$ (cf. [Russell & Norvig, 1995]).
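
To make the first two answers concrete, here is a minimal sketch, in Python, of the two capacities at issue: nested beliefs about other agents' beliefs, and a de se check that lets a robot recognize that an observed agent is itself. The class names, attributes, and scenario details are ours, chosen only for illustration; nothing here is offered as an account of how an actual robot would be engineered.

# Illustrative sketch: nested beliefs and a de se (self-identification) check.
# All names here are hypothetical; this is not a real robotics API.

from dataclasses import dataclass, field

@dataclass(frozen=True)
class Belief:
    """A belief ascribed to some agent; `content` may itself be a Belief."""
    agent: str
    content: object          # a proposition (string) or another Belief

@dataclass
class Robot:
    name: str                                    # the robot's own identifier
    beliefs: list = field(default_factory=list)

    def believe(self, content):
        self.beliefs.append(Belief(self.name, content))

    def is_me(self, observed_id: str) -> bool:
        """De se check: is the agent I am observing just myself (e.g., in a mirror)?"""
        return observed_id == self.name

r = Robot("diner-bot-1")

# A higher-order belief: the robot believes that the human believes a part is missing.
r.believe(Belief("human-worker", "the gearbox part is missing"))

# A belief about its own inner state, and a belief about that belief.
r.believe("the combination is stored in my memory")
r.believe(Belief("diner-bot-1", "the combination is stored in my memory"))

# The mirror case: the 'loitering robot' seen by the cameras is diner-bot-1 itself.
print(r.is_me("diner-bot-1"))    # True  -> nothing to do; that's just me
print(r.is_me("diner-bot-7"))    # False -> a genuine loiterer to usher out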

Question Q2$_{\mathrm{P}}$, however, is another story. There are at least two ways to see that P-consciousness is quite another animal. The first is to evaluate attempts to reduce P-consciousness to one or more of the other three breeds. One such attempt is Rosenthal's HOT. This theory, as incarnated above in Def 1, didn't take a stand on what breed of consciousness is referred to in the definiendum. Rosenthal, when asked about this, bravely modifies Def 1 to yield this definition:

Def 1$_{\mathrm{P}}$
$s$ is a P-conscious mental state at time $t$ for agent $a$ $=_{df}$ $s$ is accompanied at $t$ by a higher-order, noninferential, occurrent, assertoric thought $s'$ for $a$ that $a$ is in $s$, where $s'$ may or may not be P-conscious.
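
For readers who prefer to see the logical form displayed, Def 1$_{\mathrm{P}}$ can be rendered roughly as follows; the predicate symbols (HOT for `higher-order thought of', together with NonInf, Occ, Assert, and About) are our own schematic shorthand, not Rosenthal's notation.

\[
\mathrm{PCon}(s,a,t) \;=_{df}\; \exists s'\,\bigl[\,\mathrm{HOT}(s',a,t) \wedge \mathrm{NonInf}(s') \wedge \mathrm{Occ}(s') \wedge \mathrm{Assert}(s') \wedge \mathrm{About}(s',\mbox{``$a$ is in $s$''})\,\bigr],
\]

where $s'$ itself may or may not satisfy $\mathrm{PCon}$.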

Unfortunately, Def 1$_{\mathrm{P}}$ is very implausible. In order to begin to see this, let $s''$ be one of our paradigmatic P-conscious states from above, say savoring a spoonful of deep, rich chocolate ice cream. Since $s''$ is a P-conscious state, ``there is something it's like'' to be in it. As Rosenthal admits about states like this one:

When [such a state as $s''$] is conscious, there is something it's like for us to be in that state. When it's not conscious, we do not consciously experience any of its qualitative properties; so then there is nothing it's like for us to be in that state. How can we explain this difference? $\ldots$ How can being in an intentional state, of whatever sort, result in there being something it's like for one to be in a conscious sensory state? ([Rosenthal, forthcoming], pp. 24-25)

Our question exactly. And Rosenthal's answer? He tells us that there are ``factors that help establish the correlation between having HOTs and there being something it's like for one to be in conscious sensory states'' ([Rosenthal, forthcoming], p. 26). These factors, Rosenthal tells us, can be seen in the case of wine tasting:

Learning new concepts for our experiences of the gustatory and olfactory properties of wines typically leads to our being conscious of more fine-grained differences among the qualities of our sensory states $\ldots$ Somehow, the new concepts appear to generate new conscious sensory qualities. ([Rosenthal, forthcoming], p. 27)

We confess to finding Rosenthal's choice of wine tasting tendentious. In wine tasting there is indeed a connection between HOTs and P-conscious states (the nature of which we don't pretend to grasp). But wine tasting, as a source of P-consciousness, is unusually ``intellectual,'' and Def 1$_{\mathrm{P}}$ must cover all cases -- including ones based on less cerebral activities. For example, consider fast downhill skiing. Someone who makes a rapid, ``on-the-edge'' run from peak to base will have enjoyed an explosion of P-consciousness; such an explosion, after all, will probably be the main reason such an athlete buys expensive equipment and expensive tickets, and braves the cold. But expert downhill skiers, while hurtling down the mountain, surely don't analyze the ins and outs of pole plants on hardpack versus packed powder, or the fine distinctions between carving a turn at 20 mph versus 27 mph. Fast skiers ski; they plunge down, turn, jump, soar, all at incredible speeds. Now is it really the case, as Def 1$_{\mathrm{P}}$ implies, that the myriad P-conscious states $s_1, \ldots, s_n$ generated in a screaming top-to-bottom run are the result of higher-order, noninferential, assertoric, occurrent beliefs on the part of a skier $k$ that $k$ is in $s_1$, that $k$ is in $s_2$, that $k$ is in $s_3$, $\ldots$, that $k$ is in $s_n$? Wine tasters do indeed sit around and say such things as, ``Hmm, I believe this Chardonnay has a bit of a grassy taste, no?'' But what racer, streaking over near-ice at 50 mph, ponders thus: ``Hmm, with these new parabolic skis, 3 millimeters thinner at the waist, the sensation of this turn is like turning a corner in a fine vintage Porsche''? And who would claim that such thinking results in that which it's like to plummet downhill?

C F P Y
J M B X
S G R L

Def 1$_{\mathrm{P}}$ is threatened by phenomena generated not only at ski areas, but in the laboratory as well. We have in mind an argument arising from the phenomenon known as backward masking. Using a tachistoscope, psychologists are able to present subjects with a visual stimulus for periods of time on the order of milliseconds (one millisecond is 1/1000th of a second). If a subject is shown a 3 $\times$ 4 array of random letters (see the array above) for, say, 50 milliseconds (msec), and is then asked to report the letters seen, accuracy of about 37% is the norm. In a set of very famous experiments conducted by Sperling [Sperling, 1960], it was discovered that recall could be dramatically increased if a tone sounded after the visual stimulus. Subjects were told that a high tone indicated they should report the top row, a middle tone the middle row, and a low tone the bottom row. After the array above was shown for 50 msec, followed by the high tone, recall was 76% for the top row; the same result was obtained for the other two rows. It follows that fully 76% of the array is available to subjects after it is presented (a short calculation following the arrays makes this arithmetic explicit). However, if the original visual stimulus is followed immediately by another visual stimulus in the same location (e.g., circles where the letters in the array appeared; see the array below), recall is abysmal; the second visual stimulus is said to backward mask the first. (See [Averbach & Coriell, 1961] for the seminal study.) Suppose, then, that a subject is flashed a series of visual patterns $p_i$, each of which appears for only 5 msec. In such a case, while there is something it is like for the subject to see $p_i$, it is very doubtful that this is because the subject thinks that she is in the state of seeing $p_i$. In fact, most models of human cognition on the table today hold that information about $p_i$ never travels ``far enough'' to become even a potential object of any assertoric thought [Ashcraft, 1994].

$\circ$ $\bullet$ $\circ$ $\bullet$
$\bullet$ $\circ$ $\bullet$ $\circ$
$\circ$ $\bullet$ $\circ$ $\bullet$
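
The partial-report reasoning just described can be made explicit with a small back-of-the-envelope calculation. The figures are those quoted above (37% whole report, 76% partial report); the variable names are ours, not Sperling's.

# Back-of-the-envelope calculation of Sperling's partial-report logic,
# using the figures quoted in the text. Variable names are illustrative.

ROWS, COLS = 3, 4
total_letters = ROWS * COLS              # the 3 x 4 letter array shown above

whole_report_rate = 0.37                 # proportion reported with no cue
partial_report_rate = 0.76               # proportion of the cued row reported

letters_whole = whole_report_rate * total_letters     # ~4.4 letters
letters_cued_row = partial_report_rate * COLS         # ~3 letters in the cued row

# The tone sounds only after the array disappears, so the subject cannot know
# in advance which row will be cued; whatever holds for the cued row must hold
# for every row. Hence roughly 76% of the whole array is still available:
letters_available = partial_report_rate * total_letters   # ~9.1 of 12 letters

print(f"Whole report:         about {letters_whole:.1f} of {total_letters} letters")
print(f"Cued row:             about {letters_cued_row:.1f} of {COLS} letters")
print(f"Implied availability: about {letters_available:.1f} of {total_letters} letters")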

So, for these reasons, Def 1$_{\mathrm{P}}$ looks to us to be massively implausible. More generally, the point is that it's extraordinarily difficult to reduce P-consciousness to other forms of consciousness.

The second way to see that P-consciousness is much more recalcitrant than the other three breeds we have singled out is to slip again into the shoes of the AI engineer. Why would a roboticist strive to give her creation the capacity to experience that which it's like to, say, eat an ice cream cone? It would seem that any reason the robot might have for consuming chocolate fudge swirl in a waffle cone could be a reason devoid of any appeal to P-consciousness. (Perhaps the robot needs cocoa for fuel; assume other energy sources turned out to be a good deal more expensive. If so, it can simply be built to seek out cocoa when it observes that its power supply is low -- end of story, with no need to appeal to anything as mysterious as subjective awareness. A minimal sketch of such a reflex appears below.) Evolution qua engineer should similarly find P-consciousness to be entirely superfluous.9 Which is to say that we have moved from Q1 to what we call the ``tough'' question:

Q1$_{\mathrm{P}}$
Why did evolution bother to give us P-consciousness?
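
The cocoa-seeking reflex imagined parenthetically above needs nothing beyond an ordinary control rule. Here is a minimal sketch; the class name and the power threshold are hypothetical, chosen only to make the point that the functional story requires no appeal to subjective awareness.

# A minimal sketch (hypothetical names and threshold) of the cocoa-seeking
# reflex described above: low power triggers fuel-seeking, and nothing in the
# control loop appeals to anything like subjective awareness.

class CocoaFueledRobot:
    LOW_POWER_THRESHOLD = 0.2   # illustrative cutoff, not from the paper

    def __init__(self, power_level: float):
        self.power_level = power_level

    def step(self):
        """One pass of the robot's control loop."""
        if self.power_level < self.LOW_POWER_THRESHOLD:
            self.seek_cocoa()       # purely functional response to a sensed state
        else:
            self.do_assigned_work()

    def seek_cocoa(self):
        print("Power low: navigating to the nearest cocoa supply.")

    def do_assigned_work(self):
        print("Power fine: carrying on with the assigned task.")


CocoaFueledRobot(power_level=0.1).step()   # -> seeks cocoa; end of story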

Question Q1$_{\mathrm{P}}$ can in turn be further refined through ``zombification.''

