Notice first that we now have a specification of Q1 for each member of Block's quartet. For example, we have
This would work similarly for the other breeds of consciousness, giving us Q1_A, Q1_S, and Q1_M. Next, we direct your attention to a variant of Q1, viz.,
as well as the corresponding Q2_X, with the subscript set to a member of Block's quartet, and `consciousness' therein changed to `X-consciousness.' Now, it seems to us that the following principle is true.
The rationale behind P1 is straightforward: if the AI engineer has a good reason for giving (or seeking to give) her robot consciousness (a reason, we assume, that relates to the practical matter of building a productive robot; engineers who want to give robots consciousness for merely emotional reasons are of no interest to us8), then there is no reason why evolution couldn't have given us consciousness for much the same reason.
The interesting thing is that Q2_A, Q2_S, and Q2_M do appear to be easily answerable. Here are encapsulations of the sorts of answers we have in mind.
Question Q2_P, however, is another story. There are at least two ways to see that P-consciousness is quite another animal. The first is to evaluate attempts to reduce P-consciousness to one or more of the other three breeds. One such attempt is Rosenthal's HOT. This theory, as incarnated above in Def 1, didn't take a stand on what breed of consciousness is referred to in the definiendum. Rosenthal, when asked about this, bravely modifies Def 1 to yield this definition:
Unfortunately, Def 1 is very implausible. In order to begin to see this, let s'' be one of our paradigmatic P-conscious states from above, say savoring a spoonful of deep, rich chocolate ice cream. Since s'' is a P-conscious state, ``there is something it's like" to be in it. As Rosenthal admits about states like this one:
When [such a state as s''] is conscious, there is something it's like for us to be in that state. When it's not conscious, we do not consciously experience any of its qualitative properties; so then there is nothing it's like for us to be in that state. How can we explain this difference? How can being in an intentional state, of whatever sort, result in there being something it's like for one to be in a conscious sensory state? ([Rosenthal, forthcoming], pp. 24-25)
Our question exactly. And Rosenthal's answer? He tells us that there are ``factors that help establish the correlation between having HOTs and there being something it's like for one to be in conscious sensory states" (p. 26, [Rosenthal, forthcoming]). These factors, Rosenthal tells us, can be seen in the case of wine tasting:
Learning new concepts for our experiences of the gustatory and olfactory properties of wines typically leads to our being conscious of more fine-grained differences among the qualities of our sensory states. ... Somehow, the new concepts appear to generate new conscious sensory qualities. (p. 27, [Rosenthal, forthcoming])
We confess to finding Rosenthal's choice of wine tasting tendentious. In wine tasting there is indeed a connection between HOTs and P-conscious states (the nature of which we don't pretend to grasp). But wine tasting, as a source of P-consciousness, is unusually ``intellectual," and Def 1 must cover all cases -- including ones based on less cerebral activities. For example, consider fast downhill skiing. Someone who makes a rapid, ``on-the-edge" run from peak to base will have enjoyed an explosion of P-consciousness; such an explosion, after all, will probably be the main reason such an athlete buys expensive equipment and expensive tickets, and braves the cold. But expert downhill skiers, while hurtling down the mountain, surely don't analyze the ins and outs of pole plants on hardpack versus packed powder surfaces, or the fine distinctions between carving a turn at 20 mph versus 27 mph. Fast skiers ski; they plunge down, turn, jump, soar, all at incredible speeds. Now is it really the case, as Def 1 implies, that the myriad P-conscious states generated in a screaming top-to-bottom run are the result of higher-level, noninferential, assertoric, occurrent beliefs on the part of a skier k that k is in s1, that k is in s2, ..., that k is in sn? Wine tasters do indeed sit around and say such things as, ``Hmm, I believe this Chardonnay has a bit of a grassy taste, no?" But what racer, streaking over near-ice at 50 mph, ponders thus: ``Hmm, with these new parabolic skis, 3 millimeters thinner at the waist, the sensation of this turn is like turning a corner in a fine vintage Porsche"? And who would claim that such thinking results in that which it's like to plummet downhill?
Def 1 is threatened by phenomena generated not only at ski areas, but in the laboratory as well. We have in mind an argument arising from the phenomenon known as backward masking. Using a tachistoscope, psychologists are able to present subjects with a visual stimulus for periods of time on the order of milliseconds (one millisecond is 1/1000th of a second). If a subject is shown a 3 x 4 array of random letters (see the array above) for, say, 50 milliseconds (msecs), and is then asked to report the letters seen, accuracy of about 37% is the norm. In a set of very famous experiments conducted by Sperling [Sperling, 1960], it was discovered that recall could be dramatically increased if a tone sounded after the visual stimulus. Subjects were told that a high tone indicated they should report the top row, a middle tone the middle row, and a low tone the bottom row. After the table above was shown for 50 msec, to be followed by the high tone, recall was 76% for the top row; the same result was obtained for the other two rows. Since subjects could not know in advance which row would be cued, it follows that a full 76% of the array is available to subjects after it appears. However, if the original visual stimulus is followed immediately thereafter by another visual stimulus in the same location (e.g., circles where the letters in the array appeared; see the array below), recall is abysmal; the second visual stimulus is said to backward mask the first. (See [Averbach & Coriell, 1961] for the seminal study.)

Suppose, then, that a subject is flashed a series of visual patterns pi, each of which appears for only 5 msec. In such a case, while there is something it is like for the subject to see pi, it is very doubtful that this is because the subject thinks that she is seeing pi. In fact, most models of human cognition on the table today hold that information about pi never travels ``far enough" to become even a potential object of any assertoric thought [Ashcraft, 1994].
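Sperling's partial-report inference is, at bottom, simple arithmetic, and it can be made concrete in a short sketch. The percentages below are the approximate figures quoted above; the variable names are our own illustrative choices:

```python
# Illustrative arithmetic for Sperling's (1960) partial-report inference.
# Figures are the approximate percentages reported in the text.

ARRAY_ROWS, ARRAY_COLS = 3, 4
TOTAL_LETTERS = ARRAY_ROWS * ARRAY_COLS   # the 3 x 4 letter array

# Whole-report condition: subjects try to name letters from the entire array.
whole_report_accuracy = 0.37              # ~37% of 12 letters

# Partial-report condition: a tone, sounded AFTER the display vanishes,
# cues the subject to report just one row.
cued_row_accuracy = 0.76                  # ~76% of the cued row

# The inference: since the subject cannot predict which row will be cued,
# whatever fraction she reports of the cued row must have been available
# for every row -- i.e., ~76% of the whole array was briefly available.
whole_report_letters = round(whole_report_accuracy * TOTAL_LETTERS, 1)
inferred_available = round(cued_row_accuracy * TOTAL_LETTERS, 1)

print(whole_report_letters)   # letters recovered under whole report
print(inferred_available)     # letters inferred to be briefly available
```

The gap between the two numbers (roughly 4.4 versus 9.1 letters) is the point of the experiment: far more of the array is momentarily available than whole report alone would suggest.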
So, for these reasons, Def 1 looks to us to be massively implausible. More generally, the point is that it's extraordinarily difficult to reduce P-consciousness to other forms of consciousness.
The second way to see that P-consciousness is much more recalcitrant than the other three breeds we have singled out is to slip again into the shoes of the AI engineer. Why would a roboticist strive to give her creation the capacity to experience that which it's like to, say, eat an ice cream cone? It would seem that any reason the robot might have for consuming chocolate fudge swirl in a waffle cone could be a reason devoid of any appeal to P-consciousness. (Perhaps the robot needs cocoa for fuel; assume that other types of energy sources turned out to be a good deal more expensive. If so, it can be built to seek cocoa out when it observes that its power supply is low -- end of story, and no need to appeal to anything as mysterious as subjective awareness.) Evolution qua engineer should similarly find P-consciousness to be entirely superfluous.9 Which is to say that we have moved from Q1 to what we call the ``tough" question:
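The ``end of story" engineering point can be made vivid with a toy control loop: a refueling routine in which nothing anywhere answers to subjective awareness. Everything here (the class, the threshold, the method names) is our own illustrative invention, not drawn from any actual robotics system:

```python
# A toy sketch of the cocoa-seeking robot described above: it refuels
# whenever a bare sensor reading falls below a threshold. Note that no
# component here represents, or requires, anything like P-consciousness.

class Robot:
    LOW_POWER = 0.2   # refueling threshold (arbitrary)

    def __init__(self):
        self.power = 1.0

    def observe_power(self):
        # A bare sensor reading: a number, not an experience.
        return self.power

    def seek_cocoa(self):
        # Locomotion plus ingestion; purely functional refueling.
        self.power = 1.0

    def step(self, work_cost=0.3):
        # Do some work, then refuel if (and only if) power is low.
        self.power -= work_cost
        if self.observe_power() < self.LOW_POWER:
            self.seek_cocoa()

r = Robot()
for _ in range(10):
    r.step()
assert r.power >= Robot.LOW_POWER  # the loop keeps the robot fueled
```

The design choice is the argument: the conditional test on a sensor value does all the work that ``noticing one is hungry" would do, so nothing in the specification ever calls for subjective awareness.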
This question can in turn be further refined through ``zombification."