...1
... Consciousness?2
We are indebted to Stevan Harnad for helpful electro-conversations, and to Ned Block for corporeal conversations, on some of the issues discussed herein.
... cracked3
These things are not necessarily ranked in the order listed here.
... unconscious.4
Def 1's time index (which ought, by the way, to be a double time index -- but that's something that needn't detain us here) is necessary; this is so in light of thought-experiments like the following. Suppose (here, as I ask you to suppose again below) that while reading Tolstoy's Anna Karenina I experience the state of feeling for Levin's ambivalence toward Kitty. Denote this state by $s^\ast$; suppose that I have $s^\ast$ at 3:05 pm sharp; suppose also that I continue reading without interruption until 3:30 pm, at which time I put down the novel; and assume, further, that from 3:05:01 -- the moment at which Levin and Kitty temporarily recede from the narrative -- to 3:30 I am completely absorbed in the tragic romance between Anna and Count Vronsky. Now, if at 3:30:35 I report to a friend, as I sigh and think back for the first time over the literary terrain I have passed, that I feel for Levin, are we then to say that at 3:30:35 $s^\ast$, by virtue of this report and the associated higher-order state targeting $s^\ast$, becomes a conscious state? If so, then we give me the power to change the past, something I cannot be given.
... A-conscious.5
The problem is that probably any computational artifact will qualify as A-conscious. We think that does considerable violence to our pre-analytic concept of consciousness, mongrel or not. One of us (Bringsjord) has suggested, accordingly, that all talk of A-consciousness be supplanted by suitably configured constituents from Block's definiens. All of these issues -- treated in [Bringsjord, minga] -- can be left aside without harming the present enquiry.
... S-consciousness.6
As Block points out, there are certain behaviors which seem to suggest that chimps enjoy S-consciousness: when colored spots are painted on the foreheads of anesthetized chimps, and the creatures wake and look in a mirror, they try to wipe the spots off [Povinelli, 1997]. Whether or not the animals really are self-conscious is beside the point, at least for our purposes. But that certain overt behavior is sometimes taken to be indicative of S-consciousness is relevant to what we are about herein (for reasons to be seen momentarily).
... memory.7
You may be thinking: Why should a robot need to believe such a thing? Why not just be able to do it? After all, a simple phototactic robot need not believe that it knows where the light is. Well, actually, the case we mention here is a classic one in AI. The trick is that unless the robot believes it has the combination to the lock in memory, it is irrational for it to fire off an elaborate plan to get to the locked door. If the robot doesn't know the combination, getting to the door will have been a waste of time.
... us8
This is a bit loose; after all, the engineer could want to make a conscious robot specifically for the purposes of studying consciousness. But we could tighten Q2 to something like
Q2'
Why, specifically, might an AI engineer try to give her artifact consciousness in order to make it more productive?
... superfluous.9
John Pollock, whose engineering efforts, he avows, are dedicated to the attempt to literally build an artificial person, holds that emotions are at bottom just timesavers, and that with enough raw computing power, the advantages they confer on us can be given to an ``emotionless" AI -- as long as the right algorithms are in place. See his discussion of emotions and what he calls Q&I modules in [Pollock, 1995].
... dead.10
This scenario would seem to resemble a real-life phenomenon: the so-called ``Locked-In" Syndrome. See [Plum & Posner, 1972] (esp. the fascinating description on pages 24-5) for the medical details.
... movies.11
The zombies of cinematic fame apparently do have real-life correlates created with a mixture of drugs and pre-death burial: see [Davis, 1985], [Davis, 1988].
... cranium.12
For example, the toolbox is opened and the silicon supplantation elegantly pulled out in [Cole & Foelber, 1984].
... him).13
Despite having no such talents, one of us (Bringsjord) usually spends twenty minutes or so telling a relevant short story to students when he presents zombies via V2. In this story, the doomed patient in V2 -- Robert -- first experiences an unintended movement of his hand, which is interpreted by an onlooker as perfectly natural. After more bodily movements of this sort, an unwanted sentence, to Robert's mounting horror, comes involuntarily up from his voicebox -- and is interpreted by an interlocutor as communication from Robert. The story describes how this weird phenomenon intensifies $\ldots$ and finally approaches Searle's ``late stage" description in V2 above. Now someone might say: ``Now wait a minute. Internal amazement at dwindling consciousness requires differing cognition, a requirement which is altogether incompatible with the preservation (ex hypothesi) of identical `information flow'. That is, in the absence of an argument to the effect that ordinary cognition (never mind consciousness) fails to supervene on `information flow', V2 is incoherent." The first problem with this objection is that it ignores the ordering of events in the story. Robert, earlier, has had his brain supplanted with silicon workalikes -- in such a way that all the same algorithms and neural nets are in place, but instantiated in different physical stuff. Then, a bit later, while these algorithms and nets stay firmly and smoothly in place, Robert fades away. The second problem with the objection is that it's a clear petitio: the objection is that absent an argument that consciousness is conceptually distinct from information flow, the thought-experiment fails (is incoherent). But the thought-experiment is designed for the specific purpose of showing that information flow is conceptually distinct from consciousness! If X maintains that, necessarily, if p then q, and Y, in an attempt to overthrow X's modal conditional, describes a scenario in which, evidently, p but $\neg q$, it does no good for X to say: ``Yeah, but you need to show that p can be present without q." In general, X's only chance is to grapple with the issue in earnest: to show that the thought-experiment is somehow defective, despite appearances to the contrary.
... 304-313).14
This is an argument on which Dennett has recently placed his chips: in ``The Unimagined Preposterousness of Zombies" [Dennett, 1995] he says that the argument in question shows that zombies are not really conceivable.
...$\Diamond_p$.15
For cognoscenti: we could then invoke some very plausible semantic account of this formalism suitably parasitic on the standard semantic account of $\Diamond$. For a number of such accounts, see [Earman, 1986].
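For instance, here is one minimal clause of the sort we have in mind (our sketch, in the standard Kripkean vein, and not drawn verbatim from [Earman, 1986]): where $R_p$ relates worlds just in case they share the same physical laws,

\begin{displaymath}w \models \Diamond_p \phi \;\; \mbox{iff} \;\; \mbox{there is a world } w' \mbox{ such that } R_p(w,w') \mbox{ and } w' \models \phi.\end{displaymath}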
... possible.16
It is of course much easier to convince someone that it's logically possible that it's physically possible that Jones' brain is transplanted: one could start by imagining (say) a world whose physical laws allow for body parts to be removed, isolated, and then made contiguous, whereupon healing and reconstitution happen automatically, in a matter of minutes.
... possible.17
Chalmers gives the case of a mile-high unicycle, which certainly seems logically possible. The burden of proof would surely fall on the person claiming that such a thing is logically impossible. This may be the place to note that Chalmers considers it obvious that zombies are both logically and physically possible -- though he doesn't think zombies are naturally possible. Though we disagree with this position, it would take us too far afield to consider our objections. By the way, Chalmers refutes ([Chalmers, 1996], 193-200) the only serious argument for the logical impossibility of zombies not mentioned in this paper, one due to Sydney Shoemaker [Shoemaker, 1975].
... brains.18
A quick encapsulation: Artificial neural nets (or, as they are often simply called, `neural nets') are composed of units or nodes designed to represent neurons, which are connected by links designed to represent dendrites, each of which has a numeric weight. It is usually assumed that some of the units work in symbiosis with the external environment; these units form the sets of input and output units. Each unit has a current activation level, which is its output, and can compute, based on its inputs and weights on those inputs, its activation level at the next moment in time. This computation is entirely local: a unit takes account of only its neighbors in the net. The local computation proceeds in two stages. First, the input function, $in_i$, gives the weighted sum of the unit's input values, that is, the sum of the input activations multiplied by their weights:

\begin{displaymath}in_i = \sum_j W_{ji} a_j.\end{displaymath}

In the second stage, the activation function, $g$, takes the input from the first stage as argument and generates the output, or activation level, $a_i$:

\begin{displaymath}a_i = g(in_i) = g \left( \sum_j W_{ji} a_j \right).\end{displaymath}

One common (and confessedly elementary) choice for the activation function (which usually governs all units in a given net) is the step function, which usually has a threshold $t$ that sees to it that a 1 is output when the input is greater than $t$, and that 0 is output otherwise. This is supposed to be ``brain-like" to some degree, given that 1 represents the firing of a pulse from a neuron through an axon, and 0 represents no firing. As you might imagine, there are many different kinds of neural nets. The main distinction is between feed-forward and recurrent nets. In feed-forward nets, as their name suggests, links move information in one direction, and there are no cycles; recurrent nets allow for cycling back, and can become rather complicated. Recurrent nets underlie the MONA-LISA system we describe below.
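To fix ideas, the two-stage computation just described can be put in a few lines of Python. This is a minimal sketch, not code from any system discussed herein; the function names and the threshold value are ours, purely for illustration.

\begin{verbatim}
# Minimal sketch of one unit's local computation, per the account above.
# Names and the threshold value are illustrative only.

def step(x, t=0.5):
    """Step activation function g: output 1 iff input exceeds threshold t."""
    return 1 if x > t else 0

def unit_activation(weights, inputs, g=step):
    """Two stages: in_i = sum_j W_ji * a_j, then a_i = g(in_i)."""
    in_i = sum(w * a for w, a in zip(weights, inputs))  # stage 1: weighted sum
    return g(in_i)                                      # stage 2: activation

# A unit with three neighbors: 0.4*1 + (-0.2)*0 + 0.7*1 = 1.1 > 0.5, so 1.
print(unit_activation([0.4, -0.2, 0.7], [1, 0, 1]))
\end{verbatim}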
... zombies."19
As Ned Block has recently pointed out to one of us (Bringsjord), since at least all mammals are probably P-conscious, the accident would have had to happen quite a while ago.
... nowhere.20
This is eloquently explained by Flanagan & Polger [Flanagan & Polger, 1995], who show that some of the functions attributed to P-consciousness can be rendered in information-processing terms.
... follows.21
Some may object to P2 in this way: ``Prima facie, this is dreadfully implausible, since each (x and $\phi$) may be an effect of a cause prior to both. This has a form very similar to: provided there is constant conjunction between x and $\phi$, and someone somewhere thinks x is centrally employed in $\phi$-ing, x actually does facilitate $\phi$-ing."

Unfortunately, this counter-argument is very weak. The objection is an argument from analogy -- one that supposedly holds between defective inferences to causation from mere constant conjunction and the inference P2 promotes. The problem is that the analogy breaks down: in the case of P2, there is more, much more, than constant conjunction (or its analogue) to recommend the inference -- as is explicitly reflected in P2's antecedent, which makes reference to evidence from reports and from the failure of certain engineering attempts. (Some of the relevant reports are seen in the case of Ibsen. One such report is presented below.)

... P-consciousness.22
Henrik Ibsen wrote:
I have to have the character in mind through and through, I must penetrate into the last wrinkle of his soul. I always proceed from the individual; the stage setting, the dramatic ensemble, all that comes naturally and does not cause me any worry, as soon as I am certain of the individual in every aspect of his humanity. (reported in [Fjelde, 1965], p. xiv)
Ibsen's modus operandi is impossible for an agent incapable of P-consciousness. And without something like this modus operandi, how is one to produce creative literature?

At this point we imagine someone objecting as follows. ``The position expressed so far in this paper is at odds with the implied answer to the rhetorical question, Why can't impenetrable zombies write creative literature? Why can't an impenetrable zombie report on his modus operandi exactly as Ibsen did, and then proceed to write some really great stories? If a V2 zombie is not only logically but even physically possible, then it is physically possible that Ibsen actually had the neural implant procedure performed on him as a teenager, and no one ever noticed (and, of course, no one could notice)."

The reply to this objection is simple: Sure, there is a physically possible world w in which Ibsen's output is there but P-consciousness isn't. But the claim we're making, and the one we need, is that internal behavior of the sort Ibsen actually engaged in (``looking out through the eyes of his characters") requires P-consciousness.

... short-short23
Our term for stories about the length of Betrayal.1. Stories of this type are discussed in [Bringsjord & Lally, 1997].
... Rensselaer,24
Information can be found at http://www.rpi.edu/dept/ppcs/MM/c-agents.html.
... not.25
For reasons explained in [Bringsjord & Ferrucci, 1997], BRUTUS.1 does seem to satisfy the most sophisticated definition of creativity in the literature, one given by [Boden, 1995].
... space.26
It may be thought that brute force can obviously enumerate a superset of $\cal I$, on the strength of reasoning like this:
Stories are just strings over some finite alphabet. Given the stories put on display on behalf of BRUTUS.1, the alphabet in question would seem to be { Aa, Bb, Cc, $\ldots$, :, !, ;, $\ldots$}, that is, basically the characters on a computer keyboard. Let's denote this alphabet by `E.' Elementary string theory tells us that though $E^\ast$, the set of all strings that can be built from E, is infinite, it's countably infinite, and that therefore there is a program P which enumerates $E^\ast$. (P, for example, can resort to lexicographic ordering.) From this it follows that the set of all stories is itself countably infinite.
However, even if we concede that the set of all stories is in some sense typographic, there is good reason to think it needn't be countably infinite. Is the set $\cal A$ of all letter `A's countable? (Hofstadter [Hofstadter, 1982] says ``No.") If not, then simply imagine a story associated with every element of $\cal A$. For a parallel route to the same result, think of a story about $\pi$, a story about $\sqrt{2}$, indeed a story for every real number!

On the other hand, stories, in the real world, are often neither strings nor, more generally, typographic. After all, authors often think about, expand, refine, $\ldots$ stories without considering anything typographic whatsoever. They may ``watch" stories play out before their mind's eye, for example. In fact, it seems plausible to say that strings (and the like) can be used to represent stories, as opposed to saying that the relevant strings, strictly speaking, are stories.
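Incidentally, the enumerating program P invoked in the quoted reasoning is easy enough to exhibit. Here is a minimal Python sketch (the three-letter alphabet and all names are ours, purely for illustration); it lists $E^\ast$ in shortlex order, that is, by length and then lexicographically.

\begin{verbatim}
# Sketch of an enumerator for E*: every string over a finite alphabet
# appears exactly once, shortest strings first (shortlex order).
from itertools import count, islice, product

def enumerate_strings(alphabet):
    """Yield every string over `alphabet`, in shortlex order."""
    for length in count(0):                      # lengths 0, 1, 2, ...
        for chars in product(alphabet, repeat=length):
            yield ''.join(chars)

# First eight elements of E* for the illustrative alphabet E = {a, b, c}:
print(list(islice(enumerate_strings('abc'), 8)))
# -> ['', 'a', 'b', 'c', 'aa', 'ab', 'ac', 'ba']
\end{verbatim}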

... linked.27
For instance, although both decimal and Roman numeral notations can represent numbers, the process of multiplying Roman numerals is much more difficult than the process of multiplying decimals. Of the two notations, decimal notation is in general the better representation for enabling mathematical processes [Marr, 1982].
... representation.28
We imagine some readers asking: ``Mightn't `morphogenesis' fit better?" We use `epigenesis' in order to refer to the complete process of mapping from the genotype to the phenotype. However, morphogenesis does capture the essence of the process used in our (Noel's) work; frankly, we are smitten with the analogy. Choosing the atomic features (say, pixels for images, Hardy waves for sound) is similar to starting with a specialized cell, then forming the cell into some organization -- morphogenesis.
... selecting.29
Our intuition, for what it's worth, is that humans here provide a holistic evaluation function that mimics the forces of nature.
... phenotype.30
Of course, even inspiration, insight, and phenomenal consciousness can preclude creativity, if one warps the example enough. But our point is really a practical warning: limiting and bounding the phenotype, ceteris paribus, can preclude creativity, so computational engineers beware!
... pre-determined.31
Here are some more details on MONA-LISA: The DNA code is a vectorized string in which each gene represents one of the pixels in an associated image (usually a 25 x 25 pixel array). The level of a gene encodes the color of the pixel (usually 4 grays, or 8 colors). Fifty images make up each generation, of which the evolver selects ten to be the parents for the next generation. Reproduction is accomplished by selecting two parents at random and generating the offspring's DNA by randomly and uniformly selecting between the genes of the two parents at each allele site. Each population after the initial generation consists of the ten parents and forty offspring; incest is allowed. MONA-LISA is a Boltzmann machine; as such, its activation function is stochastic. Motivated readers may find it profitable to refer back to the brief account of neural nets given above, wherein the role of an activation function is discussed.
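For concreteness, the reproduction step just described can be sketched as follows. This is our illustrative reconstruction in Python, not MONA-LISA's actual code; all names and the use of Python's random module are our own assumptions.

\begin{verbatim}
# Illustrative reconstruction of MONA-LISA's reproduction step:
# offspring DNA is built by picking, at each allele site, the gene of
# one of two randomly chosen parents (uniform crossover; incest allowed).
import random

GENES = 25 * 25   # one gene per pixel of a 25 x 25 image
LEVELS = 4        # e.g., 4 gray levels per gene

def make_offspring(parents, n=40):
    offspring = []
    for _ in range(n):
        p1, p2 = random.choice(parents), random.choice(parents)
        child = [random.choice(pair) for pair in zip(p1, p2)]
        offspring.append(child)
    return offspring

# Next generation = the ten selected parents plus forty offspring (50 total).
parents = [[random.randrange(LEVELS) for _ in range(GENES)] for _ in range(10)]
next_generation = parents + make_offspring(parents)
assert len(next_generation) == 50
\end{verbatim}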
... work.32
We conclude with some brief remarks on point 3: As one might expect, an increase in the creative capacity of a system brings an increase in the system's complexity. Our evolutionary system creates representations with a new level of complexity over previous work in evolutionary computation. The increase in complexity is due to an increase in the cardinality of the relationships, an increase in the level of emergent properties, and an increase in what Löfgren calls interpretation and descriptive processes [Löfgren, 1974]. The potential for complexity in a representation is determined by the relationships among the features. In the image elicitation system, the image is described at the atomic level. This low-level description allows for an extremely large number of relationships as compared to systems that use a higher, feature-level representation. Image elicitation requires that features emerge along with the configuration, rather than being defined in advance with only the configuration evolving. As stated, our system promotes both polygenic and pleiotropic relationships between the genotype and the phenotype. Because of these relationships, the complexity of interpretation and description increases.
... gold.33
For a complete treatment of super-computation and related matters, including literary creativity, see [Bringsjord & Zenzen, 1997] and [Bringsjord, mingb].
Selmer Bringsjord
2000-03-13