
Why COG is Doomed

Suppose now that the bump-and-prong and brain engineers have married. Working with "real flesh" neural networks turned out to be too untidy, and besides, their essential structure can be preserved in artificial neural networks. So the connectionist is born. His central device is synthetic, yes, but also brain-like; his strategy is evolution; his home base is MIT; and his triumph is to be not a symphony, but rather a robot: COG.

COG's creators, as cognoscenti will know, are a team led by Rodney Brooks and Lynn Andrea Stein, a team that also includes Daniel Dennett, whose eloquent synopsis [4] of the project boldly proclaims that COG is to be a humanoid robot -- capable of seeing and recognizing objects in its environment (including its "mothers"), and of performing appropriate (physical) actions in response, all at a level that will encourage humans interacting with COG to ascribe to it such profound properties as consciousness.

COG, as its creators proudly admit, is completely devoid of a logicist soul: not a shred of knowledge representation and reasoning is to be found under its hood -- no modus ponens, no modus tollens, and certainly no reductio ad absurdum. If COG is ever to reason in a fashion modellable by (say) first-order logic, such reasoning will need to emerge from the engine of evolution, not from a knowledge-based injection. As Dennett says:

How plausible is the hope that COG can retrace the steps of millions of years of evolution in a few months or years of laboratory exploration? Notice first that [the evolution of COG and its descendants] is a variety of Lamarckian inheritance that no organic lineage has been able to avail itself of. The acquired design innovations of COG-I can be immediately transferred to COG-II, a speed-up of evolution of tremendous, if incalculable, magnitude. Moreover, if you bear in mind that, unlike the natural case, there will be a team of overseers ready to make patches whenever obvious shortcomings reveal themselves, and to jog the systems out of ruts whenever they enter them, it is not so outrageous a hope, in our opinion. But then, we are all rather outrageous people. ([4], p. 140)

That COG's "parents" are outrageous is something we gladly accept; that they are good scientists is a somewhat less sturdy claim.

For suppose the year is 2019, and our connectionists have produced remarkable offspring -- in the form of a robot (or android), SHER-COG (COG-n, for some suitably large n), capable of the sort of behavior associated with Sherlock Holmes, and possessed of all the concomitant mental powers -- deduction, introspection, and even, let's assume, full-fledged sentience. Now consider what is perhaps Holmes' greatest triumph: solving the mystery surrounding the disappearance of the racehorse known as "Silver Blaze" [5]; and suppose that SHER-COG is asked (by an analogue for Dr. Watson), after cracking this case, how it accomplished the feat. How can our robotic sleuth communicate an answer?

One thing that would surely fail to inform would be for SHER-COG to invite humans to examine its neural nets. In order to see this, you have only to imagine what it would be like to study these nets in action. How would information about the states of nodes and the weights on connections between them help you divine how SHER-COG deduced that the culprit in this mystery could not be a stranger to dogs on the farm that was Silver Blaze's home? The reasoning Watson strives to apprehend can be gleaned from neural nets about as easily as the bump-and-prong engineer can read Tchaikovsky's secret off the drum of a music box.
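
To make the opacity concrete, consider a minimal sketch (ours, not the COG team's): even a toy network hand-wired so that its behavior provably realizes a simple logical connective presents its parameters as nothing but bare numbers.

    # A toy illustration (not from the COG project): a one-unit threshold
    # network hand-wired to compute the logical AND of two binary inputs.
    import numpy as np

    W = np.array([[0.6, 0.6]])   # connection weights
    b = np.array([-1.0])         # bias: the unit fires only when both inputs are 1

    def net(x):
        return int((W @ x + b > 0).item())

    for x in [(0, 0), (0, 1), (1, 0), (1, 1)]:
        print(x, "->", net(np.array(x)))
    # Nothing in W or b announces "conjunction", let alone modus ponens; the
    # rule is recoverable only by probing behavior, not by reading the weights.

Scaled up to the millions of weights a SHER-COG would carry, the opacity only deepens.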

Of course, SHER-COG, like Tchaikovsky when revealing The Pathétique, could resort to introspection and natural language. It could proceed to explain its solution in (e.g.) English, in much the same way that Sherlock Holmes often explains things to the slower Dr. Watson. But this route concedes our point, for by it we end up once again invoking logicist AI in all its glory. This is so because in order to really understand what SHER-COG is telling us in English, to explain scientifically how it has done what it has done, it will be necessary to analyze this English formally; and the formal analysis will bring to bear the machinery of logical systems happily jettisoned by the connectionist.

For example, to truly understand Holmes' explanation, conveyed to the nonplussed Watson, of how he solved the mystery of Silver Blaze, it would do no good to hear from a "modernized" Holmes: "My dear Watson, it's really quite elementary, for undergirding my relevant ratiocination was intense C-fiber activity in neocortical areas 17 and 21. Here, let me show you the PET scan." In order to move toward an understanding of how Holmes saved the day yet again, one must come to grasp the following chain of reasoning (which involves the famous clue about the "dog doing nothing in the night-time").

  1. If the dog didn't bark, then the person responsible for lacing the meal with opium couldn't be a stranger.
  2. The dog didn't bark.
  3. The person responsible for lacing the meal with opium couldn't be a stranger. (from 1. and 2.)
  4. Simpson was a stranger.
  5. Simpson was not responsible. (from 3. and 4.)

At work here, of course, are none other than modus ponens and modus tollens (and standard quantifier rules), cornerstones of logicist AI. Absent these cornerstones, and the enlightening analysis they allow when brought to bear on what cognizers think and say, SHER-COG's success will be impenetrable, and will thus fail to advance our understanding of how detectives do what they do.
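
Indeed, the chain admits of a fully formal rendering; here is a minimal sketch in Lean (our illustration; the predicate and constant names are ours, not the paper's), in which step 3 falls out by modus ponens and step 5 by modus tollens after instantiating the quantifier at Simpson:

    -- A Lean 4 sketch of the Silver Blaze argument (names are illustrative).
    theorem simpson_not_responsible
        {Person : Type} (Barked : Prop)
        (Responsible Stranger : Person → Prop) (simpson : Person)
        (h1 : ¬Barked → ∀ p, Responsible p → ¬Stranger p)  -- premise 1
        (h2 : ¬Barked)                                      -- premise 2
        (h4 : Stranger simpson)                             -- premise 4
        : ¬Responsible simpson :=                           -- conclusion 5
      -- Step 3: modus ponens on h1 and h2 yields the rule for every person;
      -- instantiating at simpson and applying modus tollens to h4 gives step 5.
      fun hresp => (h1 h2 simpson hresp) h4

That such a derivation type-checks is precisely the kind of scientific transparency the logicist offers and the bare network withholds.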

So, if in the future we desire not only to build human-matching robots, but to understand them, it seems to us that AI ought to saddle up just one ecumenical horse.


