Suppose now that the bump-and-prong and brain engineers have married. Working with ``real flesh" neural networks turned out to be too untidy, and besides, their essential structure can be preserved in artificial neural networks. So the connectionist is born. His central device is synthetic, yes, but also brain-like; his strategy is evolution; his home base is MIT; and his triumph is to be not a symphony, but rather a robot: COG.
COG's creators, as cognoscenti will know, are a team led by Rodney Brooks and Lynn Andrea Stein, another member of which is Daniel Dennett, whose eloquent synopsis of the project boldly proclaims that COG is to be a humanoid robot -- capable of seeing and recognizing objects in its environment (including its ``mothers"), and of performing appropriate (physical) actions in response, all at a level that will encourage humans interacting with COG to ascribe to it such profound properties as consciousness.
COG, as its creators proudly admit, is completely devoid of a logicist soul: not a shred of knowledge representation and reasoning to be found under its hood -- no modus ponens, no modus tollens, and certainly no reductio ad absurdum. If COG is ever to reason in a fashion modellable by (say) first-order logic, such reasoning will need to emerge from the engine of evolution, not a knowledge-based injection. As Dennett says:
How plausible is the hope that COG can retrace the steps of millions of years of evolution in a few months or years of laboratory exploration? Notice first that [the evolution of COG and its descendants] is a variety of Lamarckian inheritance that no organic lineage has been able to avail itself of. The acquired design innovations of COG-I can be immediately transferred to COG-II, a speed-up of evolution of tremendous, if incalculable, magnitude. Moreover, if you bear in mind that, unlike the natural case, there will be a team of overseers ready to make patches whenever obvious shortcomings reveal themselves, and to jog the systems out of ruts whenever they enter them, it is not so outrageous a hope, in our opinion. But then, we are all rather outrageous people. (p. 140)
That COG's ``parents" are outrageous is something we gladly accept; that they are good scientists is a somewhat less sturdy claim.
For suppose the year is 2019, and our connectionists have produced remarkable offspring -- in the form of a robot (or android), SHER-COG (COG-$n$, for some $n$), capable of the sort of behavior associated with Sherlock Holmes, and possessed of all the concomitant mental powers -- deduction, introspection, and even, let's assume, full-fledged sentience. Now consider perhaps Holmes' greatest triumph: solving the mystery surrounding the disappearance of the racehorse known as ``Silver Blaze"; and suppose that SHER-COG is asked (by an analogue for Dr. Watson), after cracking this case, how it accomplished the feat. How can our robotic sleuth communicate an answer?
One thing that would surely fail to inform would be for SHER-COG to invite humans to examine its neural nets. In order to see this, you have only to imagine what it would be like to study these nets in action. How would information about the states of nodes and the weights on connections between them help you divine how SHER-COG deduced that the culprit in this mystery could not be a stranger to dogs on the farm that was Silver Blaze's home? The reasoning Watson strives to apprehend can be gleaned from neural nets about as easily as the bump-and-prong engineer can read Tchaikovsky's secret off the drum of a music box.
Of course, SHER-COG, like Tchaikovsky when revealing The Pathétique, could resort to introspection and natural language. It could proceed to explain its solution in (e.g.) English, in much the same way that Sherlock Holmes often explains things to the slower Dr. Watson. But this route concedes our point, for by it we end up once again invoking logicist AI in all its glory. This is so because in order to really understand what SHER-COG is telling us in English, to explain scientifically how it has done what it has done, it will be necessary to analyze this English formally; and the formal analysis will bring to bear the machinery of logical systems happily jettisoned by the connectionist.
For example, to truly understand Holmes' explanation, conveyed to the nonplussed Watson, of how he solved the mystery of Silver Blaze, it would do no good to hear from a ``modernized" Holmes: ``My dear Watson, it's really quite elementary, for undergirding my relevant ratiocination was intense C-fiber activity in neocortical areas 17 and 21. Here, let me show you the PET scan." In order to move toward understanding of how Holmes saved the day yet again, one must come to grasp the following chain of reasoning (which involves the famous clue about the ``dog doing nothing in the night-time").
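One plausible first-order reconstruction of that chain (the particular predicate symbols and constants are our own paraphrase of the inference as it figures in the story, with $s$ the intruder who took Silver Blaze and $d$ the stable dog) runs as follows:

\[
\begin{array}{lll}
1. & \forall x\, \big( \mathit{Intruder}(x) \wedge \mathit{StrangerTo}(x,d) \rightarrow \mathit{Barked}(d) \big) & \text{premise} \\
2. & \neg \mathit{Barked}(d) & \text{premise (the ``dog did nothing'')} \\
3. & \mathit{Intruder}(s) & \text{premise} \\
4. & \mathit{Intruder}(s) \wedge \mathit{StrangerTo}(s,d) \rightarrow \mathit{Barked}(d) & \text{from 1, universal instantiation} \\
5. & \neg \big( \mathit{Intruder}(s) \wedge \mathit{StrangerTo}(s,d) \big) & \text{from 2, 4, modus tollens} \\
6. & \neg \mathit{StrangerTo}(s,d) & \text{from 3, 5} \\
\end{array}
\]

The culprit, that is, was no stranger to the dog -- precisely the conclusion Holmes draws, and one reached by nothing more exotic than quantifier instantiation and propositional rules.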
At work here, of course, are none other than modus ponens and modus tollens (and standard quantifier rules), cornerstones of logicist AI. Absent these cornerstones, and the enlightening analysis they allow when brought to bear on what cognizers think and say, SHER-COG's success will be impenetrable, and will thus fail to advance our understanding of how detectives do what they do.
So, if in the future we desire not only to build human-matching robots, but to understand them, it seems to us that AI ought to saddle up just one ecumenical horse.