
A framework for evaluating LAI and CAI

Recall that our account of an `Implemented LAI System' was based on five categories.

We can generalize this scheme so that it covers CAI. To do so, we need only change ``Ontology'' to ``Analysis'', where the new term covers not just the relation-based carving up of domains practiced by logicists, but also the broader notion of arriving at a conceptualization of the environment that may well be non-symbolic. Similarly, we can change the ``Knowledge'' category to ``Acquisition'', so that it encompasses not just the building up of information in logicist knowledge representation schemes, but also the connectionist notion of the evaluation functions that guide successful neural nets. We can leave the term ``Derivation'' alone, so long as we reinterpret it to include not only the proof-theoretic construal of the term favored by LAIniks, but also the broader notion of information-processing performed by neural nets. The remaining two categories can remain essentially untouched. Now, how do we deploy the expanded framework?
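The generalization amounts to little more than a relabeling and broadening of the original categories. The following minimal Python gloss is ours, not part of the framework itself, and is offered only to record the mapping just described; the two unnamed categories simply pass through unchanged.

    # Minimal gloss of the expanded scheme (illustrative only); the renamings
    # are exactly those described above, and the remaining two categories of
    # the original five-fold scheme pass through unchanged.
    LAI_TO_GENERALIZED = {
        "Ontology": "Analysis",      # broadened to cover non-symbolic conceptualization
        "Knowledge": "Acquisition",  # broadened to cover the functions guiding neural nets
        "Derivation": "Derivation",  # reinterpreted: proofs or neural information-processing
    }

    def generalize(category):
        """Map an original LAI category to its counterpart in the expanded framework."""
        return LAI_TO_GENERALIZED.get(category, category)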

Honest LAIniks will concede that their systems have fared poorly in realizing the acquisition function. Pure LAI systems, as we noted earlier in the paper, traditionally remove this function from the system itself: an external human designer simply populates a pre-defined ontology, rather than the system acquiring information through a sensor/effector-based interchange with the environment. (Of course, today, sophisticated LAIniks welcome hybrid systems in which connectionist-based sensors and effectors mediate between environmental stimuli and declarative knowledge representation and reasoning; cf. Pollock 1995. Alert readers will have noticed that our hypothetical robotank, ROB, used above for expository purposes, is such a hybrid system.) On the other hand, logic, as we attempt to show below, is apparently ideally suited as a vehicle for enabling clear human-human (and human-machine and machine-machine) communication.
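The contrast between a pure LAI architecture and the hybrid arrangement just mentioned can be sketched in a few lines. The sketch below is hypothetical: the predicate and function names are ours, and the ``sensor'' is a toy stand-in for whatever trained connectionist module one prefers. Its only purpose is to show where the environment enters the picture in each case.

    # Pure LAI: the designer, not the system, supplies the facts outright.
    knowledge_base = [
        ("Obstacle", "boulder1"),
        ("NorthOf", "boulder1", "rob"),
    ]

    # Hybrid (ROB-style): a connectionist sensor mediates between raw stimuli
    # and the declarative representation over which the reasoner works.
    def connectionist_sensor(raw_stimulus):
        """Toy stand-in for a trained net that maps raw input to symbolic facts."""
        if max(raw_stimulus) > 0.8:   # toy threshold standing in for learned weights
            return [("Obstacle", "boulder1"), ("NorthOf", "boulder1", "rob")]
        return []

    def perceive_and_update(kb, raw_stimulus):
        """Feed the sensor's symbolic output into the knowledge base."""
        kb.extend(connectionist_sensor(raw_stimulus))
        return kb

    # Example: the hybrid system, unlike the pure one, updates itself from stimuli.
    perceive_and_update([], [0.1, 0.9, 0.3])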

Connectionists have enjoyed impressive success in the acquisition, recognition and action functions. Perhaps the crystallization of such success will be seen soon (if it isn't already) in the robot COG and its descendants, the creators of which are a team led by Rodney Brooks and Lynn Andrea Stein at MIT. As Dennett's (1994) eloquent synopsis of the COG project makes clear, COG is to be a humanoid robot - a robot capable of seeing and recognizing objects in its environment (including its ``mothers''), and of performing appropriate (physical) actions in response, all at a level that will encourage humans interacting with COG to ascribe to it such properties as consciousness. (Dennett has reported that some people are already inclined to make such ascriptions.) For our purposes, it's important to note that COG is completely devoid of LAI technology. If COG is ever to reason in a fashion worthy of the logical systems we have discussed above, such reasoning will need to emerge from the engine of simulated, accelerated evolution produced in the lab that is COG's home. Dennett says:

How plausible is the hope that COG can retrace the steps of millions of years of evolution in a few months or years of laboratory exploration? Notice first that [the evolution of COG and its descendants] is a variety of Lamarckian inheritance that no organic lineage has been able to avail itself of. The acquired design innovations of COG-I can be immediately transferred to COG-II, a speed-up of evolution of tremendous, if incalculable, magnitude. Moreover, if you bear in mind that, unlike the natural case, there will be a team of overseers ready to make patches whenever obvious shortcomings reveal themselves, and to jog the systems out of ruts whenever they enter them, it is not so outrageous a hope, in our opinion. But then, we are all rather outrageous people (p. 140).

That COG's ``parents'' are outrageous is something we gladly accept; that they are good scientists is, as we attempt to show in the next section, another matter. Before turning to that section, we finish our brief analysis of CAI and LAI under our five-fold scheme.

Interestingly, neither camp has effectively addressed the automation of the analysis function. As far as we know, there is no information-processing model, symbolic or non-symbolic, that can generate an adequate ontology from exposure to an arbitrary environment. Both camps, CAI and LAI, build architectures that trivialize the analysis function by relying, in some fashion, on human intervention to supply this component. LAIniks manually populate a pre-defined ontology. CAIniks use evaluation functions to describe the important patterns that are used to recognize and categorize input data; these evaluation functions represent the system's fundamental conceptualization of the external world, and they are assembled with key insight from human designers. Inventing a mechanism able to automate the analysis task remains one of AI's great challenges.
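The shared reliance on human intervention can be made vivid with a small, hypothetical sketch. The ontology, predicate names, and feature function below are our own inventions, not drawn from any particular system; the point is only that in both camps the conceptualization of the environment is handed to the system rather than generated by it.

    # Hypothetical illustration: in both camps the analysis is supplied by hand.

    # LAI: the ontology -- which relations exist at all -- is fixed in advance
    # by the human designer; the system never derives it from the environment.
    ONTOLOGY = {"Tank", "Obstacle", "NorthOf"}

    # CAI: the evaluation (feature) function -- which patterns in the raw input
    # matter -- is likewise chosen by the human designer; the net merely learns
    # weights over the conceptualization it is handed.
    def evaluation_features(raw_pixels):
        """Hand-chosen summary of the input supplied by the designer."""
        mean_intensity = sum(raw_pixels) / len(raw_pixels)
        contrast = max(raw_pixels) - min(raw_pixels)
        return [mean_intensity, contrast]

    # What neither camp has automated: a procedure that, given exposure to an
    # arbitrary environment, produces ONTOLOGY or evaluation_features itself.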

Our diagnosis of CAI and LAI is summed up in Table 2.

Table 2: CAI-LAI Breakdown

We now turn to the justification of the entries in that table under ``Communication.''


