next up previous
Next: Thayse 1991 Up: LAI in today's implementations Previous: Kim

Hayes

Hayes et al. comprises a set of 20 papers covering these five areas:

  1. Mechanics of Knowledge Processing
  2. Inductive Formation of Programs and Descriptions
  3. Optimality and Error in Learning Systems
  4. Qualitative Representations of Knowledge
  5. Applications and Models of Knowledge Acquisition.

These papers cover advanced topics in AI research, and a comprehensive treatment of them would be encyclopedic. However, one paper in particular is, in our opinion, worth examining in some detail, both because it nicely captures the spirit and, to an appreciable degree, the technical orientation of Hayes et al., and because it interweaves the topics of defeasible reasoning, LP, implementation, meta-programming, and foundational issues (e.g., learning) central to the LAI-CAI clash. The paper in question is Chapter 8, ``Non-monotonic Learning'', by Bain and Muggleton. It introduces a non-monotonic formalism called closed world specialization. This formalism's implementation in Prolog yields an incremental learning system that revises its beliefs upon receiving new examples.

One way to learn from experience is to specialize ``over-general'' beliefs. Consider yet again the belief that all birds fly; it might be represented in LP by the following rule.

  flies(X) ← bird(X)

If the new conjunctive belief that an emu is a bird and emus can't fly were introduced, the above rule would no longer be true. The system would have to learn; it would have to specialize the rule so as to accurately represent the new belief. Incremental learning systems may generalize their belief sets in the face of a new example that these sets don't entail, or such systems may specialize their belief sets when contradicted by a new example. Previous approaches tend to over-specialize in an attempt to adapt to contradictory examples. A simple example makes the problem clear:

Consider the case where a system's initial belief set, B, includes the following statements.

  flies(X) ← bird(X)
  bird(emu)
  bird(eagle)

From B the system (an LP system here) can conclude that

  flies(emu)

Since it is not true that flies(emu), the system must revise its beliefs to a new set B' such that

  B' ⊬ flies(emu)

One way is to delete the clause flies(X) ← bird(X) and produce the new belief set

  B1 = { bird(emu), bird(eagle) }

Since B1 is incomplete with respect to flies(eagle), this set can be generalized to produce

  B2 = { bird(emu), bird(eagle), flies(eagle) }

The set B2, however, is an over-specialization. The fact flies(sparrow) is unfortunately not deducible from

  B2 ∪ { bird(sparrow) }
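The over-specialization just described can be replayed in a short Python sketch. This is our illustration, not the chapter's code: atoms are modelled as (predicate, argument) tuples, and the helper entails is a hypothetical stand-in for LP derivation with a single optional rule.

```python
# Sketch (ours, not Bain and Muggleton's): naive revision by deleting
# the clause flies(X) <- bird(X), then patching facts back in by hand.

def entails(facts, rule_active, query):
    """True if query is a stored fact, or is flies(x) derived via the
    rule flies(X) <- bird(X) when that rule is still active."""
    if query in facts:
        return True
    pred, arg = query
    return rule_active and pred == "flies" and ("bird", arg) in facts

B = {("bird", "emu"), ("bird", "eagle")}

# The initial theory over-generalizes: it entails flies(emu).
assert entails(B, True, ("flies", "emu"))

# Naive specialization: drop the rule, then generalize by re-adding
# flies(eagle) as a bare fact.
B2 = B | {("flies", "eagle")}
assert not entails(B2, False, ("flies", "emu"))   # contradiction removed
assert entails(B2, False, ("flies", "eagle"))     # recovered as a fact

# But the theory is now over-specialized: a newly observed bird
# no longer flies.
B2_plus = B2 | {("bird", "sparrow")}
assert not entails(B2_plus, False, ("flies", "sparrow"))
```

Deleting the rule throws away the general connection between being a bird and flying, which is exactly what the MGCS construction discussed next is designed to preserve.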

To solve the problem of over-specialization in incremental learning systems, Bain and Muggleton define the most-general-correct-specialization (MGCS) and develop a resolution-based method for constructing an MGCS from an incorrect clausal theory (an initial set of beliefs plus the new contradictory example). Using meta-programming (see our discussion of this topic above), the authors demonstrate how this method can be implemented in Prolog.

The method exploits the LP notion of negation as failure. A common (but limited) approach to non-monotonic reasoning is based on the ``Closed World Assumption'' (CWA) inference rule: if a statement p is not a logical consequence of a theory, then not(p) may be inferred. Prolog implements this inference by interpreting failure as negation. If Prolog fails to prove p, where p is a ground atom (a fact with no variables), then Prolog infers not(p).
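Negation as failure can be illustrated with a toy Python proof procedure. This is our sketch under simplifying assumptions: prove is a lookup-only stand-in for Prolog's proof search, and naf is a hypothetical helper naming the failure-as-negation step.

```python
# Illustrative sketch (ours, not the chapter's code) of the CWA rule:
# not(p) is inferred exactly when the attempt to prove p fails.

theory = {("bird", "emu"), ("bird", "eagle")}

def prove(atom):
    """Trivial 'proof' procedure: an atom is provable iff it is a
    stored fact (real Prolog would run SLD resolution here)."""
    return atom in theory

def naf(atom):
    """Negation as failure: infer not(atom) when atom is unprovable."""
    return not prove(atom)

assert prove(("bird", "emu"))
assert naf(("flies", "emu"))   # flies(emu) unprovable, so not(flies(emu))
```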

An example demonstrates the results of this approach. Consider again the initial belief set B and the fact that flies(emu) is not true. The proposed method produces a revised belief set B' composed of

  flies(X) ← bird(X), not(flightless(X))
  flightless(emu)
  bird(emu)
  bird(eagle)

The method introduces a new predicate, in this case flightless. Its negation (by failure), not(flightless(X)), is added to the conditions for flies(X), and the fact flightless(emu) is added to the belief set. The result is a revised belief set B' which doesn't incorrectly entail flies(emu); and flies(sparrow) can be correctly proved from B' ∪ {bird(sparrow)}.
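The effect of the revised theory can be checked with a minimal Python sketch. This is our reconstruction, not the authors' Prolog: holds is a hypothetical helper that hard-codes the single rule for flies/1, with the flightless membership test playing the role of negation as failure.

```python
# Sketch (ours) of the revised belief set: the rule for flies/1 carries
# the extra condition not(flightless(X)), and flightless(emu) is asserted.

facts = {("bird", "emu"), ("bird", "eagle"), ("flightless", "emu")}

def holds(query, facts):
    """True if query is a stored fact, or follows from the revised rule
    flies(X) <- bird(X), not(flightless(X))."""
    if query in facts:
        return True
    pred, arg = query
    if pred == "flies":
        # not(flightless(X)) via negation as failure: succeed when the
        # flightless fact is absent from the theory.
        return ("bird", arg) in facts and ("flightless", arg) not in facts
    return False

assert not holds(("flies", "emu"), facts)     # exception correctly blocked
assert holds(("flies", "eagle"), facts)       # other birds still fly
assert holds(("flies", "sparrow"), facts | {("bird", "sparrow")})
```

Unlike the deletion-based revision, the specialized rule keeps the general bird-implies-flies connection, so new birds such as sparrow are still covered without further repair.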

Several other papers in the Hayes et al. volume (e.g., ``Inductive Formation of Programs and Descriptions'', yet another case of work that exploits meta-programming) make considerable headway against the common but mistaken notion that learning is outside the reach of LAI and is instead the forte of CAI.



Selmer Bringsjord
Mon Nov 17 14:57:06 EST 1997