The next cluster of logical systems we discuss fall under
**defeasible logic** (or revisable, or nonmonotonic, logic),
an area that has for a long
while justifiably received the attention of many LAIniks.
Such systems mark attempts to circumvent
**monotonicity**, a property that,
put in terms of the provability relation ⊢, amounts to the fact that if a formula φ is provable
from a set Φ of formulas, then adding new information to Φ, no matter what that
information is,
never invalidates the proof of φ. Clearly, our everyday
reasoning isn't monotonic. Nearly all readers, for example, will at one time
or another have affirmed the proposition that birds can fly. Expressed in
straight first-order logic, this fact could be captured by

∀x(Bird(x) → Flies(x)).    (4)
But such readers will also have confronted the fact that ostriches *can't* fly.
Clearly, there is a need to revise what had earlier been believed. But, just
as clearly, it's very implausible that humans, in everyday reasoning, employ
modifications of (4) like

∀x((Bird(x) ∧ ¬Ostrich(x)) → Flies(x))
because, for starters, there are an infinite number of exceptions to (4).
The solution,
in general, would seem to
be that we use a system that allows us to make *defeasible*
or *revisable*
inferences. The challenge is to find and implement a correlate
to this system.
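The nonmonotonic pattern at issue can be sketched in a few lines of Python (a toy illustration of our own; the knowledge-base representation and predicate names are purely hypothetical, not drawn from any of the systems discussed below):

```python
# Toy defeasible inference: conclude Flies(x) from Bird(x) unless the
# knowledge base explicitly records that x cannot fly.
def flies(kb, x):
    return ("Bird", x) in kb and ("CannotFly", x) not in kb

kb = {("Bird", "tweety")}
print(flies(kb, "tweety"))   # True: the default conclusion goes through

# Learning that tweety is an ostrich adds information ...
kb |= {("Ostrich", "tweety"), ("CannotFly", "tweety")}
print(flies(kb, "tweety"))   # False: the earlier conclusion is retracted
```

In a monotonic system, by contrast, a conclusion once established can never be withdrawn when the premise set grows.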

There are a number of different families of attacks on the revisable reasoning challenge, among which are the following.

- Reiter's default logic (1980). The key idea here is domain-dependent inference rules, "defaults", which permit rules with exceptions to be expressed without the exceptions having to be enumerated.
- Drew McDermott's non-monotonic modal logic (1982), built upon predicate modal logic and designed to provide a domain-independent proof theory allowing for the characterization of maximal sets of consistent assertions that can be inferred from a knowledge base.
- Robert Moore's autoepistemic logic (1985), designed to model the introspective ability of an ideal agent whose beliefs can be revised in light of this introspection.
- Minimization techniques, which include the **closed-world assumption** (treated briefly below in §6) and **circumscription**, a rather tricky technique introduced by McCarthy (1980) and refined by, among others, Vladimir Lifschitz (1987).
- Oscar, a theory of defeasible reasoning devised by Pollock (1995), which is somewhat similar to default logic, but which, in our opinion, is superior (technically and epistemologically) to all other approaches to the problem. ("Oscar" is the name of the "artilect" who reasons using Pollock's theory.)

Chapter 5 of Thayse 1989, ``Formalization of Revisable Reasoning'', provides what is in our opinion one of the best introductions anywhere to minimization techniques. The treatment explains the intuitions behind minimization techniques and then proceeds to formalize the approach, gradually moving from the simplest incarnation of minimization techniques to circumscription.

Though Drew McDermott once claimed that only
two people on the planet understood circumscription,
the basic idea behind the technique, as Thayse 1989
shows, is really quite simple: Suppose some
agent has knowledge represented by a set Φ of formulas
in first-order logic.
We partition the relation symbols, or predicates, deployed in Φ
into three sets, P, Z, and Q.
The predicates in P are those to be circumscribed, or
"minimized";
the predicates in Z are the ones whose
extensions (= the things in the domain picked out by the
predicates) are permitted to change during the
circumscription process; and Q holds all remaining predicates.
Now, let I₁ and I₂ be two interpretations that model
the agent's knowledge. (An interpretation models a set of formulas
just in case it models every formula in the set.)
I₁ is
said to be **inferior** to I₂ exactly when the following three
conditions are satisfied:

- I₁ and I₂ have the same domain and interpret function symbols and variables in exactly the same way.
- I₁ and I₂ interpret the fixed predicates (those neither minimized nor permitted to vary) in exactly the same way.
- For every predicate *R* in the set of circumscribed predicates, the extension of *R* in I₁ is a subset of the extension of *R* in I₂.

These three conditions can be understood intuitively as follows: The two interpretations in question, as it is said in Thayse 1989, interpret our agent's knowledge "almost identically", but the inferior interpretation makes the circumscribed predicates true "less often" than the other does.

The key interpretations are those that model the agent's knowledge and are minimal with respect to the ordering relation produced by these three conditions. A formula can be inferred from a circumscribed knowledge base when it is true in all minimal models. The hope is that the formulas that can be inferred will be precisely those that are desirable in cases like the ostrich one we presented above. Thayse 1989 shows, in fact, that the desired conclusion can be deduced in the ostrich example. The demonstration follows the traditional proof-theoretic route through circumscription: it is shown that if a certain second-order formula (i.e., a formula from the second-order system described at the outset of this paper) is added to the knowledge in this case, the desired deduction can be performed. Thayse 1989 then explains the problems plaguing circumscription, at least in its traditional form (e.g., partitioning the predicates seems to require knowledge of the desired outcome).
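On a small finite domain, the minimal-model machinery just described can be emulated by brute force. The following sketch is our own devising, not code from Thayse 1989; the domain, predicate names (Ab for "abnormal"), and axioms are all assumptions. We minimize Ab, let Flies vary, and hold Bird and Ostrich fixed:

```python
from itertools import product

DOMAIN = ["tweety", "ozzy"]

# Fixed predicates (neither minimized nor varying): identical in every
# interpretation we consider.
BIRD = {"tweety": True, "ozzy": True}
OSTRICH = {"tweety": False, "ozzy": True}

def satisfies(ab, flies):
    """Check the toy knowledge base: (1) every ostrich is abnormal;
    (2) every non-abnormal bird flies."""
    for x in DOMAIN:
        if OSTRICH[x] and not ab[x]:
            return False
        if BIRD[x] and not ab[x] and not flies[x]:
            return False
    return True

# Enumerate every interpretation of Ab (minimized) and Flies (varying),
# keeping only those that model the knowledge base.
models = [(dict(zip(DOMAIN, a)), dict(zip(DOMAIN, f)))
          for a in product([False, True], repeat=len(DOMAIN))
          for f in product([False, True], repeat=len(DOMAIN))
          if satisfies(dict(zip(DOMAIN, a)), dict(zip(DOMAIN, f)))]

def leq(m1, m2):
    """m1 is inferior-or-equal to m2: the extension of the minimized
    predicate Ab in m1 is a subset of its extension in m2."""
    return all(m2[0][x] for x in DOMAIN if m1[0][x])

# Minimal models: those with no strictly inferior model.
minimal = [m for m in models
           if not any(leq(n, m) and not leq(m, n) for n in models)]

# A formula is inferred iff it is true in every minimal model.
print(all(fl["tweety"] for _, fl in minimal))  # True: tweety flies
print(all(fl["ozzy"] for _, fl in minimal))    # False: ozzy's flight is undetermined
```

Minimization yields Flies(tweety) without anyone having enumerated tweety's non-ostrichhood, while leaving the ostrich's flight unsettled, which is exactly the behavior the ostrich example demands.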

Readers who assimilate the material we presented earlier in §2 and then the presentation of circumscription in Thayse 1989 will be able to proceed to the many papers in Brachman et al. that relate to revisable reasoning. We mention a number of these papers below under the ``Metatheory'' category of our four-fold breakdown of LAI.
