Logic can be thought of as a proposed means to precisely communicate knowledge and its underlying justification. Aristotle, as Glymour eloquently explains, set a standard - one that stood for some 2,300 years - for what constituted a precise expression of, and justification for, knowledge, including the mathematical knowledge produced by Euclid. (Glymour provides a superlative treatment of Euclidean reasoning and Aristotle's theory of the syllogism.) As Glymour explains (in Chapter 4, ``The Laws of Thought''), the Aristotelian scheme started to genuinely change only in the middle of the nineteenth century, when George Boole attempted to realize Leibniz's conception of an ``algebra of thought''; yet even Boole's proposals failed to account for the simple geometric and number-theoretic proofs of the sort given by Euclid. Everything changed, as we noted above, when Frege hit the scene. He laid the foundation for the predicate calculus (and for a number of other things - e.g., aspects of modern economics).

For our purposes at the moment, it's important to note that the predicate calculus is the foundation of LP, because it provided, for the first time, a precise characterization of the concept of proof. Furthermore, proofs cast in the predicate calculus are themselves mathematical objects that can be formally and symbolically represented, analyzed, and manipulated by algorithms.

In 1934, Alfred Tarski
provided a semantic theory for the predicate calculus
that established a link between its syntax and the notion of
truth in a given interpretation
(see Tarski 1956).
(Our presentation above of the semantics
of first-order
logic follows Tarski closely.)
In 1936, Gerhard Gentzen
showed
that, if a predicate calculus proof for a sentence exists, it
can be produced from the concepts in the sentence without the
*creative* introduction of extraneous concepts
(see Gentzen 1969).

Gentzen's results sparked the search for algorithms that can find proofs for sentences in the predicate calculus - theorem provers. This quest, inspired by the theoretical advances of the 1930s, laid the foundation for modern LP.

The first theorem-proving algorithms, however, carried out exhaustive searches through the exponential space of possible proofs. Even with the power of the computers introduced in the 1950s, it quickly became obvious that the exponential complexity of the proposed predicate-calculus proof procedures was overwhelming.

In the 1960s, a simpler form of predicate logic called **clausal
logic** was introduced. This form proved very important for
producing efficient computational procedures.
Standard predicate logic is
redundant in that there are often alternative ways to express the
same thing:
Two propositions may be syntactically distinct but
semantically equivalent.
While this might be an advantage for humans wanting to represent
knowledge with predicate logic, it is a source of inefficiency for
the automated procedures designed to manipulate these expressions
in search of proofs.
The simpler clausal logic led to different,
more efficient approaches to constructing proofs.
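The redundancy that clausal form removes can be seen in a small example. The sketch below uses a representation of our own choosing (not from the text): a clause is a set of literals, and a literal is a (name, negated) pair.

```python
# Hedged sketch: a clause as a frozenset of literals, where a literal is
# a (name, negated) pair. This representation is an illustrative choice.
def clause(*lits):
    return frozenset(lits)

P, not_P = ("P", False), ("P", True)
Q, not_Q = ("Q", False), ("Q", True)

# "P implies Q" and its contrapositive "not-Q implies not-P" are
# syntactically distinct formulas, but both reduce to the single clause
# {~P, Q} in clausal form - the redundancy disappears.
impl      = clause(not_P, Q)   # clausal form of P -> Q
contrapos = clause(Q, not_P)   # clausal form of ~Q -> ~P
assert impl == contrapos
```

Because clauses are just sets of literals, an automated procedure never has to recognize that two differently phrased formulas say the same thing.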

The 1960s also saw the abandonment of approaches using proof
procedures based on
human-oriented inference rules like *modus ponens*.
These rules were supplanted by machine-oriented
reasoning patterns that could take more productive steps in the
production of a proof without getting bogged down in the
conceptual details that a human reasoning process might require.
In 1963, J. Alan Robinson invented a clausal logic based on a single,
very
effective inference rule that he called **resolution**.
Though not an intuitive inference rule, resolution
allowed the proof procedure to
take large steps toward a goal. Resolution relied on a process called
**unification**. This process, based on work done by Jacques Herbrand,
enabled the direct computation of clause instantiations, which
eliminated
the need for much of the detailed search required in the earlier theorem
provers.
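Unification can be sketched compactly. The following is a minimal illustration, not Robinson's own formulation; the term representation (strings prefixed with `?` for variables, tuples for compound terms) is our choice, and the occurs check is omitted for brevity.

```python
# Hedged sketch of syntactic unification (in the spirit of
# Herbrand/Robinson). Variables are '?'-prefixed strings, compound
# terms are tuples (functor, arg1, ...), constants are plain strings.

def is_var(t):
    return isinstance(t, str) and t.startswith("?")

def walk(t, s):
    # Follow variable bindings already recorded in substitution s.
    while is_var(t) and t in s:
        t = s[t]
    return t

def unify(x, y, s=None):
    """Return a most-general unifier extending s, or None on failure.
    (Occurs check omitted for brevity.)"""
    if s is None:
        s = {}
    x, y = walk(x, s), walk(y, s)
    if x == y:
        return s
    if is_var(x):
        return {**s, x: y}
    if is_var(y):
        return {**s, y: x}
    if isinstance(x, tuple) and isinstance(y, tuple) and len(x) == len(y):
        for a, b in zip(x, y):
            s = unify(a, b, s)
            if s is None:
                return None
        return s
    return None

# Unifying knows(john, ?x) with knows(?y, mother(?y)) binds
# ?y to john and ?x to mother(?y), in one direct computation.
mgu = unify(("knows", "john", "?x"), ("knows", "?y", ("mother", "?y")))
```

The point of the historical remark above is visible here: the binding is *computed* directly from the two terms, rather than discovered by searching through candidate instantiations.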

By 1963, largely due to Robinson, we had a resolution-based algorithm for producing clausal predicate-calculus proofs that was significantly more effective than any of its predecessors. In fact, it was effective enough to form the basis of fully functional programming systems based on logic.
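For the ground (variable-free) case, a single resolution step is easy to sketch. The representation below - clauses as frozensets of (name, negated) literals - is our illustrative choice, not Robinson's notation.

```python
# Hedged sketch of the resolution rule on ground clauses. Two clauses
# containing complementary literals resolve to a new clause with both
# complementary literals removed; deriving the empty clause frozenset()
# signals a contradiction.

def resolvents(c1, c2):
    out = set()
    for name, neg in c1:
        if (name, not neg) in c2:
            out.add((c1 - {(name, neg)}) | (c2 - {(name, not neg)}))
    return out

# Resolving {~P, Q} (the clausal form of P -> Q) with {P} yields {Q}.
c = resolvents(frozenset({("P", True), ("Q", False)}),
               frozenset({("P", False)}))
```

In the general case, unification supplies the variable bindings that make two literals complementary before the step above is applied.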

The declarative approach to building AI systems based on logic
came under scrutiny contemporaneously with the strides made in
LP research. In the late 1960s, the AI community
was embroiled in a debate over whether knowledge is better represented
for AI purposes declaratively or procedurally.
The fate of LP as an accepted basis for the
implementation side of AI was at stake during this heated
controversy.
In 1974, Robert Kowalski showed the equivalence
of a Horn-clause representation and a procedural representation of
knowledge. (A **clausal representation**
is a - possibly empty - disjunction
of literals, where a literal is a negated or unnegated predicate. A
**Horn-clause representation** is a clausal representation restricted
to
contain at most one unnegated literal.)
After Kowalski's work, implementation decisions regarding a
commitment to procedural or declarative representations
were best viewed as issues of programming style rather than of
anything more substantive, and
the technological and political stage was set for the development
and application of logic-programming systems. Refinements of
efficient implementation strategies - most notably by
Robert Boyer and J Strother Moore (1972) - ultimately led to the popular and
widely applicable LP language Prolog, which is at the heart of Kim,
a volume discussed after our characterization of implemented LAI systems.
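Kowalski's equivalence can be made concrete with a toy example. The interpreter below is a minimal sketch of our own (ground, variable-free clauses only; the predicate names are invented for illustration), showing how one and the same Horn clause supports both readings: declaratively, ``the head holds if every body goal holds''; procedurally, ``to solve the head, solve each body goal in turn.''

```python
# Hedged sketch: backward chaining over ground Horn clauses. A rule is a
# pair (head, body); a fact is a rule with an empty body. The rule base
# and names here are hypothetical, for illustration only.
rules = [
    ("grandparent_abe_carl", ["parent_abe_bob", "parent_bob_carl"]),
    ("parent_abe_bob", []),    # facts: Horn clauses with empty bodies
    ("parent_bob_carl", []),
]

def prove(goal):
    # Procedural reading of a Horn clause: to prove goal, find a clause
    # whose head matches it and recursively prove each goal in its body.
    return any(head == goal and all(prove(g) for g in body)
               for head, body in rules)
```

With variables and unification added, this procedural reading of Horn clauses is essentially the execution model of Prolog.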

Mon Nov 17 14:57:06 EST 1997