To fully appreciate the advantages of LAI in the area of communication, an encapsulated review of simple neural nets (the standard vehicle for non-logical information processing) is provided. After that review we offer a thought-experiment designed to sharpen our point concerning the communicative advantages of LAI over CAI.

Neural nets are composed of **units** or
**nodes**, which are connected by **links**, each of which has a numeric
**weight**. It is usually assumed that some of the units interact directly
with the external environment; these units form the sets of **input**
and **output** units. Each unit has a current **activation level**,
which is its output, and
can compute, based on its
inputs and weights on those inputs,
its activation level at the next moment in time. This computation is entirely
local: a unit takes account of only its neighbors in the net. The
computation proceeds in two stages. First, the **input function**, $in_i$,
gives the weighted sum of the unit's input values, that is, the sum
of the input activations multiplied by their weights:

$$in_i = \sum_{j} W_{j,i}\, a_j$$

In the second stage, the **activation function**, *g*, takes the result of
the first stage as argument and generates the output, or activation level, $a_i$:

$$a_i = g(in_i) = g\!\left(\sum_{j} W_{j,i}\, a_j\right)$$
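
The two-stage local computation can be made concrete with a minimal Python sketch (the particular weights, activations, and the sigmoid choice for *g* below are illustrative assumptions, not drawn from the text):

```python
import math

def weighted_sum(weights, activations):
    """Stage 1: the input function, the weighted sum of input activations."""
    return sum(w * a for w, a in zip(weights, activations))

def unit_output(weights, activations, g):
    """Stage 2: apply the activation function g to the stage-1 result."""
    return g(weighted_sum(weights, activations))

# Illustrative choice of g (a sigmoid) and arbitrary weights/inputs:
g = lambda x: 1.0 / (1.0 + math.exp(-x))
print(unit_output([0.5, -0.25], [1.0, 1.0], g))  # g(0.25), roughly 0.562
```

Note that the computation is entirely local: the unit sees only its own incoming weights and the activations of its immediate neighbors.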

One common (and confessedly elementary) choice for the activation function
(which usually governs all units in a given net) is the step function, which
has a threshold *t*: a 1 is output when the input is greater than *t*, and
a 0 is output otherwise. McCulloch and Pitts (1943) showed long ago that
such a simple activation function allows for the representation of the basic
Boolean functions AND, OR, and NOT.
(This is supposed to
look ``brain-like'' to some degree, given the metaphor that 1 represents
the firing of a pulse from a neuron through an axon, and 0 represents
no firing.)
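
The McCulloch–Pitts result can be sketched directly; the particular weights and thresholds below are one conventional choice (any settings with the same threshold behavior would do):

```python
def step(x, t):
    """Step activation: output 1 if the input exceeds threshold t, else 0."""
    return 1 if x > t else 0

def unit(weights, inputs, t):
    """A single threshold unit: weighted sum, then step activation."""
    return step(sum(w * a for w, a in zip(weights, inputs)), t)

# Single units computing the basic Boolean functions:
def AND(a, b): return unit([1, 1], [a, b], t=1.5)   # fires only on (1, 1)
def OR(a, b):  return unit([1, 1], [a, b], t=0.5)   # fires unless (0, 0)
def NOT(a):    return unit([-1], [a], t=-0.5)       # inverts its input
```

For example, AND(1, 1) sums to 2, which exceeds the threshold 1.5, so the unit "fires" (outputs 1); every other input pair sums to at most 1 and yields 0.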

As you may know, there are many different kinds of neural nets. The
main distinction is between **feed-forward** and **recurrent** nets.
In feed-forward nets, as their name suggests, links move information in
one direction, and there are no cycles; recurrent nets allow for cycling back,
and can become rather complicated. But no matter what neural net you care
to talk about, it should be relatively easy to see that it is likely
to be exceedingly difficult for such a net to communicate how, exactly,
it has done something, *if* the ``something'' naturally calls for a
symbolic explanation. In order to make the point vivid, begin by
noting that a neural net can be viewed as a series of snapshots
capturing the states of its nodes. For example, if we assume for simplicity that
we have a 3-layer net (one input layer, one ``hidden'' layer, and one output
layer) whose nodes, at any given time, are either ``on'' (filled circle) or
``off'' (blank circle), then here is such a snapshot:
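
Such a snapshot can itself be sketched in code: a feed-forward pass through a small 3-layer net of step units, with the snapshot being nothing more than the tuple of on/off activation levels across the layers (the weights below are arbitrary, chosen purely for illustration):

```python
def step(x, t=0.5):
    """Step activation with a fixed illustrative threshold."""
    return 1 if x > t else 0

def layer(weight_rows, inputs):
    """One feed-forward layer: each row of weights drives one step unit."""
    return [step(sum(w * a for w, a in zip(row, inputs))) for row in weight_rows]

W_hidden = [[1, -1, 0], [0, 1, 1]]   # two hidden units, arbitrary weights
W_out    = [[1, 1]]                  # one output unit

inputs  = [1, 0, 1]                  # an arbitrary input pattern
hidden  = layer(W_hidden, inputs)
output  = layer(W_out, hidden)
snapshot = (inputs, hidden, output)  # the on/off state of every node
print(snapshot)                      # ([1, 0, 1], [1, 1], [1])
```

The snapshot is just a pattern of 0s and 1s; nothing in it wears a symbolic interpretation on its sleeve, which is precisely the difficulty pressed below.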

As the units in this net compute and the net moves through time, snapshots will capture different patterns. But how could our observation of these non-symbolic patterns provide illuminating answers to questions calling for an account of deliberate ratiocination?
