The next objection we consider marks an appeal to connectionism; it runs
as follows. ``Your argument, I concede, destroys Computationalism.
But that suits me just fine: I don't affirm this doctrine, not in the least.
For you see, cognition isn't computation; cognition, rather, is
suitably organized processing in
an architecture which is at
least to some degree genuinely neural. And I'm sure you will agree that
a Turing Machine is far from neural! Neural nets, however, as their name
suggests, *are*
brain-like; and it's on the shoulders of neural nets
that the spirit of `Strong' AI
and Cognitive Science continues to live, and indeed to thrive."

This objection entails a rejection of Proposition 1''. As such, it succeeds
in dodging the Argument From Irreversibility -- since this argument has
this proposition as a premise. However, the argument can be rather
easily revived, by putting in place of Proposition 1'' a counterpart
based on neural nets, viz. (where *x* is a corporeal neural net),

**Proposition 2**.
*x* is conscious from *t*₁ to *t*₂ iff *x* passes through some process from
*t*₁ to *t*₂ -- partly determined by causal interaction with the
environment -- which is identical to the consciousness *x* enjoys through
this interval.

This proposition is affirmed by connectionists (of the ``Strong" variety,
anyway).
But Proposition 2, combined with
a chain of equivalence holding between neural nets, cellular automata, *k*-tape
Turing Machines, and standard TMs, is enough to resurrect the Argument From
Irreversibility in full force. The chain of equivalence has been
discussed in detail by one of us (Bringsjord) elsewhere [2f],
in a paper which purports to show that connectionism, at bottom, is
orthodox Computationalism in disguise.
Here, it will suffice to give an intuitive recapitulation of the
chain in question -- a chain which establishes the
interchangeability of neural nets and Turing Machines.

Before we sketch this chain, let's pause to make clear that it underwrites a certain proposition which can be conjoined with Proposition 2 in order to adapt the Argument From Irreversibility. This proposition is

**Proposition 3**.
*x* passes through some process from *t*₁ to *t*₂ iff some Turing Machine
carries out a computation from *t*₁ to *t*₂, where this computation
is identical to the process *x* enjoys through this interval.

It's easy to prove in elementary logic (using such rules as
universal elimination and *modus ponens*) that Proposition 2,
conjoined with Proposition
3, reenergizes the Argument From Irreversibility. But why is Proposition 3
true? In order to answer this question, we need to look first at neural nets.
After that, even a casual look at ``two-dimensional" TMs should make it plain
that Proposition 3 is true.
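The elementary-logic point can be displayed schematically. In the following minimal sketch the predicate letters *C*, *P*, and *T* are our own abbreviations, not an official formalization of the propositions:

```latex
% Schematic derivation (abbreviations C, P, T are ours):
%   Cx: x is conscious over an interval
%   Px: x passes through a suitable process over that interval
%   Tx: some Turing Machine computation is identical to that process
\begin{align*}
&\forall x\,(Cx \leftrightarrow Px) && \text{Proposition 2}\\
&\forall x\,(Px \leftrightarrow Tx) && \text{Proposition 3}\\
&Cn \leftrightarrow Pn && \text{universal elimination, for a net } n\\
&Pn \leftrightarrow Tn && \text{universal elimination}\\
&Cn \leftrightarrow Tn && \text{chaining the biconditionals}
\end{align*}
```

The last line is just what the Argument From Irreversibility needs: consciousness stands or falls with Turing-computation.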

Neural nets are composed of **units** or
**nodes**, which are connected by **links**, each of which has a numeric
**weight**. It is usually assumed that some of the units work in symbiosis
with the external environment; these units form the sets of **input**
and **output** units. Each unit has a current **activation level**,
which is its output, and
can compute, based on its
inputs and weights on those inputs,
its activation level at the next moment in time. This computation is entirely
local: a unit takes account of only its neighbors in the net. This local
computation proceeds in two stages. First, the **input function**,
*in*ᵢ, gives the weighted sum of the unit's input values, that is, the sum
of the input activations multiplied by their weights:

*in*ᵢ = Σⱼ *W*ⱼᵢ *a*ⱼ

In the second stage, the **activation function**, *g*, takes the input from
the first stage as argument and generates the output, or activation level, *a*ᵢ:

*a*ᵢ = *g*(*in*ᵢ)

One common (and confessedly elementary) choice for the activation function
(which usually governs all units in a given net) is the step function, which
usually has a threshold *t* that sees to it that a 1 is output when the input
is greater than *t*, and that 0 is output otherwise.
(This is supposed to
look ``brain-like" to some degree, given the metaphor that 1 represents
the firing of a pulse from a neuron through an axon, and 0 represents
no firing.)
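To make the two-stage computation concrete, here is a minimal sketch in Python; the weights, inputs, and threshold are invented for illustration:

```python
# Two-stage computation of a single unit:
#   stage 1: input function -- weighted sum of incoming activations
#   stage 2: activation function g -- a step function with threshold t

def input_function(activations, weights):
    """Sum of the input activations multiplied by their weights."""
    return sum(a * w for a, w in zip(activations, weights))

def step(total_input, threshold):
    """Step activation: output 1 iff the input exceeds the threshold."""
    return 1 if total_input > threshold else 0

def unit_output(activations, weights, threshold):
    return step(input_function(activations, weights), threshold)

# Hypothetical unit with three incoming links:
print(unit_output([1, 0, 1], [0.5, 0.9, 0.4], threshold=0.6))  # → 1
```

Because the second input is off, its weight (0.9) contributes nothing; the weighted sum 0.9 still exceeds the threshold, so the unit "fires".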

As you might imagine, there are many different kinds of neural nets. The
main distinction is between **feed-forward** and **recurrent** nets.
In feed-forward nets, as their name suggests, links move information in
one direction, and there are no cycles; recurrent nets allow for cycling back,
and can become rather complicated. But no matter what neural net you care
to talk about, Proposition 3's deployment of its universal quantifier remains
justified. Proving this would require a paper rather more substantial than
the present one, but there is a way to make the point in short order. The
first step is to note that a neural net can be viewed as a series of snapshots
capturing the states of its nodes. For example, if we assume for simplicity that
we have a 3-layer net (one input layer, one ``hidden" layer, and one output
layer) whose nodes, at any given time, are either ``on" (filled circle) or
``off" (blank circle), then here is such a snapshot:

As the units in this net compute and the net moves through time,
snapshots will capture different patterns. But
Turing Machines can accomplish the very same thing. In order to show
this, we ask that you briefly consider **two-dimensional** TMs.
We saw *k*-tape TMs above; and we noted the equivalence between these
machines and standard TMs.
One-head two-dimensional TMs
are simpler than *k*-tape machines, and (in the present
multidisciplinary context)
more appropriate than their *k*-tape cousins.
Two-dimensional TMs have an infinite two-dimensional
grid instead of a one-dimensional tape. As an example, consider
a two-dimensional TM which produces an infinite ``swirling" pattern.
We present a series of snapshots (starting with the initial configuration,
in which all squares are blank, and moving through the next eight configurations
produced by the ``swirl" program) of this machine in action.
Here's the series:
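The grid-plus-head mechanism behind such snapshots can itself be sketched in a few lines of Python. The toy four-state transition table below is our own invention, not the swirl program; it merely writes a small closed loop of 1s, recording a configuration snapshot after each step:

```python
# A one-head two-dimensional Turing Machine on an unbounded grid.
# The grid is a dict mapping (row, col) -> symbol; absent cells are blank.
BLANK = "0"
MOVES = {"U": (-1, 0), "D": (1, 0), "L": (0, -1), "R": (0, 1)}

# Toy transition table: (state, symbol) -> (write, move, next state)
delta = {
    ("q0", "0"): ("1", "R", "q1"),
    ("q1", "0"): ("1", "D", "q2"),
    ("q2", "0"): ("1", "L", "q3"),
    ("q3", "0"): ("1", "U", "q0"),
}

def run(delta, steps):
    grid, pos, state = {}, (0, 0), "q0"
    snapshots = []
    for _ in range(steps):
        symbol = grid.get(pos, BLANK)
        if (state, symbol) not in delta:
            break  # no applicable instruction: halt
        write, move, state = delta[(state, symbol)]
        grid[pos] = write
        dr, dc = MOVES[move]
        pos = (pos[0] + dr, pos[1] + dc)
        snapshots.append(dict(grid))  # record this configuration
    return snapshots

snaps = run(delta, 10)  # halts after writing a 2x2 loop of 1s
```

After four steps the head revisits a written square, finds no matching instruction, and halts; the recorded snapshots are exactly the sort of series pictured above.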

The point of this series of snapshots is to convey that snapshots of a
neural net in action can be captured by a one-head two-dimensional
TM (and, more easily, by a *k*-tape, *k*-head machine). We hope you can
see why this is so, even in the absence of the proof itself. The trick,
of course, is to first view the neural net in question as a finite
two-dimensional array (we ignore the part of the infinite grid beyond this finite array),
as we did above.
Of course, it's necessary to provide a suitably enlarged alphabet for
our neural-net-copying TM: it will need to have an alphabet which contains
a character corresponding to each of the states a node can be in. For this
reason, our swirl TM is a bit limited, since it has but a binary alphabet.
But it's easy enough to boost the alphabet (and thereby produce
some entrancing pyrotechnics).
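The copying step can be sketched directly. In the following Python sketch, a hypothetical net whose nodes take one of three states is mapped onto a finite region of the grid, one alphabet character per node state (the states and the state-to-character mapping are our invention):

```python
# Copy a neural-net snapshot onto a TM grid, one character per node state.
# With a binary alphabet only on/off nets fit; enlarging the alphabet
# (here: three characters) gives every possible node state its own symbol.
ALPHABET = {"off": "0", "on": "1", "refractory": "r"}  # hypothetical states

def net_to_grid(net_states):
    """net_states: 2-D list of node states -> dict (row, col) -> symbol."""
    grid = {}
    for r, row in enumerate(net_states):
        for c, state in enumerate(row):
            grid[(r, c)] = ALPHABET[state]
    return grid

net = [["on", "off", "on"],
       ["off", "refractory", "off"]]
grid = net_to_grid(net)  # a finite array; the rest of the grid is ignored
```

The TM's program then manipulates these characters exactly as the net's local update rules would manipulate the corresponding node states.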

The proponent of Objection 7
might say that a sequence of configurations of a Turing
Machine *M*, proposed to capture
a neural net *N* as it passes through time, isn't complete,
because such a sequence includes, at best, only *discrete*
``snapshots" of the *continuous* entity *N*.
But suppose that the sequence includes
snapshots of *N* at *t*
and at *t*+1, and that our opponent is
concerned with what is left out here,
i.e., with the
states of *N* between *t* and *t*+1. The concern is easily handled: one can
make the interval of
time during which the state of *N* is ignored arbitrarily small. Doing so
ensures that
the states of *N* which had formerly been left out are captured.
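The refinement move can be illustrated numerically. In the following minimal Python sketch, a toy continuous trajectory (a hypothetical sine-based state function standing in for *N*'s evolution) is sampled at ever-smaller intervals; every halving of the interval captures states that the coarser sequence of snapshots left out:

```python
import math

def state(t):
    """Hypothetical continuous state of N at time t (for illustration)."""
    return math.sin(t)

def snapshots(t0, t1, interval):
    """Sample the state from t0 to t1 at the given interval."""
    samples, t = {}, t0
    while t <= t1:
        samples[round(t, 6)] = state(t)
        t += interval
    return samples

coarse = snapshots(0.0, 1.0, 0.5)   # ignores states between its samples
fine = snapshots(0.0, 1.0, 0.25)    # captures the formerly omitted states
```

Every sampling time of the coarse sequence occurs in the fine one, and the fine sequence adds the intermediate states; iterating the halving shrinks the ignored interval as far as one pleases.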
