Computationalism relies upon automata like Turing Machines (TMs) to fix the concept of computation. But what's a Turing Machine? To some readers these automata will be old hat, but let's play it safe and ask cognoscenti to return with us to the elementary aspects of computability theory upon which the Argument From Irreversibility is built.

Put intuitively, TMs include a **two-way infinite tape** divided into squares,
a **read/write head**
for writing and erasing **symbols** (from some finite, fixed **alphabet**)
on and off this tape, a **finite control
unit** which at any step in a computation is in one particular state from
among a finite number of possible states,
and a set of **instructions** (= program) telling the machine what to do,
depending
upon what state it's in and what (if anything) is written on the square
currently scanned by its head. Formally speaking, a TM is a set, specifically
a quintuple (S, Σ, s₀, h, f), where: *S* is a finite set (of states);
Σ is an alphabet containing the special "blank" symbol *b*, but not
the symbols *L* (for "left") and *R* (for "right"); s₀ ∈ S is the **initial state**; h ∉ S is the **halt state**; and *f* is a
**transition function**
from S × Σ to (S ∪ {h}) × (Σ ∪ {*L*, *R*}).
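The quintuple can be transcribed almost directly into code. The following is a minimal Python sketch; the particular machine (a one-state "eraser") and all of its names are illustrative assumptions, not part of the formalism above.

```python
# A minimal sketch of the quintuple (S, Sigma, s0, h, f) defined above.
# The machine shown (a one-state "eraser") is an illustrative assumption.

BLANK = "b"              # the special blank symbol b
LEFT, RIGHT = "L", "R"   # reserved direction symbols, excluded from the alphabet
HALT = "h"               # the halt state h, which is not a member of S

S = {"s0"}               # a finite set of states
Sigma = {BLANK, "1"}     # alphabet: contains b, but not L or R

# f maps S x Sigma to (S ∪ {h}) x (Sigma ∪ {L, R}): given the current
# state and scanned symbol, it yields the next state together with either
# a symbol to write in place or a move (L or R).
f = {
    ("s0", "1"): ("s0", BLANK),    # erase a scanned 1 in place
    ("s0", BLANK): (HALT, RIGHT),  # halt (moving right) on the first blank
}

M = (S, Sigma, "s0", HALT, f)      # the TM as a quintuple
```

Note that the dictionary `f` is total on the pairs the machine can actually encounter, mirroring the requirement that *f* be a function on S × Σ.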

This formalism makes it easy to render the notion of a **configuration** of
a TM precise,
but an informal account will suffice for our
purposes: We'll say that a configuration is composed of four pieces of
information: first, the state the TM is in; second, information representing
what's to the left of the square currently scanned by the read/write head;
third, the single symbol being scanned; and fourth, the
contents of the tape to the right of the head. In a moment, we'll concretize
the notion of a
configuration by explaining and analyzing the particular TM specified in
Figure 1. But first some notation for computations: Let
C, C′, C₁, C₂, … denote configurations. We write C ⊢_M C′ to indicate that TM *M*
has moved "in one step" from configuration C to C′. For every TM
*M*, ⊢*_M is the reflexive, transitive closure of ⊢_M. A
**computation** by some TM *M* is a sequence of configurations C₁, C₂, …, Cₙ such that
C₁ ⊢_M C₂ ⊢_M ⋯ ⊢_M Cₙ.

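The one-step relation ⊢_M and its closure ⊢*_M can be made concrete in code. Below is an illustrative Python rendering of configurations as (state, left-of-head, scanned symbol, right-of-head) tuples; the toy transition function is an assumption chosen for demonstration, not Gordon's machine of Figure 1.

```python
# A sketch of configurations and the one-step relation described above.
# A configuration is (state, left, scanned, right); the toy machine here
# is an illustrative assumption.

BLANK, HALT, LEFT, RIGHT = "0", "h", "L", "R"

# Toy transition function: in state "a", a scanned 0 is rewritten to 1,
# and a scanned 1 sends the machine right into the halt state.
f = {("a", "0"): ("a", "1"),
     ("a", "1"): (HALT, RIGHT)}

def step(config):
    """One application of |- : map a configuration to its successor."""
    state, left, scanned, right = config
    new_state, action = f[(state, scanned)]
    if action == RIGHT:                      # move the head one square right
        return (new_state, left + scanned, right[:1] or BLANK, right[1:])
    if action == LEFT:                       # move the head one square left
        return (new_state, left[:-1], left[-1:] or BLANK, scanned + right)
    return (new_state, left, action, right)  # write a symbol in place

def computation(config):
    """Iterate |- from `config` until the halt state: a computation
    in the sense just defined, i.e., an instance of |-*."""
    trace = [config]
    while trace[-1][0] != HALT:
        trace.append(step(trace[-1]))
    return trace

trace = computation(("a", "", "0", "000"))
# trace: ("a","","0","000") |- ("a","","1","000") |- ("h","1","0","00")
```

The returned list is exactly a computation C₁, C₂, …, Cₙ in the sense defined above.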
There are many ways to streamline the full set-theoretic description of TMs. One such method is the state-diagram approach used in Figure 1. This TM, "Gordon's 19 in 186," is designed to start on a 0-filled infinite tape and produce, after 186 steps, 19 1's.

Let's "hand simulate" an initial segment of the computation of
Gordon's TM -- let's label the machine *G* -- so
that we completely fix the core mathematical concepts. The alphabet used
is simply {0, 1}. The
initial state of *G* is 0 (represented by the node labeled 0), and at
the outset we'll assume that the tape is filled with 0's. The first thing
*G* does is check to see what symbol it finds under its read/write head. In
this case it initially finds a 0, so the arc labeled 0 R is taken, which
means that the head moves one square to the right and the machine enters state
5. At this point, since there is another 0 found beneath the head, the 0 is
changed to a 1, and the machine reenters state 0. It now finds a 1, and hence
takes the arc labeled 1 R to state 1 (i.e., the machine moves its head one
square to the right, and then enters state 1) -- etc. The machine's activity
can be perfectly captured by a tedious catalogue of its configurations from
start to finish, e.g., the initial segment just traced:
(0, …0, 0, 0…) ⊢_G (5, …00, 0, 0…) ⊢_G (0, …00, 1, 0…) ⊢_G (1, …001, 0, 0…), and so on.
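The hand simulation above can be mechanized. The Python sketch below encodes only the three arcs of *G* just walked through (the rest of Figure 1's diagram is not reproduced here), so the run stops as soon as state 1 is entered.

```python
# A sketch of the hand simulation above, assuming a 0-filled tape and
# only the three arcs of G just described; the remainder of Figure 1's
# diagram is omitted, so this run halts upon entering state 1.

arcs = {
    (0, "0"): (5, "R"),    # state 0, scanning 0: move right, enter state 5
    (5, "0"): (0, "1"),    # state 5, scanning 0: write a 1, re-enter state 0
    (0, "1"): (1, "R"),    # state 0, scanning 1: move right, enter state 1
}

state, head = 0, 0
tape = {}                  # unwritten squares default to 0 (a 0-filled tape)

while (state, tape.get(head, "0")) in arcs:
    state, action = arcs[(state, tape.get(head, "0"))]
    if action == "R":
        head += 1          # move the head one square to the right
    else:
        tape[head] = action  # write the symbol on the scanned square
```

After the three steps traced in the text, the machine is in state 1, the head is two squares right of its start, and a single 1 has been written.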

If this is your first exposure to TMs, you will doubtless be struck by how
primitive and unassuming they are. But the surprising thing is that TMs
apparently capture computation *in all its guises*. More precisely,
whatever can be accomplished by way of an algorithm, by way of a programmed
supercomputer, by way of a neural network, a cellular automaton, etc. --
whatever
can be accomplished by any of these can be accomplished by a
TM.
Furthermore, we know that augmenting the
architecture of our TMs doesn't give them
any additional power. For example, if we give a TM *two* tapes rather
than one, nothing which was impossible for the one-tape machine becomes doable
for the two-tape creature. (This is a fact
we revisit below.) Usually, such a fact is established through
a so-called "simulation proof," the essence of which
consists in showing that the
new automaton can be perfectly simulated by the original one.
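The core encoding idea behind such a simulation proof can be sketched briefly. The snippet below assumes the standard interleaving trick for the two-tape case: a single tape stores tape 1's squares at even positions and tape 2's at odd positions, so a one-tape machine can keep track of both. (Only the encoding is shown; the construction of the full simulating machine is omitted.)

```python
# Interleaving encoding assumed in a two-tape-to-one-tape simulation
# sketch: even squares hold tape 1, odd squares hold tape 2.

def interleave(tape1, tape2, blank="0"):
    """Encode two (finite portions of) tapes as one by alternating squares."""
    n = max(len(tape1), len(tape2))
    t1 = tape1.ljust(n, blank)      # pad the shorter tape with blanks
    t2 = tape2.ljust(n, blank)
    return "".join(a + b for a, b in zip(t1, t2))

def project(tape, which):
    """Recover tape 1 (which=0) or tape 2 (which=1) from the encoding."""
    return tape[which::2]

merged = interleave("101", "11")
# project(merged, 0) recovers "101"; project(merged, 1) recovers "110"
```

Because both projections are recoverable, anything the two-tape machine writes can be mirrored on the single interleaved tape.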

Now, Bennett [6] has established that TM computation
can be logically reversed in a "useful" way. It was
clear before Bennett that computation can be logically reversed in
*non*-useful
ways, and in fact this weaker result is sufficient for our argument.
Non-useful ways of reversing TM computation are easy to come by; here's
one: First, go get some paper, a pencil, and
an eraser. Next, watch a given TM
*M* in action, recording each configuration through which it passes. Finally,
using your list of configurations, build a new TM which goes
from the last entry on your list to the first in such a way that it visits
your entries in reverse sequence.
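The pencil-and-paper procedure can itself be sketched in code: record every configuration, then drive a lookup table from the last entry back to the first. The machine *M* below is an illustrative toy, not the machine of Figure 1.

```python
# A sketch of the reversal procedure just described. The toy machine M
# is an illustrative assumption.

HALT = "h"
f = {("a", "0"): ("a", "1"),      # toy M: write a 1, then halt moving right
     ("a", "1"): (HALT, "R")}

def step(cfg):
    """One step of M on a (state, left, scanned, right) configuration."""
    state, left, scanned, right = cfg
    new_state, act = f[(state, scanned)]
    if act == "R":
        return (new_state, left + scanned, right[:1] or "0", right[1:])
    return (new_state, left, act, right)

# Steps 1 and 2: watch M run, recording each configuration it passes through.
trace = [("a", "", "0", "00")]
while trace[-1][0] != HALT:
    trace.append(step(trace[-1]))

# Step 3: the "new TM" is driven by a table sending each recorded
# configuration to its predecessor, so it visits the list in reverse.
reverse_table = {later: earlier for earlier, later in zip(trace, trace[1:])}

cfg, reversed_trace = trace[-1], [trace[-1]]
while cfg in reverse_table:
    cfg = reverse_table[cfg]
    reversed_trace.append(cfg)

assert reversed_trace == list(reversed(trace))   # M's run, backwards
```

Nothing about this construction is "useful" in Bennett's sense: the table is as long as the computation it reverses, which is exactly the point.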

Though we are informal here,
it should be obvious that (in keeping with the inaugural
writings of Turing and Post, in which the notion of a human computist
is primitive)
we have here a straightforward
*algorithm*, call it *R*, for
reversing computation. You are encouraged to try your hand at simulating
*R* on the TM shown in Figure 1. All you need is some patience, and,
as we say, a pencil,
an eraser, and sufficient paper.

If you're short of time, we suggest a shortcut constructive "proof" as a substitute: Purchase the software "Turing's World," run one of the TMs that come bundled with this package, and then use the REVERSE command to step back through the configurations your selected machine has run. You can be sure that the existence of some such algorithm as *R* is precisely what enables you to carry out this reversal using "Turing's World."

It will be helpful if we set off the relevant proposition as a theorem:

**Theorem 1**. For every computation
C₁ ⊢_M C₂ ⊢_M ⋯ ⊢_M Cₙ,
there exists a computation
Cₙ ⊢_M′ Cₙ₋₁ ⊢_M′ ⋯ ⊢_M′ C₁ (carried out by some TM M′)
which can be obtained from the original computation via some algorithm *R*.

Fri Sep 6 11:58:56 EDT 1996