Our principal concern to this point has been to articulate and defend the Argument From Irreversibility. This argument, with respect to Computationalism (whether of the logicist or connectionist variety), is a negative one: its conclusion is that Computationalism is simply false. Do we have anything constructive to say? What view of the mind would we put in place of Computationalism? What are the consequences of what we've uncovered for AI and Cog Sci practitioners? Though such questions in many ways take us outside our expertise, we'll venture a brief answer, in the form of a three-point homily.
First, it's important to realize that we consider the view advocated herein to be the prolegomenon to a more sophisticated science of the mind. Just because cognition is (at least in part) uncomputable doesn't mean that it will resist scientific analysis. After all, computer science includes an entire sub-field devoted to the rigorous study of uncomputability. We know that there are grades of uncomputability, we know much about the relationships between these grades, we know how uncomputability relates to computability, and so on; uncomputability theory, mathematically speaking, is no different from any other mature branch of classical mathematics. So why can't uncomputability theory be linked to work devoted to the scientific analysis of the uncomputable side of human cognition? That it can be so linked is the view developed and championed in a forthcoming monograph of ours [2b].
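To make the talk of ``grades of uncomputability'' concrete, one standard illustration (our gloss here, not a claim drawn from [2b]) is the hierarchy generated by the Turing jump: relativizing the halting problem to ever-stronger oracles yields a strictly ascending chain of Turing degrees,

```latex
\emptyset \;<_T\; \emptyset' \;<_T\; \emptyset'' \;<_T\; \cdots
```

where $\emptyset'$ is the ordinary halting set. The arithmetical hierarchy then locates familiar problems along this chain: the halting problem is $\Sigma^0_1$-complete, the totality problem for programs is $\Pi^0_2$-complete, and so on. It is in this precise sense that uncomputability admits of rigorous, graded study.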
The second point, which is related to the first, is this: One of the interesting properties of AI research is that people have come to expect that it invariably has two sides: a scientific side and an implementation side. The basic idea is that the scientific side, which can be quite theoretical, ought to be translated into working computer programs. We think this idea, as a general policy, is wrongheaded. Why is it that the physicist can be profitably occupied with theories that can't be implemented, while the AI researcher labors under the onerous expectation that those aspects of the mind which admit of scientific explanation must also admit of replication in the form of computer programs? In short, perhaps we can come to understand cognition and consciousness scientifically (and this would entail, in the present context, exploiting information-processing systems which aren't reversible, e.g., ``machines'' more powerful than Turing Machines), while at the same time acknowledging that we can't build conscious computers.
The third and final point in our sermon is this. Suppose that we're correct; suppose that human cognition is uncomputable, and that therefore it is something no computer can possess. From this it doesn't follow that no system can appear to enjoy consciousness. There are some well-known uncomputable functions which many are doing their best to ``solve.'' (One such line of research is the attack on the uncomputable busy beaver function.) AI, as far as we can see, has never settled the fundamental clash between those who, like Turing, aim only at engineering a device whose behavior is indistinguishable from ours (``Weak'' AI), and those who seek not only to create the behavior but also the underlying conscious states which we humans enjoy (``Strong'' AI). Nothing we have said herein precludes success in the attempt to engineer a computational system which appears to be genuinely conscious. Indeed, an approach which cheerfully resigns itself to engineering only behavior seems to us to be a route worth taking. What we purport to have shown, or at least made plausible, is that the road down which aspiring ``person builders'' are walking is ultimately a dead end, because no mere computational system can in fact be conscious, and this is true in part because while computation is reversible, consciousness isn't.
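The busy beaver function mentioned above makes a nice concrete case of how one can attack an uncomputable function from below: although no algorithm computes it in general, its values for very small numbers of states are known through exhaustive analysis of all machines of that size. The following minimal sketch (ours, purely for illustration) simulates Rado's 2-state, 2-symbol champion machine, which realizes the known values $S(2) = 6$ steps and $\Sigma(2) = 4$ ones.

```python
# Illustrative sketch: simulating the 2-state, 2-symbol busy beaver
# champion. The busy beaver function itself is uncomputable; small
# values like these are known only via exhaustive case analysis.

def run_turing_machine(rules, max_steps=10_000):
    """Run a Turing machine given as {(state, symbol): (write, move, next_state)}
    on an initially blank tape. Returns (steps_taken, ones_on_tape) if the
    machine halts within max_steps, else None."""
    tape = {}          # sparse tape: position -> symbol, blank = 0
    head = 0
    state = "A"
    for step in range(1, max_steps + 1):
        symbol = tape.get(head, 0)
        write, move, state = rules[(state, symbol)]
        tape[head] = write
        head += move
        if state == "HALT":
            return step, sum(tape.values())
    return None        # did not halt within the step budget

# Rado's 2-state champion machine.
champion_2 = {
    ("A", 0): (1, +1, "B"),
    ("A", 1): (1, -1, "B"),
    ("B", 0): (1, -1, "A"),
    ("B", 1): (1, +1, "HALT"),
}

steps, ones = run_turing_machine(champion_2)
print(steps, ones)  # 6 4
```

The uncomputability, of course, lies not in simulating any one machine (which is trivial, as above) but in deciding, for arbitrary machines, whether the simulation ever halts.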