Perhaps no foundational take on AI is complete without at least a mention of John Searle's infamous Chinese Room Argument (CRA), and Glymour doesn't disappoint. In fact, Glymour gives a new objection to the CRA, one based on temporal factors:
An important aspect of many actions is the time it takes. A system that executes the very same computational procedures that I carry out in catching a baseball or reading a story or solving a problem but that takes considerably longer than I do to carry out those procedures does not do what I do. (p. 350)
Glymour goes on to claim that what Searle imagines himself doing (e.g., looking up appropriate Chinese strings to output in response to certain Chinese strings received as input) is something he can't do ``fast enough to simulate a Chinese speaker'' (p. 350). The problem, it seems to us, is that Searle could simply be given, say, an electronic rolodex to shorten the time needed to produce his responses. Since, as Glymour would apparently agree, a rolodex (and the like) surely doesn't carry bona fide understanding with it into the Chinese Room, Glymour's objection would seem to be easily surmounted.

Interestingly, Glymour does intimate problems for AI that he believes may be fatal. One of these, left here in mere nutshell form, is that cognition may require continuous, analog information-processing beyond the reach of Turing machines, which are traditionally taken by AIniks to define the limits of computation and cognition. (On this issue, see Siegelmann 1995.) This aspect of Glymour's discussion is of course relevant to the LAI-CAI clash, since connectionists often see themselves as working with continuous, analog systems.