One of us [Bringsjord, 1997b] recently wrote:
That Strong AI is still alive may have a lot to do with its avoidance of true tests. When Kasparov sits down to face the meanest chessbot in town, he has the deck stacked against him: his play may involve super-computation, but we know that perfect chess can be played by a finite-state automaton, so Kasparov loses if the engineers are sufficiently clever ([Bringsjord, 1997b], p. 9; paraphrased slightly to enhance out-of-context readability).
This quote carries the kernel of the present (embryonic) paper, a robust version of which will incorporate discussion at the workshop.
We find it incredible that anyone would have wagered that computers of the future would not manage to play at a level well beyond Kasparov. (We confess to indecisiveness concerning which prediction -- Simon saying three decades ago that thinking machines would be upon us within days, or Dreyfus betting the farm that formidable chessbots would forever be fictitious -- was the sillier.) After all, we know, by backward induction over chess's finite game tree, that a game-theoretically perfect strategy for chess exists, and that strategy, at bottom, isn't mathematically interesting. Complexity, of course, is another issue: it's complexity that generates interestingness in this domain -- but the bottom line is that if complexity is somehow managed, a human player has his or her hands full. Deep Blue versus Kasparov was proof of that.
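The point that a perfect strategy exists for any finite two-player game of perfect information, and that computing it is conceptually trivial, can be made concrete with a few lines of backward induction. The sketch below (an illustrative toy, not an excerpt from any chess engine) runs minimax over a hypothetical hand-built game tree; the only obstacle to doing the same for chess is the astronomical size of its tree, which is exactly the complexity point made above.

```python
# Backward induction (minimax) on a toy finite game tree.
# A leaf is an integer payoff from the maximizing player's perspective;
# an internal node is a pair (player, children), player in {'max', 'min'}.

def minimax(node):
    """Return the game-theoretic value of a finite two-player zero-sum game."""
    if isinstance(node, int):          # leaf: terminal payoff
        return node
    player, children = node
    values = [minimax(child) for child in children]
    return max(values) if player == 'max' else min(values)

# Hypothetical tree: max moves first, min replies.
tree = ('max', [
    ('min', [3, 5]),   # if max takes the left move, min can force 3
    ('min', [2, 9]),   # if max takes the right move, min can force 2
])

print(minimax(tree))   # -> 3: with perfect play the outcome is fixed in advance
```

Nothing in this computation is mathematically deep: the value of the game is determined before a move is made, which is why a finite-state automaton suffices in principle for perfect chess.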