Next: About this document Up: The communicative advantages of Previous: Neural nets

Reflections on a robotic Sherlock Holmes

In order to take the point beyond a mere rhetorical question, let's conduct a simple thought-experiment. Suppose the year is 2019, and that the CAI marriage has produced remarkable offspring - in the form of a robot (or android), SHER-COG, capable of the sort of behavior associated with Sherlock Holmes. (We use a fictional character only to render our points vivid; a ``real life'' human detective could, without diminishing our point, be substituted for Holmes in our fable.) Consider perhaps Holmes' greatest triumph, namely solving the mystery surrounding the disappearance of the racehorse known as ``Silver Blaze'' (Doyle 1984); and suppose that SHER-COG is asked (by an analogue for Dr. Watson), after cracking this case, how it accomplished the feat. What options does our robotic sleuth have for communicating an answer?

One thing that would surely fail to enlighten would be to allow humans to examine the neural nets of SHER-COG. To see this, you have only to imagine what it would be like to study gargantuan versions of such snapshots as those sketched above. How would information about the states of nodes and the weights on connections between them help you divine how SHER-COG deduced that the culprit in this mystery could not be a stranger to the dogs on the farm that was Silver Blaze's home? If our snapshots don't get the point across, think of the impenetrability of binary core dumps. How could study of such things enlighten one as to the reasoning employed by the likes of SHER-COG?

Of course, SHER-COG could resort to natural language. It could explain its solution in (e.g.) English, much as Sherlock Holmes often explains things to the slower Dr. Watson. But this route concedes our point, for by it we end up once again invoking LAI in all its glory: in order truly to understand what SHER-COG is telling us in English, it will be necessary to analyze that English formally, and the formal analysis will bring to bear the machinery of the logical systems we have discussed in this paper.

For example, to truly understand Holmes' explanation, conveyed to the nonplussed Watson, concerning the mystery of Silver Blaze, one must come to see the following chain of reasoning (which involves the famous clue about the ``dog doing nothing in the night-time'').

  1. If the dog didn't bark, then the person responsible for lacing the meal with opium couldn't be a stranger.
  2. The dog didn't bark.
  3. The person responsible for lacing the meal with opium couldn't be a stranger. (from 1. and 2.)
  4. Simpson was a stranger.
  5. Simpson was not responsible. (from 3. and 4.)

At work here, of course, are the rules modus ponens and modus tollens, cornerstones of LAI. (Lest it be thought that the ratiocination of Sherlock Holmes is a phenomenon confined to the world of fiction, we direct readers to the remarkable reasoning recently used by Robert N. Anderson (Ybarra 1996) to solve the 80-year-old mystery of what caused the fire that destroyed Jack London's ``Wolf House'' in 1913. Wolf House was to be London's ``manly'' residence, a 15,000-square-foot structure composed of quarried volcanic rock and raw beams from ancient redwoods. The conflagration occurred just days before London was to move in, and though London vowed to rebuild, he died three years later with the house still in ruins.)
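Indeed, the chain of reasoning above can be checked mechanically. The following sketch formalizes it in Lean 4; the names (silver_blaze, simpson, and the predicates B, S, R) are ours, chosen purely for illustration. Premise 1 becomes a conditional binding the barking clue to the culprit's familiarity with the dogs, and the conclusion that Simpson was not responsible falls out by exactly the two rules just named.

```lean
-- A hypothetical formalization of the Silver Blaze argument (Lean 4).
--   B   : "the dog barked"
--   S x : "x was a stranger"
--   R x : "x was responsible for lacing the meal with opium"
theorem silver_blaze
    (Person : Type) (simpson : Person)
    (B : Prop) (S R : Person → Prop)
    (h1 : ¬B → ∀ x, R x → ¬S x)  -- premise 1: no bark → the culprit was no stranger
    (h2 : ¬B)                    -- premise 2: the dog did not bark
    (h4 : S simpson)             -- premise 4: Simpson was a stranger
    : ¬R simpson :=              -- conclusion 5: Simpson was not responsible
  fun hr => h1 h2 simpson hr h4  -- modus ponens on h1/h2, then modus tollens via h4
```

The proof term mirrors the informal derivation step for step: applying h1 to h2 yields step 3, and discharging the assumption that Simpson was responsible against h4 yields step 5.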

Our conclusion is that if in the future we desire not only to build human-matching robots (or androids), but to understand them (and cognition in general) as well, then the logic-AI marriage ought to be sustained - and sustaining it shouldn't be hard: The more than 2,500 pages we assimilated for this review article seem to us to provide unassailable evidence that the passion at the heart of that marriage, though as old as Euclid, will endure.

References

  1. Allen, J.F. (1984) ``Towards a General Theory of Action and Time," Artificial Intelligence 23: 123-154.
  2. Anderson, A. & Belnap, N. (1975) Entailment: The Logic of Relevance and Necessity I (Princeton, NJ: Princeton University Press).
  3. Barwise, J. & Perry, J. (1983) Situations and Attitudes (Cambridge, MA: MIT Press).
  4. Boolos, G. & Jeffrey, R. (1989) Computability and Logic (Cambridge, UK: Cambridge University Press).
  5. Bowen, K.A. & Kowalski, R.A. (1982) ``Amalgamating Language and Metalanguage in Logic Programming," in Clark & Tärnlund, pp. 153-172.
  6. Boyer, R.S. & Moore, J.S. (1972) ``The Sharing of Structure in Theorem Proving Programs," Machine Intelligence 7: 101-116.
  7. Brachman, R.J., Levesque, H.J. & Reiter, R. (1992) Knowledge Representation (Cambridge, MA: MIT Press).
  8. Bringsjord, S. & Ferrucci, D. (forthcoming) Artificial Intelligence, Literary Creativity, and Story Generation: the State of the Art (Hillsdale, NJ: Lawrence Erlbaum).
  9. Bringsjord, S. (1992) What Robots Can and Can't Be (Dordrecht, The Netherlands: Kluwer).
  10. Bringsjord, S. (1991) ``Is the Connectionist-Logicist Clash one of AI's Wonderful Red Herrings?" Journal of Experimental & Theoretical AI 3.4: 319-349.
  11. Bringsjord, S. & Zenzen, M. (1991) ``In Defense of Hyper-Logicist AI," IJCAI 91 , (Mountain View, CA: Morgan Kaufmann), pp. 1066-1072.
  12. Carnap, R. (1967) The Logical Structure of the World (Berkeley, CA: University of California Press).
  13. Clark, K.L. & Tärnlund, S.-Å. (1982) Logic Programming (Orlando, FL: Academic Press).
  14. Dennett, D.C. (1994) ``The Practical Requirements for Making a Conscious Robot," Philosophical Transactions of the Royal Society of London 349: 133-146.
  15. Doyle, A.C. (1984) ``The Adventure of Silver Blaze," in The Celebrated Cases of Sherlock Holmes (Minneapolis, MN: Amaranth Press), pp. 172-187.
  16. Doyle, J. (1988) ``Big Problems for Artificial Intelligence," AI Magazine , Spring: 19-22.
  17. Ebbinghaus, H.D., Flum, J. & Thomas, W. (1984) Mathematical Logic (New York, NY: Springer-Verlag).
  18. Fetzer, J.H. (1994) ``Mental Algorithms: Are Minds Computational Systems?" Pragmatics & Cognition 2: 1-29.
  19. Genesereth, M.R. & Nilsson, N.J. (1987) Logical Foundations of Artificial Intelligence (Los Altos, CA: Morgan Kaufmann).
  20. Gentzen, G. (1969) The Collected Papers of Gerhard Gentzen, edited by M.E. Szabo (Amsterdam: North-Holland).
  21. Glymour, C. (1992) Thinking Things Through (Cambridge, MA: MIT Press).
  22. Hayes, J.E., Michie, D. & Tyugu, E. (1991) Machine Intelligence 12: Toward an Automated Logic of Human Thought (Oxford, UK: Oxford University Press).

  23. Kamp, J. (1984) ``A Theory of Truth and Semantic Representation," in Groenendijk et al. (eds.), Truth, Interpretation and Information (Dordrecht, The Netherlands: Foris Publications).

  24. Kim, S.H. (1991) Knowledge Systems Through Prolog (Oxford, UK: Oxford University Press).
  25. Lifschitz, V. (1987) ``Circumscriptive Theories: a Logic-Based Framework for Knowledge Representation," Proc. AAAI-87 , 364-368.
  26. Martins, J.P. & Shapiro, S.C. (1988) ``A Model for Belief Revision," Artificial Intelligence 35: 25-79.
  27. McCarthy, J. (1980) ``Circumscription: a Form of Non-monotonic Reasoning," Artificial Intelligence 13: 27-39.
  28. McCarthy, J. (1968) ``Programs with Common-sense," in Minsky, M.L., ed., Semantic Information Processing (Cambridge, MA: MIT Press), pp. 403-418.
  29. McCulloch, W.S. & Pitts, W. (1943) ``A Logical Calculus of the Ideas Immanent in Nervous Activity," Bulletin of Mathematical Biophysics 5: 115-137.
  30. McDermott, D. (1982) ``Non-monotonic Logic II: Non-monotonic Modal Theories," Journal of the ACM 29.1: 34-57.
  31. Moore, R.C. (1985) ``Semantical Considerations on Non-Monotonic Logic," Artificial Intelligence 25: 75-94.
  32. Pollock, J. (1995) Cognitive Carpentry (Cambridge, MA: MIT Press).
  33. Pollock, J. (1992) ``How to Reason Defeasibly," Artificial Intelligence 57: 1-42.
  34. Reiter, R. (1980) ``A Logic for Default Reasoning," Artificial Intelligence 13: 81-131.
  35. Robinson, J.A. (1992) ``Logic and Logic Programming," Communications of the ACM 35.3: 40-65.
  36. Russinoff, S. (1995) ``Review of Clark Glymour's Thinking Things Through ," Symbolic Logic 60.3: 1012-1013.
  37. Shapiro, S.C. & Rapaport, W.J. (1987) ``SNePS Considered as a Fully Intensional Propositional Semantic Network," in N. Cercone & G. McCalla (eds.), The Knowledge Frontier: Essays in the Representation of Knowledge (New York, NY: Springer-Verlag), pp. 262-315.

  38. Siegelmann, H.T. (1995) ``Computation Beyond the Turing Limit," Science 268: 545-548.
  39. Smolensky, P. (1988a) ``On the Proper Treatment of Connectionism," Behavioral & Brain Sciences 11: 1-22.
  40. Smolensky, P. (1988b) ``Putting Together Connectionism - Again," Behavioral & Brain Sciences 11: 59-70.
  41. Tarski, A. (1956) Logic, Semantics, Metamathematics: Papers from 1923 to 1938, translated by J.H. Woodger (Oxford, UK: Oxford University Press).
  42. Thayse, A. (1989a) From Standard Logic to Logic Programming: Introducing a Logic Based Approach to Artificial Intelligence (New York, NY: Wiley).
  43. Thayse, A. (1989b) From Modal Logic to Deductive Databases: Introducing a Logic Based Approach to Artificial Intelligence (New York, NY: Wiley).
  44. Thayse, A. (1991) From NLP to Logic for Expert Systems: A Logic Based Approach to AI (New York, NY: Wiley).
  45. Thomason, R., ed. (1974) Formal Philosophy: Selected Papers of Richard Montague (New Haven, CT: Yale University Press).
  46. Ybarra, M.J. (1996) ``Discovering an Answer in the Flames," New York Times , Sunday, February 4, Section A, p. 13.


Selmer Bringsjord
Mon Nov 17 14:57:06 EST 1997