Fetzer's semiotic conception is one I find very attractive, because it seems ready to do justice to intentionality, the property we have in virtue of the fact that some of our thoughts are ``directed at'' objects external to us. Of course, I doubt Jim will agree with me that though intentionality (of the sort famously promoted by Meinong, Brentano, and my teacher Roderick Chisholm) is analyzable with help from semiotic theory, it cannot be captured by any information-processing system, even a semiotic one.
Fetzer seems to believe that when connectionism is appropriately wed to his semiotic theory, there is in place an engine for creating an artifact whose mentation matches our own. I find this a bit hard to swallow, because I have longstanding skepticism about connectionism simpliciter. Part of this skepticism is expressed in ; it is based on the fact that neural nets are, at bottom, just TMs. Another part of my skepticism about connectionism derives from certain gedankenexperiments, devised with Dave Ferrucci, which seem to show that connectionism will have a hard time explaining, scientifically, how it is we do what we do. I conclude with such a thought-experiment:
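The point that neural nets are, at bottom, just TMs can be illustrated with a minimal sketch (the unit and its hand-picked weights below are my own hypothetical example, not anything from Fetzer): a trained net's forward pass is a finite sequence of ordinary arithmetic operations, and hence exactly the sort of symbol manipulation a Turing machine carries out.

```python
# A minimal sketch (hypothetical, hand-chosen weights) illustrating that a
# neural unit's forward pass is an ordinary finite symbolic computation --
# precisely the kind of procedure a Turing machine can execute.

def step(x):
    # Simple threshold activation.
    return 1 if x >= 0 else 0

def forward(inputs, weights, bias):
    # Weighted sum followed by a threshold: a handful of arithmetic steps.
    return step(sum(w * i for w, i in zip(weights, inputs)) - bias)

# A single unit computing logical AND (weights chosen by hand).
AND_WEIGHTS, AND_BIAS = [1, 1], 1.5
print(forward([1, 1], AND_WEIGHTS, AND_BIAS))  # 1
print(forward([1, 0], AND_WEIGHTS, AND_BIAS))  # 0
```

Nothing in scaling this up to many layers and many units changes its character: it remains a Turing-computable function.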
Suppose the year is 2019, and that connectionism has produced remarkable offspring -- in the form of a robot (or android), SHER-COG, capable of the sort of behavior associated with Sherlock Holmes. Consider what is perhaps Holmes' greatest triumph, namely solving the mystery surrounding the disappearance of the racehorse known as ``Silver Blaze''; and suppose that SHER-COG is asked (by an analogue for Dr. Watson), after cracking this case, how it accomplished the feat. What options does our robotic sleuth have for communicating an answer?
One thing that would surely fail to enlighten would be to allow humans to examine the neural nets of SHER-COG. After all, how would information about the states of nodes and the weights on connections between them help you divine how SHER-COG deduced that the culprit in this mystery could not be a stranger to dogs on the farm that was Silver Blaze's home?
Of course, SHER-COG could resort to natural language. It could proceed to explain its solution in (e.g.) English, in much the same way that Sherlock Holmes often explains things to the slower Dr. Watson. But this route seems to invoke the traditional logicist machinery the connectionist approach was supposed to replace. This is so because in order to really understand what SHER-COG is telling us in English, it will be necessary to analyze this English formally; and the formal analysis will bring to bear logical systems.
For example, to truly understand Holmes' explanation, conveyed to the nonplussed Watson, concerning the mystery of Silver Blaze, one must grasp the following chain of reasoning (which involves the famous clue about the ``dog doing nothing in the night-time'').
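The chain, in a reconstruction of my own (the propositional symbols are a hypothetical formalization, not Doyle's), runs roughly as follows, where $S$ stands for ``a stranger entered the stable that night,'' $B$ for ``the dog barked,'' and $K$ for ``the intruder was someone the dog knew'':

```latex
\begin{align*}
& S \rightarrow B      && \text{(had a stranger entered, the dog would have barked)}\\
& \neg B               && \text{(the dog did nothing in the night-time)}\\
& \therefore\ \neg S   && \text{(modus tollens)}\\
& \neg S \rightarrow K && \text{(if no stranger entered, the intruder was known to the dog)}\\
& \therefore\ K        && \text{(modus ponens)}
\end{align*}
```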
At work here, of course, are the rules modus ponens and modus tollens, cornerstones of logicist AI.
So, insofar as I worry about the scientific merit of connectionism, I worry about Fetzer's approach.