The main purpose of this site is to serve as a platform for the continuing pursuit of a Unified Cognitive Science as suggested in the book. Selected reader comments will be presented here, possibly with a response, but this is not a blog. More importantly, your comments and suggestions will be reflected as text changes and additional references on the Updates page.
Although this will probably change, for now you can just send your comments as email.
It's all just AI
The most common objection to the book that I hear is that modeling at the computational level invalidates any claim to biological plausibility. Some of this is just the standard knee-jerk dismissal mechanism, but some is clearly not.
David Ritchie, who is generally sympathetic both to the book and to embodied cognitive science, writes:
I also see a second theme, an attempt to test the biological model by developing partial models of language processes that can run on current generation digital computers, and, more to the present point, incorporate into your account models that have already been developed, in particular the models developed by Bailey and Narayanan, each of which gets a full chapter. The problem that this poses for me, and I suspect for many of your readers, is that these models in effect smuggle back in exactly the kind of assumptions about language processing that your biological explanation is attempting to overcome. Thus these chapters seem, to me, to contradict the central features of your biology-based analysis.
He then quotes my cautionary notes from page 141: "For both practical and pedagogical reasons, our computational level models are based on formalisms and techniques that are well established in computer and cognitive sciences. There are also dangers in using conventional formalisms and methods. None of the traditional techniques were developed for linking brain activity to behavior and they all are inadequate if used only in the conventional way. In addition, the standard notation might be taken as the whole theory, ignoring the underlying bridge to the brain."
He goes on to conclude:
I actually think the standard notation doesn't merely ignore, but contradicts the underlying neural processes.
David kindly agreed to pursue this point further, and we seem to have sorted out his central concern: he took the book to say that all mental processing is mediated by serial symbolic analysis. This may once have been my view (cf. p. 63), but I did not think it necessary to explicitly disavow it in the book. There is also a more general objection to using any notation from symbolic processing that merits further discussion.
Science necessarily involves levels of analysis
I didn't make this point sufficiently clear in the book, but the following story seems to help. First consider the chemical formula:
2 SO2 + 2 H2O + O2 -> 2 H2SO4
This is the standard notation for the reaction that gives rise to acid rain. It says that two sulfur dioxide molecules, two water molecules, and one oxygen molecule can combine to yield two molecules of sulfuric acid. For many purposes, this formula is fine. But we know that the reality is nowhere near this simple. All such reactions are bi-directional and depend on temperature, pressure, etc. Moreover, according to Wikipedia, a more accurate chemical story is:
SO2 + OH -> HOSO2
which is followed by:
HOSO2 + O2 -> HO2 + SO3
SO3(g) + H2O(l) -> H2SO4(l)
where (g) means gas phase and (l) denotes liquid.
And, of course, the story becomes too complex to write down if we look at the detailed physics involved.
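One part of this consistency requirement can even be checked mechanically: whichever level of description we choose, the notation must conserve atoms, and the one-line summary and the three-step mechanism both pass the same bookkeeping test. A minimal sketch (my illustration, not from the book; the parser handles only simple formulas like those above):

```python
import re
from collections import Counter

def atoms(term):
    """Atom counts for a term like '2 H2SO4' or 'SO3(g)' (no nested groups)."""
    term = re.sub(r"\([a-z]\)", "", term)            # drop phase labels (g), (l)
    m = re.match(r"\s*(\d*)\s*([A-Za-z0-9]+)", term)  # optional coefficient, then formula
    coeff = int(m.group(1) or 1)
    counts = Counter()
    for elem, n in re.findall(r"([A-Z][a-z]?)(\d*)", m.group(2)):
        counts[elem] += coeff * int(n or 1)
    return counts

def side(terms):
    """Total atom counts for one side of an equation, terms joined by '+'."""
    total = Counter()
    for t in terms.split("+"):
        total += atoms(t)
    return total

def balanced(equation):
    """True if both sides of 'lhs -> rhs' contain the same atoms."""
    lhs, rhs = equation.split("->")
    return side(lhs) == side(rhs)

print(balanced("2 SO2 + 2 H2O + O2 -> 2 H2SO4"))   # True
print(balanced("HOSO2 + O2 -> HO2 + SO3"))         # True
```

The same checker accepts both the coarse net reaction and each step of the finer mechanism, which is exactly the sense of cross-level consistency at issue: the levels differ in detail, but neither may violate the constraints of the other.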
Science is always done at different levels of abstraction. What makes it science is that the technical treatments at each level should be consistent. This is what we are trying to do in the NTL project and what I try to describe in the book. The computational level models and formalisms (including grammars) should be both explanatory of the phenomena and consistent with all relevant constraints and findings. Some people feel that it is premature to attempt this unification, but the eventual answer will necessarily take the form of consistent theories.
There is a shrill September 17, 2006 review by Hairball on Amazon.com that contains the incredible statement: "The most excruciating displeasure I've ever had". I will never comment on issues of style or choice of material in M2M, but will try to address any technical questions that arise.
Hairball has studied neuromorphic engineering and states: "Personally, I don't think you can begin to tackle language until you have a robot that can physically deal with the real world 1/10th as well as a real animal does."
People have lots of reasons why we shouldn't attempt to build M2M-style models. On the positive side, there is good work on robotic embodied language, especially at MIT, as cited in the Brooks reference in M2M.
"One notably missing item in his explanation of language is the ability of humans to *hear* language."
This is a reasonable point. The group next to the NTL space at ICSI is one of the leading speech research efforts, and I probably take it too much for granted. There is some discussion in M2M of intonation and prosody, as well as gesture, but nothing technical on speech. However, I can't think of anything in the book that would change if speech mechanisms were included.
Section 1. Embodied Information Processing
Section 2. How the Brain Computes
Section 3. How the Mind Computes
Section 4. Learning Concrete Words
Section 5. Learning Words for Actions
Section 6. Abstract and Metaphorical Words
Section 7. Understanding Stories
Section 8. Combining Form and Meaning
Section 9. Embodied Language
References and Further Reading