Brownian thought space

Cognitive science, mostly, but more a sometimes structured random walk about things.

Wednesday, May 24, 2006

How do we know we use symbols?

{This is Gary.} In the last post I was wondering about the categorization issues raised by a paper. Halfway along, I spoke about another paper that went bad (Clark & Thornton, 1997, BBS {C&T}). In one of the responses to that target article, Gary Marcus explains how Elman networks (backpropagation, hidden units, momentum, etc.) generally cannot do things that are hallmarks of the kinds of things we can do, and his example is worth considering.

Gary considered the following: imagine I said to you "a rose is a rose", or "a duck is a duck"; what would you reply to "a dax is a ___"? If you thought of anything besides "dax", your neurons probably look like little points with arrows sticking in and out, reminiscent of Toshirô Mifune towards the end of Kurosawa's Throne of Blood. What Gary found was that across many, many versions of an Elman network, the network was simply unable to generalize the "a (__) is a (__)" pattern. This is because such a network operates over the input set, looking for correlations and the like; it usually cannot abstract to items that lie outside the input stimulus set. {Rider: certain patterns it can of course generalize, but these typically lie inside the training set, in the sense that they can be reached by interpolation.}

This reminds me very much of some of the things that Fodor says are non-negotiable for a proper theory of the mind. One of them is something similar: systematicity. Systematicity means that if I can say "John kisses Mary", then I can just as easily say "Mary kisses John". It is as if the verb kisses is surrounded by two slots, which can be filled by the kisser and the kissee.
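
To make the point concrete, here is a minimal sketch of the kind of failure Marcus describes; it is not his actual simulation. A stripped-down Elman-style recurrent network is trained to predict the final word of "a X is a X" for a handful of training words, then asked to complete the frame for the novel word "dax". The vocabulary, network sizes, and the simplification of training only the output (readout) weights are my own illustrative choices; Marcus trained full networks with backpropagation, but the diagnosis carries over: no training signal ever strengthens the connections into the "dax" output unit, so the network has no way to produce it.

```python
# Sketch only: a tiny Elman-style network with localist (one-hot) words.
import numpy as np

rng = np.random.default_rng(0)

train_words = ["rose", "duck", "cat", "dog", "book"]
vocab = ["a", "is"] + train_words + ["dax"]   # "dax" never appears in training
idx = {w: i for i, w in enumerate(vocab)}
V, H = len(vocab), 16                         # vocabulary size, hidden units

def one_hot(w):
    v = np.zeros(V)
    v[idx[w]] = 1.0
    return v

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Elman architecture: the hidden state is copied back as context each step.
W_xh = rng.normal(0.0, 0.5, (H, V))   # input  -> hidden (kept fixed here)
W_hh = rng.normal(0.0, 0.5, (H, H))   # hidden -> hidden (kept fixed here)
W_hy = rng.normal(0.0, 0.1, (V, H))   # hidden -> output (trained)

def hidden_after(prefix):
    """Hidden state after reading a prefix such as ['a', 'rose', 'is', 'a']."""
    h = np.zeros(H)
    for w in prefix:
        h = np.tanh(W_xh @ one_hot(w) + W_hh @ h)
    return h

# Train the readout to predict X from the state reached after "a X is a".
lr = 0.1
for epoch in range(2000):
    for w in train_words:
        h = hidden_after(["a", w, "is", "a"])
        err = softmax(W_hy @ h) - one_hot(w)  # cross-entropy gradient
        W_hy -= lr * np.outer(err, h)

def completion(prefix):
    return vocab[int(np.argmax(W_hy @ hidden_after(prefix)))]

print(completion(["a", "rose", "is", "a"]))   # trained word: should come back as "rose"
print(completion(["a", "dax", "is", "a"]))    # novel word: some trained word,
                                              # almost never "dax" -- the weights
                                              # into the "dax" unit were never
                                              # pushed up during training
```

By contrast, a symbolic rule with a variable, which is roughly what the two-slot picture amounts to, handles the novel word trivially:

```python
def a_x_is_a(x):
    # one variable, two slots: whatever fills the first slot fills the second
    return f"a {x} is a {x}"

print(a_x_is_a("dax"))   # "a dax is a dax"
```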

Back to categories

Why is the issue of categories interesting? Because, just maybe, the categories that we form might tell us something about how we acquire concepts. Somehow I find concept acquisition just too bothersome: there's just NO nice answer for it. As mentioned previously, even the nice Relevance Theory comes in only after the concepts are acquired. But from the Waldmann & Hagmayer paper, one can see that categories also enter into the kinds of propositions that concepts can occur in. The participants in those experiments appear to form propositions like "A causes disease", where A is some category formed in the training phase.

So, is concept acquisition anywhere in sight, however remotely? No. Just because we categorize things and use category labels in forming propositions doesn't make the categories into concepts. Remember that concepts include those that are phrasal in nature: categories can be seen as simply reflecting those 'concepts' that have a complex internal structure, one that reflects, via propositions, the statistical structure in the input. So A might be mentally encoded as BRIGHT-VIRUSES-CAUSING-SPLENOMEGALY: a perfectly propositional structure reflecting the observed correlation between Bright and Splenomegaly (a toy illustration of this follows below). This sounds no different from how, in Relevance Theory, we form essentially phrasal concepts on the fly (like PAIN*), which again leaves the problem of concept acquisition essentially a bloody mystery.
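
The illustration is a toy of my own making, not the Waldmann & Hagmayer stimuli: if "bright" viruses in a training set tend to co-occur with splenomegaly, then a learner that merely tracks co-occurrence statistics already has the raw material for a category like BRIGHT-VIRUSES-CAUSING-SPLENOMEGALY. The data and the choice of the phi coefficient are illustrative assumptions, not anything taken from the paper.

```python
from itertools import product

# Each (made-up) training item: (bright?, causes_splenomegaly?)
items = [(1, 1)] * 8 + [(1, 0)] * 2 + [(0, 1)] * 2 + [(0, 0)] * 8

def phi(items):
    """Phi coefficient: the correlation between two binary features."""
    counts = {cell: 0 for cell in product((0, 1), repeat=2)}
    for item in items:
        counts[item] += 1
    n11, n10, n01, n00 = counts[(1, 1)], counts[(1, 0)], counts[(0, 1)], counts[(0, 0)]
    return (n11 * n00 - n10 * n01) / (
        (n11 + n10) * (n01 + n00) * (n11 + n01) * (n10 + n00)
    ) ** 0.5

print(phi(items))   # 0.6 -- bright and splenomegaly go together in this toy set,
                    # which is the statistical structure the category would encode
```

Coming up: The difference between rules and Gestalts.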
