Brownian thought space

Cognitive science, mostly, but more a sometimes-structured random walk about things.


Friday, June 30, 2006

Philosophically demanding telephone call

Wednesday, June 28, 2006

Pinak

One of them traditional, space-saving, thousand-word-worth pictures.

Tuesday, June 27, 2006

Theory of Intentionality (replaces ToM)

Read this Science report about animal intelligence, and it got me thinking about intelligence in general and the whole thing about having a mind. One of the criticisms against many of the studies is that they do not show that animals are capable of false beliefs, and so do not show that they possess a Theory of Mind (ToM). But (as seen in a previous post), the point about cognition is to ascribe mental states. And so, what if we replaced ToM with ToI? That is, to try and understand how animals behave not through the associationist strategies, but without insisting on demonstrations of false belief either. Instead, if one can show that animals are capable of treating certain objects/persons/things as having intentionality, they might have specific ways of dealing with them. Put another way: just as naive physics deals with some basic knowledge of the physical world, naive intentionality would be expected to deal with basic knowledge of the mental world. This involves beliefs, desires and the like, whatever they actually might be. So, if it turns out that a certain supposedly mental capacity was not mental after all, the question still remains: does treating it as if it were overlaid over a system of beliefs and desires help to somehow compress the description and draw predictions? If it does, that would be a great shorthand way of representing something about creatures that are more than merely chemical bags. Etc.

Telepathy: Update

I guess some bits were not clear from the previous Telepathy post. Clarification: the telepathic thing was to illustrate that before spoken language, there must've existed structured representations of thought which were sufficiently similar that, with few external cues, one could communicate the meaning. This means that some actual physical manifestation was absolutely required, even if this included just being there. This is NOT to say that there was real telepathy!

Thursday, June 22, 2006

Paper!!

My first ever experimental psychology paper is now online at the Cognitive Psychology website :) Getcher copy of the pre-print here..

Analyzing cognition

Here's a nice quote from the article "Why are animals cognitive?" by Richard Byrne & Lucy Bates in Current Biology (Jun 2006, 16(12), R445-R448):
Indeed, whether an animal’s behaviour is cognitive, and thus by implication ‘clever’, or associatively learnt is not an empirical question at all. These are simply two different ways of studying the same behaviour, and in the complex natural environments of most species only the cognitive approach leads to testable predictions.
The nicest thing about this is a rather clean exposition of why the "cognitive" can (should!) be considered as a separate level of representation. Especially for understanding animal behavior. And of course for understanding human behavior. Why the attribution of mental states as a level of explanation should be a mystery to anyone is a mystery to me...

Wednesday, June 21, 2006

Bloofing

Bloofing (v., intrans.): goofing off with your blog. But pretty satisfactory bloofing today: a favicon, and I actually managed to hardcode stuff into the template style to create the Links section :) Oh, and here is a really nice short film: Android 207. Says a LOT about Intentionality, apart from being a very cool short film by Paul Whittington. Recommended for Cognitive Scientists :) Way to go. (altlink on Undergroundfilm)

Telepathy as a precursor to Language?

Lori Markson's talk's been getting me thinking a lot about the whole word learning thing. Is there or is there not something like the mental lexicon? By this I mean, is there (or not) a specific competence that involves learning and storing something like words? Coming back (again, *yawn*) to Relevance theory, it appears to be the case that when we communicate, what we do is to provide information that would cause the other person to entertain a certain proposition, along with its truth value.

Here's a half-baked idea. Imagine a (Fodorian) pre-language world. The language of thought is in place, but there isn't yet any overt language: speech, sign language or anything. Imagine now that the requirements of Relevance Theory are in place. I'm thinking of a world in which there is communication, but this is not yet codified in something like speech. Nevertheless, it is the case that I can provide information that would cause the other person to entertain a certain proposition, along with its truth value. In fact, even now we do this all the time. What comes next? [Insert Just So Story of your choice here] I'm guessing it could be anything. If every time I held up a stick I did so in a context in which you clearly interpreted it as representing me, my holding up a stick would not be terribly different from a "word" (arbitrary, standing in a particular relation to me). Probably the commonly used conceptual "units" were the earliest to be codified into words. Probably that's why they are shared across cultures. Probably if people started off from scratch, they'd do something similar. I remember reading from when I was quite little (and I mean absurdly young) about two infants who made up a babble language. Makes sense from this point of view... it's as if they made up their words as they went along. Which implies that before the Word, there was probably still communication. We have a word for that, and it fetches you a tidy 17 on the Scrabble board (without bonuses).

Sunday, June 18, 2006

Core Capacities of Word Learning

Recently we had Lori Markson here with her husband, Camillo Padoa-Schioppa (who spoke about the neuronal representation of economic value). Lori's talk was: Core Capacities of Word Learning. The best things were the experiments themselves. The other nice thing was the claim that several of the capacities for word learning might not be specific to word learning, but might be related to 'general' capacities for, e.g., learning the properties of objects, discovering categories of different kinds and so forth. But I really am very, very Gallistelaciously suspicious of general capacities of any kind (see this paper by Gallistel for starters). And I discovered there's yet another thing that I'm suspicious of, but I'm not sure!

Imagine that there is some competence that we all know, love and recognize; let's call it Riding a Bicycle (RaB, not to be confused with Rab). However, we soon find that the things we require for RaB are not specific to it. For example, grasping and manipulating (approximately) cylindrical objects is not RaB-specific; we find the same competence even in playing tennis. Neither is setting the body in balance specific to RaB: riding horses, skating on ice and even plain old-fashioned walking require it. The competence seems genetically built in: take a baby from a non-RaB culture (Australian aborigines?) and introduce it to bicycles and behold! RaB develops! Next, we find that other species can do it as well. Bears can be taught RaB. So can monkeys and great apes. It doesn't extend all the way down to, for example, fish or insects. So much for comparative evidence. Evidence from pathologies shows that RaB is somewhat robust. A lack of hands is not a necessary impediment to RaB, although lack of feet typically is. Can we now conclude that RaB is a competence that is built out of just these other ('general') competences like object grasping and manipulation, throwing one's weight around and the like? I suppose so. So it is, apparently, with word learning, according to Lori.

But the question is: IS word learning really like riding a bicycle? Somehow I think not; but the point is not clear enough. Why not? Well, for one, we are the only species that has it. But then we seem to be the only species that has language, if by language we mean our kind of language (our kinds of generative rules, to avoid circularity). So, the closest I can get is to say that word learning is a language-specific thing. Corollary: you can have ALL the competences that Lori requires, and STILL not have word learning. This is not true for bicycle riding. In this view (see also Pinker & Jackendoff), word-learning skills utilize other skills, but the goals are clear: to learn a vocabulary. How does this fit with (a) a Mentalese view of language and (b) Sperbian accounts of Relevance?

Friday, June 16, 2006

Modern Love Song

Gaah! This song hits just too close to home. WHY on earth do games not have save-anywhere options? I think non-single gamers ought to boycott games that don't let you save anywhere. And choose games like the fantastic Zone of the Enders 2: Second Runner. BTW: note the cool way in which YouTube allows directly inserting the video into your blog :)

Monday, June 12, 2006

Return of the Subset Principle

This was a talk by Theresa Biberauer & Ian Roberts (T&I) from the University of Cambridge at the DiGS meeting here in Trieste. I SO wish I'd attended more talks there! Anyhow, this was about the Subset Principle:
"the learner must guess the smallest possible language compatible with the input at each stage of the learning procedure" (Clarks & Roberts (1993) Linguistic Inquiry 24, 299-345)
The idea is this: imagine a child is learning the relation between some variable x and some variable y. It is well known that the observed (x,y) pairs will vastly under-determine the possible underlying generating mechanisms. For example, {(0,0), (1,1)} is compatible with just about any function you choose. The Subset Principle idea is that a child should stay with the simplest grammar till (positive) evidence indicates the contrary. One problem that T&I raise is that, in many cases, it doesn't look like there are subset relations. For example, if you saw only "John(S) walks(V)", you would not know what the Verb-Object order was. However, the moment you saw "John(S) Mary(O) kisses(V)" or "John(S) kisses(V) Mary(O)", you would know it was OV (former) or VO (latter). I guess the bottom line, the way I see it, is that you cannot have both binary parameters and hierarchical nesting of the languages.

But, and this is kind of the point of the paper: if you think of the fact that certain parameters are logically necessary for other parameters to operate, then you CAN have nesting. Here's how: imagine (binary) parameters P1, P2, P3, P4. Now imagine that P2 to P4 are irrelevant if P1 is set to 'No'. Now imagine that this is recursive: if P2 is set to 'No', P3-P4 are irrelevant. What this means is that if a learner assumes that P1 is set to 'No', it is left with a small subset of the possible languages. This is a subset: the full set includes all the languages that would have been specified if P1 were set to 'Yes'. So, learning happens when positive evidence is encountered that P1 is actually 'Yes'. This opens up the P1-Yes tree, so to say. Now the learner can assume that P2 is 'No', and again it will consider only a subset of languages. And so forth.

Here's the nice thing. Imagine that there are just a few parameters like P1: super-Parameters, if you want. You start off with the default setting of all these super-Parameters. Clearly the assumption is that these default settings imply that the sub-parameters are irrelevant given the default setting of the super-Parameter. Whenever you find positive evidence against it, you will suddenly open up the possibility that the sub-parameters are not irrelevant anymore, and will need to look only for the (positive) evidence that sets each sub-parameter. And so on. As far as I can tell (and I cannot tell very far, not having a carrying voice), this seems to be something like what that very nice man Pino Longobardi (University of Trieste) is saying. If it's exactly the same, apologies; it took me a while to get it :)
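Just to make the nesting concrete for myself, here's a minimal toy sketch of that kind of learner. Fair warning: the parameter names, the hierarchy and the "evidence stream" below are my own inventions for illustration, not what T&I actually propose.

```python
# Toy sketch of the nested parameter-setting idea described above.
# The parameter names, the hierarchy, and the "evidence" format are my
# own illustrative inventions, not T&I's actual formulation.

# Each parameter lists the sub-parameters that only become relevant
# once it has been set to 'Yes'.
HIERARCHY = {
    "P1": ["P2"],
    "P2": ["P3", "P4"],
    "P3": [],
    "P4": [],
}

def learn(evidence_stream, hierarchy=HIERARCHY, root="P1"):
    """Start with every parameter at the conservative default 'No';
    flip a parameter to 'Yes' only on positive evidence for it, which
    then opens up its sub-parameters for setting."""
    settings = {p: "No" for p in hierarchy}
    active = {root}  # only the super-Parameter is learnable at first

    for trigger in evidence_stream:  # each item names the parameter it supports
        if trigger in active and settings[trigger] == "No":
            settings[trigger] = "Yes"          # positive evidence encountered
            active.update(hierarchy[trigger])  # open up the P-Yes tree
    return settings

# Evidence for P3 arriving before P1/P2 have been opened is simply ignored;
# once P1 and then P2 have been set, later P3 evidence takes effect.
print(learn(["P3", "P1", "P2", "P3"]))
# -> {'P1': 'Yes', 'P2': 'Yes', 'P3': 'Yes', 'P4': 'No'}
```

The point of the sketch is just the ordering: evidence for a sub-parameter does nothing until its super-Parameter has been opened up, which is what keeps the learner confined to ever-smaller subsets of the possible languages.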

Young scientists ;)

Our old microbiology lab at Garware College, Pune, ca. 1994. We five boys up front are still in touch and that's very, very satisfying :) Also, 4/5 are still in Science, mostly, although only 1 is still in microbiology. Noteworthy points:
- My copy of Nature (borrowed from the British Library).
- Percentage of women: Microbiology had something like 85% women, for some reason.
- Test-tube racks, petri plates, Bunsen burners.
- Of the five of us guys, only I'm still unmarried. Also the only one still without a Ph.D., and the only one to be, to a first approximation, out of the biological sciences.
Update: Adish claims the copy of Nature is his...

Friday, June 09, 2006

Arbitrary classes?

It is quite clear that syntactic classes like Nouns and Verbs are not like other classes such as Stop Consonants. The main difference is that the members of Nouns and Verbs do not share any obvious overt features; at best (as Chomsky said, re-citing structuralists before him (ah! "recite" probably comes from re-cite: to cite again! :) ) they can be slotted into similar structural positions. Ansgar in this lab showed that the leading and trailing edges can be treated as positional variables: if A1-X-C1 and A2-X-C2 are valid, where X is an arbitrary element, then people accept A1-X-C2 as valid. Presumably, under these circs., the As and the Cs are classified as beginnings and ends of strings, so any A-X-C string is treated as valid.

But still, one can ask: can these categories of arbitrary tokens be used in other contexts? What if I now introduced a rule, "C-A"? Would participants learn this rule? This question came up with Jean-Remy. He is currently trying to understand what goes into generalizing the (AB)^n and the A^nB^n "grammars" popularized by Hauser, Chomsky & Fitch. We wondered: could positionally defined (arbitrary) classes be used to learn a rule like A^nB^n? Turns out that Ansgar DID try to induce rules like C-A, after training participants with A-X-C. The result: NO generalization! The positional variables remain tied to their positions! So here is one possible interpretation: maybe the (human) mind is capable of creating categories with truly arbitrary members ONLY when such categories are innately specified, like Nouns and Verbs. Sounds like an easy-to-falsify hypothesis, being so darn strong @ first glance.
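For my own bookkeeping, here's a toy sketch of what the two grammars demand of a string. The syllable sets and the little checkers are made-up illustrations (nothing like the actual stimuli or the learning task); it's just to keep (AB)^n and A^nB^n straight.

```python
# Toy illustration (mine, not Jean-Remy's actual materials):
# what it takes for a string to count as (AB)^n versus A^nB^n,
# with the classes A and B defined simply as sets of syllable tokens.

A = {"ba", "di", "gu"}   # hypothetical class-A syllables
B = {"ko", "ne", "pa"}   # hypothetical class-B syllables

def is_ABn(seq):
    """(AB)^n: a strict alternation of A-B pairs, e.g. A B A B."""
    if not seq or len(seq) % 2 != 0:
        return False
    return all(seq[i] in A and seq[i + 1] in B for i in range(0, len(seq), 2))

def is_AnBn(seq):
    """A^nB^n: n class-A tokens followed by n class-B tokens, e.g. A A B B."""
    if not seq or len(seq) % 2 != 0:
        return False
    n = len(seq) // 2
    return all(tok in A for tok in seq[:n]) and all(tok in B for tok in seq[n:])

print(is_ABn(["ba", "ko", "di", "ne"]))   # True:  A B A B
print(is_AnBn(["ba", "ko", "di", "ne"]))  # False: not of the form A A B B
print(is_AnBn(["ba", "di", "ko", "ne"]))  # True:  A A B B
```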

Tuesday, June 06, 2006

Icobes: Filibuster

Icobes is a Belgian microbrewery. Probably the more famous (and better?) beers from this brewery are the Buccaneer series. The Filibuster is an all-right pils. I'm not too fond of pils / lagers as a rule, and this beer conforms to it :) The label is pretty, though!

Dictionary blog trick

I really shouldn't be wasting my time like this. But this is pretty darn cool! Try double-clicking any word on this blog: you should get a popup with the definition, wikipedia link, (and lots of ads). Courtesy of TheFreeDictionary. *Update*: Doesn't seem to work with the insensitive Safari browser. HINT: Use mozillaware like Camino. I think I'm moving almost entirely to Camino.

Monday, June 05, 2006

Goooooogle! Mail!!

Just figured out that the marvelous GMail has a brilliant POP & SMTP server, so I can really move fully over to Goooooooogle! Yay!!! Also, got a nice webcounter for the blog :) (Link at page bottom). Decided to include the piclink to the website because they seem to be the only ones where you needn't include any...

Thursday, June 01, 2006

Pals :)

Here is an old, old picture of my fantastic friends Adish and Ajey. This was probably one of the best times of my life: an Integrated Ph.D. student on the beautiful IISc campus, doing all kinds of cool biology things :) Things to note on the wall: the Nirvana poster, and the poster of a painting by Blake... some combo!