Just got a mail from Gergo Csibra; he is indeed collaborating with Dan Sperber on the new stuff! No wonder his talk and Dan's talk had so much in common!
Anyhow, this blog is to tie Dick Aslin in.

[Reminder: The idea is to look at common ground across the works of Sperber, Csibra, Aslin & Baillargeon.]
Starting from Saffran, Newport & Aslin, it's pretty much clear that even very young infants are able to extract statistical regularities from their input; not just for speech but for pretty much anything.
In his recent talk (link coming soon!) Dick talked about how such a computational system might be constrained. Since there are a very large number of statistical regularities in a given sequence, are there any constraints that limit the extraction of regularities? Dick showed evidence that this might be the case.
However, my colleagues and I have a paper in press where we show that adults appear to have a constraint on the statistical extraction of words: sequences that span prosodic boundaries are not considered good word candidates.
BUT! What we show in this paper is not that prosody blocks the computation of statistical regularities, but that prosody acts as a filter. That is, the statistical engine tries to extract word candidates based on their distributional properties, but the output of this system is weighted by other factors; in this case, whether or not the word candidates are properly aligned with prosodic boundaries.
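To make the two-stage picture concrete, here is a toy sketch (my illustration, not the actual model or stimuli from the paper): first-order transition probabilities (TPs) between syllables propose word candidates wherever the TP dips, and a prosodic filter then discards candidates that straddle a boundary. The syllable stream, the dip threshold, and the boundary representation are all made-up assumptions.

```python
from collections import Counter

def transition_probs(syllables):
    """First-order TPs: TP(a -> b) = count(a b) / count(a)."""
    pair_counts = Counter(zip(syllables, syllables[1:]))
    first_counts = Counter(syllables[:-1])
    return {(a, b): n / first_counts[a] for (a, b), n in pair_counts.items()}

def candidate_words(syllables, tps, dip=0.8):
    """Posit a word boundary wherever the TP dips below `dip`."""
    words, current = [], [syllables[0]]
    for a, b in zip(syllables, syllables[1:]):
        if tps[(a, b)] < dip:
            words.append(tuple(current))
            current = []
        current.append(b)
    words.append(tuple(current))
    return words

def prosodic_filter(words, boundaries):
    """Discard candidates that span a prosodic boundary.

    `boundaries` lists syllable indices b such that a prosodic boundary
    falls between syllable b and syllable b + 1.
    """
    kept, i = [], 0
    for w in words:
        # the candidate occupies syllables i .. i + len(w) - 1;
        # it spans boundary b if b falls strictly inside that stretch
        if not any(i <= b < i + len(w) - 1 for b in boundaries):
            kept.append(w)
        i += len(w)
    return kept

# A made-up Saffran-style stream of three trisyllabic "words":
A, B, C = ["tu", "pi", "ro"], ["go", "la", "bu"], ["bi", "da", "ku"]
stream = A + B + C + B + A + C + A + C + B

tps = transition_probs(stream)
words = candidate_words(stream, tps)           # 9 word tokens, in order
kept = prosodic_filter(words, boundaries=[4])  # boundary inside the first B token
```

With this stream, within-word TPs are 1.0 and between-word TPs are at most 2/3, so the dip threshold cleanly recovers the nine word tokens; the (made-up) prosodic boundary after syllable 4 then knocks out the one candidate that straddles it, while the other tokens of the same word survive.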

### Constraints on statistical regularities: Updated

Here, then, is the updated story of constraints on statistical learning. Remember the problem: there are lots of statistical regularities, but only some are really useful. So how do you constrain the statistical-regularity-extraction engine? Here is an updated answer, from Dick's work, our paper and other stuff randomly thrown in:
1) Statistics are preferentially computed over some units and not others. This was suggested by Bonatti et al. for speech, and might be general.
2) The statistical engine is itself constrained: even with appropriate units, it computes statistics over only some tokens and not others. Dick showed this for tones: interleaved tone sequences in different octaves are perceptually streamed, and transition probabilities (TPs) are computed over the two streams independently.
3) The output of the statistical system is passed through other filters. This seems to be the case in adults in my experiments.
4) The statistical engine might have constraints on what models it chooses.
This last point is not yet clear enough in my mind, but the general thrust is something like this: at least in Linguistics, we have all known since Gold that inference is a nasty beast. One possibility is suggested by some empirical work (help! who by?!) showing (if I remember correctly) that at any point, human adults are predisposed to project the simplest hypothesis compatible with the evidence presented so far. What is not clear is whether, when there are multiple complex possibilities (statistical models), there is a hardwired bias for some and not others.
[[Sorry.. can't do better than that! Maybe there's nothing in this whole paragraph.]]
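Point 2 above can also be put in toy form (again my illustration, with made-up frequencies and a crude pitch cutoff standing in for perceptual streaming, not Dick's actual stimuli): once interleaved tones are streamed by octave, the TPs available to the learner are those within each stream, not those of the raw interleaved sequence.

```python
from collections import Counter

def transition_probs(seq):
    """First-order TPs: TP(a -> b) = count(a b) / count(a)."""
    pairs = Counter(zip(seq, seq[1:]))
    firsts = Counter(seq[:-1])
    return {(a, b): n / firsts[a] for (a, b), n in pairs.items()}

# Made-up interleaved sequence alternating low and high tones (Hz)
interleaved = [440, 1760, 494, 1976, 440, 1760, 494, 1976]

# Crude stand-in for perceptual streaming: split by a pitch cutoff
high = [t for t in interleaved if t >= 1000]
low = [t for t in interleaved if t < 1000]

raw_tps = transition_probs(interleaved)  # every adjacent pair crosses octaves
high_tps = transition_probs(high)        # pairs stay within the high stream
low_tps = transition_probs(low)          # pairs stay within the low stream
```

The point is which pairs the learner ends up tracking: in the raw sequence every adjacent pair crosses octaves, whereas after streaming all tracked pairs are within-octave, so the two streams yield independent statistics.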

### [[Csibra+Sperber] + Aslin]

So the updated jigsaw looks like this: What if there are constraints on statistical learning that come out of the inferences we make based on social interactions, as in the Csibra+Sperber ideas?
Methinks worthwhile exploring :)

## 1 Comment:

Mo,

Hey, awesome blog! I agree this topic is worth a close gander. Have you covered Saffran's work yet? How come we never talked about cognitive development at Dartmouth? Oh yes, we were too busy practicing our skits and singing at that fabulous karaoke bar you found. A very enriching experience. I can still hear our lovely choral rendition of "Mambo Number 5". Wait, I may have photos I can upload.... (hee hee)

OK, I do have a more serious question... will there be a universal binary version of PsyScope? How accurate are the timings under Rosetta?

Shannon
