Brownian thought space

Cognitive science, mostly, but more a sometimes structured random walk about things.

Location: Rochester, United States

Chronically curious..

Monday, September 03, 2012

Encyclopedia Brown

One of the reasons I knew early that I was destined for all things Science was my fascination with Encyclopedia Brown, and the intense frustration when I couldn't solve the cases.

Just read that the author died earlier this year.. RIP Donald Sobol!

Encyclopedia Brown Strikes Again (1965)

Tuesday, October 04, 2011

Downgrade Lion to Snow Leopard on new iMacs

Got a new iMac? Came pre-installed with Lion? Want to go back to Snow Leopard for whatever reason? (Unlike the Fanboy, I realize we have different needs.) I've been trying to migrate everything off my work laptop (a MacBook Pro) running SL to my new iMac.
First I tried just sticking in an SL install disk. That causes a kernel panic (the screen just calls it a "panic") and the gray screen of death, complaining of incompatible hardware. Then I attached my SL Time Machine backup and restarted the Lion-iMac, holding down the ALT key to get a choice of startup sources (FYI: you CAN see the SL install DVD this way, but when you select it, you get the panic again). Then I selected the "Recovery HD" option and tried to install from the Time Machine backup. But again, Lion told me I could do nada.
Here's what worked for me. Remember, you take full responsibility for trying these tricks. Keep good backups and backups of your backups in case things get screwy.
It requires you to have:
1) Fully upgraded SL to the latest 10.6.8, with all updates in place on your old SL Mac. At some point, in anticipation of Lion, SL got a Migration Assistant update..
2) A Time Machine backup from your fully upgraded SL Mac. I have mine on a WD external HD. Plus all the cables to attach the external HD to your old SL Mac.
3) A FireWire 800 cable (the rectangular ones that cost $45 [wtf?!]) - of course, make sure both computers have FireWire ports. Also, if your TM HD also uses FireWire, make sure your old SL Mac has multiple FireWire ports.
Then follow these simple steps:
(NB: steps 1 & 2 are probably not necessary, but better safe than sorry)
1) Run Disk Utility on your old SL Mac and repair disk permissions (a Terminal equivalent is sketched after these steps).
2) Do a Time Machine backup. Then turn off Time Machine, so it doesn't do freaky stuff during your transfer.
3) Mount the external HD with the TM backup
4) Restart the old SL Mac, holding down the ALT key. You should see a gray screen with your HD(s) and "Recovery HD". Double-click "Recovery HD".
5) On the Lion-iMac, go to System Preferences, click on Startup Disk, and click Target Disk Mode... to restart the computer in TDM. Attach a FireWire cable to the Lion-iMac.
6) When the Lion-iMac restarts, you should see the Thunderbolt and FireWire symbols floating around the screen.
7) Attach the other end of the FireWire cable to the old SL Mac.
8) On the old SL Mac, go to Utilities and click on Restore from Time Machine Backup.
9) Select the TM backup disk as your source, and the Lion-iMac HD mounted onto your SL Mac as the target.
10) You'll get a warning that the target will be erased. Sure. Get it running.
11) At this point, it looked like my old SL Mac restarted. I unmounted the TM backup disk; I couldn't see an unmount button for the (ex-)Lion iMac, so I turned it off and got the device-removal warning. Unplugged all the FireWire and other cables. Restarted both computers...
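By the way, the Terminal equivalent of step 1, if you prefer it, is below (a sketch for the SL era only: the verifyPermissions and repairPermissions verbs were removed from diskutil in later OS X releases):

$ diskutil verifyPermissions /
$ diskutil repairPermissions /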
Everything seems good so far. Mail on the new (now SL) iMac rebuilt its databases when it first started, but seems just like on the old machine. Programs run. No crashes yet.
hth

Wednesday, January 26, 2011

Why do some language universals have exceptions?

In a recent article, Evans & Levinson argue that there are no language universals because, for almost any universal, one can find exceptions. Of course, if you're a Chomskyan-like linguist, the obvious simple response is that this is another case of failing to draw the competence/performance distinction. But still, one needs an explanation of why we find some rare forms, and here is (another) general possibility.
(Sidenote on Evans & Levinson: Imagine we went around observing living species. We would see a bewildering diversity, and I'm quite confident that for any "rule" relating to phenotypes there would always be some exceptions: swimming mammals, flying snakes, flightless birds, carnivorous plants, etc. But from this, should we conclude that there are no underlying "universal" organizing principles? No: there is an underlying organizing principle (common descent), even though the variation can sometimes make it hard to see. And it's a better explanation of the facts than assuming that each species somehow arises independently of the others, in response to its local environmental niche. [Of course, this analogy might not be perfect: living systems evolve, while it's not clear whether languages do, or have done, in quite the same way.])
The possible explanation for rare forms arises from work on language typology and language change. In the former, one finds broad generalizations, sometimes with exceptions. For example, Jenny C. recently told us all about Greenberg's Universal 18, by which the combination of Adjective-Noun order with Noun-Numeral order is forbidden (so you can't say something like "red balls three"). It turns out that, in a large survey of languages, about 4% actually do show this pattern (on the surface).
Now we also know that languages change, and in some cases they change along gradients. Note that this sense of 'gradient' is different from some others, which equate 'gradient' effectively with non-discrete (and hence non-symbolic?) views of language. In the sense used in this post, a gradient is a hierarchy of constituents, with a given rule applying up to different levels of the hierarchy. To give an example (thanks to Jenny C.), consider the hierarchy of DP definiteness:
Pronoun > Definite DP > Indefinite DP/Quantifier > Wh-phrase
where X > Y means that Y is higher up in the hierarchy than X. So, for example, if a certain grammatical rule (like agreement) applies to Y, it must also apply to X, but not vice versa. (In the above example, one can think of the left-to-right progression as getting successively less specific about the intended referent.)
Why should a language change from left to right? A general answer (e.g., in this paper) is that it is due to a tendency to generalize. That is, if in generation 1 agreement appears only on pronouns, and there is even a little evidence of it generalizing to definite DPs, then over time the regularization bias will quickly move the generalization rightward along the hierarchy. Presumably, there might not even need to be any evidence: if the input is sparse and the learner is trigger-happy, s/he might hypothesize that a generalization holds at a level higher than the one observed in the data.
But there are two possible scenarios for gradients like X > Y > Z. One is that the sequence is self-closing, such that X > Y > Z > X ..., much like Rock-Paper-Scissors. In that scenario, with languages starting from different positions in the hierarchy and changing at different rates, at any given time one would expect to find all three kinds of systems.
However, if X > Y > Z and that's the end of the story, then, over time, the elements higher up in the hierarchy will come to dominate, and X-type systems will become infrequent or disappear.
(A third possibility is that, for different reasons, learners might be able to move to both the right and the left of a hierarchy. In this case too, languages would be expected to be of all three kinds.)
In essence, the reason why certain grammatical features might be rare is that they represent what a complex-systems theorist might call repellers - regions that a dynamically evolving system avoids. That is, gradients of change might mostly lead away from some linguistic forms, making them universally rare.
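To make the repeller idea concrete, here's a toy Matlab simulation - my own sketch, not from any of the papers mentioned: a population of languages, each applying some rule up to level X, Y, or Z of the hierarchy, with a small per-generation probability of generalizing one step rightward. Under the open (non-cyclic) gradient, X-type systems all but vanish:

nLang = 1000; nGen = 200; pStep = 0.05;
state = ones(nLang,1);     % 1/2/3 = rule applies up to X/Y/Z; all start at X
for g = 1:nGen
    % each language generalizes one step rightward with probability pStep;
    % in the open gradient there is no way back down the hierarchy
    state = state + (rand(nLang,1) < pStep & state < 3);
end
accumarray(state,1,[3 1])'/nLang   % share of X-types: ~(1-pStep)^nGen, i.e. ~0

Add a Z -> X wrap-around (the Rock-Paper-Scissors case) and you instead get a stationary mix of all three types, which is why only the open gradient predicts universally rare forms.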


Thursday, November 18, 2010

A rule by another name...

Is this a rule or is this a little portion of an HMM?
Imagine some rule-based system that has the following:
x -> {x | y}
It's in a made-up notation, but the meaning should be clear enough: 'x' can go to either 'x' or 'y'. The first part (x -> x) is just what this colorful picture represents (minus the probability of how likely it is that x goes to x). In fact, the above figure requires some kind of identity function (x goes to itself).
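As a toy illustration (mine, not from the post), here's a Matlab sketch that generates strings from x -> {x | y}, read as an HMM transition: stay on x with some probability, else move on to y:

p = 0.7;                   % probability of the x -> x self-loop
s = 'x';
while rand < p && numel(s) < 20
    s(end+1) = 'x';        % x -> x: another x
end
s(end+1) = 'y';            % x -> y: emit y and stop
disp(s)                    % e.g. xxxy

Whether you call the self-loop a rule or a transition probability changes nothing about the strings you get.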
Here are some more rules:
However, for some strange reason, people who think human language doesn't require rules only mean rules like (4); rules (1)-(3) (and, arguably, infinitely many more) are not just OK, they're often required.
Of course, there is a bigger question: can (4) be re-stated in a different system of rules? Probably. But, to take an example from physics, the Newtonian momentum, p = mv, does a fantastic job for most of our daily purposes, although we now "know" that the correct (relativistic) form is p = mv/√(1 - v²/c²).
So, till those computationalists who deny rules like (4) can give us a better rule system, with the kind of intra- and inter-language advances made by assuming variants of (4), their denial remains a promissory note with limited substance.


Wednesday, October 06, 2010

Matlab boxplot notch "error" resolved

Use Matlab? Try this:
>> load carsmall
>> boxplot(MPG,Origin,'notch','on')
See something weird? Something that looks like this image, with the boxes folded over? It's been driving me crazy for a while; I thought I'd broken Matlab somehow, till I discovered this little bit of documentation. Essentially, before R2008b, the notches were truncated to the edges of the box. Since that's wrong, in the sense that the notches (a robust confidence interval around the median) can legitimately extend beyond the box edges, Matlabbers fixed it, so now the notches go wherever they please; but sometimes that makes the figures look weird.
From the Mathworks website:
  • For small groups, the 'notch' parameter sometimes produces notches that extend outside of the box. In previous releases, the notch was truncated to the extent of the box, which could produce a misleading display. A new value of 'markers' for this parameter avoids the display issue.

As a consequence, the anova1 function, which displays notched box plots for grouped data, may show notches that extend outside the boxes.

For comparison, the second figure is how boxplot used to work.
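By the way, if the folded look bothers you, the marker-style notch mentioned in that release note avoids it by drawing the notch extents as markers instead of folding the box (a usage sketch - check the boxplot help in your release):

>> boxplot(MPG,Origin,'notch','marker')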

Sunday, September 26, 2010

The question of dumb rats (and smart babies)

Some scientists seem to believe that rats predicting when the next foot shock happens is all we need to know to understand how babies learn language. Should you?
I came across an interesting blog post that is essentially more anti-Chomsky/Pinker rhetoric. It seems that some baby research likes to throw its little subjects out with their clichés, to describe "new and revolutionary ways" of thinking about language. So here is some attempt to restore balance to the universe.
The general line from that blog is something like this:
(1) The lack of negative evidence is critical to the ChomskyPinkerian position that there must be innate constraints on language acquisition.
(2) In a tone-shock associative task, rats are sensitive to the statistical pattern of presence and absence of the tone, and learn to predict the shock based on (what she calls) positive and negative evidence.
(3) Since rats can learn arbitrary relationships from such negative evidence, why do ChomskyPinkerians believe babies cannot, and instead invoke innate constraints? (followed by some wrist-slapping).
First of all, notice that the example is rotten - this is not the sense of negative evidence that's relevant at all. It just says that rats are sensitive to the probability of a shock given a tone; in the "negative example" case, the conditional probability p(shock|tone) is simply less than 1.0.
The lack-of-negative-evidence argument is ridiculously simple: given finite data, you cannot deduce the true underlying generative system. Therefore, if all you're exposed to is a finite set of sentences, you cannot infer the true underlying (generative grammatical) system unless you have something that guides you to the right solution space.
But you don't have to believe me when I say it's hard - look at the simple evidence: even with several bazillion sentences at our disposal, and all the fancy computational tools, no one has yet come up with an adequate description of a generative system that will produce all and only the grammatical sentences of English.
But a 5-year-old does end up knowing the generative system, and will produce sentences you sometimes wish she wouldn't, and will write little stories, making original sentences that she couldn't possibly have just overheard anywhere.
That is roughly the argument for why there must be something inside the baby that makes it such a genius at figuring out how the language system works. And this something (let's call it Flynn) is (part of) the reason why human babies, but not rat babies or komodo dragon babies or little guppies, end up learning language.
Does this mean there cannot be any general cognitive principles? Of course not, and no one claims there cannot be or that they don't affect language learning (that would be the Flab). What they are and how they do their job is an empirical question. Saying something is innate is merely a description - it doesn't tell you how the system works; that's what keeps researchers in business.
And coming to negative evidence, here's a fantastic quote from Roger Brown et al. (quoted in Dan Slobin's Psycholinguistics):

What circumstances did govern approval and disapproval directed at child utterances by parents? Gross errors of word choice were sometimes corrected, as when Eve said What the guy idea. Once in a while an error of pronunciation was noticed and corrected. Most commonly, however, the grounds on which an utterance was approved or disapproved ... were not strictly linguistic at all. When Eve expressed the opinion that her mother was a girl by saying He a girl mother answered That's right. The child's utterance was ungrammatical but mother did not respond to the fact; instead she responded to the truth value of the proposition the child intended to express. In general the parents fit propositions to the child's utterances, however incomplete or distorted the utterances, and then approved or not, according to the correspondence between the proposition and reality. Thus Her curl my hair was approved because mother was, in fact, curling Eve's hair. However, Sarah's grammatically impeccable There's the animal farmhouse was disapproved because the building was a lighthouse and Adam's Walt Disney comes on, on Tuesday was disapproved because Walt Disney comes on, on some other day. It seems then, to be truth value rather than syntactic well-formedness that chiefly governs explicit verbal reinforcement by parents. Which render mildly paradoxical the fact that the usual product of such a training schedule is an adult whose speech is highly grammatical but not notably truthful (Brown, Cazden, and Bellugi, 1967, pp. 57-58).

Friday, July 30, 2010

PRAAT script to extract sound tokens

The problem:

You have recorded a single long sound file with many tokens (words, syllables), separated by silence for your fantastic upcoming experiment. You want to now segment the long recording and extract each of the tokens into separate sound files.

The solution:
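A minimal sketch of such a script, in PRAAT's old-style syntax (the directory path and the silence-detection parameters on the line after the lone "#" are assumptions; tune them as per the Notes below):

# Extract silence-separated tokens from every .wav in a directory
# into separate, zero-crossing-aligned .wav files.
directory$ = "/path/to/recordings/"
Create Strings as file list... list 'directory$'*.wav
nfiles = Get number of strings
for f to nfiles
   select Strings list
   file$ = Get string... f
   Read from file... 'directory$''file$'
   sound = selected ("Sound")
   name$ = selected$ ("Sound")
   #
   To TextGrid (silences)... 100 0 -25 0.1 0.05 silent sounding
   textgrid = selected ("TextGrid")
   n = Get number of intervals... 1
   token = 0
   for i to n
      select textgrid
      label$ = Get label of interval... 1 i
      if label$ = "sounding"
         token = token + 1
         t1 = Get starting point... 1 i
         t2 = Get end point... 1 i
         ; snap the cut points to zero crossings so tokens don't click
         select sound
         t1 = Get nearest zero crossing... 1 t1
         t2 = Get nearest zero crossing... 1 t2
         Extract part... t1 t2 rectangular 1 no
         Write to WAV file... 'directory$''name$'_'token'.wav
         Remove
      endif
   endfor
   select sound
   plus textgrid
   Remove
endfor
select Strings list
Remove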

How to use:

Place this with all your other favorite scripts. Save all your recordings in a single directory; to be safe, work on copies of the recordings rather than the originals. Load up PRAAT and run the script.

Notes

  1. The script also ensures that each extracted sound file begins and ends at a zero crossing, so things sound good.
  2. You might have to play around with the parameters for what constitutes silence. Search for the "#" line in the script and modify the parameters in the following line. Look up help on Sound: To TextGrid (silences)... in the PRAAT help.
