Over at Language Log, Mark Liberman has posted a very interesting two-part discussion of “creole birdsong” and its implications for human language acquisition, especially regarding the question of the innateness of language.
Creoles, i.e. stabilized and grammaticalized languages emerging from incomplete linguistic input, crop up when a group of children is exposed only to impoverished pidgin input. The fact that children are able to construct a new, full-fledged language from such poor data is often taken to support a view of language acquisition as
“an interaction between environmental exposure and innate abilities.” (Senghas & Coppola 2001)
A particularly interesting example is Nicaraguan Sign Language,
“a signed language spontaneously developed by deaf children in a number of schools in western Nicaragua in the 1970s and 1980s.” The children had no exposure to any other kind of language and had previously communicated only through homesign.
Over time, “sequential cohorts of learners systematized the grammar of this new sign language” (Senghas & Coppola 2001). Interestingly, the learners who systematized the language were all under the age of 10, an age at which the proposed linguistic biases are supposedly more pronounced than in adulthood.
Liberman reports that something similar happened in Ofer Tchernichovski’s lab colony of zebra finches. The initial founder of the colony grew up isolated from others and didn’t learn to sing properly, because there was no one to imitate. However,
“As each succeeding generation learns songs from the preceding one, the effects of biases in the learning process accumulate, so that after a few generations, normal zebra finch songs have re-emerged.”
Liberman links this to the discussion of Derek Bickerton’s language bioprogram hypothesis, which proposes that children draw on their innate language capacities, a kind of mental template for language, when learning and creating language. This hypothesis has come under critique in the last two decades.
Liberman argues that, when social learning is involved,
“perhaps it's normal for the phenotype to emerge over multiple generations, and to involve complex relationships among genetic and social structures.”
This also sits well with the proposals of Simon Kirby and his colleagues from the Language Evolution and Computation Research Unit, who propose that language acquisition can be seen as the interaction of cultural transmission with a small set of genetic biases (a set of “universal biases” instead of a Universal Grammar, so to speak), where
“cultural transmission can magnify weak biases into strong linguistic universals” (Kirby et al. 2007) and language is the “result of nontrivial interactions between three complex adaptive systems: learning, culture, and evolution” (Kirby et al. 2007).
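This magnification effect can be illustrated with a toy iterated-learning simulation (my own sketch, not Kirby et al.’s actual model): each new generation of learners adopts one of two grammar variants with probability equal to that variant’s frequency in the previous generation’s output, plus a deliberately tiny innate nudge toward variant A. Over generations, the weak bias snowballs into a near-universal preference.

```python
import random

def learn(observed_a_fraction, bias=0.05):
    # A learner adopts variant A with probability equal to its observed
    # frequency in the input, plus a small innate nudge toward A.
    p = min(1.0, observed_a_fraction + bias)
    return "A" if random.random() < p else "B"

def iterate(generations=50, pop_size=100, seed=0):
    random.seed(seed)
    # Generation zero uses the two variants at random.
    population = [random.choice("AB") for _ in range(pop_size)]
    history = []
    for _ in range(generations):
        a_fraction = population.count("A") / pop_size
        history.append(a_fraction)
        # The next generation learns only from the frequency of
        # variants in the previous generation's output.
        population = [learn(a_fraction) for _ in range(pop_size)]
    return history

history = iterate()
print(f"first generation: {history[0]:.2f} using variant A")
print(f"last generation:  {history[-1]:.2f} using variant A")
```

The individual bias (5%) is far too weak to determine any single learner’s choice, yet repeated transmission ratchets the population from roughly a 50/50 split to essentially universal use of variant A, which is the gist of the “weak biases, strong universals” argument.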
Relatedly, “a population whose speakers are linguistically biased — for whatever reason — may, over many generations, transform its language in ways that reflect the preponderance of individual biases among language acquirers” (Ladd et al. 2007; see also this previous post on the work of Dan Dediu and Robert Ladd).
(As I have just noticed, Robert Ladd has also drawn attention to this recent paper of his in the comment section of Language Log.)
Kirby and his colleagues have also published a paper on the evolution of birdsong, but they do not refer to Tchernichovski’s work, and the paper is in general far too advanced and technical for me to understand.
In his second post, Mark Liberman extends his thoughts on the possibility of a “multi-generational language bioprogram” and gives some examples of simple learning algorithms which embody
“a bias towards certain outcomes” that lead to “coherent shared patterns,” like a shared vocabulary or, more generally, a shared language.
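One family of such algorithms is the “naming game” used in language-evolution modelling; the minimal version below is my own sketch rather than any specific model Liberman discusses. Agents repeatedly pair up to name a single object, inventing a name when they have none; a successful interaction prunes competing names, a failed one spreads the heard name. No agent has global knowledge, yet a shared vocabulary item emerges.

```python
import random

def naming_game(n_agents=20, n_rounds=10000, seed=1):
    random.seed(seed)
    vocab = [set() for _ in range(n_agents)]  # each agent's known names
    next_name = 0  # names are just fresh integers

    for _ in range(n_rounds):
        speaker, hearer = random.sample(range(n_agents), 2)
        if not vocab[speaker]:
            # A speaker with no name for the object invents one.
            vocab[speaker].add(next_name)
            next_name += 1
        word = random.choice(sorted(vocab[speaker]))
        if word in vocab[hearer]:
            # Success: both agents drop all competing names.
            vocab[speaker] = {word}
            vocab[hearer] = {word}
        else:
            # Failure: the hearer memorises the new name.
            vocab[hearer].add(word)
    return vocab

vocab = naming_game()
print("names per agent:", [len(v) for v in vocab])
print("distinct names in the population:", len(set.union(*vocab)))
```

The bias here is built into the update rule (successful interactions collapse both inventories to one name), which is enough to drive a population of initially name-less agents to a single shared convention.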
The comment sections of the two posts are also very interesting, as prominent figures such as Derek Bickerton himself and Shimon Edelman, professor of psychology at Cornell University, weigh in on the question.
One explanation offered there: “The way the brain does it causes the child automatically to put words together in certain ways.”
Uhm. Er, well. So the brain does it. Cool. Problem solved.
On a related note, Talking Robots, a very cool podcast on robotics and AI, features an interview with Henry Markram of the Blue Brain Project, which is
“the first comprehensive attempt to reverse-engineer the mammalian brain, in order to understand brain function and dysfunction through detailed simulations.”