The learnability of abstract syntactic principles

Cognition. 2011 Mar;118(3):306-38. doi: 10.1016/j.cognition.2010.11.001. Epub 2010 Dec 24.

Abstract

Children acquiring language infer the correct form of syntactic constructions for which they appear to have little or no direct evidence, avoiding simple but incorrect generalizations that would be consistent with the data they receive. These generalizations must be guided by some inductive bias - some abstract knowledge - that leads them to prefer the correct hypotheses even in the absence of directly supporting evidence. What form do these inductive constraints take? It is often argued or assumed that they reflect innately specified knowledge of language. A classic example of such an argument moves from the phenomenon of auxiliary fronting in English interrogatives to the conclusion that children must innately know that syntactic rules are defined over hierarchical phrase structures rather than linear sequences of words (e.g., Chomsky, 1965, 1971, 1980; Crain & Nakayama, 1987). Here we use a Bayesian framework for grammar induction to address a version of this argument and show that, given typical child-directed speech and certain innate domain-general capacities, an ideal learner could recognize the hierarchical phrase structure of language without having this knowledge innately specified as part of the language faculty. We discuss the implications of this analysis for accounts of human language acquisition.
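The Bayesian comparison the abstract describes can be sketched in miniature: a learner scores each candidate grammar by a posterior that combines a simplicity-favoring prior with the grammar's likelihood on the observed corpus. The grammar sizes, per-sentence log-probabilities, corpus size, and the geometric prior below are invented toy values for illustration only; they are not taken from the paper's grammars or data.

```python
import math

# Toy Bayesian grammar comparison: log posterior ∝ log prior + log likelihood.
# All numbers here are hypothetical, chosen only to illustrate the trade-off
# between grammar simplicity and fit to data.

def log_prior(n_rules, n_symbols, alpha=0.5):
    # A simple geometric prior: each rule or symbol multiplies the prior
    # by alpha, so more compact grammars receive higher prior probability.
    return (n_rules + n_symbols) * math.log(alpha)

def log_likelihood(per_sentence_logp, n_sentences):
    # Total log-probability the grammar assigns to a corpus of sentences,
    # assuming (for this sketch) a uniform per-sentence score.
    return per_sentence_logp * n_sentences

corpus_size = 1000  # hypothetical number of child-directed utterances

# A flat, linear grammar: many rules, slightly worse per-sentence fit.
linear = log_prior(n_rules=120, n_symbols=30) + log_likelihood(-10.0, corpus_size)

# A hierarchical phrase-structure grammar: compact and a better fit.
hierarchical = log_prior(n_rules=40, n_symbols=15) + log_likelihood(-9.9, corpus_size)

print("log posterior (linear):      ", round(linear, 1))
print("log posterior (hierarchical):", round(hierarchical, 1))
print("preferred grammar:", "hierarchical" if hierarchical > linear else "linear")
```

With these toy settings the hierarchical grammar wins on both terms, echoing the paper's qualitative claim that an ideal learner can prefer hierarchical structure from typical input without that preference being innately stipulated; the actual analysis in the paper scores full probabilistic grammars against CHILDES corpora rather than fixed per-sentence constants.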

MeSH terms

  • Bayes Theorem*
  • Child
  • Generalization, Psychological / physiology*
  • Humans
  • Language Development*
  • Learning / physiology*
  • Psycholinguistics / methods*