Front Psychol. 2012 Oct 1;3:374. doi: 10.3389/fpsyg.2012.00374. eCollection 2012.

Statistical speech segmentation and word learning in parallel: scaffolding from child-directed speech.

Author information

Department of Psychology, Stanford University, Stanford, CA, USA.


In order to acquire their native languages, children must learn richly structured systems with regularities at multiple levels. While structure at different levels could be learned serially, e.g., speech segmentation coming before word-object mapping, redundancies across levels make parallel learning more efficient. For instance, a series of syllables is likely to be a word not only because of high transitional probabilities, but also because of a consistently co-occurring object. But additional statistics require additional processing, and thus might not be useful to cognitively constrained learners. We show that the structure of child-directed speech makes simultaneous speech segmentation and word learning tractable for human learners. First, a corpus of child-directed speech was recorded from parents and children engaged in a naturalistic free-play task. Analyses revealed two consistent regularities in the sentence structure of naming events. These regularities were subsequently encoded in an artificial language to which adult participants were exposed in the context of simultaneous statistical speech segmentation and word learning. Either regularity was independently sufficient to support successful learning, but no learning occurred in the absence of both regularities. Thus, the structure of child-directed speech plays an important role in scaffolding speech segmentation and word learning in parallel.
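The abstract's appeal to high transitional probabilities as a cue to word boundaries can be illustrated with a minimal sketch. The nonsense words, syllable stream, and boundary threshold below are invented for illustration and are not the artificial language used in the study; this is the classic Saffran-style computation, where TP(B|A) = count(A followed by B) / count(A) and word boundaries are posited at TP dips:

```python
from collections import Counter

def transitional_probs(syllables):
    """TP(B|A) = count(A immediately followed by B) / count(A)."""
    pair_counts = Counter(zip(syllables, syllables[1:]))
    first_counts = Counter(syllables[:-1])
    return {(a, b): n / first_counts[a] for (a, b), n in pair_counts.items()}

def segment(syllables, tps, threshold=0.75):
    """Posit a word boundary wherever the transitional probability dips below threshold."""
    words, current = [], [syllables[0]]
    for a, b in zip(syllables, syllables[1:]):
        if tps[(a, b)] < threshold:
            words.append("".join(current))
            current = []
        current.append(b)
    words.append("".join(current))
    return words

# Toy stream: three invented nonsense words concatenated without pauses.
# Within-word TPs are 1.0; between-word TPs are at most 2/3, so the
# threshold separates them and segmentation recovers the words.
words_seq = ["tupiro", "golabu", "bidaku", "tupiro", "bidaku",
             "golabu", "tupiro", "golabu", "bidaku"]
stream = "".join(words_seq)
sylls = [stream[i:i + 2] for i in range(0, len(stream), 2)]  # all syllables are 2 chars here
tps = transitional_probs(sylls)
print(segment(sylls, tps))  # boundaries fall exactly between the three nonsense words
```

The study's point is that learners need not rely on this cue alone: a consistently co-occurring object provides a redundant, cross-level cue to the same word boundaries.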


Keywords: child-directed speech; frequent frames; speech segmentation; statistical learning; word learning
