J Acoust Soc Am. 2000 Mar;107(3):1659-70.

A self-learning predictive model of articulator movements during speech production.

Author information

Department of Engineering, University of Cambridge, England.


A model is presented which predicts the movements of flesh points on the tongue, lips, and jaw during speech production, from time-aligned phonetic strings. Starting from a database of x-ray articulator trajectories, means and variances of articulator positions and curvatures at the midpoints of phonemes are extracted from the data set. During prediction, the amount of articulatory effort required in a particular phonetic context is estimated from the relative local curvature of the articulator trajectory concerned. Correlations between position and curvature are used to directly predict variations from mean articulator positions due to coarticulatory effects. Use of the explicit coarticulation model yields a significant increase in articulatory modeling accuracy with respect to x-ray traces, as compared with the use of mean articulator positions alone.
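The correlation-based prediction described above can be sketched as a simple linear regression: the phoneme's mean articulator position is shifted by an amount proportional to how far the locally observed trajectory curvature departs from its mean, scaled by the position-curvature correlation. The function name, statistics, and numbers below are illustrative assumptions, not the paper's actual data or implementation.

```python
import math

def predict_position(mean_pos, var_pos, mean_curv, var_curv, corr, observed_curv):
    """Hypothetical sketch of the abstract's coarticulation model:
    predict an articulator flesh-point position from local trajectory
    curvature via the linear-regression line implied by the
    position-curvature correlation."""
    slope = corr * math.sqrt(var_pos / var_curv)
    return mean_pos + slope * (observed_curv - mean_curv)

# Toy per-phoneme midpoint statistics for one flesh point
# (e.g. tongue-tip height in mm) -- illustrative values only.
stats = {"mean_pos": 4.0, "var_pos": 1.0,
         "mean_curv": 0.5, "var_curv": 0.04, "corr": -0.6}

# Lower-than-average curvature (less articulatory effort) shifts the
# predicted position away from the canonical mean, in the direction
# the correlation dictates.
pred = predict_position(observed_curv=0.3, **stats)
```

In this sketch, using the mean position alone corresponds to `corr = 0`; the abstract's reported accuracy gain comes from the nonzero correlation term capturing context-dependent deviations.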

[Indexed for MEDLINE]