Brain Res. Author manuscript; available in PMC May 18, 2007.
PMCID: PMC1853329
NIHMSID: NIHMS14276

Electrophysiological differentiation of phonological and semantic integration in word and sentence contexts

Abstract

During auditory language comprehension, listeners need to rapidly extract meaning from the continuous speech stream. It is a matter of debate when and how contextual information constrains the activation of lexical representations in meaningful contexts. Electrophysiological studies of spoken language comprehension have identified an event-related potential (ERP) that was sensitive to phonological properties of speech, which was termed the phonological mismatch negativity (PMN). With the PMN, early lexical processing could potentially be distinguished from processes of semantic integration in spoken language comprehension. However, the sensitivity of the PMN to phonological processing per se has been questioned, and it has additionally been suggested that the “PMN” is not separable from the N400, an ERP that is sensitive to semantic aspects of the input. Here, we investigated whether or not a separable PMN exists and if it reflects purely phonological aspects of the speech input. In the present experiment, ERPs were recorded from healthy young adults (N = 24) while they listened to sentences and word lists, in which we manipulated semantic and phonological expectation and congruency of the final word. ERPs sensitive to phonological processing were elicited only when phonological expectancy was violated in lists of words, but not during normal sentential processing. This suggests a differential role of phonological processing in more or less meaningful contexts and indicates a very early influence of the overall context on lexical processing in sentences.

Keywords: Language, Speech comprehension, Phonology, Semantics, Event-related potential (ERP)

1. Introduction

One of the remarkable characteristics of human speech comprehension is its speed. Humans are able to recognize spoken words at a rate of at least three words per second, and listeners can extract meaning from ongoing speech with little delay. This capability requires that speech comprehenders rapidly map the incoming speech signal onto representations of words and their meanings in the mental lexicon. Furthermore, comprehenders must relate this in real time to the ongoing context being built up by the preceding word, sentence or discourse.

To accommodate the speed of spoken language comprehension, many models have proposed that processing spoken words presented in isolation starts immediately from the very first meaningful sound(s) or phoneme(s) (Goldinger et al., 1989, 1992; Marslen-Wilson, 1987, 1989; Marslen-Wilson and Welsh, 1978; McClelland and Elman, 1986; McQueen et al., 1994; Norris, 1994; Norris et al., 1995, 2000). The degree of activation of a particular lexical candidate directly reflects the goodness-of-fit between stored form information and the acoustic input at that point in time. This leads to the graded activation of multiple lexical representations that partially match the word initial phoneme(s) (i.e., lexical access). The activated lexical candidates compete with each other for recognition (e.g., Norris et al., 1995). As more acoustic information becomes available, the number of activated lexical representations is narrowed down to the item that best matches the input, and a representation of the word that was heard is chosen (i.e., lexical selection). Lexical selection of a word presented in isolation can only be completed at the point where the acoustical input of this word is uniquely differentiated from that of its competitors — the isolation point (compare for example: captain and capitol).
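The notion of an isolation point can be illustrated with a toy sketch, in which letters stand in for phonemes and the tiny three-word lexicon is our own illustrative assumption, not a model of the real lexicon. The isolation point falls at the first segment position at which a word's prefix no longer matches any competitor:

```python
def isolation_point(word, lexicon):
    """Return the number of initial segments needed before `word` is
    uniquely differentiated from all competitors in `lexicon`.

    Toy all-or-none prefix matching over letters; real models assume
    graded activation over phonemic input, not discrete prefixes.
    """
    competitors = [w for w in lexicon if w != word]
    for k in range(1, len(word) + 1):
        prefix = word[:k]
        # The word is isolated once no competitor shares this prefix.
        if not any(c.startswith(prefix) for c in competitors):
            return k
    return len(word)  # never uniquely differentiated within the word

lexicon = ["captain", "capitol", "capital"]
```

In this sketch, "captain" is isolated at its fourth segment (where "capt" diverges from "capi..."), whereas "capitol" and "capital" remain competitors of each other until their sixth segment.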

However, most of the time words are not processed in isolation, but in the context of sentences or discourse. It is a matter of much debate at what point in time context starts to exert an influence on the incoming speech signal. Whereas some models propose that the incoming speech signal is initially processed independent of context (e.g., Norris, 1994), other models suggest that there is an early or immediate interaction between context and the speech input (McClelland and Elman, 1986). A number of behavioral studies have shown that context starts to exert an influence very early during the process of word recognition, such that less acoustical information is needed and lexical selection can occur before the word can be uniquely distinguished from its competitors (e.g., Zwitserlood, 1989). But, the ultimate goal of the listener is to comprehend the meaning of spoken words in their context, which requires integration into a higher order meaning representation of the whole utterance (i.e., lexical integration).

1.1. Behavioral studies of spoken word recognition

Behavioral studies of spoken word recognition in lists have manipulated the phonological relationship between primes and targets, both in unimodal and cross-modal priming paradigms. The phonological overlap in these studies has been in final parts of the word (rhyme priming, e.g., task–mask; factor–tractor) or at word onsets (alliterative priming, e.g., chap–chat; captain–capitol); other studies have manipulated phoneme overlap between items that did not rhyme or alliterate.

Targets that rhyme with their primes are recognized more quickly than non-rhyming targets (e.g., Donnenwerth-Nolan et al., 1981; Emmorey, 1989; Monsell and Hirsh, 1998; Praamstra et al., 1994; Radeau et al., 1998; Shulman et al., 1978; Slowiaczek et al., 2000). This effect has been found in many studies and across many unimodal paradigms, but not in cross-modal paradigms (Cutler et al., 1999; Radeau et al., 1995). This suggests that the effect may be pre-lexical, although a recent study (Norris et al., 2002) observed that rhyme priming may have both strategic and automatic components.

Alliterative priming effects are not as robust as rhyme priming effects. Some studies have suggested that alliterative priming only occurs when phonological aspects of word processing are strategically important (Goldinger et al., 1992). Other studies have shown inhibition of alliterating target words when the amount of onset overlap between prime and target words increases (Slowiaczek and Hamburger, 1992; Dufour and Peereman, 2003). This inhibition effect is strongest when strategic effects are least likely to develop and is therefore considered to be a reflection of automatic aspects of word recognition, more specifically the effects of lexical competition between simultaneously activated lexical candidates.

Some behavioral studies of sentence comprehension have also investigated when and how the speech-driven activation of phonological form representations begins to interact with contextually driven semantic information. These studies have focused on the onsets of words to identify when sentential context starts to influence word processing. The main question here is whether semantic context can activate phonological form representations without support from the actual speech input, or, alternatively whether the presence of this context can reduce the amount of speech input that is needed to recognize the actual word that was heard.

The majority of these studies have used a cross-modal paradigm, where an auditorily presented sentence context was paired with a visually presented probe. The probe word was presented concurrently with the relevant target word in the sentence, either before or after the isolation point of the target word. A key finding in this literature has been that sentential semantic context can facilitate the processing of the contextually appropriate candidate before the isolation point but does not preclude the activation of contextually inappropriate word candidates (Zwitserlood, 1989). Zwitserlood concluded that these data are consistent with the idea that sentential context can influence word processing at the level of lexical selection, but others have argued that this influence occurs during lexical integration processes (Marslen-Wilson, 1987; Gaskell and Marslen-Wilson, 2001). Gow and Gordon (1995) have further shown that the clarity of the articulation of the speech signal may influence which lexical candidates are activated and that ambiguity of the signal can sometimes lead to activation of candidates that were not intended by the speaker (compare for example tulips vs. two lips).

1.2. ERP studies of spoken word recognition

Relatively few event-related potential (ERP) studies have investigated the influence of word or sentential context on the process of spoken word recognition (Connolly and Phillips, 1994; Friederici et al., 2004; Friedrich et al., 2004; Hagoort and Brown, 2000; Praamstra et al., 1994; Radeau et al., 1998; Van den Brink et al., 2001, 2006; Van den Brink and Hagoort, 2004; Van Petten et al., 1999). These studies supported the findings of behavioral studies and have found electrophysiological evidence for a very rapid influence of context on spoken word processing. This effect is manifested as an early N400 or, possibly, as a separable ERP component sensitive to early aspects of spoken word recognition.

The discovery of the N400, first reported by Kutas and Hillyard (1980), allowed electrophysiological research of language to rapidly develop. The N400 is a negative polarity ERP component that peaks approximately 400 ms post-word onset, and it is maximal over posterior scalp sites (Kutas and Hillyard, 1980). The N400 has been related specifically to semantic aspects of the input and has been found independent of modality of input, being present for written, spoken and signed language (e.g., Holcomb and Neville, 1991; Kutas and Hillyard, 1980; Kutas et al., 1987; Neville et al., 1997). It has been proposed that the amplitude of the N400 effect is modulated as a function of the ease with which the meaning of a word can be integrated with a higher order representation of the preceding word, sentence or discourse context. For example, the amplitude of the N400 is reduced to words that can be more easily integrated into the preceding context (e.g., Brown and Hagoort, 1993; Holcomb, 1993; Van Berkum et al., 1999, but see for example Kutas and Federmeier, 2000; Federmeier et al., 2006 for an alternative proposal).

In studies of spoken language comprehension that have used single word contexts, it has been well established that the N400 amplitude is modulated as a function of the associative or semantic relatedness of a preceding context word. A reduced N400 has been found as a function of priming by related words (Bentin et al., 1985; Hagoort et al., 1996; Holcomb, 1988; Holcomb and Neville, 1991). Similarly, other studies have suggested that rhyme priming (shared word offsets) and alliterative priming (shared word onsets) also affect the amplitude of the N400, such that it is reduced to words (and in some cases non-words) that are preceded by a word that rhymes (e.g., Praamstra et al., 1994; Radeau et al., 1998; Rugg, 1985, 1987) or by a word that has the same onset as the critical word (e.g., O'Rourke and Holcomb, 2002; Praamstra et al., 1994). This ERP effect began much earlier for alliterative priming (250–700 ms) than for rhyme priming (450–700 ms). Interestingly, Praamstra et al. (1994) observed electrophysiological and behavioral rhyme priming effects, but only electrophysiological alliterative priming effects.

Several ERP studies of spoken sentence comprehension have observed early effects of sentential context on spoken word recognition. These studies agree that this effect is most likely due to facilitation of lexical selection or integration processes, but they differ in their interpretation of whether the semantic context representation interacts with the activated form representations in terms of goodness-of-fit with phonology or semantics.

Connolly and Phillips (1994) manipulated semantic and phonological congruence of sentence final words in four conditions. For example: (1) High Cloze: at night the old woman locked her door (semantically and phonologically expected), (2) Low Cloze: they left the dirty dishes in the kitchen (semantically and phonologically less expected than the most expected ending: “sink”), (3) Phonologically Congruent: Phil put some drops in his icicles (semantically anomalous but phonologically congruent with most expected ending: “eyes”), (4) Anomalous: Joan fed her baby some warm nose (semantically and phonologically anomalous). An early negative shift (150–350 ms) was found in response to sentence final words that violated the expected word onset (conditions 2 and 4). This early negativity was absent in the waveforms of the most expected sentence final words (1) and to final words that started with the same phoneme as the most expected ending (3). Because the amplitude of this early negative shift varied as a function of the congruence of the word initial phoneme(s) with the contextually most expected word, Connolly and Phillips (1994) labeled this the phonological mismatch negativity or PMN. In a more recent study, they combined high density EEG and MEG to model the possible sources of the PMN and they concluded that it may have a left anterior source (Connolly et al., 2001). This finding was further bolstered with the results of an fMRI study (D'Arcy et al., 2004). However, their findings in the EEG/MEG and fMRI studies were in word priming paradigms and not sentence contexts.

Other studies have also reported an early negativity preceding the N400 (Hagoort and Brown, 2000; Van den Brink et al., 2001; Van den Brink and Hagoort, 2004), but these studies differed from Connolly and Phillips (1994) in their interpretation of this ERP result. Van den Brink et al. (2001) replicated and extended the Connolly and Phillips (1994) study and also reported a biphasic ERP to the critical words that were semantically and/or phonologically incongruent with the most expected word. But because they also found an early negativity to words that were fully congruent with the preceding context, they argued that this effect cannot be attributed to phonological mismatch, but rather to the goodness-of-fit between the activated form candidates and the preceding context. They labeled this early effect the N200 rather than the PMN. In addition, they found that this N200 was more broadly distributed across anterior and posterior regions in contrast to the centro-parietally distributed N400. They concluded from these results that the N200 is separable from the N400 and that it reflects contextual influence on lexical selection in spoken word recognition. However, in a more recent study, Van den Brink and Hagoort (2004) did not replicate the differential topographic distribution of the early negativity, which led them to conclude that it may in fact not be separable from the N400.

Van Petten et al. (1999) also found early effects of sentence context on spoken word recognition in a study in which they had identified the isolation point of the target stimuli. The authors predicted that the effects of the spoken sentence contexts should occur before the isolation point if semantic processing can begin on incomplete acoustic information. Their ERP results indeed supported this prediction. The authors argued that this early ERP effect was indistinguishable from the N400 because statistical analysis did not reveal a biphasic negativity and grand average waveforms did not show biphasic activity either. The authors acknowledged that, in theory, either phonological expectation or semantic expectation could have influenced the ERP waveforms in their study, but they argued that the bulk of previous studies have pointed to context supporting and generating semantic expectations rather than form-based phonological expectations.

1.3. The present study

In the present study, we further tested the existence of a separable electrophysiological correlate of the early influence of context on spoken word comprehension. Furthermore, we tested whether this component was sensitive to phonological mismatch per se or to more general lexical selection processes. This was examined in two types of context, words in lists and words in meaningful sentence contexts.

Instead of the traditional two-word priming paradigm that has been used in previous studies, the priming paradigm of this study was modified such that, instead of one prime word, there were seven prime words preceding the critical word. This was done because it enabled maximum separation of the effects of phonological and semantic priming on spoken word recognition and because it provided a better baseline for the comparison of these effects to the sentence conditions. Participants were asked to listen to word lists that consisted of either a series of eight semantically and associatively unrelated words that had identical initial phoneme(s) (Alliterative lists) or a series of words from the same semantic category that started with different phonemes (Category lists). For examples, see Table 1. The congruency of the terminal word of each type of list was manipulated. In the Alliterative lists, the terminal word was either phonologically congruent or incongruent with respect to the word onset of the preceding words. Importantly, in both congruent and incongruent Alliterative lists, none of the words was semantically or associatively related to one another. This allowed for the assessment of the effects of phonological mismatch per se, without the influence from semantic expectancy. In the Category lists, the terminal word was either semantically congruent or incongruent with respect to the semantic category of the preceding words. Critically, in both the congruent and incongruent word lists there was no alliterative overlap in word onsets. In the Category lists, participants could build up an expectation that the final word would be part of a semantic category, but many candidates could fit as the final word of the category, i.e., the expectancy for any one particular word was very low. Therefore, no effects of phonological mismatch between the contextually most expected word and the actual presented word are expected to occur in the Category lists.
Instead, we predicted that the ERP results comparing congruent and incongruent conditions should reflect the semantic match or mismatch with the preceding context.

Table 1
Examples of list conditions

In addition, participants listened to four types of sentences in which semantic and phonological congruency of the terminal word was manipulated. The final word of the sentences could be (1) High Congruent: semantically and phonologically congruent with the most expected ending, (2) Phonologically Congruent: semantically incongruent but phonologically congruent with the most expected ending, (3) Low Congruent: semantically and phonologically congruent with a less expected ending and (4) Incongruent: semantically and phonologically incongruent. See Table 2 for examples of sentence stimuli.

Table 2
Examples of sentence conditions

The EEG was recorded from an array of 59 electrodes distributed over the scalp (see Fig. 1). This electrode density allowed us to test for possible dissociations in topographic distributions of the N400 from an early negative shift that may reflect the influence of context on spoken word processing. Differences in topographic distribution of the N400 from an early negativity would be consistent with the idea that these components were generated from (partially) non-overlapping neuronal sources (e.g., Rugg and Coles, 1995) and would support separability of the N400 from an early negative component.
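One common way to carry out such topographic comparisons (not necessarily the procedure used here) is to first remove overall amplitude differences between effects, for example by scaling each difference topography to unit vector length before testing for distributional differences (McCarthy and Wood, 1985). A minimal sketch with hypothetical electrode values:

```python
import numpy as np

def vector_scale(topography):
    """Scale a scalp distribution (one value per electrode) to unit
    vector length, so that only the shape of the distribution remains
    and overall amplitude differences between effects are removed."""
    topo = np.asarray(topography, dtype=float)
    norm = np.sqrt(np.sum(topo ** 2))
    return topo / norm if norm > 0 else topo

# Two hypothetical difference topographies that differ only in amplitude:
early_effect = np.array([1.0, 2.0, -1.0])
late_effect = np.array([2.0, 4.0, -2.0])

# After scaling, the two are identical: same shape, so no evidence for
# distinct underlying sources; a residual difference after scaling would
# instead point to (partially) non-overlapping generators.
```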

Fig. 1
Overview of the electrode positions and regions used for statistical analysis. Positions that do not correspond to 10–20 locations have an additional letter to indicate their position relative to the nearest 10–20 location (‘i’ ...

In the word list conditions, we predicted differential ERP effects of phonological context for the Alliterative lists but not for the Category lists. This effect may have the early onset that has been reported in previous studies of alliterative priming (e.g., Praamstra et al., 1994). The topographic distribution of this effect is crucial in identifying if this phonological ERP effect is separable from the N400. In contrast, we predicted differential effects of semantic context for the Category lists but not for the Alliterative lists. We predicted this effect to emerge as a classic N400 with a posterior scalp distribution (e.g., Kutas and Hillyard, 1980, 1982).

Typical N400 semantic congruency effects were also predicted in the sentence conditions: relative to the Incongruent, Phonologically Congruent and Low Congruent conditions, we should expect a reduction of the N400 in the High Congruent condition. In addition, we expected to replicate the finding of an early negative shift that has been reported in other studies of spoken word processing in sentence contexts (e.g., Connolly and Phillips, 1994; Van Petten et al., 1999; Van den Brink et al., 2001). Our aim was to identify if this early negative shift can indeed be differentiated from the N400 in terms of its topographic distribution. In conjunction with this, we aimed to determine if this early negative shift is sensitive to phonological mismatch, or, alternatively, to semantic goodness-of-fit. This was done by comparing the topographic distribution of the early ERPs in the sentence conditions to those in word lists.

2. Results

Repeated measures analyses of variance (ANOVAs) were performed on the mean amplitude of the ERPs to the terminal words of lists and sentences in two time windows: an early time window (200–300 ms) to capture a possible early negative effect (i.e., PMN (Connolly and Phillips, 1994), N200 (Van den Brink et al., 2001) or early N400 (Van Petten et al., 1999; Van den Brink and Hagoort, 2004)) and a late time window (300–600 ms) to capture the N400 effect. Time windows were selected on the basis of previous literature and careful visual inspection of the grand average waveforms. Separate overall ANOVAs were conducted for the lists and the sentences: an overall region analysis and a midline analysis. The overall region analyses, for both lists and sentences, included the factors Hemisphere (left, right), Region (frontal, central, parietal and occipital) and Electrode (5 levels). The midline analyses included the factor Electrode (8 levels). In addition, both the overall and the midline list analyses included the factors List Type (Alliterative, Category) and Condition (Incongruent, Congruent), and the sentence analyses included the factor Sentence Type (High Congruent, Phonologically Congruent, Low Congruent and Incongruent). Fig. 1 shows the regions for statistical analyses.

Follow-up overall analyses were done for separate regions (frontal, central, parietal and occipital) for the lists and sentences, respectively. In addition to these overall analyses, planned pairwise comparisons were performed for the Alliterative and the Category lists separately. For the sentence experiment, three a priori pairwise comparisons were conducted using ANOVAs with a 2-level congruency factor to compare High Congruent with Incongruent, Phonologically Congruent and Low Congruent, respectively. Greenhouse–Geisser corrected p values are reported for analyses with more than one degree of freedom in the numerator.
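The dependent measure in these analyses, the mean ERP amplitude per trial and electrode within each time window, can be sketched as follows. The array layout, sampling rate, and trial count below are illustrative assumptions, not the authors' actual recording parameters or pipeline:

```python
import numpy as np

def window_mean(epochs, times, t_start, t_end):
    """Mean amplitude per trial and electrode within [t_start, t_end) s.

    epochs: (n_trials, n_electrodes, n_samples) voltages in microvolts
    times:  (n_samples,) sample times in seconds, 0 = onset of final word
    """
    mask = (times >= t_start) & (times < t_end)
    return epochs[:, :, mask].mean(axis=2)  # -> (n_trials, n_electrodes)

# Illustrative epoch structure: 24 trials, 59 electrodes, 500 Hz
# sampling, covering roughly -100 to +800 ms around word onset.
rng = np.random.default_rng(0)
times = np.arange(450) / 500.0 - 0.1
epochs = rng.normal(size=(24, 59, times.size))

early = window_mean(epochs, times, 0.200, 0.300)  # PMN/N200 window
late = window_mean(epochs, times, 0.300, 0.600)   # N400 window
```

Per-condition averages of such values would then enter the repeated measures ANOVAs with the region, electrode, and condition factors described above.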

2.1. List analyses

Fig. 2 displays the grand average waveforms to the incongruent and congruent final words of the Alliterative and the Category lists. Fig. 3 shows the topographic distribution of the effects.

Fig. 2
Grand average ERPs to the final words in the phonologically congruent and incongruent conditions for the Alliterative lists (left panel) and to the semantically congruent and incongruent conditions in the Category lists (right panel). In this figure (and ...
Fig. 3
Topographic distribution of the ERP effects for the Alliterative (left panel) and Category (right panel) lists in the 200–300 ms (top part) and 300–600 ms (bottom part) epochs. Pink colors indicate the negative maximum of the effects, ...

The results of the ANOVAs for the overall region and midline analyses are presented in Table 3, for the overall analyses over individual regions in Table 4, for the pairwise overall and midline analyses for the Alliterative and Category lists in Table 5 and for the pairwise comparisons over the individual regions for the Alliterative and the Category lists in Table 6.

Table 3
Word lists main effects and interactions: overall region and midline analyses
Table 4
Word lists: overall analyses over individual regions
Table 5
Word lists pairwise comparisons: overall region and midline analyses
Table 6
Word lists pairwise comparisons: individual regions

In comparison to Congruent Alliterative endings, waveforms to Incongruent Alliterative endings became more negative over the occipital region of the scalp in the 200–300 ms epoch. In the Category lists, the waveforms of Incongruent and Congruent endings did not diverge during this time window. This pattern was evident from the results of the following analyses. In the overall analyses over individual regions (Table 4), we found a marginal interaction of List Type by Condition in the occipital region. The overall pairwise comparisons (Table 5) for the Alliterative lists showed a significant interaction of Condition by Electrode and marginally significant interactions of Condition by Region and Condition by Region by Electrode in the region analyses, and a significant Condition by Electrode interaction in the midline analyses; no significant effects were obtained for the Category lists. The pairwise comparisons of the individual regions (Table 6) for the Alliterative lists showed a significant interaction of Condition by Electrode for the parietal region and a significant main effect and interaction with Electrode in the occipital region. Again, in this time window, no significant effects were obtained for the Category lists. In summary, in this early time window, a dissociation was found between the results of the Alliterative and Category lists. An effect of phonological mismatch was observed in the form of an early negative shift that was maximal over the occipital region. In contrast, effects of category violation did not result in significant ERP differences.

In contrast, in the N400 time window (300–600 ms), a typical N400 effect that was maximal over centro-parietal sites was obtained for the Category lists. For the Alliterative lists, however, a larger negative shift was obtained to the congruent endings over frontal sites, and no N400 effect was found. This pattern was evident from the following results. The overall region analyses (Table 3) showed a significant interaction of List Type by Condition and significant interactions of List Type, Condition and Region, of List Type, Condition and Electrode and of List Type, Condition, Region and Electrode; the overall midline analyses showed significant interactions between List Type and Condition, and between List Type, Condition and Electrode. The overall analyses for the separate regions (Table 4) showed significant interactions of List Type and Condition and of List Type, Condition and Electrode in frontal, central and parietal regions, and for the frontal region a three-way interaction between List Type, Condition and Electrode was observed as well. In the separate overall region analyses (Table 5) of the Alliterative lists, there was a significant interaction of Condition by Region, and the overall midline analyses showed a significant interaction of Condition by Electrode; for the Category lists, there was a main effect of Condition and interactions of Condition and Region, of Condition and Electrode, and of Condition, Region and Electrode. For the Alliterative lists, the analysis of the individual regions (Table 6) revealed only a significant effect of Condition and a significant interaction between Condition and Electrode in the frontal region, but for the Category lists, significant effects of Condition and an interaction of Condition by Electrode were found in frontal, central and parietal regions.
In all three regions, the incongruent endings were more negative in relation to the congruent endings, and this effect was maximal over central and parietal electrode sites, which is typical of the N400.

In summary, a typical N400 effect was found in the Category lists but not in the Alliterative lists. In contrast, in the Alliterative lists, a frontal negativity was observed to congruent endings.

2.2. Sentence analyses

Fig. 4 shows the ERP results for the sentence conditions. Fig. 5 displays the topographic distribution of the results. The results of the ANOVAs for the overall region and midline analyses are presented in Table 7, for the overall analyses over individual regions in Table 8, for the pairwise overall and midline comparisons in Table 9 and for the pairwise comparisons over the individual regions in Table 10.

Fig. 4
Grand average ERPs to the final words in the sentence conditions. The left panel shows the ERPs to the High Congruent and Incongruent conditions, the middle panel to the High Congruent and Phonologically Congruent conditions, and the right panel to the ...
Fig. 5
Topographic distribution of the ERP effects in the comparison of Incongruent–High Congruent (left panel), Phonologically Congruent–High Congruent (middle panel) and Low Congruent–High Congruent (right panel). Pink colors show a ...
Table 7
Sentences: overall region and midline analyses
Table 8
Sentences: overall analyses over individual regions
Table 9
Sentences pairwise comparisons: overall region and midline analyses
Table 10
Sentences pairwise comparisons: individual regions

In the early time window from 200 to 300 ms, a widely distributed negative shift was seen to the final words in the Incongruent condition. A small non-differential negative shift was also visible for the High Congruent, the Phonologically Congruent and the Low Congruent conditions, but only over frontal and central electrode sites. This pattern was evident from the following results of the statistical analyses. The overall region analysis (Table 7) revealed a main effect of Condition and interactions of Condition and Electrode, and of Condition, Hemisphere, Region and Electrode; the overall midline analysis (Table 7) also showed a main effect of Condition. Overall analyses for each of the individual regions (Table 8) showed only a significant interaction of Condition, Hemisphere and Electrode for the frontal region, and for the central region a main effect of Condition and an interaction of Condition by Electrode were found. The three overall a priori pairwise comparisons (Table 9) resulted in significant differences only between the High Congruent and Incongruent conditions, i.e., main effects of Condition in the region and midline analyses, and interactions of Condition and Electrode, of Condition, Region and Electrode, and of Condition, Hemisphere, Region and Electrode in the region analyses. The pairwise comparisons over the individual regions (Table 10) revealed significant differences only in the comparison of Incongruent and High Congruent endings, in the form of a main effect of Condition in the central and parietal regions, and a significant interaction of Condition, Hemisphere and Electrode in the frontal region. In summary, only the Incongruent condition differed from the High Congruent condition in this early time window. This effect was maximal over centro-parietal electrode sites.

In the 300–600 ms N400 time window, relative to the High Congruent condition, all the other conditions show a more negative shift with the classic posterior N400 distribution. This was evident from the following results of the statistical analyses. In the overall region and midline analyses (Table 7), main effects of Condition and significant interactions of Condition and Electrode were obtained; in the region analyses, significant interactions were also found for Condition and Region, for Condition, Hemisphere and Electrode, for Condition, Region and Electrode, and for Condition, Hemisphere, Region and Electrode. The overall analyses of each of the individual regions (Table 8) revealed a main effect of Condition and an interaction of Condition and Electrode for all four regions, and a Condition, Hemisphere and Electrode interaction in the frontal and occipital regions. The overall a priori pairwise comparisons (Table 9) showed main effects of Condition and interactions of Condition and Electrode for all comparisons in the region analyses, but only for the comparison between the High Congruent and Incongruent conditions in the midline analyses. Additional significant interactions were found for all comparisons in the region analyses for Condition and Region and for Condition, Region and Electrode, but only the comparison of the High Congruent and Incongruent conditions showed significant interactions of Condition, Hemisphere and Electrode and of Condition, Hemisphere, Region and Electrode. Pairwise comparisons over the four individual regions (Table 10) revealed main effects of Condition for all comparisons in the central, parietal and occipital regions, and interactions of Condition and Electrode in all four regions.
Both the comparison of the High Congruent and Incongruent conditions and that of the High Congruent and Phonologically Congruent conditions also showed an interaction of Condition, Hemisphere and Electrode in the occipital region, and for the comparison of High Congruent and Incongruent, a main effect of Condition and a Condition, Hemisphere and Electrode interaction were found in the frontal region as well.

In summary, as expected, the N400 effects were maximal when the sentence final words were anomalous with respect to their preceding context (i.e., in the Incongruent and Phonologically Congruent Conditions), but a significant N400 effect was also found when the sentence final word was semantically congruent but less expected (Low Congruent condition). These N400 effects were maximal over centroparietal sites, as is typical for this ERP.

3. Discussion

This study investigated the electrophysiological signature of the early influence of context on word processing in spoken language comprehension. The phonological and semantic congruency of critical words was manipulated in two types of contexts: lists of words and sentences.

The word list manipulation was designed to maximally separate phonological and semantic constraints. In the word lists, phonologically incongruent list final words elicited an early negative ERP (200–300 ms) that was maximal over occipital leads. In addition, in the 300–600 ms time window, a negative shift was also obtained that was maximal over frontal leads, but now to the phonologically congruent list final words. However, no classic posterior N400 effect (300–600 ms) was found, indicating that the phonological congruence of the critical words did not have a differential effect on the semantic processing of the spoken words.

Interestingly, there was a complete dissociation of the pattern of results for the Category lists from that of the Alliterative lists. For the Category lists, a robust N400 effect was obtained with the classic posterior distribution in the 300–600 ms time window, but there was no early negative effect (200–300 ms). The frontal negativity that was found for phonologically congruent words in the Alliterative lists was also not obtained for the Category lists.

For the sentence stimuli, consistent with findings of Van den Brink et al. (2001, 2006), an early negative ERP was observed in all conditions, but there was only a significant reduction of this ERP to critical words that were High Congruent relative to those that were Incongruent. Final words that had the same onset as the most expected word but that were semantically anomalous (i.e., the Phonologically Congruent condition) did not differ from the congruent final words. The difference between congruent and incongruent final words in this early time window was obtained over central and posterior regions and was marginal (p = .055) in the occipital region. In the later time window (300–600 ms), results were consistent with previous N400 results. The N400 elicited in the High Congruent condition was significantly smaller in amplitude than the N400s obtained in all other sentence conditions. The largest reduction was seen when comparing the High Congruent and Phonologically Congruent and Incongruent conditions. A smaller reduction was seen when comparing the High Congruent condition to the Low Congruent condition.

3.1. Spoken word recognition in lists

Alliterative priming in the word lists resulted in early and late effects on spoken word recognition. The early effect obtained here resembles the ERP effect that was found in previous studies that have primed word onsets (e.g., Praamstra et al., 1994). Praamstra et al. (1994) suggested that this early effect could be due to activation of shared phonological segments in the prime and target pairs, or alternatively to a process whereby a congruency check is performed on the phonological overlap of primes and targets. This latter interpretation is consistent with findings of behavioral studies that only showed alliterative priming when phonological aspects of word processing were strategically important to subjects (e.g., Goldinger et al., 1992). In the present study, even though there was no task that made it strategically important for participants to generate an expectancy of the list final word, the construction of the Alliterative lists could have led participants to expect words that would be phonologically consistent or not with the preceding prime words. Taken together, the presence of an early ERP effect in the Alliterative lists of this study is consistent with the idea that early ERP effects of phonological mismatch can be obtained as a function of priming of word onsets.

The early negativity that was found in the Alliterative list manipulation of the present study is similar in latency to another well-established ERP component, the mismatch negativity (MMN). The MMN has been found in response to tasks that require processing of language sounds, usually vowels. However, we do not believe that the early negativity reported here is the MMN. First, the MMN has been studied extensively and typically has a frontal distribution. Consistent with this distribution, the neuronal source of the MMN has been localized to auditory cortex (Diesch et al., 1998; Näätänen, 2001; Rinne et al., 1999). The early negativity in our study has a focal posterior distribution. This is inconsistent with both the scalp distribution of the MMN and Heschl's gyrus as the sole source of this component. Second, the MMN typically has an earlier onset than the negativities seen in the present study (100–150 ms vs. 200–300 ms). Third, the MMN is not normally elicited in experimental tasks that require natural speech processing or that use whole words as stimuli. Typically, the MMN is elicited by oddball paradigms, which require participants to detect infrequent stimuli. In studies that have elicited the MMN in response to speech, the stimuli are typically vowel sounds that do not match the other stimuli in the stream.

It is also not likely that the early ERP effect that was obtained in the Alliterative lists can be attributed to an artifact inherent to the presentation of lists of words. If this were the case, one would expect the ERP effect to be present in all list conditions. If this ERP effect were due to the violation of a general expectancy, we would have observed the effect to any incongruent stimulus regardless of whether it was presented in an Alliterative or Category list.

The absence of an early occipital ERP effect in the Category lists is difficult to reconcile with the idea that this ERP, obtained to phonological incongruency in the Alliterative lists, is sensitive to the process of lexical selection per se. In the Category lists, the context constrained the semantic category of the final word, but it did not strongly constrain the actual lexical candidate. Therefore, if the early negative shift is indeed sensitive to the influence of context on the lexical selection process, then an early negativity should have occurred for both the contextually appropriate and the contextually inappropriate list final words, possibly reduced in the semantically congruent condition.

Even though there was no early negativity present to the critical words in the Category lists of this study, the waveforms in Fig. 2 do reveal a biphasic morphology in the later time window (300–600 ms). This is especially the case over posterior and central leads, and is more pronounced for the category incongruent condition. This biphasic morphology resembles what has been found in auditory sentence studies (e.g., Connolly and Phillips, 1994; Van den Brink et al., 2001; see also Camblin et al., 2006) and discourse studies (e.g., Van Berkum et al., 2003) and may reflect processes of lexical selection (earlier shift) and semantic matching (later shift). Relative to spoken word recognition in sentences, lexical selection may have been postponed because participants were now engaged in list-like processing, which is likely much slower and more strategic than real-time spoken sentence comprehension. However, it appears that the early and later negative shifts in the 300–600 ms window had similar topographic distributions, and both may well be part of the well-established N400 effect.

For the phonologically congruent final words in the Alliterative lists, a larger frontally distributed negative shift was obtained than for incongruent final words in the 300–600 ms latency window. This is not the typical N400 effect, either in terms of topographic distribution or in terms of the direction of the effect, i.e., a larger negative shift to phonologically congruent words. But this finding might fit with models of spoken word recognition such as the cohort model. As was discussed before, models of spoken language comprehension assume that, as soon as the initial phoneme of the speech input is heard, several lexical items that match this phoneme become activated. As more of the speech signal becomes available, lexical selection processes zoom in on the lexical candidate that actually matches the speech input. At the same time, all lexical candidates that do not fit the input may be inhibited. In the alliterative condition, this might mean that the representation of the final congruent word is inhibited by the repeated previous presentations of the same initial phonemes. If this frontal negativity to the consistent final words in the alliterative lists indeed reflects this form of inhibition, then we would predict that such a finding would not be obtained in rhyming conditions. Previous studies with two-word rhyme pairs have indeed not obtained this frontal negativity (e.g., Praamstra et al., 1994). However, future studies will have to determine the precise nature of this effect.

The presence of an early 200–300 ms ERP effect in the Alliterative list manipulation and the absence of this effect in the Category list manipulation are consistent with an explanation in terms of phonological expectancy. If strong phonological constraints are violated, as in the phonologically incongruent condition of the Alliterative lists, then an early negative shift is obtained. Conversely, when phonological constraints are met (Alliterative congruent final words) or when no phonological expectancy can be generated, as in the congruent and incongruent conditions of the Category lists, no early effect emerged. This effect therefore appears highly sensitive to manipulations of phonological constraints per se and is probably sensitive to phonological mismatch.

3.2. Spoken word recognition in sentences

In the sentence experiment, consistent with previous findings (Connolly and Phillips, 1994; Van den Brink et al., 2001; Van den Brink and Hagoort, 2004; Van Petten et al., 1999), an early negative shift (200–300 ms) was observed in all conditions, but there was only a significant difference in this shift when comparing the Incongruent to the High Congruent condition. This effect was obtained over central and posterior regions but did not reach significance over the occipital region. This is in sharp contrast to the effect of phonological mismatch in the Alliterative lists, which was only significant over electrode leads in the occipital region. The distribution of the early negative ERP effect in the sentence manipulation is however consistent with that of the N400. Therefore, we conclude with Van Petten et al. (1999) and Van den Brink and Hagoort (2004) that this ERP most likely reflects an early onset of the N400 effect. This then indicates that the N400 is sensitive to very early effects of contextual constraint on spoken word comprehension.

This interpretation is not consistent with Connolly and Phillips (1994), who used the same sentence manipulations of phonological and semantic (in)congruency as the present study. They concluded that the early negative shift obtained in their study was separable from the N400 and sensitive to phonological mismatch (the PMN), because its amplitude varied as a function of the congruence of the word-initial phoneme(s) with the contextually most expected word. However, in their 1994 study, ERPs were obtained from only 5 electrodes, so the authors could not determine whether the topographic distribution of the early negative shift differed from that of the N400, something that was clearly established in the present study. More recent studies by Connolly and colleagues (Connolly et al., 2001; D'Arcy et al., 2004) with high-density ERP/MEG and fMRI have modeled a possible left anterior source of the PMN. However, these studies used word priming paradigms instead of sentence contexts, and as has been shown in the present study, these two types of contexts may have differential effects on processing.

In addition, on the basis of a sentence design alone, it is very difficult to determine whether an early negative ERP effect is sensitive to phonological mismatch per se or to initial semantic goodness-of-fit, as has been proposed by Van den Brink et al. (2001, 2006). For example, in the comparison between the Phonologically Congruent and Congruent conditions (“He mailed the letter without a sta…”), the semantic goodness-of-fit between stamp (congruent ending) and stance (phonologically congruent ending) is initially identical. Therefore, the same prediction of a reduction of the early shift would be made on the basis of phonological congruence and on the basis of semantic goodness-of-fit. However, with the results of the Alliterative lists in the present study, it is now possible to determine that the occipital distribution of the phonological mismatch effect does not match the early negative shift in the sentences, and the combined pattern of results is more consistent with the idea that the early shift in the sentence conditions reflects semantic goodness-of-fit.

As in the studies of Van den Brink et al. (2001, 2006) and Van Petten et al. (1999), our findings show that this early negative shift is most likely not distinct from the N400 effect, and this indicates that the N400 is sensitive to early effects of context on lexical selection and integration processes.

3.3. Comparison of spoken word recognition in sentences and lists

Comparing the results obtained in the word list and the sentence contexts, clearly consistent results were obtained with respect to the N400. Both in lists of words and in sentence contexts, the N400 was reduced to words that could be easily matched or integrated with the preceding context. However, separable early effects were obtained in the list and sentence contexts. That is, evidence of the effects of phonological congruence was obtained in the alliterative lists, but not in the sentence conditions. This divergence of results between the list and sentential conditions is likely attributable to differential processing demands across these conditions. In the list conditions, the stimuli were deliberately constructed such that a strong emphasis was placed on phonological processing in the Alliterative lists and on semantic processing in the Category lists. In addition, participants were likely engaged in single word processing that did not facilitate the integration of lexical items into a higher order representation of the context. In contrast, in the sentence manipulation, as in normal sentential processing, the goal of the comprehender is to rapidly integrate individual lexical items into a higher order representation of the preceding context. In fact, many recent ERP studies have found evidence for the idea that processes that operate in the service of creating a coherent higher order representation of the sentence or discourse context have at least as great an effect on the processing of a given word as do more lexically specific processes (e.g., Camblin et al., 2006; Swaab et al., 2004; Ledoux et al., in press(a); Ledoux et al., in press(b); Van Berkum et al., 1999, 2003; Van den Brink et al., 2006). In the present study, in the sentence conditions, the lexical selection process was influenced less by the phonological onsets of the words in the speech input.
In other words, whereas in the list conditions participants may have been driven to process phonological and semantic aspects of the input more separately, these features were immediately integrated in the sentence conditions. It is possible that, under normal circumstances, such as in the sentential conditions, either (1) the processing of phonological aspects of stimuli does not have a central role in comprehension or (2) once initial phonological processing has occurred, other levels of analysis, such as semantic and contextual processing, take on immediate importance. Either of these interpretations would lead to conditions that are not optimal for the expression of a component that is solely sensitive to phonological processing, unlike what is seen in the Alliterative list conditions. We therefore suggest that, during sentential processing, there is no separate early negativity but that the N400 is sensitive to very early influences of context on lexical selection in spoken language comprehension.

In conclusion, a separable ERP component sensitive to phonological processing was elicited under certain specific experimental conditions. However, during normal sentential processing, waveform patterns reflected goodness-of-fit between the semantic features of the lexical candidates activated by the phonological input and the previous semantic context. This may be due to the differential role that phonological processing plays in the activation of lexical candidates when a larger semantic context is absent or present, as in the list and sentence manipulations, respectively. This indicates that, although phonological processing is an integral aspect of spoken language comprehension, semantic and contextual information very rapidly influences the processing of words in meaningful contexts.

4. Experimental procedures

4.1. Subjects

Twenty-four right-handed native English speakers participated in this experiment (17 women; mean age = 21.6 years, range = 19–28). Handedness was determined via an abridged version of the Edinburgh Handedness Inventory (Oldfield, 1971). Subjects participated in this experiment for research credit or payment. Participants had no history of psychiatric or neurological disorders, and they reported normal hearing and normal or corrected-to-normal vision. Informed consent was obtained from all participants.

4.2. Materials

The stimulus materials consisted of lists of words in four conditions and sentences in four conditions (see Tables 1 and 2 for examples). Each condition contained 40 items. Each word list consisted of eight words presented auditorily (ISI = 30 ms). In the Alliterative lists, the first seven words shared, on average, the same two initial phonemes, and the onset of the final word was either Congruent or Incongruent with the onset of the seven words that preceded it. None of the words in any of the Alliterative word lists was semantically or associatively related to any of the other words in that list (this was verified with the Edinburgh Word Association Thesaurus (EAT); http://www.eat.rl.ac.uk/).

In the Category lists, the final word was either semantically Congruent or Incongruent with the semantic category of the seven words that preceded it. Between each word of a list, there were no (strong) associations, and the penultimate word was never associated to the final (critical) word of the list (also verified using the EAT). This was done to maximize conceptual priming and limit spread of activation at the form level alone (e.g., Hagoort et al., 1996). Furthermore, by avoiding associations, there is no clear best completion to these lists, other than that it should be a member of the same semantic category, and many items could fit. This was done to minimize phonological expectancy. Moreover, none of the words in Category lists shared the same initial phoneme with any other word of that Category list, which further minimized phonological expectancy. Information about category membership was taken from previous norms of category membership (Battig and Montague, 1969; Hunt and Hodge, 1971).

Sentences were presented at normal speaking rates, and sentence final words were (1) semantically and phonologically congruent with the most expected ending (High Congruent), (2) semantically incongruent but phonologically congruent (Phonologically Congruent), (3) semantically and phonologically congruent with a less expected ending (Low Congruent) or (4) semantically incongruent and phonologically incongruent (Incongruent). In the Phonologically Congruent condition, phoneme overlap with the most expected congruent ending averaged 2.4 phonemes. The sentences and cloze probability ratings used in the sentence condition were taken from previously published lists of materials (Bloom and Fischler, 1980; Schwanenflugel, 1986). See Table 2 for examples.

All the materials were spoken by an experienced female speaker and recorded in a sound attenuating booth on a DAT recorder (Sony model TCD-D8). Each word in the alliterative and category lists was read separately and in random order and later spliced together with an ISI of 30 ms. The same physical tokens were used for the seven context words that preceded the incongruent or congruent final words in the alliterative lists, and the same physical tokens were used for the congruent and incongruent conditions in the category lists. For the alliterative lists, in cases where there were two possible pronunciations of a word (e.g., address), care was taken to ensure that the particular pronunciation incorporated into the list was identical to the other words of the list. In addition, in almost all cases, we made sure that the stress pattern of the eight words that constituted a list was the same (i.e., the words had initial stress).
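The splicing procedure described above (separately recorded word tokens joined with a fixed 30 ms ISI) can be sketched as follows. This is an illustrative reconstruction, not the authors' actual editing script; the sampling rate, token durations and noise-burst stand-ins for speech are assumptions:

```python
import numpy as np

fs = 44100                        # assumed audio sampling rate (Hz)
isi = np.zeros(fs * 30 // 1000)   # 30 ms of silence, as used between list words

# Hypothetical digitized word tokens: 400 ms noise bursts standing in for speech
rng = np.random.default_rng(0)
words = [rng.normal(0.0, 0.1, fs * 400 // 1000) for _ in range(8)]

# Splice the eight tokens into one list, inserting the fixed ISI between them
parts = []
for i, w in enumerate(words):
    parts.append(w)
    if i < len(words) - 1:
        parts.append(isi)
word_list = np.concatenate(parts)
```

Because the same physical tokens are reused for the context words, any acoustic difference between conditions is confined to the final token.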

The sentences were spoken with normal intonation and at a normal speaking rate. In the anomalous, the low cloze and the phonologically incongruent conditions, no specific voice changes marked the sentence-final word as anomalous. Furthermore, sentences were spoken four times, once for each possible ending type, to eliminate the need for splicing and to preserve coarticulation. No additional artificial pause was inserted prior to the terminal word in any of the conditions. The inter-trial interval (ITI) between items in all list and sentence conditions was 3 s.

Visual and auditory inspection with a speech waveform editing system (Cool Edit Pro) was used to determine the onset of the critical final words. These onset times were later used by the Presentation software (http://www.neurobehavioralsystems.com) to send a code with the continuously recorded EEG for later off-line averaging time-locked to the onset of the critical words.

Word length, frequency, concreteness and duration of all terminal words were controlled. In no case did any of these variables differ significantly across conditions (Word Lists: length, F(3,316)=0.817, p=0.485; frequency, F(3,313)=0.763, p=0.515; concreteness, F(3,313)=8.23, p=0.482; and duration, F(3,316)=1.51, p=0.211; Sentences: length, F(3,636)=0.647, p=0.585; frequency, F(3,636)=0.625, p=0.599; concreteness, F(3,636)=2.02, p=0.11; and duration F(3,636)=1.8, p=0.14). The average cloze probability for High Congruent endings was 0.71. The average cloze probability for Low Congruent endings was 0.048. All the corresponding means can be viewed in Table 11.

Table 11
Mean values for word length (number of characters), lexical frequency, concreteness and duration of all terminal words in list and sentence conditions
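The matching checks reported above are one-way ANOVAs over the terminal words of the conditions. A minimal sketch of such a check, computed by hand on hypothetical word-length data (40 items per condition; the values and group sizes are placeholders, not the study's data), is:

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical word-length values (characters) for four matched conditions
groups = [rng.normal(5.0, 1.2, 40) for _ in range(4)]

k = len(groups)                      # number of conditions
n = sum(len(g) for g in groups)      # total number of items
grand = np.concatenate(groups).mean()

# Partition variability into between- and within-condition sums of squares
ss_between = sum(len(g) * (g.mean() - grand) ** 2 for g in groups)
ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)

df_between, df_within = k - 1, n - k  # here: 3 and 156
F = (ss_between / df_between) / (ss_within / df_within)
```

With properly matched materials, F stays small and the test is non-significant, as in the analyses reported for length, frequency, concreteness and duration.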

4.3. Procedure

Participants were individually tested in a dimly lit, sound attenuating booth, seated in a comfortable chair. They were instructed to fixate on a cross, visible on a computer screen in front of them, and asked to move as little as possible during the experiment. Additionally, participants were instructed to blink only between trials in order to minimize eye blink artifacts in the EEG. The task was to listen attentively to the stimuli. Word lists and sentences were presented in a randomized, counterbalanced, blocked design, so that subjects only heard word lists or sentences within a block. The stimuli were presented via a computer through Sennheiser HD 265 linear closed-ear headphones. Participants were interrupted at random intervals and asked to recall the terminal word of the last stimulus. The purpose of the secondary task was to provide a break for participants and to ensure that they listened for comprehension.

4.4. EEG recording

The electroencephalogram (EEG) was continuously recorded from 59 tin electrodes in a custom elastic cap (Electrocap International Incorporated). Electrode placement was in part based on the 10–20 system with additional electrode sites interspersed to provide an even distribution across the scalp (Jasper, 1958). The position of the electrodes is displayed in Fig. 1. Electrode positions that do not correspond to positions according to the 10–20 system have an additional letter to indicate their position relative to the nearest 10–20 locations (for example, Ozi is inferior to Oz; C1p is posterior to C1; P3a is anterior to P3; T3–5i is inferior to a position midway between T3 and T5). Horizontal eye movements were monitored via a bipolar pair of electrodes placed at the external canthus of each eye. Vertical eye movements were monitored via bipolar pairs of electrodes, placed below each eye and with the two lowest frontal electrodes on the cap (Fp1m and Fp2m in Fig. 1). The right mastoid electrode was used as the recording reference electrode. The left mastoid was also recorded for later off-line algebraic re-referencing.

The EEG signal was amplified with band-pass cutoffs at 0.01 and 30 Hz and digitized on-line at a sampling rate of 250 Hz. EEG was recorded on a computer along with accompanying stimulus codes used for subsequent averaging. Impedances were kept below 5 kΩ.
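As a rough illustration of these recording parameters, the effect of a 0.01–30 Hz band-pass at a 250 Hz sampling rate can be mimicked with a crude FFT-based filter on synthetic data. This stands in for the analog amplifier filter rather than replicating it; the signal components are invented for the demonstration:

```python
import numpy as np

fs = 250.0                       # sampling rate (Hz), as in the recording
t = np.arange(0, 2.0, 1 / fs)    # 2 s of synthetic single-channel "EEG"
# Hypothetical signal: slow drift + 10 Hz alpha-like rhythm + 60 Hz line noise
sig = 2.0 * t + np.sin(2 * np.pi * 10 * t) + 0.5 * np.sin(2 * np.pi * 60 * t)

# Crude FFT-based band-pass (0.01–30 Hz): zero out components outside the band
freqs = np.fft.rfftfreq(sig.size, 1 / fs)
spectrum = np.fft.rfft(sig)
spectrum[(freqs < 0.01) | (freqs > 30.0)] = 0
filtered = np.fft.irfft(spectrum, n=sig.size)
```

The 10 Hz component survives while the DC drift and 60 Hz noise are removed, which is the point of the band-pass before averaging.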

4.5. EEG data analysis

Trials that contained artifacts due to movements, eye blinks, saccades, drifting or amplifier blocking were not included in the analyses. On average, 10% of trials (i.e., 4 trials) were rejected in each condition. Artifact-free trials were averaged and time-locked to the onset of the word, with an averaging epoch of 200 ms pre-stimulus and 1000 ms of post-stimulus activity.
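The averaging procedure (artifact rejection followed by time-locked averaging over a 200 ms pre-stimulus to 1000 ms post-stimulus epoch) can be sketched on synthetic single-channel data. The ±75 µV rejection threshold, the noise level and the onset times below are illustrative assumptions, not the study's actual criteria:

```python
import numpy as np

fs = 250                                  # sampling rate (Hz), as in the recording
pre, post = 0.2, 1.0                      # epoch: 200 ms pre- to 1000 ms post-stimulus
rng = np.random.default_rng(1)

eeg = rng.normal(0, 5, 60 * fs)           # 60 s of synthetic single-channel EEG (µV)
onsets = np.arange(2, 55, 3) * fs         # hypothetical stimulus-onset samples

epochs = []
for onset in onsets:
    ep = eeg[onset - int(pre * fs): onset + int(post * fs)]
    ep = ep - ep[: int(pre * fs)].mean()  # baseline-correct to the pre-stimulus window
    if np.abs(ep).max() < 75:             # reject epochs with extreme values (illustrative)
        epochs.append(ep)

erp = np.mean(epochs, axis=0)             # time-locked average over artifact-free trials
```

Averaging only the artifact-free, baseline-corrected epochs is what isolates the stimulus-locked ERP from background EEG.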

Spherical spline interpolated topographic voltage maps (Perrin et al., 1989) of the difference waves were derived for time windows of interest to provide a qualitative visualization of the scalp distribution of major ERP effects identified in the data.
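Topographic maps of this kind interpolate the voltages measured at the electrode sites onto a continuous scalp grid. The sketch below uses simple inverse-distance weighting as a crude stand-in for the Perrin et al. (1989) spherical spline (which instead fits spline basis functions on the sphere); the electrode positions and voltages are random placeholders:

```python
import numpy as np

rng = np.random.default_rng(3)
# Hypothetical (x, y) electrode positions on a 2-D head schematic and their
# mean difference-wave voltages (µV) in a time window of interest
electrodes = rng.uniform(-1, 1, (59, 2))
voltages = rng.normal(0, 2, 59)

# Inverse-distance weighting: each grid point is a weighted average of the
# electrode voltages, with weights falling off with squared distance
grid = np.mgrid[-1:1:50j, -1:1:50j].reshape(2, -1).T
d = np.linalg.norm(grid[:, None, :] - electrodes[None, :, :], axis=2)
w = 1.0 / np.maximum(d, 1e-6) ** 2
topomap = (w @ voltages / w.sum(axis=1)).reshape(50, 50)
```

Unlike the spherical spline, this weighted average cannot extrapolate beyond the measured voltage range, but it conveys the qualitative mapping from discrete electrodes to a scalp map.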

Acknowledgments

We would like to thank Marty Woldorff and Gregory McCarthy for their support and advice and Ron Mangun, C. Christine Camblin, Kerry Ledoux and two anonymous reviewers for their very helpful comments on this manuscript. The research reported in this paper was supported by an NSF fellowship to Michele Diaz. TYS is supported by NIMH R01 MH066271.

Footnotes

1Of relevance in the comparison between these three ERP studies of spoken word processing in sentences is the cloze probability of the critical words in the sentences. In the study by Connolly and Phillips (1994), the stimuli were adapted from Bloom and Fischler (1980), but cloze values were not reported. In Van den Brink et al. (2001), the cloze probability for the congruent endings was 84%, and in the Van Petten et al. (1999) study, it was 58%.

2Terminal word expectation was determined via cloze ratings.

REFERENCES

  • Battig WF, Montague WE. Category norms for verbal items in 56 categories: a replication and extension of the Connecticut category norms. J. Exp. Psychol. Monogr. 1969;80:1–44.
  • Bentin S, McCarthy G, Wood CC. Event-related potentials, lexical decision and semantic priming. Electroencephalogr. Clin. Neurophysiol. 1985;60:343–355.
  • Bloom PA, Fischler I. Completion norms for 329 sentence contexts. Mem. Cogn. 1980;8:631–642.
  • Brown CM, Hagoort P. The processing nature of the N400: evidence from masked priming. J. Cogn. Neurosci. 1993;5(1):34–44.
  • Camblin CC, Ledoux K, Boudewyn MA, Gordon PC, Swaab TY. Processing new and repeated names: effects of coreference on repetition priming with speech and fast RSVP. Brain Res. 2006 (this issue). doi:10.1016/j.brainres.2006.07.033.
  • Connolly JF, Phillips NA. Event-related potential components reflect phonological and semantic processing of the terminal words of spoken sentences. J. Cogn. Neurosci. 1994;6(3):256–266.
  • Connolly JF, Service E, D'Arcy RC, Kujala A, Alho K. Phonological aspects of word recognition as revealed by high-resolution spatio-temporal brain mapping. NeuroReport. 2001;12(2):237–243.
  • Cutler A, van Ooijen B, Norris D. Vowels, consonants and lexical activation. In: Ohala JJ, Hasegawa Y, Ohala M, Granville D, Bailey AC, editors. Proceedings of the 14th International Congress of Phonetic Sciences. Vol. 3. University of California; Berkeley: 1999. pp. 2053–2056.
  • D'Arcy RCN, Ryner L, Richter W, Service E, Connolly JF. The fan effect in fMRI: left hemisphere specialization in verbal working memory. NeuroReport. 2004;15(12):1851–1855.
  • Diesch E, Biermann S, Luce T. The magnetic mismatch field elicited by words and phonological non-words. NeuroReport. 1998;9:455–460.
  • Donnenwerth-Nolan S, Tanenhaus MK, Seidenberg MS. Multiple code activation in word recognition: evidence from rhyme monitoring. J. Exp. Psychol. Hum. Learn. Mem. 1981;7:170–180.
  • Dufour S, Peereman R. Lexical competition in phonological priming: assessing the role of phonological match and mismatch lengths between primes and targets. Mem. Cogn. 2003;31(8):1271–1283.
  • Emmorey KD. Auditory morphological priming in the lexicon. Lang. Cogn. Processes. 1989;4:73–92.
  • Federmeier KD, Wlotko E, De Ochoa Dewald E, Kutas M. Multiple effects of sentential constraint on word processing. Brain Res. 2006 (this issue). doi:10.1016/j.brainres.2006.06.101.
  • Friederici AD, Gunter TC, Hahne A, Mauth K. The relative timing of syntactic and semantic processes in sentence comprehension. NeuroReport. 2004;15:165–169.
  • Friedrich CK, Kotz SA, Friederici AD, Gunter TC. ERPs reflect lexical identification in word fragment priming. J. Cogn. Neurosci. 2004;16:541–552.
  • Gaskell MG, Marslen-Wilson WD. Lexical ambiguity resolution and spoken word recognition: bridging the gap. J. Mem. Lang. 2001;44:325–349.
  • Goldinger SD, Luce PA, Pisoni DB. Priming lexical neighbors of spoken words: effects of competition and inhibition. J. Mem. Lang. 1989;28:501–518.
  • Goldinger SD, Luce PA, Pisoni DB, Marcario JK. Form-based priming in spoken word recognition: the roles of competition and bias. J. Exp. Psychol. Learn. Mem. Cogn. 1992;18(6):1211–1238.
  • Gow DW, Gordon PC. Lexical and prelexical influences on word segmentation: evidence from priming. J. Exp. Psychol. Hum. Percept. Perform. 1995;21(2):344–359.
  • Hagoort P, Brown CM. ERP effects of listening to speech: semantic ERP effects. Neuropsychologia. 2000;38:1518–1530.
  • Hagoort P, Brown CM, Swaab TY. Lexical–semantic event-related potential effects in patients with left hemisphere lesions and aphasia, and patients with right hemisphere lesions without aphasia. Brain. 1996;119:627–649.
  • Holcomb PJ. Automatic and attentional processing: an event-related brain potential analysis of semantic priming. Brain Lang. 1988;35(1):66–85.
  • Holcomb PJ. Semantic priming and stimulus degradation: implications for the role of the N400 in language processing. Psychophysiology. 1993;30:47–61.
  • Holcomb PJ, Neville HJ. Natural speech processing: an analysis using event-related brain potentials. Psychobiology. 1991;19(4):286–300.
  • Hunt KP, Hodge MH. Category-item frequency and category name meaningfulness (m′): taxonomic norms for 84 categories. Psychon. Monogr. Suppl. 1971;4:97–121.
  • Jasper HH. The ten–twenty electrode system of the international federation. Electroencephalogr. Clin. Neurophysiol. 1958;10:371–375.
  • Kutas M, Federmeier KD. Electrophysiology reveals semantic memory use in language comprehension. Trends Cogn. Sci. 2000;4(12):463–470. [PubMed]
  • Kutas M, Hillyard SA. Making sense of senseless sentences: brain potentials reflect semantic incongruity. Science. 1980;207:203–205. [PubMed]
  • Kutas M, Hillyard SA. The lateral distribution of event-related potentials during sentence processing. Neuropsychologia. 1982;20(5):579–590. [PubMed]
  • Kutas M, Neville HJ, Holcomb PJ. A preliminary comparison of the N400 response to semantic anomalies during reading, listening and signing. Electroencephalogr. Clin. Neurophysiol., Suppl. 1987;(39):325–330. [PubMed]
  • Ledoux K, Gordon PC, Camblin CC, Swaab TY. Coreference and lexical repetition: neural mechanisms of discourse integration. Memory and Language. in press(a) [PMC free article] [PubMed]
  • Ledoux K, Camblin CC, Swaab TY, Gordon PC. Reading words in discourse: the modulation of intralexical priming effects by message-level context. Behavioral and Cognitive Neuroscience Reviews. in press(b) [PMC free article] [PubMed]
  • Marslen-Wilson WD. Functional parallelism in spoken word-recognition. Cognition. 1987;25:71–102. [PubMed]
  • Marslen-Wilson WD. Access and integration: projecting sound onto meaning. In: Marslen-Wilson WD, editor. Lexical Representation and Process. MIT Press; Cambridge, MA: 1989. pp. 3–24.
  • Marslen-Wilson WD, Welsh A. Processing interactions and lexical access during word-recognition in continuous speech. Cogn. Psychol. 1978;10:29–63.
  • McClelland JL, Elman J. The TRACE model of speech perception. Cogn. Psychol. 1986;18:1–86. [PubMed]
  • McQueen JM, Norris D, Cutler A. Competition in spoken word recognition: spotting words in other words. J. Exper. Psychol., Learn., Mem., Cogn. 1994;20(3):621–638.
  • Monsell S, Hirsh KW. Competitor priming in spoken word recognition. J. Exper. Psychol., Learn., Mem., Cogn. 1998;24(6):1495–1520. [PubMed]
  • Naatanen R. The perception of speech sounds by the human brain as reflected by the mismatch negativity (MMN) and its magnetic equivalent (MMNm) Psychophysiology. 2001;38:1–21. [PubMed]
  • Neville HJ, Coffey SA, Lawson D, Fischer A, Emmorey K, Bellugi U. Neural systems mediating American sign language: effects of sensory experience and age of acquisition. Brain Lang. 1997;57:285–308. [PubMed]
  • Norris D. Shortlist: a connectionist model of continuous speech recognition. Cognition. 1994;52:189–234.
  • Norris D, McQueen JM, Cutler A. Competition and segmentation in spoken-word recognition. J. Exper. Psychol., Learn., Mem., Cogn. 1995;21(5):1209–1228. [PubMed]
  • Norris D, McQueen JM, Cutler A. Merging information in speech recognition: feedback is never necessary. Behav. Brain Sci. 2000;23:299–370. [PubMed]
  • Norris D, McQueen JM, Cutler A. Bias effects in facilitatory phonological priming. Mem. Cogn. 2002;30:399–411. [PubMed]
  • Oldfield RC. The assessment and analysis of handedness: the Edinburgh inventory. Neuropsychologia. 1971;9:97–113. [PubMed]
  • O'Rourke TB, Holcomb PJ. Electrophysiological evidence for the efficiency of spoken word processing. Biol. Psychol. 2002;60(2–3):121–150. [PubMed]
  • Perrin F, Pernier J, Bertrand O, Echallier J. Spherical splines for scalp potential and current density mapping. Electroencephalogr. Clin. Neurophysiol. 1989;72:184–187. [PubMed]
  • Praamstra P, Meyer AS, Levelt WJM. Neurophysiological manifestations of phonological processing: latency variations of a negative ERP component timelocked to phonological mismatch. J. Cogn. Neurosci. 1994;6(3):204–219. [PubMed]
  • Radeau M, Morais J, Segui J. Phonological priming between monosyllabic spoken words. J. Exp. Psychol. Hum. Percept. Perform. 1995;21(6):1297–1311.
  • Radeau M, Besson M, Fontaneau E, Castro SL. Semantic, repetition, and rime priming between spoken words: behavioral and electrophysiological evidence. Biol. Psychol. 1998;48:183–204. [PubMed]
  • Rinne T, Alho K, Alku P, Holi M, Sinkkonen J, Virtanen J, et al. Analysis of speech sounds is left-hemisphere predominant at 100–150 ms after sound onset. NeuroReport. 1999;10:1113–1117. [PubMed]
  • Rugg MD. The effects of semantic priming and word repetition on event-related potentials. Psychophysiology. 1985;22(6):642–647. [PubMed]
  • Rugg MD. Dissociation of semantic priming, word and non-word repetition effects by event-related potentials. Q. J. Exp. Psychol., A Human Exp. Psychol. 1987;39(1A):123–148.
  • Rugg MD, Coles MGH. The ERP and cognitive psychology: conceptual issues. In: Rugg MD, Coles MGH, editors. Electrophysiology of Mind: Event-related Brain Potentials and Cognition. Oxford Univ. Press; New York: 1995. pp. 27–39.
  • Schwanenflugel PJ. Completion norms for final words of sentences using a multiple production measure. Behav. Res. Meth. Instrum. Comput. 1986;18:363–371.
  • Shulman H, Hornak R, Sanders M. The effects of graphemic, phonetic, and semantic relationships on access to lexical structures. Mem. Cogn. 1978;6:115–123.
  • Slowiaczek LM, Hamburger M. Prelexical facilitation and lexical interference in auditory word recognition. J. Exper. Psychol., Learn., Mem., Cogn. 1992;18(6):1239–1250. [PubMed]
  • Slowiaczek LM, McQueen JM, Soltano EG, Lynch M. Phonological representations in prelexical speech processing: evidence from form-based priming. J. Mem. Lang. 2000;43:530–560.
  • Swaab TY, Camblin CC, Gordon PC. Reversed lexical repetition effects in language processing. J. Cogn. Neurosci. 2004;16:715–726. [PubMed]
  • Van Berkum JJA, Hagoort P, Brown CM. Semantic integration in sentences and discourse: evidence from the N400. J. Cogn. Neurosci. 1999;11(6):657–671. [PubMed]
  • Van Berkum JJA, Brown CM, Hagoort P, Zwitserlood P. Event-related brain potentials reflect discourse-referential ambiguity in spoken-language comprehension. Psychophysiology. 2003;40:235–248. [PubMed]
  • Van den Brink D, Hagoort P. Influence of semantic and syntactic context constraints on lexical selection and integration in spoken-word comprehension as revealed by ERPs. J. Cogn. Neurosci. 2004;16(6):1068–1084. [PubMed]
  • Van den Brink D, Brown CM, Hagoort P. Electrophysiological evidence for early contextual influences during spoken-word recognition: N200 vs. N400 effects. J. Cogn. Neurosci. 2001;13(7):967–985. [PubMed]
  • Van den Brink D, Brown CM, Hagoort P. The cascaded nature of lexical selection and integration in auditory sentence processing. J. Exper. Psychol., Learn., Mem., Cogn. 2006;32:364–372. [PubMed]
  • Van Petten C, Coulson S, Rubin S, Plante E, Parks M. Time course of word identification and semantic integration in spoken language. J. Exper. Psychol., Learn., Mem., Cogn. 1999;25:394–417. [PubMed]
  • Zwitserlood P. The locus of the effects of sentential semantic context in spoken-word processing. Cognition. 1989;32:25–64. [PubMed]
