NCBI Bookshelf. A service of the National Library of Medicine, National Institutes of Health.

Murray MM, Wallace MT, editors. The Neural Bases of Multisensory Processes. Boca Raton (FL): CRC Press/Taylor & Francis; 2012.


Chapter 26. Cross-Modal Spatial Cueing of Attention Influences Visual Perception

26.1. SPATIAL ATTENTION: MODALITY-SPECIFIC OR SUPRAMODAL?

It has long been known that “looking out of the corner of one’s eye” can influence the processing of objects in the visual field. One of the first experimental demonstrations of this effect came from Hermann von Helmholtz, who, at the end of the nineteenth century, demonstrated that he could identify letters in a small region of a briefly illuminated display if he directed his attention covertly (i.e., without moving his eyes) toward that region in advance (Helmholtz 1866). Psychologists began to study this effect systematically in the 1970s using the spatial-cueing paradigm (Eriksen and Hoffman 1972; Posner 1978). Across a variety of speeded response tasks, orienting attention to a particular location in space was found to facilitate responses to visual targets that appeared at the cued location. Benefits in speeded visual performance were observed when attention was oriented voluntarily (endogenously, in a goal-driven manner) in response to a spatially predictive symbolic visual cue or involuntarily (exogenously, in a stimulus-driven manner) in response to a spatially nonpredictive peripheral visual cue such as a flash of light. For many years, the covert orienting of attention in visual space was seen as a special case, because initial attempts to find similar spatial cueing effects in the auditory modality did not succeed (e.g., Posner 1978). Likewise, in several early cross-modal cueing studies, voluntary and involuntary shifts of attention in response to visual cues were found to have no effect on the detection of subsequent auditory targets (for review, see Spence and McDonald 2004). Consequently, during the 1970s and 1980s (and to a lesser extent 1990s), the prevailing view was that location-based attentional selection was a modality-specific and predominantly visual process.

Early neurophysiological and neuropsychological studies painted a different picture about the modality specificity of spatial attention. On the neurophysiological front, Hillyard and colleagues (1984) showed that sustaining attention at a predesignated location to the left or right of fixation modulates the event-related potentials (ERPs) elicited by stimuli in both task-relevant and task-irrelevant modalities. Visual stimuli presented at the attended location elicited an enlarged negative ERP component over the anterior scalp 170 ms after stimulus onset, both when visual stimuli were relevant and when they were irrelevant. Similarly, auditory stimuli presented at the attended location elicited an enlarged negativity over the anterior scalp beginning 140 ms after stimulus onset, both when auditory stimuli were relevant and when they were irrelevant. Follow-up studies confirmed that spatial attention influences ERP components elicited by stimuli in an irrelevant modality when attention is sustained at a prespecified location over several minutes (Teder-Sälejärvi et al. 1999) or is cued on a trial-by-trial basis (Eimer and Schröger 1998). The results from these ERP studies indicate that spatial attention is not an entirely modality-specific process.

On the neuropsychological front, Farah and colleagues (1989) showed that unilateral damage to the parietal lobe impairs reaction time (RT) performance in a spatial cueing task involving spatially nonpredictive auditory cues. Prior visual-cueing studies had shown that patients with damage to the right parietal lobe were substantially slower to detect visual targets appearing in the left visual field following a peripheral visual cue to the right visual field (invalid trials) than when attention was cued to the left (valid trials) or was cued to neither side (neutral trials) (Posner et al. 1982, 1984). This location-specific RT deficit was attributed to an impairment in the disengagement of attention, mainly because the patients appeared to have no difficulty in shifting attention to the contralesional field following a valid cue or neutral cue. In Farah et al.’s study, similar impairments in detecting contralesional visual targets were observed following either invalid auditory or visual cues presented to the ipsilesional side. On the basis of these results, Farah and colleagues concluded that sounds and lights automatically engage the same supramodal spatial attention mechanism.

Given the neurophysiological and neuropsychological evidence in favor of a supramodal (or at least partially shared) spatial attention mechanism, why did several early behavioral studies appear to support the modality-specific view of spatial attention? These initial difficulties in showing spatial attention effects outside of the visual modality may be attributed largely to methodological factors, because some of the experimental designs that had been used successfully to study visual spatial attention were not ideal for studying auditory spatial attention. In particular, because sounds can be rapidly detected based on spectrotemporal features that are independent of a sound’s spatial location, simple detection measures that had shown spatial specificity in visual cueing tasks did not always work well for studying spatial attention within audition (e.g., Posner 1978). As researchers began to realize that auditory spatial attention effects might be contingent on the degree to which sound location is processed (Rhodes 1987), new spatial discrimination tasks were developed to ensure the use of spatial representations (McDonald and Ward 1999; Spence and Driver 1994). With these new tasks, researchers were able to document spatial cueing effects using all the various combinations of visual, auditory, and tactile cue and target stimuli. As reviewed elsewhere (e.g., Driver and Spence 2004), voluntary spatial cueing studies had begun to reveal a consistent picture by the mid 1990s: voluntarily orienting attention to a location facilitated the processing of subsequent targets regardless of the cue and target modalities.

The picture that emerged from involuntary spatial cueing studies remained less clear because some of the spatial discrimination tasks that were developed failed to reveal cross-modal cueing effects (for detailed reviews of methodological issues, see Spence and McDonald 2004; Wright and Ward 2008). For example, using an elevation-discrimination task, Spence and Driver found an asymmetry in the involuntary spatial cueing effects between visual and auditory stimuli (Spence and Driver 1997). In their studies, spatially nonpredictive auditory cues facilitated responses to visual targets, but spatially nonpredictive visual cues failed to influence responses to auditory targets. For some time the absence of a visual–auditory cue effect weighed heavily on models of involuntary spatial attention. In particular, it was taken as evidence against a single supramodal attention system that mediated involuntary deployments of attention in multisensory space. However, researchers began to suspect that Spence and Driver’s (1997) missing audiovisual cue effect stemmed from the large spatial separation between cue and target, which existed even on validly (ipsilaterally) cued trials, and the different levels of precision with which auditory and visual stimuli can be localized. Specifically, it was hypothesized that visual cues triggered shifts of attention that were focused too narrowly around the cued location to affect processing of a distant auditory target (Ward et al. 2000). Data from a recent study confirmed this narrow-focus explanation for the last remaining “missing link” in cross-modal spatial attention (Prime et al. 2008). Visual cues were found to facilitate responses to auditory targets that were presented at the cued location but not auditory targets that were presented 14° above or below the cued location (see also McDonald et al. 2001).

The bulk of the evidence to date indicates that orienting attention involuntarily or voluntarily to a specific location in space can facilitate responding to subsequent targets, regardless of the modality of the cue and target stimuli. In principle, such cross-modal cue effects might reflect the consequences of a supramodal attention-control system that alters the perceptual representations of objects in different modalities (Farah et al. 1989). However, the majority of behavioral studies to date have examined the effects of spatial cues on RT performance, which is at best a very indirect measure of perceptual experience (Luce 1986; Watt 1991). Indeed, measures of response speed are inherently ambiguous in that RTs reflect the cumulative output of multiple stages of processing, including low-level sensory and intermediate perceptual stages, as well as later stages involved in making decisions and executing actions. In theory, spatial cueing could influence processing at any one of these stages. There is some evidence that the appearance of a spatial cue can alter an observer’s willingness to respond and reduce the uncertainty of his or her decisions without affecting perception (Shiu and Pashler 1994; Sperling and Dosher 1986). Other evidence suggests that whereas voluntary shifts of attention can affect perceptual processing, involuntary shifts of attention may not (Prinzmetal et al. 2005).

In this chapter, we review studies that have extended the RT-based chronometric investigation of cross-modal spatial attention by utilizing psychophysical measures that better isolate perceptual-level processes. In addition, neurophysiological and neuroimaging methods have been combined with these psychophysical approaches to identify changes in neural activity that might underlie the cross-modal consequences of spatial attention on perception. These methods have also examined neural activity within the cue–target interval that might reflect supramodal (or modality specific) control of spatial attention and subsequent anticipatory biasing of activity within sensory regions of the cortex.

26.2. INVOLUNTARY CROSS-MODAL SPATIAL ATTENTION ENHANCES PERCEPTUAL SENSITIVITY

The issue of whether attention affects perceptual or post-perceptual processing of external stimuli has been vigorously debated since the earliest dichotic listening experiments revealed that selective listening influenced auditory performance (Broadbent 1958; Cherry 1953; Deutsch and Deutsch 1963; Treisman and Geffen 1967). In the context of visual–spatial cueing experiments, the debate has focused on two general classes of mechanisms by which attention might influence visual performance (see Carrasco 2006; Lu and Dosher 1998; Luck et al. 1994, 1996; Smith and Ratcliff 2009; Prinzmetal et al. 2005). On one hand, attention might lead to a higher signal-to-noise ratio for stimuli at attended locations by enhancing their perceptual representations. On the other hand, attention might reduce the decision-level or response-level uncertainty without affecting perceptual processing. For example, spatial cueing might bias decisions about which location contains relevant stimulus information (the presumed signal) in favor of the cued location, thereby promoting a strategy to exclude stimulus information arising from uncued locations (the presumed noise; e.g., Shaw 1982, 1984; Shiu and Pashler 1994; Sperling and Dosher 1986). Such noise-reduction explanations account for the usual cueing effects (e.g., RT costs and benefits) without making assumptions about limited perceptual capacity.

Several methods have been developed to discourage decision-level mechanisms so that any observable cue effect can be ascribed more convincingly to attentional selection at perceptual stages of processing. One such method was used to investigate whether orienting attention involuntarily to a sudden sound influences perceptual-level processing of subsequent visual targets (McDonald et al. 2000). The design was adapted from earlier visual-cueing studies that eliminated location uncertainty by presenting a mask at a single location and requiring observers to indicate whether they saw a target at the masked location (Luck et al. 1994, 1996; see also Smith 2000). The mask serves a dual purpose in this paradigm: to ensure that the location of the target (if present) is known with complete certainty and to backwardly mask the target so as to limit the accrual and persistence of stimulus information at the relevant location. Under such conditions, it is possible to use methods of signal detection theory to obtain a measure of an observer’s perceptual sensitivity (d′)—the ability to discern a sensory event from background noise—that is independent of the observer’s decision strategy (which, in signal detection theory, is characterized by the response criterion, β; see Green and Swets 1966).
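
The sensitivity and criterion measures described above can be made concrete with a short signal detection calculation. The sketch below is a generic Python illustration, not code or data from the studies reviewed: the trial counts are invented, and the criterion is expressed as the distance measure c rather than the likelihood ratio β mentioned in the text.

```python
# Hedged sketch: computing perceptual sensitivity (d') and a response
# criterion from hit/false-alarm counts in a yes/no detection task, in
# the standard signal detection framework (Green and Swets 1966).
# All counts below are hypothetical, for illustration only.
from statistics import NormalDist

def dprime_and_criterion(hits, misses, false_alarms, correct_rejections):
    """Return (d', criterion c), with a log-linear correction so that
    rates of exactly 0 or 1 do not produce infinite z-scores."""
    hit_rate = (hits + 0.5) / (hits + misses + 1.0)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    z = NormalDist().inv_cdf
    d_prime = z(hit_rate) - z(fa_rate)
    criterion = -0.5 * (z(hit_rate) + z(fa_rate))
    return d_prime, criterion

# Hypothetical validly cued vs. invalidly cued conditions: same
# false-alarm rate, different hit rates, hence different sensitivity.
valid = dprime_and_criterion(hits=78, misses=22,
                             false_alarms=10, correct_rejections=90)
invalid = dprime_and_criterion(hits=62, misses=38,
                               false_alarms=10, correct_rejections=90)
print(valid, invalid)  # higher d' in the validly cued condition
```

Because the false-alarm rate is matched across conditions in this toy example, the difference shows up in d′ rather than in the criterion, which is the signature of a perceptual-level rather than decision-level effect.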

Consistent with a perceptual-level explanation, McDonald and colleagues (2000) found that perceptual sensitivity was higher when the visual target appeared at the location of the auditory cue than when it appeared on the opposite side of fixation (Figure 26.1a and b). This effect was ascribed to an involuntary shift of attention to the cued location because the sound provided no information about the location of the impending target. Also, because there was no uncertainty about the target location, the effect could not be attributed to a reduction in location uncertainty. Consequently, the results provided strong evidence that shifting attention involuntarily to the location of a sound actually improves the perceptual quality of a subsequent visual event appearing at that location (see also Dufour 1999). An analogous effect on perceptual sensitivity has been reported in the converse audiovisual combination, when spatially nonpredictive visual cues were used to orient attention involuntarily before the onset of an 800-Hz target embedded in a white-noise mask (Soto-Faraco et al. 2002). Together, these results support the view that sounds and lights engage a common supramodal spatial attention system, which then modulates perceptual processing of relevant stimuli at the cued location (Farah et al. 1989).

FIGURE 26.1

Results from McDonald et al.’s (2000, 2003) signal detection experiments. (a) Schematic illustration of stimulus events on a valid-cue trial. Small light displays were fixed to bottoms of two loudspeaker cones, one situated to the left and right of (more...)

To investigate the neural processes by which orienting spatial attention to a sudden sound influences processing of a subsequent visual stimulus, McDonald and colleagues (2003) recorded ERPs in the signal-detection paradigm outlined above. ERPs to visual stimuli appearing at validly and invalidly cued locations began to diverge from one another at about 100 ms after stimulus onset, with the earliest phase of this difference being distributed over the midline central scalp (Figure 26.1c and d). After about 30–40 ms, this ERP difference between validly and invalidly cued visual stimuli shifted to midline parietal and lateral occipital scalp regions. A dipole source analysis indicated that the initial phase of this difference was generated in or near the multisensory region of the superior temporal sulcus (STS), whereas the later phase was generated in or near the fusiform gyrus of the occipital lobe (Figure 26.1e). This pattern of results suggests that enhanced visual perception produced by the cross-modal orienting of spatial attention may depend on feedback connections from the multisensory STS to the ventral stream of visual cortical areas. Similar cross-modal cue effects were observed when participants made speeded responses to the visual targets, but the earliest effect was delayed by 100 ms (McDonald and Ward 2000). This is in line with behavioral data suggesting that attentional selection might take place earlier when target detection accuracy (or fine perceptual discrimination; see subsequent sections) is emphasized than when speed of responding is emphasized (Prinzmetal et al. 2005).

26.3. INVOLUNTARY CROSS-MODAL SPATIAL ATTENTION MODULATES TIME-ORDER PERCEPTION

The findings reviewed in the previous section provide compelling evidence that cross-modal attention influences the perceptual quality of visual stimuli. In the context of a spatial cueing experiment, perceptual enhancement at an early stage of processing could facilitate decision and response processing at later stages, thereby leading to faster responses for validly cued objects than for invalidly cued objects. Theoretically, however, changes in the timing of perceptual processing could also contribute to the cue effects on RT performance: an observer might become consciously aware of a target earlier in time when it appears at a cued location than when it appears at an uncued location. In fact, the idea that attention influences the timing of our perceptions is an old and controversial one. More than 100 years ago, Titchener (1908) asserted that when confronted with multiple objects, an observer becomes consciously aware of an attended object before other unattended objects. Titchener called the hypothesized temporal advantage for attended objects the law of prior entry.

Observations from laboratory experiments in the nineteenth and early twentieth centuries were interpreted along the lines of attention-induced prior entry. In one classical paradigm known as the complication experiment, observers were required to indicate the position of a moving pointer at the moment a sound was presented (e.g., Stevens 1904; Wundt 1874; for a review, see Boring 1929). When listening in anticipation for the auditory stimulus, observers typically indicated that the sound appeared when the pointer was at an earlier point along its trajectory than was actually the case. For example, observers might report that a sound appeared when a pointer was at position 4 even though the sound actually appeared when the pointer was at position 5. Early on, it was believed that paying attention to the auditory modality facilitated sound perception and led to a relative delay of visual perception, so that the pointer’s perceived position lagged behind its actual position. However, this explanation fell out of favor when later results indicated that a specific judgment strategy, rather than attention-induced prior entry, might be responsible for the mislocalization error (e.g., Cairney 1975).

In more recent years, attention-induced prior entry has been tested experimentally in visual temporal-order judgment (TOJ) tasks that require observers to indicate which of two rapidly presented visual stimuli appeared first. When the attended and unattended stimuli appear simultaneously, observers typically report that the attended stimulus appeared to onset before the unattended stimulus (Stelmach and Herdman 1991; Shore et al. 2001). Moreover, in line with the supramodal view of spatial attention, such changes in temporal perception have been found when shifts in spatial attention were triggered by spatially nonpredictive auditory and tactile cues as well as visual cues (Shimojo et al. 1997).

Despite the intriguing behavioral results from TOJ experiments, the controversy over attention-induced prior entry has continued. The main problem harks back to the debate over the complication experiments: an observer’s judgment strategy might contribute to the tendency to report the cued target as appearing first (Pashler 1998; Schneider and Bavelier 2003; Shore et al. 2001). Thus, in a standard TOJ task, observers might perceive two targets to appear simultaneously but still report seeing the target on the cued side first because of a decision rule that favors the cued target (e.g., when in doubt, select the cued target). Simple response biases (e.g., stimulus–response compatibility effects) can be avoided quite easily by altering the task (McDonald et al. 2005; Shore et al. 2001), but it is difficult to completely avoid the potential for response bias.

As noted previously, ERP recordings can be used to distinguish between changes in high-level decision and response processes and changes in perceptual processing that could underlie entry to conscious awareness. An immediate challenge to this line of research is to specify the ways in which the perceived timing of external events might be associated with activity in the brain. Philosopher Daniel Dennett expounded two alternatives (Dennett 1991). On one hand, the perceived timing of external events may be derived from the timing of neural activities in relevant brain circuits. For example, the perceived temporal order of external events might be based on the timing of early cortical evoked potentials. On the other hand, the brain might not represent the timing of perceptual events with time itself. In Dennett’s terminology, the represented time (e.g., A before B) is not necessarily related to the time of the representing (e.g., representing of A does not necessarily precede representing of B). Consequently, the perceived temporal order of external events might be based on nontemporal aspects of neural activities in relevant brain circuits.

McDonald et al. (2005) investigated the effect of cross-modal spatial attention on visual time-order perception using ERPs to track the timing of cortical activity in a TOJ experiment. A spatially nonpredictive auditory cue was presented to the left or right side of fixation just before the occurrence of a pair of simultaneous or nearly simultaneous visual targets (Figure 26.2a). One of the visual targets was presented at the cued location, whereas the other was presented at the homologous location in the opposite visual hemifield. Consistent with previous behavioral studies, the auditory spatial cue had a considerable effect on visual TOJs (Figure 26.2b). Participants judged the cued target as appearing first on 79% of all simultaneous-target trials. To nullify this cross-modal cueing effect, the uncued target had to be presented nearly 70 ms before the cued target.
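
The roughly 70 ms nullification point is a point of subjective simultaneity estimated from a psychometric function. As a hedged illustration of how such an estimate is obtained, the sketch below fits a logistic function to fabricated temporal-order-judgment proportions, chosen only to mimic the qualitative pattern reported (0.79 "cued-first" responses at a 0 ms SOA, with the uncued target needing a head start of tens of milliseconds); the SOAs, trial counts, and grid-search fitting routine are all assumptions for illustration.

```python
# Hedged sketch: estimating the PSS from temporal-order-judgment data
# by maximum likelihood. Negative SOA = uncued target presented first.
# Data are fabricated; they are not McDonald et al.'s (2005) results.
import math

soas = [-120, -90, -60, -30, 0, 30, 60]        # ms
p_cued_first = [0.15, 0.30, 0.45, 0.62, 0.79, 0.90, 0.96]
n_trials = 40                                   # trials per SOA (hypothetical)

def neg_log_likelihood(pss, slope):
    """Binomial negative log-likelihood of a logistic psychometric
    function; the PSS is the SOA where p('cued first') = 0.5."""
    nll = 0.0
    for soa, p_obs in zip(soas, p_cued_first):
        p = 1.0 / (1.0 + math.exp(-(soa - pss) / slope))
        p = min(max(p, 1e-6), 1 - 1e-6)
        k = round(p_obs * n_trials)
        nll -= k * math.log(p) + (n_trials - k) * math.log(1 - p)
    return nll

# Crude grid search in place of a proper optimizer, to stay stdlib-only.
best = min((neg_log_likelihood(pss, slope), pss, slope)
           for pss in range(-100, 101)
           for slope in range(5, 120, 5))
print(f"estimated PSS: {best[1]} ms")
```

With these fabricated proportions the estimated PSS comes out clearly negative, i.e., the uncued target must lead by tens of milliseconds before the two targets are judged "first" equally often, which is the structure of the prior-entry effect described above.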

FIGURE 26.2

Results from McDonald et al.’s (2005) temporal-order-judgment experiment. (a) Schematic illustration of events on a simultaneous-target trial (top) and nonsimultaneous target trials (bottom). Participants indicated whether a red or a green target (more...)

To elucidate the neural basis of this prior-entry effect, McDonald and colleagues (2005) examined the ERPs elicited by simultaneously presented visual targets following the auditory cue. The analytical approach taken was premised on the lateralized organization of the visual system and the pattern of ERP effects that have been observed under conditions of bilateral visual stimulation. Several previous studies on visual attention showed that directing attention to one side of a bilateral visual display results in a lateralized asymmetry of the early ERP components measured over the occipital scalp, with an increased positivity at electrode sites contralateral to the attended location beginning in the time range of the occipital P1 component (80–140 ms; Heinze et al. 1990, 1994; Luck et al. 1990; see also Fukuda and Vogel 2009). McDonald et al. (2005) hypothesized that if attention speeds neural transmission at early stages of the visual system, the early ERP components elicited by simultaneous visual targets would show an analogous lateral asymmetry in time, such that the P1 measured contralateral to the attended (cued) visual target would occur earlier than the P1 measured contralateral to the unattended (uncued) visual target. Such a finding would be consistent with Stelmach and Herdman’s (1991) explanation of attention-induced prior entry as well as with the view that the time course of perceptual experience is tied to the timing of the early evoked activity in the visual cortex (Dennett 1991). Such a latency shift was not observed, however, even though the auditory cue had a considerable effect on the judgments of temporal order of the visual targets. Instead, cross-modal cueing led to an amplitude increase (with no change in latency) of the ERP positivity in the ventral visual cortex contralateral to the side of the auditory cue, starting in the latency range of the P1 component (90–120 ms) (Figure 26.2c–e).
This finding suggests that the effect of spatial attention on the perception of temporal order occurs because an increase in the gain of the cued sensory input causes a perceptual threshold to be reached at an earlier time, not because the attended input was transmitted more rapidly than the unattended input at the earliest stages of processing.
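
The gain account in the preceding sentence has a simple quantitative reading: if attention multiplies the amplitude of a fixed sensory response without shifting its time course, a fixed downstream threshold is still crossed earlier. The toy model below is our illustrative sketch, not the authors' model; the logistic time course, gain values, and threshold are arbitrary assumptions.

```python
# Hedged toy model of the gain account: attention scales the amplitude
# of a fixed sensory response r(t); the response onset and shape are
# identical across conditions, yet the amplified response reaches a
# perceptual threshold earlier. All parameters are arbitrary.
import math

def crossing_time(gain, threshold=0.5, dt=0.1):
    """Return the time (ms) at which gain * r(t) first reaches the
    threshold, where r(t) is a fixed logistic ramp centered at 100 ms."""
    for step in range(3000):
        t = step * dt
        r = 1.0 / (1.0 + math.exp(-(t - 100.0) / 15.0))  # fixed time course
        if gain * r >= threshold:
            return t
    return None

attended = crossing_time(gain=1.3)    # amplified (cued) response
unattended = crossing_time(gain=1.0)  # baseline (uncued) response
print(attended, unattended)           # attended crosses threshold earlier
```

Note that the two responses start rising at the same moment; only the threshold-crossing time differs, which is exactly the dissociation between unchanged ERP latency and increased ERP amplitude described above.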

The pattern of ERP results obtained by McDonald and colleagues is likely an important clue for understanding the neural basis of visual prior entry due to involuntary deployments of spatial attention to sudden sounds. Although changes in ERP amplitude appear to underlie visual perceptual prior entry when attention is captured by lateralized auditory cues, changes in ERP timing might contribute to perceptual prior entry in other situations. This issue was addressed in a recent study of multisensory prior entry, in which participants voluntarily attended to either visual or tactile stimuli and judged whether the stimulus on the left or right appeared first, regardless of stimulus modality (Vibell et al. 2007). The ERP analysis centered on putatively visual ERP peaks over the posterior scalp (although ERPs to the tactile stimuli were not subtracted out and thus may have contaminated the ERP waveforms; cf. Talsma and Woldorff 2005). Interestingly, the P1 peaked an average of 4 ms earlier when participants were attending to the visual modality than when they were attending to the tactile modality, suggesting that modality-based attentional selection may have a small effect on the timing of early evoked activity in the visual system. These latency results are not entirely clear, however, because the small-but-significant attention effect may have been caused by a single participant with an implausibly large latency difference (17 ms) and may have been influenced by overlap with the tactile ERP. Unfortunately, the authors did not report whether attention had a similar effect on the latency of the tactile ERPs, which may have helped to corroborate the small attention effect on P1 latency.
Notwithstanding these potential problems in the ERP analysis, it is tempting to speculate that voluntary modality-based attentional selection influences the timing of early visual activity, whereas involuntary location-based attentional selection influences the gain of early visual activity. The question would still remain, however, of how very small changes in ERP latency (4 ms or less) could underlie much larger perceptual effects of tens of milliseconds.

26.4. BEYOND TEMPORAL ORDER: THE SIMULTANEITY JUDGMENT TASK

Recently, Santangelo and Spence (2008) offered an alternative explanation for the finding of McDonald and colleagues (2005) that nonpredictive auditory spatial cues affect visual time-order perception. Specifically, the authors suggested that the behavioral results in McDonald et al.’s TOJ task were not due to changes in perception but rather to decision-level factors. They acknowledged that simple response biases (e.g., a left cue primes a “left” response) would not have contributed to the behavioral results because participants indicated the color, not the location, of the target that appeared first. However, Santangelo and Spence raised the concern that some form of “secondary” response bias might have contributed to the TOJ effects (Schneider and Bavelier 2003; Stelmach and Herdman 1991).* For example, participants might have decided to select the stimulus at the cued location when uncertain as to which stimulus appeared first. In an attempt to circumvent such secondary response biases, Santangelo and Spence used a simultaneity judgment (SJ) task, in which participants had to judge whether two stimuli were presented simultaneously or successively (Carver and Brown 1997; Santangelo and Spence 2008; Schneider and Bavelier 2003). They reported that the uncued target had to appear 15–17 ms before the cued target in order for participants to have the subjective impression that the two stimuli appeared simultaneously. This difference is referred to as a shift in the point of subjective simultaneity (PSS), and it is typically attributed to the covert orienting of attention (but see Schneider and Bavelier 2003, for an alternative sensory-based account). The estimated shift in PSS was much smaller than the one reported in McDonald et al.’s earlier TOJ task (17.4 vs. 68.5 ms), but the conclusions derived from the two findings were the same: Involuntary capture of spatial attention by a sudden sound influences the perceived timing of visual events.
Santangelo and Spence went on to argue that the shift in PSS reported by McDonald et al. might have been due to secondary response biases and, as a result, the shift in PSS observed in their study provided “the first unequivocal empirical evidence in support of the effect of cross-modal attentional capture on the latencies of perceptual processing” (p. 163).

Although the SJ task has its virtues, there are two main arguments against Santangelo and Spence’s conclusions. First, the authors did not take into account the neurophysiological findings of McDonald and colleagues’ ERP study. Most importantly, the effect of auditory spatial cueing on early ERP activity arising from sensory-specific regions of the ventral visual cortex cannot be explained in terms of response bias. Thus, although it may be difficult to rule out all higher-order response biases in a TOJ task, the ERP findings provide compelling evidence that cross-modal spatial attention modulates early visual-sensory processing. Moreover, although the SJ task may be less susceptible to some decision-level factors, it may be impossible to rule out all decision-level factors entirely as contributors to the PSS effect.* Thus, it is not inconceivable that Santangelo and Spence’s behavioral findings may have reflected post-perceptual rather than perceptual effects.

Second, it should be noted that Santangelo and Spence’s results provided little, if any, empirical support for the conclusion that cross-modal spatial attention influences the timing of visual perceptual processing. The problem is that their estimated PSS did not accurately represent their empirical data. Their PSS measure was derived from the proportion of “simultaneous” responses, which varied as a function of the stimulus onset asynchrony (SOA) between the target on the cued side and the target on the uncued side. As shown in their Figure 2a, the proportion of “simultaneous” responses peaked when the cued and uncued targets appeared simultaneously (0 ms SOA) and decreased as the SOA between targets increased. The distribution of responses was fit to a Gaussian function using maximum likelihood estimation, and the mean of the fitted Gaussian function—not the observed data—was used as an estimate of the PSS. Critically, this procedure led to a mismatch between the mean of the fitted curve (or more aptly, the mean of the individual-subject fitted curves) and the mean of the observed data. Specifically, whereas the mean of the fitted curves fell slightly to the left of the 0-ms SOA (uncued target presented first), the mean of the observed data actually fell slightly to the right of the 0-ms SOA (cued target presented first) because of a positive skew of the distribution.
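
The gap between summary statistics at issue here can be seen in a toy computation. The "simultaneous"-response proportions below are fabricated (they are not Santangelo and Spence's data), chosen only to be skewed toward positive, cued-leading SOAs, and a least-squares grid search stands in for their maximum likelihood procedure. The point is simply that the mode of the observed distribution, the mean of the observed distribution, and the mean of a fitted symmetric Gaussian are different statistics that need not agree on skewed data.

```python
# Hedged sketch: on a skewed distribution of "simultaneous" responses,
# the mode, the (proportion-weighted) mean of the observed data, and
# the mean of a fitted symmetric Gaussian can tell different stories.
# Proportions are fabricated; positive SOA = cued target leading.
import math

soas = [-90, -60, -30, 0, 30, 60, 90]                         # ms
p_simultaneous = [0.05, 0.20, 0.60, 0.90, 0.75, 0.45, 0.20]   # right-skewed

# Mode and proportion-weighted mean of the observed distribution.
mode_soa = soas[p_simultaneous.index(max(p_simultaneous))]
observed_mean = (sum(s * p for s, p in zip(soas, p_simultaneous))
                 / sum(p_simultaneous))

def sse(mu, sigma, amp):
    """Sum of squared errors of a symmetric Gaussian of height amp."""
    return sum((amp * math.exp(-0.5 * ((s - mu) / sigma) ** 2) - p) ** 2
               for s, p in zip(soas, p_simultaneous))

# Crude grid search for the best-fitting Gaussian (least squares).
best = min((sse(mu, sigma, amp / 100.0), mu)
           for mu in range(-40, 41)
           for sigma in range(20, 101, 5)
           for amp in range(60, 101, 5))
fitted_mean = best[1]
print(f"mode {mode_soa} ms, observed mean {observed_mean:.1f} ms, "
      f"fitted Gaussian mean {fitted_mean} ms")
```

Here the mode sits at 0 ms (no apparent cue effect) while both means are pulled away from it by the skew, mirroring the interpretive ambiguity described for Santangelo and Spence's data.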

Does auditory cueing influence the subjective impression of simultaneity in the context of an SJ task? Unfortunately, the results from Santangelo and Spence’s study provide no clear answer to this question. The reported leftward shift in PSS suggests that the auditory cue had a small facilitatory effect on the perceived timing of the ipsilateral target. However, the rightward skew of the observed distribution (and the consequent rightward shift in its mean) suggests that the auditory cue may actually have delayed perception of the ipsilateral target. Finally, the mode of the observed distribution suggests that the auditory cue had no effect on subjective reports of simultaneity. These inconclusive results suggest that the SJ task may lack the sensitivity needed to detect shifts in perceived time order induced by cross-modal cueing.
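The way these three summary statistics can disagree for a skewed response distribution is easy to demonstrate. The sketch below uses hypothetical "simultaneous"-response proportions (not Santangelo and Spence’s actual data) to show that, for a positively skewed distribution, the mode can sit exactly at the 0-ms SOA while the mean of the observed distribution falls to its right—so the conclusion one draws depends entirely on which statistic is taken as the PSS.

```python
# Hypothetical "simultaneous"-response proportions in an SJ task.
# SOA convention as in the text: negative SOA = uncued target first,
# positive SOA = cued target first.
soas = [-120, -90, -60, -30, 0, 30, 60, 90, 120]
p_sim = [0.05, 0.10, 0.25, 0.60, 0.90, 0.75, 0.50, 0.30, 0.15]  # right-skewed

# Mode of the observed distribution: the SOA drawing the most
# "simultaneous" reports.
mode_soa = soas[p_sim.index(max(p_sim))]

# Mean of the observed distribution: the proportion-weighted average SOA.
mean_soa = sum(s * p for s, p in zip(soas, p_sim)) / sum(p_sim)

print(mode_soa)            # 0     -> suggests no cueing effect
print(round(mean_soa, 2))  # 13.75 -> right of 0 ms: the skew pulls the mean
```

A symmetric Gaussian fitted to such data (as in Santangelo and Spence’s maximum-likelihood procedure) yields yet a third estimate, which need not coincide with either the mode or the mean of the observed proportions.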

26.5. INVOLUNTARY CROSS-MODAL SPATIAL ATTENTION ALTERS APPEARANCE

The findings of the signal-detection and TOJ studies outlined in previous sections support the view that involuntary cross-modal spatial attention alters the perception of subsequent visual stimuli as well as the gain of neural responses in extrastriate visual cortex 100–150 ms after stimulus onset. These results largely mirrored the effects of visual spatial cues on visual perceptual sensitivity (e.g., Luck et al. 1994; Smith 2000) and temporal perception (e.g., Stelmach and Herdman 1991; Shore et al. 2001). However, none of these studies directly addressed the question of whether attention alters the subjective appearance of objects that reach our senses. Does attention make white objects appear whiter and dark objects appear darker? Does it make the ticking of a clock sound louder? Psychologists have pondered questions like these for well over a century (e.g., Fechner 1882; Helmholtz 1866; James 1890).

Recently, Carrasco and colleagues (2004) introduced a psychophysical paradigm to address the question, “Does attention alter appearance?” The paradigm is similar to the TOJ paradigm except that, rather than varying the SOA between two visual targets and asking participants to judge which appeared first (or last), the relative physical contrast of two targets is varied and participants are asked to judge which is higher (or lower) in perceived contrast. In the original variant of the task, a small black dot was used to summon attention to the left or right just before two Gabor patches appeared, one at each of the left and right locations. When the physical contrasts of the two targets were similar or identical, observers tended to report the orientation of the target on the cued side, indicating that they perceived it as the higher-contrast stimulus. Based on these results, Carrasco and colleagues (2004) concluded that attention alters the subjective impression of contrast. In subsequent studies, visual cueing was found to alter the subjective impressions of several other stimulus features, including color saturation, spatial frequency, and motion coherence (for a review, see Carrasco 2006).

Carrasco and colleagues performed several control experiments to help rule out alternative explanations for their psychophysical findings (Prinzmetal et al. 2008; Schneider and Komlos 2008). The results of these controls argued against low-level sensory factors (Ling and Carrasco 2007) as well as higher-level decision or response biases (Carrasco et al. 2004; Fuller et al. 2008). However, as we have discussed in previous sections, it is difficult to rule out all alternative explanations on the basis of the behavioral data alone. Moreover, results from different paradigms have led to different conclusions about whether attention alters appearance: whereas the results from Carrasco’s paradigm have indicated that attention does alter appearance, the results from an equality-judgment paradigm introduced by Schneider and Komlos (2008) have suggested that attention may alter decision processes rather than contrast appearance.

Störmer et al. (2009) recently investigated whether cross-modal spatial attention alters visual appearance. The visual cue was replaced by a spatially nonpredictive auditory cue delivered in stereo so that it appeared to emanate from a peripheral location of a visual display (25° from fixation). After a 150-ms SOA, two Gabors were presented, one at the cued location and one on the opposite side of fixation (Figure 26.3a). The use of an auditory cue eliminated potential sensory interactions between a visual cue and target that might boost the contrast of the cued target even in the absence of attention (e.g., the contrast of a visual cue could add to the contrast of the cued-location target, thereby making it higher in contrast than the uncued-location target). As in Carrasco et al.’s (2004) high-contrast experiment, the contrast of one (standard) Gabor was fixed at 22%, whereas the contrast of the other (test) Gabor varied between 6% and 79%. ERPs were recorded on the trials (one third of the total) on which the two Gabors were equal in contrast. Participants were required to indicate whether the higher-contrast Gabor patch was oriented horizontally or vertically.

FIGURE 26.3. Results from Störmer et al.’s (2009) contrast-appearance experiment. (a) Stimulus sequence and grand-average ERPs to equal-contrast Gabors, recorded at occipital electrodes (PO7/PO8) contralateral and ipsilateral to cued side. On a short-SOA (more...)

The psychophysical findings in this auditory cueing paradigm were consistent with those reported by Carrasco and colleagues (2004). When the test and standard Gabors had the same physical contrast, observers reported the orientation of the cued-location Gabor significantly more often than the uncued-location Gabor (55% vs. 45%) (Figure 26.3b). The point of subjective equality (PSE)—the test contrast at which observers judged the test patch to be higher in contrast on half of the trials—averaged 20% when the test patch was cued and 25% when the standard patch was cued (in comparison with the 22% standard contrast; Figure 26.3a). These results indicate that spatially nonpredictive auditory cues as well as visual cues can influence subjective (visual) contrast judgments.
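The PSE computation itself is straightforward. The sketch below uses hypothetical response proportions (not Störmer et al.’s raw data) and estimates the PSE by linear interpolation at the 50% point of the psychometric function; in practice a cumulative-Gaussian or logistic fit would be used, but interpolation shows the logic, and the made-up proportions are chosen to land near the reported cued (20%) and uncued (25%) PSE values.

```python
def pse(contrasts, p_test_higher):
    """Estimate the point of subjective equality: the test contrast at
    which the test patch is judged higher in contrast on half of the
    trials.  Linearly interpolates between the two points spanning 0.5."""
    points = list(zip(contrasts, p_test_higher))
    for (c0, p0), (c1, p1) in zip(points, points[1:]):
        if p0 <= 0.5 <= p1:
            return c0 + (0.5 - p0) / (p1 - p0) * (c1 - c0)
    raise ValueError("psychometric function does not cross 0.5")

contrasts = [6, 11, 16, 22, 30, 45, 79]                 # test contrasts (%)
p_cued   = [0.05, 0.15, 0.35, 0.58, 0.80, 0.95, 0.99]   # test patch cued
p_uncued = [0.02, 0.08, 0.20, 0.40, 0.65, 0.90, 0.98]   # standard patch cued

print(round(pse(contrasts, p_cued), 1))    # ~19.9: a cued test patch needs
                                           # less physical contrast to match
                                           # the 22% standard
print(round(pse(contrasts, p_uncued), 1))  # ~25.2: an uncued test patch
                                           # needs more
```

A PSE below the 22% standard when the test patch is cued, together with a PSE above it when the standard is cued, is the signature of the cueing effect on subjective contrast described above.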

To investigate whether the auditory cue altered visual appearance as opposed to decision- or response-level processes, Störmer and colleagues (2009) examined the ERPs elicited by the equal-contrast Gabors as a function of cue location. The authors reasoned that changes in subjective appearance would likely be linked to modulations of early ERP activity in visual cortex associated with perceptual processing rather than decision- or response-level processing (see also Schneider and Komlos 2008). Moreover, any such effect on early ERP activity should correlate with the observers’ tendencies to report the cued target as being higher in contrast. This is exactly what was found. Starting at approximately 90 ms after presentation of the equal-contrast targets, the waveform recorded contralaterally to the cued side became more positive than the waveform recorded ipsilaterally to the cued side (Figure 26.3a). This contralateral positivity was observed on those trials when observers judged the cued-location target to be higher in contrast but not when observers judged the uncued-location target to be higher in contrast. The tendency to report the cued-location target as being higher in contrast correlated with the contralateral ERP positivity, most strongly in the time interval of the P1 component (120–140 ms), which is generated at early stages of visual cortical processing. Topographical mapping and distributed source modeling indicated that the increased contralateral positivity in the P1 interval reflected modulations of neural activity in or near the fusiform gyrus of the occipital lobe (Figure 26.3c and d). These ERP findings converge with the behavioral evidence that cross-modal spatial attention affects visual appearance through modulations at an early sensory level rather than by affecting a late decision process.

26.6. POSSIBLE MECHANISMS OF CROSS-MODAL CUE EFFECTS

The previous sections have focused on the perceptual consequences of cross-modal spatial cueing. To sum up, salient-but-irrelevant sounds were found to enhance visual perceptual sensitivity, accelerate the timing of visual perceptions, and alter the appearance of visual stimuli. Each of these perceptual effects was accompanied by modulation of the early cortical response elicited by the visual stimulus within ventral-stream regions. Such findings are consistent with the hypothesis that auditory and visual stimuli engage a common neural network involved in the control and covert deployment of attention in space (Farah et al. 1989). Converging lines of evidence have pointed to the involvement of several key brain structures in the control and deployment of spatial attention in visual tasks. These brain regions include the superior colliculus, pulvinar nucleus of the thalamus, intraparietal sulcus, and dorsal premotor cortex (for additional details, see Corbetta and Shulman 2002; LaBerge 1995; Posner and Raichle 1994). Importantly, multisensory neurons have been found in each of these areas, which suggests that the neural network responsible for the covert deployment of attention in visual space may well control the deployment of attention in multisensory space (see Macaluso, this volume; Ward et al. 1998; Wright and Ward 2008).

At present, however, there is no consensus as to whether a supramodal attention system is responsible for the cross-modal spatial cue effects outlined in the previous sections. Two different controversies have emerged. The first concerns whether a single, supramodal system controls the deployment of attention in multisensory space or whether separate, modality-specific systems direct attention to stimuli of their respective modalities. The latter view can account for cross-modal cueing effects by assuming that the activation of one system triggers coactivation of others. According to this separate-but-linked proposal, a shift of attention to an auditory location would lead to a separate shift of attention to the corresponding location of the visual field. Both the supramodal and separate-but-linked hypotheses can account for cross-modal cueing effects, making it difficult to distinguish between the two views in the absence of more direct measures of the neural activity that underlies attention control.

The second major controversy over the possible mechanisms of cross-modal cue effects is specific to studies utilizing salient-but-irrelevant stimuli to capture attention involuntarily. In these studies, the behavioral and neurophysiological effects of cueing are typically maximal when the cue appears 100–300 ms before the target. Although it is customary to attribute these facilitatory effects to the covert orienting of attention, they might alternatively result from sensory interactions between cue and target (Tassinari et al. 1994). The cross-modal-cueing paradigm eliminates unimodal sensory interactions, such as those taking place at the level of the retina, but the possibility of cross-modal sensory interaction remains because of the existence of multisensory neurons at many levels of the sensory pathways that respond to stimuli in different modalities (Driver and Noesselt 2008; Foxe and Schroeder 2005; Meredith and Stein 1996; Schroeder and Foxe 2005). In fact, the majority of multisensory neurons do not simply respond to stimuli in different modalities, but rather appear to integrate the input signals from different modalities so that their responses to multimodal stimulation differ quantitatively from the simple summation of their unimodal responses (for reviews, see Stein and Meredith 1993; Stein et al. 2009; other chapters in this volume). Such multisensory interactions are typically largest when stimuli from different modalities occur at about the same time, but they are possible over a period of several hundreds of milliseconds (Meredith et al. 1987). In light of these considerations, the cross-modal cueing effects described in previous sections could in principle have been due to the involuntary covert orienting of spatial attention or to the integration of cue and target into a single multisensory event (McDonald et al. 2001; Spence and McDonald 2004; Spence et al. 2004).
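The departure of multisensory responses from the simple sum of unisensory responses is conventionally quantified with the multisensory enhancement index of Stein and Meredith (1993), which expresses the multisensory response relative to the best unisensory response. The sketch below uses made-up spike counts purely for illustration.

```python
def enhancement_index(multisensory, *unisensory):
    """Multisensory enhancement (%) relative to the best unisensory
    response: (CM - SMmax) / SMmax * 100, where CM is the response to
    combined-modality stimulation and SMmax is the largest
    single-modality response (Stein and Meredith 1993)."""
    sm_max = max(unisensory)
    return (multisensory - sm_max) / sm_max * 100.0

# Hypothetical mean spike counts for a single multisensory neuron.
auditory, visual, audiovisual = 10.0, 12.0, 30.0

print(enhancement_index(audiovisual, auditory, visual))  # 150.0
# The combined response (30) also exceeds the sum of the unisensory
# responses (10 + 12 = 22), i.e., the interaction is superadditive.
```

Enhancement indices of this kind are typically largest for spatially and temporally coincident stimuli, which is precisely why short cue–target SOAs make integration-based accounts of cueing effects hard to exclude.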

Although it is often difficult to determine which of these mechanisms is responsible for cross-modal cueing effects, several factors can help to tip the scales in favor of one explanation or the other. One factor is the temporal relationship between the cue and target stimuli. A simple rule of thumb is that increasing the temporal overlap between the cue and target will make multisensory integration more likely and pre-target attentional biasing less likely (McDonald et al. 2001). Thus, it is relatively straightforward to attribute cross-modal cue effects to multisensory integration when cue and target are presented concurrently, or to spatial attention when cue and target are separated by a long temporal gap. The likely cause of cross-modal cueing effects is not so clear, however, when there is a short gap between cue and target that is within the temporal window where integration is possible. In such situations, other considerations may help to disambiguate the causes of the cross-modal cueing effects. For example, multisensory integration is largely an automatic and invariant process, whereas stimulus-driven attention effects are dependent on an observer’s goals and intentions (i.e., attentional set; e.g., Folk et al. 1992). Thus, if cross-modal spatial cue effects were found to be contingent upon an observer’s current attentional set, they would be more likely to have been caused by pre-target attentional biasing. To our knowledge, there has been little discussion of the dependency of involuntary cross-modal spatial cueing effects on attentional set and other task-related factors (e.g., Ward et al. 2000).

A second consideration that could help distinguish between alternative mechanisms of cross-modal cueing effects concerns the temporal sequence of control operations (Spence et al. 2004). According to the most prominent multisensory integration account, signals arising from stimuli in different modalities converge onto multimodal brain regions and are integrated therein. The resulting integrated signal is then fed back to the unimodal brain regions to influence processing of subsequent stimuli in modality-specific regions of cortex (Calvert et al. 2000; Macaluso et al. 2000). Critically, such an influence on modality-specific processing would occur only after feedforward convergence and integration of the unimodal signals takes place (Figure 26.4a). This contrasts with the supramodal-attention account, according to which the cue’s influence on modality-specific processing may be initiated before the target in another modality has been presented (i.e., before integration is possible). In the context of a peripheral cueing task, a cue in one modality (e.g., audition) would initiate a sequence of attentional control operations (such as disengage, move, reengage; see Posner and Raichle 1994) that would lead to anticipatory biasing of activity in another modality (e.g., vision) before the appearance of the target (Figure 26.4b). In other words, whereas multisensory integration occurs only after stimulation in two (or more) modalities, the consequences of spatial attention are theoretically observable after stimulation in the cue modality alone. Thus, a careful examination of neural activity in the cue–target interval would help to ascertain whether pre-target attentional control is responsible for the cross-modal cueing effects on perception. This is a challenging task in the case of involuntary cross-modal cue effects, because the time interval between the cue and target is typically very short.
In the future, however, researchers might successfully adapt the electrophysiological methods used to track the voluntary control of spatial attention (e.g., Doesburg et al. 2009; Eimer et al. 2002; Green and McDonald 2008; McDonald and Green 2008; Worden et al. 2000) to look for signs of attentional control in involuntary cross-modal cueing paradigms such as the ones described in this chapter.

FIGURE 26.4. Hypothetical neural mechanisms for involuntary cross-modal spatial cueing effects. (a) Integration-based account. Nearly simultaneous auditory and visual stimuli first activate unimodal auditory and visual cortical regions and then converge upon a multisensory (more...)

26.7. CONCLUSIONS AND FUTURE DIRECTIONS

To date, most of the research on spatial attention has considered how attending to a particular region of space influences the processing of objects within isolated sensory modalities. However, a growing number of studies have demonstrated that orienting attention to the location of a stimulus in one modality can influence the perception of subsequent stimuli in different modalities. As outlined here, recent cross-modal spatial cueing studies have shown that the occurrence of a nonpredictive auditory cue affects the way we see subsequent visual objects in several ways: (1) by improving the perceptual sensitivity for detection of masked visual stimuli appearing at the cued location, (2) by producing earlier perceptual awareness of visual stimuli appearing at the cued location, and (3) by altering the subjective appearance of visual stimuli appearing at the cued location. Each of these cross-modally induced changes in perceptual experience is accompanied by short-latency changes in the neural processing of targets within occipitotemporal cortex in the vicinity of the fusiform gyrus, which is generally considered to represent modality-specific cortex belonging to the ventral stream of visual processing.

There is still much to be learned about these cross-modally induced changes in perception. One outstanding question is why spatial cueing appears to alter visual perception in tasks that focus on differences in temporal order or contrast (Carrasco et al. 2004; McDonald et al. 2005; Störmer et al. 2009) but not in tasks that focus on similarities (i.e., “same or not” judgments; Santangelo and Spence 2008; Schneider and Komlos 2008). Future studies could address this question by recording physiological measures (such as ERPs) in the two types of tasks. If an ERP component previously shown to correlate with perception were found to be elicited equally well under the two types of task instructions, it might be concluded that the same-or-not judgment lacks sensitivity to reveal perceptual effects.

Another outstanding question is whether the cross-modal cueing effects reviewed in this chapter are caused by the covert orienting of attention or by passive intersensory interactions. Some insight may come from recent ERP studies of the “double flash” illusion produced by the interaction of a single flash with two pulsed sounds (Mishra et al. 2007, 2010). In these studies, an enhanced early ventral stream response at 100–130 ms was observed in association with the perceived extra flash. Importantly, this neural correlate of the illusory flash was sensitive to manipulations of spatial selective attention, suggesting that the illusion is not the result of automatic multisensory integration. Along these lines, it is tempting to conclude that the highly similar enhancement of early ventral-stream activity found in audiovisual cueing studies (McDonald et al. 2005; Störmer et al. 2009) also results from the covert deployment of attention rather than the automatic integration of cue and target stimuli. Future studies could address this issue by looking for electrophysiological signs of attentional control and anticipatory modulation of visual cortical activity before the onset of the target stimulus.

A further challenge for future research will be to extend these studies to different combinations of sensory modalities to determine whether cross-modal cueing of spatial attention has analogous effects on the perception of auditory and somatosensory stimuli. Such findings would be consistent with the hypothesis that stimuli from the various spatial senses can all engage the same neural system that mediates the covert deployment of attention in multisensory space (Farah et al. 1989).

REFERENCES

  1. Boring E.G. A history of experimental psychology. New York: Appleton-Century; 1929.
  2. Broadbent D.E. Perception and communication. London: Pergamon Press; 1958.
  3. Cairney P.T. The complication experiment uncomplicated. Perception. 1975;4:255–265.
  4. Calvert G.A, Campbell R, Brammer M.J. Evidence from functional magnetic resonance imaging of crossmodal binding in the human heteromodal cortex. Current Biology. 2000;10:649–657. [PubMed: 10837246]
  5. Carrasco M. Covert attention increases contrast sensitivity: Psychophysical, neurophysiological, and neuroimaging studies. In: Martinez-Conde S, Macknik S.L, Martinez L.M, Alonso J.M, Tse P.U, editors. Progress in Brain Research, Volume 154, Part 1: Visual Perception. Part I. Fundamentals of Vision: Low and Mid-level Processes in Perception. Amsterdam: Elsevier; 2006. pp. 33–70.
  6. Carrasco M, Ling S, Read S. Attention alters appearance. Nature Neuroscience. 2004;7:308–313. [PMC free article: PMC3882082] [PubMed: 14966522]
  7. Carver R.A, Brown V. Effects of amount of attention allocated to the location of visual stimulus pairs on perception of simultaneity. Perception & Psychophysics. 1997;59:534–542. [PubMed: 9158328]
  8. Cherry C.E. Some experiments on the recognition of speech with one and two ears. Journal of the Acoustical Society of America. 1953;25:975–979.
  9. Corbetta M, Shulman G.L. Control of goal-directed and stimulus-driven attention in the brain. Nature Reviews Neuroscience. 2002;3:201–215. [PubMed: 11994752]
  10. Dennett D. Consciousness explained. Boston: Little, Brown & Co; 1991.
  11. Deutsch J.A, Deutsch D. Attention: Some theoretical considerations. Psychological Review. 1963;70:80–90. [PubMed: 14027390]
  12. Doesburg S.M, Green J.J, McDonald J.J, Ward L.M. From local inhibition to long-range integration: A functional dissociation of alpha-band synchronization across cortical scales in visuospatial attention. Brain Research. 2009;1303:97–110. [PubMed: 19782056]
  13. Driver J, Noesselt T. Multisensory interplay reveals crossmodal influences on ‘sensory-specific’ brain regions, neural responses, and judgments. Neuron. 2008;57:11–23. [PMC free article: PMC2427054] [PubMed: 18184561]
  14. Driver J, Spence C. Crossmodal spatial attention: Evidence from human performance. In: Spence C, Driver J, editors. Crossmodal space and crossmodal attention. Oxford: Oxford Univ. Press; 2004. pp. 179–220.
  15. Dufour A. Importance of attentional mechanisms in audiovisual links. Experimental Brain Research. 1999;126:215–222. [PubMed: 10369144]
  16. Eimer M, van Velzen J, Driver J. Cross-modal interactions between audition, touch, and vision in endogenous spatial attention: ERP evidence on preparatory states and sensory modulations. Journal of Cognitive Neuroscience. 2002;14:254–271. [PubMed: 11970790]
  17. Eimer M, Schröger E. ERP effects of intermodal attention and cross-modal links in spatial attention. Psychophysiology. 1998;35:313–327. [PubMed: 9564751]
  18. Eriksen C.W, Hoffman J.E. Temporal and spatial characteristics of selective encoding from visual displays. Perception & Psychophysics. 1972;12:201–204.
  19. Farah M.J, Wong A.B, Monheit M.A, Morrow L.A. Parietal lobe mechanisms of spatial attention—modality-specific or supramodal. Neuropsychologia. 1989;27:461–470. [PubMed: 2733819]
  20. Fechner G.T. Revision der Hauptpunkte der Psychophysik. Leipzig: Breitkopf & Hartel; 1882.
  21. Folk C.L, Remington R.W, Johnston J.C. Involuntary covert orienting is contingent on attentional control settings. Journal of Experimental Psychology: Human Perception and Performance. 1992;18:1030–1044. [PubMed: 1431742]
  22. Foxe J.J, Schroeder C.E. The case for feedforward multisensory convergence during early cortical processing. Neuroreport. 2005;16:419–423. [PubMed: 15770144]
  23. Fukuda K, Vogel E.K. Human variation in overriding attentional capture. Journal of Neuroscience. 2009;29:8726–8733. [PMC free article: PMC6664881] [PubMed: 19587279]
  24. Fuller S, Rodriguez R.Z, Carrasco M. Apparent contrast differs across the vertical meridian: Visual and attentional factors. Journal of Vision. 2008;8:1–16. [PMC free article: PMC2789458] [PubMed: 18318619]
  25. Green D.M, Swets J.A. Signal detection theory and psychophysics. New York: Wiley; 1966.
  26. Green J.J, McDonald J.J. Electrical neuroimaging reveals timing of attentional control activity in human brain. PLoS Biology. 2008;6:e81.
  27. Heinze H.J, Mangun G.R, Hillyard S.A. Visual event-related potentials index perceptual accuracy during spatial attention to bilateral stimuli. In: Brunia C, et al., editors. Psychophysiological Brain Research. Tilburg, The Netherlands: Tilburg Univ. Press; 1990. pp. 196–202.
  28. Heinze H.J, Mangun G.R, Burchert W, et al. Combined spatial and temporal imaging of brain activity during visual selective attention in humans. Nature. 1994;372:543–546. [PubMed: 7990926]
  29. Helmholtz H.V. Treatise on physiological optics. 3rd ed. Vols. 2 & 3. Rochester: Optical Society of America; 1866.
  30. Hillyard S.A, Simpson G.V, Woods D.L, Vanvoorhis S, Münte T.F. Event-related brain potentials and selective attention to different modalities. In: Reinoso-Suarez F, Ajmone-Marsan C, editors. Cortical Integration. New York: Raven Press; 1984. pp. 395–414.
  31. James W. The principles of psychology. New York: Henry Holt; 1890.
  32. LaBerge D. Attentional processing: The brain's art of mindfulness. Cambridge, MA: Harvard Univ. Press; 1995.
  33. Ling S, Carrasco M. Transient covert attention does alter appearance: A reply to Schneider 2006. Perception & Psychophysics. 2007;69:1051–1058. [PMC free article: PMC2789452] [PubMed: 18018987]
  34. Lu Z.L, Dosher B.A. External noise distinguishes attention mechanisms. Vision Research. 1998;38:1183–1198. [PubMed: 9666987]
  35. Luce P.A. A computational analysis of uniqueness points in auditory word recognition. Perception & Psychophysics. 1986;39:155–158. [PubMed: 3737339]
  36. Luck S.J, Heinze H.J, Mangun G.R, Hillyard S.A. Visual event-related potentials index focussed attention within bilateral stimulus arrays: II. Functional dissociation of P1 and N1 components. Electroencephalography and Clinical Neurophysiology. 1990;75:528–542. [PubMed: 1693897]
  37. Luck S.J, Hillyard S.A, Mouloua M, Hawkins H.L. Mechanisms of visual–spatial attention: Resource allocation or uncertainty reduction? Journal of Experimental Psychology: Human Perception and Performance. 1996;22:725–737. [PubMed: 8666960]
  38. Luck S.J, Hillyard S.A, Mouloua M, Woldorff M.G, Clark V.P, Hawkins H.L. Effects of spatial cuing on luminance detectability: Psychophysical and electrophysiological evidence for early selection. Journal of Experimental Psychology: Human Perception and Performance. 1994;20:887–904. [PubMed: 8083642]
  39. Macaluso E, Frith C.D, Driver J. Modulation of human visual cortex by crossmodal spatial attention. Science. 2000;289:1206–1208. [PubMed: 10947990]
  40. McDonald J.J, Green J.J. Isolating event-related potential components associated with voluntary control of visuo-spatial attention. Brain Research. 2008;1227:96–109. [PubMed: 18621037]
  41. McDonald J.J, Teder-Sälejärvi W.A, Di Russo F, Hillyard S.A. Neural substrates of perceptual enhancement by cross-modal spatial attention. Journal of Cognitive Neuroscience. 2003;15:10–19. [PubMed: 12590839]
  42. McDonald J.J, Teder-Sälejärvi W.A, Di Russo F, Hillyard S.A. Neural basis of auditory-induced shifts in visual time-order perception. Nature Neuroscience. 2005;8:1197–1202. [PubMed: 16056224]
  43. McDonald J.J, Teder-Sälejärvi W.A, Heraldez D, Hillyard S.A. Electrophysiological evidence for the “missing link” in crossmodal attention. Canadian Journal of Experimental Psychology. 2001;55:141–149. [PubMed: 11433785]
  44. McDonald J.J, Teder-Sälejärvi W.A, Hillyard S.A. Involuntary orienting to sound improves visual perception. Nature. 2000;407:906–908. [PubMed: 11057669]
  45. McDonald J.J, Ward L.M. Spatial relevance determines facilitatory and inhibitory effects of auditory covert spatial orienting. Journal of Experimental Psychology: Human Perception and Performance. 1999;25:1234–1252.
  46. McDonald J.J, Ward L.M. Involuntary listening aids seeing: Evidence from human electrophysiology. Psychological Science. 2000;11:167–171. [PubMed: 11273425]
  47. Meredith M.A, Nemitz J.W, Stein B.E. Determinants of multisensory integration in superior colliculus neurons: I. Temporal factors. Journal of Neuroscience. 1987;7:3215–3229. [PMC free article: PMC6569162] [PubMed: 3668625]
  48. Meredith M.A, Stein B.E. Spatial determinants of multisensory integration in cat superior colliculus neurons. Journal of Neurophysiology. 1996;75:1843–1857. [PubMed: 8734584]
  49. Mishra J, Martinez A, Sejnowski T.J, Hillyard S.A. Early cross-modal interactions in auditory and visual cortex underlie a sound-induced visual illusion. Journal of Neuroscience. 2007;27:4120–4131. [PMC free article: PMC2905511] [PubMed: 17428990]
  50. Mishra J, Martinez A, Hillyard S.A. Effect of attention on early cortical processes associated with the sound-induced extra flash illusion. Journal of Cognitive Neuroscience. 2010;22:1714–1729. [PubMed: 19583464]
  51. Pashler H.E. The psychology of attention. Cambridge, MA: MIT Press; 1998.
  52. Posner M.I. Chronometric explorations of mind. Hillsdale, NJ: Lawrence Erlbaum; 1978.
  53. Posner M.I, Cohen Y, Rafal R.D. Neural systems control of spatial orienting. Philosophical Transactions of the Royal Society of London Series B-Biological Sciences. 1982;298:187–198. [PubMed: 6125970]
  54. Posner M.I, Raichle M.E. Images of mind. New York: W. H. Freeman; 1994.
  55. Posner M.I, Walker J.A, Friedrich F.J, Rafal R.D. Effects of parietal injury on covert orienting of attention. Journal of Neuroscience. 1984;4:1863–1874. [PMC free article: PMC6564871] [PubMed: 6737043]
  56. Prime D.J, McDonald J.J, Green J, Ward L.M. When cross-modal spatial attention fails. Canadian Journal of Experimental Psychology. 2008;62:192–197. [PubMed: 18778148]
  57. Prinzmetal W, Long V, Leonhardt J. Involuntary attention and brightness contrast. Perception & Psychophysics. 2008;70:1139–1150. [PubMed: 18927000]
  58. Prinzmetal W, McCool C, Park S. Attention: Reaction time and accuracy reveal different mechanisms. Journal of Experimental Psychology: General. 2005;134:73–92. [PubMed: 15702964]
  59. Rhodes G. Auditory attention and the representation of spatial information. Perception & Psychophysics. 1987;42:1–14. [PubMed: 3658631]
  60. Santangelo V, Spence C. Crossmodal attentional capture in an unspeeded simultaneity judgement task. Visual Cognition. 2008;16:155–165.
  61. Schneider K.A, Bavelier D. Components of visual prior entry. Cognitive Psychology. 2003;47:333–366. [PubMed: 14642288]
  62. Schneider K.A, Komlos M. Attention biases decisions but does not alter appearance. Journal of Vision. 2008;8:1–10. [PubMed: 19146287]
  63. Schroeder C.E, Foxe J. Multisensory contributions to low-level, “unisensory” processing. Current Opinion in Neurobiology. 2005;15:454–458. [PubMed: 16019202]
  64. Shaw M.L. Attending to multiple sources of information: I. The integration of information in decision making. Cognitive Psychology. 1982;14:353–409.
  65. Shaw M.L. Division of attention among spatial locations: A fundamental difference between detection of letters and detection of luminance increments. In: Bouma H, Bouwhui D.G, editors. Attention and Performance X. Hillsdale, NJ: Erlbaum; 1984. pp. 109–121.
  66. Shimojo S, Miyauchi S, Hikosaka O. Visual motion sensation yielded by non-visually driven attention. Vision Research. 1997;37:1575–1580. [PubMed: 9231224]
  67. Shiu L.P, Pashler H. Negligible effect of spatial precueing on identification of single digits. Journal of Experimental Psychology: Human Perception and Performance. 1994;20:1037–1054.
  68. Shore D.I, Spence C, Klein R.M. Visual prior entry. Psychological Science. 2001;12:205–212. [PubMed: 11437302]
  69. Smith P.L. Attention and luminance detection: Effects of cues, masks, and pedestals. Journal of Experimental Psychology: Human Perception and Performance. 2000;26:1401–1420. [PubMed: 10946722]
  70. Smith P.L, Ratcliff R. An integrated theory of attention and decision making in visual signal detection. Psychological Review. 2009;116:283–317. [PubMed: 19348543]
  71. Soto-Faraco S, McDonald J, Kingstone A. Gaze direction: Effects on attentional orienting and crossmodal target responses. Poster presented at the annual meeting of the Cognitive Neuroscience Society; San Francisco, CA; 2002.
  72. Spence C.J, Driver J. Covert spatial orienting in audition—exogenous and endogenous mechanisms. Journal of Experimental Psychology: Human Perception and Performance. 1994;20:555–574.
  73. Spence C, Driver J. Audiovisual links in exogenous covert spatial orienting. Perception & Psychophysics. 1997;59:1–22. [PubMed: 9038403]
  74. Spence C, McDonald J.J. The crossmodal consequences of the exogenous spatial orienting of attention. In: Calvert G.A, Spence C, Stein B.E, editors. The handbook of multisensory processing. Cambridge, MA: MIT Press; 2004. pp. 3–25.
  75. Spence C, McDonald J.J, Driver J. Exogenous spatial cuing studies of human crossmodal attention and multisensory integration. In: Spence C, Driver J, editors. Crossmodal space and crossmodal attention. Oxford: Oxford Univ. Press; 2004. pp. 277–320.
  76. Sperling G, Dosher B.A. Strategy and optimization in human information processing. In: Boff K.R, Kaufman L, Thomas J.P, editors. Handbook of Perception and Human Performance. New York: Wiley; 1986. pp. 1–65.
  77. Stein B.E, Meredith M.A. The merging of the senses. Cambridge, MA: MIT Press; 1993.
  78. Stein B.E, Stanford T.R, Ramachandran R, Perrault T.J, Rowland B.A. Challenges in quantifying multisensory integration: Alternative criteria, models, and inverse effectiveness. Experimental Brain Research. 2009;198:113–126. [PMC free article: PMC3056521] [PubMed: 19551377]
  79. Stelmach L.B, Herdman C.M. Directed attention and perception of temporal-order. Journal of Experimental Psychology: Human Perception and Performance. 1991;17:539–550. [PubMed: 1830091]
  80. Stevens H.C. A simple complication pendulum for qualitative work. American Journal of Psychology. 1904;15:581.
  81. Stone J.V, Hunkin N.M, Porrill J, et al. When is now? Perception of simultaneity. Proceedings of the Royal Society of London Series B: Biological Sciences. 2001;268:31–38. [PMC free article: PMC1087597] [PubMed: 12123295]
  82. Störmer V.S, McDonald J.J, Hillyard S.A. Cross-modal cueing of attention alters appearance and early cortical processing of visual stimuli. PNAS. 2009;106:22456–22461. [PMC free article: PMC2799760] [PubMed: 20007778]
  83. Talsma D, Woldorff M.G. Selective attention and multisensory integration: Multiple phases of effects on the evoked brain activity. Journal of Cognitive Neuroscience. 2005;17:1098–1114. [PubMed: 16102239]
  84. Tassinari G, Aglioti S, Chelazzi L, Peru A, Berlucchi G. Do peripheral non-informative cues induce early facilitation of target detection? Vision Research. 1994;34:179–189. [PubMed: 8116277]
  85. Teder-Sälejärvi W.A, Munte T.F, Sperlich F.J, Hillyard S.A. Intra-modal and cross-modal spatial attention to auditory and visual stimuli. An event-related brain potential study. Cognitive Brain Research. 1999;8:327–343. [PubMed: 10556609]
  86. Titchener E.B. Lectures on the elementary psychology of feeling and attention. New York: The Macmillan Company; 1908.
  87. Treisman A, Geffen G. Selective attention: Perception or response? Quarterly Journal of Experimental Psychology. 1967;19:1–18. [PubMed: 6041678]
  88. Vibell J, Klinge C, Zampini M, Spence C, Nobre A.C. Temporal order is coded temporally in the brain: Early event-related potential latency shifts underlying prior entry in a cross-modal temporal order judgment task. Journal of Cognitive Neuroscience. 2007;19:109–120. [PubMed: 17214568]
  89. Ward L.M, McDonald J.J, Golestani N. Cross-modal control of attention shifts. In: Wright R.D, editor. Visual attention. New York: Oxford Univ. Press; 1998. pp. 232–268.
  90. Ward L.M, McDonald J.J, Lin D. On asymmetries in cross-modal spatial attention orienting. Perception & Psychophysics. 2000;62:1258–1264. [PubMed: 11019621]
  91. Watt R.J. Understanding vision. San Diego, CA: Academic Press; 1991.
  92. Worden M.S, Foxe J.J, Wang N, Simpson G.V. Anticipatory biasing of visuospatial attention indexed by retinotopically specific α-band electroencephalography increases over occipital cortex. Journal of Neuroscience. 2000;20(RC63):1–6. [PMC free article: PMC6772495] [PubMed: 10704517]
  93. Wright R.D, Ward L.M. Orienting of attention. New York: Oxford Univ. Press; 2008.
  94. Wundt W. Grundzüge der physiologischen Psychologie [Foundations of physiological psychology]. Leipzig, Germany: Wilhelm Engelmann; 1874.

Footnotes

*

This argument would also apply to the findings of Vibell et al.’s (2007) cross-modal TOJ study.

*

Whereas Santangelo and Spence (2008) made the strong claim that performance in SJ tasks should be completely independent of all response biases, Schneider and Bavelier (2003) argued only that performance in SJ tasks should be less susceptible to such decision-level effects than performance in TOJ tasks.

The mismatch between the estimated PSS and the mean of the observed data in Santangelo and Spence’s (2008) SJ task might have been due to violations in the assumptions of the fitting procedure. Specifically, the maximum likelihood procedure assumes that data are distributed normally, whereas the observed data were clearly skewed. Santangelo and Spence did perform one goodness-of-fit test to help determine whether the data differed significantly from the fitted Gaussians, but this test was insufficient to pick up the positive skew (note that other researchers have employed multiple goodness-of-fit tests before computing PSS; e.g., Stone et al. 2001). Alternatively, the mismatch between the estimated PSS and the mean of the observed data might have arisen because data from the simultaneous-target trials were actually discarded prior to the curve-fitting procedure. This arbitrary step shifted the mode of the distribution 13 ms to the left (uncued target was presented 13 ms before cued target), which happened to be very close to the reported shift in PSS.

Copyright © 2012 by Taylor & Francis Group, LLC.
Bookshelf ID: NBK92863 PMID: 22593885
