J Am Acad Audiol. Author manuscript; available in PMC Jan 1, 2011.
Published in final edited form as:
J Am Acad Audiol. Jan 2010; 21(1): 52–65.
PMCID: PMC2857406
NIHMSID: NIHMS187417

An Attempt to Improve Bilateral Cochlear Implants by Increasing the Distance between Electrodes and Providing Complementary Information to the Two Ears

Abstract

Objectives

The purpose of this investigation was to determine if adult bilateral cochlear implant recipients could benefit from using a speech processing strategy in which the input spectrum was interleaved among electrodes across the two implants.

Design

Two separate experiments were conducted. In both experiments, subjects were tested using a control speech processing strategy and a strategy in which the full input spectrum was filtered so that only the output of half of the filters was audible to one implant, while the output of the alternative filters was audible to the other implant. The filters were interleaved in a way that created alternate frequency “holes” between the two cochlear implants.

Results

In experiment one, four subjects were tested on consonant recognition. Results indicated that one of the four subjects performed better with the interleaved strategy, one subject received a binaural advantage with the interleaved strategy that they did not receive with the control strategy, and two subjects showed no decrement in performance when using the interleaved strategy. In the second experiment, 11 subjects were tested on word recognition, sentences in noise, and localization (it should be noted that not all subjects participated in all tests). Results showed that for speech perception testing one subject achieved significantly better scores with the interleaved strategy on all tests, and seven subjects showed a significant improvement with the interleaved strategy on at least one test. Only one subject showed a decrement in performance on all speech perception tests with the interleaved strategy. Out of nine subjects, one subject preferred the sound quality of the interleaved strategy. No one performed better on localization with the interleaved strategy.

Conclusion

Data from this study indicate that some adult bilateral cochlear implant recipients can benefit from using a speech processing strategy in which the input spectrum is interleaved among electrodes across the two implants. It is possible that the subjects in this study who showed a significant improvement with the interleaved strategy did so because of less channel interaction; however, this hypothesis was not directly tested.

Keywords: Bilateral, cochlear implants, electrodes

Bilateral cochlear implants have the potential to provide some useful binaural hearing benefits, including hearing speech in noise and localization (van Hoesel and Clark, 1997, 1999; Tyler et al, 2001, 2002; van Hoesel and Tyler, 2003; Litovsky et al, 2004, 2006; Senn et al, 2005; Verschuur et al, 2005; Ricketts et al, 2006; Grantham et al, 2007; Firszt et al, 2008). These benefits are consistent with results from studies of subjects with normal hearing (e.g., Koenig, 1950; Middlebrooks and Green, 1991; Wightman and Kistler, 1997) and of subjects with impaired hearing (e.g., Byrne et al, 1992; Peissig and Kollmeier, 1997; Byrne and Noble, 1998).

Although bilateral cochlear implants can improve performance compared to unilateral implant users (Gantz et al, 2002; Müller et al, 2002; Laszig et al, 2004; Nopp et al, 2004; Litovsky et al, 2006; Tyler et al, 2007; Buss et al, 2008; Dunn et al, 2008), there are still wide individual differences in overall performance. One possible contributor to poor performance may be channel interactions that can occur with electrical stimulation (Shannon, 1983; White et al, 1984; Wilson et al, 1991; Stickney et al, 2006). Electrical activity presented to one electrode can be transmitted over a broad region due to current spread through cochlear fluids. This impedes the attempted spatial separation of multiple electrodes positioned at different locations within the cochlea. Specifically, the current presented on nearby electrodes stimulates the same nerve fibers as other electrodes, instead of each electrode stimulating different fibers. This distorts the normal frequency representation of the auditory system, which is thought to be critical for speech understanding (e.g., Rosen and Fourcin, 1986; Tyler, 1986).

Nonsimultaneous stimulation of electrodes reduces some channel interaction because the current is turned off from one electrode before the current is turned on at another electrode (Wilson et al, 1991). However, this also has limitations. First, the spread of activity remains an issue because nearby electrodes will still stimulate similar nerve fibers, just not at the same time. Second, the effects of electrical stimulation and nerve activity do not cease immediately after stimulus termination. Adaptation and temporal summation effects continue after the electrical stimulation has been turned off, and this will affect the excitability of subsequent stimulation. Thus, channel interactions can occur even for nonsimultaneous stimulation (e.g., Boëx et al, 2003).

Bilateral cochlear implants provide a unique opportunity to study the reduction of channel interaction. Frequency information can be divided between the two implants, and active electrodes can be spaced farther apart, perhaps resulting in less interaction. The entire speech spectrum is still made available to the brain, provided that the information from each ear can be integrated centrally. Wilson et al (reported in Lawson et al, 1999; QPR 4 on NIH Project NO1-DC-8-2105) point out that if current interaction is a leading cause of poor spatial and spectral resolution for stimulation presented to a single cochlea, then this could be overcome by splitting stimulation between two cochlear implants in an "interlacing" method. Current interaction within a single cochlea would, by definition, be reduced. They studied this programming method in two individuals with bilateral cochlear implants, presenting information from channels 1, 3, and 5 to the left cochlear implant and information from channels 2, 4, and 6 to the right implant (and also the reverse). They found in one patient that scores for the bilateral "interlaced" condition were significantly higher than in any of the unilateral conditions. However, no difference across conditions was found with the second patient. They speculated that for some individuals the benefit of the increased electrode separation might be minimal if the spread of current was wide and the inter-electrode distances still relatively small.

This finding mirrors the conflicting evidence found in the literature as to the importance of reducing channel interaction for improved speech perception. There is some evidence suggesting that reducing channel interaction is beneficial for electrode pitch ranking (Hughes and Abbas, 2006; Hughes, 2008), but it is unclear how this improved electrode pitch ranking through reduced channel interaction would lead to improved consonant recognition or speech perception (Mens and Berenstein, 2005; Verschuur, 2009).

This paper reports a clinical application of the "interlacing" work of Lawson et al: an attempt to improve the performance of bilateral cochlear implant patients by dividing information between the two ears, interleaving the input spectrum among electrodes across the two implants.

EXPERIMENT 1

Method

Subjects

Four subjects participated in this study, including one male and three females. Each of the subjects had received bilateral cochlear implants during a single operation prior to their participation. Table 1 displays individual biographical information. At the time of the study, three subjects had 3 mo of cochlear implant experience, while one subject had 12 mo of experience. While it is possible that results of this study could be influenced by limited cochlear implant experience, 3 mo sentence recognition scores in quiet (HINT [Hearing in Noise Test] sentences; Nilsson et al, 1994) were excellent for two of the subjects (12422b = 100% and 12452b = 93%) and very good for one (12454b = 72%). Subjects ranged in age from 36 to 70 yr. All subjects had postlingually acquired profound bilateral sensorineural hearing loss, received minimal benefit from hearing aids prior to implantation, and met the standard cochlear implant criteria, at that time, in each ear. In accordance with one of those criteria, none of the subjects achieved a score of 40% correct or better in recognizing the Central Institute for the Deaf (CID) everyday sentences (Silverman and Hirsh, 1955) in the best-aided condition. Additionally, the subjects were selected for this study because they had differences in duration of deafness and hearing thresholds across ears prior to bilateral cochlear implantation.

Table 1
Biographical Information for Participants in Experiment 1

Signal Processing and Devices

All of the subjects in this study received the Nucleus® 24 cochlear implant system, wore the body-worn SPrint speech processor, and utilized a continuous interleaved sampling (CIS) coding strategy that was programmed specifically for this study. CIS presents a continuous representation of the signal envelope for each channel (Wilson et al, 1991; Wilson, 1993, 2000). Pulses of information are interleaved in time to eliminate one principal component of channel interaction, as discussed in the Introduction.

Our primary interest was in two conditions. In the first, subjects used a control strategy in which the full input spectrum was filtered into 12 channels presented to 12 electrodes at a rate of 1200 pps (pulses per second), bilaterally. This control strategy was devised to be different from the standard strategy that the subjects used in everyday listening, because many cochlear implant users accommodate readily to whatever program they are exposed to, so a long-utilized strategy has an advantage over an acutely tested strategy; the intent was to control for this likely advantage (e.g., Tyler et al, 1986).

A secondary interest was to explore four conditions: unilateral left using 12 channels and stimulus sites, unilateral right using 12 channels and stimulus sites, bilateral full, and bilateral interleaved.

The following 12 electrodes were used to create the strategies: 18, 17, 16, 15, 14, 13, 12, 11, 10, 9, 8, and 7, bilaterally. In the control condition, all 12 electrodes were programmed by setting threshold and most comfortable level, as is accomplished with a standard clinical fitting. In the second condition, subjects used an interleaved strategy in which the input spectrum was filtered into 12 channels; however, only the output of six of the filters was audible to one implant, while the output of the six alternative filters was audible to the other implant. To accomplish this, C-levels on every other electrode were set at or just below threshold. This does not, of course, guarantee that no stimulation occurs on these electrodes, only that it will be substantially less than that on electrodes given a full dynamic range and on the companion matching (same-bandwidth) electrode in the opposite ear. This unique programming created alternate frequency "holes" between the two cochlear implants, which cannot be accomplished by simply turning every other electrode off: when electrodes are turned off, the programming software reallocates the input frequency to compensate for the missing electrode.

The stimulation rate was 1200 pps for each implant, as with the first set of conditions. Electrodes set with a full dynamic range for the left implant were electrodes 18, 16, 14, 12, 10, and 8, while electrodes set with a full dynamic range for the right ear were 17, 15, 13, 11, 9, and 7. Table 2 shows the upper and lower cutoff frequencies for the interleaved program.

Table 2
Upper and Lower Cutoff Frequencies for the Active Electrodes for Each Interleaved Program for Subjects in Experiment 1
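For readers who want to trace the channel assignment concretely, the alternating split described above can be sketched as follows. This is a minimal illustration using the electrode numbers given in the text; the function and variable names are ours and are not part of any implant fitting software.

```python
# The 12 active electrodes used in Experiment 1, as listed in the text.
ELECTRODES = [18, 17, 16, 15, 14, 13, 12, 11, 10, 9, 8, 7]

def interleave(electrodes):
    """Split a 12-electrode array into two complementary 6-electrode maps.

    Alternate filters are given a full dynamic range in opposite ears,
    creating alternating frequency "holes" in each ear that are filled
    by the other ear.
    """
    left = electrodes[0::2]   # full dynamic range on the left implant
    right = electrodes[1::2]  # full dynamic range on the right implant
    return left, right

left, right = interleave(ELECTRODES)
# left  -> [18, 16, 14, 12, 10, 8]
# right -> [17, 15, 13, 11, 9, 7]
```

Together the two maps cover the full input spectrum, which is the sense in which the strategy is "complementary" across ears.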

Each processor was programmed separately. After both processors were activated, subjects were allowed to adjust each volume control to achieve a comfortable level. Processors were then balanced for loudness based on the voice of the clinician talking in front of them. During testing, subjects were allowed to adjust the volume control(s) to obtain a comfortable loudness for both unilateral and bilateral conditions. Sensitivity settings were held constant for each subject across all conditions.

Procedure

Subjects were tested using the left ear only, using the right ear only, and bilaterally for each condition. Subjects received minimal exposure and no auditory training with either speech processing strategy prior to testing. Testing took place in a sound-treated booth using a video-disc player to present all testing materials. Sound stimuli were presented via a front-facing loudspeaker at 0° azimuth and a 1 m distance from the subject. The speech stimuli were calibrated to 70 dB(C) using a sound level meter placed at the ear level of each subject.

Speech Reception Testing

The Iowa Medial Consonant Test (Tyler et al, 1983; Tyler et al, 1997) was administered, audition only, in quiet. This test was chosen because linguistic and cognitive factors are minimized and a speech features analysis (although not presented in this paper) can be done. This test uses a forced-choice format, in which response alternatives appeared on a monitor after a stimulus was presented. Subjects were asked to touch the appropriate item on the screen, and responses were scored in percent correct.

There are different versions of this test with a different total number of available response choices (e.g., 13, 16, or 24 choices). The 13-choice test presents each of 13 consonants six times for a total of 78 test presentations. The 16- and 24-choice versions present each of 16 or 24 consonants five times for a total of 80 and 120 test presentations, respectively. Subjects 12454b and 12435b took the 13-choice test, whereas subjects 12422b and 12452b took the 16-choice test. Subjects completed different test versions simply due to time constraints. Each subject completed the test twice. The presentation order for all tests was randomized, and patients chose their response from the alternatives presented in an /e/-consonant-/e/ (13-choice) or an /a/-consonant-/a/ (16- and 24-choice) context. A male talker produced the tokens for each test. At least three exemplars were recorded for each consonant, and these exemplars were presented in randomized orders as well.

Results

The results for all subjects are presented in Figure 1. In each panel, consonant identification scores for one subject are shown for left ear only, right ear only, and bilateral stimulation. On the left side of each panel are the results for the interleaved conditions, and on the right side of each panel are the results for the control conditions. A two-sample test for binomial proportions (with normal theory approximation) was used to determine significance (alpha = .05) between the control and interleaved conditions. Each subject completed each test twice, and error bars were graphed.

Figure 1
Consonant identification in quiet with the speech from the front for each of four subjects (Figure 1a = subject 12422b, Figure 1b = subject 12435b, Figure 1c = subject 12452b, and Figure 1d = subject 12454b) tested in experiment 1. Shown are scores for ...
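The per-subject comparisons below rest on the two-sample test for binomial proportions with the normal-theory approximation described above. The pooled z-test can be sketched as follows; this is our own illustration of the standard procedure, the authors' exact computation may differ in detail, and the counts in the example are hypothetical.

```python
import math

def two_proportion_z_test(x1, n1, x2, n2):
    """Pooled two-sample z-test for binomial proportions
    (normal-theory approximation), two-sided."""
    p1, p2 = x1 / n1, x2 / n2
    p_pool = (x1 + x2) / (n1 + n2)          # pooled proportion under H0
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # two-sided p-value from the standard normal distribution
    p_value = math.erfc(abs(z) / math.sqrt(2))
    return z, p_value

# Hypothetical example: 72/80 vs. 60/80 items correct in two conditions
z, p = two_proportion_z_test(72, 80, 60, 80)
```

With per-condition totals on the order of 78 to 160 presentations, differences of roughly ten percentage points or more are typically needed to reach significance at alpha = .05 with this test.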

Figure 1a shows the results for subject 12422b. When comparing bilateral only scores, results show that this patient performed significantly better (p < .001) when using the control strategy (91%) than when using the interleaved strategy (82%). However, in the results for the individual ears, there was no difference between the bilateral and right only scores (p > .05) for the control strategy, suggesting that the significant difference between bilateral scores across conditions is not due to a binaural effect but a “better ear” effect.

Figure 1b shows results for subject 12435b. No significant differences were found among the interleaved and control conditions (p > .05). Average percent correct scores for the control strategy were 65% for the left ear only, 63% for the right ear only, and 72% bilaterally. For the interleaved strategy, average percent correct scores were 72% for the left ear only, 67% for the right ear only, and 72% correct bilaterally. Neither the interleaved nor the control conditions exhibited a bilateral advantage.

Figure 1c shows results for subject 12452b. No significant differences were found between the interleaved and control conditions for the bilateral cases. Average percent correct scores for the control strategy consisted of 17% correct for the left ear only, 21% correct for the right ear only, and 24% correct bilaterally. In comparison, for the interleaved strategy, scores were 16% correct for the left ear only, 31% correct for the right ear only, and 22% correct with bilateral stimulation. No bilateral advantage was found in either condition.

Figure 1d shows results for subject 12454b. This subject performed significantly better (p < .001) when using the interleaved strategy bilaterally (76% correct) than when using the control strategy bilaterally (60% correct). Scores for the control strategy were very similar across test conditions (that is, left ear only = 55% correct; right ear only = 60% correct, and bilaterally = 60% correct). In contrast, a bilateral advantage was found for the interleaved conditions (that is, left ear only = 63% correct; right ear only = 68% correct; and bilaterally = 76% correct). The scores for this subject were better in the interleaved monaural conditions than in the control monaural conditions. Recall that only every other frequency band was represented in the interleaved unilateral conditions.

Discussion

In this first experiment we attempted to improve the performance of bilateral cochlear implant users by increasing the distance between electrodes through a division of the frequency information across the two ears. We were successful in one of four subjects. Possibly, this subject had channel interactions among electrodes, although we did not measure this directly. Additionally, we note that the monaural interleaved configurations resulted in higher scores than the monaural control configurations for this particular subject. This was the case even though spectral gaps were present in the interleaved configurations. Therefore, we assume that the higher scores may have resulted from less channel interaction. Another explanation is that some electrodes resulted in a stimulation that created a distorted signal and that when these were eliminated from the program, performance increased. In addition it might be possible for the binaural system to extract critical timing, level, and spectral information to compare and contrast across ears when there is less information available.

We could not measure a benefit from our interleaved strategy in the other three subjects. It could be that these subjects did not suffer from channel interactions. Lawson et al (1999) reported that interlacing channels across devices was useful only when the electrodes on the two cochlea produced different pitch percepts. In this study we did not test whether the pitch percepts were different across ears and electrodes. The individual differences among subjects could be a function of whether, by chance, the pitches were the same or different on the two electrode arrays.

In addition, it also might be that our tests were not sensitive enough to demonstrate an advantage (e.g., Loizou et al, 2003) or that significant channel interaction exists but the central mechanism was insufficient to reconstruct the full spectrum allowing for an increase in performance. Although a benefit in using the interleaved strategy could not be demonstrated in these three subjects, it is interesting to note that overall performance was similar across conditions in two of the subjects and that a decrement in performance was not found when subjects were using the interleaved strategy.

In the present investigation, tests were performed only in quiet using a loudspeaker placed directly in front of the subject. It may be that the advantages observed with bilateral cochlear implant devices are more apparent in speech-in-noise testing, particularly when speech and noise originate from different sound sources. Additionally, sound source localization was not tested in this study. Presenting different, even though complementary, information to the two ears has the potential to distort localization cues. In particular, in the current study, similar place and frequency cues would not be available at the two ears. Many models of binaural processing require such place-to-place comparisons between ears (e.g., Durlach, 1972; Jeffress, 1972). Thus, we conclude that there may be some (but not all) bilateral cochlear implant users who would benefit from the interleaved approach.

Based on our findings from the first experiment, we decided to perform a second experiment with more subjects and tests.

EXPERIMENT 2

In experiment 2 we conducted a study with a larger number of subjects and tested speech reception in quiet and noise, and additionally, measured sound source localization abilities.

Method

Subjects

Subjects for this experiment included 11 individuals (3 males, 8 females) who received bilateral cochlear implants during a single operation. Seven of the subjects were implanted with a Cochlear Corporation device (4 = CI24M; 3 = Contour) while four subjects were implanted with an Advanced Bionics device (4 = CIIHF1). Table 3 displays individual biographical information. Months of implant experience ranged from 6 to 48 mo with an average of 24 mo (SD = 13). Subjects ranged in age from 38 to 68 yr. All subjects had postlingually acquired profound bilateral sensorineural hearing loss, received minimal benefit from hearing aids prior to implantation, and met the standard cochlear implant criteria, at that time, in each ear. In accordance with one of those criteria, none of the subjects achieved a score of 50% correct or better in recognizing the HINT sentences (Nilsson et al, 1994) in the best-aided condition. (The sentences were presented in quiet.) Additionally, subjects were selected for bilateral implantation at this time because they had no differences in duration of deafness and hearing thresholds across ears pre-implantation.

Table 3
Biographical Information for Participants in Experiment 2

Signal Processing and Devices

Three subjects implanted with Cochlear Corporation devices wore the body-worn SPrint speech processor, while three subjects wore the Esprit 3G ear-level processor. All four of the Clarion subjects wore the CII ear-level speech processor.

Two conditions were tested. One condition was each subject’s standard everyday use processing strategy where the full input spectrum was filtered across all electrodes (see Table 4). The second condition was the interleaved strategy created by taking each subject’s own everyday strategy and filtering the input spectrum similarly to experiment 1 so that only the output of half of the filters was audible to one implant, while the output of the other half was audible to the other implant. This was done because we felt that if channel interactions were causing distortions, minimizing the amount of electrical interaction on each subject’s long-term use strategy might result in immediate improvements in performance.

Table 4
Standard Programming Parameters for Subjects in Experiment 2

Stimulation rate was held constant across the two conditions. Table 5 shows the upper and lower cutoff frequencies for the electrodes for each interleaved program. Programming and loudness balancing were completed in the same manner as in experiment 1. Four subjects utilized the ACE (Advanced Combination Encoder) strategy, two utilized SPEAK (Spectral PEAK), and five utilized CIS.

Table 5
Upper and Lower Cutoff Frequencies for the Active Electrodes for Each Interleaved Program for Subjects in Experiment 2

Procedure

The test setup and calibration of the speech stimuli for experiment 2 were consistent with the procedures used in experiment 1. Subjects received minimal exposure and no auditory training with the interleaved programs prior to testing. Speech perception testing was completed with speech stimuli presented from the front (0° azimuth) and noise presented either from the front (0° azimuth), the right (+90° azimuth), or the left (−90° azimuth).

Speech Perception

The following tests were used to evaluate performance across the two conditions; however, due to time constraints not every subject completed every test:

  • Consonant-nucleus-consonant monosyllabic words (CNC) (Tillman and Carhart, 1966) in quiet. Scores were reported in percent correct and recorded for both the word and phoneme level. Two lists of CNC words were presented for each condition. Lists were presented in a randomized order. All 11 subjects completed the CNC word testing.
  • CUNY (City University of New York) sentences (Boothroyd et al, 1985) in noise. Three conditions were tested: (1) speech and noise both presented from the front (0° azimuth); (2) speech from the front and noise from the right (+90° azimuth); and (3) speech from the front and noise from the left (−90° azimuth). The speech was set at 70 dB(C). The noise consisted of multitalker babble. The signal-to-noise ratio (S/N) was individually set in the 0° azimuth condition to avoid ceiling and floor effects and remained constant for the other two conditions. CUNY sentences were scored by dividing the total number of words correctly identified by the total number of words possible. Four lists were administered during each condition. Lists were presented in a randomized order. All 11 subjects were tested with the CUNY sentences.
  • Subjective quality ratings were obtained using a test that consisted of six categories of sounds: adult voices, children’s voices, everyday sounds, music, and speech in noise. Each category contained 16 sound samples for a total of 96 items. Subjects were asked to listen to each randomly played sound, presented at 70 dB(C), emanating from a loudspeaker placed in front of them. Using a computer touch screen and a visual analog scale, subjects rated each sound for clarity ranging from zero (unclear) to 100 (clear). Nine subjects completed the sound quality test.

Localization

Localization of everyday sounds was evaluated using the materials and methods described in Dunn et al (2005). Briefly, 16 different sounds were presented at 70 dB(C) from one of eight loudspeakers placed 15.5° apart in an arc of approximately 108° centered at the subject's 0° azimuth. Subjects were seated facing the center of the speaker array and were asked to identify the speaker from which the sound originated. Smaller RMS-average-error scores represent better localization ability; chance performance corresponds to an RMS error of approximately 40°. Nine subjects completed the localization test.
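As a concrete illustration of the RMS-error score used here, the computation can be sketched as follows. This is our own sketch: the speaker azimuths are reconstructed from the 15.5° spacing stated above, and the scoring details are assumptions rather than the exact procedure of Dunn et al (2005).

```python
import math

# Eight loudspeaker azimuths, 15.5° apart (relative to the leftmost speaker).
SPEAKERS = [i * 15.5 for i in range(8)]

def rms_error(presented, responded):
    """Root-mean-square difference (in degrees) between the azimuths of
    the presented speakers and the speakers the subject identified."""
    diffs = [(p - r) ** 2 for p, r in zip(presented, responded)]
    return math.sqrt(sum(diffs) / len(diffs))

# A subject who is always one speaker off scores 15.5° RMS error;
# chance performance is roughly 40° RMS error.
```

Because the error grows with the square of each miss, a few large confusions (e.g., left-right reversals across the array) raise the score much more than many near misses.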

Comparisons between interleaved and standard programming were made for each individual for each test. Scores from the CNC words and CUNY sentences were analyzed using two-sample tests for binomial proportions (with normal theory approximation), while results from the subjective sound quality rating and localization tests were analyzed using paired two-sample t-tests. An alpha of .05 was used to determine significance between the standard and interleaved conditions for all tests. In the figures, statistical significance is indicated with an asterisk (*) next to the subject ID, and the magnitude of the improvement or decrement for each subject is shown by the vertical distance from the diagonal. For example, subject R47b scored 54% with the standard strategy versus 72% with the interleaved strategy, whereas subject H40b scored 65% with the standard strategy versus 45% with the interleaved strategy.

Results

Speech Perception

A scatterplot displaying the CNC word scores is presented in Figure 2. Three subjects showed a significant improvement for word recognition in quiet with the interleaved strategy over the standard strategy (R47b, R36b, and H18b). Four subjects did as well with the interleaved strategy as they did with the standard strategy (R40b, M58b, M45b, and M63b). Four subjects did significantly worse with the interleaved strategy when compared to the standard strategy (H48b, M46b, H40b, and H27b).

Figure 2
Word recognition in quiet with the speech from the front for subjects tested in experiment 2. Shown are bilateral scores for each condition.

Speech perception results with CUNY sentences (speech and noise front) for all 11 subjects are shown in Figure 3. Five subjects showed a significant improvement in sentence recognition in noise (noise front) when using the interleaved strategy compared to the standard strategy (R47b, M45b, M58b, H40b, and H48b). Two subjects did equally well with both strategies (H18b and H27b), while four subjects did significantly worse with the interleaved strategy (M46b, R36b, M63b, and R40b).

Figure 3
Sentence recognition in noise presented from the front (0° azimuth) for subjects tested in experiment 2. Shown are bilateral scores for each condition.

Figures 4 and 5 show results with CUNY sentences in noise, with the noise presented toward either the right (Fig. 4) or the left (Fig. 5) cochlear implant. In Figure 4, when the noise was directed toward the right cochlear implant, three subjects showed a significant improvement in sentence recognition when using the interleaved strategy (R47b, M45b, and H48b), three subjects did equally well with both strategies (H40b, H18b, and M63b), and five subjects did significantly worse when using the interleaved strategy (R40b, M58b, H27b, M46b, and R36b). In Figure 5, when the noise was directed toward the left cochlear implant, five subjects showed a significant improvement in sentence recognition with the interleaved strategy (R47b, M63b, M58b, H48b, and H40b), four subjects performed equally well with both strategies (R40b, H18b, M45b, and R36b), and two subjects did significantly worse with the interleaved strategy (M46b and H27b).

Figure 4
Sentence recognition in noise presented from the right (90° azimuth) for subjects tested in experiment 2. Shown are bilateral scores for each condition.
Figure 5
Sentence recognition in noise presented from the left (−90° azimuth) for subjects tested in experiment 2. Shown are bilateral scores for each condition.

A priori, our interest was to examine individual performance over a variety of tests. Examining the performance of each subject across all four speech perception tests, four subjects showed no clear pattern for which strategy provided better performance across tests (M58b, M63b, H40b, and R36b). However, for the remaining seven subjects, we make the following observations:

  • Two subjects performed significantly better with one strategy across all tests. Subject R47b performed significantly better using the interleaved strategy, while subject M46b performed significantly better using the standard speech processing strategy on all tests.
  • Results from two subjects (H18b and M45b) never showed a clear improvement with the standard strategy. They either showed a significant improvement with the interleaved strategy or there was no difference between the two conditions.
  • Two subjects (R40b and H27b) never showed a clear improvement with the interleaved strategy. They either showed a significant improvement with the standard strategy or there was no difference between the two conditions.
  • One subject (H48B) did significantly better with the interleaved strategy in all three tests in noise but did significantly better with the standard strategy in quiet.

Sound Quality

Figure 6 shows the results of the sound quality test. One subject (H18b) showed a statistically significant preference for the sound quality of the interleaved strategy over the standard strategy. One subject (H27B) rated both strategies equally, while seven subjects preferred the sound quality of the standard strategy over the interleaved strategy (R36B, M46B, M58B, H40B, R47B, M63B, and M45B).

Figure 6
Subjective quality ratings for subjects in experiment 2. Shown are bilateral scores for each condition.

Localization

Figure 7 shows the results of the localization test. Five subjects performed significantly worse on localization with the interleaved strategy than the standard strategy (H27B, H40B, M45B, M46B, and M63B); that is, these subjects had greater RMS errors in localization. Four subjects did equally well with both strategies (R40B, R47B, M58B, and R36B), while no subjects performed better with the interleaved strategy for localization.
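The RMS error metric used to score localization above can be sketched as follows. This is a minimal illustration only: the study does not specify details such as how front-back confusions were scored, and the loudspeaker azimuths used here are hypothetical.

```python
import math

def rms_localization_error(responses_deg, targets_deg):
    """Root-mean-square localization error in degrees.

    A simplified sketch; the exact scoring procedure used in the
    study (e.g., treatment of front-back confusions) is assumed.
    """
    errors = [r - t for r, t in zip(responses_deg, targets_deg)]
    return math.sqrt(sum(e * e for e in errors) / len(errors))

# Hypothetical example: responses to targets at eight loudspeaker azimuths
targets = [-54, -36, -18, -6, 6, 18, 36, 54]
responses = [-54, -18, -18, 6, 6, 36, 36, 54]
print(round(rms_localization_error(responses, targets), 1))
```

A smaller RMS value indicates more accurate localization, so the five subjects with greater RMS errors under the interleaved strategy localized less accurately with it.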

Figure 7
Eight-speaker everyday sounds localization testing for subjects in experiment 2. Shown are bilateral scores for each condition.

SUMMARY

The purpose of this investigation was to determine whether adult bilateral cochlear implant recipients could benefit from using a unique speech processing strategy in which the input spectrum is interleaved among electrodes across the two implants. The filters of each device were interleaved in a way that created alternate frequency “holes” between the two cochlear implants, allowing for a full input frequency spectrum only when both devices were used together. Two separate experiments were conducted. In the first experiment, subjects were tested during acute laboratory trials using a control speech processing strategy and a unique interleaved strategy. Results indicated that one of the four subjects performed better with the interleaved strategy, one subject received a binaural advantage with the interleaved strategy that they did not receive with the control strategy, and two subjects showed no decrement in performance when using the interleaved strategy. Although these data were collected on a small number of individuals, they suggest that some adult bilateral cochlear implant recipients can benefit from a unique interleaved strategy.
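The channel-assignment scheme described above, in which complementary frequency "holes" are created in each ear, can be sketched as follows. This is a simplified illustration under stated assumptions: actual clinical maps use per-subject frequency allocation tables and electrode counts not reproduced here, and the function name is hypothetical.

```python
def interleave_channels(num_channels):
    """Assign alternating filter-bank channels to the two implants.

    Even-indexed channels go to one ear and odd-indexed channels to
    the other, so each ear hears only half the spectrum and the full
    spectrum is available only when both devices are worn together.
    A sketch only; real maps depend on per-subject fitting.
    """
    left = [ch for ch in range(num_channels) if ch % 2 == 0]
    right = [ch for ch in range(num_channels) if ch % 2 == 1]
    return left, right

left, right = interleave_channels(8)
print(left)   # channels audible to the left implant
print(right)  # channels audible to the right implant
```

With eight channels, for example, the left implant receives channels 0, 2, 4, and 6 and the right implant receives channels 1, 3, 5, and 7; the "holes" in one ear align with the active channels in the other.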

In the second experiment, subjects also completed acute laboratory testing; however, each subject compared their own individual standard strategy, used on a daily basis, with a unique interleaved strategy. Interestingly, we found that over half of the subjects in this study did equally well or better with the interleaved strategy than with their own standard strategy for speech perception. More specifically:

  • Seven out of 11 subjects did equally well or better with the interleaved strategy for CNC words in quiet.
  • Seven out of 11 subjects did equally well or better with the interleaved strategy for CUNY sentences with the speech and noise from the front.
  • Six out of 11 subjects did equally well or better with the interleaved strategy for CUNY sentences with the speech front and noise right.
  • Seven out of 11 subjects did equally well or better with the interleaved strategy for CUNY sentences with the speech front and noise left.

However, it should be noted that not all subjects did consistently better with the interleaved strategy on every task. Performance was highly variable: some subjects performed better with the interleaved strategy on some tasks but not on others.

For those subjects who did significantly worse with the interleaved strategy, we speculate that the standard strategy presents information redundantly across the two sides. This redundancy might “fill in” gaps in nerve survival, or other deficits on one side, with stimulation on the contralateral side, and vice versa.

As for sound quality ratings, the majority of subjects preferred their own standard strategy over the interleaved strategy. This is not surprising given that these individuals had between 6 and 48 mo of use with the standard strategy and only minimal listening exposure to the interleaved strategy prior to testing. What is interesting is that two of the nine subjects found the sound quality of the interleaved strategy as pleasant as the standard strategy. Field trials giving equal usage time to the interleaved strategy and to more standard programming strategies are needed to evaluate its sound quality fully. It is also interesting to point out that one of the individuals who preferred the sound quality of the interleaved strategy did not show any improvement in speech perception or localization when using it.

This study was a clinical application of the “interlacing” method used by Lawson et al (1999). Because this was a clinical application, no direct measures of channel interaction were obtained. We can only speculate that when improvements in performance were found with the interleaved strategy, they were due to the increased distance between active electrodes achieved by dividing the frequency information across the two ears. However, given the small sample size of this study and the variability of results across tests, much more work is required to determine the efficacy of the interleaved strategy for bilateral cochlear implant recipients.

Acknowledgments

Supported in part by research grant 2 P50 CD 00242 from the National Institute on Deafness and Other Communication Disorders, National Institutes of Health; grant RR00059 from the General Clinical Research Centers Program, Division of Research Resources, NIH; the Lions Clubs International Foundation; and the Iowa Lions Foundation.

We thank Abby Johnson for her assistance with the data collection.

Abbreviations

CIS: continuous interleaved sampling
CNC: consonant-nucleus-consonant
CUNY: City University of New York
HINT: Hearing in Noise Test
pps: pulses per second

Footnotes

In the interest of full disclosure, it should be noted that author A.J.P. is employed by, but has no other financial interest in, Cochlear Americas of Denver, CO, and that author B.S.W. is a consultant to, but has no other financial interest in, MED-EL Medical Electronics GmbH of Innsbruck, Austria.

Copyright of Journal of the American Academy of Audiology is the property of American Academy of Audiology and its content may not be copied or emailed to multiple sites or posted to a listserv without the copyright holder's express written permission. However, users may print, download, or email articles for individual use.

References

  • Boëx C, de Balthasar C, Kós MI, Pelizzone M. Electrical field interactions in different cochlear implant systems. J Acoust Soc Am. 2003;114(4):2049–2057.
  • Boothroyd A, Hanin L, Hnath T. A Sentence Test of Speech Perception: Reliability, Set Equivalence, and Short-Term Learning. New York: Speech and Hearing Sciences Research Center, City University of New York; 1985.
  • Buss E, Pillsbury HC, Buchman CA, et al. Multicenter U.S. bilateral MED-EL cochlear implantation study: speech perception over the first year of use. Ear Hear. 2008;29(1):20–32.
  • Byrne D. Clinical issues and options in binaural hearing aid fitting. Ear Hear. 1981;2(5):187–193.
  • Byrne D, Noble W. Optimizing sound localization with hearing aids. Trends Amplif. 1998;3(2):51–73.
  • Byrne D, Noble W, LePage B. Effects of long-term bilateral and unilateral fitting of different hearing aid types on the ability to locate sounds. J Am Acad Audiol. 1992;3(6):369–382.
  • Dunn CC, Tyler RS, Oakley S, Gantz BJ, Noble W. Comparison of speech recognition and localization performance in bilateral and unilateral cochlear implant users matched on duration of deafness and age at implantation. Ear Hear. 2008;29(3):352–359.
  • Dunn CC, Tyler RS, Witt SA. Benefit of wearing a hearing aid on the unimplanted ear in adult users of a cochlear implant. J Speech Lang Hear Res. 2005;48:668–680.
  • Durlach N. Binaural signal detection: equalization and cancellation theory. In: Tobias JV, editor. Foundations of Modern Auditory Theory. Vol. 2. New York: Academic Press; 1972. pp. 369–462.
  • Firszt JB, Reeder RM, Skinner MW. Restoring hearing symmetry with two cochlear implants or one cochlear implant and a contralateral hearing aid. J Rehabil Res Dev. 2008;45(5):749–767.
  • Franklin B. Split-band amplification: a HI/LO hearing aid fitting. Ear Hear. 1981;2(5):230–233.
  • Gantz BJ, Tyler RS, Rubinstein J, et al. Binaural cochlear implants placed during the same operation. Otol Neurotol. 2002;23:169–180.
  • Grantham DW, Ashmead DH, Ricketts TA, Labadie RF, Haynes DS. Horizontal-plane localization of noise and speech signals by postlingually deafened adults fitted with bilateral cochlear implants. Ear Hear. 2007;28(4):524–541.
  • Hughes ML, Abbas PJ. Electrophysiologic channel interaction, electrode pitch ranking, and behavioral threshold in straight versus perimodiolar cochlear implant electrode arrays. J Acoust Soc Am. 2006;119(3):1538–1547.
  • Hughes ML. A re-evaluation of the relation between physiological channel interaction and electrode pitch ranking in cochlear implants. J Acoust Soc Am. 2008;124(5):2711–2714.
  • Jeffress A. Binaural signal detection: vector theory. In: Tobias JV, editor. Foundations of Modern Auditory Theory. Vol. 2. New York: Academic Press; 1972. pp. 349–468.
  • Koenig W. Subjective effects in binaural hearing. J Acoust Soc Am. 1950;22:61–62.
  • Laszig R, Aschendorff A, Stecker M, et al. Benefits of bilateral electrical stimulation with the nucleus cochlear implant in adults: 6-month postoperative results. Otol Neurotol. 2004;25(6):958–968.
  • Lawson DT, Wilson BS, Zerbi M, Finley CC. Fourth Quarterly Progress Report. NIH Project N01-DC-8-2105. 1999. Speech processors for auditory prostheses; pp. 1–27.
  • Litovsky RY, Parkinson A, Arcaroli J, et al. Bilateral cochlear implants in adults and children. Arch Otolaryngol Head Neck Surg. 2004;130(5):648–655.
  • Litovsky R, Parkinson A, Arcaroli J, Sammeth C. Simultaneous bilateral cochlear implantation in adults: a multicenter clinical study. Ear Hear. 2006;27(6):714–731.
  • Loizou PC, Mani A, Dorman MF. Dichotic speech recognition in noise using reduced spectral cues. J Acoust Soc Am. 2003;114(1):475–483.
  • Mens LH, Berenstein CK. Speech perception with mono- and quadrupolar electrode configurations: a crossover study. Otol Neurotol. 2005;26(5):957–964.
  • Middlebrooks JC, Green DM. Sound localization by human listeners. Annu Rev Psychol. 1991;42:135–159.
  • Müller J, Schön F, Helms J. Speech understanding in quiet and noise in bilateral users of the MED-EL COMBI 40/40+ cochlear implant system. Ear Hear. 2002;23(3):198–206.
  • Nilsson M, Soli SD, Sullivan JA. Development of the Hearing in Noise Test for the measurement of speech reception thresholds in quiet and in noise. J Acoust Soc Am. 1994;95(2):1085–1099.
  • Nopp P, Schleich P, D’Haese P. Sound localization in bilateral users of MED-EL COMBI 40/40+ cochlear implants. Ear Hear. 2004;25(3):205–214.
  • Peissig J, Kollmeier B. Directivity of binaural noise reduction in spatial multiple noise-source arrangements for normal and impaired listeners. J Acoust Soc Am. 1997;101(3):1660–1670.
  • Ricketts TA, Grantham DW, Ashmead DH, Haynes DS, Labadie RF. Speech recognition for unilateral and bilateral cochlear implant modes in the presence of uncorrelated noise sources. Ear Hear. 2006;27(6):763–773.
  • Rosen SM, Fourcin AJ. Frequency selectivity and the perception of speech. In: Moore BCJ, editor. Frequency Selectivity in Hearing. London: Academic Press; 1986. pp. 373–487.
  • Senn P, Kompis M, Vischer M, Haeusler R. Minimum audible angle, just noticeable interaural differences and speech intelligibility with bilateral cochlear implants using clinical speech processors. Audiol Neurotol. 2005;10(6):342–352.
  • Shannon RV. Multichannel electrical stimulation of the auditory nerve in man. II. Channel interaction. Hear Res. 1983;12(1):1–16.
  • Silverman SR, Hirsh IJ. Problems related to the use of speech in clinical audiometry. Ann Otol Rhinol Laryngol. 1955;64(4):1234–1244.
  • Stickney GS, Loizou PC, Mishra LN, Assmann PF, Shannon RV, Opie JM. Effects of electrode design and configuration on channel interactions. Hear Res. 2006;211(1–2):33–45.
  • Tillman TW, Carhart R. Technical Report No. SAM-TR-66–55. Brooks Air Force Base, TX: USAF School of Aerospace Medicine; 1966. An Expanded Test for Speech Discrimination Utilizing CNC Monosyllabic Words. Northwestern University Auditory Test No. 6.
  • Tyler RS. Frequency resolution in hearing-impaired listeners. In: Moore BCJ, editor. Frequency Selectivity in Hearing. London: Academic Press; 1986. pp. 309–371.
  • Tyler RS, Dunn CC, Witt SA, Noble WG. Speech perception and localization with adults with bilateral sequential cochlear implants. Ear Hear. 2007;28(Suppl):86S–90S.
  • Tyler R, Gantz B, Rubinstein J, et al. Three-month results with bilateral cochlear implants. Ear Hear. 2002;23:80S–89S.
  • Tyler RS, Parkinson AJ, Woodworth GG, Lowder MW, Gantz BJ. Performance over time of adult patients using the Ineraid or nucleus cochlear implant. J Acoust Soc Am. 1997;102(1):508–522.
  • Tyler RS, Preece JP, Lansing CR, Otto SR, Gantz BJ. Previous experience as a confounding factor in comparing cochlear-implant processing schemes. J Speech Hear Res. 1986;29(2):282–287.
  • Tyler RS, Preece JP, Lowder MW. The Iowa Cochlear Implant Test Battery. Iowa City, IA: University of Iowa; 1983.
  • Tyler R, Preece J, Wilson B, Rubinstein J, Wolaver A, Gantz B. Distance, localization and speech perception pilot studies with bilateral cochlear implants. Cochlear Implants—An Update; Proceedings of the Asian Conference on Cochlear Implants; The Hague: Krugler Publications; 2001.
  • van Hoesel RJM, Clark GM. Psychophysical studies with two binaural cochlear implant subjects. J Acoust Soc Am. 1997;102(1):495–507.
  • van Hoesel RJM, Clark GM. Speech results with a bilateral multi-channel cochlear implant subject for spatially separated signal and noise. Aust J Audiol. 1999;21:23–28.
  • van Hoesel RJM, Tyler RS. Speech perception, localization, and lateralization with bilateral cochlear implants. J Acoust Soc Am. 2003;113(3):1617–1630.
  • Verschuur C. Modeling the effect of channel number and interaction on consonant recognition in a cochlear implant peak-picking strategy. J Acoust Soc Am. 2009;125(3):1723–1736.
  • Verschuur CA, Lutman ME, Ramsden R, Greenham P, O’Driscoll M. Auditory localization abilities in bilateral cochlear implant recipients. Otol Neurotol. 2005;26(5):965–971.
  • White MW, Merzenich MM, Gardi JN. Multichannel cochlear implants. Channel interactions and processor design. Arch Otolaryngol. 1984;110(8):493–501.
  • Wightman FL, Kistler DJ. Monaural sound localization revisited. J Acoust Soc Am. 1997;101(2):1050–1063.
  • Wilson BS. Strategies for representing speech information with cochlear implants. In: Niparko JK, Kirk KI, Mellon NK, Robbins AM, Tucci DL, Wilson BS, editors. Cochlear Implants: Principles and Practices. Philadelphia: Lippincott; 2000. pp. 129–172.
  • Wilson BS. Signal processing. In: Tyler RS, editor. Cochlear Implants: Audiological Foundations. San Diego: Singular Publishing Group; 1993. pp. 35–86.
  • Wilson BS, Finley CC, Lawson DT, Wolford RD, Eddington DK, Rabinowitz WM. Better speech recognition with cochlear implants. Nature. 1991;352(6332):236–238.