
Institute of Medicine (US) Committee on Technological Innovation in Medicine; Rosenberg N, Gelijns AC, Dawkins H, editors. Sources of Medical Technology: Universities and Industry. Washington (DC): National Academies Press (US); 1995.


6. Innovation in Cardiac Imaging


Technological innovation has been the lifeblood of many sectors of the American economy and, as a result, managers, policymakers, and academic researchers have long sought to understand what factors encourage technological innovation and how this process can be made more productive. While innovation has been studied intensively in a wide range of contexts,1 there remains a considerable need for a better understanding of medical innovation as it occurs in academic, industrial, and government research and development settings. Innovation in medical technology takes place within a unique environment that raises many complex issues regarding the need for collaboration across disciplinary lines and the moral and ethical implications of working with human subjects. Improving our understanding of this process may help us to identify points of leverage and accelerate the pace of technological innovation.

This chapter presents some preliminary findings and hypotheses drawn from field interviews with key participants who are involved in the innovation process in two important and widely used technologies that provide diagnostic information about the heart: nuclear cardiology and echocardiography. These technologies pose some especially interesting problems for innovators since, in both instances, their development and eventual successful application required collaboration between individuals trained in medicine or the life sciences and those trained in engineering or the physical sciences.

Our approach has been to identify a number of distinct innovations within the overall development of each of the main technologies identified above. Through interviews with engineers, scientists, and clinicians in industry and academia who were involved in or highly knowledgeable about each development, we explored the sequences of events leading up to the innovation, the settings within which the events took place, and the backgrounds and interactions of the participants. (Several case write-ups of component innovations appear as appendixes.) Then, drawing upon the findings yielded by our research, we constructed a model to identify elements of the innovation process that seemed to be common to each of the developments we examined.

Analysis of this tentative model of the innovation process helped us to identify some points of leverage for increasing the rate and sharpening the focus of innovation. We discuss how these levers could productively stimulate changes in managerial and public policy.

Our focus upon two limited areas of technology reflects a conscious decision to opt for depth rather than breadth of analysis. With only two data points it is impossible to subject our observations and conclusions to rigorous empirical verification; thus, they should be taken as hypotheses and directions for further research rather than as firmly proven facts. Our hope is that an in-depth exploration of these two areas of innovation will provide greater insight into some of the qualitative and serendipitous aspects of the innovation process and inject some new ideas into the ongoing debate over what can and should be done to foster and support this process.

Overview of Cardiac Imaging

The cardiovascular field provides an excellent opportunity to study the process of innovation. In the past 20 years especially, a number of technological advances in diagnosis and therapy have significantly changed clinical practice. Cardiology now attracts top medical school graduates, and, as its practice has become increasingly interventional, many of these new capabilities have diffused from tertiary medical centers to the community. These developments have contributed to observed reductions in death rates from heart disease and to improvements in the quality of life.

Numerous techniques are available for producing images of the heart that provide valuable information for guiding diagnosis, patient assessment, and therapeutic intervention. From this set of techniques we have selected two areas of technology that in recent decades have seen especially significant advances: nuclear cardiology and echocardiography.2 Nuclear medicine techniques are minimally invasive and have been used, for the most part, in patients with known or suspected coronary artery disease—the progressive blockage of the coronary arteries that can eventually lead to ischemia, angina, and heart attack. The ultrasound technique, or echocardiography, is noninvasive and has figured most prominently in the diagnosis of a variety of heart conditions other than coronary disease, such as valve disease, septal defects, and wall motion abnormalities.

As shown in Table 6-1, our study focused on successive innovations in the development of these imaging modalities. We found the two streams of innovation to be largely nonoverlapping clinically, although this may change in the future as echocardiography evolves toward a more prominent place in the evaluation of coronary artery disease patients.

TABLE 6-1. Selected Innovations in Cardiac Imaging.



Nuclear Cardiology

Although radioisotope tracer techniques have been used sporadically in cardiology since the 1920s, their use for the evaluation of myocardial blood flow and pumping ability has become widespread only within the past 15 years. With this technique, a radioisotope, given intravenously, is rapidly and selectively taken up by healthy cardiac tissue. The tracer agent emits high-energy photons whose spatial location can be detected by a scintillation camera. The pattern of high- and low-photon emission densities produces an image of the heart. The clinical utility of these images arises from the simple fact that the radioisotope is taken up only in regions of normal heart tissue with adequate blood flow. Thus, a region of scar tissue left over from a prior infarction will not take up the radioisotope and will show up on the image as a dark spot. Blockages of the coronary arteries resulting in regions of insufficient blood flow will also create dark spots.

Such an image of the heart can provide information that is unavailable through other means. One of the most important clinical applications of nuclear cardiology involves the detection and evaluation of reversible defects. In patients with coronary artery disease, the adequacy of the blood flow provided by the occluded arteries depends upon the patient's level of exercise. At rest the blood flow may be adequate, so images taken at rest may appear normal. With exercise, however, the oxygen demands of the myocardium may grow beyond what the occluded arteries can deliver, and images taken after exercise may show defects. Such defects are characterized as reversible lesions.3 They can be treated via therapeutic interventions, such as coronary bypass surgery or angioplasty, aimed at restoring normal blood flow.4 In contrast, defects that appear both at rest and after exercise represent scar tissue, a permanent defect. Such regions cannot be treated with the interventions noted above.
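The rest/stress comparison described above can be sketched in a few lines of code. This is an illustrative toy, not taken from the chapter: the uptake grids, the darkness threshold, and the labels are all hypothetical, and real quantitative interpretation is far more sophisticated.

```python
# Hypothetical rest and stress uptake maps: higher numbers mean more
# radioisotope taken up (brighter regions on the image).
REST = [
    [9, 9, 9],
    [9, 2, 9],   # one region is dark even at rest
    [9, 9, 9],
]
STRESS = [
    [9, 9, 3],   # an additional region goes dark only under stress
    [9, 2, 9],
    [9, 9, 9],
]
DARK = 5  # assumed threshold: counts below this read as a defect

def classify(rest, stress):
    """Label each region: normal, reversible (stress-only), or fixed."""
    labels = []
    for r_row, s_row in zip(rest, stress):
        row = []
        for r, s in zip(r_row, s_row):
            if r < DARK:
                row.append("fixed")       # dark at rest and stress: scar
            elif s < DARK:
                row.append("reversible")  # dark only under stress: ischemia
            else:
                row.append("normal")
        labels.append(row)
    return labels

labels = classify(REST, STRESS)
print(labels[0][2])  # reversible
print(labels[1][1])  # fixed
```

The logic mirrors the clinical reasoning: a region dark in both images is read as scar (a fixed defect), while a region dark only after exercise is read as a reversible, and therefore treatable, defect.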

Nuclear cardiology techniques have found other applications in the management of coronary artery disease. Measurements taken at rest or during stress have been shown to provide important information regarding a cardiac patient's long-term prognosis. Nuclear cardiology techniques have been used extensively in patients recovering from heart attacks to determine how much myocardium and function may have been lost (Kotler and Diamond, 1990). Images taken of the blood pool during the brief period before the radioisotope is taken up by the heart can provide valuable quantitative information about the heart's pumping action.

There are at least three important categories of recent technological innovations in cardiac nuclear medicine imaging: advances in cameras and detectors; the development of better isotopic labeling agents; and the wider use of computer techniques for image reconstruction and interpretation. Our research focused on five innovations within these three categories: the development of thallium-201 imaging (see Appendix A); the Tc-99m sestamibi tracer (see Appendix B); the single photon emission computed tomographic (SPECT) camera; the triple-headed camera; and computer-based quantitative image interpretation (see Appendix C for a review of developments in SPECT camera technology).


Echocardiography

Echocardiography employs high-frequency sound waves to generate visual images of cardiac anatomy and function. The basic principles involved are not unlike those of sonar. The technological basis of echocardiography fundamentally shapes the kinds of clinical information it provides. Because the nature of the signal depends strongly on the attenuation and reflection properties of the structures through which it passes, echocardiography has always had a strong anatomical focus. The high rate of image acquisition it provides has also made it a valuable tool for examining and evaluating the movement of cardiac structures.

Over the past 15 years this technology has developed substantially, and there is little evidence that the procedures now in general use exhaust its potential. In the coming years we are likely to see continuing improvements in its accuracy, the breadth of clinical information it generates, and its ease of use. And over the long term, echocardiography may pose a substantial competitive threat to diagnostic nuclear cardiology.

The recent history of echocardiography illustrates the flexibility and power of this technology. The M-mode machines used in early clinical diagnoses could look only along a single axis, providing what was sometimes called an "ice-pick" view. An ultrasonic signal would be transmitted along this axis and reflected off any anatomical structures it encountered. As these structures moved, differences in the length of the travel path created corresponding differences in the time between emission and detection of the reflection. Originally, these machines were limited to the diagnosis of valvular disorders. A skilled clinician who could orient the machine toward a heart valve could observe how the valve moved with the beating of the heart. The later addition of scanning and more advanced signal processing capabilities enabled clinicians to generate two-dimensional images of the heart. With this development, users of echocardiography were able to examine large-scale cardiac anatomy.
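The time-of-flight principle behind the M-mode "ice-pick" view reduces to a one-line calculation. The sketch below assumes the conventional soft-tissue speed of sound of about 1,540 m/s, a figure not given in the chapter:

```python
SPEED_OF_SOUND = 1540.0  # m/s, a standard assumption for soft tissue

def depth_from_echo(round_trip_seconds):
    """Round-trip echo time -> one-way distance to the reflector, in metres."""
    return SPEED_OF_SOUND * round_trip_seconds / 2.0

# An echo returning after ~130 microseconds implies a reflector ~10 cm deep.
d = depth_from_echo(130e-6)
print(round(d * 100, 1))  # prints 10.0 (cm)
```

As a reflecting structure such as a valve leaflet moves, the round-trip time changes from moment to moment, and tracing that change over the cardiac cycle is precisely what an M-mode display records.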

The technology quickly evolved from still pictures to real-time moving images. Moving pictures provided valuable information on cardiac function. Changes in volume over the course of the beat cycle provided a basis for assessing the heart's pumping ability. By revealing wall motion abnormalities, real-time imaging also provided indirect information on the presence of scar tissue and/or ischemia.

The development of Pulse Doppler allowed echocardiography to use changes in frequency caused by the motion of the blood in the heart (i.e., the Doppler effect) to generate quantitative information about velocities. The subsequent development of Color Doppler made it possible to display this information visually through color-coding of what had previously been a gray-scale image. These capabilities made it possible for clinicians to use echocardiography to monitor the movement of blood through the heart. Using these techniques, they could detect the backwash of blood through defective heart valves or see jets of blood generated by perforations in the wall separating the right and left sides of the heart.
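The quantitative step that Pulse Doppler adds can be illustrated with the standard Doppler relation, delta_f = 2 * f0 * v * cos(theta) / c, inverted to recover blood velocity. The carrier frequency, frequency shift, and beam angle below are hypothetical values chosen for illustration:

```python
import math

C = 1540.0  # m/s, assumed speed of sound in tissue

def velocity_from_shift(delta_f_hz, f0_hz, angle_rad=0.0):
    """Invert the Doppler equation to recover blood velocity in m/s."""
    return C * delta_f_hz / (2.0 * f0_hz * math.cos(angle_rad))

# A 3.25 kHz shift on a 2.5 MHz carrier, beam aligned with the flow:
v = velocity_from_shift(3250.0, 2.5e6)
print(round(v, 2))  # prints 1.0 (m/s)
```

Color Doppler then maps the sign and magnitude of such velocities onto a color scale overlaid on the gray-scale image.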

Other developments have increased the accuracy and usefulness of this technology. For instance, the development of reliable probes that could be inserted into the esophagus improved the resolution of these images by permitting clinicians to view the heart without the distortions caused by the passage of signals through fat, bone, and lung tissue (see Appendix D). The more recent development of acoustic quantification (AQ), which uses techniques to detect and monitor the boundary between the inner edge of the myocardium and the blood pool that it contains, has provided automated real-time measurements of ventricular function—information that has been shown to have major prognostic significance (see Appendix E).

A number of further efforts to extend the capabilities and increase the accuracy of echocardiography are currently underway. One line of investigation aims at the development of echocardiographic contrast agents. Current efforts aim at the development of agents that generate microbubbles able to pass through the microvasculature and that would produce an ultrasonic contrast effect. If successful, these agents would make it possible to determine the amount of blood supplied to different regions of the heart—a development that would place echocardiography in direct competition with nuclear cardiology. A second line of investigation involves the development of miniature probes that can be inserted into the heart through the coronary vasculature. At least one clinician has predicted that intraluminal ultrasound imaging using these probes will eventually displace angiography as the "gold standard" for detecting and localizing coronary artery disease. A third effort is attempting to use subtle aspects of the ultrasound signal to characterize the tissues through which the signal has passed. Investigators have attempted to use this technique to identify tumors and to distinguish between scar tissue and ischemic myocardium. Successful achievement of these goals would also firmly position echocardiography as a direct competitor of nuclear cardiology.

The Innovation Process

To draw lessons from our collection of case studies, we first describe the innovation lifecycle, which serves as a framework for organizing and interpreting the events that occurred during the development of these two technologies.

The innovation lifecycle consists of five distinct phases, through which most of the innovations that we studied passed. The length of a stage for a particular innovation can vary depending on the technical or clinical expertise and involvement required, scientific advances, and competitive actions. Table 6-2 is an outline of the innovation lifecycle. Evidence of its application, based upon examples drawn from our case studies, follows.

TABLE 6-2. Innovation Lifecycle.



Examples from Echocardiography and Nuclear Cardiology


An innovation in imaging typically begins with an isolated investigator demonstrating the technological feasibility of a new technique. Commonly, the investigator is operating in an academic setting, and in some instances the research may be funded by an equipment manufacturer, often through the donation of equipment. Proof of feasibility typically involves the use of a jury-rigged setup that is awkward to use and that produces results too crude and/or too unreliable to be useful in ordinary clinical practice. Nonetheless, these early demonstrations prove or disprove the technological feasibility of the technique. Successful demonstrations suggest that, if appropriately developed, the technique could be clinically useful.

Consider Color Doppler echocardiography, first demonstrated at the University of Washington. Investigators there, searching for defects in the wall separating the right and left sides of the heart, fed the signals from early ultrasound equipment into a color display that showed, through color coding, the direction of blood movement. Documents we reviewed state that nuclear medicine researcher David Kuhl conducted successful early work at the University of Pennsylvania on computer-assisted image reconstruction of radioisotope scans before the wide dissemination of X-ray computed tomography.5 In the case of nuclear cardiology's SPECT camera, the demonstration took place at the University of Michigan, where John Keyes attached an early-generation planar gamma camera to a gantry and created the "Humongatron," the first SPECT camera.

Concept development, however, need not always rely on the development of instrumentation. Initial academic work on signal attenuation at soft tissue boundaries was conducted by a bioengineer who was later hired by Hewlett-Packard (HP). His initial investigations laid the foundation for the development of prototypes to test the theories of edge detection and acoustic quantification.6 The concept behind the development of the technetium-based sestamibi radiopharmaceutical was born of the need for improved performance. Academic investigations determined that the existing thallium imaging agent needed improvement along two specific dimensions: brightness, having to do with the number and energy levels of the emitted photons; and the distribution properties of the agent in heart tissue. Thus, the development of a prototype involved a meticulous search for an agent with the required characteristics.

Long intervals sometimes separate the proof of a concept's viability from its commercialization. Academics offered initial proof of the technological feasibility of transesophageal echocardiography (TEE) at an early date, and Diasonics, a company active in the ultrasound equipment market, developed commercial TEE probes in the early 1980s. The company ran into financial difficulties, however, and only a small number of its TEE probes were ever produced. Later, at Hewlett-Packard, an electrical engineer working with "Herman," HP's lab skeleton, was concerned about the signal attenuation problems posed by fatty tissue around the ribs, as well as about the narrow window on the heart afforded by the space between the ribs. He noticed that the esophagus passes directly behind the heart and surmised, correctly, that it would be an ideal window through which to image the heart. When he presented the idea to clinicians, they were excited and recalled the probes Diasonics had developed a few years before; a few clinicians actually still had an old Diasonics probe. One could argue that the technological feasibility of the concept was proven much earlier, but the commercial success of the innovation was triggered by this engineer's rediscovery of the idea while at Hewlett-Packard.

Although, in retrospect, our study respondents seemed to be in substantial agreement about when and where these breakthroughs occurred, they frequently noted that the eventual significance of an innovation was not usually apparent at the time of concept development. A substantial technical gulf often separated the jury-rigged setup, which proved technological feasibility, from the prototype, whose performance approximated that of the final commercial system. The genuine clinical capabilities of the commercialized system are visible only in embryonic form at the proof-of-concept phase, and considerable clinical vision and experimentation are required to unveil the innovation's full potential.

Indeed, the companies funding research sometimes conceived of an application far removed from what eventually proved clinically useful and commercially viable. Consider the case of acoustic quantification. When Hewlett-Packard provided research funding to Washington University in St. Louis, its goal was tissue characterization—identification of ischemic heart tissue by its acoustic signature. Few anticipated that this research would play a significant role in the effort to develop edge detection capabilities—that is, identification of the heart/blood pool boundary—that eventually enabled real-time measurements of ejection fraction7 and ventricular function.
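Ejection fraction itself is a simple ratio, which is why reliable automated edge detection is the hard part: once the ventricular volumes at end-diastole and end-systole are known, the computation is trivial. The volumes in this sketch are hypothetical:

```python
def ejection_fraction(end_diastolic_ml, end_systolic_ml):
    """Fraction of the ventricle's diastolic volume expelled per beat."""
    return (end_diastolic_ml - end_systolic_ml) / end_diastolic_ml

# e.g. a hypothetical 120 ml at end-diastole and 50 ml at end-systole:
ef = ejection_fraction(120.0, 50.0)
print(round(ef, 3))  # prints 0.583
```
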


The development of a working prototype of the imaging device or agent marks the second step in our innovation process. The capabilities of this prototype bear some resemblance to those of the unit that is eventually made available commercially. At this stage, different groups—potential users, manufacturing experts, system designers, and so forth—have an opportunity to assess both technical and economic feasibility. Ideally, initial problems are discovered and corrected in the sketches and models, rather than being incorporated into the engineering of a full-scale prototype.

Where the prototype is developed is also an important element of the innovation process. In almost all the cases we investigated, this step was carried out in an industrial setting. There was only one major exception—the development of the technetium-99m sestamibi radiopharmaceutical, which, because it did not involve the development and/or operation of a complex, novel piece of equipment, could be developed by a collaborating pair of university-based research chemists at their own facilities.8 In contrast, the development of working prototypes of ultrasound systems capable of Color Doppler flow imaging or acoustic quantification was a sizable engineering undertaking whose success required the cooperation of teams of skilled technicians. Efforts of this type are not easily organized or carried out in an academic or clinical research setting.

The prototype for HP's introduction of Color Doppler is an important example (see also Appendix F). HP initially believed that Color Doppler would require a complete redesign of the system architecture. The existing equipment simply was not designed to accommodate so many upgrades (Color Doppler would be the eleventh in the series). Such a system redesign would require enormous time and effort; yet Color Doppler was coming on the heels of a long-overdue Pulse Doppler product, and there were concerns about how long a complete redesign would take. Eventually an engineer at HP devised a prototype that allowed HP to introduce Color Doppler as a simple upgrade rather than as part of a redesigned system, thus preserving the installed-base advantage that HP was building and permitting HP to bring the product to market sooner. Even this simpler solution, however, required a sophisticated understanding of the architecture of the existing systems and input from some of HP's best engineers.

Development of a fully functional prototype is a precondition for clinical research into the properties and capabilities of the new technology. Arguably the most successful and clinically significant imaging innovations allow physicians to see things they were not able to see before. For this very reason, however, the clinical significance of such innovations may not be initially clear. Acoustic quantification is a good example. Physicians are still somewhat unsure of the clinical implications of real-time measurements of ventricular function since they have never seen such measurements before. Access to prototypes allows leading physician collaborators to explore the new technology's capabilities. Publications based on these early investigations play a significant role in defining the initial market for the new technology, facilitating its diffusion into clinical practice.


From prototype to commercialization, the development effort focuses on design and manufacturing issues as the sponsoring firm strives to lower costs, to improve yields, and to enhance ease of use. The goal of this effort is commercial success; relatively less attention is paid to clinical concerns beyond those identified during trials with the prototype. At the prototype stage, clinical investigators may accept a degree of unwieldiness and unreliability that would be unthinkable in a product designed to appeal to a broad market. In the process of moving to commercial-scale production, however, these aspects of the devices must be refined.

Ramping up production to commercial scale and developing a more user-friendly interface are not simple tasks. Consider the somewhat ill-fated "Revision L," the twelfth in HP's series of echocardiography enhancements. This project involved the complete redesign of the system architecture, a task that had been postponed by the successful upgrade for Color Doppler. Much of the impetus for this project was the competitive challenge posed by Acuson, a competitor of HP in the ultrasound market. Acuson had developed a 128-channel phased array system, which was a great success in ultrasound radiology, and had plans to introduce this system to the cardiology market. There was no consensus among researchers at HP that 128 channels would necessarily lead to better images than the current 64-channel system. There was concern, however, that Acuson's success with a "more powerful" system in radiology would legitimize 128-channel systems for echocardiography as well. Further, the 128-channel system was technologically challenging and offered a more appropriate platform for future expansion.

Hewlett-Packard engineers pushed ahead, but encountered two serious problems. The first was due to a change in manufacturing technology. Plans called for the use of surface mount manufacturing techniques instead of straight pin circuit board design. This new technique promised superior cost and space economies, which were achieved only after a tortuous learning period marked by extreme difficulty in debugging the new circuit boards. The second problem involved the scheduling of developmental milestones. This entire effort posed tremendous engineering challenges, not the least of which was development of an appropriate 128-channel architecture. It simply was not possible to develop all of the requisite new technologies and adhere to the project schedule. The result was cost and time overruns and lab morale problems at HP. When the system was finally introduced, several engineers and clinicians agreed that the image quality the new units yielded was no better than the 64-channel units being replaced.

Another example of the difficulties of commercializing a new technology is the development, in the early 1980s, of the multi-headed SPECT camera, which proceeded from a University of Texas laboratory to the Technicare subsidiary of Johnson & Johnson, whose engineers created a working prototype. In 1986, however, Johnson & Johnson closed down the operations of Technicare and licensed its work on the multi-headed SPECT to two start-up firms to complete the commercialization of the technology. SPECT cameras were eventually brought to market, but only after a number of different organizations had started and then abandoned the effort.


Diffusion of innovation is typically characterized by a process in which different categories of users become aware of and come to adopt the product. Early adopters are usually opinion leaders who learn about an innovation from the published scientific literature or from the marketing activities of manufacturers. Later adopters learn from earlier users with whom they are professionally and socially integrated. Some previous research on diffusion of innovation in medical equipment describes how certain kinds of product design decisions can influence how much control manufacturers exercise over innovations that flow from use by early and even later adopters (von Hippel and Finkelstein, 1979). Depending on the perceived need, manufacturers can offer a relatively open system architecture to encourage "tinkering" or a relatively closed architecture to discourage it.

As ultrasound developed for cardiac applications through the late 1970s into the 1980s, the number of expert clinical cardiologists from major academic medical centers who served as consultants to most manufacturers of the technology was relatively small. Some of those doctors provided input to manufacturers developing or offering competing products, although all the consultants insist that the confidentiality of the process was preserved. To encourage the development of new applications, manufacturers developing these early products offered relatively open architectures, which served the manufacturers' initial needs. In later-generation echocardiography products, however, manufacturers began to lose control over the publication process and the evolution of demands and expectations for the equipment.


In the final stage of the process, feedback from users of the technology leads the manufacturer to refine the product. On the basis of clinician and manufacturer experience, evolutionary changes are made both to the product and to the manufacturing process. Process improvements encompass reengineering efforts designed to lower manufacturing cost and improve yields. Product improvements are divided into two groups: those that enhance the clinician's efficiency, and those that enhance diagnostic effectiveness. Efficiency improvements to enhance ease of use are identified through observation and interviews focused on the clinician's activities. Performance parameters for efficiency must be well articulated by the clinician in order to be actionable by the engineer. For instance, a number of revisions to HP's initial imaging product were designed to make it easier to use. The system was redesigned to fit on one cart rather than two, mobility was enhanced through the addition of heavy-duty wheels and suspension systems, and the user interface was continuously updated to reduce the technical knowledge required of the clinician. In the case of transesophageal echocardiography, a number of HP's initial probes were broken because cardiologists subjected them to forces exceeding those anticipated by HP. The probe was initially designed as a transducer attached to the end of a gastroscope. Gastroenterologists are much more concerned about gentle treatment of a diseased esophagus; cardiologists, on the other hand, just want a good picture and have fewer inhibitions about applying force to the probe to move it into a more favorable position.

The process of enhancing diagnostic effectiveness is far more complex than that of enhancing ease of use. In echocardiography a major part of this process was a quest for better image quality. In this field, however, image quality is a highly individualistic perception, and it proved difficult for clinicians to describe what was good or bad about an image in a way that an engineer seeking to refine the system would find meaningful. In contrast, in nuclear cardiology the shortcomings of thallium-201-based images were well understood, and researchers were able to develop new imaging agents and better cameras with properties designed to overcome those shortcomings. As a result, the quality of echocardiographic images improved only gradually and was not marked by the dramatic changes found in nuclear images, especially with the introduction of the technetium-based agents.

In both fields there was also a certain degree of interplay between the development of new technological capabilities and the corresponding development of enhanced clinical insight. By their very nature, new imaging techniques tend to reveal phenomena that were not previously visible. Clinicians have to work with a new technology for a time before its implications become clear. As this occurs, it is likely to lead to further demands for system refinement.

Figure 6-1 summarizes the information collected in the case studies regarding the institutional settings within which key steps in the innovation process were carried out. A number of patterns are apparent and worthy of comment.

FIGURE 6-1. Institutional drivers of the innovation process.


NOTE: U, university; I, industry; C, clinician; G, government laboratory; ?, our research to date has not allowed the elucidation of these relationships.

The most striking regularities apparent in Figure 6-1 concern the commercialization and diffusion phases of the innovation process. The former almost always took place in an industrial setting; the latter, in a clinical setting. These regularities arise almost by definition: commercialization presupposes the existence of a commercial entity, and hence takes place within industry. Similarly, diffusion is defined as the adoption of innovations by end users. Since the products that incorporate these innovations are intended for clinical use, this phase of the innovation process of necessity must take place in a clinical setting. While the resulting regularities are striking, they are, however, neither especially interesting nor unexpected.

The concept phase of the innovation process raises much more intriguing questions. At first glance, this part of the process appears highly disorderly. From innovation to innovation, different types of institutions are involved in a number of combinations, which presents a somewhat chaotic, if not serendipitous, picture. A comparison of this column to the other columns in the figure, however, does reveal one noteworthy pattern: universities are clearly much more heavily involved in this phase than in any other phase of the innovation process. The critical role of universities as a source of ideas arose repeatedly in the case studies.

A similarly noteworthy pattern is evident in the contrast between the ultrasound and nuclear cardiology prototype development phases. In the case of ultrasound, industry's role was critical and unambiguous. In nuclear cardiology, the contributions of the participants were much more varied. This difference seems to grow out of the nature of the technologies upon which the two fields are based: the development of working ultrasound systems was a complex undertaking requiring the cooperation of teams of engineers trained in a variety of fields ranging from materials science to signal processing. Assembling such a team is no mean feat in a university setting. In contrast, the range of technologies and system complexities that had to be mastered in nuclear cardiology was more limited and not as far removed from the capabilities of university-based researchers. Even the gamma cameras used in nuclear cardiology were often constructed from commercially available computers, sensors, and other components. Thus, the nature of the technology itself often shapes the institutional environment within which innovation takes place.

In the refinement phase, in virtually all cases, contributions originated from a number of different institutional settings. Most often we see a collaboration between clinicians and industry that reflects the normal give and take between a vendor and a customer. University-based researchers also frequently participate in the process.

Barriers To Innovation

Our interviews have enabled us to identify two potential barriers to successful innovation in medical imaging technology. The first stems from the gulf separating the engineers charged with developing imaging technology from the clinicians charged with interpreting and using the resulting images. This gulf is due, primarily, to the differences in training, knowledge, experience, and orientation that separate the two professions. A second barrier concerns changes in the environment that threaten to ossify the innovation process. For instance, new constraints governing academic research funding and clinical trials make it much more difficult to obtain a quick assessment of the potential afforded by a new technology. And the paper trail required by an expanding bureaucracy, both in the public sector and within industry, threatens to choke the innovation process.

Coste (1989), in his account of the diagnostic ultrasound market, credits a symbiotic relationship between the engineers and the clinicians as a driving force in the development of the market. Our interviews, however, offered evidence of a vast gulf separating the two parties involved in the process. While interdisciplinary research is necessary for successful innovation programs in many industries, we found the problems coordinating and combining the required clinical expertise in cardiology with required technical expertise in electrical engineering and/or nuclear physics to be exceptionally challenging in many instances.

The differences between the players grow out of the background, culture, and language of their respective professional communities. Electrical engineers typically enter the discipline during their undergraduate education and, typically, much of their coursework is concentrated in the physical sciences. Clinical practitioners, in contrast, are more likely to have studied biological sciences as undergraduates. The practice of engineering often involves the application of well-tested formulas and rules-of-thumb. This mode of thinking is alien to physicians, who decry it as "cookbook medicine." They are taught to approach each patient as an individual, to immerse themselves in the specifics of a patient's condition, and to exercise considerable judgment in developing an individual treatment plan. Furthermore, both disciplines embrace centuries-old traditions and have developed their own languages and cultures. Until relatively recently, the two traditions have had little to do with one another.

In our case studies the industrial firms involved were heavily populated by individuals trained in engineering and/or the physical sciences. Individuals with formal clinical training were rare.9 The clinicians, on the other hand, were generally practicing cardiologists; their knowledge of the technological possibilities of these imaging modalities was necessarily limited. Because of this disciplinary orientation, decisionmakers in the innovating firms often had only a limited appreciation of the clinical utility of the devices they were developing. Hewlett-Packard, we were told, employed electrical engineers and software designers and excelled at instrumentation; new research and development (R&D) employees were required to be capable of engineering complex electronic instruments. New England Nuclear was described as a company that was "good at making things radioactive." The company's initial commanding position in the thallium market was based on its ability to coax sustained high yields from its cyclotrons. Both companies had unique competence in engineering and the physical sciences; it just happened that the competence had profitable applications to the field of cardiac diagnostic imaging.

The clinicians, for their part, often failed to appreciate the power of the technology that would enable them to see and diagnose conditions heretofore unobserved. Until they had an opportunity to work with new images, clinicians had little basis for evaluating the utility of technological advances. Serendipity, rather than market-led demands, has often played a key role in commercial successes.

A striking dissonance between the degree of technical difficulty involved in bringing a product to market and the clinical utility that product provided was also evident in our study. Both the initial development of the phased-array ultrasound technology and HP's introduction of a 128-channel system entailed significant engineering problems and seemingly offered the clinician little in return. Neither system generated better images than the mechanical or 64-channel systems that dominated clinical practice at the time they were introduced. The development of real-time echocardiography, however, was a tremendous leap from the clinician's perspective. For the first time one could observe the beating heart without opening up a patient's chest. This clinical breakthrough posed no technological challenges of the magnitude mentioned above. And yet, the radiology ultrasound market had never before shown a need for or interest in real-time scanning capabilities. Radiologists did not examine moving objects and were not particularly concerned with motion abnormalities.

The development of Color Flow Doppler did entail the application of significant engineering expertise and has been a tremendous clinical and commercial success. Yet even this innovation had a somewhat checkered past. Pulse Doppler could convey information only on blood flow velocity; Color Flow Doppler was required to obtain information on direction and turbulence. Yet at the time HP decided to develop the product there was considerable skepticism within the medical community about its eventual clinical utility. Excitement among members of HP's engineering staff about the technology and HP's strong engineering capabilities drove the HP development process.

These observations highlight the critical role frequently played by those individuals able to bridge the gulf between the medical and engineering disciplines and cultures. Their significance is out of all proportion to their numbers.

Our investigations also confirmed an aspect of the innovation process that has long been widely recognized—one cannot simply order innovation to happen. Instead, it is necessary to create an environment that is conducive to innovation and then to stand back, in a sense, and hope for the best. In the past the network of university researchers, clinical practitioners, industrial firms, and regulatory authorities in the area of cardiac diagnostic imaging was certainly able to do this, as the innovations we examined demonstrate. Many of our respondents, however, claimed the activities that make up the innovation process are harder to conduct now than they were in the past. This opinion is prevalent enough to lead us to consider whether changes in the environment have actually made it less conducive to innovation.

Respondents within industrial firms generally felt their work environment offered less room for experimentation than it had in the past. Not only did they have less time to work on personal exploratory projects, but the process of acquiring materials for the construction of prototypes had also become more complex and generally less flexible.

The process of clinical testing—imaging the heart of a human subject using a new device or imaging technique—seems also to have become more formal. In the early days, our respondents indicated, this process was extremely simple. Engineers who wanted to try out a new device would contact a clinician directly, and that clinician would arrange to make a patient available. It is necessary now to obtain a series of formal approvals both from the industrial firm and from the clinical institution before going near a patient. Requiring such approvals may well be a sensible step to protect patients. It does, however, raise the height of a hurdle at an important step in the innovation process.

Growing efforts by universities and university-affiliated researchers to appropriate a greater portion of the commercial value of their work might serve to slow other aspects of the innovation process in medical technology. Both our interviews and our review of the literature made it clear that collaboration between university-based researchers and industrial firms was a basic element of the innovation process (Blume, 1992). It was also clear that the nature of this interaction had changed over the period covered by our investigations, shifting from relatively open and unstructured exchanges of opinion and information to more formal arrangements involving greater compensation of university-based personnel. In the case of the new imaging agent, sestamibi, commercialization also involved the payment of substantial royalties to the Massachusetts Institute of Technology and Harvard University, where the agent was developed. Although none of our respondents identified specifically the growing commercialization of university-industry relationships as a barrier to innovation, we believe this trend bears scrutiny. It can potentially slow the exchange of ideas and information that fuels the innovation process.

Efforts to contain health care costs could slow the diffusion of new techniques into clinical practice and thereby lengthen the cycle of refinement leading to breakthroughs. Here both industrial firms and innovative clinicians are in something of a Catch-22. To the extent that a new imaging modality reveals phenomena that have never been seen before, it is almost inevitable that their clinical significance is unclear. That significance emerges only as experience with the new modality accumulates and as clinicians interpret it. However, the growing unwillingness of third-party payers to cover ''experimental" procedures almost guarantees that there will be no financial support for the exploratory work involving the new modality.

It appears to us, unfortunately, that little can be done to lower many of these barriers to innovation. The combination of a stricter regulatory regime for drug and device testing with a generally more litigious environment makes it extremely unlikely that we will see a return to a less formal clinical testing environment. Financial pressures on universities and other publicly funded research institutions, such as the National Institutes of Health, suggest that we may see even greater efforts to appropriate more of the commercial value of their discoveries. And we are, of course, unlikely to see any reduction in the pressures for health care cost containment soon.

The one area in which the constraints on the innovation process can be eased is within industry itself. The challenge for an established firm like Hewlett-Packard, in echocardiography, or DuPont Merck, in nuclear cardiology, is to maintain an entrepreneurial environment in which individual visionaries have enough freedom and sufficient resources to demonstrate the value of their ideas. HP succeeded initially in meeting this challenge by setting up a small, entrepreneurial group and charging it broadly with responsibility for developing this new area of business. Unfortunately, as a company becomes bigger and better established with a broader product line and a place in the market to protect, maintenance of an appropriately entrepreneurial environment becomes difficult. Failure to do so, however, raises the risk of being overtaken by a new entrepreneurial start-up company, as has occurred many times in the history of the innovations we examined.

Conclusion: Moving Toward An Optimal R&D Program In An Industrial Firm

We conclude by offering a number of suggestions for ways to overcome these barriers and create an environment more conducive over the long term to ongoing innovation. To successfully introduce a new medical technology to the marketplace, three capabilities are required: first, access to the research community and its products; second, the resources to commercialize—that is, to design, manufacture, and market—the innovation; and third, clinical assessment of the technology's capacity for adequately meeting the need it was designed to fulfill. The first two requirements can be managed within a corporate research lab. The third requires access to and insights from the clinician's perspective.

The most effective tool we observed for bridging the gap between engineers and clinicians was the visionary product champion. In our case studies respondents often identified a visionary who was uniquely able to appreciate both the possibilities inherent in a technology and the clinical value of a new product embodying these new capabilities. Visionaries whose efforts were successful were essentially entrepreneurs sensing opportunities only dimly perceived by others, persuading management of the value of their ideas, and overcoming barriers to their realization. These individuals often played critically important roles in the histories of the innovations we studied. Recognizing their role and creating a supportive environment constitutes a significant step toward creation of a sustainable innovation process. Without the driving force of the engineers who developed AQ and TEE, it is doubtful that HP would have realized the potential of these innovations. And it is not clear that another firm would have supplied these products in HP's absence.

Our research provides less support for the argument that such a visionary should serve as director of the R&D department, or be in a similar position of authority. These individuals can be extraordinarily effective when given substantial amounts of authority (indeed, in some instances it seemed that the director's sheer force of personality was required to initiate and energize uncertain and stalled projects); however, a tendency of visionary product champions is to trust their own clinical and technological intuition above that of all others. They are prone to ignore ideas and suggestions emanating from other parts of the organization and other institutions. Such singleminded pursuit of an end, when espoused by the lab director, can be severely detrimental to the creativity of the other researchers so critical to the lab's productivity. Also, it is possible for a lab director to err in judging the value of a new idea, in which case, because of his position of authority, the consequences for the organization can be severe. Our fieldwork revealed instances where a visionary led an all-encompassing lab effort to introduce a breakthrough new product, only to have the product subsequently stumble, damaging the company's position of technological leadership.10

An alternative to leadership by a visionary is leadership by a harvester. A harvester should be an extremely accomplished technologist, but one perhaps more capable of assessing both the technical feasibility and the clinical utility of competing projects than of generating new ideas. In terms of initiating the innovation lifecycle, the role of the harvester should be to identify visionaries and charge them with project responsibilities. In the refinement stage, the harvester must constantly push the lab to anticipate and respond to clinician demands for diagnostic effectiveness for the products in the field.

The ideal lab manager would be part visionary and part harvester. The lab manager must establish guiding principles for technology development but must not become tied to the success or failure of specific projects. It is a rare individual, however, who can act both passionately and objectively while constantly evaluating the allocation of scarce R&D resources among project champions.

There are a number of other ways to integrate the clinical community more effectively into the development process for new medical technologies. One way to improve communication between clinicians and engineers is to recruit more multidisciplinary professionals such as bioengineers for the research lab. Some firms have traditionally preferred to hire the best electrical or mechanical or software engineers and encourage them to learn applications on the job. That practice reflected, in part, the culture of the workplace, but also the belief that those who would choose to be labeled as bioengineers were somehow not the most technically capable.

Clinical consultants recognized as leading practitioners in the field are an invaluable source of insight, especially regarding concept creation and image quality assessment. At least one firm we know of invites clinicians to give presentations before lab employees. These presentations provide the R&D personnel with better insights regarding coronary function, the phenomena clinicians would like to observe, and the clinical implications of information on such phenomena. As leading practitioners, these consultants may be more aware of academic work by other clinicians regarding new uses for imaging technology. Furthermore, a long-term relationship allows the consultant to acquire a greater appreciation for the capabilities of the base technology and may help generate new concepts for internal development. In both the concept and prototype stages, these practitioners represent ideal sanity checks to determine if an innovation might become the leading edge in clinical practice or be relegated to the fanatical fringe. Of course, in a highly competitive marketplace, the firm must always be aware of the security risks posed by outside consultants, especially during the prototype stage. In the early years of echocardiography the same set of clinical experts were consulted by all companies. Later, one of the companies we interviewed decided to disguise innovations before requesting clinical assessments.

The broad trends that have made clinical testing and university collaboration more formal and less flexible highlight the importance for industry of maintaining an environment conducive to innovation. The problem of an ossifying innovation process in an organization that can afford bureaucratic controls is one that arises repeatedly. The key to success is maintaining a spirit of entrepreneurship, especially important in the concept and prototype stages.11 One of the more useful tools identified in this regard was HP's apparently unstated policy of 10 percent unstructured engineering time. This "under the benches" policy seemed to be acknowledged by a majority of the original department engineers we interviewed, but most noted that such a policy was now honored more in the breach than in the observance. That 10 percent cushion, however, is said to be what permitted HP engineers to explore TEE and to propose a superior design for the upgrade of Color Flow Doppler.

Maintaining a continuing stream of medical innovations must continue to be a critical objective of the academic, industrial, clinical, and government-based individuals who contribute to medical research and development. We hope that our work serves to stimulate additional research that will eventually accelerate the rate and alter the nature of technological change and, thus, be more responsive to society's needs.


The authors gratefully acknowledge the Hewlett-Packard Company's Medical Products group for the support of this research. The views expressed are those of the authors and not of the sponsoring organization. We especially thank Ben Holmes and Larry Banks, without whose time, effort, and encouragement this work would not have been possible.


  • Blume, S. S. 1992. Insight and Industry: On the Dynamics of Technological Change. Cambridge, Mass.: The MIT Press.
  • Coste, P. 1989. An Historical Examination of the Strategic Issues Which Influenced Technologically Entrepreneurial Firms Serving the Medical Diagnostic Ultrasound Market. Ph.D. dissertation. Claremont Graduate School.
  • Katz, R. 1988. Managing Professionals in Innovative Organizations. New York: Harper Collins.
  • Kotler, S. T., and G. A. Diamond. 1990. Exercise thallium-201 scintigraphy in the diagnosis and prognosis of coronary artery disease. Annals of Internal Medicine 113:684–702. [PubMed: 2221649]
  • Rosenberg, N. 1969. The direction of technological change: Inducement mechanisms and focusing devices. Economic Development and Cultural Change 18(October):1–24.
  • Rosenberg, N. 1975. Problems in the economist's conceptualization of technological innovation. In: History of Political Economy, vol. 7. Durham, N.C.: Duke University Press.
  • Rosenberg, N., and D. Mowery. 1979. The influence of market demand upon innovation: A critical review of some recent empirical studies. Research Policy 8:102–153.
  • Taylor, W. 1990. The business of innovation: An interview with Paul Cook. Harvard Business Review, March–April 1990. [PubMed: 10104225]
  • von Hippel, E., and S. Finkelstein. 1979. Analysis of innovation in automated clinical chemistry analyzers. Science and Public Policy 6:24–37.

Appendix A

Thallium Imaging

The introduction and diffusion into widespread clinical practice of a workable procedure for thallium-201 cardiac perfusion imaging was based on a number of distinct discoveries and developments. The various components of the procedure currently in use were developed piecemeal by different individuals at different institutions.

One of the important preconditions for the use of thallium-201 as a perfusion imaging agent was the development of a functional gamma camera. Development of the Anger camera by Hal Anger met this need, moving camera technology beyond the plateau it had achieved in the 1960s. This linked array of photomultiplier tubes permitted higher-resolution pictures of the areas of the myocardium perfused by the radioactive tracer. This basic design was substantially refined by manufacturers who increased the number of photomultiplier tubes, improved collimators, added tomographic imaging capabilities, and increased the number of scanning heads in an effort to improve resolution.

Some of the most significant early development work on the imaging agent was carried out by Elliot Leibowitz, then a radiochemist at Brookhaven National Laboratory. He described thallium as a potassium analogue and recognized the relationship between blood flow and thallium uptake by the myocardium that is the foundation of thallium's usefulness as a perfusion imaging agent. He also developed a procedure for postirradiation purification of the cyclotron-produced radioisotope that was suitable for use in commercial-scale production.

Another key step in the development of the procedure took place at Massachusetts General Hospital, where time-delayed imaging studies carried out by Jerry Pohost explored the redistribution properties of thallium-201. These studies showed that over a period of hours following the initial injection of the radioisotope, it would redistribute to those portions of the heart that had initially experienced restricted blood flow. Reversible perfusion defects, it was found, served as markers for ischemic but viable areas of the myocardium. This discovery became the foundation for the development of the exercise-rest double-imaging protocol that is still used today.

A number of regulatory factors facilitated the rapid spread of thallium imaging in clinical practice. The toxicology of thallium was well known from prior applications. In 1974, the Atomic Energy Commission announced that it was turning the regulation of radiopharmaceuticals over to the Food and Drug Administration. The actual transfer occurred 18 months later, just as the cardiac imaging procedures for thallium-201 were being refined. Thus, the FDA, because of its lack of experience, was not in a position to make onerous regulatory demands. New England Nuclear (NEN), the company leading the effort to commercialize thallium-201 for radionuclide scanning, also benefited from its location in Massachusetts, one of the so-called "agreement" states that had taken over responsibility from the federal government for the regulation of cyclotron-produced products. Thus, NEN enjoyed a somewhat simplified regulatory regime even for its nuclear operations.

By the late 1970s, numerous papers had been published documenting the diagnostic properties of thallium-201 imaging. Its value was well established as a tool for detecting the presence of coronary artery disease (CAD), for measuring the extent of viable myocardium in late-stage CAD patients, in postinfarction patients, and in patients who had undergone either angioplasty or bypass surgery. Publications appearing throughout the 1980s documented the findings of long-term follow-up studies showing that the results of thallium-201 scintigraphy were valuable in establishing prognoses for CAD patients.

As experience with thallium-201 scintigraphy accumulated, its shortcomings also became apparent. Interpretation of the images produced by the test required a fair degree of skill. Interpretations could vary from one observer to another. Constraints on camera positioning caused certain areas of the myocardium to be difficult to image—specifically, the areas served by the left circumflex artery. Patients unable to achieve maximal exercise could not be tested reliably. The energy level of the photons emitted was not ideally matched to the detection capabilities of the available gamma cameras. Thallium-201's relatively long half-life, coupled with constraints on the total amount of radiation to which a patient could be subjected, limited the dose that could be administered to a patient, which placed a ceiling on the total number of photons emitted during a test and, thus, on the absolute information content of the test. And although the redistribution properties of thallium were valuable in distinguishing between scar tissue and areas of ischemia, they also placed constraints on the testing protocol. Initial imaging had to follow injection within a limited time period or redistribution would render the test results invalid.

In the years following the widespread adoption of thallium-201 scintigraphy, efforts were made to overcome the limitations of the original testing protocol. These efforts, in turn, spawned a series of subsequent innovations. Attempts to make the interpretation process more consistent and more readily available to smaller centers led to the development of quantitative image interpretation software. Efforts to improve the ability of the test to detect perfusion defects in hard-to-image portions of cardiac anatomy led to the development of Single Photon Emission Computerized Tomography (SPECT). Concern over failure of patients to achieve maximal stress and the effects of this failure on test accuracy led to the use of pharmacological stress agents. Concerns over the emission profile and half-life of thallium-201 led to a search for an imaging agent based on the technetium-99 isotope. Investigators working in this area also hoped to develop an agent with more favorable redistribution properties. Their efforts led eventually to the development of Tc-99 sestamibi.

Use of thallium-201 scintigraphy has been constrained by the widespread availability of a number of competing testing modalities. The presence of coronary artery disease is often apparent simply from a patient's history and presenting symptoms. An electrocardiogram with stress testing can detect the presence of ischemia at a lower cost than thallium scintigraphy (although with less accuracy). At the other end of the spectrum, angiography is both more expensive and more invasive than thallium scintigraphy, but provides what many cardiologists regard to be superior diagnostic information. Angiography is also thought to be a prerequisite for either angioplasty or bypass surgery aimed at elimination of the underlying causes of ischemia, providing, as it does, a "road map" for the surgeon or catheterization expert.

The proper place for thallium-201 scintigraphy in this crowded field of alternatives has been the subject of sometimes spirited debate. Some have argued that the incremental information content of a thallium test, given that the patient has already undergone a stress electrocardiogram, is small and of little value. A number of investigators have attempted to identify specific subsets of patients for whom thallium test results can play a critical role in defining the course of treatment. However, despite these systematic efforts to identify the appropriate role for thallium testing, the choice of diagnostic tests continues to be strongly influenced by individual physician preferences.

Appendix B

Tc-99 Sestamibi Tracer

Tc-99 sestamibi is a technetium-based synthetic radioisotope with certain properties that enable it to produce images of the heart allowing for an assessment of the "viability" of cardiac muscle and the identification of the possible presence of coronary artery disease. It is regarded by many as an incremental innovation over the use of thallium as the agent for stress imaging studies.

Observations about the clinical value of technetium-99 (Tc-99) were first made in the late 1950s by Powell Richardson, a nuclear medicine physician doing research at Brookhaven National Laboratory. He used a molybdenum generator to produce substantial amounts of this agent with high isotopic purity. His work was published but his process was not patented.

The later development of the gamma camera, first by Hal Anger at the Lawrence Radiation Laboratory, stimulated this line of investigation, as well as the field of nuclear medicine as a whole. Various commercial firms, including New England Nuclear (NEN), went into the business of producing radioisotopes for medical application. One of the early products was a Tc-based agent to image bones. Thallium-201, produced by cyclotron, became the isotope of choice for cardiac imaging work.

As thallium-201 gained acceptance and the performance of stress myocardial imaging became an important part of the evaluation of patients for coronary artery disease, many sought to improve on thallium's properties. First, thallium's rapid redistribution made the nature and quality of the image highly dependent on the time elapsed after injection. Second, the cyclotron-based production process for thallium limited its accessibility.12 Third, the relatively long half-life of thallium-201 limited the dose that could be administered to a patient. Finally, the energy profile of thallium's gamma emissions poorly fit the detection capabilities of the available cameras.

Two firms, NEN and Squibb, were known to be actively pursuing work on a Tc-99 agent for cardiac imaging in the early 1980s. NEN funded or collaborated with Deutsch at the University of Cincinnati to produce an agent in 1982. Its chemical structure was technetium dimethyl phosphine, and it was given the name "Cardiolyte." This agent was quite successful in animal studies, but when tested in humans at the Massachusetts General Hospital, it did not successfully image the human heart.

Since 1980, Alan Davison, an inorganic chemist at the Massachusetts Institute of Technology, and Alun Jones, a nuclear chemist at Harvard, had been actively pursuing research on a synthetic Tc-99 agent for imaging the heart that would correct some of thallium's shortcomings. They had personal relationships with staff at NEN and were working with the expectation that NEN would be interested in licensing the agent once it was developed.

The acquisition of NEN by DuPont led to a shift in NEN's research priorities. Under DuPont's management, staff began to screen a large number of liquids for desirable properties in a search for an imaging agent of their own. Their initial lack of interest in Davison and Jones's agent can be seen as an example of the "not invented here" syndrome. DuPont eventually licensed the Davison/Jones agent—Tc-99 sestamibi—and sought FDA marketing approval. DuPont's inexperience with FDA submissions delayed the approval process by two years. Tc-99 sestamibi was approved and introduced to market in 1991, however, and by 1992 had achieved worldwide sales of over $100 million.

The Tc-99 agent eventually introduced by Squibb for cardiac imaging has not been quite as successful. Its extremely rapid washout properties make perfusion imaging even more technically challenging than with thallium-201. It has been said, however, that in the hands of a skilled operator these properties permit high patient throughput.

Clinicians have received these Tc-99 agents favorably and have adopted them for use in cardiac imaging. With regard to substituting for thallium, there is still debate over whether these new agents produce significantly more clinical information than their predecessors.

Appendix C

Single Photon Emission Computed Tomography

The early development of Single Photon Emission Computed Tomographic (SPECT) imaging took place between 1958 and the early 1970s at the University of Pennsylvania. Dr. David Kuhl, a physician trained in nuclear medicine, and his collaborators constructed an array of radiation detectors. They were then able to produce a map of radionuclide concentration by taking sequential images of a series of cross-sectional slices. These sequential images were then available for back projection and image reconstruction and could yield information on previously unobservable physiological changes in the body. The original work was driven by the clinical need to image the brain—cardiac imaging came later.

Much of Kuhl's work preceded the development and availability of X-ray computed tomography (CT) in the early 1970s, when the use of image reconstruction algorithms became commonplace. Broader application of the early SPECT research and further progress were, however, facilitated and stimulated by the acceptance of X-ray CT.
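The back-projection step at the core of this reconstruction approach can be illustrated with a minimal sketch. The code below assumes NumPy, and the function name is illustrative; it simply smears each one-dimensional projection back across the image plane along its acquisition angle. Real SPECT reconstruction adds the filtering, attenuation correction, and scatter compensation discussed later in this appendix, all of which this sketch omits.

```python
import numpy as np

def back_project(sinogram, angles_deg, size):
    """Unfiltered back projection: smear each 1-D projection
    back across a size-by-size image plane along its angle."""
    recon = np.zeros((size, size))
    center = (size - 1) / 2.0
    ys, xs = np.mgrid[0:size, 0:size]
    xs, ys = xs - center, ys - center
    for proj, angle in zip(sinogram, angles_deg):
        theta = np.deg2rad(angle)
        # Detector-bin position of each pixel for this view angle
        t = xs * np.cos(theta) + ys * np.sin(theta) + center
        idx = np.clip(np.round(t).astype(int), 0, size - 1)
        recon += proj[idx]
    return recon / len(angles_deg)
```

With projections of a single point source taken from several angles, the smeared views reinforce one another only at the source location, which is why even unfiltered back projection recovers a (blurred) image.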

From 1970 to 1974, Dr. John Keyes, working at the University of Michigan and the University of Rochester, adapted a gamma camera to perform cross-sectional imaging of the brain using back projection. In 1974, he and his colleagues built a rotating gamma camera, facetiously referred to as the "Humongatron," used for early brain imaging by SPECT through 1976.

Initial images produced by Keyes's camera or Kuhl's detector array were crude and needed a great deal of improvement. For instance, the term "error" was used to refer to the (qualitative) difference between the image and the actual clinical condition, but "error" encompasses several dimensions that were either actually or potentially measurable or quantifiable. These include resolution, attenuation, scatter, artifact, collimator error, and uniformity.

Three independently pursued streams of innovation led to image improvement: better image reconstruction algorithms, better imaging agents, and improvements to the camera itself. Development of better algorithms took place in industry and academia, but most prominently at Lawrence Berkeley Laboratory under the direction of Thomas Budinger. Published articles (1978–1980) document his contribution to the physics and mathematics of SPECT image improvement.

The wide use of SPECT for brain imaging came about only with the availability of the Tc-based imaging agents. Cardiac imaging with SPECT began to be performed in the early 1980s with thallium. Many investigators were not satisfied that the quality of the images produced by thallium-SPECT was significantly better than those taken with planar cameras. Even so, commercial SPECT systems capable of imaging the heart began to become available from a number of manufacturers between 1980 and 1983.

Around 1981, Dr. James Willerson, a cardiologist at the University of Texas at Dallas, received a grant to seed the development of a multi-headed SPECT camera. Willerson eventually forged a relationship with the Technicare (instruments) division of Johnson & Johnson to build a prototype of the new kind of camera. Clinical testing of the prototype was to begin around 1986, but was delayed for several years due to Johnson & Johnson's closure of Technicare in that year. The work was continued at two companies licensed by Johnson & Johnson, Ohio Imaging and Trionics. Ohio Imaging was later acquired by Picker. The University of Texas cardiology group finally acquired their three-headed SPECT camera from Picker in 1988. That company and several others now offer commercially available three-headed SPECT cameras.

The three-headed SPECT cameras are capable of resolution to about 7-8 mm, compared to the 20-mm resolution of the very early SPECT cameras. Clinical users, including some who had been skeptical of any improvement in accuracy with single-headed SPECT, have been more impressed with the images that are produced by these multi-detector cameras. And new cardiac imaging agents such as Tc-99 sestamibi are said to produce further improvements in the quality of SPECT images over those generated with thallium.

Appendix D

Transesophageal Echocardiography

Transesophageal echocardiography (TEE) uses the esophagus as a window to image the heart. An ultrasonic transducer is mounted near the tip of a modified gastroscope, which is manually inserted down the patient's throat. Controls then permit the operator to position the transducer optimally.

TEE is used in both outpatient and operating room (OR) settings. In the outpatient market TEE is ideal for imaging otherwise "difficult" patients. Obesity, large chests, and narrow spacing between the ribs all make traditional echocardiography difficult and reduce its accuracy. With TEE, the transducer can be placed close to the heart with little attenuation of the ultrasound signal due to air-filled lungs or bony structures.

In the OR setting, TEE allows the anesthesiologist to monitor cardiac function continuously once the chest cavity has been opened. The clear image it provides makes it relatively easy to detect the wall motion abnormalities that mark ischemia. Before TEE there was no way to image cardiac function during such surgical procedures.

Although TEE represents an advance in transducer technology and remote positioning, the TEE probe was actually developed as an add-on to established echocardiography systems.

TEE began in the academic environment. In 1976, researchers mounted a transducer on a coaxial cable and used the esophagus to image the left atrium and mitral valve. In 1982, academicians brought the idea to Diasonics, which initially commercialized the product. Diasonics sold approximately 50 TEE probes, but did not pursue the market opportunity.

In 1985 an engineer at Hewlett-Packard (HP) was working with ''Herman," the lab skeleton, and considering the problem of signal attenuation due to tissue around the ribs. He noticed the esophagus behind the heart and had an idea of mounting a transducer at the end of a gastroscope. When he approached clinicians with this observation they expressed considerable interest and produced for his examination a number of the old Diasonics probes.

At the time, HP was heavily involved in development of Color Flow Doppler imaging. As a result, there were few resources to spare to pursue the TEE opportunity. For the next year the engineer shepherded his underground project through the development process. Once the probe was on the market, design deficiencies surfaced that resulted from a poor appreciation of exactly how the device would actually be used. For instance, the engineer had talked to a number of gastroenterologists, who were very concerned with the probe's potential to damage a diseased esophagus. Cardiologists, however, simply wanted to use the probe to obtain good pictures; they were not shy about applying force to the probe to position it correctly. Consequently, a number of the initial probes broke because the forces the cardiologists subjected them to were quite different from those the engineer had anticipated.

Eventually the bugs were worked out of the design, and the TEE probe became a small but significant addition to HP's echocardiographic product line.

Appendix E

Acoustic Quantification

Acoustic quantification (AQ) is a technology that makes use of recognizable differences in patterns of ultrasonic back scatter to identify the "edge" of the heart, or the interface between the cardiac musculature and the blood. This software- and hardware-embodied innovation permits measurement of ventricular function on a beat-to-beat basis.
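The edge-detection idea can be sketched very simply. The fragment below is a hypothetical illustration, not HP's actual algorithm: it scans the samples along a single ultrasound beam line and reports where the back-scatter intensity first crosses an assumed blood/tissue threshold. The commercial implementation discriminates the two back-scatter patterns far more robustly and does so for every scan line in real time.

```python
import numpy as np

def find_edge(scan_line, threshold):
    """Index of the first sample along a beam line whose
    back-scatter intensity reaches the blood/tissue threshold,
    or None if no sample does."""
    above = np.nonzero(np.asarray(scan_line) >= threshold)[0]
    return int(above[0]) if above.size else None
```

Repeating this for each beam line in a sector scan traces the blood-muscle interface, from which a ventricular cross-sectional area, and hence beat-to-beat volume changes, can be estimated.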

The AQ technology emerged from research conducted largely by academic engineers and physicians aimed at use of ultrasound for the characterization of tissues (i.e., for a "noninvasive biopsy"). AQ represented a successful tangent to a main line of research that had been ongoing for nearly two decades without leading to meaningful products.

Pete Melton, an academic researcher with B.S. and M.S. degrees in electrical engineering, a Ph.D. in biomedical engineering, and some experience running a large teaching hospital's clinical laboratory, began AQ work in 1978, examining the characteristics of the ultrasonic signal coming back from various tissues, especially the heart. In 1981, he and a collaborator, working at the University of Iowa, documented the differences in back scatter from the heart compared to other tissues. They published an article in 1983 showing real-time images and proposed an algorithm for identifying the edge of the heart and measuring left ventricular volume.

Melton left academia in 1984 and went to industry. After spending nine months at Diasonics, he came to the Hewlett-Packard (HP) Imaging Systems Division, where he pursued this work until 1988. At HP, his assignment was to pursue tissue characterization (TC) rather than AQ. When the paths to success in TC and AQ diverged, Melton made informal arrangements to continue his AQ work, making prototypes "quietly." To do so, he had help from two "unassigned" engineers, one specializing in hardware and one in software.

Melton did much of his work "outside the system." Clinical trials were done rather informally in collaboration with an anesthesiologist at the University of California, San Francisco, and cardiologists at Washington University (St. Louis) and the University of Iowa. Melton's work eventually led to the development of a prototype suitable for demonstration before clinicians. Their enthusiastic response convinced HP to "give the effort a project number" and initiate a full-scale development effort.

HP introduced AQ as an enhancement to its high-end cardiac ultrasound units. The company's eventual decision to emphasize AQ rather than TC was heavily influenced by the enthusiastic response of clinicians to Melton's prototype. AQ has, so far, been received quite favorably in the marketplace. HP believes it is gaining market share or at least solidifying its position as market leader because of it. Competitors acknowledge that their own products have suffered in comparison. Clinical specialists, intrigued by the AQ technology, say its real significance has yet to be established and it is still too early to say whether it will facilitate the making of noninvasive "statements" about cardiac function.

Appendix F

Color Flow

Doppler uses ultrasound to measure blood velocity. Initially, Doppler units were simply glorified stethoscopes—they were blind in terms of the area where blood velocity was being measured. Ideally, the clinician wanted to place a cursor on the screen and detect blood velocity at a certain point. Color flow was essentially two-dimensional Doppler, allowing the clinician to measure blood flow velocity, volume, and direction. The great advantage of this technique is that it allows clinicians to detect eddies and backflow, evidence of abnormalities that could not otherwise be detected.
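The underlying velocity measurement rests on the standard Doppler relation, v = c·Δf / (2·f₀·cos θ). A minimal sketch follows, assuming the conventional soft-tissue sound speed of roughly 1540 m/s; the function name is illustrative:

```python
import math

def doppler_velocity(f_shift_hz, f0_hz, angle_deg, c=1540.0):
    """Blood velocity in m/s from the measured Doppler shift.
    f0_hz is the transmitted frequency, angle_deg the angle
    between the beam and the flow, and c the assumed speed of
    sound in soft tissue (~1540 m/s)."""
    return (c * f_shift_hz) / (2.0 * f0_hz * math.cos(math.radians(angle_deg)))
```

At a 5 MHz transmit frequency and zero insonation angle, for example, a 3.25 kHz shift corresponds to a blood velocity of almost exactly 0.5 m/s; color flow imaging in effect evaluates this relation at every point in the two-dimensional image.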

Color Flow Doppler required engineering advances in signal processing. A major issue was simply separating the imaging signal from the Doppler signal.

Engineers at Hewlett-Packard, Paul Magnin in particular, thought it fairly obvious that physicians would like to measure and display velocity at every point in the image; they just did not know how to achieve it. After struggling to get the Doppler unit out, Magnin started to investigate color flow, developing some computer algorithms to process the signal. At the same time, a paper from Aloka Research Labs, published by the American Institute of Ultrasound in Medicine, showed an actual picture of a mechanically swept color flow image. Clearly it could be done, and done commercially. Consequently, development efforts were stepped up, and from eight different algorithms, two were selected that seemed most likely to answer the call. It was not entirely clear which one would be more appropriate, so simulations were set up to test one algorithm against the other. Once clinicians saw the video of the various algorithms, however, it was not clear that color flow would be of clinical significance. Nonetheless, Hewlett-Packard pushed ahead with its development because it believed the technology had potential.

The path from concept to prototype was arduous. It seemed that color flow would require a redesign of the system architecture, which had not been designed to be upgradable. Once a redesign was begun it was nearly impossible to prevent everyone's pet projects from being added to the revision. Introduction of the product seemed to be far away. Another engineer, again working off the critical path, found a solution to move the product into the marketplace more quickly—sell it as an upgrade. Although this solution was very well received, the decision to offer Color Flow Doppler as an upgrade postponed some of the redesign problems to the next revision.



Nathan Rosenberg, for example, has published a number of papers on this topic. See, for example, "The direction of technological change: Inducement mechanisms and focusing devices" (Rosenberg, 1969); "Problems in the economist's conceptualization of technological innovation" (Rosenberg, 1975); and "The influence of market demand upon innovation: A critical review of some recent empirical studies" (Rosenberg and Mowery, 1979).


Other imaging technologies that have been used in this therapeutic area include X-ray imaging with contrast agents and magnetic resonance imaging (MRI).


The use of the term "reversible" grows out of the nature of the imaging protocol used in connection with thallium-201. This agent is administered after the patient has exercised sufficiently to raise his or her heart rate to its maximum level. The agent is then taken up by those regions of normal myocardium that have adequate blood flow. Over the next several hours, the thallium-201 redistributes to areas of normal myocardium that have blood flow at rest. The process is not unlike that which occurs when a drop of ink is released into a glass of water; over time the ink will redistribute throughout the volume of water. In the standard thallium protocol, a second image will be taken several hours after the patient has exercised. The redistribution will then have "reversed" the original lesions.

The improved imaging agent—Tc-99 sestamibi—was designed specifically to remain in those portions of the heart into which it was originally absorbed. Because it does not redistribute, its use entails a quite different protocol in which the patient receives two separate injections of the agent, each a day apart.


In bypass surgery the blocked coronary arteries are surgically replaced with open vessels taken from other parts of the body. In angioplasty a catheter with a balloon on the tip is inserted into the blocked artery. Inflation of the balloon tip at the site of the blockage mechanically forces open the artery.


Tomographic imaging involves the acquisition of multiple images, typically by rotating the camera or other image acquisition device around the patient. Computer analysis of these multiple images permits the construction of a three-dimensional representation of the structure under examination. That image can then be displayed at any depth and from any angle.

Tomographic imaging stands in contrast to simpler planar or two-dimensional imaging techniques.


Acoustic quantification involves the use of echocardiographic techniques to measure the heart's pumping action in real time.


Ejection fraction is defined as the percent reduction in ventricular volume over the course of the beat cycle. A high ejection fraction (i.e., a large reduction in volume) implies strong pumping action.
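In code, this definition reduces to a one-line calculation (a sketch; the function name and units are illustrative):

```python
def ejection_fraction(edv_ml, esv_ml):
    """Ejection fraction (%): the percent reduction in
    ventricular volume from end-diastole (EDV) to
    end-systole (ESV) over one beat cycle."""
    return 100.0 * (edv_ml - esv_ml) / edv_ml
```

An end-diastolic volume of 120 ml and an end-systolic volume of 50 ml, for instance, yield an ejection fraction of about 58 percent.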


Although thallium-201 is also a radiopharmaceutical, its production required the use of a cyclotron, a substantial piece of equipment. Indeed, when New England Nuclear enjoyed its most commanding position within the thallium market, it began construction of the world's only privately owned linear accelerator.


This observation applies most strongly to the early history of these organizations, and is less true now.


At least one respondent identified a situation in which a firm that once held a major place in a market eventually was forced to withdraw as a result of a series of poor decisions made by a strong-willed R&D director.


Paul M. Cook, CEO of Raychem Corporation, describes how his company has successfully maintained an innovative corporate culture in an interview with William Taylor (1990).


This process required that thallium-201 be produced at a small number of cyclotron sites, from which it would be shipped to physicians. During shipment the isotope would decay. Producers overfilled vials to guarantee that enough isotope would remain viable upon arrival to permit testing, which drove up costs. It was impossible to store thallium on-site, either at the cyclotron facility or at hospitals.
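The overfill arithmetic follows directly from exponential decay. Assuming a thallium-201 half-life of roughly 73 hours (an assumption for illustration, as is the function name), the factor by which a vial must be overfilled grows as 2 raised to the transit time expressed in half-lives:

```python
def overfill_factor(transit_hours, half_life_hours=73.0):
    """Multiple of the prescribed activity that must be loaded
    at the cyclotron so the full prescribed dose survives
    transit.  Derived from A(t) = A0 * 2**(-t / half_life)."""
    return 2.0 ** (transit_hours / half_life_hours)
```

A 24-hour shipment under these assumptions requires loading roughly 26 percent extra activity, one concrete way the shipping constraint drove up costs.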

Copyright 1995 by the National Academy of Sciences. All rights reserved.
Bookshelf ID: NBK232039

