
National Research Council (US) Chemical Sciences Roundtable. Carbon Management: Implications for R&D in the Chemical Sciences and Technology: A Workshop Report to the Chemical Sciences Roundtable. Washington (DC): National Academies Press (US); 2001.


4. Opportunities for Carbon Control in the Electric Power Industry

John C. Stringer

Electric Power Research Institute

I begin by discussing the use of the new technique of roadmapping for the identification of longer-range technical challenges and illustrate some of the conclusions reached by the Electric Power Research Institute's (EPRI) Electric Technology Roadmap: 1999 Summary and Synthesis that are relevant to the topic we are considering. I want to look first at the global implications for carbon management and then consider some of the issues and the current options for the United States.

The key issue relates to energy and the role that it plays for human societies. At the most primitive level, the energy available to an individual was his (or her) own strength; with the development of family structures, this could be managed better, but the limits were much the same: the objective was basic survival. The first major change was the domestication of animals, such as the ox or the horse. The total energy used by individuals in the advanced societies is, by comparison, enormous, and the margin above the “survival minimum” can be used to achieve what we think of as “quality of life.”

In the world as a whole, there are people presently living with energy availability across this complete range. In an early analysis, Chauncey Starr, EPRI's founder, distinguished four ranges: (1) survival; (2) basic quality of life (literacy, life expectancy, sanitation, infant mortality, physical security, social security); (3) amenities (education, recreation, the environment, intergenerational investment); and (4) international collaboration (global peace, global investment, global technologies, global R&D) (Starr, 1997). Each of these relates to a range of energy availability per capita and wealth production, as measured by gross domestic product (GDP) per capita. On that basis, the EPRI roadmap suggests that a global objective should be to ensure that the energy available to each individual corresponds, at minimum, to a level between the second and third of these classifications.

There is the issue of how this energy can be made available to individuals. As recently as 1950, electricity represented 15% of the world's energy usage; by 2000, this had risen to about 38%; and extrapolations suggest that by 2050, electricity will represent 70% of the energy use. The EPRI roadmap suggests that the target should be providing a minimum electricity supply of 1,000 kWh per person per year by 2050.

At the same time, there has been a progressive improvement in the efficiency of energy use. A common unit for energy use is the tonne of oil equivalent (toe), and the overall efficiency of use can be expressed as the energy required per unit of GDP. In 1950, 0.35 toe was required for each $1,000 of GDP (1990 U.S. dollars). By 2000 this had fallen to 0.31, and extrapolation suggests that by 2050 it could be in the range of 0.12 to 0.18. This quantity is called the "energy intensity," and for several years it has been decreasing at a rate of 1% per annum. EPRI's roadmap proposes a target of 2% per year.
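The arithmetic behind these projections can be checked with a short sketch. The figures are those quoted above; the compound-decline model is an assumption on my part (the text does not specify how the rates compound), and the two rates bracket the quoted 2050 range of 0.12 to 0.18.

```python
# Check of the energy-intensity projection quoted in the text.
# Compound annual decline is assumed; figures are from the text.
def project(intensity, annual_decline, years):
    """Project energy intensity (toe per $1,000 GDP) forward in time."""
    return intensity * (1 - annual_decline) ** years

base_2000 = 0.31  # toe per $1,000 GDP (1990 U.S. dollars) in 2000

at_historic_rate = project(base_2000, 0.01, 50)  # ~1%/yr historical trend
at_epri_target = project(base_2000, 0.02, 50)    # EPRI's 2%/yr target

print(f"2050 intensity at 1%/yr: {at_historic_rate:.2f} toe per $1,000")
print(f"2050 intensity at 2%/yr: {at_epri_target:.2f} toe per $1,000")
```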

The next issue is global population. This last year, the world's population exceeded 6 billion. By 2050, extrapolations suggest that this might rise as high as 10 billion, although earlier chapters have suggested that the most recent estimates may be somewhat less than this.

When these numbers are combined and the retirement of most of the world's current generating capacity by 2050 is considered, this goal is equivalent to adding 10,000 GW of generating capacity. This means building 200,000 MW of capacity per year, which at current costs represents investing something like $100 billion to $150 billion per year. While this is undoubtedly a large sum, it is less than 0.3% of the world GDP, and as EPRI's president, Kurt Yeager, says, “It is less than the world currently spends on cigarettes!”
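The investment arithmetic implied by these figures is straightforward; note that the per-kilowatt unit cost below is derived from the quoted numbers, not stated in the text.

```python
# Annual build rate and implied unit cost for 10,000 GW added over 50 years.
total_new_gw = 10_000                       # GW of capacity to add by 2050
years = 50
annual_mw = total_new_gw * 1_000 / years    # MW to build per year

invest_low, invest_high = 100e9, 150e9      # $/yr, from the text
cost_per_kw_low = invest_low / (annual_mw * 1_000)    # derived $/kW
cost_per_kw_high = invest_high / (annual_mw * 1_000)

print(f"Annual build: {annual_mw:,.0f} MW/yr")
print(f"Implied unit cost: ${cost_per_kw_low:.0f} to ${cost_per_kw_high:.0f} per kW")
```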

The global efficiency of the production of electricity from the current fuel mix averages about 32%, which the EPRI roadmap proposes should be increased to 50% by 2050. Another important consideration is the “capacity factor” of a generating plant—that is, the fraction of the time that a given plant is in fact generating electricity. The overall global average is 50% for central station generation. The EPRI roadmap proposes that this be increased to 70% by 2050. However, further careful evaluation needs to be done to ensure that the manufacturing capabilities exist to meet these demands.

This gives an idea of the magnitude of the problem facing us over the next 50 years. In this chapter, I talk only about generation of electricity. I do not discuss the problem of delivery from the point of generation to the final user, although this too is a major issue.

Now, from the point of view of the workshop, the question is, How do we generate this electricity, and how does this contribute to the present and future production of anthropogenic greenhouse gases, specifically CO2?

Let us review the situation in the United States, where the carbon emissions in 1995 were 524 MtC (million tonnes of carbon equivalent) for buildings (heating, lighting, and so forth), 630 MtC for industry, and 473 MtC for transportation. Essentially all of the transportation emissions came from petroleum, while 123 MtC of the buildings' emissions came from natural gas, 42 MtC from petroleum, and 355 MtC from electricity. For the industry total, 177 MtC came from electricity and the remainder from a variety of sources. In terms of primary fuels, the numbers were 628 MtC from petroleum, 319 MtC from natural gas, and 533 MtC from coal. As a first approximation, therefore, the three major categories made equal contributions to carbon emissions.

For transportation, the sources of CO2 are many small, widely dispersed, and mobile entities. They need a storable, high-energy-density fuel. Petroleum-derived fuels fit these requirements very well. Removal of the CO2 emissions from internal combustion engine exhausts will present a significant problem, and the costs are likely to be socially and economically unacceptable. In the longer range, hybrid automobiles, which are now being introduced, may help. Electric vehicles might have the effect of transferring CO2 production from the vehicle to an electric utility generator. Fuel cells, particularly with hydrogen fuel, are the ultimate goal, but they are a few years away.

The most easily addressable source of CO2 is from the generation of electric power, since there are a much smaller number of very large stationary sources. That is the primary topic of this chapter. As pointed out above, one factor of importance here is the increasing “electrification” of primary energy sources with time, and this pattern is reflected in the rest of the world. For example, in much of the world, mass transportation systems are increasingly powered by electricity, and (as indicated above) recent research has been addressing the electrification of personal transportation, although there are still significant barriers to achieving systems that are acceptable to the public.

The consequences of this scenario for meeting the carbon management goals also need careful study, and this has been done. A major problem is that much of the building of generating plant will take place in developing countries with large populations, notably China and India. The principal indigenous fuel available is coal, and it appears at the moment, at least, that there is relatively little natural gas available.


As indicated above, electricity generation represents only about one-third of the anthropogenic CO2 generation in the United States, and the complete removal of all this will be insufficient to achieve what is believed to be the necessary reduction. Nevertheless, for a number of reasons—most obviously the large stationary sources—this is likely to be the area in which a reduction is first demanded. As a consequence, both the U.S. Department of Energy (DOE) and EPRI have been giving the problem much thought. In general, the approaches may be categorized as follows:

  1. Improvement of efficiency in electricity generation from fossil fuel-fired thermal systems;
  2. Increase in the hydrogen-to-carbon ratio of the fuels used, with an end point of hydrogen as the fuel;
  3. Increased use of so-called neutral fuels—those that depend on combustion in heat engines, but are regenerable: wood is an example, but various biomass fuels and municipal solid wastes are also examples;
  4. Switching to non-heat engine methods of deriving energy from the oxidation of fuels to avoid the Carnot limit—for example, fuel cells;
  5. Use of noncombustion heat sources for heat engines—nuclear fission, geothermal heat, solar heat;
  6. Use of photoelectrically produced electricity—thermoelectricity is also a possibility of this type, given two adjacent locations of significantly different temperatures;
  7. Increased use of hydroelectric generation, including low-head hydro;
  8. Generation depending on the management of tidal variations;
  9. Generation systems depending on the harnessing of ocean waves; and
  10. Use of wind energy, for example, via wind turbines.

The last six of these are often referred to as "renewables," although this term is seldom if ever used for nuclear fission, while it almost always is for geothermal energy. It is common to treat biomass as renewable as well, although this could be argued. Hydroelectric generation, particularly from large reservoir systems, is regarded as renewable in one sense but environmentally damaging in another, and similar objections have been raised for large tidal schemes.

Improvement in the efficiency of end use of electricity clearly can also make very significant impacts on the emissions of greenhouse gases. However, as the recent rise in the sales of sport utility vehicles has shown, individuals are generally unwilling to sacrifice perceived personal benefits for something that may be for the greater good. This brings up an important point. In discussing the management of carbon, it is naïve to neglect wider issues that are more often thought of as sociological or political. For example, Congress has been unwilling to ratify the Kyoto Protocol because of concern about the impact on the U.S. economy and the negative effect it might have on U.S. industry's competitiveness in world markets. Significant impacts on the cost of electricity caused by carbon management might be unacceptable, although interestingly the public accepted significant increases in costs resulting from the introduction of SOx and NOx controls in the 1980s.

Over the last 100 years, the carbon intensity of world primary energy has been falling at an approximately linear rate, from 1 tonne of carbon per toe in 1900 to 0.7 tonne in 1990 (National Academy of Engineering, 1997). Extrapolation suggests a value of 0.55 by 2050. This rate of decarbonization is 0.3% per year, and EPRI has proposed a target of increasing this to 1.0% per year by 2030 and maintaining that rate thereafter.
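These trend figures can be sketched numerically. The compound-decline model is again my assumption (a linear extrapolation gives similar values), and the ramp-up to 1% per year by 2030 is simplified here to a constant 1% rate as an upper bound on the achievable reduction.

```python
# Carbon intensity of world primary energy: historical ~0.3%/yr decline
# continued to 2050, versus EPRI's proposed 1%/yr (applied from 1990 as
# a simplifying upper-bound assumption).
c_1990 = 0.7        # tonnes of carbon per toe in 1990 (NAE, 1997)
years_to_2050 = 60

business_as_usual = c_1990 * (1 - 0.003) ** years_to_2050
epri_rate = c_1990 * (1 - 0.01) ** years_to_2050

print(f"2050 carbon intensity, historical rate: {business_as_usual:.2f} tC/toe")
print(f"2050 carbon intensity, 1%/yr: {epri_rate:.2f} tC/toe")
```

The historical-rate result is in reasonable agreement with the extrapolated value of 0.55 quoted above.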

This reduction was achieved largely as a result of switching to primary fuels with a higher hydrogen-to-carbon (H/C) ratio and of increases in hydroelectric generation in the early years and nuclear fission in the latter part of the last century. Both of these resources have essentially reached saturation in the United States.

In the United States for many years (certainly since 1920), coal has been the fuel that produces a little more than 50% of the electricity, and this has held steady as the generation capacity increased. In 1996, coal accounted for 1,797 × 10⁹ kWh, or 52% of the total electricity generation, and this figure is predicted to increase to 2,304 × 10⁹ kWh by 2020. In each of the last three years, the coal consumed by the electric utilities has been close to 900 million tonnes (for an example, see EIA November 2000).

For a time, oil was an important utility fuel, but at present, very little is used in this way. The use of natural gas has been increasing recently as new large, high-efficiency combustion turbines have become available, and this switch has been responsible for the continued increase in the overall H/C ratio.

It is worth reiterating that much of the expected increase in the demand for electric power over the next half-century will come from the developing world, and the greatest demand in the first few years will be from India and China. The largest internal fuel resource in both of these countries is coal.

The EPRI roadmap recommendations are aimed at improvements within the next 20 years, with some views extending out to 2050. DOE's Vision 21 scenarios are somewhat similar (DOE, 1999). It would be fair to say that within these time scales it is not believed that the so-called renewables will make much of an impact on U.S. electricity generation. Even the most optimistic predictions suggest less than 20% by 2050. The situation for nuclear power is much less clear. Recently, renewal licenses have been granted for a number of the older nuclear stations, extending their use for a further 20 years. If this continues, it seems probable that the nuclear contribution will be maintained for the next few years. It should be remembered that the proportion will fall as the total generation increases; presently it is about 20%. As yet, the climate of public opinion does not appear to have reached a point where new nuclear plants would be acceptable. However, it should be said that this would be the easiest way for the United States to reach compliance with Kyoto at minimum economic impact.

In leaving this aside, the issue becomes one of how to manage carbon emissions with a generation fleet that will consist of increasingly fossil fuel-fired thermal stations. By far the largest part of this in the past has come from large coal-fired Rankine-cycle plants, in which coal is pulverized and pneumatically injected into a large combustion chamber where it is burned. The burners are designed to produce a stable fireball in the center of the combustion chamber, whose walls are constructed of vertical tubes at the bottom of which preheated water is introduced. The water rises in the tubes, reaching the boiling point a little above the burner level; the heat is transmitted from the fireball by radiation. The cooling combustion gases then pass over a succession of further banks of tubes, superheating the steam. This is then expanded through a high-pressure steam turbine and returned to the boiler to be reheated. The reheated steam is then further expanded through subsequent stages of the steam turbine and finally condensed in a large condenser cooled by water from a large local source—the sea or a river, for example.

As in any heat engine, the overall efficiency is limited by the maximum and minimum temperatures of the working fluid (strictly, by the ratio of the absolute temperatures, which sets the Carnot limit). For the temperatures attained in a conventional large steam plant (538°C or so), the overall efficiency of a Rankine cycle is about 38% (coal pile to busbar). Efficiencies as high as 41% were achieved as long ago as the 1950s, but the maximum temperatures required (650°C) were subsequently avoided because of materials problems. Research is currently in progress all over the world to attain higher-efficiency Rankine-cycle plants, but it seems unlikely that efficiencies much greater than 45% will be attainable in the immediate future.
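The Carnot bound for these steam conditions can be computed directly. The condenser temperature of roughly 30°C is my assumption (it is set by the cooling-water source); the gap between the ideal bound and the actual 38% reflects the irreversibilities of a real Rankine plant.

```python
# Carnot limit for the steam temperatures mentioned in the text.
# Condenser (cold-side) temperature of ~30 C is an assumption.
def carnot(t_hot_c, t_cold_c):
    """Ideal heat-engine efficiency from absolute temperature ratio."""
    return 1 - (t_cold_c + 273.15) / (t_hot_c + 273.15)

eta_538 = carnot(538, 30)   # conventional large steam plant
eta_650 = carnot(650, 30)   # the materials-limited temperature

print(f"Carnot limit at 538 C: {eta_538:.0%}")
print(f"Carnot limit at 650 C: {eta_650:.0%}")
```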

Recently, essentially all of the generating plants that have been ordered have been advanced high-efficiency combustion turbines fired with natural gas. Many of these are "combined-cycle" machines, in which the hot exhaust from the combustion turbine (Brayton cycle) enters a heat recovery steam generator (HRSG); the steam from this is expanded through a steam turbine (Rankine cycle). Largely as a result of research funded by DOE's Advanced Turbine Systems (ATS) program, the newest generating systems will achieve an overall thermal efficiency of 60%. The largest of these systems generate approximately 400 MW(e) (megawatts of electrical power), which is the preferred size in the modern utility environment in the United States. In addition, they are relatively cheap to build, about half the cost of a fossil-fired steam generating plant of similar capacity, and the construction time is relatively short. However, they are natural gas fired; this fuel currently costs significantly more than coal on an energy basis, and there is some concern that a significant increase in demand may result in a further increase in price.
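The way two cascaded cycles reach ~60% can be sketched as follows: the Rankine bottoming cycle recovers work from the Brayton exhaust heat. The individual stage efficiencies below are illustrative assumptions, not vendor figures.

```python
# Combined-cycle efficiency: the bottoming (Rankine) cycle converts a
# fraction of the heat rejected by the topping (Brayton) cycle.
# Stage efficiencies are assumed for illustration.
eta_brayton = 0.38   # gas turbine alone (assumed)
eta_rankine = 0.35   # HRSG plus steam turbine on the exhaust (assumed)

eta_combined = eta_brayton + (1 - eta_brayton) * eta_rankine
print(f"Combined-cycle efficiency: {eta_combined:.0%}")
```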

For many years, both DOE and EPRI have been conducting research into the advanced gasification of coal, and the use of the product in current-generation gas turbines has been demonstrated at relatively large scale. This technology is called integrated gasification combined cycle (IGCC) and is thus available as an option if either the price of natural gas rises too high or the availability is insufficient.

To give an idea of the magnitude of the problem, it is worth quoting an analysis done by EPRI a few years ago of a hypothetical Rankine-cycle generator located in Kenosha, Wisconsin, that was burning Appalachian bituminous coal (EPRI, 1991). The coal contains 71.3% carbon, 6.0% moisture, 9.1% ash, 4.8% hydrogen, 4.8% oxygen, 2.6% sulfur, and 1.4% nitrogen (by weight, ultimate analysis). The heating value is 13,100 British thermal units per pound (30,470 kJ/kg). The net efficiency of the unit (from the coal to the electricity delivered to the system busbar) is 35%. The corresponding coal burn rate is approximately 125 tonnes per hour. Typically, boilers operate with approximately 5% excess oxygen, and the flue gas characteristics are shown in Table 4.1.

TABLE 4.1. Flue Gas Characteristics (principal components only).

From Table 4.1 it can be seen that the quantities of CO2 are very large; if these were to be 100% sequestered as calcium carbonate (CaCO3), this would represent 666 tonnes of product per hour, or close to 16,000 tonnes per day. The cation source would amount to the equivalent of 9,000 tonnes per day. This compares with a coal supply of approximately 3,000 tonnes per day and an ash production of 270 tonnes per day. The masses involved are more than 50 times greater than those involved in flue gas desulfurization, which is a practice used in essentially all U.S. boilers firing high-sulfur coal. It should also be noted that the CO2 represents about 18% by weight of the exhaust gas and 15% by volume, which emphasizes its relatively dilute character. Since this "typical" plant produces 2.32 tonnes of CO2 per tonne of coal, the utility industry is currently producing 2.1 × 10⁹ tonnes of CO2 per year from burning coal. This does not take account of the CO2 produced from units burning natural gas.
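These figures can be reproduced from the data quoted above using simple molar masses; the small differences from the quoted 666 tonnes per hour and 9,000 tonnes per day presumably reflect rounding in the original analysis.

```python
# Sanity check on the model Kenosha plant and its sequestration
# mass balance. Molar masses: C = 12, CO2 = 44, CaO = 56, CaCO3 = 100.
coal_t_per_h = 125            # burn rate, from the text
heating_kj_per_kg = 30_470    # heating value, from the text
efficiency = 0.35             # coal pile to busbar

thermal_mw = coal_t_per_h * 1_000 * heating_kj_per_kg / 3_600 / 1_000
net_mw = thermal_mw * efficiency                # a ~370 MW(e) class unit

co2_t_per_h = coal_t_per_h * 2.32               # 2.32 t CO2 per tonne coal
caco3_t_per_h = co2_t_per_h * 100 / 44          # product if fully carbonated
cao_t_per_day = co2_t_per_h * 56 / 44 * 24      # cation requirement as CaO

annual_utility_co2 = 900e6 * 2.32               # from ~900 Mt coal per year

print(f"Net output: {net_mw:.0f} MW(e)")
print(f"CO2: {co2_t_per_h:.0f} t/h; CaCO3 product: {caco3_t_per_h:.0f} t/h")
print(f"CaO: {cao_t_per_day:,.0f} t/day; utility CO2: {annual_utility_co2:.2e} t/yr")
```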

U.S. coals have sulfur contents that range from as low as 1% to more than 4.5%, and environmental regulations require that this be removed from the combustion gases before they are released from the stack into the atmosphere. This is done by flue gas desulfurization (FGD) systems, which are located between the boiler exhaust and the stack. Since the regulating legislation was passed in the 1970s, there has been considerable research and development on FGD systems, and more than 30 are now commercially available. The majority are "wet" systems and depend on contacting the exhaust gas with an aqueous slurry of calcium oxide, for example; the product is calcium sulfate, which is environmentally benign and (as gypsum) has some commercial value. The product slurry typically goes to settling ponds and is then trucked away. Some dry systems are available for plants in regions that are relatively arid, but the basic chemistry is the same.

The utility industry is also familiar with satisfying environmental legislation that limits the emission of oxides of nitrogen (NOx) from the plant. This can be done either by modifying the combustion process (low-NOx burners) or by postcombustion techniques, such as selective catalytic reduction.

Particulates are also removed from the combustion gas, by techniques such as electrostatic precipitation (ESP) or the use of baghouse filters.

The point here is to indicate that physical and chemical methods of removing contaminants of various kinds are well known in the industry, and this involves the treatment of the entire exhaust gas stream.


This leads to the other major option in carbon management. As indicated above, in the United States the utility industry is the prime target for carbon control. Our first priority is to address the control of emissions from these units because we rely on coal-fired Rankine steam plants for more than half of our electricity, and because there seems little prospect of materially reducing this for some years. The coal-fired, relatively low-thermal-efficiency units are the most significant CO2 emitters in the utility system. Control of emissions from these units involves the separation of CO2 from what is a relatively dilute exhaust gas, its capture in some way, and finally its disposal in some environmentally acceptable and long-lived manner. This last step is called sequestration.

There have been several meetings over the last few years addressing this approach. In particular, the U.S. Department of Energy recently summarized the issues in Carbon Sequestration: Research and Development (DOE, 1999).

The separation of CO2 from gas streams is reasonably well known. The most obvious example is its removal from natural gas. In some cases, the amounts may be quite large: some economically recoverable natural gas reservoirs contain significant amounts of carbon dioxide. The Sleipner West field in the North Sea (for example) contains 10% by volume CO2; the sales specification is not more than 2.5%. Statoil (the Norwegian state oil company), which operates the field, uses an amine solvent technique to separate the excess, which is then pumped into a reservoir 1 km below the seabed. Approximately 1 million tonnes of CO2 is separated annually, which is about 40% of the model Kenosha plant described above.
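The "about 40%" comparison can be checked against the model plant's output derived earlier (290 tonnes of CO2 per hour at full load).

```python
# Sleipner West separates ~1 Mt CO2 per year; compare with the model
# Kenosha plant at full load (290 t/h, derived from the text's figures).
kenosha_t_per_yr = 290 * 24 * 365        # ~2.5 Mt CO2 per year
fraction = 1.0e6 / kenosha_t_per_yr

print(f"Sleipner separation as a fraction of the model plant: {fraction:.0%}")
```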

Sleipner West illustrates one aspect of the current approaches to managing CO2 in coal-fired fossil systems. The favored approach is to somehow generate a CO2 stream that is highly concentrated. The DOE report mentioned above lists the following methods:

  • Chemical and physical absorption,
  • Physical and chemical adsorption,
  • Low-temperature distillation, and
  • Gas separation membranes.

Chemical absorption is preferred to physical absorption for low to moderate CO2 partial pressures, which is the case for fossil power plant exhausts; typical reagents are alkanolamines such as monoethanolamine (MEA). However, these have to be regenerated using hot steam stripping to produce the high-concentration CO2 stream, and various studies have shown that this regeneration imposes a significant economic penalty.

Adsorption processes depend on materials having very high specific surface areas and a high selectivity for the target gas. Zeolites are naturally occurring examples of such materials. An International Energy Agency (IEA, 1998) study concluded that this approach was unlikely to be economically viable for power plants, but the DOE report notes that adsorption techniques have been used in some large commercial CO2 point sources in hydrogen production and natural gas cleanup systems.

Low-temperature distillation is widely used commercially to produce high-purity CO2 from high-concentration sources. It does not appear to be appropriate for power plant exhaust gas treatment. Gas separation membranes have not been used to any great extent thus far for CO2 separation, but there has been a great deal of interest in the development of gas separation membranes in recent years, and this may well be an area meriting further research.

There are some interesting variations on addressing this overall problem that could change some of these conclusions. For example, the use of an oxygen-blown IGCC approach will produce a high-concentration CO2 gas stream since there is no nitrogen diluent. In addition, the sulfur and fuel-bound nitrogen species can be removed in the gasification process. Adding a shift reaction further increases the CO2 generated within the fuel preparation reactions preceding combustion. Furthermore, the combined-cycle aspect of the process can lead to an improvement in overall cycle efficiency.

Having produced, by whatever means, a high-concentration CO2 stream from the fossil fuel-fired plant, the issue moves to sequestration. Again, the most common approach at the moment is to compress the gas to form a liquid, which can then be pumped through pipelines to a sequestration site. Such sites include the following:

  1. Geological sites
    • Deep porous strata
    • Deep saline aquifers
    • Freshwater aquifers unconnected to potential drinking sources
    • Spent oil wells
    • Depleted natural gas wells
    • Deep unminable coal seams
  2. Marine sites
    • Very deep regions
    • Shallower regions that favor carbon dioxide hydrate formation
    • Near surface regions that allow biological capture processes

All of these sites have been discussed in considerable detail over the last three to five years in a number of symposia. The DOE report referred to above summarizes a number of them. The major issues associated with this concentration-pumping transfer-sequestration scenario are the following:

  • The economics of the concentration step from the very large volume of relatively dilute exhaust gas from a utility boiler,
  • The problems associated with the distance over which CO2 will have to be pumped between the generation site (the power plant) and the sequestration site,
  • The risks associated with leaks in the transport lines (since, unlike natural gas, released CO2 will concentrate in low-lying regions adjacent to the leak and is not easy to detect),
  • Leaks from the geological repositories,
  • The ultimate capacity of geological repositories, and
  • Local environmental effects on marine repositories.

Many authorities believe that these issues are not insurmountable from a technical point of view, but most also agree that licensing and insurance issues in the near term may present problems.

There is a clear preference for very long-lived or even permanent sequestration. Models for such permanent sequestration are offered by geological processes, since it is clear that over geological time scales, a considerable amount of CO2 has been sequestered—for example, as oolitic limestone deposits and dolomite deposits. Nature has sequestered carbon in two ways:

  1. As calcium carbonate generated by marine animals of various kinds, principally as shells or exoskeletons that are deposited on seabeds—this appears to be the method of formation of the very extensive oolitic limestone beds; and
  2. As carbonates generated by silicate → carbonate exchange, in which the by-product is silica (SiO2)—this is often referred to as the weathering process.

It has been suggested that some of the geological sequestration routes described above will eventually lead to permanent sequestration through the second of these paths. The rate of the exchange reaction is currently unknown, and the possibility of accelerating it, either by catalysis or by the use of high-surface-area silicate materials, has been discussed but not studied. This would seem to be a fruitful area for further research. There has, of course, been a significant discussion of the first option, in the biomimetic approach proposed by Bond and coworkers (New Mexico Institute of Technology, Personal Communication). Another approach that has been proposed is to catalyze the ocean processes responsible for the near-surface capture of CO2, for example, by using iron salts spread on the surface.

While the benign and more-or-less permanent sequestration offered by these techniques is attractive, as is the potential of being able to eliminate the concentration and pumping steps, the mass flows are still daunting. For example, if CO2 were to be sequestered by the pumping of seawater, with a calcium ion concentration in the seawater of 400 g/tonne, through a separation vessel at the utility site, 100% removal from a unit of the size of the model Kenosha plant would require a flow of 18 million tonnes of seawater per day. This seems like a very large number, and indeed it is, but the cooling water flow through such a unit would be of the order of 2.5 million tonnes per day. (Note that other methods for capturing the CO2 as carbonates are also being considered, and it is by no means clear that such a large flow through the plant is unavoidable!)
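This seawater flow is a simple stoichiometric reconstruction; the gap between the result below and the quoted 18 million tonnes per day presumably reflects rounding and an allowance for incomplete calcium extraction.

```python
# Seawater flow needed to supply calcium for full carbonation of the
# model plant's CO2 output. Molar masses: Ca = 40, CO2 = 44 g/mol.
co2_t_per_day = 290 * 24      # model plant CO2, derived from the text
ca_g_per_tonne = 400          # calcium concentration in seawater

ca_needed_t_per_day = co2_t_per_day * 40 / 44
seawater_t_per_day = ca_needed_t_per_day / (ca_g_per_tonne / 1e6)

print(f"Seawater required: {seawater_t_per_day / 1e6:.0f} million tonnes/day")
```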


The important point in all this is that the problem is very large, whatever method is chosen to achieve a solution. There is no clearly superior method available at this time, and careful and thorough research is necessary for all the candidates that have so far been proposed. The hunt for as yet undiscovered approaches is an important—perhaps critical—part of the necessary research.

It is quite important not to commit too early to a process that, after some experience, turns out to be unsuitable or to write legislative goals that cannot be attained at a reasonable cost.

This is obviously an argument for more research, but it is also important to understand that in all aspects of this complex issue, the clock is running. Planning the research that is needed, specifying the goals that must be achieved, and deciding the times by which answers must be developed are essential. There is a large stakeholder group that includes in one sense everybody! Without good understanding by the stakeholders, acceptance of the limitations and the consequences will not be possible.

Roadmapping is, in my view, the only planning technique available to us that can develop a research approach appropriate to the problem.


  1. Bond, G.M. Personal communication. New Mexico Institute of Technology.
  2. Department of Energy (DOE). Carbon Sequestration: Research and Development. U.S. Department of Energy Report DOE/SC/FE-1. Washington, D.C.: U.S. Government Printing Office; 1999.
  3. Electric Power Research Institute (EPRI). Electric Technology Roadmap: 1999 Summary and Synthesis. Report C1-112677-V1. Palo Alto, Calif.; 1999.
  4. Electric Power Research Institute (EPRI). Economic Evaluation of Flue Gas Desulfurization Systems. EPRI Final Report on Research Project 1610-6, Report No. GS-7193, Vol. 1. Palo Alto, Calif.; 1991.
  5. International Energy Agency (IEA). Carbon Dioxide Capture from Power Stations. 1998.
  6. National Academy of Engineering (NAE). Technological Trajectories and the Human Environment. Washington, D.C.: National Academy Press; 1997.
  7. Starr, C. Sustaining the human environment: the next two hundred years. In: Technological Trajectories and the Human Environment. Washington, D.C.: National Academy Press; 1997.


Jim Spearot, General Motors: Given your concerns regarding the licensing of large, centralized power generating stations and plants, what are your thoughts on a distributed electrical production system using fuel cells, and perhaps natural gas, as a way to get around the additional needs for electricity?

John Stringer: Thank you. I have a slide on this that I didn't put in. The issue of distributed power is one that has been exercising us for quite some time. We feel that distributed power does have a place. Exactly what that place is, is quite difficult to calculate. In very simple terms, it depends on whether, in our jargon, you can "snip the wires."

By this I mean that if somebody has a small power plant, a distributed power plant for say a mall, there are two possibilities. In the first, they rely entirely on their own power plant for their needs and do not require a utility to provide a backup in the event that their own unit goes down. In the second, which is much more common, they require the utility to put a line in and be available to them as a backup. The overall costs of the two are very different. The installation of the line is a cost item, of course, but the most significant item is the reserves that the utility has to maintain—unused mostly—to be able to supply them in the event they need it. This represents a capital cost to the utility that generally is not earning any money and for a large entity may be substantial. If the utility requires them to pay for what amounts to a rental for this backup, then the economic advantages of the distributed system may well disappear. That's one issue, and there are some other issues connected with distributed generation, but nevertheless, it's an excellent question, and we are looking at it very carefully.

Alex Bell, University of California at Berkeley: In separate parts of your talk, you discussed the desirability of burning a fuel with pure oxygen and then scrubbing out the CO2, as opposed to burning the same fuel in air prior to carrying out the removal of CO2. Can you talk about the trade-off in energy? Is there any net energy efficiency gain in large-scale air separation, where you use oxygen to burn your fuel and then remove CO2 from a nitrogen-free exhaust gas?

John Stringer: Again, it's a good question. For the calculations that we have done so far, with the separation techniques that we have available to us at the moment, the answer is yes. This calculation is quite difficult. For a coal-burning Rankine plant, the figures I gave show that the actual exhaust is quite dilute in CO2—around 18%—and consequently, a concentration process involves moving a considerable volume of gas. We and others have done detailed calculations for the widely used amine solvent technique for removing CO2 from a gas stream that was used at Sleipner West, for example, and it doesn't make economic sense at the moment. The costs aren't wholly unreasonable, but it isn't all that obvious that they can be reduced sufficiently to make the process economic for treating exhaust gas.

Alex Bell: Can you quote a figure in terms of the percentage of heating value of your fuel that goes into CO2 separation by one strategy versus another, because that would put everything on a common footing?

John Stringer: For some of these things we have the numbers. The calculations for the amine stripping were done by the International Energy Agency a couple of years ago. I don't have the numbers at my fingertips, but we have redone the numbers and we come up with something that is about the same. However, other people also have done the calculations, and these are quoted in the DOE report that I mentioned. As I recall, their numbers are slightly more optimistic than the IEA results, but I don't know what differences led to that. There are numbers available, and there are a number of studies, largely funded by DOE, that are currently examining the economic issues. EPRI has a number of industry-accepted methods for doing economic analyses of this kind that have been applied to, for example, flue gas desulfurization techniques.

George Helz, University of Maryland: You mentioned a figure that I have often seen. It's something like 38% efficiency in a typical plant—that is, within the plant—from the coal entering the firebox to the wire carrying electricity away from the plant. I have often wondered what the total efficiency is from the coal deposit at the mine to the heat in the consumer's toaster. Do you have such figures? Are they largely different from the 38%? Is there a lot of additional energy cost in the total stream?

John Stringer: Coal prices in the United States are fairly modest; consequently, the costs incurred in the transfer from coal mine to plant are not really very large. In terms of efficiency, there isn't any significant penalty at that stage—the total energy loss in transportation is really quite small. I haven't seen a calculation, but it would be very easy to do, since transportation within the United States is by train or barge. As for transmission losses in the United States, once again, they are, at the moment at least, fairly small. I think the question is interesting, and I should be able to express efficiencies in those terms. When I get back to EPRI, I will try to put something together.
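The "mine to toaster" question amounts to multiplying efficiencies along a serial chain. A minimal sketch follows, in which every stage value except the roughly 38% plant efficiency quoted in the discussion is an illustrative assumption, not a measured figure:

```python
# Back-of-envelope "mine to toaster" efficiency: the overall efficiency
# of a serial energy chain is the product of the stage efficiencies.

def chain_efficiency(*stages):
    """Multiply per-stage efficiencies to get the end-to-end efficiency."""
    overall = 1.0
    for eff in stages:
        overall *= eff
    return overall

# Assumed values: 98% for coal transport by train or barge, the ~38%
# plant efficiency quoted above, and 93% for transmission/distribution.
overall = chain_efficiency(0.98, 0.38, 0.93)
print(f"Overall chain efficiency: {overall:.1%}")  # roughly 35%
```

As the speaker suggests, the dominant loss is in the plant itself; plausible transport and transmission losses shave only a few percentage points off the 38% figure.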

Jack Solomon, Praxair: This is a partial answer to Alex's question. For the amine CO2 separation systems, the extra energy required is 30-50%. The penalty for an oxygen-burning plant, just to look at that, is 15-20%. Thus, the oxygen-burning plant is a little better. It depends a lot on the details.
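Solomon's percentages can be read as fractions of plant output consumed by capture. A rough sketch of what they would imply for the 38%-efficient coal plant quoted earlier (interpreting the figures as relative energy penalties is an assumption, since the discussion does not define them precisely):

```python
def net_efficiency(gross_efficiency, penalty_fraction):
    """Plant efficiency after a capture energy penalty, where the penalty
    is expressed as a fraction of the plant's useful output."""
    return gross_efficiency * (1.0 - penalty_fraction)

gross = 0.38  # typical coal-plant efficiency quoted earlier in the discussion
for label, penalty_lo, penalty_hi in [("amine scrubbing", 0.30, 0.50),
                                      ("oxygen-fired", 0.15, 0.20)]:
    worst = net_efficiency(gross, penalty_hi)
    best = net_efficiency(gross, penalty_lo)
    print(f"{label}: {worst:.1%}-{best:.1%} net efficiency")
```

On this reading, amine scrubbing would pull net efficiency down to roughly 19-27%, while an oxygen-fired plant would retain roughly 30-32%, consistent with Solomon's remark that the oxygen-burning plant is a little better.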

John Stringer: It depends, for example, on whether you've got a market for the nitrogen.

Jack Solomon: Also, you have to do something with the CO2. You mentioned integrated gasification combined cycle, and you mentioned penalties on the reliability. Do you have any comments on what those penalties are?

John Stringer: The experience base is, at this point, fairly small. I'm thinking mainly of the plant in Holland, where they sought a high degree of integration. The level of integration that has been required for this plant, coupled with the problems with the combustion turbine, caused it to be listed in the press as one of the major economic disasters in the Netherlands over the last 5-10 years. So the consequences were enormous there.

Now the fact is that you don't need that level of integration. The marginal benefits you get from full integration are relatively small. If you don't integrate, then the reliabilities are determined by the least reliable components of the plant.

Brian Flannery, Exxon Mobil: I understand that Finland is seeking a license to build a new nuclear power plant, partly in the context of what Finland has to do in terms of climate change. In the United States, we are going through relicensing. What are the prospects for nuclear energy in the United States? That is, what will the scale of nuclear be in the U.S. utility mix say in 10 years, and what are the prospects for having more nuclear?

John Stringer: We have slightly more than 100 nuclear plants at the moment, which are producing approximately 15-20% of the electricity in the United States. The first three relicensing cases have all gone through. We have another two or three in the works at the moment, and we have every reason to suppose that they will go through as well. Consequently, I don't expect that we will lose many nuclear plants in 10 years' time. We might lose one or two for basic reasons, like the New England one, but I expect it to be roughly the same. The reason there is a falling percentage of total generation is the growth in generation we expect over that time.

Will we build a new nuclear plant in the next 10 years? There's not a chance in the United States, I believe. In the next 20 years, maybe. It depends a lot on what is going to happen with global warming, I think, because once it gets to that point, I think people will be looking at all the possibilities, and one possibility is nuclear.

What will actually happen depends on the outcomes of cases currently going through the courts in which two or three big utilities are suing the federal government for not actually completing its part of the agreement to store the waste fuel. This issue won't be resolved in the near future, and of course it depends very much on not having another nuclear disaster.

Fred Fendt, Rohm and Haas: It has been maybe a quarter of a century since we've built a new nuclear plant in the United States. Has research continued in nuclear power plants in that time? Would a new nuclear plant be markedly different from any of the existing ones?

John Stringer: We had a lot of research done on conceptual systems—when I say research done, I mean up to the stage of fairly detailed planning studies. This has followed two major directions. One of them is the use of what's called a “pebble bed” reactor, and that's a higher efficiency type of reactor. The Germans played a large part in that research, and of course we had a program going in the United States. The South Africans, I believe, are looking at the possibility of a pebble bed reactor at the moment as well.

The other direction is building what we call an “advanced light-water reactor,” which is designed to be able to achieve a sensible and controlled shutdown in the event of a component failure, such as a circulation pump failure. We have all of the designs for this. It was funded by EPRI and by the Department of Energy. So I think the next reactor we build, whenever it is, is not going to look the same as the reactors we have at the moment.

Geraldine Cox, EUROTECH: When we're in a situation like this, we often have to examine the original paradigm under which we operate. In this case, it is the supply-demand of energy. With coal plants, I recognize the size of the resource and the need to continue its exploitation. Yet coal is very inefficient in the sense that it is not able to supply a just-in-time response to energy demand, where gas turbines and others can turn on and off much more efficiently in response to peak demand. A cold start in a coal plant to full operation takes several days. Is industry studying approaches to minimizing off-peak power production to store the energy in some way that it can be used more efficiently during peak periods?

John Stringer: Yes, it is, but I have got to tell you that the scenarios do not make an enormous difference. It's at the margin.

First of all, the overall generation pattern required by any utility consists of essentially a base load, together with a number of fluctuating demands with different time scales. There are some variations on a weekly time schedule. There are ones with six-month variations, there are daily fluctuations, and there are variations that are of much shorter times than that.

A popular television program comes on at let's say 7:30 p.m., and the demand just rockets up. So what we do is we always have a generation mix, because if you have something that has been designed to satisfy the base load, it can operate best at the design point. Then there are some that are peakers—for example, a simple combustion turbine, because it can turn on very fast.

However, if we go to the high-efficiency combustion turbine systems that I was telling you about—the combined-cycle ones—their start-up time is not fast. We can perhaps start them up in simple cycles quickly and then bring up the steam generator, the Rankine part of the cycle, over a longer period. However, they will be operating only at around 35% efficiency or so when they are operating in simple cycle mode. So, yes, we do have a mix, and there is a quite complicated dance that goes on about what you dispatch at any particular time.

The other question you had related to storage of energy not used. The best way is pumped storage. What you need is two lakes separated by about 1,000 feet of elevation. There aren't too many of those. Where you have them, they present the best opportunity. The pumps operate wonderfully. The hydrogenerator works great, and it's very high efficiency. There is a plant in the United Kingdom at a place called Dinorwig in Wales that has been operating like that for many years.
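As a back-of-envelope check on why pumped storage needs such a large head, the gravitational energy stored per cubic metre of water over a roughly 1,000-foot (about 305 m) elevation difference can be computed directly (this sketch ignores pump and turbine losses, which in practice reduce the round-trip efficiency somewhat):

```python
# Energy stored per cubic metre of water raised through ~1,000 ft (~305 m).
RHO_WATER = 1000.0   # density of water, kg per m^3
G = 9.81             # gravitational acceleration, m/s^2
HEAD_M = 305.0       # elevation difference, ~1,000 feet in metres

energy_joules = RHO_WATER * G * HEAD_M   # potential energy, J per m^3
energy_kwh = energy_joules / 3.6e6       # convert: 1 kWh = 3.6 MJ
print(f"{energy_kwh:.2f} kWh per cubic metre")  # about 0.83 kWh/m^3
```

At less than a kilowatt-hour per cubic metre, a utility-scale pumped-storage plant must cycle very large volumes of water, which is why suitable paired reservoirs are rare.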

We have put a compressed air storage plant in Alabama, not far from Montgomery, that uses a salt cavern. Essentially, there are big salt masses down there, and you can open up a big cavity by dissolving the salt by injecting water. At the surface, there is a gas turbine-driven generator, which also drives an air compressor, and you store the compressed air in the cavity. When the demand for electric power increases, the stored compressed air is expanded through a turbine, providing supplementary power generation. Actually, the operation is a little bit more complicated than that, but at least this shows the storage principle.

Heinz Heinemann, Lawrence Livermore National Laboratory: You extrapolated from the year 2000 through the year 2050 and showed an increase in excess energy demand from 38 to 70%. Does this take into consideration the limitations in transmission of energy from production to consumers?

John Stringer: That's an excellent point. The most sensitive part of the overall electricity production and supply system in the United States, at the moment anyway, is the transmission system and, to some extent, also the distribution system. These are jargon words. Transmission means the long-range transfer of electricity from the generator to the main transformer park. Then distribution is from these transformers to the final user.

Some components of the distribution system have been in place for a long time. In New York, for example, I am told that we are still using part of the distribution system that was put in by Edison. So ensuring high reliability in the power delivery infrastructure is the number one priority in the industry in the United States at the moment. The other reason I didn't have it on my slides is that it's not really a carbon management issue, but in terms of the electricity supply industry, it's very important.

Klaus Lackner, Los Alamos National Laboratory: I have two comments. One is I heard a large number of discussions on the question of increased efficiency, but I really would like to come back to Jae Edmonds' comment that ultimately we have to go to zero emissions. If you go there, I don't think you have a choice but to deal with sequestration.

The other point deals with scrubbers. I agree with your number. I would even say that calcium oxide is not an adequate scrubber for the simple reason that calcium oxide started life as calcium carbonate. You could use magnesium silicate instead. Again, the transportation issue would indeed be overwhelming.

So the conclusion from this would be that you have to put the processing plant at the site where your scrubbing materials come from, because that is the largest mass flow in the system. The nice thing is that mineral carbonation is actually an exothermic reaction, so it works without requiring energy. This would allow for a different location for the power plant.

John Stringer: Yes, thank you. I had a slide, which I didn't show, that was going to comment on your work, because I think it's extremely good. We are reviewing these alkaline earth silicate deposits and going through the reactions that you have described, the classic weathering reaction I think, as part of our continuing attempts to imitate the ways in which nature has achieved the very long term sequestration of CO2. This is an extremely attractive thing to do, and I think we need to do much more research in that particular area—for example, by studying the kinetics and looking for ways to accelerate the relevant reactions, but I think that's nice work of yours. I like it very much.

Glenn Crosby, Washington State University: I had a question for our last speaker regarding the research in new reactor types. Could you inform the audience concerning the advances that have been made, say, by the French, who are using nuclear generation of electricity much more predominantly than we are, into their reactor design and where they are in their reactor performance?

John Stringer: Their reactor designs are really not all that dissimilar to ours. What is different is in their overall plant: first of all, they have a much higher proportion of their electricity generated with nuclear plants, as you know. Again, most of their nuclear plants were built some time ago. There were a few differences, but there isn't an intrinsic difference.

It isn't like a Canada Deuterium Uranium (CANDU) plant, for example, which is quite different, or the gas-cooled reactors that the British developed. The French use a light-water reactor. However, their overall plant philosophy was ultimately based on getting the breeders going to close the fuel cycle. As you may recall, they had the Phénix demonstration plant, and they were planning to build the first of the actual units to do this, to make the whole thing work together.

It was called the Superphénix. They had a major problem with that because it used a liquid sodium coolant, and they had a coolant leak. This caused a very significant problem.

I don't think the problem was insoluble, but it was sufficient at that time to raise serious political and social questions inside France about whether France wanted to go down the route of the breeder reactor. At this point, the whole thing is on hold. At this time, EDF (Electricité de France) is looking at coal power plants. It has a big circulating fluidized-bed combustion plant built in France, and it is focusing on international marketing, so EDF is marketing as many or more fossil plants as nuclear power plants internationally. Nevertheless, the intrinsic nuclear experiment has been very good. EDF sells cheap power to everybody else around. So it's been successful.

Panel Discussion

Richard Foust, Northern Arizona University: One thing I haven't seen addressed in the presentations this morning has to do with the change in life-style. For example, in China they are moving from bicycles to scooters to automobiles. In projecting 50 years into the future, this may have a more significant impact on carbon emissions than population growth. Has this issue been dealt with in the models, and how do you do something like that?

James Edmonds, Pacific Northwest National Laboratory: The simple answer is yes, this is probably the biggest driver, as you rightly suggest. It's the largest force moving emissions upward. It is much more important than the simple numbers of people. It is the big-ticket item.

John Stringer, Electric Power Research Institute: I agree with that, and I might add one thing to it. We, and I think others, have looked at the growth of megacities—those with populations in excess of 50 million. Megacity growth leads to a possibility that carbon emissions may go back a little bit if the growth also incorporates a public transportation system.

Chandrakant Panchal, Argonne National Laboratory: This question concerns the cost and technical issues of CO2 transportation. You have to properly link the coal and fuel, sources of CO2, its production and distribution, and its users. CO2 must be separated from diverse sources—utilities, refineries, and so forth. Most sources are in towns and cities. Many different cost estimates have been made. Does anyone want to comment on the reality of these, or are we embarking on something that is going to be very difficult to do?

David Thomas, BP-Amoco: Concerning the cost of CO2 transportation, I have talked to people who build CO2 pipelines, and they use some rather interesting units. They say the cost for a pipeline for high-pressure CO2 (2,200 pounds per square inch or 150 atmospheres) is on the order of $15,000 to $40,000 per inch-mile of capacity. This means that a 10-inch pipeline is $150,000 to $400,000 per mile for construction costs.

In terms of carrying capacity, my recollection is that a 14-inch pipeline can carry on the order of 5 million tonnes of CO2 per year. I may be low there. As an operational number for transportation over a couple of hundred miles (300-400 km), I'm using on the order of $1 per ton of CO2. This is based on experiences in the Southwest, where they are moving CO2 down pipelines for enhanced oil recovery. It's on that order, but the exact number depends on how far you transport it and under what kind of pipeline conditions. There are people who say that if you want to do a complete sequestration job using centralized storage in reservoirs of various kinds, you would effectively need to double the natural gas infrastructure. I think that's probably the high end. The low end is probably 10% of that. So somewhere in the middle lies reality.
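Thomas's per-inch-mile arithmetic can be sketched directly; the dollar figures below are the order-of-magnitude estimates quoted in the discussion, not engineering data:

```python
def pipeline_cost_per_mile(diameter_inches, dollars_per_inch_mile):
    """Construction cost per mile scales roughly with pipeline diameter."""
    return diameter_inches * dollars_per_inch_mile

# $15,000-$40,000 per inch-mile, applied to a 10-inch line:
low = pipeline_cost_per_mile(10, 15_000)
high = pipeline_cost_per_mile(10, 40_000)
print(f"10-inch line: ${low:,}-${high:,} per mile")  # $150,000-$400,000

# At ~$1 per tonne over a few hundred miles, a 14-inch line moving
# ~5 million tonnes of CO2 per year would gross on the order of:
annual_gross = 5_000_000 * 1.0
print(f"~${annual_gross:,.0f} per year")
```

The spread between the per-mile construction cost and the per-tonne transport charge is what makes throughput and distance, rather than pipe diameter alone, the deciding economic variables.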

Alan Wolsky, Argonne National Laboratory: I wonder if anybody has thought about how much we should spend on this? I don't mean how much does any single option cost. I mean, Is there some moral obligation to future generations to now raise the price of carbon by 10%? If we did this, or other things, would we be doing our duty to the future? At some point, I think one has to ask this to get a grip on the problem and its feasible solutions. I wonder if the panel would share its thoughts on this.

Brian Flannery, Exxon Mobil: I don't think people have asked the question quite the way you posed it, but the challenge is that no one knows what the cost of controlling carbon in the economy is. Kyoto was an attempt to control the amount of emissions at unknown cost. Others have suggested options such as putting a cap on the cost and requiring a permit to use carbon. Now, however, you are up against exactly the question you asked, What is the right level? How do you do it? Why should you do it? You are also up against the question, What will it achieve? Bill Nordhaus is an economist who has tried to show ways that you might impose a cost now in the hope of achieving something over a long period of time. Jae will be much more familiar with his numbers than I am, but the type of cost conclusion he came to was a rather low cost, compared with the cost of emissions-based outcomes that people have looked at so far. The answer to what is delivered depends on the technologies that are produced over the next 20 or 30 years and the climate signal that this level of cost might generate. Frankly, I think here, we're in assumption space. We don't really know the answers.

James Edmonds: I would like to add one more point. Given the stock nature of the carbon problem, economics is fairly familiar with this cost exercise, and the problem was solved about 70 years ago. The answer is that the cost of controlling carbon is not a fixed price. You start off with an initial value, and it should rise roughly at the rate of interest of the economy. If the rate of interest is roughly about 5%, controlling carbon costs should rise at about 5% per year. So if you start off at $10 a ton in the first year, it should be $10.50 in the second year, and so forth.
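Edmonds' rule, that an efficient carbon control cost starts at some initial value and rises at roughly the economy's rate of interest, can be sketched as follows (a minimal illustration using the 5% rate and $10-per-ton starting point he quotes):

```python
def carbon_price_path(initial_price, interest_rate, years):
    """Price path rising at the rate of interest (a Hotelling-style rule)."""
    return [initial_price * (1.0 + interest_rate) ** t for t in range(years)]

path = carbon_price_path(10.0, 0.05, 3)
for year, price in enumerate(path):
    print(f"Year {year}: ${price:.2f}")
# Year 0: $10.00, year 1: $10.50, year 2: about $11.03
```

The compounding reproduces Edmonds' example: $10 a ton in the first year becomes $10.50 in the second, and so on.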

Alan Wolsky: Perhaps the work of Harold Hotelling is in the minds of those trained in economics. His work concerned the desired rate of capital accumulation or nonrenewable resource depletion. As I recall, his answer was the market interest rate. Consistent with his line of thought, it seems to me the comparison is between the wealth that you would otherwise pass on to future generations and the amount of money you spend now on passing on the same atmospheric conditions, and the interest rate has little to do with how much we spend now. The interest rate has something to do with how much Hotelling's work suggests we should increase that spending over time. Perhaps our economic colleagues would share their views.

James Edmonds: Well, actually it has a lot to do with it. It's the rate at which you can transform wealth from one period to the next, so it's a very important determinant of any efficient solution to the problem. There are an infinite number of inefficient solutions, which impose either extra costs or additional costs borne by somebody else. It is an important element.

Alan Wolsky: So, should we spend $10 per ton or $50 per ton this year?

Richard Alkire: Maybe we should go to the next question. I should point out that we discussed how to shape the content of this Chemical Sciences Roundtable workshop and came to the conclusion that the one question we would use to try to do that is, If there is a carbon tax, what research in chemical science and engineering would be needed in order to understand how to proceed? So while there are many issues to the topic beyond the chemical sciences and research, the focus is what could be done by the chemical sciences community operating in this climate that would be helpful. Next question.

Dennis Lichtenberger, University of Arizona: We have heard some different comments on synthetic fuels and synthetic chemistry in general. A couple of people commented that they had negative experiences with the synthetic fuels issue. In contrast, another person mentioned that his company is now building a synthetic fuel plant and that synthetic gas chemistry is very important in many of its products. What is the feeling for the future role of the chemistry in this area?

Brian Flannery: The synthetic fuels program I was referring to was liquid fuels from coal and oil shales. Synthesis gas chemistry is gas-to-liquids, which is a totally different technology. One of the reasons for that technology is to bring to market gas that otherwise doesn't have a market. The market it serves is totally different from the usual market for gas, because the process produces high-quality products that can be transformed into a premium liquid fuel, a diesel fuel, or a very interesting feedstock for a number of chemical processes; but that's a different market and regime from the efforts to provide massive amounts of liquid fuels from coal or shale. You also consume a lot of the energy when you transform gas to liquid. With coal, you are converting a vast resource into a liquid fuel, for which the domestic resource was smaller and there were problems with supply security.

So they are totally different technologies. The gas-to-liquid technology is an extremely attractive technology in certain options. The question is how to make it commercial. It does have interesting issues for climate change, because the emissions occur at the site of manufacture in a gas-to-liquid plant. It raises interesting questions about Annex 1 versus non-Annex 1 countries and where you would even site such a plant. The market may also raise interesting questions for corporations that have taken on internal carbon reduction targets, because emissions from these plants are operational emissions, not end-use emissions by the consumer.

Rosemarie Szostak, Department of the Army, Army Environmental Policy Institute: This question is for Brian, who had talked about government mandates as opposed to proactive behavior. We work under government mandate, not a proactive policy, although proactive would be nice. How would you envision implementing a proactive policy for carbon management that would be amenable to both the government and the general public?

John Stringer: It's an interesting question. Carbon mitigation will be similar to the experience we had with sulfur removal 20 years ago. At that time, the public was very keen to have sulfur removed. We went through the calculations and indicated that sulfur removal to the levels that the government wanted would increase the cost of electricity by about 35%. It was felt that the public desire to have sulfur removed was sufficiently high that it would in fact accept that. So legislation went through on that basis.

In fact, because of developments in technology, the actual cost impact now is quite a lot less than the 35% we had originally calculated. It's now probably down to around 15%. That is, the cost of electricity from coal is 15% higher than it would be if we didn't have to scrub the sulfur out.

Now I think the question with CO2 mitigation is going to come down to a similar question. How much is the public concerned about CO2 and global warming, and how much of an increase in the price of electricity is it prepared to accept?

Rosemarie Szostak: Turn that around. I'm asking, short of a carbon tax or mandating the control of carbon emissions, is there a way of being proactive that would be reasonable from a policy perspective? We certainly know from industry that it can be good business: the BP representative pointed out in his talk that you can decrease your emissions 10% and save money. How do you come up with a policy that is not a government mandate?

John Stringer: We can't see a positive revenue side to removing CO2. This was discussed quite a lot in the Department of Energy panel that I was on. Once again, nothing came through that looked like a reasonable market. We can possibly talk about this in more detail, about specific cases.

David Thomas: Let me add a little bit from our point of view. At the top of my list of targets was enhanced oil recovery, because that is the only use for which people are willing to buy CO2 and pay us real money for it, so we can generate a revenue stream that offsets the cost of capture. With some modest changes to the enhanced oil recovery processes, one can turn them simultaneously into sequestration opportunities, which is the reason they are very attractive.

I think I agree very much with Brian that there needs to be an economic drive for most of these sequestration opportunities and emissions reductions to occur. We are of course pushing the cost reduction and savings through energy management. At the first stage, we are targeting those opportunities that will generate a revenue stream.

I don't like the idea of adding taxation. If anything, I would prefer a tax incentive. In other words, if I reduce my emissions, I am given an incentive; the one who doesn't effectively pays a higher proportion of the tax, rather than being hit with a punitive tax.

Brian Flannery: I think that in the first place, there are always some opportunities to reduce emissions through energy efficiency, fuel switching, things like that. The problem is that once you have installed the new advanced turbine this year, you are not going to do it again next year. It's a question of the rate at which you can afford to deploy these new technologies and the total reduction they deliver.

I think industry is reasonably good—or at least on the scale of multinational corporations—at having management systems in place that identify, plan, invest, and operate to improve efficiency. I know we have an energy management system that has been part of our system since the mid-1970s. It's been refurbished from time to time. I must say we find it perfectly capable of identifying attractive economic opportunities to reduce emissions. We don't find emissions trading adds any bit to our capability to make these types of decisions or investments.

There are perhaps ways to raise awareness about opportunities to reduce emissions that are proactive. Renewables are also potentially an opportunity. It's not one our corporation sees as attractive to shareholders. We've been there, done that, and lost a lot of money frankly, but there may be small opportunities for renewables in niche markets.

I think that when you are talking about sequestration and hydrogen, you are not talking about economic ways to move forward at this point. You are talking about demonstration projects, or research and development to make them less expensive, but today I think that's as far as you can go. They will be expensive, especially at this point. There is no way around this. It is a question of whether research and development can perhaps make them less expensive or options you want to deploy 30 years from now. However, I'll repeat, research is doing something. Doing it well and thinking about the systems and the infrastructure implications represents a major planning exercise.

Alex Bell, University of California at Berkeley: I want to pose a question for the general issue of setting the research agenda, and let me create a context for this question. Let's look at electricity generation, where it has been identified that you want to do air separation to produce oxygen. You want to have efficient burners for the efficient conversion of fuel to energy, efficient means of electricity generation, efficient scrubbing or removal of CO2, and transportation of that CO2 to the burial site, and then efficient burial and disposal of the CO2.

So the questions are, Where are the long-term opportunities for research to be carried out by academics and researchers in the national laboratories? How do we use federal tax dollars efficiently for this purpose?

John Stringer: It's connected with the last point of the previous discussion. That is to say, What is the value of the eventual product? You can only spend money that in some way relates to the cost savings you expect. For sequestration, we really don't know what we are doing.

The things that you hear about, the Sleipner West project and the CO2 reinjection into wells and so forth, are really very small things that don't in themselves permit scaling up and addressing the major problems. Now the major problem is permanent sequestration. There was a brief mention of permanent sequestration in the form of the silicate-to-carbonate reaction: you don't have to worry about anything, it is there forever. On the other hand, putting CO2 into a hole in the ground is always a little disturbing, because what went down can come up. The experience with the injection of CO2 into oil wells is just like that: 30% of it comes straight back.

So there is a lot of research to be done on, let's say, the fundamental aspects of the full train of separation and sequestration. The things that we have at the moment are not good; in my view, they will not translate into practice.

James Edmonds: I might just add a couple of things to that. One is that you want a fairly broad portfolio, and it needs to go all the way back to some pretty basic science, because if you are not laying down the basic science foundations for the next generation of technology, you are probably not going to get what you are looking for. That includes the materials sciences and the biological sciences. Then, as you move up toward the kinds of things that the private sector tends to do best and you look at the technologies that are promising, you find that in many of the scenarios in which concentrations are limited, these technologies have a major role. Things such as commercial biomass look to be areas where funding is at present pretty minimal and the marginal value of increased support is fairly high. Similarly, fuel cells fall into this category.

I think one of the big things that will have to happen if capture and sequestration is ever to come to pass is that we will have to be able to guarantee that we know where the carbon is. In any of the scenarios discussed here this morning, when you integrate over the course of the century, you end up with captured and sequestered carbon denominated in hundreds of billions of tonnes. It doesn't take a rocket scientist to figure out that if you have a loss rate of only 1% and a stock of 200 billion tonnes, you are emitting 2 billion tonnes of carbon a year back from this reservoir. If at the end of the century the total emission that the planet gets is on the order of 4 billion to 5 billion tonnes of carbon per year, you've got a problem.
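Edmonds's back-of-the-envelope leakage arithmetic can be sketched as follows; the stock, loss rate, and budget figures are the illustrative numbers from his remarks, not measurements:

```python
# Back-of-the-envelope check of the leakage arithmetic in the discussion.
# All figures are the illustrative ones quoted above, not measurements.

stock_gt = 200.0   # assumed cumulative sequestered carbon, billions of tonnes (Gt C)
loss_rate = 0.01   # assumed leakage rate: 1% of the stock per year

leakage_gt_per_year = stock_gt * loss_rate
print(f"Leakage: {leakage_gt_per_year:.0f} Gt C per year")  # 2 Gt C/yr

# Compare with an end-of-century emissions budget of roughly 4-5 Gt C/yr,
# the range quoted in the discussion.
budget_gt_per_year = 4.5
print(f"Share of budget: {leakage_gt_per_year / budget_gt_per_year:.0%}")
```

The point of the sketch is that even a very small annual loss rate, applied to a stock that has accumulated for a century, produces a leakage flux comparable to the entire allowable emissions budget.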

Being able to tag the carbon and to guarantee that you know where it is is going to be an extremely important issue. I think it comes back to some of the questions that people like Klaus were asking: controlling and monitoring become fairly easy if the CO2 is taken off as a solid. There may be some value in pursuing an investigation along those lines, as well as in removing the carbon as a gas.

Brian Flannery: Just a very quick couple of things to add. It's easy to make a list of things that might be useful, but more to the point, you would probably have to think in terms of creating a process with some scenarios or portfolios, some identification of issues, and their linkages. If you make progress in one area, if you overcome a barrier, it can suddenly open up a whole new chain of thought.

So I think part of it is not just that we need metallurgy or materials science or reaction separations. We need all of these, it's true, but you need some systems context in which to think. This context certainly has to involve basic things like compressors and pipelines and central plants or distributed plants, because then you see whether a breakthrough in a given area actually makes a difference. If suddenly distributed generation of hydrogen makes more sense than centralized generation and distribution, you may go off in totally new directions.

In terms of a national interest program in technologies that might come on line in 10, 20, or 30 years, you need some context for it. Then, within this program, you have specific areas that you go after, which may be very deep fundamental science, or monitoring equipment, or control systems, or scenario planning. But you need a process, and it has to look at the overall situation, be flexible, and have science recycled into the agency planning process. Frankly, I think this has been a problem in our natural science research on climate change. The money is being spent, but the science is not being recycled in terms of what we are learning and how the research should be adjusted.

So I think we need a process-oriented approach in which there are some scenarios or portfolios but the planning should contain a recycle step to account for learning. The learning should include especially scientific and technical input, not just political input.

Tom Brownscombe, Shell: I just want to make a quick comment on the cost issue that was raised before. Our thinking is that the total cost for the transportation, sequestration, infrastructure, everything, is probably around $10 a ton of CO2, similar to what people are willing to pay for CO2 for enhanced oil recovery. I also think, as someone mentioned before, that the majority of the cost lies in the capture and separation of the CO2, which would be the prime thing to lower through research.

John Stringer: Just one small point about this. You can never compare these things simply on the basis of the price that somebody is prepared to pay for the product, because the amount you are dealing with is far more than the market can absorb. When we started out with sulfur separation, we thought in terms of the gypsum product; we looked at the price that people were paying for gypsum and calculated the costs on that basis. We saturated the gypsum market faster than you can believe. It is nice that everyone builds these little cubicles in their offices, and if the cubicles ever get down to the space just occupied by your body, we can probably get rid of quite a bit more gypsum and justify all our sulfur removal. Yet the fact is that you can't justify a separation activity on the basis of the current market.

David Thomas: Actually, I didn't get it in before. When we were talking about the process and the plan and the roadmapping for research in the area, I'm sure many of you are aware that the Department of Energy, Fossil Energy, and Office of Science put together what they call a carbon sequestration roadmap. I'm paraphrasing the title badly, but there is a section in there that deals with chemistry and chemical processes, and a lot of thought has been put in that. I just wanted to get it into the record that we make sure we look at what has already been done before we reinvent it, because there has been a great deal of effort already invested.

Bill Millman, Department of Energy: I come from the basic research community. One of the questions that you always get when you deal with Congress concerns the introduction of basic research into technology. One of the things that has been going around in the carbon area is the observation that if you look into the future, you see developing countries becoming a source of major new inputs into the carbon arena.

One of the things that has been going on in the government is talk about using the cell phone model: if a country has no existing infrastructure, you can introduce brand-new technology without suffering any of the cost penalties that you would in a developed country. I would like to get your input on how that would work in a heavier industry.

Brian Flannery: I could make a quick comment. I don't think I can provide you with an answer to the way you framed the question, but it's true that in developing countries, in industries that are building grassroots facilities, such as the petroleum industry in many cases, you don't go back to your designers and say, let's build what we built 40 years ago in Louisiana. The materials have changed; the control systems have changed. You build a much, much more efficient, better-thought-through plant at this time. Now, that is not true in all industries, and it's not true in every case, but it's an enormous opportunity.

On the other hand, introducing brand new technologies in developing countries can add to the risk. You don't have the enabling infrastructure. You may have difficulties moving people and materials in and out easily. It is very risky, and if you are asking developing countries to participate in this, it is not clear that it's the best strategy at all. A good strategy for a developing country is to introduce good, proven, best-available technology, which usually makes an enormous improvement in things like local emissions and air quality. Yet asking a developing country to be the site perhaps of the introduction of brand new technology and its enabling infrastructure could potentially be a very risky enterprise.

We are also thinking about the 100-year problem. In many of the models, the developing countries become a lot more developed over the next 30 years, and their capacity to introduce new technologies could be much better. I'm increasingly convinced, however, that it's not so much an economic problem as a capacity-building problem in developing countries, where by capacity I mean a whole range of issues.

These issues include the ability to make good decisions, to plan, to have legal structures, and to have an appropriate tax and regulatory framework—one that governments should be addressing on a priority basis for all kinds of reasons besides climate change.

Bill Millman: I would like to comment, and maybe John could address this. One of the areas in which there has been a significant amount of study is the nuclear power industry. You can't build a new nuclear power plant in this country. A lot of new technology has been developed since the last nuclear power plant was built, maybe 30 years ago. Building nuclear plants in Third World countries, on the other hand, where they have no electric industry, or just a very incipient one, provides some opportunities, and in fact, there has been talk about government subsidies for that. Could you comment on this?

John Stringer: Given a government subsidy, that is, low-cost capital for building the plant, nuclear power is the cheapest way of generating electricity. There is no doubt about this. The old promise of electricity too cheap to meter (which the older among you may remember) didn't actually pan out, but it didn't pan out because of the additional costs that were added as a result of government actions at the time.

If you can subsidize the capital costs, it works out very cheaply. However, the point you made, that this isn't all there is, is very important. If you go into a place, you need the infrastructure, in particular the infrastructure that relates to reliability; the business of keeping a plant running and making sure it operates in a reliable and safe fashion requires an educational infrastructure that often doesn't exist.

I agree entirely with your point, and I think within the next 10 or 20 years the infrastructure problem will go away, but in the immediate future, the business of getting stuff maintained is very difficult—even things as trivial as automobiles. There are some parts of the world where you can buy an automobile, but you can't get it fixed, no matter what, and it would be a problem if one had that issue with a nuclear plant.

Antonio Lau, BP: We discussed zero emissions. We discussed spending almost unlimited amounts to save the future of the planet. We discussed the solutions. Do you know how much money is spent on analyzing the problem of climate change? One can track the CO2 concentrations, temperature changes, and so forth.

One can also analyze radiated heat transfer and things like that. Perhaps in the early stage we may want to put an emphasis on that side of research, and then once we verify that a problem exists, we should go forward and find a solution.

Brian Flannery: I'm willing to make a very quick stab at your question. I think everyone is convinced that the buildup of greenhouse gases traps extra infrared radiation in the atmosphere and that this could lead to a problem down the road. There are few people who aren't convinced, at least in terms of the radiative transfer issues. The deep scientific questions in climate change concern the feedbacks that may occur and the future of forcing from human actions. Once heat is trapped, climate processes such as clouds, ocean circulation, hydrology, and moist convection can change in ways that science cannot yet predict.

How big the problem will be, how soon it will arrive, and with what positive and negative consequences remain fairly uncertain, but I think of it in terms of managing a risk. The political system is convinced that the scientific evidence is there and that there is a risk that needs to be managed.

The question we are all faced with, and have been for a long time is, How? How much effort do you put in, with what scope, and what initial emphasis? The Kyoto Protocol negotiation is certainly a clear sign of this, though it does look as though it is running into some difficult waters, because it may have tried to do too much too soon at too high a cost, and it doesn't address the long-term issues.

So now the question is, from the perspective of scientific research, What can the research community contribute to developing improved solutions? I would also add, though I don't think this is the focus of your meeting, What can the chemistry community add to our ability to understand the climate issue?

There is a great deal of money being spent on research. I think the United States has been spending in the neighborhood of $2 billion a year for the last several years. I do think that there could have been a better framework in which the research money was spent in terms of delivering answers to the societal questions, as opposed to the scientific inquiry questions or the political questions that are being asked.

There could be a better framework for the research to proceed. However, the world is spending a few billion dollars a year on research on climate change, and certainly the chemistry community has a lot it can contribute to this too on a fundamental level of gauging what the real nature of the issues is, what the nature of natural climate variability is, and related questions.

John Stringer: I can add something to that. First, I agree with all of this. There is a significant amount of research being done that hasn't been discussed here, but that doesn't mean it isn't being done. My institute does climate research, for example, and we have supported climate research that includes looking at some of the models suggesting that the warming over the last 100 years, or whatever number of years you think we have been warming for, is not necessarily causally related to the increase in CO2 that occurred over the same period. In these models, the two things happened together, but they are not necessarily related. That is the basis of the disagreement. When I was very young, we had gone through a period of about 10 years in which temperatures had been falling progressively over much of the world. At that time, we were looking at the risk of the next ice age. I find it entertaining to remember back to those days and realize that now we are talking about the exact reverse.

So I'm sensitive to the point that you make, that maybe the case isn't proven. However, my point isn't really that; it is related to the business of risk scenarios. When we first ran into problems in the nuclear industry, we tried to do careful calculations of risk, and risk-based planning of course is well known. The Moon mission, for example, was based on asking the astronauts what risk they would accept, and they said they would accept the same risk as being killed by a car in Houston. I don't think they would put it that way now, but that was the number they chose at the time, and that was the number used as the basis for risk calculations in the whole Moon program.

Now we tried to do a similar calculation for the risks involved with nuclear energy, and we found that the risks were very small. These calculations have been borne out by the performance over the subsequent 20 years of the nuclear industry, where the accident rate has been extremely low. Nevertheless, it became clear that the public didn't care that we had calculated that the risks were very small. It became a matter of what is called risk perception. Consequently, what we did was to shift lots of our research in this area to risk perception.

So at the moment, it doesn't matter whether or not our industry's point of view is that there is a risk of global warming. If the perception of the electorate is that there is global warming, then legislation will follow, because that's the way society works, and we had better be ready to react to it. So I always think in terms of what the legislation will look like, and we negotiate with people to try to get legislation dealing with things that we think stand a chance of being done.

So that's the problem. It doesn't mean that we should stop climate research based on alternate scenarios, because at some point we may change people's perception. Yet that's the way it goes at the moment: risk perception is more important than risk.

Richard Wool, University of Delaware: I'm looking for marching orders. I know we are trying to set the research agenda, but I'm wondering what each of you thinks are the critical issues or strategies? If there was one thing that each of you would advise somebody to do, what is it?

Richard Alkire: One thing for the academic research community?

Richard Wool: The thing the academic research community could do that would have the greatest impact on the carbon management scenario.

David Thomas: This is not flippant. I think we need to be working on fusion a great deal more than we are. I wish we would spend far more effort in that area, because I think it has the potential to take us beyond hydrocarbons. I would argue that, although we will not run out of hydrocarbons in the foreseeable future, or at least in the next several hundred years, we are living through a hydrocarbon age, much as there was a wood age, and so on. It may last 200 years, 250, 300, or 600, but there will be an end to the hydrocarbon age, and my question is, What is beyond it?

James Edmonds: Research is needed on the full range of sequestration technologies, going all the way from pulling the carbon off as a solid, to disposing of it in the form of a gas, to natural reservoirs, including soils and forests.

Brian Flannery: I'm going to offer you two suggestions. The first is to be creative and invent something new, because none of the current approaches look like they are going to work in the near term. The second is very serious: convince people that science and technology have been a positive force in their lives and have something to offer. Science and technology can then be a positive force in the debate, rather than being picked over to build horror stories that convince people there is no way out.

John Stringer: Electricity is a wonderful thing, but it is difficult to store. I would like to see the whole business of where you store things in the overall cycle addressed. As we move toward a hydrogen economy, this is going to become particularly important. Hydrogen is extremely difficult to store, electrons are very difficult to store, and further work in these areas would transform the economics of some of the things that we can't do yet.

Copyright © 2001, National Academy of Sciences.
Bookshelf ID: NBK44139

