NCBI Bookshelf. A service of the National Library of Medicine, National Institutes of Health.

National Research Council (US) Safe Drinking Water Committee. Drinking Water and Health: Volume 1. Washington (DC): National Academies Press (US); 1977.


VII. Radioactivity In Drinking Water

Since it was discovered that ionizing radiation produces detrimental biological effects, many national and international groups have studied the sources and levels of radiation to which the human population is exposed, and have estimated the corresponding biological effects. Some of these groups have also been responsible for establishing permissible levels of exposure. Consequently, there is a large body of information on the biological effects of ionizing radiation. The Subcommittee on Radioactivity in Drinking Water has relied heavily on the reports of those other groups and has abstracted and summarized pertinent sections. In some cases it was possible to take new published and unpublished information into account in this assessment of the probable effects of the radioactivity in drinking water on the population of the United States.

Among the groups whose reports were used were: the National Academy of Sciences Advisory Committee on the Biological Effects of Ionizing Radiation (BEIR), the United Nations Scientific Committee on the Effects of Atomic Radiation (UNSCEAR), the International Commission on Radiological Protection (ICRP), and the National Council on Radiation Protection and Measurements (NCRP).

Background Radiation

The natural ionizing radiation, to which all people are exposed, includes cosmic rays and products of the decay of radioactive elements in the earth's crust and atmosphere. Part of the terrestrial radiation dose is from sources external to the body, and part is due to the inhalation and ingestion of radioactive elements in air, food, and water. In the United States, this unavoidable background radiation gives, on the average, an annual dose of about 100 mrem to the population (Table VII-1 ). There is, however, great variability in the amount of background radiation, which depends on regional geological characteristics and altitude. It has been found, for example, that the annual background dose in Colorado is 100 mrem (or more) higher than that in Louisiana (BEIR Committee, 1972). Mankind has always lived with such radiation, to which, however, the radionuclides in drinking water contribute but a small share.

TABLE VII-1. Estimated Total Annual Whole-Body Doses from Natural Radiation in the United States (from BEIR Committee, 1972).

Abundance Of Radionuclides In Water

Minute traces of radioactivity are normally found in all drinking water. The concentration and composition of these radioactive constituents vary from place to place, depending principally on the radiochemical composition of the soil and rock strata through which the raw water may have passed.

Many natural and artificial radionuclides have been found in water, but most of the radioactivity is due to a relatively small number of nuclides and their decay products. Among these are the following emitters of radiation of low linear energy transfer (LET): potassium-40 (40K), tritium (3H), carbon-14 (14C), and rubidium-87 (87Rb). In addition, high-LET, alpha-emitting radionuclides, such as radium-226 (226Ra), the daughters of radium-228 (228Ra), polonium-210 (210Po), uranium (U), thorium (Th), radon-220 (220Rn), and radon-222 (222Rn), may also be present in varying amounts.

Natural Radionuclides

Sources of Low-LET Radiation

Some of the radionuclides that are responsible for the natural radioactivity in drinking water come from radioactive elements, and their decay products, that were incorporated in the earth at its formation, and others are produced continuously by cosmic ray bombardment. Tritium is produced by cosmic ray interactions with atmospheric oxygen and nitrogen. It is then oxidized to tritiated water, which mixes into the hydrosphere. Tritium concentrations in water supplies vary from about 10 to 25 pCi/liter (Jacobs, 1968).

In similar fashion, carbon-14, produced by cosmic ray [14N(n,p)14C] interactions with atmospheric nitrogen (UNSCEAR, 1972, p. 29), is oxidized to 14CO2, which is generally found at a concentration corresponding to about 6 pCi of 14C per gram of carbon. In water containing about 1 mg of carbon per liter, a concentration of 0.006 pCi/liter might be expected. In ocean water, the concentration might be about 0.1 pCi/liter (NCRP, 1975, p. 35).
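The proportionality behind this estimate can be sketched in a few lines of Python (a modern illustration; all numerical values are those quoted above, and the conversion is simple unit arithmetic, not an established formula):

```python
# Specific activity of natural carbon-14 (text value).
SPECIFIC_ACTIVITY = 6.0      # pCi of 14C per gram of carbon

# Assumed carbon content of a fresh water, as in the text.
carbon_mg_per_liter = 1.0    # mg of carbon per liter

# pCi/liter = (pCi per g of C) * (g of C per liter)
conc_fresh = SPECIFIC_ACTIVITY * carbon_mg_per_liter * 1e-3   # 0.006 pCi/liter
```

The same arithmetic applied to ocean water (roughly 20 mg of carbon per liter) reproduces the order of magnitude of the 0.1 pCi/liter figure cited from NCRP.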

Of all the natural radionuclides that occur in water and emit low-LET radiation, potassium-40 is likely to be the most significant. This primordial radionuclide occurs as a constant percentage (0.0118%) of total potassium. Adults in the United States ingest about 2,300 pCi of potassium-40 per day, but almost all of it is derived from foodstuffs. Since potassium concentrations in man seem to be under homeostatic control, wide fluctuations in drinking-water potassium would have negligible effects on internal concentrations. Assuming that there is 0.2% potassium in soft tissue, a dose rate of 19 mrad per year has been estimated; of this, 17 mrad are due to beta radiation (UNSCEAR, 1972, p. 30). In 1970, some California drinking water, for example, contained up to 4 pCi/liter of potassium-40. Consumption of 2 liters per day of such water might contribute as much as 8 pCi per day, but this is a negligible fraction of the total daily intake of 2,300 pCi of a nuclide that is the largest natural contributor to total body somatic and genetic dose.
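The negligible share contributed by drinking water can be checked directly (Python sketch; the concentrations and the 2 liters/day consumption rate are the figures quoted in the text):

```python
water_conc = 4.0       # pCi/liter of 40K (1970 California example)
daily_water = 2.0      # liters of drinking water per day
total_intake = 2300.0  # pCi/day of 40K, almost all from food

from_water = water_conc * daily_water    # 8 pCi/day from water
fraction = from_water / total_intake     # well under 1% of daily intake
```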

Sources of High-LET Radiation

Radionuclides that are produced by the decay of uranium-238 and thorium-232 are widely distributed throughout the earth's crust. The majority of them are alpha-emitters and include isotopes of polonium, radon, and radium (UNSCEAR, 1972, p. 31). Concentrations of uranium in drinking water are extremely variable, apparently ranging from 0.02 to 200 µg/liter in fresh waters. The thorium content of drinking water has not been extensively measured, but its concentration in the human skeleton is about 1 fCi/g of ash; the corresponding abundance of uranium in the skeleton is about 10 times greater.

The natural alpha-emitters that occur in drinking water appear to be bone seekers. Of these, radium-226 and its daughters and the daughters of radium-228 probably have the greatest potential for producing radiation doses of some consequence to man. The radium-226 content of fresh surface water is variable, ranging from 0.01 to about 0.1 pCi/liter. Some groundwater may contain up to 100 pCi/liter. Drinking water obtained from surface supplies generally does not contain significant amounts of radium, and treatment processes, such as flocculation and water-softening, can remove the bulk of radium from water.

In the Midwest of the United States there is an area where groundwaters contain significant levels of radium-226 and radium-228. This area, primarily in Iowa, Illinois, Wisconsin, and Missouri, includes an estimated population (1960 census) of approximately 1 million persons. The weighted mean concentration of radium-226 has been estimated to be approximately 5 pCi/liter (Peterson et al., 1966). Rowland, Lucas, and Stehney (1975) have reported that approximately 500,000 people in Illinois and Iowa have drinking water supplies whose radium-226 content is 3-6 pCi/liter; about 300,000 people, 6-9 pCi/liter; and about 120,000 people live in areas where well water contains 9-80 pCi/liter of radium-226. A personal communication (Rowland, Lucas, and Stehney, 1976) from the same investigators stated that, of the last group, 113,000 people drink water that contains less than 20 pCi/liter, and 5,700, 20-25 pCi/liter. The one community (1,200 persons) that had a well in which 80 pCi/liter of radium-226 was found, now uses water from a well containing only 3 pCi/liter.

In addition, a survey in 1966 that was designed to locate water supplies with high concentrations of radium found water supplies with more than 3 pCi of radium-226 per liter in areas other than those of the northern Midwest described above (Hickey and Campbell, 1968). These supplies served approximately 145,000 people. Thus, it appears that in the entire United States approximately 1.1 million people consume water that contains more than 3 pCi/liter of radium-226.
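The ~1.1 million total appears to be the sum of the populations quoted above; the bookkeeping can be made explicit (Python sketch; grouping the figures this way is our reading of the text):

```python
# Populations served by supplies above 3 pCi/liter of radium-226 (text values).
midwest = {
    "3-6 pCi/liter": 500_000,   # Rowland, Lucas, and Stehney (1975)
    "6-9 pCi/liter": 300_000,
    "9-80 pCi/liter": 120_000,
}
other_areas = 145_000           # Hickey and Campbell (1968), outside the Midwest

total = sum(midwest.values()) + other_areas   # 1,065,000, i.e. about 1.1 million
```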

The major additional contribution to the alpha-emissions in drinking water is due to the decay of radium-228; although other alpha-emitting natural radionuclides have been found in drinking water, they occur in exceedingly small concentrations. For example, one analysis of water containing 5 pCi of radium-228 per liter was found to contain less than 0.02 pCi/liter of thorium isotopes and only 0.03 pCi/liter of uranium (Stehney, 1960).

Two other radium isotopes may be present in drinking water, but although both radium-223 and radium-224 may contribute to the gross alpha activity of water measured soon after drawing from the tap, their contributions to the long-term dose deposited in the skeleton are negligible because they have short half-lives. However, radium-228, which decays by beta emission, and therefore does not contribute to gross alpha activity in drinking water, will, as a result of its subsequent decay scheme, give rise to a series of alpha-emitting daughter products. It is these radium-228 daughter products, and radium-226 and its daughters, that produce, in our opinion, the major alpha-particle dose to the tissues of the body, particularly to the skeleton. Thus, when discussing radium in drinking water, it is essential to distinguish between the isotopic mixture measured in freshly drawn drinking water and the long-term alpha dose that might be accumulated in tissue.

Because of the different decay schemes for radium-226 and radium-228, different alpha doses are received under equilibrium conditions from each of these two radium isotopes. In waters of low alpha-particle radioactivity, the activity concentration of radium-228 is generally equal to that of radium-226, whereas at high radioactivity concentrations it is only half that of radium-226 (Lucas and Krause, 1960).

The abundance of the radioactive gas radon-222, which is formed by the decay of radium-226, is not highly correlated with the radium concentration in fresh water. Radon concentration is generally 1 pCi/liter in surface water, but activity concentrations in groundwater are typically a few thousand times greater. Some mineral or spa waters, however, may contain 500,000 pCi/liter.

Consumption of water containing 1 µCi of radon-222 will result in a stomach dose of about 20 mrads, but the doses to other organs will be lower by at least a factor of 10 (UNSCEAR, 1975, p. 35). Furthermore, consumption of 2 liters per day of water containing 1 nCi/liter of radon-222 would deliver an annual stomach dose of about 12 mrad.

In three large American cities, the total daily intakes of uranium, radium-226, radium-228, and lead-210 in water have been reported to range between approximately 0.01 and 0.05 pCi/day (NCRP, 1975, p. 92). When compared with other components of the diet, drinking water usually contributes less than about 2% of these alpha-emitting radionuclides to the daily dietary intake (NAS-NRC, 1973). The greatest dose potential from alpha-radiation from naturally occurring radionuclides in drinking water will be related to the ingestion of radium-226 in areas where its concentration is high.

Artificial Radionuclides

To some extent, all drinking water obtained from surface sources will reflect contamination from atmospheric testing of nuclear weapons. Extensive measurements have been made of the contribution of airborne fission products to drinking water contamination and in particular to the levels that were produced by testing weapons before the Nuclear Test Ban Treaty of 1963. The sharp decrease in radioactive fallout since that date has been followed by a corresponding decrease in the radioactivity of surface water. Although the analyses are not very extensive, the temporal characteristics provide some information that is useful in predicting the transport and fate of radionuclides in water. Some of the longer-lived radionuclides still persist from early tests, together with smaller quantities of fission products injected irregularly into the atmosphere from the testing of weapons by nontreaty nations.

Many of the states conduct periodic surveys of the radioactivity of drinking water. Unfortunately, these consist, for the most part, of counts of only the gross beta and gross alpha activity in the water. In addition, there is a considerable body of data on the temporal patterns and regional concentrations of the fission products strontium-90 and cesium-137, the physical half-lives of which are about 30 yr.

There appears to be a fairly good correlation between the measurement of solids in finished water and radioactivity content measured as beta activity (Figure VII-1). Potassium-40 in soil suspensions is a likely explanation for this observation. Because they account for a major part of the potential dose from nuclear fission and activation products, and because of their biological significance, considerable attention has been devoted to strontium-90, cesium-137, iodine-131, tritium, and carbon-14 as potential water contaminants. These, however, are not necessarily correlated with the solids content of drinking water.

Figure VII-1. Relationship between total dissolved solids and radioactivity of California domestic water. Goldberg (1976).

Sources of man-made radionuclides, in addition to atmospheric weapons tests, include local discharge of radiopharmaceuticals and the possible entry of radioactivity into watersheds from the use and processing of nuclear fuel to produce electric power.


The release of radioactive materials in the exhaust air and liquid wastes from medical institutions has been studied many times in different locales. No evidence yet suggests a drinking-water hazard from medical effluents. This conclusion is based on data collected in many surveys (Sodd et al., 1975; Gesell et al., 1975; Klement et al., 1972; Kaul and Loose, 1975). The agents to which particular attention was given in these surveys were radioactive iodine and technetium-99m. Both are widely used in medical practice, and there is special concern over the iodine isotopes, because of their potential effects on the thyroid gland.

Since 1950, eight groups have reported on the extent of release of radioisotopes in areas of the United States where there were active clinical nuclear medicine programs. Because of recent rapid increases in the numbers and kinds of procedures being conducted, Sodd et al. (1975) studied the use and discharge of iodine-125, iodine-131, and technetium-99m in the Cincinnati area. They measured the radioactivity from these nuclides in the influent, effluent, and sludge at the sewage-treatment plant, as well as the activity in the Ohio River 10 miles above and 5 miles below the plant. Gesell et al. (1975) conducted a similar survey of medical usage and concentrations in sewage of iodine-131 and technetium-99m in the Houston area. The general conclusions reached by both groups indicated that the effect on levels of radioactivity in drinking water of the medical usage of radioisotopes that they studied appears to be of negligible importance.

The Cincinnati study was centered about the largest sewage treatment plant serving that city. This plant receives the effluent from 10 hospitals that use radionuclides in clinical nuclear medicine. Approximately 60% of the patients were outpatients, so control of biological wastes was not attempted. Radioactivity in the sludge accumulated at the plant exceeded that in the water. Sludge concentrations of iodine-131 and technetium-99m were measurable, but that of iodine-125 was below the limit of detectability (10 pCi/liter).

It was estimated that between 10% and 30% of the total amount of technetium-99m given to patients in Cincinnati hospitals was discharged in sewage effluent into the Ohio River. Typically, about 300 mCi/week of this nuclide were estimated to reach the river, where dilution with river water was calculated to give concentrations downstream of about 1 pCi/liter. In fact, analysis of river water showed identical values upstream and downstream of 3-4 ± 3 pCi/liter. These are lower, by a factor of about a million, than the current maximum permissible concentration (6 µCi/liter; NRC, 1976) of technetium-99m in water for the general population. Comparable results were obtained for iodine-131. Smaller amounts were used, and the concentrations in sludge and water were lower than those of technetium. No differences between upstream and downstream levels were detected. Under the assumption that the same dilution had occurred, the medical uses of iodine-131 in the area were calculated to produce a maximal increase in concentration in the river of about 0.3 pCi/liter. This value is about one thousandth of the current maximum permissible concentration of iodine-131 in water (300 pCi/liter; NRC, 1976) for the general population.
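The safety margins quoted above are simple ratios of measured (or calculated) river concentrations to the maximum permissible concentrations; they can be reproduced directly (Python sketch; all values as given in the text):

```python
# Technetium-99m: calculated downstream concentration vs. the MPC.
tc99m_river = 1.0          # pCi/liter, calculated downstream of the plant
tc99m_mpc = 6e6            # pCi/liter (6 uCi/liter; NRC, 1976)
tc_margin = tc99m_mpc / tc99m_river   # ~6e6, "a factor of about a million"

# Iodine-131: maximal calculated increase vs. the MPC.
i131_river = 0.3           # pCi/liter, maximal calculated increase in the river
i131_mpc = 300.0           # pCi/liter (NRC, 1976)
i_margin = i131_mpc / i131_river      # 1000, "about one thousandth"
```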

Thus, at present, given current rates of use, patterns of disposal, and radiation protection guidelines, many orders of magnitude separate the concentrations of radioactivity in drinking water due to medical uses of radioisotopes from conceivably hazardous levels. Projections of the rate of increase in use of radiopharmaceuticals have been made by the Environmental Protection Agency (Klement et al., 1972). They estimate that there may be a 12-fold increase in the medical use of these agents by the year 2000, on the basis of the annual increments in whole-body radiation dose from the use of these agents in medicine. This represents a very small incursion, and probably will not be measurable.

Nuclear Fuel Cycle Activities

Among the major effluents from the use and processing of nuclear fuel are tritium, plutonium, and krypton. Of these, only tritium, which is released as a gas, and plutonium can possibly enter water supplies. The predominant form of plutonium release from nuclear power and processing plants is as an aerosol that will have little or no impact on drinking water. Although a single incident has occurred in which as much as 18,750 Ci of plutonium were released from liquid storage on a local basis, none apparently reached off-site water supplies (AEC, 1974, pp. 49-50). The usual rate of release from liquid storage at controlled sites is about 1 mCi/yr. Continuing improvement in methods of storage should further reduce this rate. Nevertheless, the adequacy of monitoring water supplies in the vicinity of nuclear facilities should be reviewed periodically.

Because of its exceedingly long half-life (1.7 × 10⁷ yr), the possible consequences of the release of iodine-129 during nuclear fuel reprocessing were considered. This radionuclide has a specific activity of about 173 µCi/g. In a recent review, Soldat (1976) calculated that the maximal isotope ratios of 129I:127I would be about 10⁻⁶ in water near nuclear facilities. His calculations indicate that consumption of 2 liters/day of water containing iodine-129 at 1 pCi/liter would deliver an annual thyroid dose of about 5 mrem to an adult and about 10 mrem to an infant. Peak activities in water have been reported to be about 0.01 pCi/liter, which would correspond to an annual thyroid dose of about 0.05 mrem to an adult.
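Because the thyroid dose scales linearly with the water concentration, Soldat's figures can be applied to any measured level (Python sketch; the dose factors are those quoted from Soldat, 1976, and linearity is the only assumption added here):

```python
# Thyroid dose factors for iodine-129 at 2 liters/day (Soldat, 1976).
DOSE_PER_PCI_L_ADULT = 5.0     # mrem/yr per pCi/liter
DOSE_PER_PCI_L_INFANT = 10.0   # mrem/yr per pCi/liter

peak_conc = 0.01               # pCi/liter, peak activity reported in water
adult_dose = DOSE_PER_PCI_L_ADULT * peak_conc     # 0.05 mrem/yr
infant_dose = DOSE_PER_PCI_L_INFANT * peak_conc   # 0.1 mrem/yr
```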

Radiation Dose Calculations

Estimates of the radiation doses expected to be produced by radionuclides ingested in water were calculated by means of the methods and parameters given in NCRP Report 22 (NBS Handbook 69, 1963 revision) and ICRP Publication 2 (ICRP, 1959). To approximate the equilibrium levels that take into account build-up, retention, decay, and elimination of various radionuclides, annual doses were computed for the fiftieth year of constant intake, at 2 liters per day, of water containing 1 pCi/liter. These doses are presented in Table VII-2. At earlier times, the annual doses may be lower than those shown, and for a few long-lived radionuclides (e.g., 90Sr, 226Ra), they may never reach equilibrium, but the values in Table VII-2 are within 20% of the theoretical equilibrium levels. These values were obtained by using the NCRP and ICRP metabolic and dosimetric models for all radionuclides, except for the isotopes of the alkaline earth elements radium and strontium, which are discussed below.

TABLE VII-2. Adult Equilibrium Annual Dose Factors for Some Radionuclides in Water.

Isotopes of Alkaline Earth Elements

For the alkaline earth elements, the recent metabolic model of ICRP Publication 20 (1973) was used.

In its 1959 report on permissible doses (ICRP, 1959), Committee II of the ICRP used an exponential model of retention for all radionuclides to calculate maximum permissible concentrations in water. The committee pointed out, however, that there was good evidence that retention of radium-226 and other bone-seeking radionuclides is best represented by a power function model (Norris et al., 1958). In the case of radium-226, the calculated body burden from intake at constant daily rate for 50 yr is about a factor of 10 smaller by the power function model than by the ICRP exponential model. This may be shown by use of the equations and the values for metabolic parameters that are given in the ICRP report. Ingestion of 1 pCi of radium-226 per day in water is assumed in the sample calculations given below.

According to the ICRP exponential model, the amount of a radionuclide, qf2, that accumulates in an organ from constant ingestion rate, a, is given by:

qf2 = a · fw · t · (1 − e^(−0.693T/t)) / 0.693

where q = total amount in the body, f2 = fraction of q in the organ of reference (0.99 for bone), fw = fraction of radionuclide ingested in water that reaches the organ (0.04 for bone), t = effective half-life (1.6 × 10⁴ days), and T = duration of ingestion in days.

For uptake of 226Ra by the skeleton in 50 yr, q(0.99) = (1)(0.04)(1.6 × 10⁴)(1 − 0.454)/0.693, and the body burden q = 510 pCi for T = 18,250 days. It has been suggested that the effective half-life of 226Ra is 17.1 yr (Miller and Finkel, 1968); the calculated body burden in the above example would then be somewhat smaller, 315 pCi.
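The exponential-model calculation can be verified numerically (Python sketch; the symbols and parameter values are those defined in the text for ingestion of 1 pCi of 226Ra per day in water):

```python
import math

a = 1.0          # ingestion rate, pCi/day
f2 = 0.99        # fraction of the body burden in bone
fw = 0.04        # fraction of ingested radionuclide reaching the organ
t_eff = 1.6e4    # effective half-life, days
T = 18_250       # duration of ingestion, days (50 yr)

# ICRP exponential retention: q*f2 = a*fw*t*(1 - exp(-0.693*T/t))/0.693
qf2 = a * fw * t_eff * (1 - math.exp(-0.693 * T / t_eff)) / 0.693
q = qf2 / f2     # total body burden, ~510 pCi

# Same formula with the 17.1-yr effective half-life of Miller and Finkel (1968).
t_alt = 17.1 * 365
q_alt = a * fw * t_alt * (1 - math.exp(-0.693 * T / t_alt)) / 0.693 / f2  # ~315 pCi
```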

In the case of the power function, the body burden, q, after an amount, a, of a long-lived radionuclide has been ingested per day for T days is given by:

q = a · f1 · A · T^(1−n) / (1 − n)

where f1 = fraction of ingested radioactivity that transfers from the gut to the blood (0.3), and A and n are the power-function constants for the fraction (R) retained at t days after a single injection (R = At-n). Norris et al. (1952) give the values A = 0.54 and n = 0.52.

Hence, for uptake of 226Ra in 50 yr,

q = (1)(0.3)(0.54)(18,250)^0.48 / 0.48

and the body burden q = 37 pCi. It should be noted that the power function, in the form given here, deals only with the total body burden; it is silent on the question of organ distribution.
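The power-function accumulation can be evaluated the same way (Python sketch; the constants are those quoted in the text from Norris et al.):

```python
a = 1.0        # ingestion rate, pCi/day
f1 = 0.3       # fraction transferred from gut to blood
A, n = 0.54, 0.52   # power-function constants, R = A * t**(-n)
T = 18_250     # duration of ingestion, days (50 yr)

# Integrating the retained fraction over T days of constant intake:
# q = a * f1 * A * T**(1-n) / (1-n)
q = a * f1 * A * T ** (1 - n) / (1 - n)   # ~37 pCi
```

The factor-of-10 difference from the exponential model's 510 pCi follows directly from this arithmetic.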

Since 1959, additional data have tended to support a power-function model for bone seekers. In 1972, a task group of Committee II of the ICRP presented a detailed model of alkaline earth metabolism that evolved from the power function (ICRP, 1973). For a constant rate of intake into the blood for 50 yr, the newer model predicts a whole-body content that is only 5% less than the simple power function given above.

Measurements of radium-226 body burdens and dietary intake at environmental levels also indicate that the ICRP exponential model predicts long-term retentions that are too high. The average body burden of radium-226 in areas of normal radioactivity is about 50 pCi (40 pCi in the skeleton and 10 pCi in soft tissue; UNSCEAR, 1972, p. 32). Intake of radium in food appears to be the main source at normal levels, because the average daily intake in the United States is about 1 to 2 pCi in food and less than 0.1 pCi in water (NCRP, 1975, p. 92).

A quantitative relationship between the body burden of radium-226 and the concentration of radium in drinking water was found by Lucas (1961). He measured the radium content of samples of bone and soft tissue from individuals with lifetime (or at least 30 yr) residence in their communities. He also measured the radium content of the water supply of each community. The average body burden of radium-226 was 36 pCi (42 persons) in cities with less than 0.1 pCi/liter, and higher body burdens (33 persons) were found in cities with 0.13-10.5 pCi/liter. With the assumption that intake of radium in food contributed 36 pCi to the total body burden of each person measured, the following relationship was found: B = 36 + 50Cw, where B is the total body burden (pCi) and Cw is the concentration of radium-226 in drinking water (pCi/liter). This relationship was confirmed in later work on 19 other persons of known and stable residence (Lucas et al., 1964).

It should be emphasized that the Lucas equation is an empirical relationship between body burden and the concentration of radium-226 in the local water supply. Daily rates of intake, transfer from gut to blood, etc., were not known nor taken into account. If one assumes daily ingestion of 1-2 liters of local water for the persons analyzed, then the long-term accumulation of body radium from radium in water was 25 to 50 times the daily intake of radium in water. These values are compatible with ratios of body burden to daily intake that were estimated in other work (Stehney and Lucas, 1956; ICRP, 1973).
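The Lucas relation is easy to apply to any local supply (Python sketch; the function name is ours, and the 5 pCi/liter example uses the weighted midwestern mean quoted earlier in the chapter):

```python
def body_burden_pci(c_water):
    """Lucas (1961) empirical relation B = 36 + 50*Cw, where Cw is the
    radium-226 concentration of the local drinking water (pCi/liter)
    and B is the total body burden (pCi)."""
    return 36.0 + 50.0 * c_water

baseline = body_burden_pci(0.0)   # 36 pCi: the food-only contribution
midwest = body_burden_pci(5.0)    # 286 pCi at the ~5 pCi/liter weighted mean
```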

Combining the Lucas equation with the standard rate of water consumption, 2 liters/day (NCRP, 1963; ICRP, 1974), gives good agreement with the new model of alkaline earth metabolism (ICRP, 1973). According to the model, f1 has a value of 0.21 and the body burden accumulated from daily ingestion of 2 pCi of 226Ra for 50 yr is 50 pCi, of which 41.5 pCi is in bone.

Since the relevant metabolic parameters for radium-228 are the same as for radium-226, the ICRP power-function model may also be used to calculate long-term retention of radium-228. For ingestion of 2 pCi/day for 50 yr, the calculated body burden of radium-228 is 21 pCi (14.2 pCi in bone). The dose factors given in Table VII-2 are based on effective absorbed energies per disintegration of 106 MeV for radium-226, and 301 MeV for radium-228 (NCRP, 1975).

The metabolic model of ICRP Publication 20 (1973) was also used to calculate the retention of radiostrontium from continuous ingestion in water. For strontium-90, the calculated body burden from ingestion of 2 pCi/day for 50 yr is 229 pCi, of which 222 pCi are in bone; for strontium-89, the corresponding values are 7.6 pCi and 5.0 pCi. The effective absorbed energies of the strontium isotopes in bone and total body given in ICRP 1959 were used to calculate the dose factors shown in Table VII-2.

Doses From Water of Specified Composition

To illustrate the doses to be expected from drinking water that may be typical of the United States generally, a hypothetical water supply was postulated to have the amounts of radioactivity shown in Table VII-3. Also tabulated are the corresponding annual doses to be expected in the fiftieth year of constant consumption of 2 liters/day of this water, as calculated with the dose factors of Table VII-2.

TABLE VII-3. Activity in a Hypothetical Water Supply.

The concentrations in Table VII-3 were chosen to represent the average tritium concentration, beta activity, and alpha activity of the water analyses reported in the Environmental Protection Agency's 1975 Report to Congress (EPA, 1975). The average tritium concentration was found to be 250 pCi/liter (p. IV-4). The average beta activity (excluding tritium) from all interstate carrier water supplies—for which figures were reported for gross beta, gross alpha, strontium-90, and radium-226 activity—was 3.1 pCi/liter. The gross alpha activities for the same samples were less than 2 pCi/liter in all but one case. The reported detection limit for gross alpha was 2 pCi/liter, and for strontium-90 beta activity, 0.5 pCi/liter. In these samples, many of the entries were below this detection limit. Almost all samples were below 1 pCi/liter. A reasonable concentration of strontium was therefore taken to be 0.5 pCi/liter. Radium-226 concentration in these samples averaged about 0.2 pCi/liter (two of the entries in the EPA report were incorrectly given as 9.12 and 9.10, instead of 0.12 and 0.10).

In addition to the tritium and strontium-90 concentrations noted above, it was assumed that the other major beta-contributors would be potassium-40, cesium-137, and radium-228. Cesium-137 in drinking water in the United States is currently derived primarily from fallout from atmospheric weapons testing. As ''surface'' depositions, the concentrations vary. They are likely to be higher in surface water than in groundwater. Values for this nuclide are not available for all the states; and even within a single state (e.g., California), concentrations are variable. A plausible concentration of this nuclide was estimated to be 0.1 pCi/liter. Equally variable, but for reasons related more to geology than to anything else, is the concentration of radium-228. It was assumed that the radium-228 concentration is approximately equal to that of radium-226, and for this hypothetical example a value of 0.2 pCi/liter was adopted (Stehney and Krause, 1960). Finally, potassium-40, which is ubiquitous in water supplies, is also variable; in the absence of any nationwide surveys of its concentration, it was concluded, from the range of concentrations in California (Goldberg, 1976), that 2.3 pCi/liter would be a reasonable estimate for the hypothetical water composition.

Calculating a gonadal dose for bone-seeking radionuclides is difficult. Strontium-90, for example, when fed continuously to dogs, was found to be one thousand times less concentrated in gonadal tissue than in bone. It could then be estimated that a bone dose of 0.1 mrem/yr might be accompanied by a gonadal dose of about 0.1 µrem/yr (Della Rosa et al., 1972).

Unfortunately, extensive analyses of specific radionuclides other than radium-226, strontium-90, and tritium in water are not available. Dose factors for the majority of known radionuclides, derived from NCRP Report 22 (1963), are given in Soldat et al. (1975) and may readily be used to calculate annual doses when new analytical results are obtained.

A spectrometric analysis of a 1974 drinking water composite from Los Angeles became available (Goldberg, 1976), and was used to construct another illustrative example as shown below. The potential doses, assuming 50-yr constant intake at 2 liters/day, were calculated as in the previous example.

In this case, a total bone dose of 2 mrem/yr would be calculated (although not separately determined in this analysis, an amount of alpha-activity from radium-228 daughters nearly equal to that of the radium-226 would also be expected). For all other tissues, including the gonads, the annual dose would be about 0.12 mrem.

Estimation Of Risk

Developmental and Teratogenic Effects

On the basis of numerous studies on the effects of external radiation, it has become generally accepted that developing mammals (intrauterine and juvenile) are more radiosensitive than adults (BEIR Committee, 1972; Sikov and Mahlum, 1969). Since the basic interactions of radiation do not differ with age, it appears that the increased sensitivity follows from the high rates of cell proliferation and the complex interactions associated with development. Also, the variety of integrated developmental stages through which the organism must progress to attain maturity increases the chance that derangements will occur.

In general, these deleterious effects may be divided into three categories: increased tumor incidence, death, and developmental abnormalities. The first category is discussed later and will not be considered here. In the broad sense, the last category includes physiological and biochemical deficits and deviations, as well as malformations. Most of the quantitative dose-effect relationships have been derived from studies using relatively high radiation doses from external photon beams.

The doses required for embryolethality change during development, and vary by more than a factor of 10. There are also short "critical periods" for most, if not all, malformative events. Most of the available data suggest an apparent threshold at about 10 rad of acute exposure and a sigmoid dose-response relation for production of lethality and developmental abnormalities (BEIR Committee, 1972; Brent and Gorson, 1972). It is possible, however, that this value may be overestimated since there are relatively few observations at low doses and there is a lack of satisfactory methods for detecting minimal defects. Many of the observed morphological malformations have been hypothesized to involve a reduction in cell number to a value below some critical level as an early step in the pathogenic process. These considerations would offer a theoretical explanation for the apparent threshold for radiation teratogenesis.

The difficulties inherent in extrapolating the results of experiments at high doses and high dose rates to low doses and low dose rates include those applicable to the adult as well as some that are specific to the immature organism. In general, protracting the dose from low-LET radiation results in a decreased biological effect. For developmental effects, the influence of protraction of exposure is even greater because only a small fraction of the life span is spent in the susceptible prenatal and neonatal stages. Only radiation that is absorbed during the critical period of hours or days would be effective in altering the particular processes occurring at that time.

Because of the apparent existence of thresholds, nonlinearity, and critical periods, risk estimates calculated in terms of events per rem are not meaningful at low doses and dose rates. For this reason, and on the basis of the available data, it may be expected that low doses of radiation delivered at low dose rates would have a small likelihood of being embryotoxic (embryocidal or teratogenic). Despite these limitations, the most pessimistic case for embryotoxicity can be calculated from the estimated threshold dose of 10 rad of acute radiation. If the 10 rad were protracted over the 9 months of gestation and the first year of life, a daily dose of about 15 mrad would result. The conservatism of this estimate is suggested by the fact that the lowest dose of protracted external radiation for which confirmed deleterious effects have been reported (defective development of some organs and decreased life span, but no acute mortality) is slightly over 1 rad per day throughout late gestation and early postnatal life (BEIR Committee, 1972). It has been recently reported (Cahill et al., 1976), however, that a radiation dose of 3 mrad per day to the conceptus from continuous ingestion of tritium by pregnant rats produced a slight, but statistically significant, delay in eye-opening and development of the righting reflex.
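The protraction arithmetic behind the "about 15 mrad" figure can be checked directly, assuming 9 months of gestation plus the first year of life:

```python
# Spreading the 10-rad acute threshold over gestation (~9 months) plus
# the first year of life, as in the text's "most pessimistic" case.
threshold_mrad = 10 * 1_000          # 10 rad expressed in mrad
days = (9 / 12 + 1) * 365            # ~9 months gestation + 1 year of life
daily_mrad = threshold_mrad / days
print(round(daily_mrad, 1))          # ~15.7 mrad/day, i.e. "about 15 mrad"
```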

It should be noted that the water intake of the neonate is initially minimal, increasing with time. Even in the juvenile period, milk (from commercial or maternal sources) is often the major source of fluid, and it reflects, to some extent, the radionuclide content of the water supplies and food.

The exposure of the fetus to radionuclides depends on the maternal body burden, and particularly on the concentrations of radionuclides in the maternal circulation; the fraction available to cross the placenta depends on the rate of removal from the circulation. The amount of activity to which the conceptus may be exposed is also a function of the facility with which the nuclide crosses the placenta. Radioisotopes of normal dietary constituents generally behave as do the stable isotopic species, and there are extensive data on many contaminant radionuclides (Sikov and Mahlum, 1969; Brent and Gorson, 1972). Availability is influenced also by the chemical and physicochemical form of the nuclide and changes with the stage of gestation. Most elements, in their usual forms, fall into three broad categories relative to cross-placental transfer. The first category includes materials—such as phosphorus and iodine—that freely cross the placenta, either by rapid diffusion or by active transport, and reach fetal blood concentrations approximating those of the mother. Tissue concentrations depend on metabolic considerations and may influence the rate of removal from the blood. These processes are age-dependent, in that it is not until thyroid function or skeletal calcification begins that significant quantities of iodine or strontium are removed from the circulation. At these times, the fetal concentrations in the target organs may slightly exceed those in the mother, although they are often smaller. The second category encompasses the many elements that diffuse relatively slowly and may include those for which there appear to be placental barriers to their transfer. The third category consists of the elements that themselves or in their usual chemical form are poorly transferred to the conceptus.
The average concentration of such a material in the fetus is only a small fraction of that in the mother, although specific tissue concentrations occasionally approach maternal values (Wrenn, 1975). For some of these elements (e.g., plutonium-239) it appears from rodent studies that most of the embryotoxicity early in gestation is attributable to material localized in the visceral yolk sac (Sikov and Mahlum, 1972). Because this structure is vestigial in man, it is uncertain whether this finding has significance for exposure of humans.

On the basis of the foregoing, and the calculations of potential radiation doses, we anticipate that no measurable developmental and teratogenic effects of radionuclides in drinking water will be found at the levels studied.

Genetic Effects

Many national and international groups (BEIR, NCRP, ICRP) periodically revise our understanding of the genetic risks to human populations from ionizing radiation. The estimates that follow, of the genetic effects of low levels of radiation due to ingestion of radioactive isotopes in drinking water, are based on their work.

Mutations—changes in the genetic information—can occur at any time in any cell of the body. Geneticists, however, are concerned primarily with mutations that occur in the genes and chromosomes of germ cells, sperm and eggs, or the cells from which they are derived. Sperm and egg cells, after fertilization, give rise to the individuals of the next and later generations. Thus, a mutation originating in germ cells could, in time, spread through the population.

From the earliest studies on mutation, it was recognized that the vast majority of newly arising mutations were, in varying degrees, detrimental. If this seems paradoxical in an evolutionary sense, it should be noted that evolution proceeds by selection of individuals with the highest reproductive fitness, fixing beneficial mutations, and generally eliminating less favorable ones. This selection process is not perfect, and harmful mutants do spread into the population until they are eventually eliminated through death, sterility, or reduced fertility. In some cases, on the other hand, unusual selective phenomena have led to the maintenance of genetically determined diseases, such as sickle cell anemia, in which the genetic carriers are resistant to malaria. In general, however, the more severe the mutation, the more rapid is its elimination; indeed, a substantial proportion is most likely eliminated without notice in early pregnancy. Conversely, mutations that impair the vigor of individuals only slightly may penetrate a population over a long period of time before selection occurs.

Mutations arising in nongerminal or somatic cells are limited to expression in the individual and cannot be transmitted to future generations. They are nevertheless of considerable importance, in that the induction of many cancers may be related to mutation induction. For instance, studies indicate that approximately 90% of organic compounds that are carcinogens are also mutagens (McCann et al., 1975; McCann and Ames, 1976).

Mutations fall into three major categories of genetic alteration: gene mutations, chromosome aberrations, and changes in chromosome number. Gene mutations, in which the genetic change is restricted to a submicroscopic region of a chromosome, affect only a rather restricted amount of the cell's information content. For example, mutations in the gene controlling the production of hemoglobin in red blood cells may be manifested as an altered or missing hemoglobin. Such a mutation may have no bearing on the manner in which the hemoglobin functions or it may be the cause of severe anemia, as in sickle cell anemia.

In the fertilized egg, chromosomes and genes (with the exception of sex chromosomes and their genes) are inherited in pairs, one member of the pair coming from each parent. Mutant genes are usually described by the manner in which their activity is manifested. A dominant mutation or gene is one that gives an altered effect (or phenotype) in the presence of a normal partner gene. A recessive mutant gene is one whose effect is apparent in the individual only if both members of the pair of genes are mutant; when the recessive and normal are both present, the mutant trait does not appear. Sex-linked recessive genes—those on the X-chromosome—govern traits more commonly observed in males than in females. Males have only one X-chromosome, whereas females have two; thus, the phenotype resulting from a recessive gene on the male's X can be expressed, whereas, in females, unless both X-chromosomes contain the recessive, it cannot. Some gene mutations are caused by the deletion of a single base pair within the gene. In practice, it is usually difficult or impossible to discriminate between gene mutations and deletions that extend beyond the borders of a single gene to neighboring genes. If the deletions are microscopically invisible, they will usually be classified as simple gene mutations.

Deletions may vary in scale from the loss of a single base pair to larger losses that extend into the second major category of genetic change: chromosome aberrations. This category encompasses alterations in chromosomal structure that are visible under the microscope. Recognizable aberrations include deletions of chromosomal segments, additions of segments, and changes in the position of segments, either within a single chromosome or between two or more chromosomes. These changes can have considerable biological consequences in that the health, survival, and reproduction of individuals with such altered chromosomes may be impaired. Although these individuals may appear to be normal, their offspring could be severely afflicted as a result of the reshuffling of the genetic information that routinely occurs in the formation of new germ cells.

The third category of mutation consists of changes in the number of whole chromosomes in the genetic set carried in the germinal cells. In man, each germ cell contains 23 chromosomes; the fertilized egg and the individual produced therefrom have 23 distinctly recognizable pairs of chromosomes in each somatic cell. With the exception of the X- and Y-chromosomes of the male, the members of each pair are morphologically alike.

During meiosis, the process of germ-cell formation, the chromosomes are redistributed, so that each germ cell will contain only one member of each pair. During this redistribution, mistakes sometimes occur, and some cells receive two members of a pair. Correspondingly, other cells receive neither member. When these aneuploid cells (cells with abnormal numbers of chromosomes) are involved in fertilization, the most commonly observed consequence is prenatal death. In a small number of cases, survival associated with severe congenital abnormality occurs. Examples of this are Down's syndrome, Edwards' and Patau's syndromes, and various sex-chromosome anomalies. About 0.5% of all live-born children carry an improper chromosome number, and since more than 3 million babies are born in the United States each year, this is a substantial number of severely handicapped children.

It should be mentioned that many cases of genetic disease in man result, not from single genes, but from the concerted actions of many genes (multifactorial traits). Strictly speaking, a mutation in one of the many genes does not necessarily influence the manifestation of disease. Because of the complexity of the genetics associated with these diseases, our understanding both of their nature and of their response to mutation is very limited.

All the forms of genetic change described above (see Table VII-4 for examples) are known to occur spontaneously, i.e., in the absence of known causative agents. They also can be produced by various physical agents, such as ultraviolet light and ionizing radiation, and by chemical agents. Whether or not naturally occurring mutagens in our environment are responsible for the "spontaneous" mutation rate is moot. Certainly no single agent can readily be implicated as the sole cause. Present evidence indicates that natural background radiation levels (from cosmic rays and natural terrestrial radioactivity) are able to account for only part of the spontaneous incidence.

TABLE VII-4. Some Selected Types of Human Diseases Caused by "Mutation".



Although there is no definitive proof that any single mutant human individual resulted from exposure of the parents to a known mutagen (radiation or chemical), and thus no direct proof that these agents are indeed mutagenic in man, radiation has been demonstrated to be mutagenic in so many organisms that it seems very unlikely that it is not mutagenic in man. In fact, all the various types of mutations described above have been induced in cultured human somatic cells. The question, therefore, is not whether mutations will be induced, but rather how many will be introduced into the population.

Basis for Estimating Genetic Risk in Man

Three major principles of particular relevance to human risk estimates have emerged from studies of induced mutation.


Radiation or other mutagens appear to produce genetic changes that are qualitatively the same as those that occur naturally. Different mutagens, however, may not increase all types of mutations in quantitatively the same manner.


At low doses and low dose rates of low-LET radiation, mutations are induced in direct proportion to the dose. No threshold dose is evident in the experiments testing this (with a few exceptions that are presently the subject of reevaluation).


In the low dose range of irradiation to which human populations are normally exposed from natural background or man-made sources, the manner in which the dose is received will not affect the yield of induced mutations. The same number of mutations will result if 100 millirem are received all at once or spread out over weeks, months, or even years.

As mentioned earlier, national and international groups (ICRP, NCRP, UNSCEAR) have periodically evaluated the data and provided recommendations based on their assessments of radiation hazards. In the United States, the National Academy of Sciences in 1972 published a major document, The Effects on Populations of Exposure to Low Levels of Ionizing Radiation. This report (BEIR Committee, 1972) serves as a principal source of guidance for such governmental agencies as the Environmental Protection Agency and the Nuclear Regulatory Commission, which are charged with protecting the public from unnecessary exposure.

The BEIR Committee report is the major basis for the risk estimates reached here. In addition, because of evidence that has become available since the publication of that report, some modifications will be considered.

The Subcommittee on Genetics of the BEIR Committee used four ways to estimate the risk:

1. Risk Relative to That from Natural Background Radiation

The average natural background radiation level in the United States is about 100 mrem/yr. The Committee noted that by keeping the additional radiation dose to the population from man-made sources below this level "we are assured that the additional consequences will neither differ in kind from those which we have experienced throughout human history nor exceed them in quantity." Thus, the BEIR Committee recommended that the natural background radiation be used as a standard of comparison.

2. Risk Estimates for Specific Genetic Conditions

To determine the risk, the BEIR Committee made estimates of the doubling dose for human mutation rates, i.e., the dose required to double the spontaneous mutation rate. These values were obtained by dividing the best estimates of human spontaneous gene mutation rates by the average induced mutation rate per gene (or locus) per roentgen, which was obtained from mouse germ-cell studies. The average human rate was taken to be between 0.5 × 10⁻⁶ and 0.5 × 10⁻⁵ per gene per generation. The induced average gene mutation rate for spermatogonia was taken as 0.5 × 10⁻⁷ per rem, while that for oocytes was taken as zero, and the two were averaged as 0.25 × 10⁻⁷. The doubling dose for gene mutation was therefore taken to be between 20 and 200 rem (0.5 × 10⁻⁶/0.25 × 10⁻⁷ and 0.5 × 10⁻⁵/0.25 × 10⁻⁷). The genetic conditions and their spontaneous incidences were taken from studies carried out in Northern Ireland up to 1958. Thus, the estimates of risk are themselves based on three separate estimates, each of which has some degree of uncertainty. The results of the BEIR analysis are presented in Table VII-5. The estimates are presented in terms of effects in the first generation and effects at equilibrium when the maximum permissible dose (5 rem over a 30-yr generation time) is given for many generations.
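The doubling-dose division described above can be laid out step by step; all values are those quoted in the text.

```python
# The BEIR doubling-dose arithmetic: spontaneous human mutation rate
# divided by the induced rate per rem (averaged over both sexes).
spontaneous_low, spontaneous_high = 0.5e-6, 0.5e-5  # per gene per generation
induced_male = 0.5e-7     # per gene per rem, mouse spermatogonia
induced_female = 0.0      # taken as zero for oocytes
induced_avg = (induced_male + induced_female) / 2   # 0.25e-7 per gene per rem

print(round(spontaneous_low / induced_avg))   # 20 rem
print(round(spontaneous_high / induced_avg))  # 200 rem
```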

TABLE VII-5. Estimated Effects of 5 rem per Generation on a Population of 1 Million Live Births (Figures from BEIR Committee, 1972).



The BEIR Committee based its estimates of the number of radiation-induced chromosome anomalies resulting from unbalanced chromosome rearrangements on the frequency of balanced translocations recorded as semisterility in the offspring of male mice whose spermatogonia had been irradiated. For chronic irradiation at low LET and low dose rates, this frequency was taken to be 1.5 × 10⁻⁵ per rem. To convert to human terms, this value was multiplied by 2 to correct for the greater sensitivity of human chromosomes, multiplied by 4 to estimate the number of unbalanced translocations, and multiplied again by 2 to include an equivalent rate induced in females (since that value was unknown). This product, multiplied by 5, the maximum permissible dose in rem per 30-yr reproductive generation, gave 1,200 zygotes per million with unbalanced translocations (1.5 × 10⁻⁵ × 2 × 4 × 2 × 5 = 1.2 × 10⁻³ = 1,200 × 10⁻⁶). A further adjustment for the 5% of these zygotes that were thought to survive gave 1,200 × 10⁻⁶ × 0.05 = 60 × 10⁻⁶ unbalanced translocations appearing in the first generation.

By a similar calculation, survival of 5% of the offspring of the 300 zygotes per million carrying balanced translocations (1.5 × 10⁻⁵ × 2 × 2 × 5 = 3 × 10⁻⁴ = 300 × 10⁻⁶) was expected to contribute an additional 15 unbalanced translocations per million at equilibrium. These values are shown in Table VII-5.
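The two multiplication chains above, with the factors named, reproduce the table entries:

```python
# The BEIR Committee's translocation chain, step by step from the text.
rate_mouse = 1.5e-5    # balanced translocations per gamete per rem, mouse
human_factor = 2       # human chromosomes taken as twice as sensitive
unbalanced_factor = 4  # unbalanced products per balanced translocation
both_sexes = 2         # females assumed to match the male rate
dose_rem = 5           # maximum permissible dose per 30-yr generation
survival = 0.05        # fraction of affected zygotes surviving to birth

unbalanced = rate_mouse * human_factor * unbalanced_factor * both_sexes * dose_rem
print(round(unbalanced * 1e6))             # 1200 per million zygotes
print(round(unbalanced * survival * 1e6))  # 60 live-born, first generation

balanced = rate_mouse * human_factor * both_sexes * dose_rem
print(round(balanced * 1e6))               # 300 per million zygotes
print(round(balanced * survival * 1e6))    # 15 added at equilibrium
```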

For induced aneuploidy the BEIR Committee relied exclusively on the rate of X-chromosome loss obtained from female mice (6 × 10⁻⁶ losses per gamete per rad of chronic low-LET radiation). This rate was used to estimate the frequency of viable aneuploids induced in man, leading to the number 5 entered in Table VII-5.

3. Risk Relative to Current Incidence of Serious Disabilities

The BEIR Committee also made estimates from the evidence of diseases of complex etiology, described in the Northern Ireland study (Stevenson, 1959). These diseases make up the bulk of the 56,900 cases per million live births in Table VII-5. About 70% of the total are congenital anomalies, anomalies expressed after birth, and constitutional and degenerative diseases. A considerable uncertainty is associated with the radiosensitive mutational component of these diseases. The mutational component, which is the proportion of the incidence that is directly proportional to the mutation rate, was estimated to lie between 5% and 50%. It was also assumed that under equilibrium conditions only 10% of the disorders would be manifest in the first generation. The values are listed in Table VII-5.

4. Risk in Terms of Ill-Health

The BEIR Committee assumed that an independent measure of risk could be obtained from the component of ill health that results from mutationally dependent genetic disorders. It considered that perhaps 20% of all ill health had a mutationally dependent origin and therefore, by using the estimates of the doubling dose (20 rem to 200 rem), arrived at the suggestion that 5 rem per generation would increase the incidence of ill health by 0.5 to 5% (5 rem × 20%/20 rem = 5%, or 5 rem × 20%/200 rem = 0.5%).
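The ill-health percentages follow directly from the two doubling-dose endpoints:

```python
# Increase in ill health from 5 rem per generation: the dose, divided by
# the doubling dose, scaled by the 20% of ill health taken as
# mutationally dependent.
mutational_fraction = 0.20   # share of ill health taken as mutationally dependent
dose_rem = 5                 # rem per generation

increases = {dd: dose_rem / dd * mutational_fraction for dd in (20, 200)}
for dd, inc in increases.items():
    print(f"doubling dose {dd} rem -> {inc:.1%} increase in ill health")
# 20 rem gives 5.0%; 200 rem gives 0.5%
```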

Reconsideration of the BEIR Committee's Estimates

Since the publication of the BEIR report, Trimble and Doughty (1974) have published a large-scale study of genetically determined diseases in children in British Columbia. In the interval between the 1958 study by Stevenson and that of Trimble and Doughty, understanding of the mode of transmission of many human genetic diseases improved considerably, and this has led to a revision of the diagnostic categories originally used in the Stevenson study. The new study showed that a larger proportion of disease—approximately 9% (instead of approximately 6%)—had a genetic basis, but that the dominant gene class was lower by an order of magnitude. Their study did not present any information on the frequency of dominant gene diseases that show late onset (i.e., become manifest after childhood) or on those with variable levels of penetrance. The frequency of these two types of dominant disease is about twice the frequency of the dominant diseases measured by Trimble and Doughty. Therefore, the suggested incidence of dominant disorders is believed to be approximately 3,000 cases per million (Table VII-6) rather than the 10,000 cases per million used by the BEIR Committee (Table VII-5).

TABLE VII-6. Estimated Effect of 5 rem per Generation on a Population of 1 Million Live Births (BEIR Committee Estimates Modified to Account for New Data and Approaches).



The suggestion by human geneticists that there is no mutational component associated with the diseases classified as congenital anomalies, anomalies expressed later in life, and constitutional and degenerative diseases (Newcombe, 1975) has great significance for estimating the contribution of radiation to these diseases. The argument suggests that these categories are maintained exclusively by selection mechanisms, as was described in the simple case of sickle cell anemia, and that changes in the mutation rate will not greatly affect their numbers. Under these circumstances, the radiation-induced increase will be approximately zero. The numbers in brackets in Table VII-6 would represent the contribution from a 50% mutational component. Many geneticists believe this percentage to be unrealistically high (see Newcombe, 1975). Because of this, and because these numbers are not based on experimental data, less confidence should be attached to them.

Another area of reappraisal results from studies of induced chromosome aberrations. A recent analysis of translocations induced in spermatogonial cells of both humans and marmosets (primates) suggests that after 100 R of acutely delivered radiation, the frequency of translocations observed in spermatocytes is 0.077 per cell (Brewen, Preston, and Gengozian, 1975). The same authors have also demonstrated that induced translocations (Y) in mouse germ cells, as well as dicentrics in human and marmoset lymphocytes, are best related to dose (D) by the quadratic equation Y = C + αD + βD². They calculated that for acute X-ray doses the expected frequency of transmissible translocations is between 1 × 10⁻⁴ and 2 × 10⁻⁴ translocations per gamete per rem. This calculation, however, includes both the dose and dose-squared components expected for acute high-dose irradiation and thus is likely to be too high by a factor of 2 when chronic low-dose irradiation is considered. We shall thus assume the value to be 0.5 × 10⁻⁴ to 1.0 × 10⁻⁴ transmissible translocations per rem. The BEIR Committee's previous analysis used a rate of 3 × 10⁻⁵ (viz., 1.5 × 10⁻⁵ × 2, the correction factor for humans). Correcting for this difference but otherwise adopting the same approach, we find that with exposure of 5 rem per generation the expected number of live-born chromosomally unbalanced offspring in a million live births would be 100-200 in the first generation and 125-250 at equilibrium (see Table VII-6).
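The 100-200 first-generation figure can be reproduced if the revised per-rem rates are run through the same multipliers the BEIR Committee used (×2 for females, ×4 for unbalanced products, 5% zygote survival). The text says only that the same approach was adopted, so this chain is a reconstruction under those assumptions.

```python
# Revised translocation estimate: the Brewen-Preston-Gengozian per-rem
# rates pushed through the BEIR-style multipliers (an assumed chain).
dose_rem = 5
both_sexes = 2         # females assumed to match the male rate
unbalanced_factor = 4  # unbalanced products per balanced translocation
survival = 0.05        # zygote survival to live birth

first_gen = [
    round(rate * dose_rem * both_sexes * unbalanced_factor * survival * 1e6)
    for rate in (0.5e-4, 1.0e-4)  # transmissible translocations per gamete per rem
]
print(first_gen)  # [100, 200] per million live births, first generation
```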

Jacobs et al. (1972) stated that a realistic estimate of the spontaneous rate of reciprocal translocations is between 0.5 × 10⁻³ and 1.0 × 10⁻³ per gamete. Dividing by the radiation-induced rate obtained by Brewen, Preston, and Gengozian gives a doubling dose of 5-20 rem for this category of genetic event. Jacobs et al. (1974, p. 376) also reported data indicating that the current incidence of live-born suffering from unbalanced chromosome rearrangements is about 500 cases per million rather than 1,000 cases per million as estimated by the BEIR Committee (see Table VII-5).
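The 5-20 rem range comes from pairing the endpoints of the two rate estimates:

```python
# Doubling dose for reciprocal translocations: spontaneous rate (Jacobs
# et al., 1972) divided by the induced per-rem rate (Brewen, Preston,
# and Gengozian, chronic-dose values). Pairing low spontaneous with
# high induced, and vice versa, brackets the range.
doubling = [
    round(spontaneous / induced)
    for spontaneous, induced in ((0.5e-3, 1.0e-4), (1.0e-3, 0.5e-4))
]
print(doubling)  # [5, 20] rem
```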

Since publication of the BEIR report, data have appeared on radiation-induced nondisjunction in meiosis in the female mouse. After an average dose of 20 rad, Uchida and Lee (1974) found that 6 oocytes out of 1,149 contained an extra chromosome. After 5 rad, only one-quarter (5/20) as many would be expected. Since irradiation of males does not seem to give nondisjunction (UNSCEAR, 1972, pp. 256-257), there will be a further reduction of one-half, and since the mouse has 20 pairs of chromosomes, the estimated rate per chromosome for 5 rad would then be 6/1,149 × 5/20 × 1/20 × 1/2 = 32 per million. Only four types of chromosomal aneusomy have been found to be viable in humans, which leads to an estimate of 130 cases per million instead of the 5 originally given by the BEIR Committee.
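The nondisjunction product works out as follows; the text rounds the results down to 32 and 130.

```python
# Nondisjunction arithmetic from the Uchida and Lee (1974) mouse data.
oocytes_extra, oocytes_scored = 6, 1149  # extra-chromosome oocytes at 20 rad
dose_scaling = 5 / 20                    # scale from 20 rad down to 5 rad
per_chromosome = 1 / 20                  # mouse has 20 chromosome pairs
sex_factor = 1 / 2                       # males appear to contribute no events

rate = (oocytes_extra / oocytes_scored) * dose_scaling * per_chromosome * sex_factor
per_million = rate * 1e6         # ~32.6, quoted as 32 per million per chromosome
viable = per_million * 4         # ~130.5, quoted as 130 (4 viable aneusomies)
print(round(per_million, 1), round(viable, 1))
```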

A final area of reassessment involves the induced mutation rate estimates obtained from mouse studies. The BEIR Committee assumed a zero mutation rate for females based on the fact that, in the mouse female, no mutations appear to be induced by irradiation of the immature oocyte (the most persistent stage). When the United Nations Committee (UNSCEAR, 1972, p. 252) reviewed these results, they concluded that use of mutation rate estimates based on immature oocytes in the female mouse could lead to a serious error because of major differences between human and mouse oocytes, both in morphology and in their response to cell killing. They felt, however, that use of risk estimates from the genetically most sensitive stage in the mouse should not result in an underestimate of the hazard to man. Nonetheless, scientific controversy exists as to what figures should be applied here. Taken at face value, chronic irradiation of "mature" oocytes leads to a mutation rate estimate 1/20 of that obtained from acute exposure, or a rate of about 0.25 × 10⁻⁷ mutations per gene per rem (5.4 × 10⁻⁷ × 1/20). One recent analysis (Abrahamson and Wolff, 1976) suggests that this value might be too low, since the radiation procedures lead to recovery of mouse cells that were irradiated to a substantial extent during their insensitive immature stage—a stage that may not be applicable to human risk estimates. If the latter viewpoint is shown to be correct, the risk estimate for the female is likely to be in the range of 1.2 × 10⁻⁷ mutations per rem. Since a male rate of 0.7 × 10⁻⁷ was obtained from a regression analysis of all the data from chronically irradiated spermatogonia (Searle, 1974), the average for both sexes would be 1 × 10⁻⁷ rather than the value of 0.25 × 10⁻⁷ used by the BEIR Committee. This would mean that the doubling dose would be in the range of 5-50 rem for those genetic effects that are proportionally increased by irradiation.
This value is used as the basis for the calculations presented in Table VII-6, even though some would argue that it is too low.
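The revised average and the resulting 5-50 rem doubling-dose range follow from the same division used earlier, with the new per-rem rates:

```python
# Revised two-sex average induced rate and the resulting doubling doses.
female = 1.2e-7   # per gene per rem (Abrahamson and Wolff reappraisal)
male = 0.7e-7     # per gene per rem (Searle regression)
avg = (female + male) / 2   # 0.95e-7, rounded in the text to 1e-7

# Doubling dose = spontaneous rate / induced rate, using the rounded 1e-7.
doubling = [round(s / 1e-7) for s in (0.5e-6, 0.5e-5)]
print(doubling)  # [5, 50] rem
```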

Thus, there is new evidence on the genetic basis of disease that would tend to lower the mutationally dependent contribution to future generations, as well as studies and calculations that would raise the induced risk per rem of radiation for those mutationally dependent diseases. The results of combining these effects are shown in Table VII-6. Although the use of these figures can lead to a somewhat higher estimate of the genetic risk from radiation than will the use of the BEIR Committee's figures, in view of the uncertainties involved, the use of the figures in Table VII-6 seems to be a responsible and prudent way to establish risks.

Somatic Effects

The somatic effects of concern at the low dose rates associated with natural background radiation are those that might conceivably result from alterations in individual cells, singly or in small numbers, in the absence of extensive cell killing or tissue disorganization. The most important such effect is considered to be the induction of cancer (BEIR Committee, 1972).

Another effect of potential concern, possibly because it may prove to have preneoplastic significance, is the induction of chromosomal abnormalities in somatic cells, the pathological importance of which is unknown at present (UNSCEAR, 1969; NAS-NRC, 1974). Other effects that are also of potential concern include infant mortality, disturbances in the growth and development of the embryo, shortening of the life span from causes other than cancer, and effects on the nervous system. To date, however, neither the available dose-response data on these effects nor our knowledge of their mechanisms suggests that they deserve to be included with cancer as risks warranting evaluation in relation to natural background radiation levels (BEIR Committee, 1972).

For the above reasons, we will confine our attention here to the possible risk of carcinogenic effects.

Radiobiological Basis for Evaluation of Cancer Risk

As has been discussed extensively elsewhere (ICRP, 1969; BEIR Committee, 1972; UNSCEAR, 1972), there is no conclusive evidence that ionizing radiation exerts carcinogenic effects at the low dose rates commensurate with natural background. Evaluation of the potential risks at such levels must depend, therefore, on extrapolation from observations at higher doses and dose rates. Because the dose-rate characteristic of background radiation (approximately 100 mrem/yr) is several orders of magnitude lower than the lowest rates at which carcinogenic effects have been documented unequivocally, the extrapolation involves assumptions that are highly tentative in our present state of knowledge.

Among the major factors complicating the extrapolation is uncertainty about the shapes of the dose-incidence curves for cancers of different types, about the relevant mechanisms of carcinogenesis, and about the influence of biological and physical variables (e.g., spatial and temporal distribution of the radiation dose, age at irradiation, sex, and physiological state) that have been observed to affect the induction of malignancy at higher dose rates in human and animal populations. The problem is further complicated by the multiplicity and diversity of effects through which radiation is thought to influence the probability of cancer development. These effects include mutagenic changes in DNA, induction of chromosomal aberrations, activation or enhancement of occult tumor viruses, alteration in the dynamics of cell populations, disturbances in hormonal regulation, impairment of immunological defenses, and other effects interfering with homeostasis. Any or all of these effects may conceivably be implicated in a given situation, depending on the dose, dose rate, and other circumstances. Some of the effects, such as disturbances in hormonal regulation and impairment of immunological defenses, are likely to be minimal or absent at low doses and low dose rates, since their induction requires extensive killing of cells. The other types of effects also should be reduced in frequency per rad at low doses and low dose rates, because of the action of various repair processes, at least in the case of low-LET radiation. Furthermore, if the time required to accumulate a given dose is a sufficient fraction of the life span, then, in the absence of age-dependent changes in susceptibility, the carcinogenicity per rad of the total cumulative dose can be expected to diminish, because the latent period for carcinogenesis will ultimately exceed the life expectancy of some members of the population at risk (NCRP, 1976).

Also complicating the evaluation of carcinogenic effects are the confounding effects of other forms of radiation damage. At high doses and high dose rates, the cytotoxic effects of radiation may interfere drastically with tumor induction, presumably because too few cells remain capable of proliferation to express the carcinogenic changes that might otherwise be manifest (BEIR Committee, 1972; Mole, 1975; NCRP, 1976).

For the reasons indicated, the combined effects of the various types of radiation-induced carcinogenic changes must depend heavily on the dose, dose rate, quality of radiation, and other variables; hence, it is not astonishing that the dose-incidence relationship has been observed to vary with these factors, at least in those instances where cogent data are available (BEIR Committee, 1972; UNSCEAR, 1972; NCRP, 1976). The relationship differs quantitatively, however, from one type of cancer to another, and in no instance are the parameters known well enough to enable confident prediction of the carcinogenic effects to be expected at the low dose rates associated with background radiation levels. These limitations notwithstanding, the pattern of relationships in general implies that any simple linear interpolation on dose from observations at higher doses and dose rates, without allowance for the influence of the aforementioned variables, is likely to overestimate the risks of low-level, low-LET radiation (BEIR Committee, 1972; UNSCEAR, 1972; NCRP, 1976).

It may be concluded, therefore, from all available data, that for carcinogenic effects, just as for the induction of mutations, chromosome aberrations, cell killing, teratogenic effects, and most other effects on mammalian cells and tissues, the dose-response curves for low-LET radiation will tend to be concave upward (see Fig. II-1 in Chapter II).

Characteristically, the curves tend to increase in slope with increasing dose and dose rate, until they reach the point where, with high doses accumulated at high dose rates, they pass through a maximum and eventually turn downward, owing, presumably, to excessive cell killing or to other forms of injury. In comparison, the dose-response curves for high-LET radiation tend to be steeper, more nearly linear, and less dependent on dose rate (BEIR Committee, 1972; UNSCEAR, 1972; NCRP, 1976).

Because of these variations in the biological effectiveness of radiation with changes in the spatial and temporal distribution of dose, various weighting factors have been introduced for use in risk estimation. These include the quality factor (Q), which has long been used to adjust for differences in linear energy transfer, or LET (ICRP, 1963; NCRP, 1967). More recently, other dose-effectiveness factors have been introduced to adjust for the reduced effectiveness of low-LET radiation at low doses and low dose rates, both with respect to genetic effects (BEIR Committee, 1972; UNSCEAR, 1972) and carcinogenic effects (NRC, 1975; NCRP, 1976).

Although the dose-effectiveness factors for carcinogenic effects are based largely on empirical observations of radiation carcinogenesis in experimental animals, they are concordant with the data on man and with the bulk of radiobiological experience on the induction of mutations, chromosome aberrations, cell killing, cell transformation in culture, and other effects that may be involved in carcinogenesis (NCRP, 1976). The dose-response relation envisaged can be represented by a function of the form:

Y = (C + aD + bD²) exp[−(αD + βD²)]

where Y is the frequency of cancers in the population at risk, C the control incidence, D the dose, and a, b, α, and β are constants. The model of carcinogenesis represented by the formula is that cancers are induced by changes that increase both linearly with dose and as the square of the dose in those cells that are not killed by radiation. The values of the constants are not known precisely for any neoplasm, and they may be presumed to vary somewhat from one type of neoplasm to another and under the influence of other variables. The data imply that for low-LET radiation the linear contribution (aD) is equal to the quadratic contribution (bD²) at about 50 to 100 rad, i.e., a/b = 50 to 100. According to this interpretation, the linear dose term (aD) would be expected to predominate over the quadratic dose term (bD²) at low doses and low dose rates, whereas the reverse would be true at high doses and high dose rates. As a result, the carcinogenic risks per rad would be expected to increase with dose and dose rate, until overridden by excessive cell and tissue damage, the total risks per rad being as much as 4 to 6 times higher after an acutely delivered dose of 300 rad than after a dose of 10 rad (NCRP, 1976). To allow for such differences in dose-effectiveness, weighting factors have been proposed that range in magnitude from 0.2, for doses of less than 10 rad, or for larger doses received at dose rates of less than 1 mrad per minute, to 1 for doses of 200 rad or more received at dose rates in excess of 1 mrad/min (Tables VII-7 and VII-8).
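The behavior of this dose-incidence function can be illustrated numerically. The following Python sketch uses arbitrary illustrative constants (only the ratio a/b matters here, and the cell-killing terms are set to zero); it is not a model fitted in the report:

```python
import math

def excess_incidence(D, a, b, alpha=0.0, beta=0.0):
    """Excess cancer frequency: (aD + bD^2) reduced by the
    exponential cell-killing factor exp[-(alpha*D + beta*D^2)]."""
    return (a * D + b * D**2) * math.exp(-(alpha * D + beta * D**2))

# With a/b between 50 and 100 rad, compare the risk per rad after an
# acute 300-rad dose with that after a 10-rad dose (killing neglected):
for a_over_b in (50.0, 100.0):
    a, b = a_over_b, 1.0  # arbitrary scale; only the ratio a/b matters
    ratio = (excess_incidence(300, a, b) / 300) / (excess_incidence(10, a, b) / 10)
    print(a_over_b, round(ratio, 1))
```

For a/b = 50 the ratio is about 5.8, and for a/b = 100 about 3.6, of the same order as the 4-to-6-fold difference in risk per rad cited above.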

TABLE VII-7. Dose-Effectiveness Factors for Carcinogenic Effects of Low-LET Radiation, in Relation to Dose Rate and Dose Magnitude, as Proposed by NCRP, Scientific Committee 40 (NCRP, 1976).


Dose-Effectiveness Factors for Carcinogenic Effects of Low-LET Radiation, in Relation to Dose Rate and Dose Magnitude, as Proposed by NCRP, Scientific Committee 40 (NCRP, 1976).

TABLE VII-8. Dose-effectiveness Factors for Carcinogenic Effects of Low-LET Radiation, as Used in the Reactor Safety Study (NRC, 1975).


Dose-effectiveness Factors for Carcinogenic Effects of Low-LET Radiation, as Used in the Reactor Safety Study (NRC, 1975).

The values tabulated have been introduced for use in estimating the overall carcinogenic risk of low-level low-LET radiation for all malignancies combined, and are not intended for use in estimating the risk per rad for every type of cancer individually, there being indications that the dose-response relation may differ among malignancies. At best, therefore, the values can be taken to represent no more than crude approximations, based on an approach that is necessarily simplified for practical purposes. For estimating the risk of carcinogenesis in any one organ, it may be more appropriate to use other dose-response functions and weighting factors, where these are indicated by the available data (NRC, 1975; NCRP, 1976). For example, evidence has been presented elsewhere that the dose-incidence relations for cancers of the breast and thyroid may be influenced less by dose rate (BEIR Committee, 1972; NRC, 1975; NCRP, 1976) and thus deserve to be treated differently. The values tabulated are also not intended for application to high-LET radiations, the effectiveness of which is generally assumed to be relatively invariant with dose and dose rate throughout the low-to-intermediate dose region (NCRP, 1976).

Risk Estimates for Specific Cancers

Review of the available data on human and animal populations indicates that radiation may conceivably cause cancer of virtually any type or site, given appropriate conditions of irradiation and host susceptibility (BEIR Committee, 1972; UNSCEAR, 1972). At the same time, however, the data indicate that tissues vary widely in susceptibility to radiation-induced malignancy, cancers of a relatively small number of types and sites predominating during the first 25-30 yr after doses of a few hundred rem or less (ICRP, 1969; BEIR Committee, 1972; UNSCEAR, 1972).

The types of cancer that may conceivably result from low-level irradiation, the time required for their development following exposure ("latent period"), the time during which they are expected to occur in excess among irradiated individuals ("plateau period"), and the average magnitude of the excess during the plateau period have been estimated by the BEIR Committee (1972). The Committee assumed a linear nonthreshold dose-incidence relationship fitted to the observed human data and interpolated to pass through the control incidence at the intercept (Table VII-9). In presenting the estimates tabulated, the BEIR Committee qualified the values on the basis of the following sources of uncertainty:

TABLE VII-9. Values Assumed by BEIR Committee (1972) in Estimating Risks of Low-Level Irradiation.


Values Assumed by BEIR Committee (1972) in Estimating Risks of Low-Level Irradiation.


None of the irradiated human populations studied to date has been followed throughout its entire life span, with the result that the duration and ultimate magnitude of the cancer excess attributable to a given exposure remain to be fully ascertained; by the same token, it is conceivable that additional types of radiation-induced cancers with unusually long latent periods are still to become manifest.


The relation between the radiation-induced risk and the natural risk is not clear from the existing data; that is, it is uncertain whether the excess resulting from a given dose more nearly approximates a constant percentage of the natural incidence (and thus varies with age at time of irradiation and other susceptibility factors) or a constant number of additional cases (irrespective of the natural incidence).


The existence of various kinds of homeostatic and repair processes argues strongly that the risk per rad of background radiation is likely to be smaller than that at the higher doses and dose rates where effects have been observed; in fact, the possibility that the risk may approach zero at background levels is not excluded by the data.


The killing of susceptible cells at high doses and high dose rates can be expected to counteract the carcinogenic effects of radiation to some extent, with the result that linear extrapolation based on effects observed under these circumstances may, conceivably, underestimate the risk of irradiation at lower doses and dose rates (BEIR Committee, 1972).

Although the values tabulated (Table VII-9) may ultimately prove to be underestimates, for the reasons given above, most observers have considered them more likely to be overestimates, owing to their failure to make any allowance for the effects of repair at low doses and dose rates, effects that have been amply documented in experimental animals. Thus, more recent evaluations of the risks of low-level low-LET irradiation have recommended the use of the aforementioned dose-effectiveness factors (Tables VII-7 and VII-8) in arriving at estimates for low doses and low dose rates by extrapolation from observations at higher doses and higher dose rates (NRC, 1975; NCRP, 1976).

In the Reactor Safety Study (NRC, 1975), while the upper bound estimates were derived from absolute risk values obtained by the BEIR Committee, the central estimates (Table VII-10) were small fractions of these values, derived by the use of dose-effectiveness factors (Table VII-8) intended to correct the estimates for the influence of repair at low doses and low dose rates of low-LET radiation. The lower bound estimates were based on the assumption that below a threshold of 10-25 rem the risk per rad is zero.

TABLE VII-10. Values Assumed in Reactor Safety Study for Use in Estimating Risks of Low-Level Low-LET Irradiation (NRC, 1975).


Values Assumed in Reactor Safety Study for Use in Estimating Risks of Low-Level Low-LET Irradiation (NRC, 1975).

The dose-effectiveness factors proposed by NCRP Scientific Committee 40 (Table VII-7) are similar in direction and range to those used in the Reactor Safety Study, but they differ in magnitude at low doses, especially over the region between 10 and 100 rad. Because of this difference, and the fact that most of the human data on radiation carcinogenesis come from populations in which the average dose received in any one exposure has not greatly exceeded 100 rad, estimates derived with the NCRP factors should be intermediate between those calculated in the Reactor Safety Study and those reported by the BEIR Committee (Table VII-11). From the values tabulated, it will be noted that the number of cancers attributed hypothetically to continuous low-level irradiation of the U.S. population at a rate approximating natural background radiation levels (100 mrem per year) ranges from 0 to roughly 9,000 per year, depending on the method used to arrive at the risk estimate. The larger of the values (9,000) corresponds to about 2.9% of the total number of cancer deaths from all causes recorded annually in the United States (BEIR Committee, 1972).
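As a rough check, the range of 0 to 9,000 deaths follows from applying the per-capita risks of Table VII-11 to the whole population. The population figure of 200 million and the annual cancer-death total of about 310,000 used below are round early-1970s values assumed for illustration, not figures from the report:

```python
population = 200e6                    # assumed U.S. population, early 1970s
risk_low, risk_high = 4.5, 45.0       # fatal cancers / 10^6 persons / yr (Table VII-11)

deaths_low = risk_low * population / 1e6    # lower-end estimate of deaths per year
deaths_high = risk_high * population / 1e6  # upper-end estimate of deaths per year

annual_cancer_deaths = 310_000        # assumed total U.S. cancer deaths per year
share = deaths_high / annual_cancer_deaths  # fraction represented by the upper estimate
print(int(deaths_low), int(deaths_high), round(100 * share, 1))
```

This reproduces the 900-to-9,000 range and shows the upper value amounting to about 2.9% of total cancer mortality.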

TABLE VII-11. Comparative Estimates of the Number of Cancer Deaths per Year in the U.S. Population Attributable to Continuous Exposure at Rate of 0.1 rem per Year.


Comparative Estimates of the Number of Cancer Deaths per Year in the U.S. Population Attributable to Continuous Exposure at Rate of 0.1 rem per Year.

The enormous variation among estimates yielded by different extrapolation models (Table VII-11) reflects the large uncertainty about dose-response relationships that complicates current attempts to estimate the carcinogenic risks of low-level irradiation. The criteria for selecting one method of risk estimation in preference to another must thus depend in large measure on other than purely scientific considerations. To the extent that the estimates are intended for purposes of limiting risks to populations, it is desirable that they should include a margin of safety large enough to compensate for any uncertainty as to their reliability. On the other hand, if the estimates are to be used for purposes of cost-benefit analysis, it is desirable that they not be exaggerated, since overestimation of the risks may prompt decisions in favor of alternatives that could involve greater hazards or burdens to society.

Risks from Radioactive Drinking Water

The average amount of background radiation to which the U.S. population is exposed is about 0.1 of a rem (100 mrem) per year. Part of this background comes from drinking water that contains radioactive materials.

The dose commitment from radioisotopes in U.S. drinking water supplies is very low. In a hypothetical water supply constituted so as to contain either average or likely amounts of radioactivity, a total-body dose of less than one-thousandth of a rem (0.244 mrem) per year would be accumulated. This is less than 1% of background. Although the dose to bone would be considerably higher, because strontium and radium are bone seekers, even this dose would constitute less than 10% of the total average natural background.

Estimates were made of three possible types of risk that could be induced by the radiation: developmental and teratogenic risks, genetic risks, and somatic risks.

Developmental and Teratogenic Risks

Although the developing fetus is sensitive to radiation, the total low-dose-rate doses that would be delivered during the sensitive periods of gestation are so small that no measurable effects of the radiation from drinking water will be found. The lowest dose level at which any effect has been reported is 3 mrem/day or 1,100 mrem/yr in contrast to the 0.244 mrem per year described above.

Genetic Risks

For the general population, the maximum permissible dose of man-made radiation is 170 mrem/yr, excluding medical uses of radiation. This amounts to a 5-rem genetic dose in each 30-yr generation. This dose would increase the current incidence of genetic diseases, which is about 94,400 per million live births, by about 200 per million in the first generation. The estimate of 200, however, is so uncertain that very wide limits must be placed about the value. The gonadal dose of 0.244 mrem/yr calculated for the hypothetical drinking water is expected to increase the incidence of genetic diseases above the 94,400/10⁶ live births by 200 × 0.244 mrem/(30 × 170 mrem) = 0.0098 additional genetic diseases per million live births per year. Since there are approximately 3.6 million live births in the United States each year, this is an increase of 0.035 genetic diseases in the United States per year. If one takes the unlikely extreme upper limit of the estimated genetic hazards of radiation (about 4,000) instead of the value 200, the increase is 0.7 cases per year.
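The arithmetic in this paragraph can be spelled out as follows (a minimal sketch using the figures quoted above; the report's rounded values, 0.0098 and 0.035, differ slightly from the unrounded results):

```python
risk_per_generation = 200   # extra cases / 10^6 births per 5-rem (30-yr) genetic dose
upper_limit = 4000          # extreme upper-limit estimate of the same quantity
gonad_dose = 0.244          # mrem/yr from the hypothetical water supply
generation_dose = 30 * 170  # mrem accumulated in a 30-yr generation at 170 mrem/yr

# Extra genetic diseases per million live births per year:
per_million = risk_per_generation * gonad_dose / generation_dose
births = 3.6                     # million U.S. live births per year
per_year = per_million * births  # total extra cases per year in the U.S.

print(round(per_million, 4), round(per_year, 3))
print(round(upper_limit * gonad_dose / generation_dose * births, 2))
```

The first line prints roughly 0.0096 and 0.034; the upper-limit figure comes out near 0.7 cases per year, as stated.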

Somatic Risks

The natural background of radiation can be estimated to cause 4.5 to 45 fatal cases of cancer per year per million people, depending on the risk model used to make the calculation (Table VII-11). Less than 1% of this will be contributed by the radionuclides in drinking water.

Variations in the radium content of drinking water, however, may cause appreciable differences in the radiation dose to the skeleton and, in turn, in the risks of associated carcinogenic effects. Under average conditions, the annual dose to bone from radium amounts to approximately 6.4 mrem/yr, which represents about 6% of the total dose to the skeleton from all sources of natural background radiation (roughly 100 mrem annually). The highest radium levels in drinking water (25 pCi/liter of 226Ra and an additional 12.5 pCi/liter of 228Ra), however, may be expected to deliver a dose to the skeleton of about 600 mrem/yr, which would represent a sixfold increase in the total dose to bone from all natural sources combined. If the carcinogenic risks associated with skeletal irradiation are assumed to be 0.2 fatal cases of bone cancer per million persons per year per rem (Table VII-9), then for a period up to 30 yr, in a population with a typical distribution of ages, the risks attributable to natural background radiation can be estimated to range up to about 0.6 per million persons per year under average conditions,* and to 4.2 per million per year under conditions of maximal intake of radium in the drinking water (about 600 mrem/yr from the radium).

In addition to these risks, the possibility of carcinogenic effects from radium on cells adjacent to bone, such as those in epithelia lining cranial sinuses and those in the bone marrow, should also be mentioned. However, the risks of such effects are likely to be appreciably smaller and cannot be estimated precisely from existing data. In comparison with the overall risks of cancers of all sites combined, of which 4.5 to 45 fatal cases per million per year (i.e., 900 to 9,000 cases in a population of 200 million) can be attributed to natural background radiation at average levels (Table VII-11), the additional 3.6 fatal bone malignancies per million per year ascribable to maximal intakes of radium in drinking water constitute a significant increment. It should be noted that only about 120,000 people drink water estimated to contain between 9 and 25 pCi/liter. Thus the excess bone cancers in this group would be between 0.16 and 0.43 per year; that is to say, one excess bone cancer every 2 to 6 yr. Since about 113,000 of the 120,000 people drink water containing less than 20 pCi/liter, the true number of excess bone cancers will lie somewhere toward the lower end of the range.
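The bone-cancer figures above follow from the 0.2 risk coefficient and a 30-yr accumulation period given in the text; the linear scaling of skeletal dose with radium concentration in the sketch below is an assumption made for illustration:

```python
risk_coeff = 0.2      # fatal bone cancers / 10^6 persons / yr / rem (Table VII-9)
years = 30            # period over which the skeletal dose accumulates

background_rem = 0.1  # ~100 mrem/yr to bone from all natural sources
max_radium_rem = 0.6  # ~600 mrem/yr at maximal radium intake (25 pCi/l 226Ra)

risk_background = risk_coeff * background_rem * years                  # per 10^6 per yr
risk_maximal = risk_coeff * (background_rem + max_radium_rem) * years  # per 10^6 per yr

# Excess cases per year among the ~120,000 people on high-radium water,
# assuming bone dose scales linearly from 9 up to 25 pCi/liter:
exposed = 120_000
excess_at_25 = risk_coeff * max_radium_rem * years * exposed / 1e6
excess_at_9 = risk_coeff * (9 / 25) * max_radium_rem * years * exposed / 1e6
print(round(risk_background, 1), round(risk_maximal, 1))
print(round(excess_at_9, 2), round(excess_at_25, 2))
```

This reproduces the 0.6 and 4.2 per-million annual risks, and the 0.16-to-0.43 range of excess bone cancers per year in the exposed group.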

When interpreting the above estimates, it must be remembered that they depend on dose-response models that remain highly uncertain. For example, the value given for the combined frequency of deaths from all types of cancer attributable to natural background radiation (namely, 45 deaths per million per year) is higher by a factor of three or more than estimates derived with any of the other risk models cited (Table VII-11). Likewise, the corresponding risk estimates for skeletal cancer could vary widely, depending on the postulated dose-response relationship. Although the value yielded by the BEIR Committee's absolute risk model (0.2 fatal cancers per million per year per rem) is not greatly different from the value yielded by the BEIR Committee's relative risk model (since 9 of the 1,704 fatal cancers per million per year are bone cancers, this would be approximately 0.09 fatal bone cancers per million per year per rem), both models, in postulating a linear nonthreshold dose-response relationship, give substantially higher estimates than do models postulating dose-dependent and dose-rate-dependent variations in the risk per rem. Given the uncertainties in present knowledge, the BEIR Committee's absolute risk model as used in the foregoing would seem to provide an acceptably conservative approach for the purposes at hand.

Summary—Radioactivity In Drinking Water

Everyone is exposed to some natural radiation that comes from both cosmic rays and terrestrial sources. Although there are large geographic variations in the amount of natural background radiation, the average background dose in the United States is about 100 mrem/yr. A small proportion of this unavoidable background radiation comes from drinking water that contains radionuclides.

By far the largest contribution to the radioactivity in drinking water comes from potassium-40, which is present as a constant percentage of total potassium. Only a small percentage of the total potassium-40 body burden, however, comes from drinking water. The total-body dose from other possible radioactive contaminants of water constitutes a small percentage of the background radiation to which the population is exposed. Although the amounts of individual radioactive contaminants fluctuate from place to place, calculations made for a hypothetical water supply that might be typical for the United States have shown that a total soft-tissue dose of only 0.24 mrem/yr would be contributed by all the radionuclides found in the water. Even with rather wide fluctuations in the concentrations, the total contribution of the radionuclides will remain very small.

However, bone-seeking radionuclides—such as strontium-90, radium-226, and radium-228—account for a somewhat larger proportion of the total bone dose. This is particularly true for the two isotopes of radium because they, or their daughters, emit high-linear-energy-transfer (LET) radiation, and because certain restricted localities have been found to have rather high concentrations of radium in drinking water. Nevertheless, in the hypothetical typical water supply, less than 10% of the annual background dose comes from such radiation. It has also been estimated that the total population exposed to levels of radium greater than 3 pCi/liter is about a million people. About 120,000 people are exposed to radium at levels greater than 9 pCi/liter.

Risk estimates were made of three kinds of adverse health effects that radiation could produce: developmental and teratogenic effects, genetic effects, and somatic (chiefly carcinogenic) effects.

Developmental and Teratogenic Effects

The developing fetus is exposed to radiation from radionuclides in drinking water for only nine months. Thus, the total dose accumulated by the fetus will be very small. Furthermore, although the fetus is sensitive to the effects of radiation in some stages of development, these periods are sharply limited and extremely short. For this reason, too, the total dose delivered during the periods in which it could possibly have developmental and teratogenic effects would be extremely small. Current concentrations of radionuclides in drinking water lead to doses of about one five-thousandth of the lowest dose at which a developmental effect has been found in animals. Therefore, the developmental and teratogenic effects of radionuclides would not be measurable.

Genetic Effects

It has been estimated that there are about 94,400 genetic diseases per million live births in the United States. The maximum permissible dose of man-made radiation for the general population (170 mrem/yr) has been estimated to increase this number in the first generation by 170-215, with an unlikely upper limit of 4,250. On the basis of a 30-yr generation and 3.6 million live births per year in the United States, we would expect the 0.24 mrem soft-tissue dose, or gonad dose, to lead to 0.0098 additional cases of genetic disease per million live births per year or 0.035 additional cases of genetic disease in the United States per year. Even at the unlikely extreme upper limit of possible genetic effects of radiation of around 4,000 extra cases in the first generation, there would still be less than one additional case per year in the 94,400 × 3.6 = 340,000 live births with genetic defects. The wide fluctuation in bone dose caused by fluctuations in the radium concentration of drinking water would not have any sensible effect on the genetically significant dose, because radium is predominantly a bone seeker and will deliver very little radiation to the gonads.

Somatic and Carcinogenic Effects

The natural background of radiation can be estimated to cause 4.5 to 45 cases of cancer per year per million people, depending on the risk model used. The annual whole-body dose from radionuclides in typical drinking water contributes less than 1% of this amount, and thus, for cancers other than those in bone, may cause a negligible increase in the total. Radium, however, can contribute somewhat less than 7% of the total bone dose received from background radiation in areas of "normal" radium concentration. The average carcinogenic risk associated with skeletal irradiation by radium in a population with a typical distribution of ages is estimated to approximate 0.2 fatal cases of bone cancer per million persons per year per rem. Therefore, over a period from 10 to 40 yr after the beginning of skeletal irradiation, the average risk attributable to natural background radiation is estimated to range from 0.6 per million persons per year, under typical conditions, to as much as 4.2 per million per year, in regions where 25 pCi/liter of radium-226 is found in the drinking water. It has been noted that in the United States 120,000 people are estimated to drink water containing between 9 and 25 pCi/liter of radium-226, and only a small number lie near the upper end of this range. The number of excess cancers in this group would therefore lie between 0.16 and 0.43 per year. Since not all of the 120,000 people drink water containing 25 pCi/liter of radium-226, the latter number is inordinately high.


The radiation associated with most water supplies is such a small proportion of the normal background to which all human beings are exposed, that it is difficult, if not impossible, to measure any adverse health effects with certainty. In a few water supplies, however, radium can reach concentrations that pose a higher risk of bone cancer for the people exposed.

Future Needs

The precision of estimation of the health risks associated with radioactivity in drinking water could be enhanced if several water systems were analyzed to determine the complete distributions of beta and alpha radiation that constitute the gross counting measurements.

Because the precise ratio of radium-228 to radium-226 in water has not been measured extensively, an attempt should be made to determine the ratio in several ground and surface waters whose content of radium-226 is known. Activity concentrations of the waters to be analyzed should range from about 0.1 to 50 pCi/liter. The percentage of the daughter radionuclides present should be determined.

Because radon is a noble gas that is quickly released from water, it is possible that, in some areas of high radon content, water vapor containing radon might constitute an inhalation hazard when such water is used, for example, in humidifiers or for showers. A determination should be made whether or not radon emanations from water do indeed constitute an inhalation hazard.

The models used in this report do not take into account the possibility that the finely divided solid particles that occur in water may alter the uptake of radionuclides. The effects of the solids in drinking water on the metabolism and uptake of radionuclides merit investigation.


Absolute risk

Excess or incremental risk due to exposure to a toxic or injurious agent (e.g., to radiation). Difference between the risk (or incidence) of disease or death in the exposed population and the risk in the unexposed population. Usually expressed as number of excess cases in a population of a given size, per unit time, per unit dose (e.g., cases/10⁶ exposed population/year/rem).

Curie (Ci).

Unit of radioactivity. 1 Curie = 3.7 × 10¹⁰ nuclear transformations per second. Some fractions are: millicurie (1 mCi = 10⁻³ Ci), microcurie (1 µCi = 10⁻⁶ Ci), nanocurie (1 nCi = 10⁻⁹ Ci), picocurie (1 pCi = 10⁻¹² Ci), femtocurie (1 fCi = 10⁻¹⁵ Ci).

Latent period

Period between time of exposure to a toxic or injurious agent and appearance of a biological response.


LET

Linear energy transfer. Average amount of energy lost by an ionizing particle or photon per unit length of track in matter.

Plateau period

Period of above-normal, relatively uniform, incidence of disease or death in response to a toxic or injurious agent.


Rad

Unit of dose of radiation (energy) absorbed in any medium except air. 1 rad = 100 erg/g.

Relative risk

Ratio of the risk in the exposed population to that in the unexposed population. Usually given as a multiple of the natural risk.


Rem

Unit of radiation dose equivalence. Numerically equal to absorbed dose in rad multiplied by a quality factor that expresses the biological effectiveness of the radiation of interest, and other factors. Equal doses expressed in rem produce the same biological effects, independently of the type of radiation involved.

Roentgen (R).

Unit of radiation (energy) absorbed in air. 1 R = 2.58 × 10⁻⁴ coulomb/kg of air.


  • Abrahamson, S., and S. Wolff. 1976. Reanalysis of radiation-induced specific locus mutations in the mouse. Nature 264:715-719. [PubMed: 1034880]
  • AEC. 1974. Plutonium and other transuranium elements: Sources, environmental distribution and biomedical effects. WASH-1359, U.S. Atomic Energy Commission.
  • BEIR Committee. 1972. The effects on populations of exposure to low levels of ionizing radiation. Advisory Committee on the Biological Effects of Ionizing Radiations, National Academy of Sciences, National Research Council, Washington, D.C.
  • Batchelor, A.L., R.J.S. Phillips, and A.G. Searle. 1969. The ineffectiveness of chronic irradiation with neutrons and gamma rays in inducing mutations in female mice. Br. J. Radiol. 42:448-451. [PubMed: 5810857]
  • Brent, R.L., and R.O. Gorson. 1972. Radiation exposure in pregnancy. Curr. Probl. Radiol., vol. 2, no. 5.
  • Brewen, J.G., R.J. Preston, and N. Gengozian. 1975. Analysis of X-ray-induced chromosomal translocations in human and marmoset stem cells. Nature 253:468-470. [PubMed: 803303]
  • Cahill, D.F., L.W. Reiter, J.A. Santolucito, G.T. Rehnberg, M.E. Ash, M.J. Fauor, S.J. Bursian, J.F. Wright, and J.W. Laskey. 1976. Biological assessment of continuous exposure to tritium and lead in the rat. In Symposium on Biological Effects of Low-Level Radiation Pertinent to Protection of Man and His Environment, Chicago, 1975. International Atomic Energy Agency, Vienna.
  • Cavalli-Sforza, L.L., and W.F. Bodmer. 1971. The Genetics of Human Populations. W.H. Freeman and Company, San Francisco.
  • Della Rosa, R.J., M. Goldman, H.G. Wolf, and L.S. Rosenblatt. 1972. Application of canine metabolic data to man. In Biomedical Implications of Radiostrontium Exposure. AEC Symposium Series No. 25. CONF-710201:52-67. U.S. Atomic Energy Commission.
  • EPA. 1975. Preliminary Assessment of Suspected Carcinogens in Drinking Water. Report to Congress, U.S. Environmental Protection Agency, Washington, D.C.
  • Gesell, T.F., H.M. Pritchard, E.M. Othel, L. Prittle, and W. Di Pietro. 1975. Nuclear Medicine Environmental Discharge Measurements. Final report to EPA, University of Texas, Houston, Office of Radiation Programs, U.S. Environmental Protection Agency.
  • Goldberg, J. 1976. California Department of Health, Radiologic Health Section. Personal communication.
  • Hickey, J.L.S., and S.D. Campbell. 1968. High radium-226 concentrations in public water supplies. Public Health Rep. 83:551-557. [PMC free article: PMC1891883] [PubMed: 4969684]
  • ICRP. 1959. Permissible Dose for Internal Radiation. International Commission on Radiological Protection. ICRP Publication No. 2. Pergamon Press, New York.
  • ICRP. 1963. Report of the RBE Committee to the International Commissions on Radiological Protection and on Radiological Units and Measurements. International Commission on Radiological Protection. Health Phys. 9:357-386.
  • ICRP. 1969. Radiosensitivity and Spatial Distribution of Dose. International Commission on Radiological Protection. ICRP Publication No. 14. Pergamon Press, New York.
  • ICRP. 1973. Alkaline Earth Metabolism in Adult Man. International Commission on Radiological Protection. ICRP Publication No. 20. Pergamon Press, New York.
  • ICRP. 1974. Reference Man: Anatomical, Physiological and Metabolic Characteristics. International Commission on Radiological Protection. ICRP Publication No. 23. Pergamon Press, New York.
  • Jacobs, D.G. 1968. Sources of Tritium and Its Behavior upon Release to the Environment. U.S. Atomic Energy Commission. Available from National Technical Information Service as Report 24635.
  • Jacobs, P.A., M. Melville, S. Ratcliffe, A.J. Keay, and I. Syme. 1974. A cytogenetic survey of 11,680 newborn infants. Ann. Hum. Genet. (Lond.) 37:359-376. [PubMed: 4277977]
  • Kaul, A., and W. Loose. 1975. Experiences with the release of radioactive sewage from a medical area. Kerntechnik 17(2):81-88.
  • Klement, A.W., Jr., C.R. Miller, R.P. Minx, and B. Shleien. 1972. Estimates of ionizing radiation doses in the United States 1960-2000. Report No. ORP/CSD 72-1. U.S. Environmental Protection Agency.
  • Lucas, H.F., Jr. 1971. Correlation of the natural radioactivity of the human body to that of the environment: uptake and retention of Ra226 from food and water. In Radiological Physics Division Semiannual Report, July-Dec. ANL-6297:55-56. Argonne National Laboratory. [PubMed: 15445441]
  • Lucas, H.F., Jr., R.B. Holtzman, and D.C. Dahlin. 1964. Radium-226, radium-228, lead-210, and fluorine in persons with osteogenic sarcoma. Science 144:1573-1575. [PubMed: 14169342]
  • Lucas, H.F., Jr., and D.P. Krause. 1960. Preliminary survey of radium-226 and radium-228 (MsThI) contents of drinking water. Radiology 74:114. [PubMed: 14418666]
  • Lyon, M.F., and R.J.S. Phillips. 1975. Specific locus mutation rates after repeated small radiation doses to mouse oocytes. Mutat. Res. 30:375-382. [PubMed: 1202329]
  • Marshall, J.F., and P.G. Groer. 1975. Theory of the induction of bone cancer by radiation: a preliminary report. I. A three-stage alpha particle model and the data for radium in man. In Radiological and Environmental Research Division Annual Report, Center for Human Radiobiology. ANL-75-60, Part II:1-38. Argonne National Laboratory, Argonne, Illinois.
  • McCann, J., E. Choi, E. Yamasaki, and B.N. Ames. 1975. Detection of carcinogens as mutagens in the Salmonella/microsome test: Assay of 300 chemicals. Proc. Nat. Acad. Sci. USA 72:5135-5139. [PMC free article: PMC388891] [PubMed: 1061098]
  • McCann, J., and B.N. Ames. 1976. Detection of carcinogens as mutagens in the Salmonella/microsome test: Assay of 300 chemicals: Discussion. Proc. Nat. Acad. Sci. USA 73:950-954. [PMC free article: PMC336038] [PubMed: 768988]
  • Miller, C.E., and A.J. Finkel. 1968. Radium retention in man after multiple injections: The power function re-evaluated. Am. J. Roentgenol. 103:871-880. [PubMed: 5677568]
  • Mole, R.H. 1975. Ionizing radiation as a carcinogen: practical questions and academic pursuits. Br. J. Radiol. 48:157-169. [PubMed: 1125543]
  • NAS-NRC. 1973. Radionuclides in Foods. Food Protection Committee, National Academy of Sciences, National Research Council, Washington, D. C.
  • NAS-NRC. 1974. Research Needs for Estimating the Biological Hazards of Low Doses of Ionizing Radiations. Committee on Nuclear Science, National Research Council, National Academy of Sciences, Washington, D.C.
  • NCRP. 1963. Maximum Permissible Body Burdens and Maximum Permissible Concentrations of Radionuclides in Air and Water for Occupational Exposure. National Commission on Radiation Protection, Report No. 22. National Bureau of Standards Handbook 69. U.S. Department of Commerce, Washington, D.C.
  • NCRP. 1967. Dose-effect modifying factors in radiation protection. Report of Subcommittee M-4 (Relative Biological Effectiveness). BNL-50073 (T-471). National Commission on Radiation Protection. Brookhaven National Laboratory. [PubMed: 5302566]
  • NCRP. 1975. Natural Background Radiation in the United States. NCRP Report No. 45. National Council on Radiation Protection and Measurements, Washington, D.C.
  • NCRP. 1976. Influence of dose rate and LET on dose-effect relationships: Implications for estimation of risks of low-level irradiation. Report prepared by NCRP Scientific Committee 40. National Council on Radiation Protection and Measurements, Washington, D.C. To be published.
  • Newcombe, H.B. 1975. Mutation and the amount of human ill health. In O.F. Nygaard, H.I. Adler, and W.K. Sinclair, eds., Radiation Research: Proceedings of the Fifth International Congress of Radiation Research. Academic Press, New York.
  • Norris, W.P., T.W. Speckman, and P.F. Gustafson. 1955. Studies of the metabolism of radium in man. Am. J. Roentgenol. 73:785-802. [PubMed: 14361836]
  • Norris, W.P., S.A. Tyler, and A.M. Brues. 1958. Retention of radioactive bone-seekers. Science 128:456-462. [PubMed: 13568810]
  • NRC. 1975. Reactor safety study; an assessment of accident risks in U.S. commercial nuclear power plants. WASH-1400, NUREG-75/014. U.S. Nuclear Regulatory Commission, Washington, D.C.
  • NRC. 1976. Standards for Protection Against Radiation. Nuclear Regulatory Commission, Title 10 Code of Federal Regulations, Part 20. U.S. Government Printing Office, Washington, D.C.
  • Peterson, N.J., L.D. Samuels, H.F. Lucas, and S.P. Abrahams. 1966. An epidemiologic approach to low-level radium-226 exposure. Public Health Rep. 81:805-814. [PMC free article: PMC1919910] [PubMed: 4957940]
  • Rowland, R.E., H.F. Lucas, Jr., and A.F. Stehney. 1977. High radium levels in the water supplies of Illinois and Iowa. In T.L. Cullen and L.P. Franca, eds., Proc. Int. Symp. Areas of High Natural Radioactivity. Academia Brasileira de Ciencias, Rio de Janeiro.
  • Rowland, R.E., H.F. Lucas, Jr., and A.F. Stehney. 1976. Personal communication.
  • Russell, L.B. 1971. Definition of functional units in a small chromosomal segment of the mouse and its use in interpreting the nature of radiation-induced mutations. Mutat. Res. 11:107-123. [PubMed: 5556347]
  • Searle, A.G. 1974. Mutation induction in mice. Adv. Radiat. Biol. 4:131-207.
  • Sikov, M.R., and D.D. Mahlum, eds. 1969. Radiation Biology of the Fetal and Juvenile Mammal. AEC Symposium Series No. 17. CONF-690501. U.S. Atomic Energy Commission.
  • Sikov, M.R., and D.D. Mahlum. 1972. Plutonium in the developing animal. Health Phys. 22:707-712. [PubMed: 4673434]
  • Sodd, V.J., R.J. Velten, and E.L. Saenger. 1975. Concentrations of the medically useful radionuclides technetium-99m and iodine-131 at a large metropolitan waste water treatment plant. Health Phys. 28:355-359. [PubMed: 1120667]
  • Soldat, J.K., N.M. Robinson, and D.A. Baker. 1975. Models and computer codes for evaluating environmental radiation doses. U.S. Atomic Energy Commission Report BNWL-1754 (Feb. 1975, as revised 10/31/75).
  • Soldat, J.K. 1976. Radiation doses from iodine-129 in the environment. Health Phys. 30:61-70. [PubMed: 1244339]
  • Stehney, A.F. 1960. Radioisotopes in the skeleton: Naturally occurring radioisotopes in man. In R.S. Caldecott and L.A. Snyder, eds., Symposium on Radioisotopes in the Biosphere. Center for Continuation Study, University of Minnesota. 366-181.
  • Stehney, A.F., and H.F. Lucas, Jr. 1956. Studies on the radium content of humans arising from the natural radium of their environment. In Proc. First Int. Conf. on Peaceful Uses of Atomic Energy, United Nations, 11:49-54.
  • Trimble, B.K., and J.H. Doughty. 1974. The amount of hereditary disease in human populations. Ann. Hum. Genet. (Lond.) 38:199-223. [PubMed: 4467783]
  • Uchida, I.A., and C.P.V. Lee. 1974. Radiation-induced nondisjunction in mouse oocytes. Nature 250:601-602. [PubMed: 4845661]
  • UNSCEAR. 1958. Report of the United Nations Scientific Committee on the Effects of Atomic Radiation. General Assembly, Official Records, 13th Session, Suppl. 17 (A/3838). United Nations, New York.
  • UNSCEAR. 1962. Report of the United Nations Scientific Committee on the Effects of Atomic Radiation. General Assembly, Official Records, 17th Session, Suppl. 17 (A/5216). United Nations, New York.
  • UNSCEAR. 1966. Report of the United Nations Scientific Committee on the Effects of Atomic Radiation. General Assembly, Official Records, 21st Session, Suppl. 14 (A/6314). United Nations, New York.
  • UNSCEAR. 1969. Report of the United Nations Scientific Committee on the Effects of Atomic Radiation. General Assembly, Official Records, 24th Session, Suppl. 13 (A/7613). United Nations, New York.
  • UNSCEAR. 1972. Ionizing Radiation: Levels and Effects. Report of the United Nations Scientific Committee on the Effects of Atomic Radiation. General Assembly, Official Records, 27th Session, Suppl. 25 (A/8725). United Nations, New York.
  • Wrenn, M.E. 1976. Internal dose estimates. In T.L. Cullen and E.P. Franca, eds., First Int. Symp. on Areas of High Natural Radioactivity. Academia Brasileira de Ciencias, Rio de Janeiro.



The maximum permissible concentrations of radionuclides in ICRP 1959 and NCRP 1963 are identical.


(0.2 fatal cases × 0.1 rem/yr × 30 yr)/(10⁶ persons per yr per rem)
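Written out with the exponent made explicit, and on the reading that the coefficient 0.2 denotes fatal cases per million persons per rem per year of exposure (an assumption; the table this footnote annotates is not reproduced here), the arithmetic is:

```latex
\frac{0.2 \times (0.1\,\mathrm{rem/yr}) \times (30\,\mathrm{yr})}{10^{6}}
  = \frac{0.6}{10^{6}}
  = 6 \times 10^{-7}\ \text{fatal cases per person}
```

that is, about 0.6 fatal cases per million persons so exposed.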

Copyright © National Academy of Sciences.

