Comparison of the Use of a Physiologically Based Pharmacokinetic Model and a Classical Pharmacokinetic Model for Dioxin Exposure Assessments

In epidemiologic studies, exposure assessments of 2,3,7,8-tetrachlorodibenzo-p-dioxin (TCDD) assume a fixed elimination rate. Recent data suggest a dose-dependent elimination rate for TCDD. A physiologically based pharmacokinetic (PBPK) model, which uses a body-burden–dependent elimination rate, was developed previously in rodents to describe the pharmacokinetics of TCDD and has been extrapolated to human exposure for this study. Optimizations were performed using data from a random selection of veterans from the Ranch Hand cohort and data from a human volunteer who was exposed to TCDD. Assessment of this PBPK model used additional data from the Ranch Hand cohort and a clinical report of two women exposed to TCDD. This PBPK model suggests that previous exposure assessments may have significantly underestimated peak blood concentrations, resulting in potential exposure misclassifications. Application of a PBPK model that incorporates an inducible elimination of TCDD may improve the exposure assessments in epidemiologic studies of TCDD.


Commentary
Exposure to 2,3,7,8-tetrachlorodibenzo-p-dioxin (TCDD) is associated with increased risk for cancer, diabetes, and reproductive toxicities in numerous epidemiologic studies (Schecter and Gasiewicz 2003). Several of these studies base exposure estimates on measurements of blood levels years after accidental or occupational exposures. Peak exposures have been estimated in these studies assuming a mono- or biphasic elimination rate for TCDD, with estimates of half-life ranging from 5 to 12 years (Hooiveld et al. 1998; Michalek et al. 2002; Steenland et al. 2001). Recent clinical studies suggest that the elimination rate of TCDD is dose dependent (Michalek et al. 2002). Several studies in experimental animals also demonstrate dose-dependent elimination (Abraham et al. 1988; Diliberto et al. 2001). In both the animal and the human data, the apparent half-life decreases as the exposure dose increases, indicating an inducible elimination of TCDD.
We developed a physiologically based pharmacokinetic (PBPK) model that describes the pharmacokinetics of TCDD in rodents (Emond et al. 2004). This approach is a mathematical description of the physiologic, biochemical, and physicochemical processes involved in the pharmacokinetics of TCDD. This model, originally validated in rodents, includes a mathematical description of the aryl hydrocarbon receptor-mediated induction of cytochrome P450 1A2 (CYP1A2). In the model, the elimination rate of TCDD is dose dependent and is a function of CYP1A2 induction. Experimental evidence suggests that CYP1A2 is responsible for hepatic sequestration of TCDD (Diliberto et al. 1997) and is also one of the enzymes responsible for its metabolism (Hakk and Diliberto 2002). Thus, at low exposures, there is minimal induction and the elimination of TCDD is very slow. However, at higher exposures, induction approaches a maximum and the elimination rate is much faster. Human physiologic and biochemical parameters were incorporated into the rodent PBPK model for species extrapolation.
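The model's central idea, an elimination rate that rises with CYP1A2 induction, can be illustrated with a simple Hill-type sketch. This is a toy illustration, not the published multi-compartment PBPK model; all parameter values (the basal and fully induced half-lives, the half-maximal concentration, and the Hill coefficient) are hypothetical and chosen only to show the qualitative behavior described in the text.

```python
import math

# Illustrative sketch only: a Hill-type induction term stands in for the
# CYP1A2-mediated, dose-dependent elimination described in the text.
# All parameter values below are hypothetical, not taken from the paper.
K_BASAL = math.log(2) / (10 * 365.0)   # basal rate, ~10-year half-life (1/day)
K_INDUCED_MAX = math.log(2) / 90.0     # fully induced, ~3-month half-life (1/day)
EC50 = 1250.0                          # ppt (lipid adjusted) at half-maximal induction
HILL_N = 1.0                           # Hill coefficient (assumed)

def elimination_rate(blood_ppt: float) -> float:
    """First-order elimination rate (1/day) as a function of blood TCDD."""
    induction = blood_ppt**HILL_N / (blood_ppt**HILL_N + EC50**HILL_N)
    return K_BASAL + (K_INDUCED_MAX - K_BASAL) * induction

def apparent_half_life_years(blood_ppt: float) -> float:
    """Apparent half-life (years) at a given blood concentration."""
    return math.log(2) / elimination_rate(blood_ppt) / 365.0

# Low exposures eliminate slowly; high exposures approach the induced maximum.
for c in (10.0, 1250.0, 100000.0):
    print(f"{c:>9.0f} ppt -> apparent half-life {apparent_half_life_years(c):6.2f} yr")
```

Under these assumed parameters the apparent half-life spans years at low blood concentrations and only months near maximal induction, mirroring the qualitative pattern described for the rodent model.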

Materials and Methods
In the present study, a rodent PBPK model (Emond et al. 2004) was extrapolated to humans. Initial optimization of the human PBPK model used two data sets. The first data set comes from studies of U.S. Air Force veterans of Operation Ranch Hand. Veterans involved in Operation Ranch Hand were responsible for the aerial spraying of Agent Orange and other TCDD-contaminated herbicides during the Vietnam War from 1962 to 1971. We selected a subpopulation of 343 Ranch Hand veterans and determined TCDD concentrations in blood samples collected every 5 years from 1982 to 1998, for a total of four or five samples per veteran (Michalek et al. 2003). Data from 20 randomly selected subjects from this subpopulation were used to optimize the human PBPK model. The second data set used to optimize the model was from Poiger and Schlatter (1986), in which a single volunteer received a single oral dose of 1.14 ng TCDD/kg and was followed for 40 days. These data were used to optimize the absorption and distribution processes occurring during the initial phase of exposure.
Our assessment of the human PBPK model used an additional 10 randomly selected subjects from the Ranch Hand cohort and showed a good correlation (r² = 0.995) between predicted and measured blood concentrations in 1982 (Table 1). We also assessed the human PBPK model with a second data set. In the fall of 1997, two women presented clinical signs of TCDD intoxication (Geusau et al. 2002). After presentation of chloracne, from the spring of 1998 through 2001, 25 and 20 blood samples were collected from patients 1 and 2, respectively (Geusau et al. 2002). These women are among those with the highest TCDD blood concentrations ever measured in adults.

Results
In the veterans of Operation Ranch Hand, TCDD blood concentrations were first determined starting in 1982 (Michalek et al. 1996, 2002). The exposure occurred between 1962 and 1971, with a typical tour of duty lasting only a year. Peak blood concentrations were assumed to occur at the time of discharge from Vietnam. We documented the time of discharge for each veteran in the Ranch Hand cohort and used these individual data in the back calculation for this study. TCDD blood concentrations were determined at four or five time points for each veteran starting in 1982. For each TCDD measurement we used individual data on body weight and height to estimate each veteran's body mass index. We used the body mass index to estimate the size of the adipose tissue compartment at the time of each TCDD measurement, based on the approach of Deurenberg et al. (1991). We estimated peak TCDD blood concentrations for each individual with the PBPK model using that individual's data on blood concentrations, adipose tissue mass, and time of discharge from Vietnam. We also estimated peak blood concentrations using a classical one-compartment pharmacokinetic model with first-order elimination. The classical model assumed a TCDD half-life of 8.7 years and used the TCDD blood concentrations measured in 1982 (Michalek et al. 1996) and the time of discharge as inputs to estimate peak blood concentrations.
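The classical back-calculation described above amounts to projecting a measured concentration backward in time under a fixed first-order rate. A minimal sketch, using the stated 8.7-year half-life and a hypothetical measured value and discharge year:

```python
import math

# Sketch of the classical one-compartment back-calculation.
# The half-life is the value stated in the text; the measured
# concentration and discharge year below are hypothetical examples.
TCDD_HALF_LIFE_YEARS = 8.7                   # assumed fixed half-life
K_ELIM = math.log(2) / TCDD_HALF_LIFE_YEARS  # first-order rate (1/year)

def back_calculate_peak(measured_ppt: float, years_since_discharge: float) -> float:
    """Project a measured blood concentration back to the assumed
    peak at the time of discharge (first-order elimination)."""
    return measured_ppt * math.exp(K_ELIM * years_since_discharge)

# Example: a hypothetical veteran measured at 100 ppt in 1982, discharged in 1970.
peak = back_calculate_peak(100.0, 1982 - 1970)
print(f"estimated peak at discharge: {peak:.0f} ppt")
```

With a fixed half-life, the back-calculated peak is simply the measured value scaled by a single exponential factor, which is why the classical approach largely preserves the rank ordering of the 1982 measurements.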
In 1982, blood concentrations in the 10 randomly chosen subjects shown in Table 1 spanned an approximately 16-fold range, from 12.7 to 209 ppt. Using the classical pharmacokinetic approach, estimated peak blood concentrations spanned an approximately 12-fold range, from 53 to 640 ppt (Table 1). Only minor differences in the ranking and range of TCDD blood concentrations occur when comparing peak concentrations estimated with the classical one-compartment pharmacokinetic model to the blood concentrations measured in 1982. When using the PBPK model to estimate peak blood concentrations, we found a much larger range in exposures and a significant difference in the exposure rankings (Table 1). The PBPK model estimates that peak blood concentrations at the time of discharge range > 250-fold, from 138 to approximately 40,000 ppt. This large difference is due to the inclusion of a dose-dependent elimination rate in the PBPK model. At the lower exposures the half-life of TCDD is > 10 years, whereas at the higher exposures the half-life is only weeks. Model fits to these data are presented in Figure 1.
The model predictions show good correlations with the measured blood concentrations in the two highly exposed women (Figure 2). The model predicts a rapid decrease in blood concentrations during the distribution phase of the first few months after exposure, followed by an elimination that appears first order at these exposures because of maximal induction of TCDD sequestration and metabolism. The elimination rates in these women suggest that the overall half-life of TCDD during the first 2 years after exposure is < 3 months. In the first blood samples collected from these women, the concentrations of TCDD were 144,000 and 26,000 ppt (lipid adjusted) in patients 1 and 2, respectively (Geusau et al. 2002). The PBPK model estimates that initial blood concentrations may have been as high as 507,000 ppt and 87,000 ppt (lipid adjusted) in patients 1 and 2, respectively. Based on this model, maximum CYP1A2 induction occurs at blood concentrations of approximately 1,250 ppt (lipid adjusted). Measured TCDD levels in the women were approximately 20- to 100-fold higher than the blood concentration predicted to produce maximal induction (Geusau et al. 2002).

Discussion
Studies on the elimination of TCDD have examined cohorts many years after the exposures and suggest apparent half-lives of many years. However, these studies did not examine the initial elimination of TCDD immediately after high-level exposures. The high concentration predicted with the model during the first 6 months is an extrapolation of what the concentration should be at the time of initial exposure. Limited data are available to validate the model for the initial exposure period. One data set is available from Poiger and Schlatter (1986). Although these data were used in the optimization of the model, the small sample size and the single dose level do not provide confidence that the data from Poiger and Schlatter (1986) represent the wide range of potential exposures and populations at risk. A number of pharmacokinetic models have incorporated dose-dependent elimination of TCDD. These models use a variety of approaches to describe the dose dependency. Andersen et al. (1993) use a hyperbolic function related to receptor occupancy to describe the dose-dependent elimination. This function is modified by a species-specific "fold" factor that is used to adjust the elimination rate. In rats this factor is 1 and allows for a doubling of the elimination rate; other species would have different adjustment factors. Kohn et al. (2001) also use a Hill equation for the kinetics of the metabolizing enzyme, with cytosolic TCDD concentrations as the substrate concentration. In the model of Kohn et al. (2001), TCDD is also hypothesized to be eliminated through biliary pathways after hepatocyte lysis at high exposures. In the models of Carrier et al. (1995a, 1995b) and Aylward et al. (2005), the elimination of TCDD is described as a function of total hepatic TCDD concentrations. The elimination of TCDD in these models is dose dependent because there is a dose-dependent sequestration of TCDD in the liver.
In the present model we describe the elimination rate as a function of CYP1A2 induction. The different approaches used to describe the dose-dependent induction of TCDD elimination reflect a lack of understanding of the biologic basis of these phenomena. This uncertainty in our understanding of the elimination of TCDD indicates that caution should be used when applying any of these models to human epidemiologic studies. However, dose-dependent elimination of TCDD is an important concept to consider when choosing and applying pharmacokinetic tools in exposure assessments for dioxin.
Recent studies that measured TCDD blood concentrations shortly after high-level exposure indicate that the half-life is dose dependent (Geusau et al. 2002), as do clinical studies of the Ranch Hand cohort (Michalek et al. 2002). The use of first-order elimination of TCDD could significantly underestimate past exposures, resulting in exposure misclassifications in epidemiologic studies. Using a PBPK model that incorporates a dynamic elimination rate may provide a more accurate assessment of past exposures in these studies. A better understanding of the biologic basis of the dose-dependent elimination of TCDD would allow for the development of more biologically realistic PBPK models. Further validation of this model is required before use in a quantitative exposure assessment. However, a pharmacokinetic model that includes an inducible elimination should be applied when assessing past exposures to TCDD.

Figure 2. Time course of TCDD in blood (pg/g lipid adjusted) for two highly exposed women (patients 1 and 2). Symbols represent measured concentrations, and lines represent model predictions. These data were used as part of the model evaluation (Geusau et al. 2002).

Over the last two decades, atmospheric concentrations of lead have decreased significantly around the globe as more and more nations have chosen to remove tetraethyllead from gasoline (Thomas et al. 1999). However, humans may also be exposed to Pb through contaminated food, water, and house dust and through industrial activities such as metal recycling and battery manufacturing. In the United States, for example, although the use of Pb in house paint peaked in 1940 and was banned in 1978, 40% of the nation's housing stock is estimated to still contain Pb-based paint (Wakefield 2002). After Pb enters the body, it can travel along several pathways depending on its source and, by extension, its bioavailability. The fraction of Pb that is absorbed depends mainly on its physical and chemical form, particularly the particle size and the solubility of the specific compound. Other important factors are specific to the exposed subject, such as age, sex, nutritional status and, possibly, genetic background [Agency for Toxic Substances and Disease Registry (ATSDR) 1999; National Research Council 1993]. One of the earliest toxicokinetic studies reported that Pb, once absorbed into the blood compartment, has a mean biological half-life of about 40 days in adult males (Rabinowitz et al. 1976). The half-life in children and in pregnant women was reported to be longer because of bone remodeling (Gulson et al. 1996; Manton et al. 2000). However, another study was unable to confirm this finding (Succop et al. 1998).
Like many other "bone-seeking" elements, Pb is incorporated from blood into calcified tissues such as bone and teeth, where it can remain for years (Rabinowitz 1991; O'Flaherty 1995). According to Rabinowitz (1991), the half-life of Pb in bone (bone-Pb) ranges from 10 to 30 years. However, the use of the term "half-life" to describe the biological clearance of Pb from bone implicitly makes assumptions about the kinetics of the process by which Pb is released. Some researchers prefer the term "residence time" to avoid implying more precision than can be directly determined (Chettle D, personal communication). From calcified tissue stores, Pb is slowly released at a pace set by bone turnover rates, which in turn are a function of the type of bone, whether compact (slow turnover) or trabecular (rapid turnover) (O'Flaherty 1995). Brito et al. (2002) reported that the release rate of Pb from bone varies with age and intensity of exposure. Brito et al. (2005) also examined estimates of exchange rates among compartments. The transfer of Pb from blood to other compartments was much more rapid than the 1-month estimate reported previously, with the overall clearance rate from blood (the sum of the rates from blood to cortical bone, to trabecular bone, and to other tissues) implying a half-life of 10-12 days (Brito et al. 2005). This highlights the difference between the overall clearance viewed from outside, when no allowance can be made for recirculation, and the actual transfer rates.
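The relationship between individual transfer rates and the overall blood half-life described above can be sketched directly; parallel first-order losses simply add. The individual rate values below are hypothetical, chosen only to land in the 10-12-day range reported by Brito et al. (2005).

```python
import math

# Sketch: overall blood clearance as the sum of first-order transfer rates
# out of blood. The individual rates below are hypothetical (not Brito et
# al.'s published values), chosen to illustrate a 10-12-day blood half-life.
k_to_cortical = 0.020    # blood -> cortical bone, 1/day (assumed)
k_to_trabecular = 0.015  # blood -> trabecular bone, 1/day (assumed)
k_to_other = 0.028       # blood -> other tissues, 1/day (assumed)

# Parallel first-order losses add; the half-life follows from the total.
k_total = k_to_cortical + k_to_trabecular + k_to_other
half_life_days = math.log(2) / k_total
print(f"overall blood half-life: {half_life_days:.1f} days")
```

Note that this half-life describes only the transfer of Pb out of blood; the much longer apparent half-life seen in cohort data arises because Pb recirculates back from bone, which a simple sum of outward rates cannot capture.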
Physiologic differences between children and adults account for much of the increased susceptibility of small children to the deleterious effects of Pb: whereas in adults 94% of the Pb body burden is stored in bones and teeth, this proportion is only 70% in children (Barry 1981). In addition, the continuous growth of young children implies constant bone remodeling for skeletal development (O'Flaherty 1995). This contributes to a state in which Pb stored in bone is continually released back into the blood compartment, a process that has been described as "endogenous contamination" (Gulson et al. 1996). This process is particularly significant for pregnant women because pregnancy causes an increase in bone remodeling. The apparently limited success of various Pb hazard control measures in decreasing blood Pb (BPb) levels in exposed children and pregnant women may reflect a constant bone resorption process (Rust et al. 1999). Popovic et al. (2005) recently reported very different long-term Pb kinetics between men and women, with premenopausal women appearing to retain Pb more avidly, or release it more slowly, than postmenopausal women and men.

Biomonitoring Human Exposure to Lead
Biomonitoring for human exposure to Pb reflects an individual's current body burden, which is a function of recent and/or past exposure. Thus, the appropriate selection and measurement of biomarkers of Pb exposure is of critical importance for health care management purposes, public health decision making, and primary prevention activities.
It is well known that Pb affects several enzymatic processes responsible for heme synthesis. Lead directly inhibits the activity of the cytoplasmic enzyme δ-aminolevulinic acid dehydratase (ALAD), resulting in a negative exponential relationship between ALAD activity and BPb. Pb depresses coproporphyrinogen oxidase, resulting in increased coproporphyrin levels. Pb also interferes with the normal functioning of the intramitochondrial enzyme ferrochelatase, which is responsible for the chelation of iron by protoporphyrin. Failure to insert Fe into the protoporphyrin ring results in depressed heme formation and an accumulation of protoporphyrin, which chelates zinc in place of Fe to form zinc protoporphyrin. These effects also alter the concentrations of δ-aminolevulinic acid in urine (ALA-U), blood (ALA-B), and plasma (ALA-P), and of coproporphyrin in urine (CP). The activities of pyrimidine 5'-nucleotidase (P5'N) and nicotinamide adenine dinucleotide synthase (NADS) in blood are also modified after Pb exposure. Levels of these various metabolites in biological fluids have been used in the past to diagnose Pb poisoning when direct Pb measurements in tissues or body fluids were difficult to obtain (Leung et al. 1993) or as information complementary to BPb test results. They are more accurately described as biomarkers for the toxic effects of Pb. In this review we focus on markers that are more accurately defined as biomarkers of Pb exposure, namely, Pb concentrations in biological tissues and fluids. Biomarkers for the toxic effects of Pb have been reviewed in some detail elsewhere (Sakai 2000).
Throughout the last five decades, whole blood has been the primary biological fluid used for assessment of Pb exposure, both for screening and diagnostic purposes and for long-term biomonitoring. Although BPb measurements reflect recent exposure, they may also represent past exposures, as a result of Pb mobilization from bone back into blood (Gulson et al. 1996). In subjects without excessive exposure to Pb, 45-75% of the Pb in blood may have come from bone (Gulson et al. 1995; Smith et al. 1996). In exposed children, however, it has been reported that the bone-Pb contribution to blood can be 90% or more (Gwiazda et al. 2005). Thus, reductions in BPb levels after environmental Pb remediation may be buffered somewhat by contributions from endogenous Pb sources (Lowry et al. 2004; Rust et al. 1999). Remediation efforts typically result in reductions of BPb levels in exposed children of no more than 30%, when evaluated within several months after intervention (U.S. Environmental Protection Agency 1995). Roberts et al. (2001) reported that in children with BPb levels between 25 and 29 µg/dL who were not treated with chelation drugs, the time required for BPb to decline to < 10 µg/dL is about 2 years. Some researchers have suggested that the efficacy of Pb hazard remediation efforts should be evaluated over extended periods to allow adequate time for mobilization and depletion of accumulated skeletal Pb stores and to allow a reduction in the absolute contribution to BPb levels from these stores (Gwiazda et al. 2005; Lowry et al. 2004). Thus, the mean of serial BPb levels should be a more accurate index of long-term Pb exposure.
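The buffering effect described above, where BPb declines far more slowly after remediation than the ~40-day blood half-life would predict, can be illustrated with a minimal two-compartment blood-bone exchange. This is a toy sketch under stated assumptions, not a published lead model; all rate constants and the initial bone:blood ratio are hypothetical.

```python
# Sketch of "endogenous contamination": a two-compartment blood <-> bone
# exchange showing why BPb declines slowly once external intake stops.
# All rate constants and the initial state are hypothetical (assumed).
K_EXCRETE = 0.017        # blood -> urine/feces, 1/day (~40-day scale, assumed)
K_BLOOD_TO_BONE = 0.010  # blood -> bone, 1/day (assumed)
K_BONE_TO_BLOOD = 0.0005 # bone -> blood, 1/day (assumed slow resorption)

def simulate_after_remediation(days: int, dt: float = 0.1) -> float:
    """Euler integration of blood and bone Pb after external intake ceases.
    Returns the fraction of the initial blood level remaining."""
    blood, bone = 1.0, 20.0  # normalized pre-intervention state (assumed)
    for _ in range(int(days / dt)):
        d_blood = -(K_EXCRETE + K_BLOOD_TO_BONE) * blood + K_BONE_TO_BLOOD * bone
        d_bone = K_BLOOD_TO_BONE * blood - K_BONE_TO_BLOOD * bone
        blood += d_blood * dt
        bone += d_bone * dt
    return blood

# One year after all external intake ceases, a substantial fraction of the
# initial BPb remains, sustained by release from the bone store.
print(f"BPb remaining after 1 year: {simulate_after_remediation(365):.0%}")
```

Under these assumptions, blood Pb would fall to near zero within months if it were a lone first-order compartment, but the slow return flux from the large bone pool keeps it elevated for years, consistent with the multi-year declines reported after remediation.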
Data collected as part of the U.S. National Health and Nutrition Examination Survey (NHANES) give the 95th percentile for BPb as 7.0 µg/dL for children 1-5 years of age and as 5.20 µg/dL for adults 20 years of age and older [U.S. Centers for Disease Control and Prevention (CDC) 2003]. Although the BPb levels of U.S. populations have dropped markedly compared to 30 years ago, new concerns have been raised regarding possible adverse health effects in children at BPb levels < 10 µg/dL; perhaps there is no safe threshold but, rather, a continuum of toxic effects (Canfield et al. 2003). In light of these concerns, the CDC Advisory Committee on Childhood Lead Poisoning Prevention formed a working group to review the evidence for adverse health effects at BPb levels < 10 µg/dL in children. Although this working group concluded that several studies in the literature had demonstrated a statistically significant association between BPb levels < 10 µg/dL and some adverse health effects in children, the effects were very small and could conceivably have been influenced by residual confounding factors. The working group's report called for further studies to examine the relationship between lower BPb levels and health outcomes to provide a more complete understanding of this issue (CDC 2004).
Many studies have reported statistically significant associations between BPb levels and various health effect outcomes. Some, however, have been statistically weak, with the magnitude of the effect relatively small. According to Hu et al. (1998), such weaknesses of association may occur because BPb is not a sufficiently sensitive biomarker of exposure or dose at the target organ(s) or because the relationships involved are biologically irrelevant and are found only because of an uncontrolled confounding factor. Furthermore, in view of the kinetics of Pb distribution within the body (cycling among blood, bone, and soft tissues), differentiation of low-level chronic exposure from a short high-level exposure is not possible on the basis of a single BPb measurement. Consequently, there is renewed interest in alternative biomarkers that may aid diagnosis of the extent of Pb exposure. Such alternatives include Pb determinations in plasma/serum, saliva, bone, teeth, feces, and urine. However, none of these matrices has gained convincing acceptance as an alternative to BPb. This is partly due to data based on erroneous or dubious analytical protocols that do not consider the confounding variables.

Plasma/Serum Lead
Plasma-Pb likely represents a more relevant index of exposure to, distribution of, and health risks associated with Pb than does BPb. Indeed, from a physiologic point of view, we can assume that the toxic effects of Pb are primarily associated with plasma-Pb because this fraction is the most rapidly exchangeable one in the blood compartment. In recent years increased attention has been paid to monitoring the concentration of Pb in plasma (or serum). However, research on associations between plasma-Pb and toxicologic outcomes is still sparse, and a significant gap in knowledge remains.
Plasma/serum Pb levels in nonexposed and exposed individuals reported in older publications range widely, from 0.02 to 14.5 µg/L (Versieck and Cornelis 1988). This is probably due to inappropriate collection methods, analytical instrumentation, and methods for Pb determination. The development and use of more sensitive analytical instrumentation, especially inductively coupled plasma mass spectrometry (ICP-MS), has resulted in determinations of Pb in plasma and serum specimens with much lower detection limits and with better accuracy. More recent data, also based on ICP-MS methods, have shown plasma-Pb levels < 1.0 µg/L in nonexposed individuals (Schutz et al. 1996).
The use of advanced analytical techniques is not the only essential requirement to ensure accurate and reliable plasma-Pb data. Contamination of the specimen may occur at the preanalytical phase, namely, during collection, manipulation, or storage. Use of Class-100 biosafety cabinets and clean rooms for specimen preparation and analysis is mandatory. Moreover, all analytical reagents used must be of the highest purity grade. These conditions are far more rigorous than are typically required for clinical BPb measurements performed in a commercial laboratory. After the blood specimen has been collected, the serum/plasma separation must be performed as soon as possible because there is high potential for Pb to move from the dominant BPb subcompartment repository, namely, the erythrocytes, into the plasma via hemolysis, leading to erroneously high results for plasma-Pb. Plasma hemolysis can be estimated by analyzing hemoglobin levels in the specimen because these levels are likely to become abnormally elevated with hemolysis. Materials for specimen collection and storage and the anticoagulant must be of the highest quality because these can be another source of Pb contamination.
Commercial evacuated blood tubes, prepared specifically for BPb measurements, are available with < 5 µg/L Pb (Esernio-Jenssen et al. 1999), but it is nevertheless desirable for the analyzing laboratory to characterize the background Pb contamination in each new lot of tubes to ensure that reported concentrations are not compromised by contamination. The choice of anticoagulant is important because EDTA, as a strong metal-chelating agent, may be difficult to obtain without some background contamination and may give misleadingly high plasma-Pb results because of selective extraction of Pb bound to erythrocytes. The use of heparin is problematic because heparinized blood is more prone to form fibrin clots after several hours. These issues were evaluated by Smith et al. (1998) in some detail; they compared commercial Vacutainer-type tubes with ultracleaned collection tubes containing either EDTA or heparin. As there are no commercial blood collection tubes available that are certified for ultra-low Pb measurements, the analyzing laboratory should prepare precleaned polyethylene tubes containing ultra-low-Pb anticoagulants.
There are many reports of plasma-Pb measurements where validation data are either weak or absent. For example, some simply cite successful participation of the analyzing laboratory in quality assurance (QA) programs for whole blood Pb operated by the CDC and the College of American Pathologists (Hernandez-Avila et al. 1998), whereas others neglect to cite any kind of QA program (Dombovari et al. 2001). Participation in QA schemes designed specifically for whole BPb, while commendable, does not address the much more challenging analysis for plasma-Pb. This problem is compounded by the lack of certified reference materials for either serum or plasma-Pb (Cake et al. 1996). For these reasons, production of plasma or serum reference materials that have Pb concentrations certified close to current human values is urgently needed to support method validation.

Saliva Lead
Saliva has been proposed as a diagnostic specimen for various purposes, as it is easily collected (Silbergeld 1993). However, in the absence of consistent and dependable saliva Pb measurements, it is not generally accepted as a reliable biomarker of Pb exposure. Saliva shows large variations in its ion content throughout the day, coupled with changes in salivary flow rates before, during, and after meals. Variations also arise depending on the manner in which saliva collection is stimulated (or not) and on the nutritional and hormonal status of the individual.
Some data suggest an association between Pb levels in saliva and those in either plasma or blood (Omokhodion and Crockford 1991; Pan 1981). Moreover, it has been argued that Pb in saliva is the direct excretion of the diffusible Pb fraction in plasma (namely, the fraction not bound to proteins) (Omokhodion and Crockford 1991). Despite the associations reported in the literature, the older saliva Pb concentrations are quite high, and the values vary among studies. Recent data suggest much lower saliva Pb levels, in both exposed and unexposed subjects (Koh et al. 2003; Wilhelm et al. 2002). According to Wilhelm et al. (2002), Pb content in the saliva of unexposed children is usually < 0.15 µg/dL. Uncontrolled variation in salivary flow rates, lack of standard or certified reference materials, and absence of reliable reference values for human populations are major factors that limit the utility of saliva Pb measurements. In addition, the very low levels of Pb present in saliva limit the range of suitable analytical techniques, thereby further diminishing the utility and reliability of this biomarker for evaluating Pb exposure.

Hair Lead
Hair is a biological specimen that is easily and noninvasively collected, with minimal cost, and it is easily stored and transported to the laboratory for analysis. These attributes make hair an attractive biomonitoring substrate, at least superficially. Because Pb is excreted in hair, many have suggested it for assessing Pb exposure, particularly in developing countries where specialized laboratory services may be unavailable and resources are limited (Schumacher et al. 1991). However, an extensive debate is ongoing about the limitations of hair as a biomarker of metal exposure generally. Here we limit our discussion to Pb exposure, although many of the issues for Pb, such as preanalytical concerns for contamination control, sampling, and reference ranges, also apply to other metals.
The ability to distinguish between Pb that is endogenous, namely, absorbed into the blood and incorporated into the hair matrix, and Pb that is exogenous, namely, derived from external contamination, is a major problem. During the washing step it is assumed that exogenous Pb is completely removed, whereas endogenous Pb is not. However, no consensus exists about how removal of exogenous Pb is best accomplished. Some publications that describe the use of hair for assessing Pb exposure reference a hair washing method proposed by the International Atomic Energy Agency (IAEA) in 1978. The approach entailed washing hair specimens with acetone/water/acetone (Ryabukin 1978). However, a recent study (Morton et al. 2002) demonstrated that the IAEA method failed to remove exogenous Pb from hair.
Another issue is the significant variation in the Pb concentration profile among various subpopulations according to age, sex, hair color, and smoking status (Wolfsperger et al. 1994). Moreover, geographic, racial/ethnic, and ecologic factors can also affect Pb distribution in hair within a given population. Thus, it is difficult to establish reference ranges because confounding factors impose restrictions on the interpretation of individual results. No consensus exists on the length of the hair specimen to be collected, the amount, or the position on the scalp. Variations in Pb content between single hairs from the same individual can be as high as ± 100%, particularly in the distal region (Renshaw et al. 1976).
Recently, the ATSDR established an expert advisory panel to review current knowledge about the use of hair analysis for trace metals in biomonitoring (ATSDR 2001). The general consensus was that many scientific issues need to be resolved before hair analysis can become a useful tool in understanding environmental exposures. Although hair analysis may be able to answer some specific questions about environmental exposure to a few substances, it often raises more questions than it answers. The scientific community currently does not know the range of Pb contamination levels typically found in human hair. Without reliable data on baseline or background hair contamination levels in the general population, health agencies cannot determine whether results from a given site are unusually high or low (ATSDR 2001).
In addition to the preanalytical issues and the absence of reliable reference ranges, the quality of analytical techniques used for determining Pb, as well as other trace metals, in hair has been questioned. In a recent interlaboratory study of commercial laboratories that specifically market the test for trace metals in hair, interlaboratory agreement was judged very poor, with wide discrepancies observed for Pb as well as for other elements (Seidel et al. 2001).

Urinary and Fecal Lead
The determination of Pb in urine (urine-Pb) is considered to reflect Pb that has diffused from plasma and is excreted through the kidneys. Collection of urine for Pb measurements is noninvasive and is favored for long-term biomonitoring, especially for occupational exposures. However, a spot urine specimen is particularly unreliable because it is subject to large biological variations that necessitate a correction for creatinine excretion. Urine-Pb originates from plasma-Pb that is filtered at the glomerular level; thus, according to some authors (Tsaih et al. 1999), urine-Pb levels that are adjusted for glomerular filtration rate may serve as a proxy for plasma-Pb. Hirata et al. (1995) found a better correlation between the concentrations of plasma-Pb and urine-Pb than between BPb and urine-Pb for lead workers with low levels of Pb exposure. Manton et al. (2000), using high-precision Pb isotope ratio measurements, found the concentration of urine-Pb to be about 10% of that in whole blood; however, the correlations were not particularly robust. In contrast, correlations with isotopic ratios were excellent. According to Tsaih et al. (1999), cortical bone contributes a mean of 0.43 µg Pb per day excreted in urine, whereas trabecular bone contributes as much as 1.6 µg Pb per day. Cavalleri et al. (1983) observed different Pb kinetics between exposed and nonexposed subjects after the administration of CaNa2EDTA. In unexposed subjects, BPb levels remained stable even 5 hr after CaNa2EDTA administration, whereas plasma-Pb levels decreased by as much as one-half and urine-Pb increased by a factor of 10. In the Pb-exposed group, the same chelation regimen caused plasma-Pb levels to increase by a factor of 2 while BPb levels decreased by a factor of 2, with higher urine-Pb excretion.
Thus, it seems that in nonexposed subjects a major contribution to urine-Pb derives from the Pb fraction in soft tissues that is in equilibrium with the plasma compartment. One may speculate that the larger the amount of erythrocyte-bound Pb, the weaker the binding forces, so that a significant fraction of Pb is released from red blood cell membranes into plasma and is then filtered by the kidneys. Because the amount of Pb released is very large, the kidneys are unable to remove it rapidly from the bloodstream; this may account for the transient elevation of plasma-Pb levels.
The availability of reliable urine quality-control materials and reference materials certified for Pb content, along with participation in external quality assessment schemes for urine-Pb, are important factors in assuring the accuracy of analytical results. However, the tendency for urate salts to precipitate out of urine during transit and storage can be a complicating factor in the analysis. Moreover, because only a few studies have examined associations between urine-Pb and other biomarkers, the use of urine-Pb measurements is essentially limited to long-term occupational monitoring programs, monitoring patients during chelation therapy, and, until very recently, the clinical evaluation of potential candidates for chelation therapy.
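The creatinine correction mentioned above for spot urine specimens is a simple normalization. The following sketch (the function name and sample values are illustrative, not drawn from the cited studies) shows the standard computation of Pb per gram of creatinine:

```python
def creatinine_adjusted_pb(urine_pb_ug_per_l, creatinine_g_per_l):
    """Normalize a spot urine-Pb value (µg/L) to creatinine output,
    returning µg Pb per g creatinine, to damp the diuresis-driven
    variation between spot specimens."""
    if creatinine_g_per_l <= 0:
        raise ValueError("creatinine concentration must be positive")
    return urine_pb_ug_per_l / creatinine_g_per_l

# A spot specimen of 15 µg Pb/L with 1.2 g/L creatinine normalizes to
# 12.5 µg Pb per g creatinine.
adjusted = creatinine_adjusted_pb(15.0, 1.2)
```

Because creatinine output is relatively constant within an individual, the ratio is less sensitive to urine dilution than the raw concentration, which is why it is the conventional correction for spot samples.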
Measurement of fecal-Pb content over several days is one possible approach to estimating the overall magnitude of childhood Pb intake. According to Gwiazda et al. (2005), fecal-Pb content should give an integrated measure of Pb exposure/intake from all sources, dietary and environmental, inside and outside the home (distinguishable by isotopic composition). However, a limitation to the use of fecal-Pb is that the collection of complete fecal samples over multiple days may not be feasible. As stated by Gwiazda et al. (2005), fecal-Pb reflects unabsorbed, ingested Pb plus Pb that is eliminated via endogenous fecal (biliary) routes; interindividual variation in these physiologic processes may appear as variation wrongly attributed to environmental Pb exposure.

Nail Lead
Like hair, nails offer several apparent advantages as a biomarker of Pb exposure: specimen collection is noninvasive and simple, and nail specimens are very stable after collection, requiring no special storage conditions. Nail-Pb is considered to reflect long-term exposure because this compartment remains isolated from other metabolic activities in the body (Takagi et al. 1988). Because toenails are less affected by exogenous environmental contamination than fingernails, they have been preferred for Pb exposure studies. Toenails also have a slower growth rate than fingernails (up to 50% slower, especially in winter) and thus may provide a longer integration of Pb exposure.
Lead concentration in nails depends on the subject's age (Nowak and Chmielnicka 2000), but it seems not to depend on the subject's sex (Rodushkin and Axelsson 2000). Gulson (1996a) reported high variability in Pb levels measured in the same fingernails and toenails of various subjects, even after rigorous washing procedures; such lack of reproducibility suggests that nail specimens offer only limited scope in assessing exposure to Pb.

Bone Lead
Because bone accounts for > 94% of the adult body burden of Pb (70% in children) (O'Flaherty 1995), many researchers accept that a cumulative measure of Pb dose, that is, exposure integrated over many years rather than a single BPb measurement, may be the most important determinant of some forms of toxicity (Landrigan and Todd 1994; Hu et al. 1998). In support of this hypothesis, recent studies have shown that bone-Pb, but not BPb, is significantly related to declines in hematocrit and hemoglobin among moderately Pb-exposed construction workers, as well as to decreased birth weight and increased odds of clinically relevant hypertension (Gonzalez-Cossio et al. 1997; Hu et al. 1996). According to Hu et al. (1998), other adverse health outcomes likely to be associated with bone-Pb levels include impairment of cognitive performance and growth in children and kidney failure, gout, elevated blood pressure, reproductive toxicity, and adverse cardiovascular events in adults.
As pointed out by Hu et al. (1998), two major paradigms relate to skeletal Pb: bone-Pb as an indicator of cumulative Pb exposure (bone-Pb as a repository), and bone-Pb as a source of body burden that can be mobilized into the circulation (bone-Pb as a source). Hernandez-Avila et al. (1998) reported a strong association between bone-Pb levels and serum-Pb levels of adults exposed to Pb. That study indicated the potential role of the skeleton as an important source of endogenous, labile Pb that may not be adequately discerned through measurement of BPb levels. The same authors argued that skeletal sources of Pb accumulated from past exposures should be considered along with current sources when exposure pathways are being evaluated. In an attempt to characterize the source of Pb exposure, Gulson et al. (1995) measured 206Pb/204Pb isotopic ratios in immigrant Australian subjects, Australian-born subjects, and environmental samples. The immigrant population exhibited Pb isotopic ratios from 17.7 to 18.5, distinct from the ratio in Australian-born subjects (~17.0). This difference allowed a distinction to be drawn between current exposure acquired from Australian sources and older, bone-stored Pb that was not acquired from Australian sources.
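The isotope-ratio reasoning used by Gulson et al. can be illustrated with a simple two-endmember mixing calculation. The sketch below is not the authors' actual method: it assumes the measured 206Pb/204Pb ratio mixes linearly between two sources (approximately valid only when the 204Pb contents of the two sources are similar), and the endmember values are illustrative round numbers.

```python
def source_fraction(measured_ratio, ratio_a, ratio_b):
    """Fraction of Pb attributable to source A, assuming the measured
    206Pb/204Pb ratio is a linear mix of the two endmember ratios
    (a simplification of full isotope mass balance)."""
    if ratio_a == ratio_b:
        raise ValueError("endmember ratios must differ")
    return (measured_ratio - ratio_b) / (ratio_a - ratio_b)

# Illustrative endmembers: skeletal Pb from the country of origin ~18.0,
# Australian environmental Pb ~17.0. A blood ratio of 17.4 would then
# imply that roughly 40% of circulating Pb comes from the old bone store.
frac_bone = source_fraction(17.4, 18.0, 17.0)
```

The value of the approach lies in the fact that the endmembers differ enough (17.7–18.5 vs. ~17.0 in the cited study) that even imprecise blood measurements can apportion current versus historical sources.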
Differing bone types have differing bone-Pb mobilization characteristics. For example, the tibia consists principally of cortical bone, whereas the patella is largely trabecular bone. Pb in trabecular bone is more biologically active than Pb in cortical bone, and trabecular bone has a shorter turnover time. The endogenous contribution of Pb from bone stores is an important health consideration. The O'Flaherty kinetic model can be used to indicate the quantity of Pb delivered from bone as a function of bone turnover and Pb exchange (O'Flaherty 1995). A recent revision of this model (Fleming et al. 1999) suggests that a smelter worker with a tibia Pb concentration of 100 µg/g can expect a continuous endogenous contribution to BPb of 16 µg/dL. A pregnant woman with a tibia Pb concentration of 50 µg/g can expect a contribution of 8 µg/dL to BPb; this figure does not account for the increased rate of bone turnover associated with pregnancy. Individuals not exposed to Pb in the workplace typically display tibia Pb levels up to about 20 µg/g (Roy et al. 1997).
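The two figures quoted from the Fleming et al. revision are consistent with a linear scaling of about 0.16 µg/dL of endogenous BPb per µg/g of tibia Pb. As a back-of-envelope check only (the full O'Flaherty model tracks bone turnover and Pb exchange explicitly and is far more elaborate), that proportionality can be written as:

```python
# Slope implied by the quoted figures: 16 µg/dL at 100 µg/g tibia Pb
# (and 8 µg/dL at 50 µg/g). Linearity is assumed for illustration only.
UG_DL_PER_UG_G = 16.0 / 100.0

def endogenous_bpb(tibia_pb_ug_per_g):
    """Approximate continuous endogenous contribution to BPb (µg/dL)
    for a given tibia Pb concentration (µg/g), assuming linearity."""
    return UG_DL_PER_UG_G * tibia_pb_ug_per_g

# endogenous_bpb(50.0) reproduces the 8 µg/dL pregnant-woman example;
# a typical nonoccupational tibia level of 20 µg/g implies ~3.2 µg/dL.
```

Note that this simple slope omits exactly the factor the text flags: an elevated bone turnover rate (as in pregnancy) would raise the contribution above the linear estimate.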
Over the last decade, bone-Pb measurements based on noninvasive in vivo X-ray fluorescence (XRF) methods have become increasingly accepted. The technique uses fluorescing photons to remove an inner-shell electron from a Pb atom, leaving it in an excited state. The result is emission of X-ray photons that are characteristic of Pb. Measurements are performed by using one of four kinds of XRF: two involve fluorescence of the K-shell electrons of Pb (K-XRF), and the other two involve fluorescence of the L-shell electrons (L-XRF) (Todd et al. 2002a). Several groups, mainly in North America, have reported the development of in vivo measurement systems; the majority have adopted the K-XRF approach, based on excitation with a 109Cd source and backscatter geometry, because of its advantages: it provides a robust measurement with a better detection limit and a lower effective (radiation) dose than L-XRF (Todd and Chettle 1994). The radiation dose is not a limiting factor in using this technique with humans, as demonstrated by Todd and Chettle (1994).
Calibration is usually performed with Pb-doped plaster-of-Paris phantoms (Todd et al. 2002a). Method accuracy has been evaluated through comparison of XRF data from cadaver specimens with electrothermal atomic absorption spectrophotometry data (Todd et al. 2002b). However, XRF sensitivity and precision for Pb still constitute an analytical challenge. In addition to sample-to-sample reproducibility, XRF can also display a certain amount of imprecision associated with each calculated bone-Pb value (Ambrose et al. 2000). This uncertainty, estimated using a goodness-of-fit statistic from the curve fitting of the background, ranged from 3 to 30 µg/g Pb; clearly, this represents a problem for low-level Pb measurements, namely in young children and nonexposed populations. Another problem inherent to the XRF technique is photon scattering due to overlying tissue or subject movement during the measurement period (Ambrose et al. 2000). Normalization of the Pb signal to the calcium backscatter signal appears to solve this problem. Precision depends on the amount of tissue overlying the bone: the greater the thickness of tissue, the poorer the precision. Todd and Chettle (1994), comparing K-shell and L-shell precision at 3 and 6 mm of overlying soft tissue, reported that K-XRF precision worsens by only 5% at the greater thickness, whereas L-XRF precision worsens by 49%. The precision of the L-XRF method is much more severely affected by the strong attenuation of the Pb L-shell X rays. Todd et al. (2001) reported contiguous inhomogeneities in the distribution of Pb toward the proximal and distal ends of tibia bones. They speculated that the region of lower Pb concentration has lower blood flow in the Haversian canals and, consequently, less Pb available for uptake into the bone matrix during bone remodeling (Todd et al. 2001). Trabecular bone has a larger surface area and a greater volume of blood delivered per unit of time compared with cortical bone.
In addition, there are more active osteons per gram in trabecular bone to accomplish resorption and deposition. Hernandez-Avila et al. (1998) reported that, in individuals with no history of occupational Pb exposure, bone-Pb (in particular trabecular Pb) exerts an additional, independent influence on plasma-Pb after controlling for BPb.
Thus, an appropriate selection of the precise bone type to be analyzed for Pb content must be made before a study commences. Moreover, further research on the relationship between the various bone-Pb subcompartments and other Pb measures is warranted.

Tooth Lead
Like bone, teeth accumulate Pb over the long term. However, there is some evidence that teeth are superior to bone as an indicator of cumulative Pb exposure because the losses from teeth are much slower (Maneakrichten et al. 1991). Moreover, deciduous teeth are relatively easy to collect and analyze; exfoliation generally occurs after the age of 6 years. Teeth are also very stable for preservation purposes.
Chronic Pb exposure from mouthing activity in early childhood may be camouflaged by "dilution" effects during periods of rapid skeletal growth in young children and adolescents and may not be detected by a single BPb measurement. However, most published data on tooth-Pb have been based on whole-tooth analysis, with no attempt to distinguish among tooth types (different teeth are formed at different ages) or to differentiate the Pb concentration in enamel from that in dentin (enamel contains much more Pb, by mass, than does dentin). The influence of age and/or sex has also not been considered (Brown et al. 2002). Furthermore, the use of deciduous teeth is possible only for children over 6 years of age.
Recently, Gomes et al. (2004) proposed a way around the limitations mentioned above by using in vivo enamel biopsies in children. In this approach, superficial minerals are leached from teeth and Pb is determined by electrothermal atomic absorption spectrophotometry. One important drawback is that, because an accumulation gradient for Pb has not yet been established for enamel, only biopsies of the same depth can be compared with one another. Another issue related to tooth-Pb measurements is whether Pb that accumulates in the first few micrometers of the enamel surface was incorporated posteruptively (e.g., from the mouth, saliva, or food) rather than during the period when the tooth was mineralizing inside the bone.
An interesting and potentially valuable aspect of tooth-Pb measurements is their capacity to elucidate the history of Pb exposure. Teeth are composed of several distinct tissues formed over a period of several years, and different parts of a tooth can bind Pb at different stages of the individual's life. Therefore, a section of tooth can yield historical information on the individual's exposure to Pb. For example, the enamel of all primary teeth, and parts of the enamel of some permanent teeth, are formed in utero and thus may provide information on prenatal exposure to Pb. This information could be valuable in improving our understanding of dose−effect relationships for embryonic anomalies, particularly neurotoxic dysfunction. The dentine of primary teeth provides evidence of exposure during the early childhood years, when hand-to-mouth activity is usually an important contributor to Pb body burden (Gulson 1996b). In addition, enamel Pb levels may be useful for indirectly estimating the Pb composition of the mother's bone (Gulson 1996b).
More recently there has been some interest in using laser ablation ICP-MS to examine Pb distribution in tooth profiles. This approach offers spatially resolved measurements of trace element distribution that can be mapped onto a temporal axis via reference to the neonatal line, enabling researchers to use not only the Pb concentration of the entire tooth but also the specific amount of Pb in each tooth layer, in effect a timeline of Pb exposure. Nevertheless, some serious challenges remain before this technique can be fully exploited (Uryu et al. 2003).

Conclusions
Thus far an impressive body of data has been established on the use of alternative biomarkers for monitoring exposure to Pb. However, it is still unclear to what extent such data are superior to the information obtained from BPb measurements. Clearly, many of the limitations identified in the foregoing sections must be resolved before alternative biomarkers can be accepted as superior indicators of Pb exposure. At this time BPb measurement is still the most reliable indicator of recent Pb exposure, although serial BPb measurements may offer a better assessment of temporal fluctuations in Pb absorption. If reliable and reproducible plasma-Pb measurements can be obtained, these may correlate better with toxic effects. However, we do not yet know what a single plasma-Pb value means in terms of health effects; in the absence of a normal reference range, its clinical utility for individual assessment is problematic. Further research on this issue is needed, especially for children and adults with low to moderate Pb exposure. Efforts are also warranted in the further development and continued use of well-established analytical protocols, and in the estimation of random and systematic errors. Finally, efforts are needed to establish regional reference ranges for each biomarker in nonexposed populations, to acquire data on long-term and short-term exposures, and to evaluate the influence of nutritional status and ethnicity (genetic polymorphisms).
A critical question with respect to an individual's bone-Pb measurement is what it means in terms of health risk or, perhaps, clinical management. To answer this question, we may need to distinguish bone-Pb measurements in children and pregnant women, that is, groups with high bone turnover rates, from those in other (nonpregnant) adults. In children, bone-Pb may have little effect on BPb levels, but it may help us estimate the extent to which BPb derives from endogenous sources and the possible contribution to the labile plasma-Pb pool.

Biomarkers for monitoring lead exposure
Environmental Health Perspectives • VOLUME 113 | NUMBER 12 | December 2005