Anemia is a common complication of chronic kidney disease (CKD); it develops early in the course of CKD and becomes increasingly severe as kidney function deteriorates.1 Iron deficiency is a continuous process that evolves in three stages. The first is the depletion of storage iron (stage I), in which total body iron is decreased but hemoglobin (Hb) synthesis and red cell indices remain unaffected. Hb synthesis and red cell indices are both affected when the supply of iron to the bone marrow becomes inadequate (iron-deficient erythropoiesis, or stage II). In stage III the iron supply is insufficient to maintain a normal Hb concentration, and iron deficiency anemia develops.
The management of anemia in CKD patients must strike an appropriate balance between stimulating the generation of erythroblasts (erythropoiesis) and maintaining sufficient iron for optimal Hb production.2 Assessing iron stores and the availability of iron for erythropoiesis is therefore essential, as adequate iron status is integral to anemia management in CKD patients. The major cause of iron deficiency is blood loss, particularly among dialysis patients, who are in a state of continuous iron loss from gastrointestinal bleeding (very common), blood drawing, and, most importantly for hemodialysis patients, the dialysis treatment itself. A CKD patient who receives erythropoiesis-stimulating agents (ESAs) for anemia often develops iron deficiency, because the iron required to achieve a response to ESA treatment usually cannot be supplied by mobilization of the patient’s iron stores alone. Supplemental iron therapy, given orally or intravenously, is therefore often needed in dialysis patients who receive recombinant human erythropoietin (EPO) or darbepoetin alfa.1 Thus, iron management (iron status assessment and iron treatment) is an essential part of treating the anemia associated with CKD, particularly given concerns about the adverse effects associated with both high doses of ESAs and supplemental (intravenous or oral) iron.
Classical iron status tests, of which ferritin and transferrin saturation (TSAT) are the most widely used, reflect either the level of iron in tissue stores or the adequacy of iron for erythropoiesis. These classical laboratory biomarkers are not without drawbacks in CKD patients: CKD is a pro-inflammatory state, and the biological variability of serum iron, TSAT, and ferritin is known to be large in the context of underlying inflammation.3–5 Furthermore, a meta-analysis of 55 studies (published before 1990) found that the ferritin radioimmunoassay was the most powerful test for diagnosing iron deficiency in adults (mean area under the receiver operating characteristic curve 0.95; 95% CI 0.94, 0.96), compared with mean cell volume determination, TSAT, and red cell volume distribution; however, test performance varied between patients with and without inflammatory conditions (e.g., CKD) or liver disease.6 Accurate assessment of iron status depends on the validity and reliability of laboratory test results, and these differences in test performance pose a dilemma regarding the most appropriate test to guide treatment decisions.
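For orientation, TSAT itself is a simple derived quantity: serum iron expressed as a percentage of total iron-binding capacity (TIBC). A minimal sketch of the calculation (the function name and example values are illustrative, not drawn from the CER):

```python
def tsat_percent(serum_iron_ug_dl: float, tibc_ug_dl: float) -> float:
    """Transferrin saturation: serum iron as a percentage of TIBC.

    Both inputs are in the same units (conventionally ug/dL), so the
    ratio is dimensionless and only the percentage scaling remains.
    """
    return 100.0 * serum_iron_ug_dl / tibc_ug_dl

# A serum iron of 50 ug/dL against a TIBC of 400 ug/dL yields a TSAT
# of 12.5%, well below the 20% level often taken to suggest deficiency.
```

Because both serum iron and TIBC are individually subject to the large biological variability noted above, their ratio inherits that variability, which is part of the motivation for seeking newer markers.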
In an attempt to find a more accurate and reliable test, several novel biomarkers of iron status have been proposed:
- The Hb content of reticulocytes (CHr)/Reticulocyte hemoglobin equivalent (RetHe). CHr and RetHe measurements are functionally equivalent,7 but the two measurements are performed by different analyzers. CHr/RetHe, which examines both the precursors and mature red cells, provides an opportunity to detect and monitor acute and chronic changes in cellular hemoglobin status. CHr/RetHe measurement is a function of the amount of iron in the bone marrow that is available for incorporation into reticulocytes (immature red blood cells);8 decreased levels of CHr/RetHe indicate iron deficiency.
- The percentage of hypochromic erythrocytes (%HYPO). %HYPO is a measure of Hb content in red blood cells (RBCs) that factors in both the absolute Hb content and the size of each RBC.9 It can be used to detect functional iron deficiency: if iron supply is low in the face of ESA therapy, less Hb is incorporated into each RBC and, as a result, %HYPO levels are high.
- Erythrocyte zinc protoporphyrin (ZPP). ZPP is a measure of iron incorporation into heme, the iron-containing component of Hb. When iron levels are low, zinc is substituted for iron during heme formation; as a result, ZPP levels increase, indicating iron deficiency.10
- Soluble transferrin receptor (sTfR). sTfR reflects the availability of iron to the bone marrow. When the bone marrow is stimulated by ESAs, expression of transferrin receptors on the surface of erythroblasts (the precursors of RBCs) increases. If iron supply is low, levels of iron-bearing transferrin are low, creating a mismatch between the number of transferrin receptors and the transferrin–iron complexes available to bind them. Some of the unbound transferrin receptors are then shed into the circulation, where they can be detected; an increased concentration of sTfR in the blood is an indicator of iron deficiency.
- Hepcidin. Hepcidin is a peptide produced by the liver that regulates both iron absorption in the intestine and the release of iron from macrophages. Increased levels of hepcidin have been associated with a decrease in available iron.11
- Superconducting quantum interference devices (SQUIDs). SQUID measurement is a noninvasive method for detecting and quantifying liver iron content.12 It exploits the paramagnetic properties of iron: the magnetic susceptibility of liver tissue rises as its iron concentration increases.
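Of the markers above, %HYPO lends itself to a simple arithmetic illustration: it is the fraction of red cells whose cellular Hb concentration falls below a hypochromia cutoff (28 g/dL is a commonly used value; the function name and sample data below are hypothetical):

```python
def percent_hypochromic(cell_hb_conc_g_dl, cutoff_g_dl=28.0):
    """Percentage of red cells whose cellular Hb concentration (g/dL)
    falls below the hypochromia cutoff."""
    hypo_count = sum(1 for c in cell_hb_conc_g_dl if c < cutoff_g_dl)
    return 100.0 * hypo_count / len(cell_hb_conc_g_dl)

# If 2 of 4 measured cells fall below 28 g/dL, %HYPO is 50%.
```

In practice the per-cell measurement is performed by an automated hematology analyzer over many thousands of cells; the sketch only shows the proportion being computed.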
Scope of Comparative Effectiveness Review
Although a number of international guidelines have examined the use of both classical and new serum iron biomarkers, their recommendations differ.1,2,13 In view of the considerable clinical uncertainty, the high biological variability associated with laboratory biomarkers, and the need for frequent assessment to guide treatment with ESAs, a systematic review of the relevant literature was deemed to be a priority. In order to address this knowledge gap, the Tufts Evidence-based Practice Center (EPC) conducted a Comparative Effectiveness Review (CER) to systematically evaluate the impact on patient-centered outcomes of using newer laboratory biomarkers1 as a replacement for or as an add-on to older laboratory biomarkers of iron status2 for assessing iron status and the management of iron deficiency in adult and pediatric CKD patients (nondialysis and dialysis).14 Although studies that assess the overall impact of these tests on the clinical management process would provide the most direct evidence for this CER, they are often challenging or unfeasible to conduct due to the high patient and resource requirements. Because it was expected that little such evidence would be found, the question of overall impact (Key Question 1; see below for full descriptions of all Key Questions) was broken out into three component Key Questions (Key Questions 2 to 4). Combining evidence that addresses these three component Key Questions could thus inform the conclusions for the review’s primary, overarching question (Key Question 1).
Key Question 1 (Overarching Question)
What is the impact on patient-centered outcomes of using the newer laboratory biomarkers as a replacement for or an add-on to the older laboratory biomarkers of iron status for assessing iron status and management of iron deficiency in stages 3–5 CKD patients (nondialysis and dialysis), and in patients with a kidney transplant?
Key Question 2
What is the test performance of newer markers of iron status as a replacement for or an add-on to the older markers in stages 3–5 CKD patients (nondialysis and dialysis), and in patients with a kidney transplant?
- What reference standards are used for the diagnosis of iron status in studies evaluating test performance?
- What are the adverse effects or harms associated with testing using newer and/or older markers of iron status?
Key Question 3
In stages 3–5 nondialysis and dialysis CKD patients with iron deficiency, what is the impact of managing iron status based on newer laboratory biomarkers either alone or in addition to older laboratory biomarkers on intermediate outcomes (e.g., improvement in Hb levels, dose of erythropoiesis-stimulating agents, time in target Hb range), compared with managing iron status based on older laboratory biomarkers alone?
- What are the adverse effects or harms associated with the treatments guided by tests of iron status?
Key Question 4
What factors affect the test performance and clinical utility of newer markers of iron status, either alone or in addition to older laboratory biomarkers, in stages 3–5 (nondialysis and dialysis) CKD patients with iron deficiency? For example:
- Biological variation in diagnostic indices
- Use of different diagnostic reference standards
- Type of dialysis (i.e., peritoneal or hemodialysis)
- Patient subgroups (i.e., age, sex, comorbid conditions, erythropoiesis-stimulating agent resistance, protein energy malnutrition secondary to an inflammatory state, hemoglobinopathies [e.g., thalassemia and sickle cell anemia])
- Route of iron administration (i.e., oral or intravenous)
- Treatment regimen (i.e., repletion or continuous treatment)
- Interactions between treatments (i.e., patients treated with versus without ESA, patients treated with vs. without iron-replacement therapy)
- Other factors (based on additional information in the reviewed papers)
Each question had specific criteria for study inclusion based on the population, intervention, comparator, outcomes, and study design. Population criteria included studies in both adults and children with stage 3, 4, or 5 CKD; patients with CKD undergoing dialysis (hemodialysis [HD] or peritoneal dialysis [PD]); and patients with a kidney transplant. For interventions, eligible studies were those involving newer laboratory biomarkers to diagnose and manage iron deficiency either as a replacement for classical markers or in addition to classical biomarkers. For comparators, eligible studies were those involving older laboratory biomarkers to diagnose and manage iron deficiency. We were interested in both patient-centered outcomes (such as mortality and morbidity [e.g., cardiac or liver toxicity and infection], quality of life, and adverse events or harms) and intermediate outcomes (such as improvement in Hb levels, dose of erythropoiesis-stimulating agents, and time in target Hb range).
Comparative Effectiveness Review Findings
A total of 30 articles met the study eligibility criteria based on the populations, tests, and outcomes of interest, including one Polish- and one Japanese-language publication. Twenty-seven articles reported data on the test performance of newer markers of iron status compared with classical markers (Key Question 2);7,15–40 two reported intermediate outcomes comparing iron management guided by newer laboratory markers with iron management guided by classical markers (Key Question 3);39,41 and three studies (reported in two articles) provided data on factors affecting test performance comparing newer with classical laboratory markers of iron status (Key Question 4).42,43 Most studies enrolled only adult CKD patients undergoing HD. The main findings of this CER are presented in Table 1.
Combining the evidence addressing Key Questions 2, 3, and 4, we concluded that there are insufficient data to determine whether most newer laboratory biomarkers of iron status are better than classical markers for predicting iron deficiency as defined by a response to an iron challenge test. However, CHr and %HYPO may have better ability than classical markers (TSAT <20% or ferritin <100 ng/mL) to predict a response to intravenous (IV) iron treatment in HD CKD patients. In addition, two randomized controlled trials (RCTs) showed a reduction in the number of iron status tests, and in the resulting IV iron treatments administered, among patients whose iron management was guided by CHr compared with those managed by TSAT or ferritin. These results suggest that CHr may reduce potential harms from IV iron treatment by lowering the frequency of iron testing, although the evidence on the potential harms associated with testing or test-associated treatment is insufficient.
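The two decision rules being compared in such trials can be summarized schematically. The classical thresholds (TSAT <20% or ferritin <100 ng/mL) are those cited above; the CHr cutoff shown is purely illustrative, as trials have used analyzer- and protocol-specific values:

```python
def iv_iron_indicated_chr(chr_pg: float, cutoff_pg: float = 29.0) -> bool:
    """CHr-guided rule: CHr below the cutoff (illustrative value, in pg)
    suggests iron-restricted erythropoiesis."""
    return chr_pg < cutoff_pg

def iv_iron_indicated_classical(tsat_pct: float, ferritin_ng_ml: float) -> bool:
    """Classical rule cited in the CER: TSAT < 20% or ferritin < 100 ng/mL."""
    return tsat_pct < 20.0 or ferritin_ng_ml < 100.0
```

The structural difference is visible in the sketch: the classical rule triggers treatment when either of two variable markers crosses its threshold, whereas the CHr rule depends on a single cell-level measurement, which is one proposed explanation for the lower testing and treatment frequency observed in the CHr-guided arms.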
Nevertheless, the strength of evidence supporting these conclusions is low and there remains considerable clinical uncertainty regarding the use of newer markers in the assessment of iron status and management of iron deficiency in stages 3–5 CKD patients (both nondialysis and dialysis). In addition, factors that may affect the test performance and clinical utility of newer laboratory markers of iron status remain largely unexamined.
Identification of Evidence Gaps
The current Future Research Needs (FRN) project was undertaken to identify the most important gaps in the literature with regard to laboratory tests for assessing iron status and managing iron deficiency in CKD patients, as identified during the synthesis of the aforementioned CER. The objectives of this project are to identify potential research questions and to suggest study designs for addressing them. These objectives were achieved by cataloguing the evidence gaps relevant to iron deficiency laboratory tests, establishing a stakeholder panel, engaging stakeholders in research topic nomination and prioritization, and developing research protocols for the most highly ranked topics. Table 2 summarizes the evidence gaps identified in our review. (Note: The gaps are not listed in the order of the CER Key Questions.)
One major evidence gap is the dearth of pediatric studies (only one study enrolled pediatric HD and PD CKD patients),22 which suggests the need for a separate FRN effort devoted to pediatric CKD patients (nondialysis and dialysis). Because this would require a specially composed stakeholder group, it was determined to be beyond the scope of this project. Thus, the current FRN project focuses on adult CKD patients (nondialysis and dialysis).
Figure 1 depicts the analytic framework used in structuring the CER as well as the FRN report. Broadly, it shows how the individual Key Questions are addressed within the context of the logical linkages between populations, interventions, comparators, and outcomes of interest.
Newer laboratory biomarkers: content of Hb in reticulocytes, percentage of hypochromic red blood cells, erythrocyte zinc protoporphyrin, soluble transferrin receptor, hepcidin, and superconducting quantum interference devices.
Older laboratory biomarkers: bone marrow iron stores, serum iron, transferrin saturation, iron-binding capacity, and ferritin.
Chung M, Chan JA, Moorthy D, et al. Biomarkers for Assessing and Managing Iron Deficiency Anemia in Late-Stage Chronic Kidney Disease: Future Research Needs: Identification of Future Research Needs From Comparative Effectiveness Review No. 83 [Internet]. Rockville (MD): Agency for Healthcare Research and Quality (US); 2013 Jan. (Future Research Needs Papers, No. 33.) Background.