Milbank Q. Jun 2010; 88(2): 240–255.
PMCID: PMC2980345

“Impactibility Models”: Identifying the Subgroup of High-Risk Patients Most Amenable to Hospital-Avoidance Programs

Abstract

Context: Predictive models can be used to identify people at high risk of unplanned hospitalization, although some of the high-risk patients they identify may not be amenable to preventive care. This study describes the development of “impactibility models,” which aim to identify the subset of at-risk patients for whom preventive care is expected to be successful.

Methods: This research used semistructured interviews with representatives of thirty American organizations that build, use, or appraise predictive models for health care.

Findings: Impactibility models may refine the output of predictive models by (1) giving priority to patients with diseases that are particularly amenable to preventive care; (2) excluding patients who are least likely to respond to preventive care; or (3) identifying the form of preventive care best matched to each patient's characteristics.

Conclusions: Impactibility models could improve the efficiency of hospital-avoidance programs, but they have important implications for equity and access.

Keywords: Predictive modeling, impactibility, hospital avoidance, equity, access

As the population ages and more people live with complex diseases, the costs of chronic disease will become increasingly difficult to sustain. Moreover, health care costs are highly skewed across the population (Cummings, Cummings, and Johnson 1997). For example, 8 percent of Medicaid enrollees account for roughly two-thirds of all Medicaid spending (Sommers and Cohen 2006). This means that large sums could be invested “upstream” in preventive care for the most costly patients and still potentially yield net savings from averted “downstream” expenditure (Billings and Mijanovich 2007; Cousins, Shickle, and Bander 2002).

The need to demonstrate cost savings from averted emergency hospital admissions is being intensified by rising health care costs and tightening health care budgets. But a series of disappointing results from government-funded trials of chronic disease programs, such as the Medicare Health Support Experiment (Centers for Medicare and Medicaid Services 2008b) and the Medicare Coordinated Care Demonstration (Centers for Medicare and Medicaid Services 2008a), have shown how difficult it can be to realize these potential savings. In the wake of such findings, therefore, attention is turning once again to ways of improving the cost-effectiveness of chronic disease management programs (Peikes et al. 2009). One strategy being pursued is to optimize the case-finding process by developing more sophisticated tools for selecting participants.

At present, eligibility for government-funded care management programs is often determined by the output of a predictive risk model. For example, the New York Medicaid Chronic Illness Demonstration Program (New York State Department of Health 2008a) selects participants by using a risk-prediction algorithm that includes variables relating to prior hospital utilization, pharmacy and durable medical equipment use, diagnostic information, and patient demographics (New York State Department of Health 2008b). The purpose of the predictive model is to ensure that the program is offered only to those people who are at high risk of the outcome to be prevented, namely, future hospitalization (Billings and Mijanovich 2008).

This article describes the development of impactibility models, tools designed to identify systematically the subset of at-risk enrollees for whom preventive care is expected to be successful. Impactibility models are intended to address a potential disadvantage of predictive models, which is that some of the high-risk patients they identify may not in fact be amenable to upstream care (Weber and Neeser 2006).

Risk Adjustment and Predictive Modeling

Since their introduction in the mid-1980s, risk-adjustment tools have had a profound impact on health services research and delivery (AcademyHealth 2008). Using relationships in historic administrative data, they calculate the expected resource use of each member of a health plan, thereby enabling health care providers to be remunerated fairly and efficiently (Majeed, Bindman, and Weiner 2001). Known also as case-mix adjusters and clinical groupers, examples of risk-adjustment tools include the Adjusted Clinical Group (ACG) system (Weiner et al. 1991) and the Diagnostic Cost Group (DCG) system (Ellis et al. 1996), both of which are used in many countries to predict health care costs and other patient outcomes.

In the late 1990s, attention turned to how such tools might be applied to predict future, rather than current, use of resources (Cucciare and O’Donohue 2006). An indication of which individuals are at risk of future acute care inpatient hospitalizations is useful because today's high-cost patients will have markedly lower average costs in the future even without intervention, a phenomenon called regression to the mean (Roland et al. 2005). This means that hospital-avoidance programs, such as case management, are best offered to patients according to their risk of future hospitalization, rather than to patients who are currently experiencing multiple hospital admissions (Curry et al. 2005). Predictive risk models enable patients to be stratified according to their individual risk of future hospitalization.
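
To illustrate regression to the mean, here is a minimal Python simulation (all cost parameters are invented for illustration) in which each patient's observed annual costs fluctuate randomly around a stable underlying level. The patients who are most costly this year are, on average, markedly cheaper next year even though nothing was done for them:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical model: each patient has a stable underlying annual cost,
# and observed costs fluctuate randomly around it from year to year.
n_patients = 100_000
underlying = rng.gamma(shape=2.0, scale=1_500.0, size=n_patients)  # mean ~$3,000
year1 = underlying * rng.lognormal(0.0, 0.8, n_patients)
year2 = underlying * rng.lognormal(0.0, 0.8, n_patients)  # no intervention occurs

# Select this year's top 5 percent most costly patients.
threshold = np.percentile(year1, 95)
high_cost = year1 >= threshold

print(f"Year 1 mean cost of top 5%:        ${year1[high_cost].mean():,.0f}")
print(f"Year 2 mean cost of same patients: ${year2[high_cost].mean():,.0f}")
# The second figure is sharply lower: regression to the mean, not treatment effect.
```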

Risk-adjustment tools and predictive risk models can be constructed from historic administrative data, such as claims and demographic data, using multiple regression or neural network techniques (Cousins, Shickle, and Bander 2002). Unlike risk adjusters, however, predictive models also typically incorporate variables relating to prior health care use and disease severity. Such information is generally excluded from risk adjusters because otherwise health care providers might benefit financially from ascribing more severe diagnoses to their patients or by providing unnecessary medical care (Hu and Lesneski 2004). For example, a risk-adjustment model that included procedure variables would be open to manipulation by physicians because their future reimbursement could be increased simply by providing more costly treatments. In contrast, predictive models may legitimately include variables from a wide range of data sources, not only those found in claims and demographic data, but also variables from health risk assessment (HRA) surveys, prescriptions data, clinical and biometric information from electronic medical records, and laboratory results.

Predictive risk models apply statistical techniques such as multiple regression or neural networks to routine electronic data (Cousins, Shickle, and Bander 2002), using historic patterns in the population's data to make predictions at the individual level. The growing use of predictive models in health care in recent years has been made possible by a combination of better access to individual-level electronic data and improvements in computing power. Very large data sets, often involving hundreds of millions of observations, can now be analyzed according to the health needs, service use, and health outcomes of each individual in a population.

In developing predictive models, analysts must be careful not to “overfit” their models to the data. If a model is too complex, then overfitting can occur, in which a few idiosyncrasies in the data are captured in the formula for calculating risk scores. Such a model may then perform relatively poorly when it is applied to other data sets that lack these idiosyncrasies. One way to avoid overfitting is to split the data at random, using half the data (the “development sample”) to construct the model, with the other half (the “validation sample”) used later to test how well the model performs. The accuracy of the predictive model can be quantified according to its performance on the validation sample using metrics such as the sensitivity and specificity, the positive and negative predictive values, the area under the receiver operating characteristics curve (ROC curve), and the r-squared value.1 In practice, a predictive model is applied to the current data in order to produce a risk estimate for each individual in the population for the forthcoming time period (typically the next twelve months).
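
As a concrete sketch of this workflow, the following Python fragment builds a model on synthetic claims-style data, splits it into development and validation samples, and quantifies accuracy by the area under the ROC curve. The feature names and coefficients are hypothetical, not drawn from any model described here:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 20_000

# Hypothetical claims-style predictors of the kind such models often use.
prior_ed_visits = rng.integers(0, 5, n)   # emergency department visits last year
num_drugs = rng.integers(0, 15, n)        # distinct prescription drugs
age = rng.integers(18, 95, n)
heart_failure = rng.integers(0, 2, n)     # diagnosis flag (0/1)
X = np.column_stack([prior_ed_visits, num_drugs, age, heart_failure])

# Synthetic outcome: unplanned admission in the next twelve months,
# loosely tied to the predictors (the coefficients are invented).
logit = -5.0 + 0.6 * prior_ed_visits + 0.08 * num_drugs + 0.02 * age + 0.9 * heart_failure
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-logit))).astype(int)

# Split the data at random: half to develop the model, half to validate it.
X_dev, X_val, y_dev, y_val = train_test_split(X, y, test_size=0.5, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_dev, y_dev)

# Quantify performance on the held-out validation sample only,
# so that any overfitting to the development sample is exposed.
auc = roc_auc_score(y_val, model.predict_proba(X_val)[:, 1])
print(f"Area under the ROC curve on the validation sample: {auc:.3f}")
```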

Methods

In this study, I sought to identify new developments in health care predictive modeling in the United States. I received confirmation from the New York University institutional review board (IRB) that ethics approval was not required for this study, and I obtained each participant's informed consent.

I conducted semistructured telephone interviews between January and May 2008 with a designated senior official from a range of U.S. organizations that build, use, or appraise predictive models in the field of health care. An initial list of organizations was compiled from the following sources:

  • speakers at the First National Predictive Modeling Summit held in December 2007 in Washington, DC
  • members of the “Predictive Modeling News” electronic mailing list run by Health Policy Publishing LLC
  • organizations listed in a commercial directory of predictive modeling vendors and their clients (Schwartz 2007)

I identified other organizations using a snowball technique. I reached thematic saturation after twenty-six interviews, but I conducted four more interviews to ensure that no new themes emerged. The final list of respondents included representatives of thirty organizations (see appendix) that

  • build predictive models (model vendors, disease management companies, universities, and consultancies),
  • use such models (physicians, nurses, insurance plan managers and actuaries, integrated health care delivery systems, Medicaid, Medicare, and employers), or
  • appraise their use (academics, consultants, and employer groups).

The interview schedule included open-ended questions in five domains: (1) how predictive modeling is currently being used, (2) issues relating to the data used to build and run predictive models, (3) the outcomes predicted by the models, (4) how predictions are used, and (5) likely new developments in this field.

All interviews were recorded, transcribed, and coded, and qualitative data software (Atlas.ti) was used in the analysis.

Findings

A new development cited by almost all respondents was “impactibility modeling.” The interviewees described a growing recognition in the disease management and predictive modeling industries that not all high-risk patients benefit from preventive care. Whereas all high-risk patients once may have been offered case management, the respondents described attempts now to target upstream interventions only at those individual patients thought most likely to benefit. This was seen as a way to increase the cost-effectiveness of upstream care.

A predictive impactibility model may be defined as one that

predict[s] who will acquire a disease, an adverse event related to a disease, or change from one health (functioning) state to another, where these outcomes are impactible with some specific intervention such as taking or stopping a medication, doing a test, reducing avoidable medical costs, making a behavioral change, or changing the person's environment. (Duncan 2004, 91)

Ideally, an impactibility model would use information about the differential effects of a specific preventive intervention offered at random to patients and controls, so as to identify the characteristics of the “perfect patient” for that preventive program. Since such data are rarely available in practice, the respondents reported that other, more pragmatic approaches were being pursued to predict impact. Such strategies included data mining, quasi-experimental methods, and analysis of routine data sets for adherence to evidence-based guidelines. None of the interviewees indicated what proportion of high-risk patients they regarded as “non-impactible,” nor did they quantify the possible improvement in efficiency from using impactibility models.

The respondents described three classes of impactibility model, those that (1) gave priority to patients who were predicted to be the most amenable to preventive care; (2) excluded patients who were deemed unlikely to respond to preventive care; and (3) tailored preventive care to each patient's characteristics.

Giving Priority to Patients with Conditions That Make Them Amenable to Preventive Care

The most commonly reported strategies for improving the impact of upstream care involved giving priority to patients based on the “actionability” of their diseases and the treatments they were receiving. The types of impactibility model described in this category were as follows:

Excluding the Very Highest Risk

Although most respondents maintained that all high-risk patients should potentially be offered intensive upstream care, a few stated that certain disease management organizations choose not to manage extremely high-risk cases, regarding extreme risk as a sign of unmanageability. They explained that hospitalizations can be very difficult or impossible to prevent in this group of patients and that many very high-risk patients would have died before they could be contacted.

Ambulatory Care–Sensitive Conditions

The most commonly cited way of attempting to increase the impact of predictive risk models was to give priority to patients with certain diagnoses, such as heart failure and epilepsy, which are known to be particularly amenable to upstream care. Many interviewees reported favoring patients with “ambulatory care–sensitive” (ACS) conditions. These are diseases for which prompt, high-quality primary or outpatient care can reduce the risk of hospitalization (Billings et al. 1993).

Gaps in Care

Another commonly reported method of increasing impact was to give priority to patients according to the number of “gaps” in their care, a gap being an observed difference between optimal care and the care actually received. For example, in a patient with ischemic heart disease, not taking an antiplatelet drug (such as low-dose aspirin) could be a gap. The respondents explained how the number and nature of these gaps are used as a proxy for each patient's impactibility: patients with many high-impact gaps would be chosen for intervention because tangible steps could be taken to improve their care. The respondents described a variety of methods for defining and weighting gaps, including nonstatistical techniques (e.g., a modified Delphi method) and evidence-based standards, such as those published by Milliman (Milliman Inc. 2009). One respondent commented that when using evidence-based gaps, it might be possible to quantify the expected impact of closing each gap by using published adverse event rates from studies of patients who did and did not have that gap (Weber and Neeser 2006). This respondent felt that such quantitative information could then be used to make more detailed projections of the likely impact of upstream care.
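
As a rough sketch of how a gap count might be turned into a priority score, the following Python fragment weights a handful of care gaps; the gap definitions and weights here are illustrative inventions, not Milliman's published standards or any respondent's actual scheme:

```python
# Hypothetical care gaps and weights; in practice these would come from
# evidence-based standards or expert consensus (e.g., a modified Delphi method).
GAP_WEIGHTS = {
    "ihd_no_antiplatelet": 3.0,     # ischemic heart disease, no antiplatelet drug
    "diabetes_no_hba1c_test": 2.0,  # no HbA1c test in the past year
    "chf_no_ace_inhibitor": 3.0,    # heart failure, no ACE inhibitor
    "no_annual_review": 1.0,        # no chronic-disease review visit
}

def gap_score(observed_gaps: set[str]) -> float:
    """Sum the weights of the care gaps observed in a patient's data."""
    return sum(GAP_WEIGHTS.get(gap, 0.0) for gap in observed_gaps)

# Among equally high-risk patients, those with many high-impact gaps would
# be prioritized, since tangible steps exist to improve their care.
patients = {
    "patient_a": {"ihd_no_antiplatelet", "no_annual_review"},
    "patient_b": {"diabetes_no_hba1c_test"},
}
for pid, gaps in sorted(patients.items(), key=lambda kv: -gap_score(kv[1])):
    print(pid, gap_score(gaps))
```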

Excluding Patients Who Are Unlikely to Respond to Preventive Care

A second category of impactibility model named by the respondents gives priority to patients according to their expected response to preventive care. Here the determining factors are the patient's characteristics rather than the disease or its management. Sometimes referred to as predicting “patient activation” or “co-operability,” this approach aims to concentrate resources on those people most likely to participate in and respond to upstream care. Respondents described several methods that may be used to estimate the probability that a particular patient will respond to a care management program.

Patient Characteristics

Several respondents reported that certain disease management companies gave less priority to or excluded patients with attributes suggestive of likely noncompliance. These characteristics were said to include mental health diagnoses (schizophrenia, depression, dementia, or learning difficulties), addictions (smoking, alcohol misuse, or illicit drug addiction), and social factors (language barrier, housing problems, or being a single parent).

Previous Noncompliance

A few interviewees described how less priority might be given to patients whose administrative data indicated that they previously had not complied with a particular treatment. For example, patients who had attended a weight-loss clinic but remained overweight, or who had not filled all their prescriptions or attended all their follow-up appointments, might be excluded from upstream care.

Patient Activation

Some respondents said that validated tools such as the Patient Activation Measure (Hibbard et al. 2004) might be used to identify patients deemed unlikely to respond to upstream care. Patients might be asked to complete such tools online or through an automated telephone call. Patients with a high activation score were thought to be more likely to comply with upstream care, so some disease management organizations used these tools as a screening measure for selecting patients for upstream care.

Similarities to Previous Patients

One respondent described predicting responsiveness by using multiple regression to identify the characteristics of patients who had previously been successfully managed in a preventive program. This information was then used to select patients who were expected to respond well.

Disenrollment

Another respondent cited modelers’ attempts to predict which individuals were at risk of disenrolling from a disease management program. Retaining patients in upstream programs can be challenging (MacStravic 2007), so this respondent felt that such a model might be used to target extra resources or attention at those participants at particular risk of disenrollment.

Tailoring Preventive Care to the Individual Patient

A third strategy that the respondents described for improving the impact of predictions was to model “receptivity.” The aim here is to forecast which approach to preventive care is likely to work best for each patient. Receptivity modeling is based on the premise that patients with a similar predicted risk of hospitalization may respond differently to the same preventive intervention. Using techniques adopted from marketing, analysts may be able to predict the best approach for each individual patient based on demographic, diagnostic, neighborhood, and other characteristics.

Interviewees reported that some modelers attempt to predict the best “channel” for contacting prospective patients, predict the optimal “content” of the preventive care, or predict what incentive would be most likely to persuade a particular patient to change his or her behavior.

Channel

One respondent described attempts to predict the best medium for making contact with the patient (brochure versus email versus telephone call), the best messenger (male versus female nurse, older versus younger health coach), and the best timing of the message (morning versus afternoon, weekend versus weekday). The respondent explained how a disease management company's data might show that patients with certain combinations of characteristics would not sign up for a preventive program following an email and a telephone call but that other patients with very similar characteristics were receptive to a mailed brochure and a telephone call. On this basis, the most receptive channel for each patient could be determined according to a combination of characteristics, including age, socioeconomic status, patterns of health care use, and diagnoses.

Content

A few respondents reported attempts to tailor the nature of the preventive care to the characteristics of each patient. For example, patients might be classified according to their readiness to change their unhealthy behavior. Prochaska and DiClemente classified several stages of readiness to change, including “pre-contemplative,” “contemplative,” “preparation,” “action,” “maintenance,” and “relapse” (Prochaska and DiClemente 1983). The respondents felt that different types of preventive care were appropriate for each of these stages, so they tried to develop a “Prochaska index” based on routine data. They believed that such an index was helpful for deciding what type of preventive care should be offered to each patient.

Incentives

A few respondents mentioned that certain health plans used incentives to encourage enrollees to engage with their health coach or case manager. These ranged from small gifts (such as pedometers, gift cards, and vouchers) to alterations to the member's insurance benefit package, co-pays, or deductibles. The plans were said to use receptivity modeling to predict what incentive would be most attractive to each enrollee based on variables in routine data.

Several respondents explained that receptivity modeling was conducted on data from marketing and credit card companies as well as demographic and claims data (Stehno 2007). Although consumer data were not considered to be necessarily predictive of future health care costs, they were felt to provide insight into psychosocial factors and therefore might help determine what type of preventive care was most likely to be successful for a given patient.

Even though receptivity modeling was cited less frequently by the respondents than the other two classes of impactibility model, it was described as a new and promising field.

Discussion

Programs aimed at preventing unplanned acute hospitalizations are attractive to policymakers because they have the potential to improve patient experiences and outcomes and at the same time reduce overall costs. Although such programs may seem intuitive and clinically plausible, in practice it has been difficult or impossible to demonstrate net savings. One way of improving the cost-effectiveness of hospital-avoidance programs may be to target upstream care not simply to those people who are at risk of future hospitalization but more specifically to those at-risk people whose risks can be mitigated. For example, some disease management organizations that are participating in Medicare and Medicaid trials have reportedly sought the freedom to select only those high-risk beneficiaries whom they believe are most likely to benefit (Abelson 2008). Predictive risk models can help identify patients who are truly at risk of future hospitalization. Impactibility models are meant to identify the subset of at-risk patients who are likely to benefit from “upstream” care.

One disadvantage of using impactibility models is that even though they may improve the efficacy of a preventive program for individual patients, they may also reduce its overall potential across the population if the impactibility model deems that only a small proportion of high-risk patients are amenable to or will respond to preventive care. As the cost-effectiveness ratios for more preventive programs become known, these values may be used to set priorities. For example, a health plan could decide to implement any preventive intervention below a certain cost-effectiveness threshold, starting with the most cost-effective.

Although the interviewees agreed that excluding certain individual “high-risk” patients might increase a program's cost-effectiveness, they disagreed on whether to exclude systematically all very high-risk patients. Most interviewees felt very high-risk patients should, in general, be offered upstream care. Because these patients have the highest expected rates of hospitalization, the potential payback for success also is the greatest here. Some interviewees believed, however, that all patients with very high “risk scores” should be excluded because on average they are too complex to manage or will have died before being reached. Such a strategy seems somewhat surprising, given that the literature suggests that hospital-avoidance interventions are most successful for the highest-risk patients (Krause 2005; Peikes et al. 2009) and given the apparent absence of published evidence that upstream care is less cost-effective for very high-risk patients than for patients with a slightly lower predicted risk.

As well as improving the cost-effectiveness of preventive care, predictive models can increase access and equity. One reason is that predictive models identify patients according to objective criteria rather than the attentiveness of the physician or the wishes of patients and their relatives. Furthermore, predictive models often include a proxy for socioeconomic status, a factor known to be predictive of hospital use (Billings et al. 1993). Including such variables in predictive models (as opposed to impactibility models) tends to increase the risk scores for people living in more deprived areas, thus favoring them for preventive care.

The introduction of certain types of impactibility model offers the potential to improve access and equity still further. For example, models designed specifically to predict rehospitalizations owing to ACS conditions, such as the “PARR 1” model used in the English National Health Service (Billings et al. 2006), give precedence to patients who are receiving suboptimal primary care or who are noncompliant with that care. Likewise, patients receiving suboptimal care from their physician may be given priority by impactibility models that assess the number of quality “gaps” in their data. In general, predictive models rely on positive signals in administrative data to make forecasts of health care use. For example, a new diagnosis or a visit to an emergency department might be predictive of a future hospitalization. In contrast, care gaps are typically represented by negative observations in the data, such as no follow-up visit or the absence of a particular drug. Although all the interviewees in this study regarded quality gaps as markers of high impactibility, it is worth noting that gaps could also denote noncompliance with care, so noncompliant patients might be favored by this type of impactibility model.

Some other types of impactibility model, however, may impede access to preventive care by the most vulnerable in society. If low socioeconomic status were found to predict poor compliance or a poor response to a preventive program, then people living in more deprived areas might be excluded from enrolling by a plan that was interested only in cost-effectiveness. Likewise, impactibility models that excluded enrollees with addictions, mental illness, language barriers, or other social problems might worsen disparities in health care. However, unless these characteristics are reflected in the risk-adjustment systems that determine capitation or reimbursement, organizations may receive the same funding to care for vulnerable patients who are difficult to manage as for those who are easier to manage. Without careful regulation and appropriate incentives, unscrupulous disease management organizations might choose to discriminate against (or “dump”) difficult patients based on the predictions of these types of impactibility models (Ellis 1998).

A great deal is already known about which chronic diseases are most amenable to ambulatory care and the characteristics of the patients who are most likely to respond to preventive care. As this knowledge is incorporated into practical tools that can be applied systematically to population data, it will be important not only to assess the effect of impactibility models on the cost-effectiveness of upstream care, but also to detect any unintended effect they may have on access and equity. Indeed, it will be important to quantify how many “high-risk” patients are deemed “non-impactible,” to describe these patients’ characteristics, and to study the effects of prioritizing and de-prioritizing patients based on predicted impactibility.

Impactibility models represent an important strategy for improving health and reducing disparities in health care. Careful evaluation and well-designed incentives will be needed to encourage organizations to develop programs that cater to the individual needs of all high-risk patients.

Acknowledgments

This work was made possible by the Commonwealth Fund, which supported me as a 2007/2008 Harkness Fellow in Health Care Policy and Practice, based at the Robert F. Wagner Graduate School of Public Service, New York University. My work was independent of the fund, and the opinions expressed in this article are not necessarily those of the fund. Jennifer Dixon, Rhema Vaithianathan, John Billings, and three anonymous reviewers provided helpful comments on a previous draft.

Appendix

Representatives of the following organizations kindly volunteered their time to participate in this research:

ACG Case-Mix System

Ault International Medical Management

Boston University

CareAdvantage Inc.

Center for Health Care Strategies Inc.

CVS Caremark

D2 Hawkeye Inc.

Harvard University

Health Care Resources Inc.

Health Dialog Inc.

Healthways Inc.

Humana Inc.

Independence Blue Cross

Ingenix Inc.

Johns Hopkins University

Kaiser Permanente

Massachusetts General Hospital

MEDai Inc.

Medical Cost Management Corporation

Mercer LLC

Montana Association of Healthcare Purchasers

Montefiore Medical Center

New York University

Partners HealthCare System Inc.

Predictive Health LLC

Society of Actuaries

Solucia

3M Company

UnitedHealth Group

Veterans Health Administration

Endnote

1. The sensitivity is the proportion of those people who will truly experience the outcome of interest (e.g., future acute care inpatient hospitalization) that the model correctly identifies as “high risk.” The specificity is the proportion of people who will not experience the outcome of interest that the model correctly identifies as “low risk.” The positive predictive value is the likelihood that a person identified by the model as “high risk” will truly experience the outcome of interest, and the negative predictive value is the probability that a person identified by the model as “low risk” will not experience the outcome of interest. A receiver operating characteristics curve (ROC curve) plots the trade-off between the sensitivity and (1 − specificity) of a model, so that the area under the ROC curve represents the ability of the model to discriminate between those individuals who will and will not experience the outcome of interest. The area under the curve ranges from 0.5 (for a useless model) to 1.0 (for a perfect model). The r-squared value is the proportion of the population variance in the outcome of interest that is explained by the model.
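
For readers who prefer a computational statement of these definitions, the following Python function derives the first four metrics from a 2×2 confusion matrix; the counts in the example are made up for illustration:

```python
def classification_metrics(tp: int, fp: int, tn: int, fn: int) -> dict[str, float]:
    """Compute the endnote's metrics from a 2x2 confusion matrix, where a
    'positive' prediction means the model flagged the person as high risk
    and a 'positive' outcome means the hospitalization actually occurred."""
    return {
        "sensitivity": tp / (tp + fn),                # share of true cases flagged
        "specificity": tn / (tn + fp),                # share of non-cases cleared
        "positive_predictive_value": tp / (tp + fp),  # flagged and truly admitted
        "negative_predictive_value": tn / (tn + fn),  # cleared and truly not admitted
    }

# Illustrative (invented) counts for a population of 10,000 people.
print(classification_metrics(tp=300, fp=700, tn=8_800, fn=200))
```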

References

  • Abelson R. Medicare Finds How Hard It Is to Save Money. New York Times. April 7, 2008.
  • AcademyHealth. 2008 HSR Impact Awardee: Risk-Based Predictive Modeling—Improving the Financing and Delivery of Health Care with Risk-Based Predictive Modeling. 2008. Available at http://www.academyhealth.org/files/awards/Risk-BasedPredictiveModeling.pdf (accessed March 10, 2010).
  • Billings J, Mijanovich T. Improving the Management of Care for High-Cost Medicaid Patients. Health Affairs. 2007;26(6):1643–54.
  • Billings J, Mijanovich T. Narrow Model: The Authors Respond. Health Affairs. 2008;27(3):900.
  • Billings J, Mijanovich T, Dixon J, Curry N, Wennberg D, Darin B, Steinort K. Case Finding Algorithms for Patients at Risk of Re-Hospitalization, PARR 1 and PARR 2. London: King's Fund; 2006.
  • Billings J, Zeitel L, Lukomnik J, Carey TS, Blank AE, Newman L. Impact of Socioeconomic Status on Hospital Use in New York City. Health Affairs. 1993;12(1):162–73.
  • Centers for Medicare and Medicaid Services. Medicare Coordinated Care Demonstration. 2008a. Available at http://www.cms.hhs.gov/DemoProjectsEvalRpts/downloads/CC_Fact_Sheet.pdf (accessed February 13, 2009).
  • Centers for Medicare and Medicaid Services. Medicare Health Support, Overview. 2008b. Available at http://www.cms.hhs.gov/CCIP/ (accessed February 8, 2009).
  • Cousins MS, Shickle LM, Bander JA. An Introduction to Predictive Modeling for Disease Management Risk Stratification. Disease Management. 2002;5(3):157–67.
  • Cucciare MA, O'Donohue W. Predicting Future Healthcare Costs: How Well Does Risk-Adjustment Work? Journal of Health Organization and Management. 2006;20(2):150–62.
  • Cummings NA, Cummings JL, Johnson LN. Behavioral Health in Primary Care: A Guide for Clinical Integration. Madison, CT: Psychosocial Press; 1997.
  • Curry N, Billings J, Darin B, Dixon J, Williams M, Wennberg D. Predictive Risk Project Literature Review. London: King's Fund; 2005.
  • Duncan I. Dictionary of Disease Management Terminology. Washington, DC: Disease Management Association of America; 2004.
  • Ellis R. Creaming, Skimping and Dumping: Provider Competition on the Intensive and Extensive Margins. Journal of Health Economics. 1998;17(5):537–56.
  • Ellis R, Pope G, Iezzoni L, Ayanian JZ, Bates DW, Burstin H, Ash AS. Diagnosis-Based Risk Adjustment for Medicare Capitation Payments. Health Care Financing Review. 1996;17(3):101–28.
  • Hibbard JH, Stockard J, Mahoney ER, Tusler M. Development of the Patient Activation Measure (PAM): Conceptualizing and Measuring Activation in Patients and Consumers. Health Services Research. 2004;39(4, part 1):1005–26.
  • Hu G, Lesneski E. The Differences between Claim-Based Health Risk Adjustment Models and Cost Prediction Models. Disease Management. 2004;7(2):153–58.
  • Krause DS. Economic Effectiveness of Disease Management Programs: A Meta-analysis. Disease Management. 2005;8(2):114–34.
  • MacStravic S. The Challenge of Participation in Disease Management. Disease Management. 2007;10(5):247–51.
  • Majeed A, Bindman AB, Weiner JP. Use of Risk Adjustment in Setting Budgets and Measuring Performance in Primary Care I: How It Works. British Medical Journal. 2001;323(7313):604–7.
  • Milliman Inc. Milliman Care Guidelines. 2009. Available at http://www.milliman.com/expertise/healthcare/products-tools/milliman-care-guidelines/ (accessed February 8, 2009).
  • New York State Department of Health. Request for Proposals—Chronic Illness Demonstration Projects. 2008a. Available at http://www.health.state.ny.us/funding/rfp/0801031003/ (accessed February 8, 2009).
  • New York State Department of Health. Request for Proposals for Office of Health Insurance Programs. 2008b. Available at http://www.health.state.ny.us/funding/rfp/0801031003/0801031003.pdf (accessed May 15, 2009).
  • Peikes D, Chen A, Schore J, Brown R. Effects of Care Coordination on Hospitalization, Quality of Care, and Health Care Expenditures among Medicare Beneficiaries: 15 Randomized Trials. Journal of the American Medical Association. 2009;301(6):603–18.
  • Prochaska JO, DiClemente CC. Stages and Processes of Self-Change of Smoking: Toward an Integrative Model of Change. Journal of Consulting and Clinical Psychology. 1983;51(3):390–95.
  • Roland M, Dusheiko M, Gravelle H, Parker S. Follow up of People Aged 65 and Over with a History of Emergency Admissions: Analysis of Routine Admissions Data. British Medical Journal. 2005;330(7486):289–92.
  • Schwartz D. Predictive Modeling in Disease Management. 2nd ed. Marblehead, MA: HCPro Inc; 2007.
  • Sommers A, Cohen M. Medicaid's High Cost Enrollees: How Much Do They Drive Program Spending? Menlo Park, CA: Kaiser Family Foundation; 2006.
  • Stehno CE. An Innovative Health Risk Measurement Technique for Disease Management. Disease Management. 2007;10(1):1–5.
  • Weber C, Neeser K. Using Individualized Predictive Disease Modeling to Identify Patients with the Potential to Benefit from a Disease Management Program for Diabetes Mellitus. Disease Management. 2006;9(4):242–55.
  • Weiner JP, Starfield BH, Steinwachs DM, Mumford LM. Development and Application of a Population-Oriented Measure of Ambulatory Care Case Mix. Medical Care. 1991;29(5):453–72.
