NCBI Bookshelf. A service of the National Library of Medicine, National Institutes of Health.

Cylus J, Papanicolas I, Smith PC, editors. Health system efficiency: How to make measurement matter for policy and management [Internet]. Copenhagen (Denmark): European Observatory on Health Systems and Policies; 2016. (Health Policy Series, No. 46.)


1. A framework for thinking about health system efficiency


This chapter takes up the challenge set out in the book’s preface and offers a framework for thinking about the conceptualization and measurement of efficiency in health systems. The intention is to help those seeking to understand the magnitude and nature of a system’s inefficiencies. The chapter first reiterates why an understanding of health sector efficiency is important. We then explain what is meant by efficiency and explore in more depth the two fundamental concepts of allocative efficiency (AE) and technical efficiency (TE). We show that many metrics relating to efficiency are partial and, if viewed in isolation, can be misleading. We conclude by presenting a framework for thinking about health efficiency metrics comprising five key issues: the entity to be scrutinized; the outputs; the inputs; the external influences on performance; and the impact of the entity on the broader health system.

1.1. Why is health sector efficiency important?

The notion of health sector efficiency – and related concepts such as cost–effectiveness and value for money – is one of the most discussed dimensions of health care performance. These concepts seek to capture the extent to which the inputs to the health system, in the form of expenditure and other resources, are used to secure valued health system goals. In many other sectors of the economy, consumer preferences help to ensure that the most valued outputs are produced at market prices. However, numerous well-rehearsed market failures in the health sector mean that traditional market mechanisms cannot work, allowing poor-quality or inappropriate care to persist at high prices if no policy action is taken. Most commentators would therefore agree that the pursuit of efficiency should be a central objective of policymakers and managers, and to that end better instruments for measuring and understanding efficiency are urgently needed.

Inefficient use of health system resources poses serious concerns, for a number of reasons:

  • it may deny health gain to patients who are treated, because they do not receive the best possible care available within the health system’s resource limits;
  • by consuming excess resources, inefficient treatment may deny care to other patients who could have benefited had those resources been better used;
  • inefficient use of resources in the health sector may sacrifice consumption opportunities elsewhere in the economy, such as education or nutrition;
  • wasting resources on inefficient care may reduce society’s willingness to contribute to the funding of health services, thereby harming social solidarity, health system performance and social welfare.

Thus, as well as its instrumental value, tackling inefficiency has an important accountability value: to reassure payers that their money is being spent wisely, and to reassure patients, caregivers and the general population that their claims on the health system are being treated fairly and consistently. Also, health care funders including governments, insurance organizations and households are interested in knowing which systems, providers and treatments contribute the largest health gains in relation to the level of resources they consume. Efficiency becomes particularly important in the light of financial pressures and concerns over long-term financial sustainability experienced in many health systems, as decision-makers seek to demonstrate and ensure that health care resources are put to good use. When used appropriately, efficiency indicators can be important tools to help decision-makers determine whether resources are allocated optimally, and to pinpoint which parts of the health system are not performing as well as they should be.

1.2. What is inefficiency?

The concept of health system efficiency may seem beguilingly simple, represented at its simplest as a ratio of the resources consumed (health system inputs) to some measure of the valued health system outputs that they create. In effect, this creates a metric of the generic type ‘resource use per unit of health system output’. Yet making this straightforward notion operational can give rise to considerable complexity. Within the health system as a whole, there exists a seemingly infinite set of interlinked processes that could be evaluated independently and found to be efficient or inefficient. This has given rise to a plethora of apparently disconnected indicators that give glimpses of certain aspects of inefficiency, but rarely offer a comprehensive overview.

Economists conceive the transformation of inputs into valued outputs as a ‘production function’, which indicates the maximum feasible level of output for a given set of inputs. Any failure to attain that maximum is an indication of inefficiency (Jacobs, Smith & Street, 2006). The concept of a production function can be applied to the functioning of very detailed micro units (such as a physician’s office) through to huge macro units (such as the entire health system). Whatever level is chosen, the intention is to offer insights into the success with which health system resources are transformed into physical outputs (such as patient consultations), or (more ambitiously) into valued outcomes (such as improved health).

But why exactly might a health system not perform as well as it could? Processes in the health system may be inefficient for two distinct, but related reasons. The first reason is that health system inputs such as expenditure or other resources may be directed towards creating some outputs that are not priorities for society. For example, providing very high-cost end-of-life cancer treatments may create benefits for the individuals involved, but society may judge that the limited money available to the health system would be better spent on other interventions that create (in aggregate) larger health gains. The second reason is that inputs may be misused in the process of producing valued health system outputs. Waste of inputs at any stage of the production process means that less output is produced than is possible for a given initial level of resources. For example, if a health system does not secure the minimum cost of medicines and other inputs, less output – whether in terms of the quantity of patients treated or the quality of care provided – will be possible for a given level of expenditure. Likewise, if a patient’s medical tests are unnecessarily ordered or duplicated, resources are wasted and other individuals may be forced to forego needed care.

Economists refer to these two concepts as allocative efficiency (AE) and technical efficiency (TE). AE can be used to scrutinize either the choice of outputs or the choice of inputs. On the output side, it examines whether limited resources are directed towards producing the correct mix of health care outputs, given the preferences of funders (acting on behalf of society in general). AE can also examine whether the entity under scrutiny uses an optimal mix of inputs – for example, the mix of labour skills – to produce its chosen outputs, given the prices of those inputs.

In contrast, TE indicates the extent to which the system is minimizing the use of inputs in producing its chosen outputs, regardless of the value placed on those outputs. An alternative, but equivalent formulation is to say that it is maximizing its outputs given its chosen level of inputs. In either case, any variation in performance from the greatest feasible level of production is an indication of technical inefficiency, or waste. The prime interest in TE is therefore in the operational performance of the entity, rather than its strategic choices relating to what outputs it produces or what inputs it consumes.

The thesis underlying this book is that – whether inefficiency takes the form of inputs lost in the production of valued health outputs, or inputs misdirected towards relatively low-value health outputs – a first step towards remedial actions is to properly understand the magnitude and nature of any such inefficiency. To that end, it is important for decision-makers (whether clinicians, managers, regulators or policymakers) to understand the strengths and limitations of the many efficiency metrics that are becoming available.

We now therefore consider AE and TE in more detail.

1.3. Allocative inefficiency

AE is central to the work of health technology assessment (HTA) agencies, which often use expected gains in quality-adjusted life years (QALYs) as the central measure of the benefits of a treatment, and cost per QALY as a prime cost–effectiveness criterion for determining whether or not to mandate adoption of a treatment. The assumption underlying this approach is that payers wish to see their financial contributions used to maximize health gain. Under these circumstances, a provider would not be allocatively efficient if it produces treatments with low levels of cost–effectiveness, because the inputs used could be better deployed producing outputs with higher potential health gain (see Chapter 6).

Table 1.1 gives an example of a cost-per-QALY ranking, which indicates the relative value of a set of treatments being considered for introduction based on conventional estimates of incremental cost–effectiveness (compared to current practice). At the level of individual interventions, concentrating on introducing treatments with the lowest incremental cost per QALY maximizes the health benefits secured from limited funds. Of course, the volume of expenditure consumed by each intervention will depend on the incidence of the associated disease. In principle, the treatments under consideration should be ranked in order of increasing cost per QALY and included in the health benefits package until the available funds are exhausted. An equivalent perspective is to require that only treatments that lie below the system’s cost-per-QALY threshold should be accepted, where the value of the threshold is determined by the size of the total budget available for the health system.

Table 1.1. An example of an incremental cost-per-QALY league table.
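The prioritization logic described above – rank treatments by incremental cost per QALY and fund them in that order until the budget is exhausted – can be sketched as follows. The treatment names, cost-per-QALY figures and programme costs are hypothetical illustrations, not values from Table 1.1:

```python
# Illustrative sketch of cost-per-QALY league-table prioritization.
# All treatment names and figures are hypothetical examples.

def prioritize(treatments, budget):
    """Rank treatments by incremental cost per QALY and fund them
    in that order until the available budget is exhausted."""
    ranked = sorted(treatments, key=lambda t: t["cost_per_qaly"])
    funded, remaining = [], budget
    for t in ranked:
        # Programme cost reflects disease incidence: cases x cost per case.
        if t["programme_cost"] <= remaining:
            funded.append(t["name"])
            remaining -= t["programme_cost"]
    return funded

treatments = [
    {"name": "hip replacement", "cost_per_qaly": 2_000,  "programme_cost": 40},
    {"name": "statins",         "cost_per_qaly": 5_000,  "programme_cost": 30},
    {"name": "cancer drug X",   "cost_per_qaly": 90_000, "programme_cost": 50},
]
print(prioritize(treatments, budget=80))  # → ['hip replacement', 'statins']
```

The funded set implicitly defines the cost-per-QALY threshold mentioned in the text: with a budget of 80 the last funded treatment costs 5 000 per QALY, so only treatments below roughly that threshold are accepted.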

AE can also be considered at a broad sectoral level to examine whether the correct mix of health services is funded, such that at a given aggregate level of expenditure, health outcomes are being maximized. For example, an allocatively efficient health system allocates funds between sectors like prevention, primary care, hospital care and long-term care so as to secure the maximum level of health-related outcomes in line with societal preferences. AE indicators at this level should indicate whether a health system is performing poorly because of a misallocation of resources between such health system sectors. Indicators such as the rate of avoidable hospital admissions might be considered a sign of misallocation, perhaps suggesting that greater emphasis on primary care may yield efficiency improvements. Note that such principles can be equally applied to much smaller units of analysis, such as a primary care practice. Metrics such as excessive antibiotic prescribing, or excessive referrals to hospital specialists, might be indicators of allocative inefficiency.

Consideration of the different levels of AE highlights the fact that the health system may contain entities (such as clinical teams) that perform perfectly efficiently in producing what has been asked of them (for example, preventive treatments). However, consideration of a broader societal perspective may indicate that strategic decision-makers have misallocated resources between preventive and curative services, and that efficient teams are operating within an inefficient system.

Note that a great deal of emphasis on AE has hitherto been on ex ante guidelines on treatments and clinical pathways that should (or should not) be provided. Assuming those guidelines have been prepared in line with the principles of cost–effectiveness, they can also be used ex post to explore whether provider organizations and practitioners have deviated from policy intentions and delivered what can be thought of as inappropriate care. This may take the form of obviously suboptimal use of resources, such as hospital treatment of glue ear, a condition that does not typically require such a resource-intensive setting. However, it could also take the form of treatments that confer health benefits, but which policymakers have decided are not priorities, perhaps implicitly because their cost–effectiveness ratios are above the system’s chosen cost–effectiveness threshold. End-of-life cancer drugs are emerging as a particularly challenging example of such treatments in some systems.

Of course, the inappropriate treatments might have been provided because the financial regime continues to reward such provision, or because clear guidelines have not been promulgated, in which case accountability for the efficiency failure may properly be assigned to policymakers rather than providers. The identification and measurement of inappropriate care is therefore a first step towards identifying inefficiency of this type and designing remedial policies. Note that some valuation of the health benefits of treatment is needed to determine whether or not an activity is cost-effective, and therefore appropriate.

On the input side, although given less attention, there may be potential for a wide range of indicators of allocative inefficiency, in the form of inappropriate use of health system resources. For example, metrics relating to the skill mix of labour inputs can be prepared at a whole system level or at a local level. It is also possible to envisage a wide range of metrics of treatment taking place in the wrong setting (for example, emergency department rather than primary care office), or using inappropriate inputs (such as emergency ambulance transport for non-urgent care).

1.4. Technical inefficiency

There is a sense in which the analysis and measurement of technical inefficiency is less demanding than that of allocative inefficiency. It does not require ex ante specification of norms, and instead is usually entirely an ex post examination of whether the outputs produced by the entity under scrutiny were maximized, given its inputs and external circumstances. Comparative performance therefore lies at the core of most analyses of technical inefficiency.

A major class of TE indicator examines the total costs of producing a specified unit of output, in the form, for example, of costs per patient within a specified disease category. The most celebrated form of such unit cost indicators forms the basis for the various systems of diagnosis-related groups (DRGs), initially developed by Fetter and colleagues at Yale University (Fetter, 1991) for use in the hospital sector (see Chapter 2). These methods cluster patients into a manageable number of groups that are homogeneous with regard to medical condition and expected costs. In the first instance, a hospital’s average unit cost within a DRG category can be compared with a national reference cost for that DRG, often the mean of unit costs across all comparable institutions. This metric in itself may prove useful information on the functioning of specialities within the hospital.

Moreover, the number of cases in each DRG can then be multiplied by the relevant reference cost to derive the expected aggregate costs of treating all the hospital’s patients (if reference costs applied). This can be compared with its actual costs to yield an index of the hospital’s relative efficiency. This approach has usually been used in the hospital sector, but can be extended to many other units of analysis in the health system.
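The calculation described above can be sketched in a few lines. The DRG labels, case counts, reference costs and actual cost figure below are hypothetical:

```python
# Hypothetical sketch: a hospital's relative efficiency index against
# national DRG reference costs (index > 1 means costlier than expected).

def efficiency_index(casemix, reference_costs, actual_total_cost):
    """Expected cost = sum over DRGs of (cases x national reference cost);
    the index is actual total cost divided by that expected cost."""
    expected = sum(n * reference_costs[drg] for drg, n in casemix.items())
    return actual_total_cost / expected

casemix   = {"DRG_hip": 100, "DRG_appendix": 200}        # cases treated
reference = {"DRG_hip": 5_000, "DRG_appendix": 2_000}    # national mean unit costs

# Expected cost is 900 000; actual cost of 990 000 gives an index of 1.1,
# i.e. the hospital appears 10% more costly than the national benchmark.
print(efficiency_index(casemix, reference, actual_total_cost=990_000))
```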

An important barrier to applying the DRG method effectively is the great complexity of hospital cost structures. This has led to major challenges in allocating many hospital costs to specific patients and activities, and the associated variations in accounting practice are one of the reasons for apparent variation in unit costs. To the extent that it is feasible, greater standardization of accounting practices would seem to be an important priority. Chapman et al. (Chapter 4) discuss these important management accounting issues further.

Unit cost metrics offer insights into the overall TE of the entity (relative to other such entities), but give little operational guidance as to why such inefficiency arises, nor any insight into the AE of the entity. Aggregate measures of technical inefficiency can therefore usefully be augmented by more specific metrics of operational waste, such as excessive prices paid for inputs, comparatively long lengths of stay, or unnecessary duplication. Here we examine the various types of TE indicator in the context of a stylized example, based on hospital treatment.

For health production processes of any complexity, there are usually a number of stages in the transformation of resources to outcomes, and much of the confusion in discussing efficiency arises because commentators are discussing different parts of that process. To illustrate, Figure 1.1 represents a typical (but simplified) process associated with the treatment of hospital patients. The overarching concern is with cost–effectiveness, which summarizes the transformation of costs (on the left-hand side) into valued health outcomes (the right-hand side). However, the data demands of a full system cost–effectiveness analysis are often prohibitive, and the results of such endeavours may in any case not provide policymakers with relevant information on the causes of inefficiency, or where to make improvements. To take remedial action, decision-makers require more detailed diagnostic indicators of just part of the transformation process.

Figure 1.1. The production process in hospital care. Note: QALY = quality-adjusted life year.

Inefficiency might occur at any stage of this transformation process. Take first the transformation of money into physical inputs. The principal question (given the mix of chosen inputs) is whether those inputs are purchased at minimum cost. For example, is the organization using branded rather than generic drugs, or paying wage rates in excess of local market rates? A metric such as the average hourly wage (adjusted for skill mix) might shed light on such issues. Note that if no adjustment is made for skill mix, the index may also capture information about the AE of input choices: is the right mix of doctors, other professionals and administrators being deployed? So, in many circumstances it may be helpful to prepare such indicators with and without adjustment.
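The suggestion above – prepare the average-wage indicator with and without a skill-mix adjustment – can be sketched as follows. The staff groups, hours, wage rates and skill weights are hypothetical:

```python
# Hypothetical sketch: average hourly wage with and without a
# skill-mix adjustment. All staff figures are illustrative.

def average_wage(staff, adjust_for_skill_mix=False):
    """Unadjusted: total wage bill / total hours worked.
    Adjusted: each group's wage is divided by a national relative wage
    weight for that skill group, so remaining variation reflects the
    prices paid for labour rather than the mix of skills employed."""
    total_hours = sum(s["hours"] for s in staff)
    if adjust_for_skill_mix:
        bill = sum(s["hours"] * s["wage"] / s["skill_weight"] for s in staff)
    else:
        bill = sum(s["hours"] * s["wage"] for s in staff)
    return bill / total_hours

staff = [
    {"group": "physician", "hours": 100, "wage": 60.0, "skill_weight": 3.0},
    {"group": "nurse",     "hours": 300, "wage": 20.0, "skill_weight": 1.0},
]
print(average_wage(staff))                            # → 30.0 (unadjusted)
print(average_wage(staff, adjust_for_skill_mix=True)) # → 20.0 (adjusted)
```

The gap between the two figures illustrates the point in the text: the unadjusted index mixes information about prices paid with information about the chosen skill mix, so comparing both versions helps separate TE from AE questions.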

The production process now moves to the creation of activities produced from those physical inputs, such as diagnostic tests or surgical procedures. Possible sources of waste here include the use of highly skilled (and therefore costly) workers to produce activities that could be undertaken by less specialized workers, or using excessive hours of labour or other physical inputs in the creation of a particular activity. We cite just one among countless such possible indicators – the number of tests undertaken by a histologist per month (see Figure 1.1). Note the manifest incompleteness of such an indicator, which ignores both the other outputs of the specialist and the other inputs to the testing function. However, the metric may in some circumstances prove useful in supporting broader efficiency metrics.

Next, physical outputs are created by aggregating activities for a particular service user. In a hospital setting, this usually refers to single episodes of patient care, an aggregation of many actions such as tests, procedures, nursing care and physician consultations. There is great scope for waste in this process, for example, in the form of duplicate or unnecessary diagnostic tests, use of branded rather than generic drugs, or unnecessarily long length of stay. Much depends on how the internal processes of the hospital are organized so as to maximize outputs using the given inputs. The well-known metric of length of stay, which indicates the number of bed days expended per case, falls into this category. (Of course, this will usually be adjusted for case mix complexity.)
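The case-mix adjustment mentioned above can be sketched as a ratio of observed to expected bed-days, where the expectation reflects the hospital’s own mix of cases. The DRG labels, observed stays and national expected stays are hypothetical:

```python
# Hypothetical sketch of case-mix-adjusted length of stay (LOS):
# the ratio of observed bed-days to the bed-days expected given the
# hospital's mix of cases (ratio < 1 means shorter stays than expected).

def adjusted_los_ratio(cases, expected_los):
    """cases: list of (drg, observed_days) pairs for one hospital;
    expected_los: national mean stay per DRG.
    Returns observed bed-days / expected bed-days."""
    observed = sum(days for _, days in cases)
    expected = sum(expected_los[drg] for drg, _ in cases)
    return observed / expected

cases    = [("DRG_hip", 4), ("DRG_hip", 6), ("DRG_appendix", 2)]
national = {"DRG_hip": 5.0, "DRG_appendix": 3.0}

# 12 observed bed-days against 13 expected: slightly shorter stays
# than the case mix would predict.
print(adjusted_los_ratio(cases, national))
```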

The final stage of the health system production process is the quality of the outputs produced. Even when they employ the same physical inputs, activities or physical outputs, there is great scope for variation in effectiveness among providers. The notion of quality in health care has a number of connotations, including the clinical outcomes achieved (usually measured in terms of the gain in the length and quality of life) and the patient experience (a multidimensional concept). So, for example, even though two hospitals produce identical numbers of hip replacements, because of variations in clinical practice and competence, the value they confer on patients (in the form of length and quality of life, and patient experience) can vary considerably. Quality-adjusted output is usually referred to as the outcome of care in the literature. Quality of care has become a central concern of policymakers, and its measurement, while contentious, is usually essential if a comprehensive picture of efficiency is to be secured.

Note that the unit costs metric usually links costs to physical outputs. The numerous partial efficiency indicators that have been developed seek to shed some light on the reasons for variations in unit costs. Each metric gives an indication of the TE of part of the production process. Some, such as the labour productivity or length of stay examples, are based on only partial measures of inputs or outputs. Some are capable of adjustment for external influences on attainment (such as case mix complexity), others are not. None addresses the production process in its entirety, that is, the cost–effectiveness with which costly inputs are converted into valued outputs.

Furthermore, this stylized example looks only at the hospital sector, without reference to other aspects of the health system. It therefore focuses mainly on hospital TE, making no judgement on AE issues, such as whether patients might have been treated more cost-effectively in different settings (for example, primary care or nursing homes). And by focusing on the curative sector, it can shed no light on the success or otherwise of the health system’s efforts to prevent or delay the onset of disease. A further aspect of whole system performance that is ignored is the impact of hospital performance on other sectors within the health system. For example, it may be the case that apparently high levels of efficiency in (say) average length of stay are being secured at the expense of heavy workloads for rehabilitative and primary care services, which may or may not be efficient from a whole system perspective.

1.5. An analytical framework for thinking about efficiency indicators

Figure 1.2 summarizes the principles underlying the simple viewpoint of efficiency referred to earlier, namely that it represents the ratio of the inputs an organization consumes to the valued outputs it produces. The entity consumes a series of physical resources, referred to as inputs, often measured in terms of total costs. The organization then transforms those inputs into a series of valued outputs. Although measuring the aggregate value of inputs in terms of total costs is relatively uncontroversial, the valuation of aggregate outputs in the health sector depends on how much importance we place on different health system outputs, such as health improvement and quality of life, and these valuations are highly contested. Nevertheless, if we can agree on a measure of aggregate valued outputs, then we can calculate a summary measure of efficiency as the ratio of valued outputs to inputs – what is often referred to as cost–effectiveness, or how well the organization’s costs are converted into valued benefits.

Figure 1.2. The naive view of efficiency.

As discussed in the preceding section, any specific indicator of efficiency may seek to aggregate all inputs into a single measure of costs, or it may consider only a partial measure of inputs. For example, labour productivity measures such as patient consultations per full-time equivalent (FTE) physician per month ignore the many other inputs into the consultation, and the many outputs other than patient consultations produced by the physician. In effect, such partial measures create efficiency ratios using only a subset of the inputs and outputs represented by the arrows in Figure 1.2. Here the output measure is partial in several senses: a physician may undertake many other activities; there are many other inputs into the patient’s care; and there is no information on the health gain achieved by the consultation. In short, the indicator shows only a fragment of the complete transformation of resources into the desired outcomes (improved health).
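The partial nature of such a measure can be illustrated with a small sketch (all figures hypothetical): the ratio uses only one input (physician FTEs) and one output (consultations), so two practices with identical scores may differ greatly in the inputs and outputs the ratio cannot see:

```python
# Hypothetical sketch of a partial labour-productivity ratio:
# consultations per full-time-equivalent (FTE) physician per month.
# It ignores other inputs (nursing, premises) and other outputs
# (home visits, training), so it captures only a fragment of efficiency.

def consultations_per_fte(consultations, physician_ftes):
    return consultations / physician_ftes

practice_a = {"consultations": 1_200, "physician_ftes": 4.0,
              "nurse_ftes": 6.0}   # heavy nursing input, invisible to the ratio
practice_b = {"consultations": 1_200, "physician_ftes": 4.0,
              "nurse_ftes": 1.0}

# Both practices score 300.0, yet A consumes far more nursing input,
# so the identical ratios say nothing about their relative efficiency.
print(consultations_per_fte(practice_a["consultations"], practice_a["physician_ftes"]))
print(consultations_per_fte(practice_b["consultations"], practice_b["physician_ftes"]))
```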

Numerous other issues arise when seeking to use the concept set out in Figure 1.2 to develop operational models of organizational efficiency in health care, reflecting the complexity of the health care production process. The production of the majority of health care outputs rarely conforms to a production-line type technology, in which a set of clearly identifiable inputs is used to produce a standard type of output. Instead, the majority of health care is tailor-made to the specific needs of an individual patient, with consequent variations in clinical needs, social circumstances and personal preferences. This means that there is often considerable variation among patients in how inputs are consumed and outputs or outcomes are produced. For example, contributions to the care process may be made by multiple organizations and caregivers, a package of care may be delivered over an extended period of time and in different settings, and the responsibilities for delivery may vary from place to place and over time.

In the light of these complexities, the objective of this section is to offer a framework for thinking more clearly about what a specific efficiency indicator tells us, and for identifying the respects in which the indicator may be informative, misleading or partial. Five aspects of any efficiency indicator are assessed in turn:

  • the entity to be assessed;
  • the outputs (or outcomes) under consideration;
  • the inputs under consideration;
  • the external influences on attainment;
  • the links with the rest of the health system.

1.5.1. Identifying entities: what to evaluate?

Where then should an analyst begin? An assessment of efficiency first depends crucially on establishing the boundaries of the entity under scrutiny. At the finest micro level of analysis, an entity could be considered to be a single treatment, where the goal is to assess its cost relative to its expected benefit. At the other extreme, the macro level entity could be considered as the entire health system, defined by the WHO as “all the activities whose primary purpose is to promote, restore or maintain health” (WHO, 2000: p.5).

Most often, however, efficiency measurement takes place at some intermediate or meso level, where the actions of individuals or groups of practitioners, teams, hospitals or other organizations within the health system are assessed. Whatever the chosen level, as a general principle it is important that any analysis reflects an entity for which clear accountability can be determined, whether it is the whole health system, a health services organization or an individual physician. Only then can the relevant agent, whether it is the government, management board or physician, be held to account for the level of performance revealed by the analysis.

Almost all efficiency analysis relies on comparisons, so it is important to ensure that the entities being compared are genuinely similar. A great deal of efficiency analysis is concerned with securing such comparability. If organizational entities are operating in different circumstances, perhaps because the population cared for or the patients being treated differ markedly, some sort of adjustment will be needed to ensure like is being compared with like. We consider this in further detail when discussing external influences on attainment.

More generally, almost all organizations and practitioners operate within profound operational constraints, created by the legal, professional and financial environment within which they must operate. In assigning proper accountability for efficiency shortcomings, it is important to identify the real source of the weakness, which may lie beyond the control of the immediate entity under scrutiny. For example, a community nurse practising in a remote rural area may necessarily appear less efficient when assessed using a metric such as patient encounters per month. However, local geography may preclude any increase, and the nurse may be performing as well as can be expected within the constrained circumstances.

When choosing the entity to evaluate, there is often a difficult trade-off to be made between scrutiny of the detailed local performance of the system, and scrutiny of broader system-wide performance. In general terms, the performance of individual clinicians and clinical teams may be highly dependent on the inputs from other parts of the system (for example the performance of the emergency department in supporting the work of a maternity unit). Furthermore, determining the resources allocated to local teams can be challenging from an accountancy perspective. On the other hand, moving the analysis to a more aggregate level, while obviating the need to identify in detail who undertakes what activity, can make it difficult to identify what is causing apparently inefficient care.

1.5.2. What are the outputs under consideration?

In the context of efficiency analysis in the health sector, two fundamental issues need to be considered. How should the outputs of the health care sector be defined? And what value should be attached to those outputs? The consensus is that, in principle, health care outputs should be defined in terms of the health gains produced. However, organizations rarely collect relevant routine information about health gains, and in any case the construct of health gain has proved challenging to make operational. In most circumstances it is rarely possible to observe a baseline, or counterfactual – the health status that would have been secured in the absence of an intervention. Furthermore, the heterogeneity of service users, the multidimensional nature of health, and the intrinsic measurement difficulties add to the complexity.

Recent progress in the use of patient-reported outcome measures (PROMs) offers some prospect of making more secure comparisons, at least of providers delivering a specific treatment (Smith & Street, 2013), and a number of well-established measurement instruments have been developed that could be used to collect before/after measures of treatment effects, such as the EuroQol five dimensions (EQ-5D) questionnaire and Short Form-36 (SF-36) (EuroQol Group, 1990; Ware & Sherbourne, 1992). Although many unresolved issues surrounding the precise specification and analysis of such instruments remain, their use should be considered whenever there are likely to be material differences in the clinical quality of different organizations.

In practice, however, analysts are often constrained to examining efficiency on the basis of measures of activities, for example, in the form of patients treated, operations undertaken or outpatients seen. Such measures are manifestly inadequate, as they fail to capture variations in the effectiveness (or quality) of the health care delivered. Yet there is often in practice no alternative to using such incomplete measures of activity in lieu of health care outcomes.

Measuring activities can also address a fundamental difficulty of outcome measurement – identifying how much of the variation in outcomes is directly attributable to the actions of the health care organization. For example, mortality after a surgical procedure is likely to be influenced by numerous factors beyond the control of the provider, or even the health system. In some circumstances such considerations can be accommodated by careful use of risk-adjustment methods. However, there is often no analytically satisfactory way of adjusting for environmental influences on outcomes, in which case analysing instead the activities of care may offer a more meaningful insight into organizational performance.

1.5.3. What are the inputs under consideration?

The input side of efficiency metrics is often considered less problematic than the output side. Physical inputs can often be measured more accurately than outputs, or can be summarized in the form of a measure of costs. However, even the specification of inputs can give rise to serious conceptual and practical difficulties.

A fundamental decision that must be taken is the level of disaggregation of inputs to be specified. At one extreme, a single measure of aggregate inputs (in the form of total costs) might be used. The input side of the efficiency ratio then effectively becomes costs. This approach assumes that the organizations under scrutiny are free to deploy inputs efficiently, taking account of relative prices. In practice, some aspects of the input mix are often beyond the control of the organization, at least in the short-term. For example, the stock of capital can usually be changed only in the longer-term. In these circumstances, it may be important to disaggregate to some extent the inputs to capture the different input mixes that organizations have inherited.

Labour inputs can usually be measured with some degree of accuracy, often disaggregated by skill level. An important question is then how far to aggregate these inputs before undertaking an efficiency analysis. Unless there is a specific interest in the relationship between efficiency and the mix of labour inputs employed, it may be appropriate to aggregate them into a single measure of labour input, weighting each labour type by its relative wage. Where the labour mix is of interest, metrics using measures of labour input disaggregated by skill type may be valuable, and may yield useful policy insights into the gains to be secured from (say) substituting some types of labour for others.
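The wage-weighted aggregation described above can be sketched in a few lines. The staff groups, full-time-equivalent counts and wages below are entirely hypothetical; the point is only to show how heterogeneous labour inputs collapse into a single number, here expressed in units of the lowest-paid group.

```python
# Hypothetical full-time-equivalent counts by skill type for one provider
fte = {"physician": 40, "nurse": 120, "support": 60}

# Hypothetical average annual wages, used as aggregation weights
wage = {"physician": 90000, "nurse": 35000, "support": 22000}

# Wage-weighted aggregate labour input, normalized so that one unit
# equals one FTE of the lowest-paid group (easier to interpret)
base_wage = min(wage.values())
aggregate_labour = sum(fte[g] * wage[g] / base_wage for g in fte)
```

With these figures the provider's aggregate labour input is roughly 414.5 support-staff-equivalents; changing the relative wages changes the weight each skill group carries in the total.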

Although labour inputs can be measured readily at an organizational level, problems may arise if the interest is in examining the efficiency of subunits within organizations, such as (say) operating theatres within hospitals. It becomes increasingly difficult to attribute labour inputs as the unit of observation within the hospital becomes smaller (department, team, surgeon and patient). Staff often work across a number of subunits, but information systems cannot usually track their input across these units with any accuracy. Particular care should be exercised when developing metrics that rely heavily on input measures of self-reported allocations of professional time.

In general, capital is a key input whose misuse can be a major source of inefficiency. However, incorporating measures of capital into the efficiency analysis is challenging, partly because of the difficulty of measuring capital stock and partly because of problems in attributing its use to any particular activity or time period. Measures of capital are often very rudimentary and even misleading. For example, accounting measures of the depreciation of physical stock usually offer little meaningful indication of capital consumed. Indeed, in practice, analysts may have to resort to very crude proxies for physical capital, such as the number of hospital beds or floor space. Furthermore, non-physical capital, such as investment in health promotion, is an important input that can be difficult to attribute directly to health outcomes.

As with all modelling, efficiency metrics should be developed according to the intentions of the analysis. If the interest is in the narrow, short-term use of existing resources, then it may be relevant to disaggregate inputs to reflect the resources currently at the disposal of management. If a longer-term, less constrained analysis is required, then a single measure of total costs may be a perfectly adequate indicator of the entity’s physical inputs.

1.5.4. What are the external influences on performance?

In many contexts, a separate class of factors affects organizational capacity, which we classify as the external or environmental determinants of performance. These are influences on the organization beyond its control that reflect the external environment within which it must operate. In particular, many of the outcomes secured by health care organizations are highly dependent on the characteristics of the population group they serve. For example:

  • population mortality rates are heavily dependent on the demographic structure of the population under consideration and the broader social determinants of health;
  • the intensity of resource use is usually highly contingent on the severity of the patient's disease;
  • hospital performance may be related to how primary care is organized in the local community;
  • the costs to emergency ambulance services of satisfying service standards (such as speed of attendance) may depend on local geography and settlement patterns.

There is often considerable debate as to which environmental factors should be considered controllable. This will be a key issue for any scrutiny of efficiency, and for holding relevant management to account. The choice of whether to adjust for such exogenous factors is likely to depend heavily on the degree of autonomy enjoyed by management, and on whether the purpose of the analysis is short-term and tactical, or longer-term and strategic. In the short-term, almost all input factors and external constraints will be fixed. In the longer-term, depending on the level of autonomy, many may be changeable. In many circumstances it will be appropriate to consider efficiency metrics both with and without adjustment for external factors.

Broadly speaking, there are three ways in which environmental factors can be taken into account in efficiency analyses:

  • restrict comparison only to entities operating within a similarly constrained environment;
  • model the constraints explicitly, using statistical methods such as regression analysis;
  • undertake risk adjustment to adjust the outcomes achieved to reflect the external constraints.

The first approach to accommodating environmental influences is to select only entities in similar circumstances, so that the comparison is like-with-like. This raises the question of what criteria should be used to select the similar entities. These might simply be readily observable characteristics, such as urban or rural location. Alternatively, statistical techniques such as cluster analysis might be used to identify similar organizations according to a larger number of observable characteristics (Everitt et al., 2001).
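A minimal like-with-like comparison might look as follows. The providers, settings and costs per case are hypothetical, and a single observable characteristic (urban/rural) stands in for whatever grouping a cluster analysis would produce; each provider is benchmarked only against the lowest-cost peer in its own group.

```python
# Hypothetical providers with an observable environmental characteristic
providers = [
    {"name": "A", "setting": "urban", "cost_per_case": 1200},
    {"name": "B", "setting": "urban", "cost_per_case": 1350},
    {"name": "C", "setting": "rural", "cost_per_case": 1600},
    {"name": "D", "setting": "rural", "cost_per_case": 1450},
]

# Find the lowest-cost provider within each environment
best_in_group = {}
for p in providers:
    g = p["setting"]
    if g not in best_in_group or p["cost_per_case"] < best_in_group[g]["cost_per_case"]:
        best_in_group[g] = p

# Relative efficiency: each provider's cost compared only with its own
# group's benchmark (1.0 = best in group, lower values = higher cost)
relative = {
    p["name"]: best_in_group[p["setting"]]["cost_per_case"] / p["cost_per_case"]
    for p in providers
}
```

Note that the rural benchmark (1450) is higher than the urban one (1200): restricting comparison to peers avoids penalizing rural providers for costs driven by their environment, at the price of a smaller comparison set.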

A shortcoming of comparing only similar entities is that it reduces the sample size available for comparison. A second approach is therefore to incorporate environmental factors directly into a regression model of organizational efficiency. The regression makes allowance for the uncontrollable factors at an organizational level, and the residual in the model (what cannot be explained) becomes the adjusted measure of efficiency. While leading to a more general specification of the efficiency model than the clustering approach, the use of such techniques gives rise to modelling challenges that are discussed in detail by Jacobs, Smith & Street (2006).
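The residual-as-efficiency idea can be illustrated with a one-variable ordinary least squares fit, computed by hand. The deprivation index and unit costs below are invented for the sketch; in practice the specification would be far richer, as Jacobs, Smith & Street (2006) discuss.

```python
# Hypothetical environmental factor (e.g. a deprivation index) and
# observed unit costs for four providers
deprivation = [0.2, 0.4, 0.6, 0.8]
unit_cost = [1000, 1100, 1250, 1300]

# One-variable OLS: cost = alpha + beta * deprivation
n = len(deprivation)
mx = sum(deprivation) / n
my = sum(unit_cost) / n
beta = sum((x - mx) * (y - my) for x, y in zip(deprivation, unit_cost)) \
    / sum((x - mx) ** 2 for x in deprivation)
alpha = my - beta * mx

# Residual: cost not explained by the environment. A positive residual
# flags higher-than-expected cost, i.e. possible inefficiency.
residuals = [y - (alpha + beta * x) for x, y in zip(deprivation, unit_cost)]
```

Here the third provider has a residual of +35: its cost exceeds what its deprivation level predicts, so it is the candidate for scrutiny even though its raw cost is not the highest.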

The final method to control for variation in environmental circumstances is the family of techniques known as risk adjustment. These methods adjust organizational outputs for differences in circumstances before they are used in any efficiency indicator, and are – where feasible – often the most sensible approach to deal with environmental factors. In particular, they permit the analyst to adjust each output for only those factors that apply specifically to that output, rather than use environmental factors as a general adjustment for all outputs.

Well-understood forms of risk adjustment include the various types of standardized mortality rate routinely deployed in studies of population outcomes. These adjust observed mortality rates for the demographic structure of the population, thereby seeking to account for the higher risk of mortality among older people. Likewise, surgical lengths of stay might be adjusted for risk factors such as the age, comorbidities and smoking status of the patients treated. Methods of risk adjustment, often based on multivariate regression, have been developed to a high level of refinement (Iezzoni, 2003). However, risk adjustment usually has demanding data requirements, generally in the form of information on the circumstances of individual patients.
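A worked sketch of indirect standardization, the logic behind a standardized mortality ratio (SMR), is shown below with invented figures: reference age-specific rates are applied to the local population to obtain expected deaths, and observed deaths are compared with that expectation.

```python
# Reference (e.g. national) mortality rates per person, by age band
ref_rate = {"<65": 0.002, "65-79": 0.02, "80+": 0.08}

# Hypothetical local population and observed deaths
population = {"<65": 50000, "65-79": 8000, "80+": 2000}
observed_deaths = 400

# Expected deaths if the local population experienced reference rates
expected_deaths = sum(population[a] * ref_rate[a] for a in population)

# SMR > 1 indicates worse-than-expected mortality given the age structure
smr = observed_deaths / expected_deaths
```

With these figures expected deaths are 420, so the SMR is about 0.95: crude mortality looks high in an elderly population, but after adjusting for age structure the area performs slightly better than the reference.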

1.5.5. Links with the rest of the health system

No outputs from a health service practitioner or organization can be considered in isolation from their impact on the rest of the health system in which they operate. For example:

  • the effectiveness of preventive services will affect the nature of demand for curative services;
  • the performance of hospital support services, such as diagnostic departments, will affect the efficiency of functional areas such as surgical services;
  • the actions of hospitals, for example, in creating care plans for discharged patients, may have profound implications for primary care services;
  • the performance of rehabilitative services may have important implications for future hospital readmissions.

Likewise, cost-effective treatment is often secured only if there is effective coordination between discrete organizations. The need for such coordination is becoming increasingly important as the number of people with complex comorbidities and care needs rises. The frequent calls for better integration of patient care reflect the concern that such coordination often fails to meet expectations. That failure may in itself be an important cause of inefficiency.

Scrutiny of a health system entity in isolation may ignore these important implications of the entity’s impact on whole system efficiency. Thus, for example, if a primary care practice is held to account only by metrics of costs per patient, it might secure apparently good levels of efficiency by inappropriately shifting certain costs (such as emergency cover) onto other agencies, such as hospitals or ambulance services. The chosen metric creates perverse incentives for the practice, and may fail to capture its serious negative impact on other parts of the health system. That consequence should in principle be accounted for in any assessment of that practice’s efficiency. In principle, it should be feasible to accommodate such negative effects – which economists conceive as externalities – within the analytic framework. However, in practice it is rarely done, with potentially important consequences for bias in efficiency assessment, perverse incentives and misdirected managerial responses.

Failures of integration of care for patients with complex, long-term needs pose an especially serious barrier to good efficiency assessment. Indeed, the very act of measuring the efficiency of separate entities may frustrate efforts to encourage cooperation between different parts of the health system unless successes of care integration are properly recognized in performance assessment. Organizations that are held to account with partial measures of efficiency that ignore coordination activities may be reluctant to divert efforts towards integration of future patient care. Linking patient data across multiple care settings (see Chapter 3) is an important prerequisite for beginning to address this issue.

1.6. Concluding comments

Two broad types of inefficiency have been discussed – allocative and technical inefficiency. Allocative inefficiency arises when the wrong mix of services is provided, given societal preferences, or when a suboptimal mix of inputs is used. Allocative inefficiency can occur at the level of the health system, the provider organization or the individual practitioner, and may arise from inadequate priority-setting, faulty payment mechanisms, lack of clinical guidelines, incomplete performance reporting or simply inadequate governance of the system. Technical inefficiency arises most notably at the provider and practitioner level, and may result from inappropriate incentives, weak or constrained management and inadequate information. Either type of inefficiency may have profoundly adverse consequences for payers, whose money is wasted, and for patients, who either receive poor care or are denied treatment because of the associated loss of resources.

This suggests that the simple notion of efficiency as the conversion of inputs into valued outputs disguises a series of thorny conceptual and methodological problems. Setting aside the obvious measurement difficulties, the structural problem can be illustrated as in Figure 1.3, which is a more realistic development of Figure 1.2. Naive efficiency analysis involves examining the ratio of health system outputs to health system inputs (the shaded boxes). Yet system inputs should also incorporate previous investments by the organization (which we call endowments) and external constraints (such as other organizations and population characteristics). System outputs should also include endowments for the future management of the organization, joint outputs and outputs not directly related to health, such as enhanced workforce productivity.

Figure 1.3. A more complete model of efficiency.

Figure 1.3

A more complete model of efficiency. Source: Smith (2009).

It will never be feasible to accommodate all the issues summarized in Figure 1.3 into a single efficiency metric. Rather, the analyst should be aware of which factors are likely to be material for the efficiency metric under consideration, and seek to offer guidance on the implications of serious omissions and weaknesses. The framework we have introduced seeks to deconstruct efficiency metrics into a manageable number of issues for analytical scrutiny. It is immediately relevant mainly for analysis of TE, although its discussion of external circumstances and broader impact on the health system raises issues relating to AE.

The pursuit of health system efficiency is a central concern in all health systems, made strikingly more urgent in many countries by adverse economic circumstances and pressure on public finances. However, measurement methodology remains highly contested and at a developmental stage. Notwithstanding their complexity, the economic concepts of AE and TE offer the only currently available unifying framework for assessing the diverse objectives of health systems within an efficiency framework. The numerous potential metrics of efficiency all have limitations. However, it is almost certainly preferable to steer the health system with the imperfect measures we have available than to fly blind. In our view, efficiency analysis should be routinely embedded in all relevant functions of service delivery and policymaking. However, it is vital that decisions are taken in full recognition of the strengths and weaknesses of indicators, and that the search for improved metrics and better resources for comparison is pursued with vigour. The rest of this book offers insights into some of the most promising prospects for future improvement.


  • Briggs A, Gray A. Using cost effectiveness information. BMJ. 2000;320(7229):246.
  • EuroQol Group. EuroQol: a new facility for the measurement of health-related quality of life. Health Policy. 1990;16(3):199–208.
  • Everitt B, et al. Cluster analysis. London: Arnold; 2001.
  • Fetter R. Diagnosis related groups: understanding hospital performance. Interfaces. 1991;21(1):6–26.
  • Iezzoni LI. Risk adjustment for measuring healthcare outcomes. 3rd edn. Chicago: Health Administration Press; 2003.
  • Jacobs R, Smith P, Street A. Measuring efficiency in health care: analytic techniques and health policy. Cambridge: Cambridge University Press; 2006.
  • Smith PC. Measuring value for money in health care: concepts and tools. 2009. http://www.uk/sites/health/files/MeasuringValueForMoneyInHealthcareConceptsAndTools.pdf accessed 3 August 2016.
  • Smith PC, Street AD. On the uses of routine patient-reported health outcome data. Health Economics. 2013;22(2):119–131.
  • Ware JE Jr, Sherbourne CD. The MOS 36-item short-form health survey (SF-36). I. Conceptual framework and item selection. Medical Care. 1992;30(6):473–483.
  • WHO. The World Health Report 2000. Health systems: improving performance. Geneva: WHO; 2000. http://www.pdf?ua=1 accessed 3 August 2016.
  • Williams A. Economics of coronary artery bypass grafting. British Medical Journal. 1985;291(6491):326–329.
© World Health Organization 2016 (acting as the host organization for, and secretariat of, the European Observatory on Health Systems and Policies)
Bookshelf ID: NBK436891

