Healthc Policy. Sep 2005; 1(1): 55–71.
PMCID: PMC2585236

Using Research to Inform Healthcare Managers’ and Policy Makers’ Questions: From Summative to Interpretive Synthesis

Abstract

This paper highlights the importance of research synthesis for healthcare managers’ and policy makers’ questions and the difficulty of generalizing from the methods used to answer clinicians’ questions. Social science research has a central role in such syntheses because of the context-dependent nature of managers’ and policy makers’ questions, which generally encompass a far broader spectrum than the circumscribed “what works?” questions of clinically oriented reviews. A major challenge is in moving from purely researcher-driven processes, which summarize research, to co-production processes, which allow managers and policy makers to join with researchers in interpreting implications for the healthcare system. Additional challenges lie in clearly defining the function, role and objective of the synthesis; handling flexibility around finalizing the question; harnessing a manageable scope of literature to review; adopting rules to select the final sample of research; creating useful messages; and developing a format that is responsive to the needs and preferences of the audience. One inevitable conclusion is that research synthesis for managers and policy makers will, compared to that for clinicians, leave much discretion in the hands of the synthesiser(s). This raises the interesting issue of how to engender, in the absence of “methodological checklists,” trust and credibility in both the people doing the synthesis and the processes they use.

The remarkable success of the Cochrane Collaboration as a tool to define clinical effectiveness has encouraged others in the healthcare system to pay attention to the importance of evidence-based decision-making (Moynihan 2004; Walshe and Rundall 2001; Klein 2000; Black 2001). With this success has come increased interest from, and pressure on, healthcare managers and policy makers to have available rigorous, useful syntheses of research relevant to their work. Research funding agencies are now seeing synthesis as part of their remit (Canadian Health Services Research Foundation [CHSRF] 2005; Canadian Institutes of Health Research [CIHR] 2005) and are even leading the charge in exploring new ways of doing synthesis for healthcare managers and policy makers (CHSRF and NHS Service Delivery 2005). Indeed, this growth of interest is not restricted just to healthcare (Davies et al. 2000); those in the management community more generally are “exploring ways in which evidence-informed management reviews might be achieved [with] the process of systematic review used in the medical sciences” (Tranfield et al. 2003).

Unfortunately, the questions, context and content of healthcare management and policy are generally broader and more diffuse than those of the clinical world. Studies on program or intervention effectiveness – the main focus of the Cochrane or Campbell Collaborations, and most other programs of systematic review – are only one part of the larger landscape of potential research support for managers and policy makers (Walshe and Rundall 2001; Klein 2000; Black 2001; Tranfield et al. 2003; Tunis et al. 2003). This paper evaluates the nature of the questions asked of research by managers and policy makers, outlines why these questions are just as important to address with synthesized research as those of the clinician, and highlights some of the methodological challenges in doing such synthesis. The goal is to alert the decision-making community to this issue and add to the emerging debate in this area among researchers.

Managers and Policy Makers Don’t Ask the Same Questions As Clinicians

The main functions of managers and policy makers – understanding their local context and values, creating an organizational culture, building consensus on actions – are not functions routinely incorporated into the world of clinical research. These are the concerns of social scientists. For example, sociologists evaluate the role of institutions in determining behaviours, anthropologists examine the influence of norms and cultural determinants of action, psychologists outline cognitive constraints and heuristics, organizational theorists design workplaces, and political scientists predict the interplay and outcomes of the complex web of interests and ideologies (Fulop et al. 2001; Lemieux-Charles et al. 2004).

In a recent exercise, the author tested the nature of managers’ and policy makers’ questions empirically by asking them to identify their priority issues and define where a synthesis of research might help (see Table 1) (Dault et al. 2004). Only some of their questions were of the circumscribed “what works?” variety that dominates most systematic review work in the clinical area (Cook et al. 1997; Egger et al. 2001). Many questions concerned the context and overall organization of service delivery – a finding that replicates prior work on intensive care research priorities in England (Vella et al. 2000) and more general questions of the UK civil service (Davies 2005).

TABLE 1.
Sample of managers’ and policy makers’ questions for which a synthesis of research was deemed a priority

In addition to the question “what works to reduce problem x?” managers and policy makers appear to have at least two other types of questions:

  1. What do we know about problem x? This is the general interest question of the decision-maker. Is it a problem? If so, what is causing it, how extensive is it, who is it affecting and what are some feasible options to address it?
  2. What will be/now are the issues around doing action y? This is the context question, sometimes asked before embarking on action plans, sometimes after, to aid in finding remedies to the unforeseen. Who opposes, who supports and why? What else is affected, and how (side effects)? What else should we do in concert with this action?

The Value-Added Role of Synthesis for Managers’ and Policy Makers’ Questions

If we believe that research evidence on these questions makes for better decisions, and we are aware that research is more reliable, useful and usable when its numerous studies are synthesized into coherent messages, then why restrict this benefit to the question “what works?” which is dominant in the clinical world? After all, clinical action does not occur in isolation; neither does it operate inside a maintenance-free organizational machine. Knowing how to set policies for, and how best to manage the context around, service delivery is as important to patient outcomes as is the front-line application of effective clinical interventions. Policy and management also save lives (or cause deaths), albeit in a less visible and direct fashion than clinical care.

For example, Devereaux and colleagues (2002) have estimated that the US government policy of encouraging for-profit rather than not-for-profit ownership of haemodialysis centres creates 1,200 to 4,000 additional patient deaths each year. West (2002) has shown that in 61 English hospitals, performance one standard deviation above the mean in human resource management, as measured by the routine conduct of employee performance reviews, is associated with 12.3% fewer deaths after hip fracture. In the United States, a management initiative to formalize teamwork training among hospital emergency room staff reduced clinical error rates from 30.9% to 4.4% in less than 12 months (Morey et al. 2002).

Such contextual factors – attitudes about profit and privatization, human resource policies, the environment for teamwork – are increasingly important in explaining the success or failure of clinical interventions delivered by care providers and their organizations. Ironically, the importance of good management and policy for good care emerges from studies of guideline implementation efforts that “failed.” These efforts to put clinical research synthesis into practice – in the form of practice guidelines – focused too narrowly on the clinicians’ world and not broadly enough on the management and policy contexts influencing it.

For example, a recent study of general practitioners (GPs) failed to find an effect of a guideline implementation strategy because the control group improved compliance as much as the experimental group. The most parsimonious explanation that the authors could find for this clinical trial “failure,” supported by in-depth qualitative interviews with participating general practitioners, was a widespread response of all GPs to increase their accountability because of new government policies on clinical governance (Harrison et al. 2003). Policy context, not the specific local intervention strategy, was the dominant factor in explaining practice behaviour and good care.

As stated by the study’s authors, “few studies of guideline implementation have reported either the timing of the interventions and data collection, or raw before and after data...implying an assumption that context is irrelevant” (Harrison et al. 2003: 152–53). Sheldon (2001) has made the same point. This overriding influence of context may go a long way towards explaining why the latest systematic review of clinical behaviour change interventions, now capturing 235 methodologically sound but clinically focused studies, continues to offer no clear advice for managers on how to improve the quality of care. Grimshaw and colleagues (2004: 66) concluded: “This review highlights the fact that despite 30 years of research in this area, we still lack a robust, generalizable evidence base to inform decisions about strategies to promote the introduction of guidelines or other evidence-based measures into practice.”

Synthesis that addresses the broader contextual factors of the managers’ and policy makers’ world therefore appears to be the logical next step in the search for more effective ways to bring research evidence into health system practice. But how well developed are the methods for such synthesis?

Matching Methods to Function, Role and Objective

Methods must be driven by function, role and objective. The dog (function, role and objective) should wag the tail (methods), not vice versa.

Function

First, we should not assume that the methods developed for the function of synthesizing clinical research on “what works?” are applicable to synthesizing social science research on managers’ and policy makers’ broader questions. A clinically focused systematic review of research studies may tell us that on average, across multiple settings and contexts, doing “x” works better than what we are doing now. It may, if accompanied by an economic evaluation, also tell us whether “x” is worth doing. But such reviews rarely indicate how to create the policies and the organizational context to implement them and make them work for a particular setting.

Many years of methods development have gone into syntheses with the function of answering “what works?” questions. The methods dilemma now for health services researchers is to come to some broad agreement on how to do synthesis when the function is to assemble social science knowledge that addresses questions beyond “what works?”

Role

The role of a synthesis is determined largely by the intended audience and the context for its production and use. The three most prevalent roles are:

  • Defining the future research agenda by identifying the current state of knowledge and highlighting the gaps. For example, the Canadian Institutes of Health Research (2005) recently released a call for work that does “a systematic scan of existing evidence in a broad thematic area for the purposes of identifying areas in which sufficient evidence exists to conduct a synthesis or systematic review and where insufficient evidence exists such that primary research is necessary.” On a small scale, this is done by every researcher who includes a literature review to justify a specific project proposed in a grant application. On a larger scale, research funding bodies commission or create for themselves “state of the science” reviews or “scoping papers” to guide future funding programs. In either case, the methods around this role for synthesis are not the concern here, as the primary audience is the researcher or the research funder, and not the manager or the policy maker.
  • Creating a rapid response to a request for the research knowledge pertinent to a specific planned and soon-to-be-made decision. This is closer to the “client–contractor” situation, in which the synthesis is done not just for an identifiable audience, but often for identifiable individuals in the healthcare system with clearly circumscribed needs. “Rapid response” programs and units are emerging to serve this need (NHS Service Delivery and Organization Research and Development Program 2005). The driving force is the user’s context, including the timeline, which may be as short as days or weeks, severely limiting the opportunity for reflective co-production between the client and the contractor.
  • Contributing to an accumulating library or database of research overviews in a defined area for some as yet unspecified future decision. Creating a stockpile of syntheses on potentially relevant topics for an audience of unidentified decision-makers is a worthwhile objective. The Cochrane database operates under this objective, largely for clinical effectiveness issues. Some are now calling for a similar repository for managers’ and policy makers’ issues (Lavis et al. 2005). In this role for synthesis, more time is available for careful planning and undertaking of the task, using comprehensive methods and processes that reflect both the researchers’ and the decision-makers’ perspectives. It is this role for synthesis that is the focus of this paper.

Objective

Finally, some authors have distinguished between two broad objectives for a research synthesis (Noblitt and Hare 1988; Forbes and Griffiths 2002; Dixon-Woods et al. 2005). Dixon-Woods and colleagues express the distinction as between, on the one hand, an integrative or summative objective involving “the quantification and systematic integration of data” and, on the other, an interpretive objective involving “some form of creative process where new constructs are fashioned.” These authors go on to comment that “the choice of the form of synthesis is likely to be crucially related to the form and nature of the research questions being asked” (Dixon-Woods et al. 2005: 46–47).

These two different objectives have clear implications for methods. Summative syntheses are most appropriate where the context in which the conclusions are to be implemented is absent or a minor concern – often the case for the globally created clinical effectiveness syntheses on “what works?” questions. Knowledge of, and the involvement of those knowledgeable about, particular implementation contexts is not a central part of the methods for such work. The entire process of synthesis can readily be undertaken by researchers working on the world literatures, largely in isolation from the system(s) to which their work may have some application. In the parlance of the knowledge translation literature, this form of synthesis is part of the “push” strategy of getting research into practice (Lavis et al. 2004).

This situation contrasts with interpretive syntheses, where the objective is not only to compile and aggregate data, but also to interpret them for application in one or more contexts – precisely the kind of broader objective relevant to the world of the manager or policy maker. Syntheses done under this objective need to bring in more contextual social science research, where the methods for aggregation and application are less well developed, and even to incorporate the documented experiences of those in the system knowledgeable about that context, an area where methods are less well developed still.

In this domain, the interpretive skills of the researcher are severely limited compared to those of the manager or policy maker. Hence, this objective implies the development of “creative process” methods that can combine the empirical study perspective of the researcher with the pragmatic experience perspective of the managers and policy makers themselves. The policy-synthesis program of the Canadian Health Services Research Foundation was constructed under this objective as it “brings together the best available evidence, practical experience of decision-makers and expert knowledge of researchers to provide evidence-based policy advice” (CHSRF 2000). In knowledge translation parlance, this is more like “evidence-informed decision-making” (Tranfield et al. 2003) and closer to the “linkage and exchange” strategy wherein the synthesis is a co-production between researchers and decision-makers (Lomas 2001).

Therefore, just as a clinical trial must define its primary outcome measure to determine the choice of analysis, so too must a synthesis focus on its primary function, role and objective to determine methodological choices. This is particularly important given the nascent state of knowledge on synthesis methodology. We need to accumulate better information on which methods are most appropriate for which circumstances. Obviously, if the role is to produce a rapid response for a specific decision due in a few weeks, the synthesis cannot use the same comprehensive methods as those that would be employed for a planned contribution to an accumulating library with no specific time constraint. Also, as stated, methods that incorporate the managers and policy makers in the process are more central to an interpretive objective than they are to a summative objective for synthesis.

The focus in the rest of this paper is on syntheses with the function of addressing the broad questions of managers and policy makers, the role of contributing to an accumulating library relevant for managers and policy makers and the objective of providing interpretive advice. The task is ambitious. It is not only to emulate for the questions of managers and policy makers what the Cochrane Collaboration and Library has created for clinical effectiveness questions, but also to expand this base to include the key implications of research for healthcare management and policy.

The Methodological Challenges

The current dominant methodology for aggregating research into a synthesized form is that developed under the label “systematic review,” which dates back, in fact, to the early 1980s and work done in psychology (Moynihan 2004; Light and Pillemer 1984). The essence of this approach is to minimize the bias of the reviewer by imposing some specific methodological requirements for explicitness and transparency on the question being posed and the methods used to compile, analyze and report on the included studies. These methods were largely developed as an antidote to the traditional narrative review by a content expert (Oxman et al. 1994), which is “subject to criticism for its lack of transparency” (Dixon-Woods et al. 2005: 47).

More recently, these general requirements have been translated by the Cochrane Collaboration and others into more specific “methodological rules” for synthesizing the literature on “what works?” questions (Cook et al. 1997; Cochrane Collaboration 2004; NHS Centre for Reviews and Dissemination 2001). These requirements are more restrictive than the general expectations of transparency, explicitness and replicability set by the original proponents of systematic review methodology. They have come under increasing scrutiny from those concerned with using synthesis to answer broader questions beyond “what works?” (CHSRF and NHS Service Delivery 2005; Tranfield et al. 2003; Forbes and Griffiths 2002; Dixon-Woods et al. 2005; Mays et al. 2001; Mays et al. 2005; Pawson 2002; Pawson et al. 2005; Britten et al. 2002; Greenhalgh 2004; Greenhalgh et al. 2005). For example, Dixon-Woods et al. (2005: 52) conclude their review of “alternative synthesis methods” with the statement that “there is an urgent need for rigorous methods for synthesizing evidence of diverse types generated by diverse methodologies...to meet the needs of policy makers and practitioners, who need to be able to benefit from the range of evidence available.”

Such rigorous methods for alternative forms of synthesis are being developed by these and other authors – realist synthesis (Pawson 2002; Pawson et al. 2005), meta-ethnography (Noblitt and Hare 1988; Britten et al. 2002) and meta-narrative mapping (Greenhalgh 2004; Greenhalgh et al. 2005) are some of the examples. The development of all these approaches is still in an early, exploratory stage. However, a number of common areas of debate have already emerged that distinguish the task of assembling the evidence base for a variety of management and policy questions, posed within many different contexts, from the traditional systematic review of clinical effectiveness research. If managers and policy makers are to gain full benefit from the research, then issues in at least five interconnected areas of synthesis methodology need to be addressed. The differences from systematic reviews done under the more restrictive rules of a Cochrane-style clinical effectiveness question are highlighted in each of these areas.

The synthesis question(s)

On one side are the synthesis questions that researchers can see how to answer straightforwardly. Unfortunately, these very often involve moving the target to hit the bullet, i.e., creating the questions to fit whatever research is available, rather than vice versa. On the other side are the questions with which managers and policy makers want some help. Unsurprisingly, these are usually framed without regard for the research that is available to answer them. Negotiating the question(s) between these poles is therefore an inevitable element of doing a relevant synthesis with recommendations for feasible action – managers and policy makers know what is being asked for at the counter; researchers know what is available in the stock room. Somewhere between the two lie the ingredients for a reliable and usable product.

Having said that, we have remarkably little information about how such negotiations should be conducted: in what structures, over what timeframe and using which processes? An intriguing solution, adopted by the World Health Organization’s Health Evidence Network (HEN), is to have an ongoing, Web-based call for questions from decision-makers and then a panel or board that selects and finalizes “the best” questions for synthesis, based on criteria that are sensitive to the availability of research (World Health Organization Regional Office for Europe 2004). “Iterative commissioning” (Lilford et al. 1999) and “linkage and exchange” (Lomas 2001) have also been proposed to address this issue. Some evidence is accumulating on the value of such jointly negotiated questions (Denis and Lomas 2003), but much is left to learn.

Neither do we know the consequences of not setting the question in stone, but rather modifying and adapting it as concepts and issues emerge from the literature-gathering process or as the policy context around the issue changes. Yet, many of the newer forms of synthesis have already established that the question does evolve as one moves into the literature and as one clarifies the needs of managers and policy makers in a series of iterative interactions (Greenhalgh et al. 2005). By way of contrast, the checklists of the Cochrane Collaboration (2004) require a clearly specified and unchanging question.

The scope of the information sources

The challenge of defining the scope of the information sources to cover for management-oriented research, compared to clinical effectiveness research, is well put by Walshe and Rundall (2001: 443-44) when they observe that

…overall, the tightly defined, well-organized, highly quantitative and relatively generalizable research base for many clinical professions provides a strong and secure foundation for...the production of guidelines and protocols. In contrast, the loosely defined, methodologically heterogeneous, widely distributed and hard to generalize research base for healthcare management is much more difficult to use in the same way.

The boundaries of the literature are even more amorphous for healthcare policy.

Pragmatism, based on available time, expertise, funds and interest, is therefore inevitable. But what principles should guide this pragmatism? For example, given the importance of practical experience and case studies in elucidating context-dependent implementation challenges in the management or policy worlds, under what circumstances should the extra-academic “grey” literature of unpublished work be included? Is there sometimes a case for survey work or focus groups to capture the tacit knowledge present in the experiences of managers or policy makers who have already tried a particular change? This approach was taken as a supplement to the systematic review on guideline implementation described above (Grimshaw et al. 2004) and is built into some networks that use published evidence as the starting point for discussions of research implementation (Russell et al. 2004).

What is clear is that the scope of information covered by a management or policy-oriented interpretive synthesis will be subject to a series of pragmatic considerations. What is not as clear is identifying these considerations and their relative importance. As Greenhalgh et al. (2005: 420) state, “An interpretive model acknowledges that picking out a series of story threads from a heterogeneous and unbounded mass of literature involves choices that are irrevocably subjective and negotiable.” This stance contrasts with that of the clinical effectiveness reviews, in which the scope is far less subjective and defined by a specific intervention.

On a further pragmatic note, the relative reliance on formal literature search techniques versus key informants and experts as sources for studies and literatures is an open question. The broader and more diffuse the question, the harder it is to capture within a series of search terms for use with Medline or other literature databases. In such cases, it may be more efficient to rely on interrogation of knowledgeable experts in the area, at least as a supplement to more formalized methods of literature identification.
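
To make the contrast concrete, the sketch below pairs a circumscribed clinical question, which compresses naturally into a tight boolean search string, with a diffuse management question that does not. Both query strings are invented for illustration only and are not validated database syntax.

    # Illustration only: both query strings are hypothetical, not tested
    # Medline syntax. A circumscribed "what works?" question compresses
    # into a few precise terms:
    clinical_query = ('"teamwork training" AND "emergency department" '
                      'AND "randomized controlled trial"')

    # A diffuse management question ("how should we organize service
    # delivery?") resists tight formulation; each added synonym widens
    # rather than narrows the net:
    management_query = ('(governance OR restructuring OR "organizational change") '
                        'AND (hospital OR "primary care" OR "health system")')

    print(clinical_query)
    print(management_query)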

The sample

Defining the sample of studies to include within the defined scope is perhaps where the clinically oriented systematic reviews of intervention effectiveness most clearly diverge in philosophy from approaches sensitive to the needs of managers and policy makers. Indeed, there is no sample for a clinical intervention systematic review; only the full population of published and unpublished relevant studies will do. Finding every last research report on the question and being conscientious and comprehensive in constructing the population of studies is central to how the Cochrane Collaboration, for example, minimizes publication or other bias (Cochrane Collaboration 2004).

The task of minimizing bias in the selection of studies is not so easy for the social sciences. As described above, even defining the sampling frame – which literatures to draw upon and what disciplines and methods to include – is fraught with difficulty when the questions move beyond straightforward clinical effectiveness issues of “what works?” Precisely because there is no clear boundary on the sampling frame, a search can yield a potentially infinite number of studies. How, then, does one decide when to stop looking in the defined literatures? When is the sample sufficient to support external validity and generalizability? The usual approach is saturation: searching ceases when no, or only marginal, further value is added to the accumulated concepts, theories or models. Are there other approaches?
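
A minimal sketch of how a saturation rule might be operationalized follows. It assumes a screening workflow in which each reviewed study is reduced to the set of concepts it contributes; the stopping window of consecutive “no new concept” studies is an invented parameter, not an established standard.

    # Hypothetical saturation rule: stop searching once `window` consecutive
    # studies contribute no concept not already in the accumulated set.
    def reached_saturation(studies, window=10):
        """studies: iterable of sets of concept codes, in screening order."""
        seen = set()
        consecutive_no_new = 0
        for concepts in studies:
            new_concepts = concepts - seen
            seen |= concepts
            consecutive_no_new = 0 if new_concepts else consecutive_no_new + 1
            if consecutive_no_new >= window:
                return True  # marginal value of further searching is near zero
        return False

    # Invented example: the last three studies repeat earlier concepts.
    screened = [{"cost", "access"}, {"governance"}, {"cost"}, {"access"}, {"governance"}]
    print(reached_saturation(screened, window=3))  # True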

Still unaddressed is the issue of internal validity for the accumulated studies. What quality or other criteria define their inclusion in the final sample? The checklists for including studies relevant to clinical effectiveness questions circumvent the problem by establishing clear “hierarchies of evidence.” Others have tried to develop such checklists for both quantitative and qualitative studies but, as one commentary points out, “they all suffer from the drawback that they do not spell out in detail how each criterion should be applied: in particular how to discern whether or not a sufficient standard has been reached.... Much rests on the judgment of the reviewer” (Mays et al. 2001).

Creating main messages

A further conceptual as well as methodological issue is the form of the conclusions – in essence, the interpretation of the output from the literature for management or policy advice. This step has not usually been included as part of a traditional summative systematic review. Do these “main messages” adhere closely to the research, or do they, as in an interpretive synthesis, adapt to the particular context for which the synthesis is being done by stretching to “bounded reality” implications for management or policy? The average researcher gets decidedly uncomfortable when asked to go beyond his or her data. But the average manager or policy maker is always pressing the researcher for the “best guess” recommendation, arguing that such a guess is inevitable in the policy world and will often be more informed when coming from the expert researcher than when coming from the generalist decision-maker.

Perhaps this is where participants revisit the collaborative negotiation used to define the question(s) being addressed by the synthesis and reinforce the co-production synthesis model. The researchers producing the synthesis and the potential health system users of it can once again pool their relative expertise. Researchers can temper overly ambitious decision-makers with the strength of the evidence behind particular implications or recommendations. Decision-makers can temper overly cautious researchers by relegating the “more research is needed” preoccupation to the appropriate appendix.

The format

Generally, any format should reflect the needs and preferences of the audience; but what are the needs and preferences of managers when it comes to research synthesis? Although some have pointed out the power of quantification in influencing policy (Reuter 1986), it is not clear that anything other than narrative description will be possible for many areas where there is either incomparability in study designs or a dominance of qualitative research.

Where possible, a judicious mix of quantitative estimates, tabular summary and narrative explanation may create the best of all worlds – but what to do when this is not possible, and what forms of quantitative estimates or tabular presentation are understandable and preferred by managers and policy makers? Although there has been a lot of research on this question for clinician audiences – creating, for example, such data representations as “number needed to treat” (Laupacis et al. 1988) – only a handful of similar studies are available for managers and policy makers. One such study makes clear that “graded entry” formats, in which increasingly less summarized and more detailed information is gradually uncovered for the reader, meet the varied needs of multiple forms of decision-makers (Lavis et al. 2005). One such graded entry approach is the 1:3:25 format of the Canadian Health Services Research Foundation, which provides brief main messages (one page), an executive summary (three pages) and then a maximum 25-page full report with appendices (CHSRF 2001).
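
For readers less familiar with such representations, a short worked sketch of the “number needed to treat” calculation is given below. NNT is the reciprocal of the absolute risk reduction (Laupacis et al. 1988); the event rates in the example are invented.

    # "Number needed to treat" (NNT): how many patients must receive a
    # treatment for one additional patient to avoid the adverse event.
    # It is the reciprocal of the absolute risk reduction.
    def number_needed_to_treat(control_event_rate, treated_event_rate):
        arr = control_event_rate - treated_event_rate  # absolute risk reduction
        if arr <= 0:
            raise ValueError("no absolute risk reduction to summarize")
        return 1.0 / arr

    # Invented example: event rate falls from 10% to 7%, so NNT is about 33.
    print(round(number_needed_to_treat(0.10, 0.07)))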

Conclusion

To date, the research synthesis needs of managers and policy makers have not been addressed with the same enthusiasm and application as those of clinicians, despite evidence that their activities also influence health outcomes. However, there is a growing literature on synthesis techniques that address managers’ and policy makers’ distinct concerns, particularly those that go beyond the finely honed methods of summative systematic reviews answering well-defined clinical effectiveness questions. Admittedly, the task is more challenging – demanding and often impatient clients, questions that need ongoing negotiation and depend as much on context as on content, literatures with unclear boundaries, multiple relevant methodologies and few generally agreed-upon standards for quality. There are, however, those who are rising to these challenges and trying to develop methods of interpretive synthesis for managers and policy makers. These methods have the potential to make social science and health services research contribute to healthcare management and policy as effectively as the Cochrane Collaboration brings epidemiologic and economic research to the provision of clinical care.

However, a series of methodological and conceptual issues remains before this potential can be realized. Not the least of these is the willingness of academic peers and potential users of synthesis to allow those producing interpretive rather than summative syntheses a far greater degree of discretion. This willingness contrasts with the relatively rigid checklist approach that has historically governed summative systematic reviews. Questions need to be flexible and designed (and sometimes re-designed) in collaboration with users. The scope of literatures covered has to be defined pragmatically, and significant judgment needs to be exercised as to the source, number and quality of studies assembled for synthesis. Finally, recommendations and implications need to emerge from a judicious mix of the expertise and experience of both those working with the research evidence and those working within the system.

None of this should circumvent the need to minimize bias and to be transparent about the criteria used to guide such discretion – the fundamental tenets of systematic research synthesis. Nor should we be excused from evaluating the impact of those choices, whenever possible, in order to advance and develop methods of synthesis. But checklists are unlikely to be the order of the day, and perfect replicability may be more of an aim than a destination in social science–oriented, interpretive research synthesis. For this reason, commissioning two or more independently conducted syntheses on the same topic, using the same or even different methods, would be an interesting first step in evaluating generalizability. This measure may go some way towards reassuring (or not) managers and policy makers concerned about the degree of bias that may remain after the exercise of this significant discretion.

Furthermore, attention needs to be given to ways of reassuring users of such interpretive syntheses that the individuals producing them are exercising their significant judgment and discretion in a relatively unbiased and trustworthy fashion. As Black and Carter (2001) have asked, “While the need to ensure that doctors and other clinicians are accountable for their actions is widely accepted both within and outside the profession, should we not have similar expectations of academic researchers and scientists?” While formal certification or licensing may be going too far, those who fund synthesis work may wish to consider some form of a priori pre-qualification for potential applicants. In addition, during the conduct of a synthesis and following completion, peer review – where peers are from both the research and decision-making communities – can also provide reassurance on adequate control of bias and trustworthy exercise of discretion.

Let us not, however, become too precious in our search for the perfect method for assembling interpretive syntheses for managers’ and policy makers’ questions: “Don’t let the best become the enemy of the good.” The need to bring evidence more effectively into healthcare management and policy continues unabated, and independent of our methodological sophistication.

Acknowledgments

An earlier version of this paper was developed during a study leave in summer 2004, supported by the Canadian Health Services Research Foundation and ZonMW, the Dutch national health research funding agency. This draft profited from extensive comments and feedback from Huw Davies, Ian Graham, Sophie Hill, Andy Oxman, Jacomine Ravensbergen and members of the Polinomics Discussion Group at McMaster University. However, the views expressed in this paper are the author’s alone and do not represent these reviewers’ position or that of the Canadian Health Services Research Foundation on the topic of synthesis for healthcare managers and policy makers.

References

  • Black N. Evidence Based Policy: Proceed with Care. British Medical Journal. 2001;323:275–79.
  • Black N., Carter S. Public Accountability: One Rule for Practitioners, One for Scientists? Journal of Health Services Research and Policy. 2001;6:131–32.
  • Britten N., Campbell R., Pope C., Donovan J., Morgan M., Pill R. Using Meta-Ethnography to Synthesise Qualitative Research: A Worked Example. Journal of Health Services Research and Policy. 2002;7:209–15.
  • Canadian Health Services Research Foundation. Reader-Friendly Writing – 1:3:25. Ottawa: Canadian Health Services Research Foundation; 2001. Retrieved August 2, 2005. http://www.chsrf.ca/knowledge_transfer/pdf/cn-1325_e.pdf
  • Canadian Health Services Research Foundation. Policy Synthesis. Program Overview. 2005. Retrieved August 2, 2005. http://www.chsrf.ca/funding_opportunities/commissioned_research/polisyn/descrip_e.php
  • Canadian Health Services Research Foundation and the NHS Service Delivery and Organization R&D Program. Commissioned Research Projects and Related Activities. Methods of Synthesis Project. Retrieved August 2, 2005. http://www.chsrf.ca/funding_opportunities/commissioned_research/projects/msynth_e.php
  • Canadian Health Services Research Foundation and the NHS Service Delivery and Organization R&D Program. Making Evidence Synthesis More Useful for Management and Policy Making. Journal of Health Services Research and Policy. 2005;10(suppl. 1).
  • Canadian Institutes of Health Research. Scoping Reviews and Research Syntheses: Priority Health Services and System Issues. Retrieved August 2, 2005. http://www.cihr-irsc.gc.ca/e/25651.html
  • Cochrane Collaboration. Cochrane Reviewers’ Handbook 4.2.2. March 2004. Retrieved August 2, 2005. http://www.cochrane.org/resources/handbook/index.htm
  • Cook D., Mulrow C., Haynes R.B. Systematic Reviews: Synthesis of Best Evidence for Clinical Decisions. Annals of Internal Medicine. 1997;126:376–80.
  • Dault M., Lomas J., Barer M. Listening for Direction II: National Consultation on Health Services and Policy Issues for 2004–2007. Ottawa: Canadian Health Services Research Foundation; 2004. Retrieved August 2, 2005. http://www.chsrf.ca/other_documents/listening/pdf/LfD_II_Final_Report_e.pdf
  • Davies H.T.O., Nutley S.M., Smith P.C. What Works? Evidence-Based Policy and Practice in Public Services. Bristol, UK: Policy Press; 2000.
  • Davies P. What Is Needed from Research Synthesis from a Policy Making Perspective? In: Popay J., editor. Putting Effectiveness in Context. London, England: Routledge; 2005.
  • Denis J.L., Lomas J. Convergent Evolution: The Academic and Policy Roots of Collaborative Research. Journal of Health Services Research and Policy. 2003;8(suppl. 2):1–6.
  • Devereaux P.J., Schuneman H.J., Ravindran N., Bhandari M., Garg A.X., Choi P.T., et al. Comparison of Mortality between Private For-Profit and Private Not-for-Profit Hemodialysis Centers: A Systematic Review and Meta-Analysis. Journal of the American Medical Association. 2002;288:2449–57.
  • Dixon-Woods M., Agarwal S., Jones D., Young B., Sutton A. Synthesising Qualitative and Quantitative Evidence: A Review of Possible Methods. Journal of Health Services Research and Policy. 2005;10:45–53.
  • Egger M., Davey Smith G., Altman D. Systematic Reviews in Health Care: Meta-Analysis in Context. London: British Medical Journal Publications; 2001.
  • Forbes A., Griffiths P. Methodological Strategies for the Identification and Synthesis of ‘Evidence’ to Support Decision-Making in Relation to Complex Healthcare Systems and Practices. Nursing Inquiry. 2002;9:141–55.
  • Fulop N., Allen P., Clarke A., Black N. Studying the Organisation and Delivery of Health Services: Research Methods. London: Routledge; 2001.
  • Greenhalgh T. Meta-Narrative Mapping: A New Approach to the Synthesis of Complex Evidence. In: Hurwitz B., Greenhalgh T., Skultans V., editors. Narrative Research in Health and Illness. London: British Medical Journal Publications; 2004.
  • Greenhalgh T., Roberts G., Macfarlane F., Bate P., Kyriakidou O., Peacock R. Storylines of Research: A Meta-Narrative Approach to Systematic Review. Social Science and Medicine. 2005;61:417–30.
  • Grimshaw J.M., Thomas R.E., MacLennan G., Fraser C., Ramsay C.R., Vale L., et al. Effectiveness and Efficiency of Guideline Dissemination and Implementation Strategies. Health Technology Assessment. 2004;8(6):iii–iv, 1–72.
  • Harrison S., Dowswell G., Wright J., Russell I. General Practitioners’ Uptake of Clinical Practice Guidelines: A Qualitative Study. Journal of Health Services Research and Policy. 2003;8:149–53.
  • Klein R. From Evidence-Based Medicine to Evidence-Based Policy? Journal of Health Services Research and Policy. 2000;5:65–66.
  • Laupacis A., Sackett D.L., Roberts R.S. An Assessment of Clinically Useful Measures of the Consequences of Treatment. New England Journal of Medicine. 1988;318:1728–33.
  • Lavis J., Becerra Posada F., Haines A., Osei E. Use of Research to Inform Public Policy Making. Lancet. 2004;364:1615–21.
  • Lavis J., Davies H., Oxman A., Denis J.L., Golden-Biddle K., Ferlie E. Towards Systematic Reviews That Inform Healthcare Management and Policy Making. Journal of Health Services Research and Policy. 2005;10(suppl. 1):35–48.
  • Lemieux-Charles L., Champagne F., Langley A., eds. Using Knowledge and Evidence in Health Care. Toronto: University of Toronto Press; 2004.
  • Light R., Pillemer D. Summing Up: The Science of Reviewing Research. Cambridge, MA: Harvard University Press; 1984.
  • Lilford R., Jecock R., Shaw H., Chard J., Morrison B. Commissioning Health Services Research: An Iterative Method. Journal of Health Services Research and Policy. 1999;4:164–67.
  • Lomas J. Using ‘Linkage and Exchange’ to Move Research into Policy at a Canadian Foundation. Health Affairs. 2001;19(3):236–40.
  • Mays N., Pope C., Popay J. Systematically Reviewing Qualitative and Quantitative Evidence to Inform Management and Policy Making in the Health Field. Journal of Health Services Research and Policy. 2005;10(suppl. 1):6–20.
  • Mays N., Roberts E., Popay J. Synthesising Research Evidence. In: Fulop N., Allen P., Clarke A., Black N., editors. Studying the Organisation and Delivery of Health Services: Research Methods. London: Routledge; 2001.
  • Morey J.C., Simon R., Jay G.D., Wears R.L., Salisbury M., Dukes K., et al. Error Reduction and Performance Improvement in the Emergency Department through Formal Teamwork Training: Evaluation and Results of the MedTeam Project. Health Services Research. 2002;37:1553–80.
  • Moynihan R. Evaluating Health Services: A Reporter Covers the Science of Research Synthesis. New York: Milbank Memorial Fund; 2004.
  • NHS Centre for Reviews and Dissemination. Undertaking Systematic Reviews of Research on Effectiveness: CRD’s Guidance for Those Carrying Out or Commissioning Reviews. CRD Report Number 4. 2nd ed. York, UK: 2001. Retrieved August 2, 2005. http://www.york.ac.uk/inst/crd/report4.htm
  • NHS Service Delivery and Organization Research and Development Program. Rapid Response Mode Commissioning. 2005. Retrieved August 2, 2005. http://www.sdo.lshtm.ac.uk/rapidresponse.htm
  • Noblitt G.W., Hare R.D. Meta-Ethnography: Synthesizing Qualitative Studies. Newbury Park, CA: Sage Publications; 1988.
  • Oxman A., Cook D., Guyatt G. Users’ Guides: VI. How to Use an Overview. Journal of the American Medical Association. 1994;272:1367–71.
  • Pawson R. Evidence-Based Policy: In Search of a Method. Evaluation. 2002;8:157–81.
  • Pawson R., Greenhalgh T., Harvey G., Walshe K. Realist Review – A New Method of Systematic Review Designed for Complex Policy Interventions. Journal of Health Services Research and Policy. 2005;10(suppl. 1):21–34.
  • Reuter P. The Social Costs of the Demand for Quantification. Journal of Policy Analysis and Management. 1986;5:807–24.
  • Russell J., Greenhalgh T., Boynton P., Rigby M. Soft Networks for Bridging the Gap between Research and Practice: Illuminating Evaluation of CHAIN. British Medical Journal. 2004;328:1169–74.
  • Sheldon T.A. It Ain’t What You Do But the Way That You Do It. Journal of Health Services Research and Policy. 2001;6:3–5.
  • Tranfield D., Denyer D., Smart P. Towards a Methodology for Developing Evidence-Informed Management Knowledge by Means of Systematic Review. British Journal of Management. 2003;14:207–22.
  • Tunis S.R., Stryer D.B., Clancy C.M. Practical Clinical Trials: Increasing the Value of Clinical Research for Decision-Making in Clinical and Health Policy. Journal of the American Medical Association. 2003;290:1624–32.
  • Vella K., Goldfrad C., Rowan K., Bion J., Black N. Use of Consensus Development to Establish National Research Priorities in Critical Care. British Medical Journal. 2000;320:976–80.
  • Walshe K., Rundall T. Evidence-Based Management: From Theory to Practice in Health Care. Milbank Quarterly. 2001;79:420–57.
  • West M.A. The Link between Management of Employees and Patient Mortality in Acute Hospitals. Presentation to the 1st National SDO Conference, Delivering Research for Better Health Services and Social Care, London, UK; March 2002.
  • World Health Organization Regional Office for Europe. Health Evidence Network. 2004. Retrieved August 2, 2005. http://www.euro.who.int/HEN
