J Med Libr Assoc. Oct 2005; 93(4 Suppl): S49–S56.
PMCID: PMC1255753

Community outreach: from measuring the difference to making a difference with health information*

Judith M. Ottoson, EdD, Associate Professor1 and Lawrence W. Green, DrPH, Visiting Professor2

Abstract

Background: Community-based outreach seeks to move libraries beyond their traditional institutional boundaries to improve both access to and effectiveness of health information. The evaluation of such outreach needs to involve the community in assessing the program's process and outcomes.

Purpose: Evaluation of community-based library outreach programs benefits from a participatory approach. To explain this premise of the paper, three components of evaluation theory are paired with relevant participatory strategies.

Concepts: The first component of evaluation theory is also a standard of program evaluation: use. Evaluation is intended to be useful for stakeholders to make decisions. A useful evaluation is credible, timely, and of adequate scope. Participatory approaches to increase use of evaluation findings include engaging end users early in planning the program itself and in deciding on the outcomes of the evaluation. A second component of evaluation theory seeks to understand what is being evaluated, such as specific aspects of outreach programs. A transparent understanding of the ways outreach achieves intended goals, its activities and linkages, and the context in which it operates precedes any attempt to measure it. Participatory approaches to evaluating outreach include having end users, such as health practitioners in other community-based organizations, identify what components of the outreach program are most important to their work. A third component of evaluation theory is concerned with the process by which value is placed on outreach. What will count as outreach success or failure? Who decides? Participatory approaches to valuing include assuring end-user representation in the formulation of evaluation questions and in the interpretation of evaluation results.

Conclusions: The evaluation of community-based outreach is a complex process that is not made easier by a participatory approach. Nevertheless, a participatory approach is more likely to make the evaluation findings useful, ensure that program knowledge is shared, and make outreach valuing transparent.

As intended by the National Library of Medicine's (NLM's) health disparities initiative, “community-based outreach” moves libraries beyond their traditional institutional boundaries to improve community access to and effectiveness of health information. Such efforts are directed toward health promotion and protection, but they recognize that these more lofty goals are beyond the immediate reach of libraries alone. They depend on the effective collaboration of libraries with other community organizations and on the mobilization of other community resources to augment health information. This recognition is central to the evaluation of community outreach, the focus of this paper, because it sets the stage for identifying appropriate indicators and change processes to be measured, values to be considered, and uses to be made of the results of evaluation. It also signals the need for the evaluation to be collaborative, as the planning of effective outreach will need to be collaborative. The “overall goals of outreach are to affect the capacity of the individual, organization, or community to effectively utilize health information resources and to address problems and barriers to accessing them” [1]. A community-based approach puts outreach into a broader context—the social, cultural, economic, structural, and political influences on whether and how outreach occurs and how it can affect health. Community-based approaches are necessarily collaborative; they must start from some shared community understanding of local needs and some coordinated mobilization of local resources.

PAIRING PARTICIPATORY OUTREACH WITH PARTICIPATORY EVALUATION

To be effective, outreach must entail something more collaborative or participatory than one organization, such as a library, venturing into the community. Evaluation, like the outreach itself, must also engage other stakeholders in deciding what is important to accomplish, what community resources to combine with health information, and how to measure whether outreach is effective. This paper will ask who decides whether outreach is effective and how findings about outreach effectiveness are used. One way to answer these questions is to match community-based outreach with participatory evaluation. Such an approach integrates stakeholders and makes transparent what outreach is, what values it is judged by, and what use is made of findings about its effectiveness.

The authors propose that use of evaluation theory as derived by Shadish, Cook, and Leviton from many leading authors in the evaluation literature [2] can help frame the integration of community-based outreach and participatory evaluation. In the sections that follow, we suggest practical strategies for the participatory use of key components of evaluation theory in evaluating community-based outreach.

EVALUATION THEORY: LENSES TO EXPLORE OUTREACH

Those involved in information outreach have access to a wide range of theories about behavioral and community change, including the theories reviewed in the NLM health information outreach planning and evaluation manual [1]: social change theory [3]; the “Transtheoretical” stages-of-change model [4]; health education and health promotion models such as precede-proceed, with their emphasis on participatory approaches [5]; and diffusion of innovations [6]. While “thinking theory” may seem antithetical to the needs of busy, action-oriented practitioners, Lewin's time-honored admonition that there is “nothing as practical as a good theory” [7] reminds us that theory can guide, not interfere with, practice. Theories help explain concepts of interest, such as outreach, by revealing underlying components and assumed connections. Theories about outreach may be used as maps to interpret the terrain, lenses to view the components, or a framework to hold concepts of how outreach works. While theories about behavioral and community change may help inform community-based outreach, another kind of theory is needed to assess outreach effectiveness: evaluation theory.

In the early 1990s, Shadish, Cook, and Leviton in their Foundations of Program Evaluation [2] made a signal contribution toward a theory of evaluation. The idea was to cut across multiple approaches, philosophies, and methodologies to ask fundamental questions about what evaluation is. From this effort came four components of an overarching theory of evaluation: use, program, valuing, and knowledge construction. The way each of the components of evaluation theory is understood shapes the practice of evaluation. For community-based outreach, these components may be translated into the following questions:

  • Use: How will evaluation findings about outreach be used? By whom (e.g., librarians, consumers, and other community-based organizations that would become collaborators in library outreach) will findings be used?
  • Program: What is outreach? How is it intended to work? What internal and external factors influence outreach?
  • Valuing: By which values will the success or failure of outreach be judged? Who judges the success or failure of outreach?
  • Knowledge Construction: Which methodologies will be used to gather evidence about the value of outreach?

Answering these questions helps expose assumptions and approaches to all types of evaluation. These assumptions shape evaluation practice. For example, an evaluation approach to outreach that places the valuing of outreach primarily in the hands of librarians will be very different from one that shares valuing responsibility with a broader range of stakeholders. The lesson from evaluation theory is that how one thinks about evaluation shapes how one conducts evaluation.

For the remainder of the paper, the first three questions about evaluation are addressed as they apply to a participatory approach to outreach evaluation. Knowledge construction, which is thoroughly covered in most evaluation textbooks and methodological guidelines, such as the NLM-sponsored Measuring the Difference [1], is beyond the scope of this paper and would ideally follow a thorough answer to the first three questions.

USE OF OUTREACH EVALUATION RESULTS AND THE WAYS THEY WILL BE USED

The findings of outreach program evaluation are intended to be used. Use is a concept so central to evaluation that it is not only one of the four main components of evaluation theory; it is also one of the standards of program evaluation codified by the Joint Committee on Standards for Educational Evaluation, a body made up of several professional and scientific organizations: “The utility standards are intended to ensure that an evaluation will serve the information needs of intended users” [8]. Guidelines for evaluation use include identifying stakeholders (end users, payers, concerned citizens), identifying values, and ensuring timeliness and dissemination.

Guidelines for use

Key findings from previous literature reviews shape an understanding of knowledge utilization and of barriers to and facilitators of its realization. Utilization is not a product or an event, but rather a complex, changing, multidirectional social process with mutual interdependencies.

The many meanings of the utilization process were described in early writings [9] and subsequently found their way into different conceptual frameworks of utilization, such as instrumentalist or transactionist [10]. Each framework offers a different view of the way the process of utilization works and has very different implications for evaluation practice. For example, the instrumentalist view suggests that once evaluation findings are turned over to librarians and health practitioners, the findings will be put to direct and immediate use to solve health information problems. Evidence of this type of utilization, however, is limited, so a more complex understanding of use is required. For example, outreach evaluation findings might be used to legitimize a point of view, to enlighten policy decisions, to warn about potential or existing problems, or to manipulate knowledge strategically for power or profit [10, 11]. These more indirect understandings of use mean that outreach evaluation findings may meander into policy and practice, not take a straight, instrumentalist march forward.

The numerous influences on the use of evaluation findings can be grouped according to the source of evaluation findings, the content of findings, the medium through which findings are delivered, the user, and the context [10, 12]. Among variables associated with the first four influences on use are relationship building with potential users, accessibility of findings, early and sustained involvement of the user in the evaluation, links among users, level of effort required to access the findings, and the special interests and sometimes conflicting ideologies of potential users. According to Landry [12], the best socio-organizational predictor of utilization is the user context. Characteristics of the context that facilitate use include resources, supportive social conditions, a champion for new knowledge, flexibility for change, no strong political or bureaucratic opposition, incentives to change, leadership by example, and support for a long-term interactive relationship between the collaborating parties in a community and between practitioners and their clients [10–17].

As if the list of general facilitators and barriers to utilization were not long and complex enough, Lester [17] reminds us that most of the variables that determine use of evaluation findings are beyond the control of the evaluator. Weiss adds that these variables (interests, ideologies, information, and institutional form) interact in a political context: “The world does not run on principles of scientific rationality, but on the rationality of our system to reconcile different society interests, what we call politics” [11]. In this political arena, then, how can a more participatory approach facilitate the use of outreach evaluation findings?

Participatory approach to use

A first requirement of effective outreach is that outcome and evaluation decisions depend on appropriate planning, development, and implementation of the programs to be evaluated. Concern about use starts at the beginning of this process, not the end. From decades of experience in public health, adult education, and other community-based enterprises, a well-accepted principle of good planning is to engage the implementers and end users of the program's products at the outset, in assessing their needs and in planning the program or products to meet those needs [18]. This principle has been obscured somewhat in recent years by the emergence of “best practices” thinking, influenced by the adoption of systematic reviews of highly controlled experimental trials in less political areas of practice such as medicine and engineering [19]. Even more recently, however, it has become clear that much of what has been offered as evidence-based “best practices” in guidelines for practitioners is not being implemented. This realization has revived interest in wider participation in planning as one of a few “best processes” that, together with locally adapted “best practices,” help take the complexities of the local context into account [20].

Outcomes related to the reduction of disparities, in particular, require planning, implementation, and evaluation that include changes in the behavior of the community-based organizations and practitioners serving disadvantaged populations, if evaluation findings are to be used. This requirement raises two related questions: Will these organizations and practitioners want to collaborate with libraries in outreach and in evaluating outreach efforts? Will collaboration enhance or interfere with progress?

Outreach from libraries cannot assume that existing health-related, community-based organizations understand the added value that libraries can bring to the achievement of their shared community health and disparity-reduction goals. They might be suspicious of libraries sharing health goals. They also have fought their own turf battles in the health sector and among other community-based organizations competing for resources. Their receptivity to the entry of the library into this fray cannot be taken for granted and must be nurtured carefully and strategically through a participatory planning process [21]. Library outreach also cannot assume that engaging multiple community partners will follow automatically from the engagement of one community partner. The question of whether collaboration and coalition building necessarily achieve greater efficiencies, impact, and use is a complicated one [22].

Systematic, strategic, and transparent consultation by libraries with other community-based organizations, practitioners, and lay leaders in the community will improve the chances that the outreach strategy chosen, the health needs it proposes to address, and the customs and circumstances of the particular groups for which health disparities are at issue will be taken appropriately into consideration [23]. In the end, the participatory approach to research on needs, to outreach planning, and to evaluation will increase the probability that practitioners will see the results of the evaluation as relevant to their needs and will therefore be more likely to use them [24].

PLANNING OUTREACH FROM THE GROUND UP

It seems to go without saying that to evaluate something, such as outreach, it is important to understand what it is. But in a field like evaluation, which was built on methodological approaches, a more natural instinct is to start the evaluation with a methodological choice. For example, should a survey be done, should outreach participants be interviewed, or should a randomized controlled trial be mounted so that data emerge from the experimental results of the outreach program? Evaluation theory suggests, however, that before the valuing and the methodology comes a need to understand the thing to be evaluated. In an evaluation of community-based outreach, such questions would include:

  • What is community-based outreach? What is the goal of outreach, for example, better health, greater public awareness of health needs and actions, and reduced disparities?
  • How is community-based outreach supposed to work? How is it meant to achieve change in the way information is transmitted and the ways in which it is received?
  • What are the levers for change? If we cannot change everything about outreach, what can we change to make it work better?
  • What internal structures, processes, and factors influence outreach?
  • What external norms, culture, and factors influence outreach?

Parameters of understanding the program

This paper began with a statement about the goals of community-based outreach. The program component of evaluation theory digs behind the goals to explore how an intervention, such as outreach, can achieve those goals. This approach is consistent with NLM's recommendation to define outreach not solely by its specific activities, but rather as part of a larger package or program [1]. Such an approach encourages looking beyond the components of outreach to understand how they are meant to connect to each other to achieve stated goals.

Logic models or program maps are simple tools for making the components of outreach and their intended links transparent to multiple stakeholders. At its simplest, a logic model can help reveal the following program components:

input → activities → output → outcome → impact
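
To make these components and their assumed links inspectable, a logic model can be sketched as a simple data structure. The following Python sketch is illustrative only; the component entries are hypothetical examples, not elements of any actual NLM outreach program.

```python
from dataclasses import dataclass, field

@dataclass
class LogicModel:
    """A minimal logic model: each stage in the input -> activities ->
    output -> outcome -> impact chain holds a list of named components."""
    inputs: list = field(default_factory=list)
    activities: list = field(default_factory=list)
    outputs: list = field(default_factory=list)
    outcomes: list = field(default_factory=list)
    impacts: list = field(default_factory=list)

    def chain(self):
        """Render the assumed causal chain so stakeholders can inspect it."""
        stages = [self.inputs, self.activities, self.outputs,
                  self.outcomes, self.impacts]
        return " -> ".join("; ".join(s) if s else "(unspecified)" for s in stages)

# Hypothetical outreach program, for illustration only
model = LogicModel(
    inputs=["librarian time", "outreach funding"],
    activities=["health information workshops at a community clinic"],
    outputs=["12 workshops held", "150 residents trained"],
    outcomes=["attendees locate reliable health information on their own"],
    impacts=["better-informed health decisions in the community"],
)
print(model.chain())
```

Writing the model down this way forces each box and each arrow to be explicit, which is precisely what makes the program's assumptions available for stakeholders to question.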

A logic model that has served widely (with some 950 published applications [25]) as both a conceptual map and a procedural guide to the planning of community health programs is the precede-proceed model (Figure 1).

Figure 1
The precede-proceed model for planning health programs that encompass health education outreach and community organizational, policy, and regulatory issues. Source: adapted from Green LW, Kreuter MW. Health program planning. 4th ed. New York, NY: McGraw-Hill, 2005.

Working with a logic model or program map has several advantages. First, it puts the activities into a larger package. The activities are linked to inputs and outputs: they are not stand-alone events. Second, the use of a logic model or conceptual map makes the ideas and assumptions of community-based outreach transparent to all stakeholders. Communication is facilitated, and ownership of the concept is shared. Such transparency enables all stakeholders to agree or disagree about the potential fairness, effectiveness, and practicality of the approach taken. Third, the map allows stakeholders to see the multiple points in the intervention (in this case, community-based outreach) where things might go right or wrong in the program and where data about program effectiveness might be collected. With a model like the one shown in Figure 1, the intervention is placed in a sufficiently broad context to see the multiple determinants that can affect outcomes.

Understanding the program or intervention requires more than mapping inputs to the levels of the community system where they come into play and the pathways each would follow from activity to health outcomes and social impact. It also requires that users understand not only the internal workings of the program, but also its external context. The smooth arrows that connect the components of outreach in a logic model may become ragged, redirected, or disconnected in the world of practice. These multiple points for outreach to go “right” or “wrong” need to be part of the bigger package or picture of how outreach is assessed.

Theories of change, particularly community and behavior change, can help to clarify the way community-based outreach is intended to work. But formal theories are only part of the picture of understanding a program in context. The experience or intent of stakeholders reveals the informal theories that also shape programs.

An ecological, evidence-based, and participatory approach to program planning and evaluation

One of the lessons of successful efforts in community-based health information has been that activities must be coordinated and mutually supportive across levels and channels of influence, from individual to family to institutions to whole communities. This is the lesson of an ecological understanding of complex, interacting, community program components and the causal chains by which they affect outcomes. Isolated and singular points of intervention, such as answering a health information query for an individual, are likely to have only a limited influence on that one individual's awareness or understanding, less likely to influence that individual's attitude or beliefs, and then not likely to have much influence on the individual's behavior, much less the behavior of larger numbers of people. An information-only intervention is likely to affect only predisposing factors (Figure 1) and is less likely to influence enabling or reinforcing factors without mutually supportive interventions at other levels and through other channels of the community. Program theory, as illustrated in the logic model, shows assumptions about the connections (arrows) among inputs (boxes to the left) and outcomes (boxes to the right). Program planning, using such a logic model, seeks to identify the best evidence and the best local wisdom on interventions that will fill the boxes on the left to change program activities.
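
The ecological point can be illustrated with a small coverage check: if a program's interventions are mapped to the factor types named above (predisposing, enabling, reinforcing), gaps in mutual support become visible. The interventions and mappings below are hypothetical examples, not recommendations from the paper.

```python
# Hypothetical mapping of outreach interventions to precede-proceed factor types.
interventions = {
    "answering individual health information queries": {"predisposing"},
    "training clinic staff to use online health resources": {"predisposing", "enabling"},
}

covered = set().union(*interventions.values())
for factor in ("predisposing", "enabling", "reinforcing"):
    if factor in covered:
        print(f"{factor}: addressed")
    else:
        print(f"{factor}: GAP - no mutually supportive intervention at this level")
```

Run on this example, the check flags the reinforcing factors as unaddressed, echoing the point that an information-only program is unlikely to reach beyond predisposing factors.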

A second lesson of successful community-based efforts in health has been the recognition of the importance of building on evidence of efficacy from controlled experimental trials and related research, but with the understanding that such evidence is usually limited in its appropriateness to other community situations and populations. Where evidence is lacking either in its specificity or its generalizability, theory must be brought to bear to fill gaps. Logic models combine evidence-based intervention results with theory that helps bridge the gaps in evidence.

If evidence for specific practices or program components is plentiful, strong, and specific or generalizable to the local situation, the evaluation of outreach can be limited to the measurement of “process” or quality of implementation activity or performance. Good implementation of solid evidence-based practices can reasonably promise positive outcomes. If, however, only some of the interventions or program components are evidence-based, theory-guided interventions will be included in the outreach program, and these warrant more systematic evaluation beyond implementation and quality of performance. Evaluation here must be theory-driven, with links in the causal chain of the logic model dictated by the theory-derived hypotheses about what the intermediate outcomes between intervention and ultimate outcomes should be.

Evidence and theory are seldom sufficient to satisfy the practitioners responsible for implementing a program (e.g., librarians or health workers) that their local circumstances and particular population characteristics have been taken into account, especially with highly fluid circumstances, multicultural populations, and complex programs. Here, library planners can borrow ideas, not yet formally tested, from other outreach programs, communities, or settings like their own, where the ideas for intervention have had at least some reality testing for their feasibility in similar real-world settings, if not for their acceptability and effectiveness. An outreach program might well include locally invented, homegrown interventions based on indigenous wisdom, cultural sensitivity, and local experience; but, if these have been tried in a similar form elsewhere, even without formal evaluation, the experience from those other trials should be consulted. These components of an outreach program deserve more tightly controlled assessment than the “evidence-based” interventions require. Their evaluation should give due consideration to potential side effects or unintended consequences, because they have not been subjected to the same degree of experimentally controlled, research-subject-protected protocols. The use of program theory places these issues in a broader context that invites communication with stakeholders, thereby encouraging their use of evaluation findings, as discussed previously.
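
The two preceding paragraphs imply a rough decision rule for how deep the evaluation of each program component must go. Here is a minimal sketch, assuming a simple three-way label for the strength of evidence behind each component; the labels and component names are hypothetical.

```python
def evaluation_emphasis(component, evidence):
    """Suggest an evaluation emphasis for one outreach component, following
    the reasoning above. `evidence` is one of: 'strong' (plentiful,
    generalizable evidence), 'theory' (theory-guided, little direct
    evidence), or 'local' (locally invented, untested elsewhere)."""
    if evidence == "strong":
        # Solid evidence base: measuring quality of implementation suffices.
        return f"{component}: process evaluation of implementation quality"
    if evidence == "theory":
        # Theory-guided: test the hypothesized intermediate outcomes.
        return f"{component}: theory-driven evaluation of intermediate outcomes"
    # Locally invented: tightest assessment, watching for unintended consequences.
    return f"{component}: controlled assessment, including side effects"

for component, evidence in [
    ("reference interviews", "strong"),
    ("peer health-information coaches", "theory"),
    ("homegrown neighborhood information fairs", "local"),
]:
    print(evaluation_emphasis(component, evidence))
```

The point of the sketch is only that the evaluation burden shifts with the evidence base: well-evidenced components need process measures, while homegrown components warrant the closest scrutiny.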

VALUING: THE BASIS FOR JUDGING OUTREACH A SUCCESS OR FAILURE AND WHO DECIDES?

Parameters of valuing

Following an understanding of the thing to be evaluated (e.g., the components, process, or effects of an outreach program) comes the difficult task of valuing outreach. What will count as success for an outreach program? What will count as failure? Who decides what counts as success or failure? These questions are not the technical questions of methodology, but the social, cultural, philosophical, economic, political, structural, and interpersonal questions of evaluation.

To help sort through the complexities of placing value on programs, Scriven [26] suggests three basic steps in the logic of valuing: (1) determine criteria, (2) set standards, and (3) measure performance. Criteria or indicators are the dimensions of community-based outreach on which value—success or failure—will be placed. This is the heart of valuing. These criteria are often embedded in neutral-sounding terms like “objectives.” For example, will there be process criteria for outreach (e.g., participation or access) or only outcome criteria (e.g., changed health behavior)? To choose one over the other is to set or limit the basis for judging outreach success. Logic models or program maps help visualize where judgments are made about program success or failure.

Decisions about the value of a program need to be transparent. Who is included or excluded from this process—the funder, the consumer, the practitioner, the evaluator? Because different stakeholders may have different interests in outreach, a clash of values is possible and even likely. Sorting through values, reaching some agreement on the basis for evaluation, and moving toward measurement is a series of key evaluation tasks.

Standards of evaluation define how well the criteria must be achieved for success or failure to be determined. For example, if access to health information is the criterion, a standard of success might be set at different levels: Would access be successful if 10%, 50%, or 99% of the population participated? Again, who decides on the standards needs to be transparent. Power differentials may mean that some players are excluded from decisions about value but are expected to participate in the activities of the intervention: “Play, but no say.”
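
To make Scriven's three steps concrete, here is a minimal sketch in Python. The access criterion comes from the example above; the 50% standard and the measured participation rate are illustrative assumptions, not figures from the paper.

```python
# Scriven's valuing logic: (1) determine criteria, (2) set standards,
# (3) measure performance. All numbers below are hypothetical.

criterion = "proportion of the target population accessing health information"
standard = 0.50              # step 2: the level stakeholders agreed counts as success
measured_performance = 0.42  # step 3: what the evaluation actually observed

print(f"Criterion: {criterion}")
print(f"Standard: {standard:.0%}; measured performance: {measured_performance:.0%}")
if measured_performance >= standard:
    print("Judgment: success by the agreed standard")
else:
    print("Judgment: falls short of the agreed standard")
```

Note that the code automates only step 3; steps 1 and 2, deciding what to measure and where to set the bar, are exactly the stakeholder questions the valuing component raises.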

Participatory approaches to valuing

A participatory approach to valuing opens the door for making the values of an outreach program transparent. That open door has two sides. On one hand, participation of multiple stakeholders will likely reveal conflicts about what counts as program success. For example, the program funder may view success differently than the participating client as a community member. On the other hand, broad stakeholder participation in valuing can make the use of evaluation findings more likely.

Methods for involving community stakeholders and intended end users in the valuing process are essentially those of participatory planning and participatory evaluation. These methods and guidelines for their application in health promotion research and evaluation have been found by experts from various disciplines to be appropriate for community efforts [27]. Figure 2 elaborates on Figure 1 to show how the whole planning, implementation, and evaluation process comes together in a series of integrated steps at the local level. Note that a participatory approach begins with engaging the community.

Figure 2
A flow chart of steps in community-based outreach and planning for health, with skip patterns for making the process more efficient Source: Green LW, Kreuter MW. Health program planning. 4th ed. New York, NY: McGraw-Hill, 2005:29

CONCLUSIONS

Measuring performance moves toward knowledge construction, which we will not tackle in this paper. But we hope to have shown in this discussion that before measuring anything, it is vital to understand how the results of the measurement will be used, what is being evaluated, and how and by whom value is placed on the program components and outcomes.

Participatory research into community needs and program strategies and participatory evaluation of process and outcomes are the best guarantors that the results of outreach evaluation are relevant and thus more likely to be used by the community-based organizations and practitioners. Participatory approaches to incorporating community values in judgments of success make the valuing process transparent and provide a wider view of stakeholder values. Who is at the table and who is not determines what aspects of a program are considered most important, what gets measured, and what gets used in the end. Participatory evaluation involving community-based organizations and practitioners also ensures the incorporation of indigenous wisdom and experience in identifying local needs and setting priorities and establishing the values by which outreach success will be judged.

Footnotes

* This paper is based on a presentation at the “Symposium on Community-based Health Information Outreach”; National Library of Medicine, Bethesda, Maryland; December 3, 2004

REFERENCES

  • Burroughs CM, Wood FB. Measuring the difference: guide to planning and evaluating health information outreach. Bethesda, MD: National Library of Medicine, 2000.
  • Shadish W, Cook T, Leviton L. Foundations of program evaluation: theories of practice. Newbury Park, CA: Sage, 1991.
  • Rothman J. Planning and organizing for social change: action principles from social science research. New York, NY: Columbia University Press, 1974.
  • Prochaska JO, DiClemente CC. Stages and processes of self-change in smoking: towards an integrative model of change. J Consult Clin Psychol. 1983;51(3):390–5.
  • Green LW, Kreuter MW. Health program planning: an educational and ecological approach. 4th ed. New York, NY: McGraw-Hill, 2005.
  • Rogers EM. Diffusion of innovations. 4th ed. New York, NY: The Free Press, 1995.
  • Lewin K. Field theory in social science: selected theoretical papers. Cartwright D, ed. New York, NY: Harper, 1951.
  • The Joint Committee on Standards for Educational Evaluation. The program evaluation standards: how to assess evaluations of educational programs. 2nd ed. Newbury Park, CA: Sage Publications, 1994.
  • Weiss CH. Evaluation research: methods for assessing program effectiveness. Englewood Cliffs, NJ: Prentice-Hall, 1972.
  • A review of the literature on dissemination and knowledge utilization. [Web document]. Austin, TX: National Center for the Dissemination of Disability Research, Southwest Educational Development Laboratory, 1998. [cited 31 Jan 2005]. <http://www.ncddr.org/du/products/review/>.
  • Weiss CH. What we have learned from 25 years of knowledge utilization. In: Henke R, ed. Final report: International Conference on Social Science and Governance. Zeist, The Netherlands: Netherlands Commission for UNESCO, Management of Social Transformations Programme, 20–21 Mar 2000.
  • Landry R, Lamari M, Amara N. The extent and determinants of the utilization of university research in government agencies. Public Adm Rev. 2003 Mar/Apr;63(2):192–205.
  • Backer TE. Knowledge utilization: the third wave. Knowledge Creation Diffusion Utilization. 1991;12(3):225–40.
  • Berwick DM. Disseminating innovations in health care. JAMA. 2003 Apr;289(15):1969–75.
  • Florio E. The use of information by policy makers at the local community level. Knowledge Creation Diffusion Utilization. 1993;15(1):106–23.
  • Landry R, Amara N, Lamari M. Climbing the ladder of research utilization: evidence from social science research. Sci Commun. 2001;22(4):396–422.
  • Lester JP. The utilization of policy analysis by state agency officials. Knowledge Creation Diffusion Utilization. 1993;14(3):267–90.
  • Green LW. The theory of participation: a qualitative analysis of its expression in national and international health policies. In: Ward WB, ed. Advances in health education and promotion: a research annual. v.1, pt.A. Greenwich, CT: JAI Press, 1986.
  • Green LW. From research to “best practices” in other settings and populations. Am J Health Behav. 2001 May–Jun;25(3):165–78.
  • Green LW, Ottoson JM. From efficacy to effectiveness to community and back: evidence-based practice vs practice-based evidence. In: From clinical trials to community: the science of translating diabetes and obesity research. Bethesda, MD: National Institutes of Health, 2004.
  • Kreuter MW, Lezin NA, Young LA. Evaluating community-based collaborative mechanisms: implications for practitioners. Health Promot Pract. 2000 Jan;1(1):49–63.
  • Green LW. Caveats on coalitions: in praise of partnerships. Health Promot Pract. 2000 Jan;1(1):64–5.
  • Minkler M, Wallerstein N. Community-based participatory research in health. San Francisco, CA: Jossey-Bass, 2003.
  • Green LW, Mercer SL. Can public health researchers and agencies reconcile the push from funding bodies and the pull of communities? Am J Public Health. 2001 Dec;91(12):1926–9.
  • Green LW. Published applications of the precede model. [Web document]. [cited 25 Jan 2005]. <http://lgreen.net/precede%20apps/preapps.htm>.
  • Scriven MS. The science of valuing. In: Shadish W, Cook T, Leviton L, eds. Foundations of program evaluation: theories of practice. Newbury Park, CA: Sage, 1991:73–118.
  • Green LW, George A, Daniel M, Frankish CJ, Herbert CP, Bowie WR, O'Neill M. Guidelines for participatory research. In: Minkler M, Wallerstein N, eds. Community based participatory research in health. San Francisco, CA: Jossey-Bass, 2003. (Available from: http://www.lgreen.net/guidelines.html.)
