
Institute of Medicine (US) Roundtable on Environmental Health Sciences, Research, and Medicine. Global Environmental Health: Research Gaps and Barriers for Providing Sustainable Water, Sanitation, and Hygiene Services: Workshop Summary. Washington (DC): National Academies Press (US); 2009.


8 Panel Discussion: Moving Forward

There is a fundamental disconnect between intervention programs and the translation of research into community action. The questions become: what are the research needs for achieving the most sustainable water solutions, and how can these pieces be put together? Panelists started the discussion with a list of research needs and strategies for improving sustainable water services:

  • Evaluation of water, sanitation, and hygiene interventions. Christine Moe of Emory University noted there has been an emphasis on implementation of interventions, but comprehensive evaluations are often neglected. As people debate sustainable practices in water services, there is a need to understand what elements of the intervention were the determining factors for the success or failure of the intervention (see next section).
  • Global climate and its implication for water use. As the environment is changing, one cannot necessarily assume that the current models or practices will remain relevant, noted Peter Richards of Heidelberg College.
  • Real-time monitoring. For example, real-time bacterial tests that would inform the public whether the beach is safe today, not two days ago, noted Richards. Ideally, the monitoring would include information about the origin of the outbreak—whether the E. coli comes from Canada geese, deer, or people.
  • Holistic approaches to implementing water services. Joseph Jacangelo of MWH and Johns Hopkins University noted that, for many programs, especially in developing countries, there is a need for a more holistic program that incorporates technology, education, and behavior. There are plenty of projects in which the engineering aspects were well designed but the project failed, because there wasn’t an educational aspect or a behavioral change orientation. This is true not only for developing countries, but also in the United States. Water reuse is a good example of well-engineered programs that have failed because of poor educational and communication components.
  • Integration of social, behavioral, and communication components in water services interventions. Phyllis Nsiah-Kumi of Northwestern University noted that at the beginning of any intervention is the need to understand the community’s knowledge, attitudes, and beliefs about the problem in question. The incorporation of these factors needs to be used as the central foundation, so that the water field can teach the community about available solutions and why these solutions are important to health.
  • A water-centered focus. Diane Dupont of Brock University noted that if the research interest is in water and water services, then water needs to be in the center diagram. From there, the field needs to develop the linkages from water to all of the different aspects that might relate to it. This change in focus may address the unintended consequences that can prevent the project from being a success. Thus, before the start of the project, it is important to think about what the ramifications may be.

EVALUATION OF INTERVENTIONS

During the workshop, a number of individuals alluded to the need for evaluation, and the panel explored best practices and effective metrics for measuring success. Dupont started with the idea that assessing previous experience creates a knowledge base on sustainable water services. Once the knowledge base is created, anyone can access it, which makes it hard to recover the costs of maintaining it. What is needed is an agency or organization to act as a central clearinghouse, where information is readily available for researchers to use. However, without a strong evaluation program, the knowledge base is incomplete, subject to bias, and exists only with the researchers in the field.

Paul Hunter of the University of East Anglia noted that one of the problems with evaluation is determining the appropriate objectives. A number of nongovernmental organizations use targets, such as the number of wells sunk. However, whether those wells are effective, are poisoned with arsenic, or improve the health of the population are more difficult metrics to evaluate. It is not only the providers, but also many of the funders, who do not adequately think through the evaluations. Hunter noted that even major institutions, such as the World Bank, can give the impression that they are more concerned with whether funds are disbursed than with whether projects achieve worthwhile outcomes, such as meeting health goals. Therefore, when money is given, there needs to be more discussion to define very clearly the objectives about what needs to be done and what the health aims are. Hunter stressed that money should be given out with the proviso that the intervention achieve its goals, not just that the work be done.

Richard Gelting of the Centers for Disease Control and Prevention expanded on this idea by noting that people in the multilateral lending institutions are evaluated on money spent, and people at implementing institutions are evaluated on the number of taps installed. The question is how to approach those incentives so that they are more in line with the ultimate goal. He further noted that the goals often become “the number of pipes in the ground.” This is not the goal, but rather an objective used to get to the ultimate goal of improving public health. There is a need to focus on the goals and not on the intermediate steps or objectives. Jacangelo further suggested that putting pipes in the ground is a tactic by which to accomplish an objective or a goal. There is a well-developed science of program evaluation, and water researchers in the field need to learn from these scientists, noted Moe. If researchers are going to change the outcome of interest, then they need to invest time to determine how to measure it.

Finally, Moe noted that public health practitioners need to think broadly about health goals when providing safe drinking water, to include not only diarrhea or weight and height, but also well-being. For example, when practitioners look at the impact of water and sanitation on women, it is more than health. It is quality of life, well-being, and opportunities to pursue other activities, such as getting an education and earning income. Improving literacy can be just as important as decreasing the rate of diarrheal disease.

Barriers to Evaluation

There is an inherent conflict in resource allocation that may be one of the largest barriers to complete evaluation, noted Moe. For example, a nongovernmental organization may feel compelled to spend its funds on implementing a solution in another community that does not have access to safe drinking water instead of evaluating a current strategy. The danger in this approach is that scientists, as noted above, may not understand which components are vital to the success of the intervention strategy and which components are not necessary. If implementing agencies spent the money and the resources on evaluation, then they would have evidence of success, and it would inform the strategies for future interventions.

The timing of an evaluation can be an additional barrier. For example, Moe noted that she had been asked to evaluate a program after a nongovernmental organization had built 3,000 units of a particular intervention. She noted that it is hard to correct a problem or an intervention strategy when there is already an investment in 3,000 of an item. It therefore becomes a balancing act to put in place a certain number of implementations to provide statistical power for evaluation, but not wait too long after spending these resources on a strategy that is ineffective. However, Moe asserted, the main problem is that monitoring and evaluation is an afterthought and not an integral part of an intervention strategy. Until funders recognize the importance of measuring the right objectives and invest money in this area, more cohesive programs are not feasible.

COMMUNITY-BASED EVALUATION AND PARTICIPATION

A central theme during the discussion was the role of the community in providing water services, which is the cornerstone of sustainability. In that spirit, one participant noted that evaluations need to be community based and not just consist of the research community evaluating the intervention. Metrics that are important to the community need to be reflected in the evaluation tool. Moe added that this underscores the need to have in-country partners who understand the culture. She said this is critical to the success of the project.

However, community engagement is important throughout the intervention. Peggye Dilworth-Anderson of the University of North Carolina at Chapel Hill noted that when practitioners go into communities, their academic training is not enough. They need a model or framework for how best to approach community members by understanding their literacy and education levels. Working with indigenous people requires the use of advisory committees made up of the individuals one wants to serve and having the gatekeepers involved up front. She noted that when she goes into a community, she does not use the word “intervention,” but rather words that resonate with the community, such as “programs” or “projects.” The community then becomes a part of framing the aims, the research questions, and the evaluation process.

Bringing in a developing country perspective, Eric Kofi Obutey of the Ghana Public Utilities Regulatory Commission noted that, in Ghana, there is an abundance of researchers conducting survey research, but these surveys do not always result in changes in the community. While these researchers are well intended, if the community does not see a benefit, it is less likely to participate. There is thus a need to ensure that the research translates into policy making and recommendations in the communities studied. Vincent Nathan of the City of Detroit Department of Environmental Affairs noted that policy relies on research and community support. The community can influence that political process, but it needs to be involved.

Dilworth-Anderson noted that community research is an iterative process: communities shape and reshape the interpretation of the results. Sometimes researchers have a rigid perspective on the outcome, but using the community-based participatory model may mean tweaking the aims and the questions as the process continues. Nathan agreed and suggested that the community should say at the very beginning whether or not it wants to do something and then decide whether it is acceptable and whether it is going to be sustainable.

Finally, Obutey noted that it is also important to change the perception of the study team. In most community work, there is certainly a concept of “us and them,” even as people try to do more community-based research. There is still the sense that the researchers “came to our neighborhood and did these things and asked us all these questions, and they left and we don’t know what happened, and we didn’t get any of the money.” In these circumstances, the community feels taken advantage of. This calls for a new collective in which community members, the private sector, and academic institutions participate equally. This has to happen at all levels and from the beginning. This type of collective can shape effective community programs, establish an effective evaluation, and ultimately shape policy to establish other programs that continue in the community long after the grant funding period ends. This type of project is difficult because it can take 6 to 8 months for a community to decide whether it really wants to participate in a program. And once the community decides to participate, it may have a different plan, and it takes time to retool a program that has already been approved by a funding agency. This added time becomes a barrier for researchers working under established grant deadlines and timelines. Some participants suggested that further discussion was needed to resolve how community engagement could be covered under a funding structure.

Copyright © 2009, National Academy of Sciences.
Bookshelf ID: NBK50762
