Croat Med J. Dec 2008; 49(6): 720–733.
PMCID: PMC2621022

Setting Priorities in Global Child Health Research Investments: Guidelines for Implementation of the CHNRI Method

Abstract

This article provides detailed guidelines for the implementation of a systematic method for setting priorities in health research investments that was recently developed by the Child Health and Nutrition Research Initiative (CHNRI). The target audience for the proposed method comprises international agencies, large research funding donors, and national governments and policy-makers. The process has the following steps: (i) selecting the managers of the process; (ii) specifying the context and risk management preferences; (iii) discussing criteria for setting health research priorities; (iv) choosing a limited set of the most useful and important criteria; (v) developing means to assess the likelihood that proposed health research options will satisfy the selected criteria; (vi) systematic listing of a large number of proposed health research options; (vii) pre-scoring check of all competing health research options; (viii) scoring of health research options using the chosen set of criteria; (ix) calculating intermediate scores for each health research option; (x) obtaining further input from the stakeholders; (xi) adjusting intermediate scores taking into account the values of stakeholders; (xii) calculating overall priority scores and assigning ranks; (xiii) performing an analysis of agreement between the scorers; (xiv) linking computed research priority scores with investment decisions; (xv) feedback and revision. The CHNRI method is a flexible process that enables prioritizing health research investments at any level: institutional, regional, national, international, or global.

Proposals for health research funding far exceed the available resources. Increasingly, there is a need to set priorities in health research investments in a fair, transparent, and systematic way. In 2005, the Child Health and Nutrition Research Initiative (CHNRI, www.chnri.org), an initiative of the Global Forum for Health Research, launched a project to develop a systematic method for setting priorities in health research investments and to apply it to global child health (1). This effort was motivated by the notion that current research investment prioritization approaches suffer from many shortcomings, which may partly be responsible for persisting high levels of mortality in children globally (2-4). The target audience for the proposed method comprises international agencies, large research funding donors, and national governments and policy-makers. The CHNRI method is a flexible process that enables prioritizing health research investments at any level: institutional, regional, national, international, or global.

Selecting managers of the process

The CHNRI method is a process managed by a relatively small team of persons. This team needs to appropriately represent the investors in health research, their interests, and their visions. Like any other form of investing, health research funding is associated with possible gains and profits, but also with risks and losses. The key concept of CHNRI’s methodology is that all health research should have a common ultimate goal: to reduce the existing burden of disease and disability and to improve health. Future reductions in the existing disease burden that result from supported health research are considered “profits.” However, because of the many uncertainties inherent to health research, many investments will never contribute to disease burden reduction sufficiently to justify the investments.

The purpose of the CHNRI priority setting method is to inform the investors in health research about the risks associated with their investments. Each research investment option needs to be judged according to a set of criteria. Those criteria assess the likelihood that a proposed research option could realistically contribute to disease burden reduction within the context in which the investments are taking place (4).

Specifying context and risk management preferences

Priority setting in health research investments is not an abstract, theoretical exercise with a single possible correct outcome, such as a mathematical problem. It is a process that occurs within complex circumstances of the real world. The decisions will, therefore, strongly depend on the context in which the process takes place and on risk preferences of the investors.

At this point, a small group of process managers (who represent the investors) needs to specify the context and their risk preferences. The context is specified by thoroughly discussing and carefully defining the following: (i) context in space; (ii) disease, disability, and death burden; (iii) context in time; (iv) stakeholders; and (v) risk management preferences. Box 1 provides guidelines on how this should be done.

Box 1Guidelines on defining the context in which research priorities will be set

(i) Context in space: What is the population in which the investments in health research should contribute to disease burden reduction and improved health? (eg, all developing countries/all children under 5 years of age/people exposed to a specific risk factor);

(ii) Disease, disability, and death burden: What is known about the burden of disease, disability, and death that will be addressed by supported health research? Can it be measured and quantified (eg, in disability-adjusted life years – DALYs – or in some other way)?

(iii) Context in time: In how many years are the first results expected (in terms of reaching the endpoints of health research, translating and implementing them, which is then expected to achieve detectable disease burden reduction)?

(iv) Stakeholders: Who are the main groups in the society whose values and interests should be respected in setting health research investment priorities?

(v) Risk management preferences: What will be the investment strategy in health research with respect to risk preferences? Will all the funding support a single (or a few) expensive high-risk high-profit research options (eg, vaccine development), or will the risk be balanced and diversified across many research options with different levels of “risks” and “profits” associated with them?

Discussing criteria for setting health research priorities

There are a large number of nearly independent criteria that can be used to discriminate between any two competing “health research investment options,” giving one of them preference over the other. The central challenge is that decisions on investment priorities based on different criteria will necessarily conflict with each other. This means that, when choosing between any two proposed research options, some criteria will give preference to one of them, while others will prefer the other.

At this point, managers of the priority setting process should try to list possible criteria appropriate to their specific context. Box 2 provides a list of criteria that can serve as an example and starting point. There is no real limit to the number of priority setting criteria that may seem appropriate in different contexts. However, as more criteria are added to the list, they will begin to overlap with those already listed, so their potential usefulness as independent criteria will soon begin to decrease.

Box 2Examples of the possible criteria which can be used for setting priorities in health research investments

• Answerability? (some health research options will be more likely to be answerable than the others)

• Attractiveness? (some health research options will be more likely to lead to publications in high-impact journals)

• Novelty? (some health research options will be more likely to generate truly novel knowledge that does not yet exist)

• Potential for translation? (some health research options will be more likely to generate knowledge that will be translated into health intervention)

• Effectiveness? (some health research options will be more likely to generate/improve truly effective health interventions)

• Affordability? (the translation or implementation of knowledge generated through some health research options will not be affordable within the context)

• Deliverability? (some health research options will lead to/impact health interventions that will not be deliverable within the context)

• Sustainability? (some health research options will lead to/impact health interventions that will not be sustainable within the context)

• Public opinion? (some health research options will seem more justified and acceptable to the general public than others)

• Ethical aspects? (some health research options will be more likely to raise ethical concerns than the others)

• Maximum potential impact on burden? (some health research options will have a theoretical potential to reduce much larger portions of the existing disease burden than the others)

• Equity? (some health research options will lead to health interventions that will only be accessible to the privileged in the society/context, thus increasing inequity)

• Community involvement? (some health research options will have more additional positive side-effects through community involvement)

• Cost and feasibility? (all other criteria being equal, some research options will still require more funding than the others and thus be less feasible investments)

• Likelihood of generating patents/lucrative products? (some research options will have greater likelihood of generating patents or other potentially lucrative products, thus promising greater financial return on investments, regardless of their impact on disease burden)

Choosing a limited set of the most useful and important criteria

In this step, managers of the priority setting process need to select, from the longer list, a set of priority setting criteria that is sufficiently informative to discriminate between the competing research options. Figure 1 shows an example of how this can be done. Competing research options are expected to initially generate new knowledge, which then needs to be translated into a health intervention. This translation may lead either to the improvement of an existing intervention or to the development of a new one. The implementation of that intervention will eventually reduce disease burden, which is the ultimate aim of any health research investment (Figure 1).

Figure 1
A simple framework developed by Child Health and Nutrition Research Initiative, which identifies some of the apparent criteria that can be used for setting priorities between the proposed health research options.

The criteria that assess the likelihood of progress through this simple framework are: (i) answerability, (ii) effectiveness, (iii) deliverability, (iv) maximum potential for disease burden reduction, and (v) the effect on equity. CHNRI recommends that these five criteria be used in almost all contexts. Some of them may even be merged – eg, the “effectiveness” and “deliverability” criteria could be merged in some contexts into a more general criterion called “usefulness.” Also, the “maximum potential for disease burden reduction” and “effect on equity” criteria can be merged into a more general criterion called “impact.”

Additional criteria (those shown in Box 2, or any other useful criteria) may be added to the ones suggested here, if the management team decides that they are important within their context. It is entirely up to the team of process managers to decide on the final list of criteria that will be useful for their particular exercise in priority setting in health research investments. Examples of how this was achieved in practice can be found in published accounts of implementation (5,6).

Developing means to assess the likelihood that proposed health research options will satisfy selected criteria

After the managers have selected the criteria, they should invite a group of technical experts. The experts take the process through the next three steps (listing, checking, and scoring research options), working closely with the management team.

The first task for the technical experts is to develop a set of three simple questions to address each of the selected criteria. These questions should jointly help to assess the likelihood that proposed research options will satisfy the selected criteria. It is recommended that the questions be simple, sufficiently informative, easily understandable, and answerable simply as “yes” or “no.” Box 3 shows an example of how the questions were developed in some of the conducted exercises to address the set of five criteria: answerability, effectiveness, deliverability, maximum potential for disease burden reduction, and the effect on equity (5,6).

Box 3Example of yes/no questions that can be used to assess likelihood whether proposed health research options satisfy the chosen priority-setting criteria

CRITERION 1: ANSWERABILITY

1. Would you say the research question is well framed and endpoints are well defined?

2. Based on: (i) the level of existing research capacity in proposed research and (ii) the size of the gap from current level of knowledge to the proposed endpoints; would you say that a study can be designed to answer the research question and to reach the proposed endpoints of the research?

3. Do you think that a study needed to answer the proposed research question would obtain ethical approval without major concerns?

CRITERION 2: EFFECTIVENESS

1. Based on the best existing evidence and knowledge, would the intervention which would be developed/improved through proposed research be efficacious?

2. Based on the best existing evidence and knowledge, would the intervention which would be developed/improved through proposed research be effective?

3. If the answers to either of the previous two questions are positive, would you say that the evidence upon which these opinions are based is of high quality?

CRITERION 3: DELIVERABILITY

1. Taking into account the level of difficulty with intervention delivery from the perspective of the intervention itself (eg, design, standardizability, safety), the infrastructure required (eg, human resources, health facilities, communication and transport infrastructure) and users of the intervention (eg, need for change of attitudes or beliefs, supervision, existing demand), would you say that the endpoints of the research would be deliverable within the context of interest?

2. Taking into account the resources available to implement the intervention, would you say that the endpoints of the research would be affordable within the context of interest?

3. Taking into account government capacity and partnership requirements (eg, adequacy of government regulation, monitoring and enforcement; governmental intersectoral coordination, partnership with civil society and external donor agencies; favorable political climate to achieve high coverage), would you say that the endpoints of the research would be sustainable within the context of interest?

CRITERION 4: MAXIMUM POTENTIAL FOR DISEASE BURDEN REDUCTION

1. Taking into account the results of conducted intervention trials or, for new interventions, the proportion of avertable burden under an ideal scenario, would you say that successfully reaching the research endpoints would have the capacity to remove 5% of the disease burden or more?

2. To remove 10% of disease burden or more?

3. To remove 15% of disease burden or more?

CRITERION 5: EFFECT ON EQUITY

1. Would you say that the present distribution of the disease burden affects mainly the underprivileged in the population?

2. Would you say that the underprivileged would be the most likely to benefit from the results of the proposed research after its implementation?

3. Would you say that the proposed research has the overall potential to improve equity in disease burden distribution in the long term (eg, 10 years)?

Systematic listing of a large number of proposed health research options

Research priorities will usually be set under two types of circumstances. In the first scenario, a funding agency/government will aim to distribute its annual budget in the most rational way, without having already received any specific funding proposals. It will need to define its funding priorities and launch the calls for research proposals, while deciding in advance how much funding will be made available for each call.

In the second scenario, an existing source of funding (such as a donor agency or a national ministry) will receive demands for research support from many research groups. The sum of their demands will greatly exceed the available funds.

In both scenarios, it is useful to systematically list (or categorize) all the competing research options. In the first scenario, this systematic list will ensure that all apparent research options are given a fair chance to compete against each other. In the second scenario, the systematic categorization will expose avenues of research in which there is fierce competition and those in which there seems to be no research capacity or research interest.

The number of possible health research options is endless, limited only by the imagination of researchers. The theoretical framework that enables systematic listing of such an endless spectrum of options is rather complex (1,4). However, the CHNRI methodology developed a process for systematically listing all competing research options that respects that theoretical framework but is also practical and intuitive. In both scenarios, the way we propose that all competing health research options should be listed (or categorized, if they have already been proposed for funding) is shown in Table 1.

Table 1
Framework developed by Child Health and Nutrition Research Initiative that enables systematic listing of a very large number of proposed research options and research questions.

Health research can be described at different levels of “depth.” The most fundamental categorization of all health research is shown in the first column, which we call “research domains.” There are three main domains: (i) health research to assess the burden of a health problem (disease) and its determinants; (ii) health research to improve the performance of existing capacities to reduce the burden; and (iii) health research to develop new capacities to reduce the burden. All imaginable health research options should fall under one of these “domains.”

The next level of “depth” is the broad “research avenue,” shown in the second column. Within each avenue, a large number of “research options” can be envisaged (the third column). In practice, prioritization in health research investments will usually be made between competing “research options,” as they correspond to 3-5-year research projects. This is a concept that both investors and researchers are familiar with, and the level at which investment prioritization already takes place (competition for research grants).

Finally, there is an even more specific level of “depth” of health research, which we call “research questions.” Each “research option” will propose to answer a number of “research questions.” These are very specific lines of research that correspond to a title of a single research article, which is another concept the researchers are familiar with.

In some instances, eg, when the process is conducted as a mainly theoretical exercise to identify the most important specific questions that should be investigated within a given context, the prioritization using CHNRI methodology can be performed between the competing research questions.

Table 1 is an example of how research options (or questions) should be categorized before they are scored against the relevant criteria in order to be prioritized for investments. It should also be stressed that this step is not really necessary to identify priorities and can even be skipped, but it has an advantage of ensuring that the process is systematic, that it gives a fair chance to all types of health research, and that it exposes areas of fierce competitiveness and also of low interest and capacity.
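The four levels of “depth” described above (domains, avenues, options, and questions) can be sketched as a simple nested structure. The example below is purely illustrative: the avenue, option, and question names are hypothetical placeholders, not entries from an actual CHNRI exercise, and the domain labels are abbreviated from the text.

```python
# A minimal sketch of the CHNRI listing hierarchy. All avenue, option, and
# question names below are hypothetical examples for illustration only.
research_portfolio = {
    "assess burden and determinants": {                  # research domain
        "epidemiology of neonatal infections": [         # research avenue
            {
                "option": "community-based surveillance of neonatal sepsis",
                "questions": [
                    "What is the incidence of neonatal sepsis in rural districts?",
                    "Which pathogens dominate in community-acquired cases?",
                ],
            },
        ],
    },
    "improve performance of existing capacities": {},
    "develop new capacities": {},
}

# Such a structure exposes avenues with fierce competition (many options)
# and avenues with no listed research capacity or interest (empty).
for domain, avenues in research_portfolio.items():
    n_options = sum(len(options) for options in avenues.values())
    print(domain, "->", n_options, "research option(s) listed")
```

A categorization like this makes it easy to check that every proposed option sits under exactly one domain and one avenue before scoring begins.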

Pre-scoring check of all competing health research options

Once all competing research options have been systematically listed, technical experts should read them all again very carefully before scoring. The experts need to ensure that scoring all proposed research options against all proposed criteria is possible. If problems are envisaged, research options should be reworded to enable their structured scoring by the experts.

The easiest way to do this is by keeping in mind the simple framework shown in Figure 1. The research options (or questions) must always state what new knowledge they intend to generate. Also, it should be possible to envisage an uninterrupted link between this knowledge and its proposed effect on disease burden reduction through translation and implementation.

Scoring of health research options using the chosen set of criteria

At this stage, technical experts are expected to use their knowledge and experience to systematically score research options against the criteria chosen by the process managers. The more experts who agree to participate in the scoring, the more reliable the outcome of the process. The experts should score all research options independently of each other. Each technical expert scores each research option by answering three questions for each criterion about that particular option. The answers to each question are simply:

– “I agree” (1 point), or

– “I disagree” (0 points).

There will be cases in which technical experts do not feel informed enough to answer some questions. In all such cases, they should leave those answers blank (no answer). Furthermore, when technical experts are sufficiently informed to answer a question but can neither agree nor disagree, they may enter a score of 0.5 (half a point). In this way, such a choice is distinguished from “no answer.”

When finished with scoring, each technical expert should submit his or her scores to the process management team, independently of the other experts. This ensures that the overall scores represent a measure of the experts’ collective optimism toward each scored research option.
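As an illustration, one expert’s answers for a single research option could be encoded as follows. This is a minimal sketch: the Python representation and the constant names are our own, while the numeric coding and the five criteria follow the text.

```python
# Encoding of one technical expert's scoring sheet for a single research
# option: five criteria, three yes/no questions each. Coding follows the
# CHNRI scheme: 1 = "I agree", 0 = "I disagree", 0.5 = informed but
# undecided, None = left blank (not informed enough to answer).
AGREE, DISAGREE, UNDECIDED, BLANK = 1.0, 0.0, 0.5, None

expert_sheet = {
    "answerability":                [AGREE, AGREE, UNDECIDED],
    "effectiveness":                [AGREE, BLANK, DISAGREE],
    "deliverability":               [DISAGREE, AGREE, AGREE],
    "max disease burden reduction": [AGREE, DISAGREE, UNDECIDED],
    "effect on equity":             [AGREE, AGREE, BLANK],
}

# Each expert submits one such sheet per research option, independently of
# the other experts; the management team then pools all submitted sheets.
points = sum(a for row in expert_sheet.values() for a in row if a is not None)
print(points)  # 9.0 informed points on this sheet
```

Keeping blanks as `None` rather than zero matters for the next step, where blanks are excluded from both the numerator and the denominator of the intermediate scores.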

Calculating intermediate scores for each health research option

Each research option first receives its intermediate scores. The number of intermediate scores equals the number of selected criteria, as each intermediate score informs process managers of the likelihood that the research option would satisfy a specific criterion (eg, answerability, effectiveness, equity). Once all the scores from all technical experts are submitted to the process managers, intermediate scores for each criterion can easily be computed. Table 2 shows how this is done. In this simple example, 12 competing research options (options 1-12) are scored against a single criterion (criterion 1) by three scoring technical experts (TE1-3), based on three related questions (questions 1-3). In reality, there will be more research options, criteria, and scoring technical experts, but the principles of calculating the intermediate scores remain exactly the same as shown in Table 2.

Table 2
An example of scoring of 12 hypothetical proposed research options by 3 technical experts (TE1-TE3) using a single criterion and computation of the intermediate score for that criterion.

The intermediate scores are computed by adding up all the informed (ie, non-blank) answers (“1,” “0,” or “0.5”). The sum is then divided by the number of informed answers received. Blanks are left out of the calculation in both the numerator and the denominator. All intermediate scores for all research options will therefore take a value between 0 and 100%. In this way, the methodology deals with missing answers, because not all technical experts can be expected to be sufficiently informed about each possible research option to score it against each possible criterion.
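The calculation just described can be sketched in a few lines of Python. The helper below is illustrative (not part of any CHNRI toolset), and the pooled answers are made-up values.

```python
def intermediate_score(answers):
    """Compute one intermediate score from the pooled answers (1, 0, 0.5,
    or None for blank) of all experts to the three questions for a single
    criterion. Blanks are excluded from numerator and denominator."""
    informed = [a for a in answers if a is not None]
    if not informed:
        return None  # no expert felt informed enough to answer
    return sum(informed) / len(informed)

# Hypothetical pooled answers: 3 experts x 3 questions for one criterion
# on one research option (two answers were left blank).
pooled = [1, 0, 0.5, None, 1, 1, 0.5, None, 0]
print(round(intermediate_score(pooled), 3))  # 4.0 points / 7 informed answers = 0.571
```

Expressed as a percentage, this research option would receive an intermediate score of about 57% for that criterion.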

In the hypothetical case shown in Table 2, the values for intermediate score 1 (for criterion 1) ranged from 31% (option 11) to 78% (option 4). These figures now represent a measure of collective optimism among technical experts toward the likelihood that each of the proposed research options would satisfy the priority-setting criterion 1. They can now be prioritized and ranked according to this criterion based on the scores they received. Some of the expected advantages of this approach in comparison with other priority-setting methodologies are its transparency, limitation of personal biases through a structured survey, a systematic process with very specific outcomes and intuitive quantitative scores (3,4). The concerns over subjectivity of this approach are discussed in the concluding paragraph, where possible biases and limitations of the methodology are addressed.

Obtaining further input from stakeholders

One of the biggest challenges in prioritizing health research investments is involving relevant stakeholders and the wider community in the process (7). The term “stakeholders” refers to all individuals and/or groups who have an interest in the prioritization of health research investments. Stakeholders therefore comprise a large and very heterogeneous group. Examples include research funding agencies (eg, governmental agencies, private organizations, public-private partnerships, international and regional organizations, taxpayers of a certain region), direct recipients of the funding (eg, researchers and research institutions), users of the research (eg, policy makers, industry, or the general population of a country), and any other group with an interest in the prioritization process (eg, advocacy groups, journalists and media, lawyers, economists, experts in ethics, and many others). To ensure the legitimacy and fairness of priority setting decisions in health research investments, the involvement of a wide range of stakeholders is recommended.

Stakeholders from the wider community are usually not included in the process because they lack sufficient technical expertise. The CHNRI methodology developed a strategy for involving stakeholders in the process regardless of their technical expertise. This is done by modifying the intermediate scores (which are entirely based on the structured input from technical experts) according to the stakeholders’ system of values. In this way, the final research priority score for each research option contains input from both the technical experts and the stakeholders. Although the stakeholders do not have enough technical expertise to score research options against the chosen priority-setting criteria, they can still score the criteria themselves. This is expected to reveal how much each criterion matters to them relative to the others. In this way, the wider group of stakeholders may still substantially influence the final outcome of the process. The stakeholders can: (i) define a minimal score (threshold) for each intermediate score (criterion) that needs to be achieved for any research option to be considered a funding priority; and (ii) allocate different weights to the intermediate scores, so that the overall score is not a simple arithmetic mean of the intermediate scores, but rather a weighted mean that reflects the relative values assigned to each criterion by the stakeholders.

Thresholds will prevent investments in research options that dramatically fail any of the criteria to which stakeholders are particularly sensitive, regardless of how well these research options scored against other criteria. Weights will ensure that intermediate scores relating to priority setting criteria seen as more important influence the value of the final score more than the others. Values for thresholds and weights can be obtained through a simple survey conducted among an appropriate group of representatives of the stakeholders (the “larger reference group”). Table 3 shows an example. Further details are available in the article by Kapiriri et al (8).

Table 3
An example of a simple questionnaire that can be used to survey different stakeholders and obtain their input into the Child Health and Nutrition Research Initiative process

Adjusting intermediate scores taking into account the values of stakeholders

The managers of the process need to compute average thresholds and weights for each criterion based on the suggestions obtained from the survey of the larger reference group of stakeholders. They need to check whether all intermediate scores for all research options pass all the suggested thresholds. Research options that fail any of the thresholds should be disqualified at this stage and not considered funding priorities.

Then, every intermediate score received by each research option should be multiplied by the average weight (amount of assigned US$) suggested by the larger reference group of stakeholders. The products represent “weighted intermediate scores.” These scores will be used to compute an overall score (see next step), which will reflect both the input from technical experts and the stakeholders.

The actual size and composition of the larger reference group of stakeholders will depend on the context. A small reference group of stakeholders is appropriate when several major donors to a health research-funding organization want to influence the priority setting process. They can set very specific thresholds and weights for each criterion. A large and diverse reference group of stakeholders is more appropriate when setting priorities for health research on problems of regional or global importance.

Calculating overall priority scores and assigning ranks

Intermediate scores for each research option, based on the scores received from technical experts, will range between 0% and 100%. At this point, the managers of the process can simply agree that all the criteria they initially chose for priority setting are equally important (because all of them are needed to get from new knowledge to a decrease in disease burden). In that case, the overall research priority score (RPS) is a simple mean of all intermediate scores.

In the hypothetical example shown in Figure 2, a research option received five intermediate scores from technical experts: 60%, 80%, 70%, 60%, and 80%. Its overall RPS can then be computed as follows:

Figure 2
Calculation of overall research priority score (RPS) based on 5 hypothetically chosen priority setting criteria (C1-C5); values W1-W5 are factors by which each criterion is weighted (computed as average weights for each criterion obtained from the survey ...
RPS = (60% + 80% + 70% + 60% + 80%)/5 = 70%

However, if stakeholders were also involved and a survey was undertaken among them to include their values in the process, it could provide hypothetical average thresholds for the five criteria (50%, 50%, 40%, 20%, and 60%, respectively), as well as hypothetical average weights (US $15, $15, $15, $30, and $25, respectively). In this case, the initial check establishes that all thresholds have been passed and that the research option from this example remains in contention for funding. The weights are then applied and the overall RPS is corrected to 69.5%. After computing the weighted RPS for all research options that passed all the thresholds, the options can be ranked by priority according to their achieved RPS.
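The threshold check and the weighted mean can be sketched as follows, using the hypothetical values from this worked example. The function name and interface are our own, not part of the CHNRI method itself.

```python
def weighted_rps(scores, thresholds, weights):
    """Given one research option's intermediate scores (0-1), per-criterion
    thresholds, and stakeholder weights (eg, assigned US$), return the
    weighted research priority score, or None if any threshold is failed
    (ie, the option is disqualified)."""
    if any(s < t for s, t in zip(scores, thresholds)):
        return None
    return sum(s * w for s, w in zip(scores, weights)) / sum(weights)

# Hypothetical values from the worked example in the text:
scores = [0.60, 0.80, 0.70, 0.60, 0.80]       # intermediate scores
thresholds = [0.50, 0.50, 0.40, 0.20, 0.60]   # stakeholder thresholds
weights = [15, 15, 15, 30, 25]                # US$ per criterion, sum 100
print(round(weighted_rps(scores, thresholds, weights), 3))  # 0.695
```

With equal weights the same scores give the unweighted mean of 0.70; the stakeholder weights shift the result to 69.5%, matching the correction described above.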

Performing an analysis of agreement between scorers

Scoring performed by technical experts is both independent and transparent to process managers. Therefore, the CHNRI methodology offers the potential to expose the points of the greatest agreement and the greatest controversy among the experts. Identification of these points should allow more focused discussion on the priorities after the completion of the process. In this way, in addition to the information on how each research option complies with the chosen priority-setting criteria, investors and policy makers are informed about the amount of agreement between the experts on each research option.

The level of agreement can be assessed for each specific research option using the kappa (κ) agreement statistic. This calculation becomes very complicated when the number of scorers exceeds 2 and the number of rating categories exceeds 2. We suggest that all observations where the expert reviewer chose 0.5 (“knowledgeable, but the answer is indeterminate”) first be recoded as missing values, restricting the number of rating categories to 2 and making the calculation of the κ statistic more meaningful. Choosing 0.5 is nearly equivalent to leaving the answer field blank, since in both situations the expert reviewer reveals that his or her answer to the question is unknown.
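For two scorers with binary ratings, the recoding and κ calculation suggested above can be sketched as follows (a hypothetical illustration; the function and the example ratings are ours):

```python
def cohen_kappa(r1, r2):
    """Cohen's kappa for two raters with binary (0/1) ratings.
    Pairs where either rating is None (recoded from 0.5) are dropped."""
    pairs = [(a, b) for a, b in zip(r1, r2) if a is not None and b is not None]
    n = len(pairs)
    po = sum(a == b for a, b in pairs) / n     # observed agreement
    p1a = sum(a for a, _ in pairs) / n         # rater 1's rate of scoring 1
    p1b = sum(b for _, b in pairs) / n         # rater 2's rate of scoring 1
    pe = p1a * p1b + (1 - p1a) * (1 - p1b)     # agreement expected by chance
    return (po - pe) / (1 - pe)

recode = lambda x: None if x == 0.5 else x     # 0.5 -> missing value
rater1 = [recode(x) for x in [1, 0, 0.5, 1, 1, 0, 1, 0]]
rater2 = [recode(x) for x in [1, 0, 1,   1, 0, 0, 1, 1]]
print(round(cohen_kappa(rater1, rater2), 2))   # 0.42
```

With more than two scorers, a generalization such as Fleiss' kappa would be needed instead; the reference cited in the text (9) discusses the interpretation issues in more detail.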

A κ value should be computed for each research question as a measure of the level of agreement among the scorers. When the number of scorers varies across subjects, statistical significance testing cannot be performed. The interpretation of κ values is somewhat arbitrary, but the greater the κ, the greater the level of agreement. Further details on the calculation of κ can be found elsewhere (9).

Linking computed research priority scores with investment decisions

There are two main scenarios in which process managers will link research priority scores with investment decisions. The first one is designing an investment strategy before actual investments are made. The second one is modifying an already existing investment portfolio to reduce risk and/or increase returns on investments.

In the first scenario, a donor agency or organization will conduct an informative CHNRI process to define its priorities before it commits funding and launches calls for grant proposals. In this case, we argue that investing in health research is fundamentally not much different from investing in stocks of different companies on the stock market. Rather than making investment decisions by comparing companies, investors in health research will be choosing between many groups of health researchers and their research grant proposals. Seen in this way, investors in health research should learn from the vast experience and literature on investment in financial markets (10).

Among the many analogies, a “high-risk” health research investment is one with very uncertain (or unlikely) answerability, transferability (usefulness), or potential impact on disease burden reduction. A “high-profit” health research investment is one offering a very large reduction in disease burden if successful. There will be investments in health research that offer lower “profits” but also at lower “risk” (such as research on improving existing interventions), as well as “high-risk high-profit” options (eg, research to develop new, currently non-existent vaccines against malaria or AIDS).

There is always a risk and a potential profit associated with any investment. The risk preference of the investors will therefore represent an important determinant of their investment strategy. For a rational investor, the probability of success always needs to be balanced against the probability of failure. Both risk-seeking and risk-averse preferences carry costs in terms of reduced expected profits, which is easy to demonstrate with standard expected utility theory (11). While this theory does not normatively qualify some preferences as better than others, a common suggestion is that rational actors who are sufficiently large for risk pooling should base investment decisions on risk-neutral preferences, as this strategy leads to the highest profits in the long run (11,12). This implies that an unbalanced investment portfolio, in which a large majority of investments are in “low-risk low-profit” health research options or in “high-risk high-profit” options, is neither rational nor responsible. However, because there is very little accountability for poor investment decisions in health research and their evaluation in terms of benefits for society, we are witnessing an increasing trend of the global research portfolio becoming unbalanced, favoring “high-risk high-profit” health research options (12).
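A toy expected-utility calculation illustrates the cost of a non-neutral risk preference. The numbers and the square-root utility function are purely illustrative assumptions, not taken from the CHNRI guidelines.

```python
import math

# Two hypothetical research investments:
# A: a certain "profit" of 50 (eg, burden-reduction units);
# B: a 50% chance of 120, 50% chance of 0 ("high-risk high-profit").
p_success = 0.5
profit_a, profit_b = 50, 120

ev_a = profit_a              # expected value of A = 50
ev_b = p_success * profit_b  # expected value of B = 60: risk-neutral actor picks B

u = math.sqrt                # a concave utility function models risk aversion
eu_a = u(profit_a)                 # ~ 7.07
eu_b = p_success * u(profit_b)     # ~ 5.48: risk-averse actor picks A instead
# By picking A, the risk-averse actor forgoes 10 units of expected profit;
# over many pooled investments this cost compounds, which is why a funder
# large enough to pool risk is usually advised to score risk-neutrally.
```

The symmetric argument applies to risk-seeking preferences, which overpay for long-shot options with low expected value.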

In an alternative scenario, an international funding agency or national government has already been funding health research for several years and would like to improve the mix of supported research options. In this case, a classical “program budgeting and marginal analysis” would be appropriate: (i) identifying funding cut-off points and RPSs for funded research options; (ii) comparing research options that have no allocated funding to existing funding programs; (iii) assessing the relative value of each priority using the same criteria; (iv) releasing resources from existing programs to support additional new priority research areas (13). All decisions that need to be made within this scenario are based on: (i) defining the RPS and cost of each research option, either already supported or proposed as an alternative; (ii) maximizing the sum of RPS values of supported research options within a given fixed budget; (iii) shifting resources from existing into new research options whenever the sum of RPS values within an existing program is lower than that of an alternative.
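Maximizing the sum of RPS values within a fixed budget is a small knapsack-style selection problem. The sketch below uses exhaustive search over a tiny invented portfolio; the option names, costs, and RPS values are hypothetical, chosen only to show how resources can shift from existing to proposed options.

```python
from itertools import combinations

options = {                 # name: (cost in $ million, RPS in %)
    "A (existing)": (40, 55),
    "B (existing)": (30, 60),
    "C (proposed)": (30, 75),
    "D (proposed)": (20, 70),
}
budget = 70

# Exhaustive search: fine for a handful of options; a real exercise
# with many options would use dynamic programming instead.
best_set, best_rps = None, -1
for r in range(1, len(options) + 1):
    for subset in combinations(options, r):
        cost = sum(options[name][0] for name in subset)
        rps = sum(options[name][1] for name in subset)
        if cost <= budget and rps > best_rps:
            best_set, best_rps = subset, rps

print(sorted(best_set), best_rps)  # ['C (proposed)', 'D (proposed)'] 145
```

In this contrived example the two proposed options jointly outscore any affordable mix involving the existing programs, so the analysis would recommend releasing resources from A and B.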

Feedback and revision

The CHNRI methodology is a process that does not end with the definition of health research priorities and the allocation of funding. The investments are expected to lead to changes in the context over time in terms of disease burden. Other components of the context may also change substantially, from the stakeholders’ system of values to limits in space or risk management preferences. All these changes can be accounted for by: (i) adding further research options to the list; (ii) adding further criteria; (iii) re-scoring all research options in the redefined context; and (iv) revising thresholds and weights placed on intermediate scores. In this way, the research investment portfolio will continuously be adjusted to the context, aiming to reduce the existing disease burden most cost-effectively and in an equitable way.

Conclusion

Some of the possible advantages of CHNRI’s research priority-setting methodology include: (i) transparent presentation of the context and criteria in the priority-setting process; (ii) management of the process by investors themselves over its entire duration; (iii) a structured way of scoring, which should limit specific interests or personal biases; (iv) involvement of non-technical stakeholders in priority setting; (v) the flexibility of the process provided by adding or removing criteria; (vi) the potential to revise weights and thresholds based on changes in the context; (vii) simple presentation of the strengths and weaknesses of each competing research option; (viii) the possibility to rank research options according to each individual criterion; (ix) a simple quantitative outcome that is easy to present, justify, and explain to policy-makers; (x) exposure of the points of greatest agreement and controversy. Although the proposed guidelines are based on wide consultations and extensive review and assessment of previous approaches, the CHNRI method will eventually benefit from independent validation in various settings in the future. It will be even more challenging to define the real impact of the process on shifting global research priorities. That is the ultimate goal of the CHNRI method and the one that will leverage support for health research to achieve greater impact on disease burden in the real world.

Still, the methodology is not free of several possible biases. Although the advantages mentioned above represent a serious attempt to deal with many issues inherent in the highly complex process of research investment priority setting, there are still concerns over the validity of the CHNRI approach and related biases. One of them relates to the fact that many possible good ideas (“research investment options”) may not have been included in the initial list of research options scored by the experts, and to a potential bias toward items that get the greatest press. The spectrum of research investment options listed initially in this exercise was derived through a systematic process, but it is not endless and cannot ever cover every single research idea. Specific research methodologies (eg, randomized clinical trials) are not mentioned because the research questions listed in that exercise are unlikely to be answered by a single well-defined study. Therefore, the CHNRI process aims to achieve reasonable coverage of the spectrum of possible ideas.

Another concern over the CHNRI process is that its end product represents a possibly biased opinion of a very limited group of involved people. In theory, a chosen group of experts can have biased views in comparison with any other potential group of experts. However, the number of people who possess enough experience, expertise, and knowledge of the issue to be able to judge a very diverse spectrum of research questions is rather limited. If one thinks of this “pool of technical experts” as the whole population that could theoretically be used to solicit expert opinion on the questions that need to be asked, we then propose selecting a “sample” from that population, based on their track record. The larger and more diverse this sample is, the less likely it is that there would be considerable differences in the composition of the initial list of questions (or the results of the scoring process) if some other group of experts had been selected.

Acknowledgment

Child Health and Nutrition Research Initiative (CHNRI) of the Global Forum for Health Research was supported by The World Bank in conducting this work.

References

1. Rudan I, El Arifeen S, Black RE. A systematic methodology for setting priorities in child health research investments. In: A new approach for systematic priority setting. Dhaka: Child Health and Nutrition Research Initiative; 2006.
2. Leroy JL, Habicht JP, Pelto G, Bertozzi SM. Current priorities in health research funding and lack of impact on the number of child deaths per year. Am J Public Health. 2007;97:219–23. doi: 10.2105/AJPH.2005.083287.
3. Rudan I, Gibson J, Kapiriri L, Lansang MA, Hyder AA, Lawn J, et al. Setting priorities in global child health research investments: assessment of principles and practice. Croat Med J. 2007;48:595–604.
4. Rudan I, El Arifeen S, Black RE, Campbell H. Childhood pneumonia and diarrhoea: setting our priorities right. Lancet Infect Dis. 2007;7:56–61. doi: 10.1016/S1473-3099(06)70687-9.
5. Tomlinson M, Chopra M, Sanders D, Bradshaw D, Hendricks M, Greenfield D, et al. Setting priorities in child health research investments for South Africa. PLoS Med. 2007;4:e259. doi: 10.1371/journal.pmed.0040259.
6. Chisholm D, Flisher AJ, Lund C, Patel V, Saxena S, Thornicroft G, et al. Scale up services for mental disorders: a call for action. Lancet. 2007;370:1241–52. doi: 10.1016/S0140-6736(07)61242-2.
7. Ham C, Robert G, editors. Reasonable rationing: international experience of priority setting in health care. New York (NY): McGraw-Hill; 2003.
8. Kapiriri L, Tomlinson M, Chopra M, El Arifeen S, Black RE, Rudan I. Setting priorities in global child health research investments: addressing values of stakeholders. Croat Med J. 2007;48:618–27.
9. Cicchetti DV, Feinstein AR. High agreement but low kappa: II. Resolving the paradoxes. J Clin Epidemiol. 1990;43:551–8. doi: 10.1016/0895-4356(90)90159-M.
10. Graham B. The intelligent investor. New York (NY): Harper Collins; 2003.
11. Elbasha EH. Risk aversion and uncertainty in cost-effectiveness analysis: the expected-utility, moment-generating function approach. Health Econ. 2005;14:457–70. doi: 10.1002/hec.915.
12. Arrow KJ, Lind RC. Uncertainty and the evaluation of public investment decisions. Am Econ Rev. 1970;60:364–78.
13. Mitton CR, Donaldson C. Setting priorities and allocating resources in health regions: lessons from a project evaluating program budgeting and marginal analysis (PBMA). Health Policy. 2003;64:335–48. doi: 10.1016/S0168-8510(02)00198-7.

Articles from Croatian Medical Journal are provided here courtesy of Medicinska Naklada