Participatory Logic Modeling in a Multi-Site Initiative to Advance Implementation Science

Background: It is increasingly recognized that logic models should be developed through a participatory approach that allows input from those who carry out the program being evaluated. While there are many positive examples of participatory logic modeling, funders have generally not used this approach in the context of multi-site initiatives. This article describes an instance in which the funder and evaluator of a multi-site initiative fully engaged the funded organizations in developing the initiative logic model. The focus of the case study is Implementation Science Centers in Cancer Control (ISC3), a multi-year initiative funded by the National Cancer Institute (NCI). Methods: The case study was collectively constructed by representatives of the seven centers funded under ISC3. Members of the Cross-Center Evaluation (CCE) Work Group jointly articulated the process through which the logic model was developed and refined. Individual Work Group members contributed descriptions of how their respective centers reviewed and used the logic model. Cross-cutting themes and lessons emerged through CCE Work Group meetings and the writing process. Results: The initial logic model for ISC3 changed in significant ways as a result of the input of the funded groups. Authentic participation in the development of the logic model led to strong buy-in among the centers, as evidenced by their utilization of the model. The centers shifted both their evaluation design and their programmatic strategy to better accommodate the expectations reflected in the initiative logic model. Conclusions: The ISC3 case study provides a positive example of how participatory logic modeling can be mutually beneficial to funders, grantees and evaluators of multi-site initiatives. Funded groups have important insights about what is feasible and what will be required to achieve the initiative's stated objectives.
They can also help identify the contextual factors that either inhibit or facilitate success, which can then be incorporated into both the logic model and the evaluation design. In addition, when grantees co-develop the logic model, they have a better understanding and appreciation of the funder’s expectations, and thus are better positioned to meet those expectations.


Background
Logic models are one of the most important and widely used tools in the evaluation field. A logic model depicts the program designer's expectations for the outcomes that will occur and the mechanisms or pathways through which those outcomes will occur (1,2). Logic models can be applied to a broad range of "programs," including direct service interventions, structured trainings, legislation, institutional policies, advocacy campaigns, community development initiatives and research programs (2). In addition, implementation scientists are increasingly relying on logic models to describe the expectations associated with the strategies used to implement programs (3)(4)(5). Funders also use logic models to clarify and communicate their expectations for multi-site initiatives (6-8). This article focuses specifically on the use of logic models in the context of Implementation Science Centers in Cancer Control (ISC3), a multi-year initiative funded by the National Cancer Institute (NCI) focused on identifying implementation strategies that improve cancer prevention, control, and outcomes.

Logic Model Basics
Logic models organize the program designer's expectations within a causal chain which typically includes the following domains: inputs (i.e., resources available to support a given program or study, such as human resources or finances), activities (i.e., actions taken to address the identified problem, concern, or need), outputs (i.e., products yielded from activities, including changes in knowledge and attitude, new or stronger relationships, coalition development, strategic plans, or new infrastructure for implementation), outcomes (i.e., tangible results spanning a temporal continuum and relating to the program's goals, including behavior change, policy enactment, higher functioning organizations, or improved community capacity), and impacts (i.e., the ultimate pay-offs from the outcomes, such as changes in disease morbidity and mortality). Just as importantly, logic models use arrows to indicate the causal pathways through which outcomes and impacts are expected to occur.
The W.K. Kellogg Foundation popularized the use of logic models with two guidebooks published in 1998 (9) and 2003 (10), the first of which defined a logic model as: "…a picture of how your program works: the theory and assumptions underlying the program. …This model provides a roadmap of your program, highlighting how it is expected to work, what activities need to come before others, and how desired outcomes are achieved" (9, p. 35). Such a roadmap is useful in guiding the choice of evaluation measures and methods, as well as pointing out the specific hypotheses to test (11,12).
The initial purpose motivating logic models was to ensure that program evaluations focus on the "right" outcomes and test the "right" underlying theories (i.e., those that the program designers had in mind) (10)(13)(14)(15). As evaluators began creating logic models with clients, it became apparent that this exercise brought value beyond guiding evaluation. Namely, the inquiry and conversation that go along with creating a logic model often bring clarity and specificity to the program designers' intent and assumptions.

Participatory Logic Modeling
One of the most important advances in logic modeling was expanding the set of actors engaged in creating the logic model. Initially, logic models were generally drafted by evaluators who incorporated the expectations they elicited from program designers. This approach quickly gave way to one where program developers and funders created logic models as part of the design process (either with or without the support of an evaluator). With the advent of evaluation paradigms such as Participatory Evaluation, Collaborative Evaluation and Empowerment Evaluation in the 1990s, there was widespread recognition that broader input is needed to produce valid logic models. According to the American Evaluation Association (AEA) (16), CDC (17), and the Joint Committee on Standards for Educational Evaluation (JCSEE) (18), one of the key principles of good evaluation is to "devote attention to the full range of individuals and groups invested in the program and affected by its evaluation." There are both practical and ethical reasons to engage the people and communities that are being served by a program or funding initiative when spelling out expected outcomes and causal pathways (19,20). They have a legitimate stake in determining what constitutes "success," as well as real-world knowledge as to how and under what conditions the program's outcomes are likely to occur (10). For funder-designed initiatives, the organizations that receive funding have similar expertise as well as their own distinct interests which should be reflected in the logic model (22). In addition, when program designers and funders co-develop the logic model with the people who will carry out the work, there will be greater alignment in expectations, allowing for fuller implementation (19).
The merits of participatory logic modeling have been recognized for at least two decades (23)(24)(25). One excellent example comes from Afifi et al. (19), who describe how a coalition of young people living in a Palestinian refugee camp in Lebanon designed a multi-level program to address the mental health needs of youth. The logic modeling process was an essential phase in both designing the program and determining how to evaluate it.
Although several examples of participatory logic modeling are described in the literature, they generally pertain to single program logic models rather than multi-site initiative logic models. In most funder initiatives, a small group of staff from the funding organization (e.g., the director of the initiative, an evaluation manager) develops an initial version of the logic model at the time the initiative is designed, and then this logic model is refined once an external evaluator is hired, usually through a collaborative process involving the funder and the evaluator. The initiative logic model is often shared with the groups that are funded under the initiative in order to provide a clearer sense of the funder's intent and assumptions, but there generally are no opportunities for grantees to influence the logic model.
In some multi-site initiatives, the evaluation approach is described as "participatory" (26, 27), but the forms of participation are generally downstream from the logic modeling process, such as deciding which information to collect, providing data, administering surveys to program participants, and being an audience for findings from the evaluation. Rarely do funded groups have the opportunity to collaborate with the funder and the initiative evaluator to create or refine the initiative logic model.

Logic Modeling in the ISC3 Initiative
ISC3 represents what we believe is the first documented case of a multi-site initiative where the funding agency actively engaged funded organizations in developing the initiative-level logic model. The ISC3 initiative, launched by NCI in 2019 and funded by the Beau Biden Cancer Moonshot Initiative, funds seven centers for five years through a P50 mechanism. The initiative is designed to dramatically strengthen the national capacity to impact cancer prevention and control through implementation science (28). ISC3 represents NCI's largest investment to date focused on implementation science (29).
The seven ISC3 centers conduct research and build capacity for the use of implementation science across the cancer care continuum. Some centers were supported as "advanced centers" and others as "developing centers", with varying award amounts, leadership structures, and foci. Each center was asked to: 1) establish IS "laboratories" to conduct collaborative research focused on testing implementation strategies to reduce cancer risk and improve cancer care; 2) conduct rapid innovative projects to identify effective methods to improve the use of evidence-based programs in the context of cancer prevention and control; 3) develop resources, training, and mentorship to strengthen the national availability of implementation scientists and capacity for conducting implementation research; and 4) identify methods for cross-center collaborations to broaden the overall impact of the initiative.
Evaluation is strongly emphasized within ISC3. Each funded center has an evaluation function to assess the center's capacity-building activities and studies. The funding announcement required applicants to develop a logic model with expected outcomes and process indicators. At the initiative-wide level, NCI contracted with Westat to annually collect and analyze data to assess the collective progress that the seven centers are making toward NCI's expectations, including the production and dissemination of new scientific knowledge and tools and the building of the field of IS, especially as it supports cancer prevention and control efforts.
A Cross-Center Evaluation (CCE) Work Group, composed of representatives from the seven centers, NCI, and Westat, was convened early in the establishment of ISC3 to promote learning and coordination among the centers' evaluators, and to ensure that the initiative-wide evaluation was aligned with the center-specific evaluations. The CCE Work Group served as the forum for transforming the initial version of the logic model (original development described below) into a version that more fully reflected the aims and programming of the seven funded centers. Over time, this logic model evolved, especially to place an increased focus on health equity.

Methods
This case study describes the process through which the ISC3 logic model was developed, refined and used by the funded centers, NCI and Westat (the external evaluator for the initiative). The authors of the paper were members of the CCE Work Group where the logic model was developed and refined. They also brought early versions of the logic model to their respective centers for discussion and to elicit recommendations. The case study was constructed according to the following steps:
1. The CCE Work Group collectively constructed an outline of the topics to be covered in the case study, including the process through which the logic model was developed and refined, the various ways in which the logic model was used, and the benefits and challenges associated with using a participatory process.
2. A subgroup of the CCE Work Group wrote an initial draft of how the logic model was developed and refined. That draft was distributed among other Work Group members (including representatives from NCI and Westat) who offered additional information and comments. The description included here incorporated that input as well as points raised during discussions in Work Group meetings.
3. Members of the CCE Work Group were asked to contribute information regarding their respective centers' discussion and use of the logic model. That information was organized according to: a) promoting understanding and alignment, b) guiding evaluation, and c) guiding strategy.
4. Cross-cutting themes, implications and lessons were generated through discussion in monthly meetings of the CCE Work Group, captured in meeting notes, and refined further in the collective writing of this manuscript. Notably, these discussions included representatives of NCI as well as the funded centers.

Logic Model Development
The initial draft of the logic model for the overall initiative (Fig. 1) was jointly created by NCI and Westat based on NCI's expectations for ISC3 (as specified in the request for applications). Westat also incorporated the activities, outcomes and measures from the center-specific logic models and evaluation plans included in the funded proposals.
The initial version of the logic model was presented for review to the CCE Work Group in May of 2020. Both NCI and Westat encouraged feedback and suggestions. Work Group members offered a variety of ideas for making the logic model more comprehensive and easier to comprehend. After that meeting, one of the Work Group members (Easterling) developed a mock-up of how the logic model might be structured to emphasize the primary causal pathways. This version was discussed at the next CCE Work Group meeting, stimulating further discussion and suggestions. In particular, the CCE Work Group recommended a variety of additions and revisions. Some of these were specific, including: adding a box for the expected outcomes from the pilot projects, adding rapid cycle testing and implementation as a feature of the funded pilot projects, and embedding pilot projects within the implementation laboratories.
A broader recommendation was to bring health equity more explicitly into both the activities and outcomes boxes of the model. Following this meeting, Westat and NCI conferred on how to incorporate the Work Group's input into the official logic model for ISC3. They developed the next version, which maintained the basic form used in Fig. 1, while also including a large number of features that emerged in the two meetings and the mock-up version. That revised version was presented, discussed and endorsed at the subsequent CCE Work Group meeting.
At the same time that they endorsed the revised logic model, the Work Group also determined that this should be a 'living document' to be updated as the centers' work continued to unfold. In fact, the activities and expectations associated with ISC3 have evolved in important ways during the implementation process. The current version of the initiative logic model is shown in Fig. 2. The CCE Work Group has continued to use a participatory process to accommodate these refinements, in each case involving actors from throughout the initiative. These include the overall initiative Steering Committee, other ISC3 Work Groups (i.e., for the Implementation Science Laboratories; Health Equity) and the investigators at each center. At each step, those reviewing the logic model have been invited to recommend additions or changes to the logic model.

Incorporating Health Equity
One of the most substantial changes in both the initiative and the logic model involved the explicit emphasis and thematic focus on health equity. This was an initiative-wide collaborative decision in 2020, which followed deep discussion on the topic at the first annual grantee meeting. The decision coincided with the nation's larger reckoning with racial injustice in the wake of the murder of George Floyd and growing race-based hate crimes. NCI further increased its focus on health equity when the Biden administration made this a government-wide priority shortly after inauguration.
To explicitly incorporate health equity into the design and implementation of ISC3, NCI and the Steering Committee established the ISC3 Health Equity Task Force in January 2021. The Task Force (which had overlapping membership with the CCE Work Group) determined that the logic model could provide a useful point of reference for assessing where health equity was already reflected within ISC3's expectations and priorities, and where health equity could be incorporated more explicitly. In addition, the Task Force engaged the CCE Work Group in conversations to determine how the design of ISC3 should change so that the initiative would promote progress on health equity outcomes.
The Task Force developed a set of themes as to how health equity should be advanced within ISC3, each of which was incorporated into an updated version of the logic model. With guidance from the Task Force, the CCE Work Group devoted several monthly meetings to naming specific health equity-oriented elements to be added to the inputs, activities, outcomes and impacts.
These additions were verified and refined through conversations at the seven centers. Each center was tasked with asking its own members for logic model feedback, which the CCE Work Group then reviewed, discussed, and ultimately incorporated into the logic model. Based on this feedback, several refinements were made regarding where to include health equity and how to be more explicit about the outputs being assessed. We continued to engage and seek input from the Task Force throughout this process. The Work Group decided that regular input from ISC3 leaders, work groups, and centers would ensure that updates to the model were in line with initiative activities. One additional idea was how to explicitly include the engagement of community-based partners in the centers' work, for example with the implementation laboratories. These equity-related augmentations to the logic model are highlighted in red in the logic model shown in Fig. 2.

Promoting Understanding and Alignment
The process of reviewing and augmenting the logic model yielded a more accurate logic model and also greater clarity among those involved in ISC3 around what was expected of funded centers in terms of activities and outcomes. This occurred within each of the seven funded centers as the logic model was reviewed and critiqued in team meetings. Table 1 presents examples of the expectations that were clarified and aligned within individual centers.

Table 1 (excerpt). Examples of how centers reviewed and used the logic model:

1) Promoted Understanding and Alignment
- Community partners were engaged in defining how equity is most relevant and salient in their practices, and then subsequently in carrying out shared work.
- Oregon Health & Science University (OHSU): The logic model was used to communicate the center's goals and emphasis on equity to co-investigators and partners in the Lab.
- Colorado ISC3 (University of Colorado): Recognized that more emphasis was needed on capacity building, rapid adaptations and dissemination within the Lab.

2) Guided Evaluation
- Initiative-wide: Key methods for the initiative evaluation (Annual Grantee Survey, Collaboration Survey) were revised to align with the logic model. Increased emphasis was placed on cross-center partnerships in pilot grants and manuscripts. The pilot program added equity as a factor in the review of applications.
One of the key insights that emerged involved the specificity of the activities, outputs and outcomes. Some of NCI's expectations were quite specific (e.g., an expanded and more densely connected network of IS researchers, training more researchers and clinicians in IS methods, new IS measures and tools). In contrast, some elements of ISC3, particularly the Implementation Lab, had more generically defined outcomes in the logic model, with the expectation that each center would develop its own strategy to achieve outcomes directly relevant to the center and its clinical partners.

Guiding Evaluation
The logic model is the primary point of reference in determining evaluation methods and measures for both the initiative-level evaluation and the local evaluations conducted by each center.
Initiative-wide Evaluation: Westat relied on the logic model to develop the Annual Grantee Survey, which is the primary method used in the initiative-wide evaluation of ISC3. This survey asks representatives from each center to report on programmatic activities (including progress on the studies funded; securing extramural funding for new investigator-initiated research; publications and presentations; laboratory expansion; training, mentoring, and other forms of capacity building; and the development of new methods, theories and tools) and on the outcomes of those activities. The logic model pointed to the important activities and outcomes, ensuring consistency across the centers in reporting content. The Annual Grantee Survey was revised in year 2 of the initiative to include new questions reflecting the health equity elements added to the logic model. For example, in the section focused on evaluating the outcomes from center studies, the following question was added: "Do studies include health equity-focused components, targets, or outcomes?" Another new question was: "To what extent are ISC3 outputs being disseminated to patient and advocacy groups, especially those representing underserved communities?"
A second key method used in the initiative-wide evaluation is the Collaboration Survey, which supports a social network analysis of investigators engaged in IS work within and across the centers (29). Questions in the survey are aligned with relevant outcomes in the logic model (e.g., strengthened IS networks). As health equity became a more central focus of ISC3, new analyses were conducted to assess the position of under-represented scientists in the network.

Center-specific Evaluations
As a complement to the initiative-level evaluation carried out by Westat, each center conducts evaluations of its own programming. The center-specific logic models provided the initial guidance for these "local" evaluations. As the ISC3 logic model took shape, it allowed leadership at each center to refine their evaluation plans to be more fully aligned with the initiative's expectations and priorities. As a result, centers made changes to their interview guides, reporting forms for pilot awards and data-capture processes, while also identifying new research questions and topics to address when analyzing these data. Specific examples are shown in Table 1.

Guiding Strategy
As leaders of each center reviewed the logic model, they sometimes recognized that their existing ISC3 strategy was not "complete" in terms of meeting expectations for either activities or outcomes. As shown in Table 1, this led to a number of enhancements or revisions in the activities that the centers carried out. Many of the changes were made in response to the increased emphasis on health equity within the logic model.
The pilot award program was frequently the focus of these changes. A number of centers added equity as an explicit review factor and/or added community members as reviewers. Capacity-building strategies were also enhanced so as to reach more diverse audiences and to include health equity as a key topic when discussing implementation science methods, theories and principles. Benefits and lessons from the case study are summarized in Table 2.

Table 2. Benefits, requirements and lessons from participatory logic modeling:

Benefits
- Directly engaging funded groups produced an initiative logic model with more complete specification of activities, outcomes and pathways.
- The resultant logic model better reflected the expectations and understanding of the funded groups, without any diminishment in the representation of the funder's expectations and understanding of the initiative.
- The enhancements to the logic model resulting from the participatory process pointed to concepts that were not fully captured in the original set of evaluation measures.

Collateral Benefits
- Actively reviewing and editing the logic model allowed investigators at the funded centers to more fully understand and align with the funder's expectations.
- Discussions about the logic model led to more alignment around strategy and objectives within each center.

Requirements
- The participatory process required each funded center to have at least one representative willing to focus on this task over an extended period of time. Their responsibilities included not only actively participating in the co-development process, but also serving as a liaison to others within their center who have a stake and/or relevant knowledge.
- Because of the iterative nature of the co-development process, it took approximately six months to move from the original logic model to agreement on the first revision.
- For initiatives that evolve in their goals and/or design, co-development of the logic model should continue.
- Having representatives from the funded centers who were skilled in research and evaluation was a distinct advantage in co-developing the ISC3 logic model.
- The funder's openness and ethic of collaboration were critical in ensuring that the logic model actually evolved in line with grantees' input.
- The participatory process required more time and effort from the evaluation firm than was initially budgeted for the development of the logic model.

Discussion

Benefits
The participatory process allowed the logic model to reflect both the funder's expectations and theory of change and the perspective and interests of the groups responsible for carrying out the work that the funder envisioned. Input from the ISC3 centers clarified and refined the expected outcomes and the pathways through which those outcomes will occur. The centers had the authority to question the funding agency's assumptions, to operationalize those assumptions, and even to propose additional lines of equity-oriented work and outcomes that were supported under ISC3. Authentic participation in the development of the logic model led to strong buy-in among the centers. The centers shifted their evaluation design and their programmatic strategy to better accommodate the expectations reflected in the logic model.
This case study demonstrates that engaging funded groups can lead to more specific and realistic logic models, which has important benefits for both the evaluation and the strategy of large-scale, multi-site implementation science initiatives. Those doing the work (i.e., closest to the ground) have important insights about what is feasible and what will be required to achieve the initiative's stated objectives. They can also help identify the contextual factors that either inhibit or facilitate success, which can then be incorporated into the logic model and the evaluation design. As the logic model becomes more accurate and grounded, the funder may identify ways to enhance the design of the initiative. To the extent that such enhancements are included, the initiative will be more potent and more likely to achieve its goals and objectives. In addition, when grantees co-develop the logic model, they have a better understanding and appreciation of the funder's expectations, and thus are better positioned to meet those expectations.

Lessons
Engaging grantees in the development of an initiative logic model is admittedly challenging because of a chicken-and-egg dilemma: how can grantees participate in developing the logic model if they have not yet been selected? The ISC3 case study resolves this dilemma by demonstrating that no matter how thoughtful the funder is prior to launching an initiative, the logic model will inherently be a first approximation. The logic model can be improved by revisiting it with grantees once they have been selected and begun implementing the initiative.
Another challenge with participatory logic modeling is the requirements imposed on grantees. In many initiatives, the funded organizations do not have representatives with evaluation expertise. ISC3 was unique in this regard: the RFA required each center to include an evaluator as part of its leadership team. Other NIH initiatives with similar requirements (31, 32) could replicate the participatory logic modeling process used in ISC3. Engaging grantee representatives in logic modeling is admittedly more difficult in initiatives where the funded organizations are small nonprofits or grassroots groups.
Even in cases where the funded groups have evaluation expertise, participatory logic modeling can be challenging because of the time required to review, discuss, revise and reach agreement, especially for complex initiatives such as ISC3. Time is required not only from grantees, but also from the funder and the evaluator. There are opportunity costs for each; time spent clarifying and refining the logic model takes away from other evaluation-related tasks, as well as other work needed to achieve the initiative's desired outcomes. The funder may also need to include extra funds for the external evaluator to accommodate a participatory process.
One other consideration worth mentioning is that the participatory approach profiled here required a genuine commitment from the funder. NCI staff engaged the funded centers (as well as Westat) as active partners in the initiative and afforded them a high degree of influence in shaping the logic model. Not all funders are this open to grantee input. Some program directors are much more focused on their accountability to their institutional leaders, their boards, and (in the case of government funders) to the elected officials who allocate their budgets.

Conclusions
The ISC3 case demonstrates that by engaging funded groups in the logic modeling task, funders can actually better achieve their own goals. The groups carrying out the work specified in the initiative have a clear sense of which goals are feasible, what it will take to reach those goals, and how the funder can best contribute (33). Grantees' knowledge and perspective produce a more accurate logic model, more informed evaluation methods and measures, and even a more effective and efficient funding strategy. We hope that the ISC3 case study provides a positive example of how participatory logic modeling can be mutually beneficial to funders, grantees and evaluators of multi-site initiatives.
Abbreviations

ISC3: Implementation Science Centers in Cancer Control
NCI: National Cancer Institute
CCE: Cross-Center Evaluation (Work Group)

Declarations
Ethics approval and consent to participate Not applicable.

Consent for publication
Not applicable.

Availability of data and material
Materials related to the case study, including additional versions of the logic model, are available from the corresponding author upon reasonable request.

Competing interests
Authors have no competing interests to declare.
Funding

Figure 1. Original version of the ISC3 logic model

Figure 2. Revised version of the ISC3 logic model, highlighting health equity components