
Moving Innovations into Treatment: A Stage-based Approach to Program Change

Abstract

Treatment programs are expected to change their clients. They also are expected to change themselves, adopting evidence-based practices to improve their therapeutic effectiveness in dealing with clients' drug-related problems. The process of innovation adoption and implementation is the focus of the studies included in this special journal issue. Collectively, this volume examines staff perceptions of program needs, organizational readiness for change (based on pressures, resources, staff attributes, and organizational climate), quality of workshop training, subsequent utilization of training materials, and client self-reports of treatment engagement. Approximately 800 treatment programs nationwide contributed data for these studies. A standardized assessment of organizational functioning captured attributes that describe environments, settings, and staffs, and the findings are interpreted in the context of a stage-based approach to program change. A conceptual model is used to help organize and summarize longitudinal results within the organizational context and according to implementation influences related to qualities of the innovations.

Keywords: Evidence-based practice, Innovation, Program change, Organizational functioning, Implementation process

1. Introduction

Few initiatives in the medical and behavioral health field have received the level of attention being given to “evidence-based practice.” Like many professional associations representing providers and researchers involved in the delivery of care, the American Psychological Association appointed its own 2005 Presidential Task Force on Evidence-Based Practice (2006) to voice a position for influencing policy development. The report emphasizes the dual role of efficacy and clinical utility. Efficacy refers to the traditional use of scientific (experimental) standards for establishing causal relationships between interventions and positive outcomes. Clinical utility relates to a broader set of implementation issues involving client attributes, professional consensus, generalizability, feasibility, and costs. Thus, an important – but often overlooked – balance is sought between “evidence-based” and “practice.”

Efficacious interventions are fundamental to improving service delivery, but they are useless unless they are adopted and attention is given to their implementation in the field. Because of growing concern in recent years about the use of evidence-based practice in health care systems, the editor of the Journal of Substance Abuse Treatment (JSAT) has identified “technology transfer” and related studies concerning adoption of innovations as a top priority. Miller, Zweben, and Johnson (2005) discuss the wide range of issues involved in establishing and implementing evidence-based treatments. In his commentary on that article, Brown (2006) highlights the role of feasibility in moving interventions from research to practice.

“Interventions nurtured in the comparative luxury of research environments may not transfer easily to the more modest circumstances of typical treatment programming. Also, feasibility of service delivery is not an area of research expertise. Miller et al. are correct in suggesting that, as the application of evidence-based practice progresses from request to demand, clinical input will be required to augment the contributions of research in ensuring the selection of programs that are not only evidence-based but also capable of reasonably widespread adoption.” (page 87)

Fixsen, Naoom, Blase, Friedman, and Wallace (2005) emphasize the transactional nature of implementation efforts and the need to address the “multilevel” complexities involving intervention efficacy, staff skills, training process and fidelity, client responsiveness, organizations and systems, and policy. For policy makers, they stress the importance of investing “in the development and use of implementation strategies and methods that are grounded in research and elaborated through accumulated experience” (p. 73). For service providers, they advise forming a “community of practice” for identifying beneficial innovations and developing on-going, long-term partnerships with researchers in an effort to contribute to the science of implementation.

The pivotal players in these two sets of recommendations are researchers. Therefore, Fixsen et al. call for studies that can increase knowledge about this process and offer practical guidance for both policy makers and service providers. In particular, they propose that core intervention components of evidence-based practices and programs be clearly identified. Next, they suggest using field-based approaches to assess the effectiveness of implementation procedures that have been put into practice, and developing process and outcome measures specific to the implementation process (i.e., not the intervention per se) to monitor and operationalize these practices. Finally, they point to the need for studying organizational as well as broader socio-political factors that influence and sustain innovation implementation. These ideas emerge in part from the large number of studies (over 700) these authors reviewed from the multidisciplinary literature on implementation research, so they are not exactly novel ideas. Nor are they simple.

2. Purpose of this “Special Issue”

Articles and commentaries that acknowledge the role of organizational dynamics in effectively diffusing innovations have begun to appear regularly in JSAT. One of these articles described an assessment of organizational functioning (Lehman, Greener, & Simpson, 2002) and has continued to be one of the top 25 downloaded JSAT articles ever since its publication. It contributes to a growing trend toward considering program environment – both structural and functional – along with client and staff needs as features indicating “readiness” to adopt new innovations.

One of the first special issues published in JSAT (Simpson & Brown, 2002) assembled a set of studies focused thematically on transferring research to practice. It included a “program change model” for integrating findings from the literature and linking them conceptually to new research being formulated to examine the process of program change (Simpson, 2002). Assessment tools for studying client-level functioning within programs (Joe, Broome, Rowan-Szal, & Simpson, 2002) and staff perceptions of organizational functioning (Lehman et al., 2002) were presented as pertinent research strategies. Three other studies examined field factors related to the adoption of new technologies across different settings (Dansereau & Dees, 2002; Liddle et al., 2002; Roman & Johnson, 2002). Finally, Brown and Flynn (2002) discussed the role of federal agencies in moving evidence-based treatment into practice.

The current special issue on transferring research to practice is more programmatic, meaning the studies included all rely on the same conceptual framework for interpreting their results and use a common instrument for assessing organizational functioning. As a result, heuristic refinements have been made to the original program change model (Simpson, 2002) by broadening its scope to cover program planning and preparations for change, as well as by expanding measurement constructs based on field experiences and new research. The major changes to the model itself include more explicit attention to individual-level factors that impact each stage of the overall process as well as elaboration of the decision-making and preliminary action steps involved in innovation adoption that precede the crucial implementation challenges.

3. How Programs Plan and Implement Change

Although “innovation adoption” is commonly used in reference to the broad process leading to changes in organizational practices, Klein and colleagues (Klein & Knight, 2005; Klein & Sorra, 1996) emphasize that implementation serves as the crucial stage that connects an adoption decision with routine practice. That is, deciding to own something (i.e., adopting it) occurs prior to, but is different from, putting it to regular use. In a study of computer technology innovations, Klein, Conn, and Sorra (2001) observe that while innovation adoption in the business world has been widely studied, the implementation process has been given less attention. Even more rare, they add, are quantitative studies of between-organizational differences in implementation process and effectiveness. They focused on the role of institutional climate (i.e., based on aggregated survey responses of employees) and especially on “antecedents” that influence it. The two facets of special interest – managerial support and financial resources – were shown to be highly significant in the software implementation process across the companies they studied.

Klein and Knight (2005) list six key factors identified from the literature that shape the process and outcomes of innovation implementation. These include: (1) quality and quantity of training, including continuing technical assistance as needed, rewards for usage, and user-friendliness; (2) positive team or organizational climate, measured as shared perceptions of employees; (3) managerial support; (4) financial resources; (5) learning orientation that supports staff perceptions about skills development, competence, and growth; and (6) patience in enduring short-term stumbling blocks to reach stable and enduring performance gains.

The revised TCU Program Change Model, illustrated in Figure 1, converges closely with elements of the framework presented by Klein and colleagues (1996, 2001, 2005). As discussed in more detail below, however, the model conceptually decomposes explicit influences on the innovation adoption and implementation process. Both counselor perceptions related to a particular innovation itself and the contextual platform of organizational functioning are treated as integral ingredients in technology transfer and program change.

Figure 1
TCU Program Change Model for planning and implementing innovations for treatment improvement.

4. The “Process” of Innovation Adoption and Implementation

The central portion of Figure 1 portrays several crucial features of the process involved in adoption of treatment innovations. These include training, adoption (including distinct steps for decision-making and action), and implementation of innovations. “Routine practice” is the culmination of successful implementation and stabilization of program enhancements. Progression through each stage is influenced by staff members' knowledge of an innovation as well as their perceptions about organizational functioning. In this figure, the key factors that represent individual-level staff reactions to and support of an innovation implementation are listed as bullets under major stages of the change process, while the program-level organizational readiness and functioning domains that shape the process are grouped across the bottom of the figure.

It should be emphasized that this is a working model. That is, continued refinements are anticipated for its measurement domains, and it is not meant to imply that the stages of change are rigidly linear. Indeed, there are cyclical phases of progress with setbacks involved, and it is these dynamics that represent the most vulnerable “points of impact” for many of the change factors being studied. The model identifies a variety of individual and program level factors that are influential in the program change process. However, their influences are not interpreted to be narrowly limited to an initial point of impact, but rather they often have primary entry points with on-going (sometimes cumulative) roles. For instance, the influences of motivation, organizational readiness, and resources have early impact in the process, but these factors also exert a dynamic sustaining presence across subsequent stages of adoption and implementation.

4.1. Training

The active process of adopting an innovation generally begins with training. Its format is important; as the complexity of the innovation increases, so does the required investment in role-playing, practice rehearsal, monitoring, and “booster” sessions for on-going consultation. When an innovation is introduced in a conference-based or specialized workshop setting, decisions by staff on whether or not to attend are influenced by (1) how relevant it is to their needs, (2) its accessibility, including location, scheduling, and cost, and (3) whether it is accredited and thereby offers educational or credentialing benefits. The quality of innovation materials and training influences decisions about value and staff capabilities to adopt and implement. In addition to these and related personal concerns of individual staff members, organizational needs and pressures for applying particular interventions can likewise influence training attendance.

While there is surprisingly little empirical research published on key elements of training (Fixsen et al., 2005), there is general agreement about the importance of having practical knowledge and understanding about an innovation as well as having opportunities to practice its delivery. Manuals have become a preferred tool for guiding delivery of an intervention and improving its fidelity, but practice components (e.g., role-playing, behavior rehearsal, coaching) are necessary to optimize its effectiveness. Fixsen et al. also note the common difficulty of finding sufficient release time (and sometimes financial resources) for staff training, as well as the need for staff to readily see the relevance and benefits the innovation offers recipients. Practical experiences and recommendations for meeting these training challenges in using a cognitive intervention technique (visual communication mapping; Dansereau & Dees, 2002) and a family therapy program for adolescents (Multidimensional Family Therapy; Liddle et al., 2002) emphasize the need for hands-on practice, along with feedback and rewards for progress. These should be accompanied by realistic views of skill requirements and limitations, team building and peer support, and empirical evaluations of results.

4.2. Adoption

Following training, the next crucial step involves adoption. Although the term “adoption” is often used in the literature to refer to the entire process of adopting and implementing an innovation, it is more advantageous to recognize its specific subcomponents so that each can be scrutinized. In the current model, adoption is defined as a two-step activity involving decision-making and action-taking.

The “decision” to adopt an innovation takes several considerations into account. These can be illustrated by three issues commonly found in the literature. First, there must be leadership support, at both the formal and informal levels. Such support is crucial for gaining and sustaining the innovation's visibility, resource allocations, performance feedback, and endurance (Klein et al., 2001; Sirkin, Keenan, & Jackson, 2005). The lack of effective and committed management endorsement increases the likelihood that innovations will flounder, especially as their complexity and intensity rise. Second, the innovation should be viewed as having the overall quality and utility necessary for application in “real world” clinical settings, based on the adequacy of training and how well the innovation materials appear to serve prevailing (or in some cases, emerging) client needs (Gotham, 2004). It may require adjustments in language or types of examples to fit the setting (e.g., community versus correctional programs, or special populations), although this does not imply that matters of fidelity can be compromised. Third, the innovation should be viewed by frontline staff as having adaptability for meeting specific nuances of the treatment applications and setting (McGovern, Fox, Xie, & Drake, 2004). It must be compatible with other materials and fit with existing values or culture within the treatment program (Klein & Sorra, 1996; Rogers, 2003), as indicated by staff interest in further training or involvement of fellow counselors. For instance, attitudes about drug use abstinence versus reduction, as well as the acceptability of medications in treatment, offer classic examples of values that can impact choices about acceptable therapeutic strategies. What is appropriate for one program may not be for another. At the organizational level, adequate resources (such as staff allocations and financial support) must be available and committed by program leadership. Likewise, staff skills and abilities must be deemed appropriate for implementing the innovation.

A cognitive decision to adopt is followed by the development of a plan of “action,” including a trial period. Following the decision to adopt, a test run of at least a few days or weeks is needed for potential adopters to form opinions about applications. Examples of some prominent considerations are as follows. First is the issue of the capacity and proficiency of the innovation in meeting preliminary expectations. Namely, does it seem to work as expected, and does it merit further investments of time and energy? Second, there should be satisfactory preliminary results and feedback from those involved. And third, forms of resistance, including both active and passive barriers to change, must be manageable. Rogers (2003) suggests that decisions regarding new innovations might be made independently by members of an organization, collectively through consensus, or authoritatively as top-down directives by executives. At the organizational level, staff capacity and a positive, supportive organizational climate are necessary in the decision and action processes.

4.3. Implementation

The next major stage of the process is implementation. This builds directly on and extends the brief trial phase discussed above. It reflects an attitude shift from “let's see how this might work” to a longer view of putting it to work. Over the course of weeks or months following a trial phase, the value of an innovation has to be proven. For instance, an intervention must be viewed by program staff and leadership as being effective. Clinical experience and intuition are relevant, but empirical evidence and feedback provide a more convincing foundation. Part of this appraisal process (i.e., determining relevance and valuing evidence and effectiveness) might involve a translator who can bridge the communication gaps between the policy-making, research, and clinical service delivery worlds (Brown & Flynn, 2002). During implementation, this same individual might also serve as a developer (i.e., of manuals, materials, etc.) and trainer. Besides being effective, an innovation must be feasible within a program's context. Some interventions, for example, might be viewed as therapeutically helpful – but be appropriate for too few clients or be too demanding of staff resources. A third, related consideration involves sustainability. In other words, is the innovation affordable over time with regard to on-going staff resources (including the training needed to maintain fidelity) and financial requirements? At the organizational level, factors such as motivation, resources provided by program management, staff attributes, and program climate play a role in determining long-range implementation, along with financial resources.

Finally, innovations that pass these stages successfully tend to become standard “practice” and presumably bring improvements in client care. It is important that on-going monitoring of effectiveness indicators – client engagement, progress and outcomes – be established (Simpson, 2004) and that continued attention be given to organizational functioning. This information, along with service-cost records, is fundamental to effective program management. Additionally, innovation costs (e.g., cost of materials, training, supervision, loss of billable hours associated with training and supervision, etc.) should be considered by programs intending to adopt and implement new innovations.

5. Overview of Samples and Methods Used

As described in the following studies, an integrated set of Texas Christian University (TCU) assessments for treatment staff and clients was used. They include staff assessments of program and client needs, evaluations of training provided for innovations, and follow-up evaluations of progress and problems in implementation. These represent several key elements in the process of innovation adoption and implementation. Client self-reports of engagement and treatment performance (see Joe et al., 2002) are appropriate outcomes for evaluating program efforts to establish evidence-based practices. The central measure tying together all of these studies, however, is the organizational functioning assessment described below.

5.1. How organizational readiness for change was measured

The TCU Organizational Readiness for Change (ORC) assessment (Lehman et al., 2002) served as the “standard” tool for measuring program functioning and readiness for change. It is tailored for the drug treatment and health services fields, but alternative versions have been adapted for use in several specialized applications. Over 4,000 ORC surveys administered in over 650 organizations (including work in Italy and England) during the past 5 years demonstrate its broad applicability (e.g., see Rampazzo, De Angeli, Serpelloni, Simpson, & Flynn, 2006).

The ORC includes four sets of scales for measuring staff perceptions about the adequacy of program resources, counselor attributes, work climate, and motivation or pressures for program changes. The 18 scales (see Table 1) contain an average of 6 items each (scored on a 5-point Likert scale ranging from strongly disagree to strongly agree), and they require approximately 25 minutes to complete. Lehman et al. report that principal components analysis confirmed factor structures of the ORC scales, coefficient alpha reliabilities showed they have adequate levels of psychometric internal consistency, and relationships with selected indicators of client and program functioning documented their predictive validities. Program-level coefficient alphas are above .70 for 13 scales (ranging from .72 to .92) and .68 for 2 scales (the other 3 scales were .64 for Training, .66 for Equipment, and .56 for Autonomy).
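For reference, the coefficient alpha statistic mentioned above has a simple closed form: alpha = (k/(k-1)) * (1 - sum of item variances / variance of the summed scale), computed over a respondents-by-items matrix. The short Python sketch below illustrates the computation with hypothetical random data, not the actual ORC item responses:

import numpy as np

def coefficient_alpha(items):
    """Cronbach's coefficient alpha for a (respondents x items) array."""
    k = items.shape[1]                          # number of items in the scale
    item_vars = items.var(axis=0, ddof=1)       # per-item sample variances
    total_var = items.sum(axis=1).var(ddof=1)   # variance of summed scale scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical data: 40 staff respondents answering a 6-item scale (1-5 Likert)
rng = np.random.default_rng(0)
responses = rng.integers(1, 6, size=(40, 6)).astype(float)
print(round(coefficient_alpha(responses), 2))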

Table 1
Description of scales in the ORC survey

The staff version of the ORC (i.e., ORC-S) was designed specifically for community treatment programs and is the original and most commonly used assessment. The ORC-S scoring guide explains procedures for computing scale scores (items with reverse wording require “reflected” scoring, and a limited number of missing responses is permissible). In essence, the set of item responses (i.e., values of 1 to 5) for each scale is averaged and multiplied by 10, yielding scores that range from 10 to 50 with a midpoint of 30. A score of 30 is “neutral” because it reflects neither overall agreement nor disagreement with the set of items from any given scale. Scores closer to 50 reflect strong agreement with the named attribute, while those closer to 10 reflect strong disagreement.
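A minimal Python sketch of this scoring logic follows; the item values, the reverse-worded item index, and the missing-response cap used here are hypothetical illustrations of the rules just described, not the published ORC-S scoring code (which does not specify its threshold in this article).

def score_orc_scale(responses, reflected_items, max_missing=1):
    """Score one ORC scale for one respondent.

    responses: item responses on the 1-5 Likert metric, with None for missing.
    reflected_items: indices of reverse-worded items (scored as 6 - value).
    max_missing: hypothetical cap on permissible missing items per scale.
    Returns a 10-50 scale score, or None if too many responses are missing.
    """
    values = []
    for i, r in enumerate(responses):
        if r is not None:
            values.append(6 - r if i in reflected_items else r)
    if len(responses) - len(values) > max_missing:
        return None  # too many missing responses to score this scale
    return 10 * sum(values) / len(values)  # mean item response times 10

# Hypothetical 6-item scale: item 2 is reverse-worded, item 3 is missing
print(score_orc_scale([4, 5, 2, None, 4, 3], reflected_items={2}))  # -> 40.0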

Scale scores averaged across all staff respondents from the same treatment program can be plotted graphically in a line chart that defines a program functioning profile. Figure 2 presents mean scores calculated using 2,031 completed surveys from previous research based on the ORC-S assessment. The chart also contains 25th and 75th percentile norms. These norms for ORC scale scores allow interpretations to be made not only on the basis of how far above or below 30 – the neutral point – scores fall, but also how they compare with those from other drug treatment agency staffs that have completed the ORC.
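The aggregation behind such profiles is straightforward; the pandas sketch below (with a hypothetical data layout and made-up scores) shows how program-level means and 25th/75th percentile norms of the kind shown in Figure 2 could be derived:

import pandas as pd

# Hypothetical long-format records: one row per staff respondent per ORC scale
df = pd.DataFrame({
    "program": ["A", "A", "A", "B", "B", "C", "C"],
    "scale":   ["Climate"] * 7,
    "score":   [34.0, 38.0, 31.0, 27.0, 31.0, 29.0, 42.0],
})

# Program profile: mean scale score across all respondents within a program
profile = df.groupby(["program", "scale"])["score"].mean()

# Norms: 25th and 75th percentiles of the program-level scores for each scale
norms = profile.groupby("scale").quantile([0.25, 0.75])
print(profile)
print(norms)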

Figure 2
Means and 25th-75th percentile norms for ORC scale profiles (N = 2,031).

5.2. Samples and analytic procedures

Studies for this issue used data from a variety of samples. Multiple statistical methods appropriate to the research questions were used to analyze these multisource data. Samples were drawn from data sets that incorporated numerous assessments from states in the Southeastern and Southwestern coastal regions, as well as from a broader array of states throughout the U.S. Over 800 treatment programs are represented. A couple of studies used data from only a single state. Across studies, there was some sample overlap; when this occurred, it generally involved a study that used either a subset of another study's sample or a specific assessment form that was not the focus of the other study.

The methods ranged from basic correlational, regression, and psychometric techniques to more sophisticated data-reduction strategies and the multivariate statistics needed for these complicated data sets. The multivariate techniques included logistic regression, latent class analysis, hierarchical linear models (i.e., random-effects models), and factor analysis.

6. Studies included in this Volume

The revised TCU Program Change Model described earlier offers guidance for conducting new research and helps organize findings. The first four papers in this volume – Rowan-Szal et al., Courtney et al., Greener et al., and Broome et al. – address how programs can carry out strategic planning and make organizational preparations for adopting innovations. The last two of these four elucidate the relationship between organizational functioning and client performance. The next set of three papers – Saldana et al., Joe et al., and Fuller et al. – shifts the focus to factors related to staff functioning and their attitudes about adopting “evidence-based practices.” These studies are based on large and highly diverse samples of counselors, which extends the generalizability of the organizational assessments and of their functional relationships with staff decision-making about innovations. Finally, Simpson et al. take a longitudinal perspective in tying all of these factors together into a coherent package.

6.1. Strategic planning for change and importance of organizational functioning in treatment engagement

Rowan-Szal, Greener, Joe, and Simpson (this issue) examine the utility of a 15-minute assessment of Program Training Needs (PTN) that was administered to staff in almost 200 programs from two states in the Gulf Coast region. This information effectively represents seven important domains of program needs and related issues, such as facilities, resources, staff training needs and preferences, and barriers to adopting innovations. Comparisons with data collected using the comprehensive assessment of organizational functioning and readiness for change (ORC; Lehman et al., 2002) in the same programs indicated the PTN offers a preview of what programs can expect to find using the more extensive ORC assessment. In addition, the brief PTN taps core staff attitudes about previous training experiences, current barriers to using training, and how training might be improved. The PTN therefore appears to serve as a promising and beneficial planning tool for programs beginning to explore organizational openness to innovations and how to begin the process.

Rowan-Szal et al. also confirmed that inadequate budgets as well as workload pressures and schedules are viewed as common barriers to attending and using training, along with the heavy emphasis on licensure and limited rewards that accrue to staff who try out new ideas. Over 70% of PTN respondents preferred the use of intensive full-day training on special (evidence-based) topics, hands-on role playing and group activities, on-site follow-up consultations, and a conceptual process model for portraying how treatment activities contribute to recovery. The PTN also serves the important purpose of helping staff feel they have been consulted and are being heard in regard to planning for treatment innovations, including the training strategy designed to encourage the adoption of innovation.

Courtney, Joe, Rowan-Szal, and Simpson (this issue) show that when programs are confronted with evidence of their own organizational deficits, based on feedback from ORC survey results, they can respond strategically with plans for taking corrective actions. That is, high-need treatment programs with relatively poor scores on institutional resources, staff attributes, and climate were the ones that became most engaged in a deliberate change process. These were part of a larger group of member programs from a statewide association of treatment agencies who were motivated to attend a workshop that provided structured guidance and encouragement to address their organizational needs. Because organizational health has direct implications for quality of client services as well as ability to successfully implement treatment innovations, these are worthwhile “corrective” steps to consider. This information and supportive guidance can lead to program-level decisions about change, carried out either as an executive leadership function or as part of a more collaborative process involving staff.

Client engagement represents one of the key ingredients of effective therapeutic process (see Simpson, 2004; 2006), so it was important to confirm preliminary findings by Lehman et al. (2002) that client and program performance are interrelated. Greener, Joe, Simpson, Rowan-Szal, and Lehman (this issue) found measures of client rapport, satisfaction, and participation in treatment (see Joe et al., 2002) collected at 163 programs from eight states representing three different regions (i.e., Gulf Coast, Northwest, Prairielands) were indeed positively correlated with counselor perceptions of program resources, staff attributes, and organizational climate. Similar findings are reported by Broome, Flynn, Knight, and Simpson (this issue) based on a sample of 94 outpatient drug-free programs from nine states covering four diverse regions of the U.S. (i.e., Great Lakes, Gulf Coast, Northwest, and Southern Coast). The Broome et al. study applied hierarchical linear modeling to include features of program structure such as size and accreditation in the analysis. Both studies indicate that organizational structure and functioning are important when it comes to engaging clients in treatment services.

6.2. Studies that test stages of innovation adoption and implementation

“Early adopters” have the reputation for being at the front of the line for training and embracing innovations (Rogers, 1995). Being able to identify early adopters – and those who are not – can lead to more strategic approaches for increasing exposure and interest in new interventions. Saldana, Chapman, Henggeler, and Rowland (this issue) applied the ORC to study staff attitudes toward the adoption of evidence-based practice and treatment manuals. They sampled therapists in 50 programs from the mental health and substance abuse sectors that serve adolescents in a single Southeastern state. After establishing psychometric generalizability of the ORC assessment to these programs, a 2-level hierarchical linear model analysis (i.e., random effects regression model) was used to examine the associations of therapist-level predictors (within- agency variation) and agency-level predictors (i.e., between-agency variation) with attitudes about innovations. Findings showed the motivational readiness/training needs scales (both at the therapist and agency level) were associated with higher appeal and openness to innovations, but program resources and climate were comparatively unimportant at this early stage of the process. Saldana et al. also noted there were differences between substance abuse and mental health settings, with the mental health sector reporting more stress from higher caseloads and potentially greater barriers to innovation.
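For orientation, a 2-level random-intercept model of the kind described here can be written in standard hierarchical linear model notation; the equations below give the generic form of such an analysis, not a reproduction of Saldana et al.'s exact specification.

Level 1 (therapist $i$ in agency $j$): $Y_{ij} = \beta_{0j} + \beta_{1} X_{ij} + r_{ij}, \quad r_{ij} \sim N(0, \sigma^{2})$

Level 2 (agency $j$): $\beta_{0j} = \gamma_{00} + \gamma_{01} W_{j} + u_{0j}, \quad u_{0j} \sim N(0, \tau^{2})$

Here $Y_{ij}$ is a therapist's innovation-attitude score, $X_{ij}$ a therapist-level ORC predictor capturing within-agency variation, and $W_{j}$ an agency-level predictor (e.g., the agency mean of the same scale) capturing between-agency variation.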

Joe, Broome, Simpson, and Rowan-Szal (this issue) explored staff attributes related to adoption decisions. They used latent profile analysis to classify approximately 1,000 counselors from over 300 drug treatment programs in four diverse regions of the U.S. (i.e., Gulf Coast, Mid America, Northwest, and Prairielands) into subgroups according to their ratings on the ORC and readiness to adopt innovations. A typology of three counselor groups emerged – Isolated, Integrated, and Exceptional – on the basis of individual-level perceptions of their own professional attributes and of the organizations in which they worked. Not surprisingly, counselors with poorer ratings of their program climate, professional growth, and influence within their own treatment program (i.e., the “Isolated” category of counselors) were less likely to attend innovation training and to commit to adopting workshop training ideas.
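In its standard form, latent profile analysis treats a counselor's vector of $M$ scale ratings $\mathbf{y} = (y_1, \ldots, y_M)$ as arising from one of $K$ unobserved classes. The mixture density below is the generic model (with the common within-class independence assumption), sketched here for orientation rather than as the paper's exact specification:

$$f(\mathbf{y}) = \sum_{k=1}^{K} \pi_k \prod_{m=1}^{M} N\left(y_m \mid \mu_{km}, \sigma_{km}^{2}\right), \qquad \sum_{k=1}^{K} \pi_k = 1$$

With $K = 3$, the estimated class means $\mu_{km}$ characterize the Isolated, Integrated, and Exceptional profiles, and each counselor is assigned to the class with the highest posterior probability given the observed ratings.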

Similarly, Fuller, Rieckmann, McCarty, Nunes, Miller, Arfken, and Edmundson (this issue) found many of these same traits predict the willingness of counselors to use treatment manuals (as an indicator of readiness to adopt evidence-based practices). Using ORC information collected from staff at 228 treatment units from 17 Regional Research Centers across the U.S. participating in the National Drug Abuse Treatment Clinical Trials Network (CTN), they identified program resources (i.e., internet access) along with positive scores on professional growth, personal efficacy, and autonomy as primary predictors of counselor orientations toward being trained using manuals.

Not all training is created equal, however. Bartholomew, Joe, Rowan-Szal, and Simpson (this issue) examined several aspects of counselor assessments of training relevance and quality at a statewide workshop attended by staff from almost 50 treatment programs. Attitudes about the quality of materials and training procedures, as well as the adequacy of program resource allocations, were related to endorsement and trial use of the training materials in the following months. Some of the major barriers counselors face in making changes in their clinical practice are also discussed.

6.3. An integrative overview

Except for the two papers that tie together organizational functioning with client engagement in their services (Greener et al. and Broome et al.), the focus of the studies reported here has been primarily on relationships between measures for adjacent stages of the innovation adoption and implementation process. They fit together programmatically and conceptually, but the range of their analytic potential is limited by the challenges of linking together longitudinal data collection elements at multiple points over time. Fortunately, the projects from which these papers were drawn are in the process of collecting and integrating records across several years, so future expansions of these studies can be expected.

As a forerunner of this work, Simpson, Joe, and Rowan-Szal (this issue) assembled a long-range, cross-linked subset of data for taking a preliminary view of relationships between major adoption and implementation process stages across time. It uses 2 years of sequential data from statewide staff surveys of program training needs in a Gulf Coast state, an ORC and client sample survey, workshop training session ratings, a training implementation follow-up evaluation, and a second ORC and client sample survey. Program-level linkages were made over time based on a statewide sample of about 50 programs (although clients and some counselors did not remain the same over time, of course). Program-level measures of needs, functioning, adoption decisions, implementation progress, and client engagement in treatment were interrelated in a manner consistent with the overall TCU Program Change Model.

The findings from this study are important because they demonstrate that it is feasible and informative to conduct long-range observational research that is outside the reach of randomized clinical trials. First, the original program training needs (using the PTN a year before training) were related to subsequent staff responses to workshop training (i.e., both to staff evaluations of training and to later implementation steps). Second, more positive organizational functioning (using the ORC 4 months before training) was related to more positive staff responses to training. Third, more favorable staff-level responses to workshop training, as well as implementation progress across programs, were positively related to self-reports of clients in these programs about counseling participation, rapport, and satisfaction 9 months after the training. Finally, changes in staff perceptions were found 2 years later in the follow-up assessment of program training needs, and these were associated with occurrences of interim training and adoption actions. Interactions of these predictors with other factors (i.e., moderators) over time also were explored. Given the small sample sizes in this exploratory study of program-level measures, the consistency and significance of the predicted relationships over time suggest progress is being made in assembling elements of the innovation implementation process.

7. Concluding Comments

Collectively, almost 6,000 counselors at over 800 treatment programs nationwide were assessed as part of the studies included in this journal volume. The data are used to address particular aspects of how evidence-based treatment can be effectively adopted and implemented in practice. The studies share an emphasis on the role played by the organizational structure and functioning of treatment programs in this process, and they all employ the same assessment tool and stage-based conceptual framework. Most discussions about how innovations are transmitted into practice revolve around common ideas about action steps and subsets of specialized influences on the process (e.g., Aarons & Sawitzky, 2006; Klein & Knight, 2005; Miller, Sorensen, Selzer, & Brigham, 2006; Rogers, 2003; Simpson, 2002; Sirkin et al., 2005). The TCU Program Change Model includes key elements of these discussions but is distinctive for two reasons. First, it is a comprehensive model that identifies expected points of impact for specific influences in the 4-stage process of innovation adoption and implementation. Second, it delineates personal and corporate sources of influence (based on user perceptions) on this change process within organizational contexts.

Using a multistage conceptual model of program change helps organize findings and also generate new research plans for systematically examining the elements believed to determine this process over time. The stream of events involved and the diverse contextual factors that influence them challenge traditional analytical strategies in the addiction sciences. Tucker and Roth (2006) conclude that randomized clinical trials, the gold standard for treatment efficacy studies, are inadequate for addressing these contemporary issues. Instead, as they point out, multivariate longitudinal methodologies capable of exploring mediator and moderator variables in long-range event chains are necessary. Appropriate analytical techniques and research designs are emerging, but still need further development and implementation experience. In other words, addiction scientists themselves are faced with “technology transfer” challenges as they learn to use new methodological strategies and tools.

Although all of the studies in this volume share a concern about organizational factors, differing perspectives on organizational behavior are evident. Notably, the same survey measures, such as for organizational climate, can be used to describe an individual viewpoint (e.g., Joe et al.) or a collective property of a program (e.g., Greener et al.). These two perspectives can lead to different, and complementary, conclusions about the processes involved. For example, individual perceptions may be shaped by personal background or position within the organization, whereas group-level aggregates of individual responses may reflect structural features that are more common to the entire organization. The studies by Saldana et al. and Fuller et al. therefore include comparisons of individual-level and group-level relationships. In practice, applications of both individual and group data are relevant to understanding and improving treatment programs.

The general consistency of findings across these studies – based on multisource data, large samples from diverse treatment settings, and multiple methods and perspectives – is impressive. They demonstrate that better guidance on how to move evidence-based treatment into practice can benefit from an understanding of the process and the participants involved in this transaction. There are important steps that require attention to procedural details. In contrast, evaluations focused primarily on adoption outcomes of top-down or singular strategies that impose innovations without addressing staff needs and readiness, or the functional climate in the target agencies, offer few insights on “how to do it.” And the more complex the innovation, the more important it is to understand the steps of this process (Aarons & Sawitzky, 2006). The set of articles compiled for this special volume offers warnings and advice about intentional program changes.

Acknowledgements

This work was funded, in part, by the National Institute on Drug Abuse (Grants R37 DA13093 and R01 DA014468). The interpretations and conclusions are, however, entirely those of the authors and do not necessarily represent the position of NIDA, NIH, or the Department of Health and Human Services. More information (including data collection instruments and intervention manuals that can be downloaded without charge) is available on the Internet at www.ibr.tcu.edu, and electronic mail can be sent to ibr@tcu.edu.


References

  • Aarons GA, Sawitzky AC. Organizational culture and climate and mental health provider attitudes toward evidence-based practice. Psychological Services. 2006;3(1):61–72.
  • APA Presidential Task Force on Evidence-Based Practice. Evidence-based practice in psychology. American Psychologist. 2006;61(4):271–285.
  • Bartholomew NG, Joe GW, Rowan-Szal GA, Simpson DD. Counselor assessments of training and adoption barriers. Journal of Substance Abuse Treatment. this issue.
  • Broome KM, Flynn PM, Knight DK, Simpson DD. Program structure, staff perceptions, and client engagement in treatment. Journal of Substance Abuse Treatment. this issue.
  • Brown BS. Evidence-based treatment: Why, what, where, and how. Journal of Substance Abuse Treatment. 2006;30:87–89.
  • Brown BS, Flynn PM. The federal role in drug abuse technology transfer: A history and perspective. Journal of Substance Abuse Treatment. 2002;22(4):245–257.
  • Courtney KO, Joe GW, Rowan-Szal GA, Simpson DD. Using organizational assessment as a tool for program change. Journal of Substance Abuse Treatment. this issue.
  • Dansereau DF, Dees SM. Mapping training: The transfer of a cognitive technology for improving counseling. Journal of Substance Abuse Treatment. 2002;22(4):219–230.
  • Fixsen DL, Naoom SF, Blase KA, Friedman RM, Wallace F. Implementation research: A synthesis of the literature. Tampa: University of South Florida, Louis de la Parte Florida Mental Health Institute; 2005. (FMHI Publication #231).
  • Fuller BE, Rieckmann T, McCarty D, Nunes EV, Miller M, Arfken C, Edmundson E. Organizational readiness for change and opinions toward treatment innovations. Journal of Substance Abuse Treatment. this issue.
  • Gotham HJ. Diffusion of mental health and substance abuse treatments: Development, dissemination, and implementation. Clinical Psychology: Science and Practice. 2004;11:160–176.
  • Greener JM, Joe GW, Simpson DD, Rowan-Szal GA, Lehman WEK. Influence of organizational functioning on client engagement in treatment. Journal of Substance Abuse Treatment. this issue.
  • Joe GW, Broome KM, Rowan-Szal GA, Simpson DD. Measuring patient attributes and engagement in treatment. Journal of Substance Abuse Treatment. 2002;22(4):183–196.
  • Joe GW, Broome KM, Simpson DD, Rowan-Szal GA. Counselor perceptions of organizational factors and innovations training experiences. Journal of Substance Abuse Treatment. this issue.
  • Klein KJ, Conn AB, Sorra JS. Implementing computerized technology: An organizational analysis. Journal of Applied Psychology. 2001;86:811–824.
  • Klein KJ, Knight AP. Innovation implementation: Overcoming the challenge. Current Directions in Psychological Science. 2005;14(5):243–246.
  • Klein KJ, Sorra JS. The challenge of innovation implementation. Academy of Management Review. 1996;21(4):1055–1080.
  • Lehman WEK, Greener JM, Simpson DD. Assessing organizational readiness for change. Journal of Substance Abuse Treatment. 2002;22(4):197–209.
  • Liddle HA, Rowe CL, Quille TJ, Dakof GA, Mills DS, Sakran E, Biaggi H. Transporting a research-based adolescent drug treatment into practice. Journal of Substance Abuse Treatment. 2002;22(4):231–243.
  • McGovern M, Fox T, Xie H, Drake R. A survey of clinical practices and readiness to adopt evidence-based practices: Dissemination research in an addiction treatment system. Journal of Substance Abuse Treatment. 2004;26:305–312.
  • Miller WR, Sorensen JL, Selzer JA, Brigham GS. Disseminating evidence-based practices in substance abuse treatment: A review with suggestions. Journal of Substance Abuse Treatment. 2006;31:25–39.
  • Miller WR, Zweben J, Johnson WR. Evidence-based treatment: Why, what, where, when, and how? Journal of Substance Abuse Treatment. 2005;29(4):267–276.
  • Rampazzo L, De Angeli M, Serpelloni G, Simpson DD, Flynn PM. Italian survey of Organizational Functioning and Readiness for Change: A cross-cultural transfer of treatment assessment strategies. European Addiction Research. 2006;12:176–181.
  • Rogers EM. Diffusion of innovations. 4th ed. New York: The Free Press; 1995.
  • Rogers EM. Diffusion of innovations. 5th ed. New York: The Free Press; 2003.
  • Roman PM, Johnson JA. Adoption and implementation of new technologies in substance abuse treatment. Journal of Substance Abuse Treatment. 2002;22(4):211–218.
  • Rowan-Szal GA, Greener JM, Joe GW, Simpson DD. Assessing program needs and planning change. Journal of Substance Abuse Treatment. this issue.
  • Saldana L, Chapman JE, Henggeler SW, Rowland MD. Assessing organizational readiness for change in adolescent treatment programs. Journal of Substance Abuse Treatment. this issue.
  • Simpson DD. A conceptual framework for transferring research to practice. Journal of Substance Abuse Treatment. 2002;22(4):171–182.
  • Simpson DD. A conceptual framework for drug treatment process and outcomes. Journal of Substance Abuse Treatment. 2004;27(2):99–121.
  • Simpson DD. A plan for planning treatment. Counselor: A Magazine for Addiction Professionals. 2006;7(4):20–28.
  • Simpson DD, Brown BS, editors. Special Issue: Transferring research to practice. Journal of Substance Abuse Treatment. 2002;22(4).
  • Simpson DD, Dansereau DF. Assessing organizational functioning as a step toward innovation. NIDA Science & Practice Perspectives. in press.
  • Simpson DD, Joe GW, Rowan-Szal GA. Linking the elements of change: Program and client responses to innovation. Journal of Substance Abuse Treatment. this issue.
  • Sirkin HL, Keenan P, Jackson A. The hard side of change management. Harvard Business Review. 2005 October:109–118.
  • Tucker JA, Roth DA. Extending the evidence hierarchy to enhance evidence-based practice for substance use disorders. Addiction. 2006;101:918–932.