
Designing a Pilot Sequential Multiple Assignment Randomized Trial for Developing an Adaptive Treatment Strategy

Daniel Almirall, PhD
Survey Research Center, Institute for Social Research, University of Michigan, Ann Arbor, MI
Scott N. Compton, PhD
Department of Psychiatry & Behavioral Sciences, Duke University Medical Center, Durham, NC
Meredith Gunlicks-Stoessel, PhD
Department of Psychiatry, University of Minnesota, Twin Cities, MN
Naihua Duan, PhD
New York State Psychiatric Institute, Columbia University, New York, NY

Abstract

There is growing interest in how best to adapt and re-adapt treatments to individuals to maximize clinical benefit. In response, adaptive treatment strategies (ATS), which operationalize adaptive, sequential clinical decision making, have been developed. From a patient's perspective, an ATS is a sequence of treatments, each individualized to the patient's evolving health status. From a clinician's perspective, an ATS is a sequence of decision rules that take as input the patient's current health status and output the next recommended treatment. Sequential multiple assignment randomized trials (SMART) have been developed to address the treatment-sequencing questions that arise in the development of ATSs, but SMARTs are relatively new in clinical research. This article provides an introduction to ATSs and SMART designs, and discusses the design of SMART pilot studies to address feasibility concerns and to prepare investigators for a full-scale SMART. As an illustration, we consider an example SMART for the development of an ATS in the treatment of pediatric generalized anxiety disorder. Using this example, we identify and discuss design issues unique to SMARTs that are best addressed in an external pilot study prior to the full-scale SMART. We also address the question of how many participants are needed in a SMART pilot study. A properly executed pilot study can be used to address concerns about acceptability and feasibility effectively in preparation for (that is, prior to) executing a full-scale SMART.

Keywords: pilot, experimental, adaptive, individualized, intervention

1. Introduction

With the establishment of evidence-based treatments for many chronic conditions, there is growing interest in, and need for, research on how to adapt and re-adapt treatments to maximize clinical benefit. That is, there is growing interest in developing health interventions that are individualized to the patient and that respond over time to the needs (successes, benefits) of the patient. In the field of mental health, for example, the director of the National Institute of Mental Health (NIMH) has recognized the current state of interventions research: "To improve outcomes we will need to [develop treatments which]…personalize care based on individual responses."[1]

Effective clinical management of chronic conditions (such as psychiatric disorders) often requires a sequence of treatments, each adapted to individual response, and hence multiple treatment decisions throughout the course of an individual's clinical care. For example, in child and adolescent mental health, the American Academy of Child and Adolescent Psychiatry practice parameters for pediatric depressive disorders recommend antidepressants following a non-response to initial psychotherapy.[2] Similarly, for pediatric anxiety disorders, an augmentation strategy of medication is recommended for children who show partial response to first-line psychotherapy.[3] Sequential treatments, in which treatments are adapted over time, are often necessary because treatment outcomes are heterogeneous across patients, treatment goals change over time, and in the long term it is necessary to balance benefits (e.g., symptom reduction) against observed and potential risks (e.g., unwanted side effects, patient burden).[4,5] As a result, clinicians often find themselves implicitly engaging in a sequence of treatments with the goal of optimizing both short-term and long-term outcomes.

Adaptive treatment strategies (ATS)[4-7] formalize such sequential clinical decision making. An ATS individualizes treatment via decision rules that specify whether, how, and when to alter the intensity, type, or delivery of treatment at critical clinical decision points. Examples of critical decisions include which treatment to provide initially, how long to wait for the initial treatment to work, how to determine whether the initial treatment worked or not, and which treatment to provide next if the initial treatment is or is not working. Treatments at each critical decision may include medications, behavioral interventions, or some combination of the two. The following is an example of an ATS following an initial diagnosis of pediatric generalized anxiety disorder (GAD):

  • ATS 1: “First treat with the medication sertraline (SERT) for 12 weeks. If the child has not achieved an adequate response to initial SERT (at week 12), augment by initiating a combination of sertraline + individual cognitive behavioral therapy (CBT) for 12 additional weeks; otherwise, if the child shows adequate response, maintain SERT alone for another 12 weeks.”

ATSs should be explicit; for instance, in ATS 1, “adequate response” may be defined as the child exhibiting a value on a symptom scale beyond a pre-specified cut-off (more on this topic in Section 4 below). From the perspective of the child and his/her parent(s), the ATS is a sequence of treatments: e.g., SERT for 12 weeks, followed by the combination of SERT + CBT for another 12 weeks (assuming an inadequate response to SERT alone). From the perspective of the clinician, the ATS is a clinical decision rule that guides treatment both initially and following an assessment of the 12-week response status. ATSs are also known as adaptive interventions and dynamic treatment regimes.[7-22]
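To make the decision-rule view concrete, the following short R sketch (R being the language of the sample-size tool referenced in Section 5) expresses ATS 1 as a function. The function name, arguments, and the CGI-I cut-off of 2 (one possible operationalization discussed in Section 4) are illustrative assumptions, not part of any protocol.

    # Hypothetical sketch of ATS 1 as a clinician's decision rule; names
    # and the CGI-I cut-off are illustrative, not protocol-specified.
    ats1_next_treatment <- function(stage, cgi_i = NA) {
      if (stage == "initial") {
        return("SERT for 12 weeks")
      }
      # Week-12 decision: "adequate response" operationalized as CGI-I <= 2.
      # (A missing CGI-I would require a pre-specified convention; Section 4.)
      if (isTRUE(cgi_i <= 2)) "maintain SERT alone" else "augment: SERT + CBT (COMB)"
    }

    ats1_next_treatment("initial")            # "SERT for 12 weeks"
    ats1_next_treatment("week12", cgi_i = 4)  # "augment: SERT + CBT (COMB)"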

Sequential multiple assignment randomized trials (SMARTs, discussed in more detail below) have been proposed to facilitate and accelerate the development of ATSs and represent an important advance in clinical research methodology.[5, 23-25] SMARTs can be used: (1) to discover which treatments work together in a sequence so as to lead to improved outcomes; (2) to investigate the interplay between trajectories of change in illness and the development of sequences of treatments; (3) to compare different sequences of medications, behavioral treatments, or treatment tactics (e.g., treatment delivery methods); and (4) to investigate the clinical utility of both biological (e.g., genetic) and clinician-observable data for individualizing treatment sequences. A central aim of SMARTs is to inform the construction of an optimized ATS: that is, to develop the sequence of treatments that leads to optimal long-term outcomes. Further, because SMARTs are at their core concerned with identifying which treatments work best for whom, when, and under what circumstances, mental health research that employs SMART designs fits squarely within the domain of comparative effectiveness research, another national research priority.[26]

SMARTs are rapidly gaining acceptance in the clinical and health services research community. The seminal NIMH-funded trials CATIE[27] and STAR*D[28, 29], in schizophrenia and depression, respectively, were early precursors to SMARTs and were important in encouraging researchers to consider the development of ATSs. In oncology, trials similar to SMARTs were conducted in the early 1990s;[30] see the work of Thall and colleagues[22] for more recent developments. In mental health and substance abuse research, a number of SMARTs have been completed or are currently on-going, including trials in alcohol dependence (D. Oslin, personal communication), attention deficit hyperactivity disorder (W. Pelham, personal communication), cocaine and alcohol dependence (J. McKay, personal communication), substance abuse problems in pregnant women (H. Jones, personal communication), and autism (C. Kasari, personal communication).

Despite their promise, increasing popularity, natural fit for research on individualized medicine, and adoption by some clinical trialists, SMARTs remain relatively new in clinical research. Because of their novelty, and because SMARTs represent a departure from the standard two-arm randomized clinical trial, questions remain about their use and execution by clinical investigators. Primary among these concerns is feasibility: study sections, grant-funding review boards, and other stakeholders often want to see evidence that the proposed SMART design is feasible, including evidence that the investigators have the experience to execute it properly.

External pilot studies have long been used in all areas of health research to address concerns about feasibility. The primary aim of this article is to provide guidance on executing a SMART pilot study in preparation for a full-scale SMART. In Section 2 we review the goals of pilot studies. In Section 3 we review the SMART design, and we introduce a motivating example SMART in pediatric anxiety disorders. In Section 3, we also review the types of scientific questions that a full-scale SMART can examine. In Section 4 we propose and discuss design considerations within the context of a pilot study in preparation for a full-scale SMART. In Section 5, we provide general guidance on how to choose the sample size for a SMART pilot study. To make ideas concrete, we focus on the pediatric anxiety disorders example SMART throughout the article; however, all of the main ideas presented in this manuscript extend readily to SMARTs used to develop and optimize ATSs for other chronic disorders.

2. Pilot Studies

Following the lead of a diverse set of researchers, statisticians, and methodologists,[31-36] we define a pilot study as a small-scale version of the larger study with the aim of fine-tuning the study design, evaluating its feasibility and acceptability, and preparing the research team for a future “full-scale” randomized trial. In this manuscript, “feasibility” means both the ability of the investigators to execute the SMART and the ability to treat participants with the ATSs that comprise (i.e., that are embedded in) the SMART (see below). By “acceptability” we mean the tolerability or appropriateness of the ATSs being studied (including the assessment procedures that are part of the ATSs; see Section 4 below) from the perspectives of both study participants and clinicians.

The statistical literature distinguishes between internal and external pilot studies; the definition given above (which we use throughout) is that of an external pilot study. A common aim of internal pilot studies, by contrast, is to improve sample size calculations during the execution of the already-designed full-scale trial: data from a pre-specified number of initial subjects are used to re-calculate the sample size needed for the remainder of the trial. As such, internal pilot studies are not concerned with examining feasibility and acceptability; for more information on internal pilot studies, consult the work of Wittes and colleagues.[37-39]

A well-designed and executed external pilot study helps answer questions such as: “Are we able to deliver properly the interventions we are proposing to compare?” “What is the level of staff understanding and fidelity to the research protocol?” “Are the proposed adaptive interventions acceptable to participants?” “Should we devise special quality control measures and procedures to improve and maintain fidelity during the large scale trial?”

Pilot studies also offer the opportunity to fine-tune proposed interventions and may provide preliminary knowledge about the direction of their effects. Pilot studies can also indicate whether a proposed intervention study is worth pursuing in its current form. For example, if feasibility and acceptability are found to be lacking beyond what can be remedied by fine-tuning, the outcome of a pilot study may be that a second pilot or a new study is necessary. Under this definition, the primary role of a pilot study is neither to test hypotheses about the potency of a given intervention nor to estimate effect sizes with any certainty. This idea is shared by the NIMH's pilot study program announcement (R34; http://grants.nih.gov/grants/guide/pa-files/PAR-09-173.html), which explicitly states that “…conducting formal tests of outcomes or attempting to obtain an estimate of an effect size is often not justified.”

As noted above, the primary aim of this article is to provide guidance on executing a SMART pilot study in preparation for a full-scale SMART. Beginning in Section 4, we discuss scientific, statistical, and logistical issues specific to executing a SMART that should be considered in a SMART pilot. This article does not provide guidance on executing pilot studies in general; rather, we focus on the aspects unique to SMART designs. Scientists preparing to execute a randomized trial should also consult the literature on external pilot studies for their more general uses.[34-36]

3. Sequential Multiple Assignment Randomized Trials

The overarching aim of a SMART is to inform the construction of an optimized ATS. The key feature of a SMART is that it allows investigators to evaluate the timing, sequencing, and adaptive selection of treatments in a principled fashion by use of randomized data. In a SMART, participants can move through multiple stages of treatment; each stage corresponds to a critical decision; and participants are randomized at each stage/critical decision. Randomized treatment options at each critical decision include appropriate single- or multi-component treatment alternatives.

An example of a SMART is shown in Figure 1. This SMART can be used to develop an ATS for the management of pediatric GAD involving the medication sertraline (SERT), cognitive behavioral therapy (CBT), and their combination (COMB). The SMART provides data that help investigators address two critical decisions: (1) which treatment to provide first, and (2) which treatment to provide to non-responding participants. Because the answer to each question may depend on the answer to the other, the SMART involves two stages of treatment, one per critical decision. Participating children are first randomly assigned to either 12 weeks of SERT or 12 weeks of CBT as first-line treatment. At the end of the initial 12 weeks of treatment, each child's response to treatment is evaluated, and the child is classified as either a treatment responder or a treatment non-responder. This binary indicator is the primary tailoring variable used as part of the SMART: children who are not responding at the end of 12 weeks are re-randomized between an augmentation of their initial treatment and a switch in treatment, whereas children who do respond continue with their initial treatment. As indicated in Figure 1, the primary outcome could be a longitudinal measure of anxiety over the 48-week trial period.

Figure 1
This example SMART can be used to develop an adaptive treatment strategy involving sertraline medication (SERT), cognitive behavioral therapy (CBT), and their combination (COMB) for the management of pediatric anxiety disorders.

By using sequenced randomizations, SMARTs ensure that at each critical decision, the groups of participants assigned to each of the treatment alternatives are balanced in terms of both observed and unobserved participant characteristics. This includes time-varying characteristics and outcomes experienced during prior treatment such as symptom levels, side effects, and adherence.

All SMART designs have multiple ATSs embedded within them. For example, in addition to the ATS described in the Introduction (ATS 1, subgroup A + B), the example SMART shown in Figure 1 includes the following three additional ATSs:

  • ATS 2: First treat with SERT only for 12 weeks. Then, if the child does not respond to initial SERT, switch to CBT alone for 12 additional weeks; otherwise, if the child responds to initial SERT, maintain SERT alone for another 12 weeks (subgroup A + C).
  • ATS 3: First treat with CBT only for 12 weeks. Then, if the child does not respond to initial CBT, augment treatment by initiating the combination (COMB) of sertraline + CBT for 12 additional weeks; otherwise, if the child responds to initial CBT, maintain CBT alone for another 12 weeks (subgroup D + E).
  • ATS 4: First treat with CBT only for 12 weeks. Then, if the child does not respond to initial CBT, switch to SERT medication for 12 additional weeks; otherwise, if the child responds to initial CBT, maintain CBT alone for another 12 weeks (subgroup D + F).

The four ATSs embedded in the SMART shown in Figure 1 are also described in Table 1. These are the only ATSs embedded in this example SMART.

Table 1
The four adaptive treatment strategies embedded as part of the example SMART design shown in Figure 1.

A full-scale SMART can be used to evaluate a variety of primary and secondary scientific questions to inform the development of an optimal ATS. SMARTs are factorial designs in a sequential setting;[21] thus the primary aims usually involve main effects. One example of a primary aim is “What is the main effect of first-line treatment?” In Figure 1, this involves a comparison of first-stage SERT versus first-stage CBT (subgroup A + B + C versus subgroup D + E + F). Note that this main-effect comparison averages over the second-stage treatments.

A SMART can also be used to contrast two or more ATSs. For example, in Figure 1, an investigator may be interested in examining which of the four ATSs leads to the most rapid reduction in symptoms over the 48 weeks following initial diagnosis. Because some participants in the SMART are simultaneously consistent with multiple embedded ATSs (e.g., participants in subgroup A are consistent with both ATS 1 and ATS 2), specialized methods that account for this multiple use of subjects are used to estimate and compare mean outcomes under different ATSs.[14, 15]

A full-scale SMART also provides investigators the opportunity to investigate the use of time-invariant tailoring variables (e.g., patient characteristics such as baseline severity, demographic variables, or genetic information) and time-varying tailoring variables (e.g., adherence to treatment) for improving sequential treatment; these may be the exploratory aims of a SMART. For instance, the example childhood anxiety SMART could be used to explore whether treatment adherence and treatment satisfaction over the initial 12 weeks are important predictors of second-stage treatment response, such that they may be useful additional tailoring measures. This can be done using standard treatment-by-covariate moderator analyses of the impact of second-line treatments on subsequent outcomes (e.g., “Among non-responders, does adherence during the initial 12 weeks of treatment moderate the impact of subsequently augmenting versus switching treatments during the next 12 weeks?”), or using more sophisticated data analytic methods such as dynamic regime marginal structural model estimation,[14] iterative minimization and G-estimation of the structural nested mean model,[10, 12, 16] or Q-learning[4, 40] (for software, see http://methcenter.psu.edu).

In other SMARTs, the duration of the first stage of treatment may differ between trial participants; for example, this may occur when the second stage begins at an event time (as in the ExTENd trial example below). In such cases, the duration of first-stage treatment could also be examined in secondary analyses for its potential usefulness as a tailoring variable. Hence, SMARTs not only test treatment options at particular stages of treatment and contrast embedded ATSs, but also yield high-quality data to inform the clinical utility of individualizing treatment using additional tailoring variables.
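As a small, hedged illustration of the standard treatment-by-covariate moderator analysis just described, the R sketch below fits a regression among re-randomized non-responders. The data set and all variable names are hypothetical, chosen only to mirror the week-12 adherence question above.

    # Hypothetical moderator analysis among non-responders: does week-12
    # adherence moderate the effect of augmenting versus switching on the
    # 48-week outcome? All data set and variable names are illustrative.
    nonresp <- subset(smart_data, responder_12wk == 0)
    fit <- lm(anxiety_48wk ~ second_stage_tx * adherence_12wk, data = nonresp)
    summary(fit)  # the interaction term estimates the moderation of interest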

Consistent with the overarching aim of a SMART, data analyses associated with these primary and secondary aims avoid choosing the best treatment at each critical decision point via separate, one-at-a-time optimizations. To appreciate why this is important, consider that a first-line treatment leading to poorer outcomes in the short term may lead to better outcomes in the long term when considered as part of a whole ATS. For instance, in the SMART in Figure 1, it is possible for first-stage CBT to lead to poorer (or similar) symptom relief at the end of 12 weeks relative to first-stage SERT, yet for beginning with CBT to result in lower symptoms than beginning with SERT when evaluated at 48 weeks. This can happen, for example, if initial CBT sets the stage for a more pronounced response to COMB among non-responders in the second stage (say, by priming the individual to take advantage of the subsequent CBT) than initial SERT does. To consider another case, suppose that the CBT provided as part of COMB includes components designed to increase adherence to medication. Here, SERT may be the better initial treatment when considered as part of a sequence; this can happen if initial SERT reveals prescriptive information: initial treatment with SERT may be better than CBT at identifying individuals who are poor adherers, and thus at indicating who needs COMB (which, in our example, includes components to improve adherence). Other conjectures are possible. The key point is that developing an ATS by piecing together the treatments that work best myopically (i.e., best at the end of each decision point) may be sub-optimal in terms of long-term outcomes.

It is important to understand that the SMART is not an adaptive trial design.[41, 42] Just as the standard two-arm RCT is a fixed trial design, the SMART is a fixed trial design that does not change during the course of the trial. What is adaptive about the SMART are the treatment strategies embedded within it (e.g., ATS 1 through ATS 4, described above). It is conceivable, of course, to conduct a SMART using an adaptive trial design by allowing some design parameters (e.g., sample size or sample selection) to vary with interim data; this topic, however, is outside the scope of this article.

SMARTs can be seen as developmental trials used to construct and optimize an ATS. Following the successful completion of a SMART, the constructed ATS can be tested against usual care (or another state-of-the-art intervention) in a standard RCT. In such a confirmatory trial, participants would be randomly assigned to either the state-of-the-art intervention or the SMART-optimized ATS to test which treatment strategy is more effective. SMARTs can also be sized for roles other than that of a developmental trial. For example, one could size a SMART to conduct a comparison of the embedded ATSs; this may be particularly attractive when one of the embedded ATSs represents usual care.

4. Considerations to Address in a SMART Pilot

In this section we discuss nine topics specific to executing a SMART that can be considered in a SMART pilot. Table 2 provides examples of issues/concerns which may arise (specific to each topic) and how they may lead to changes in the full-scale SMART protocol.

Table 2
Feasibility or acceptability issues/concerns and subsequent changes in the full-scale SMART protocol. Examples are given by topic addressed in Section 4.

The Primary Tailoring Variable

One area unique to a SMART is the importance placed on tailoring variables. In preparation for a SMART, a key exercise is a thorough discussion of the primary tailoring variable. (Note that in the ATSs embedded within a SMART, the primary tailoring variable is used to adaptively determine the next treatment; in the SMART design itself, the primary tailoring variable determines the set of randomized treatment options.) This involves brainstorming about how to determine early signs of non-response and when this determination should be made. As part of this discussion, investigators will need to decide how to assess early response/non-response (e.g., with the Clinical Global Impression-Improvement scale (CGI-I)[31]) and what criterion or cut-score should be used (e.g., a score of less than 3 on the CGI-I). Other considerations include: How sensitive is the measure to treatment change? Is there an established precedent in the literature that can be used to justify the measure as a tailoring variable? Would the measure be feasible in real-world settings?

In addition to identifying how to assess the tailoring variable, investigators must determine how frequently it needs to be assessed; that is, how often should early response/non-response be evaluated? This will depend in large part on the domain being studied and on historical precedent. In our example in Figure 1, response/non-response is assessed at one point in time (week 12), and a score ≤ 2 on the CGI-I is the criterion. Participants who are not responding at 12 weeks are re-randomized to a second-stage treatment. In this example, which uses a primary tailoring variable fixed at a pre-specified number of weeks after the initiation of first-stage treatment, investigators must operationalize what they mean by “end of 12 weeks,” because, as in any study, it is not always feasible to schedule clinic visits at exactly the end of 12 weeks. Often, for example, a pre-specified window of time around week 12 would be used. This issue is not unique to SMARTs; however, in a SMART the width of the window deserves particular attention because different window lengths imply different operationalizations (definitions) of the ATSs embedded in the SMART. The related issue of how to define the primary tailoring variable when non/responder status is missing is discussed below.

In other SMARTs, multiple assessments of early response/non-response can occur, and the primary tailoring variable is a summary of these multiple assessments. Alternatively, the primary tailoring variable may be defined as a “time until” outcome of first-stage treatment. For example, in a SMART concerning alcohol dependence (the ExTENd clinical trial; D. Oslin, personal communication), counts of heavy drinking days are used to measure response/non-response: participants are assessed weekly to ascertain the number of heavy drinking days over the prior week, and a participant is deemed an early non-responder as soon as ≥2 heavy drinking days occur. Note that in the ExTENd example the primary tailoring variable, defined as a “time until” measure, requires more frequent monitoring (compared with a primary tailoring variable assessed once at the end of first-stage treatment, as in Figure 1), adding logistical complexity. The pilot study can be used to examine the feasibility of assessing and using such a measure.
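The following R sketch shows one way such a “time until” tailoring variable could be computed from weekly assessments. It is our illustrative reading of the ExTENd rule as described above (cumulative heavy drinking days reaching 2), not the trial's actual code; all names are hypothetical.

    # Week at which a participant first meets the non-response criterion of
    # >= 2 cumulative heavy drinking days (illustrative implementation).
    week_of_nonresponse <- function(weekly_heavy_days) {
      wk <- which(cumsum(weekly_heavy_days) >= 2)[1]
      if (is.na(wk)) Inf else wk  # Inf: criterion never met in first stage
    }

    week_of_nonresponse(c(0, 1, 0, 0, 1))  # 5: early non-responder at week 5
    week_of_nonresponse(c(0, 0, 0, 0, 0))  # Inf: responder throughout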

In other SMARTs, the primary tailoring variable need not be dichotomous (e.g., it may be a trichotomous variable measuring responder, non-responder, and partial-responder status). However, the more complicated the primary tailoring variable, the more complex the trial design becomes, since the randomized treatment options may differ by values of the primary tailoring variable. In general, the choice of primary tailoring variable should be driven by a primary, parsimonious scientific question. Another advantage of parsimony in the choice of primary tailoring variable is that it allows for observed variability on other measures that may be useful for building ATSs that are more refined (i.e., that offer more individually tailored treatment) than those embedded in the SMART by design. These additional measures would be considered in secondary analyses of the data arising from a SMART. That is, other, possibly more interesting, scientific questions involving more refined tailoring (such as how to tailor treatment using a less coarse, continuous tailoring variable, or how to use adherence to first-stage treatment to decide how to treat non-responders in the second stage) can be addressed in secondary analyses; for more on this topic, see Other Potential Tailoring Variables below.

The SMART pilot should give the investigative team ample opportunity to train in applying the approach for assessing and using the primary tailoring variable, to assess whether the approach is clinically feasible, and to refine both the measurement of the primary tailoring variable and the criterion for determining early response/non-response before the full-scale trial.

Randomization Procedure

In a SMART, participants are randomized at multiple critical decisions over the course of the trial. Investigators can choose between two randomization procedures: an up-front approach and a real-time approach. In the up-front approach, participants are randomized at the beginning of the trial to one of the ATSs embedded in the SMART design; in our example, this means randomizing participants at baseline to one of the four ATSs described in Section 3. In the real-time approach, participants are randomized sequentially at each critical decision point, as described in Figure 1. In both approaches, participating families are informed during the consent process of the possible treatment sequences to which they might be randomized. Compared with the up-front approach, the real-time approach has at least one important advantage: it allows investigators to capitalize on information (including time-varying covariates) available at the time of each randomization to ensure balance in assigned treatment options at each critical decision. For instance, with a real-time approach in our example, the second randomization among non-responders (to initial SERT or initial CBT) can be stratified on adherence to treatment, symptom severity, or other important outcomes observed during the first 12 weeks of treatment. This matters because if, by chance, the composition of the groups differs on such variables and the variables are prognostic for subsequent study outcomes, then differences between the groups could be due to these compositional differences rather than to differences between second-stage treatments. The up-front approach does not afford investigators this level of control over compositional balance. A SMART pilot study gives the research team's analyst an opportunity to develop and evaluate the randomization procedure and check for unanticipated errors; a minimal sketch of one such procedure is given below.
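The sketch below pre-generates second-stage allocation lists using permuted blocks of size four within strata defined by an assumed week-12 adherence variable. It is an illustration of the general idea under stated assumptions, not a prescribed procedure; all names are hypothetical.

    # Pre-generate allocation lists for re-randomized non-responders using
    # permuted blocks of size 4 (two "augment", two "switch" per block)
    # within each stratum of an assumed week-12 adherence variable.
    set.seed(101)
    permuted_block_list <- function(n, block = c("augment", "augment",
                                                 "switch", "switch")) {
      draws <- replicate(ceiling(n / length(block)), sample(block),
                         simplify = FALSE)
      unlist(draws)[seq_len(n)]
    }
    allocations <- list(high_adherence = permuted_block_list(12),
                        low_adherence  = permuted_block_list(12))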

Missing the Primary Tailoring Variable

In our example, the primary tailoring variable is response status (responder/non-responder) at the end of acute treatment (week 12). Ideally, every participant in a SMART will have a measure of the primary tailoring variable available at the end of 12 weeks that can be used to guide subsequent treatment assignment. However, this ideal might not hold: the assessment of a given participant's response to the first 12 weeks of treatment may be missing, either because the participant dropped out of the study prior to week 12 or because the participant was unavailable for the week-12 assessment (e.g., ill or on vacation). The problematic situation for purposes of executing the SMART is the latter one, in which a participant misses the week-12 assessment yet returns to the study at some later point, when a decision must be made concerning response/non-response status and the next step in treatment.

All randomized trials must contend with missing data, but the problem takes a unique form in SMARTs because the interventions being studied (the ATSs) are themselves adaptive. The critical issue here is how to manage missingness for purposes of offering/assigning subsequent treatment, as opposed to how to handle missing evaluation outcomes for purposes of data analysis. As in standard randomized trials, the pilot can be used to prepare and practice low-burden approaches for collecting missing evaluation outcomes (e.g., via telephone assessments).

Beyond this, a satisfactory solution to the “missing tailoring data problem” (as opposed to the “missing evaluation data problem”) recognizes that this type of missingness should be part and parcel of the definition of the ATSs embedded in the SMART, just as in typical clinical practice the clinician must decide on the next treatment when faced with missed visits. The solution should therefore be guided by what would be done in clinical practice. Clinical investigators should ask: “How do I treat a patient when s/he returns after a missed clinic visit, and what do I need to know concerning the missed visit to make this decision?” Investigators may need to differentiate between excused (e.g., the family could not find childcare for younger siblings) and unexcused missed assessments, and to consider how long the participant was missing, how many sessions the participant attended in the first treatment phase, how well or poorly the participant was doing prior to the missed visit, and how well the participant is doing upon return. The actual approach used will depend on the particular research question(s) being investigated and on the types of disorders and treatments being studied. Consistent with these ideas, the solution involves having a fixed, pre-specified way to determine subsequent treatment in the presence of a missed clinic visit.

This can be operationalized in at least two ways in the SMART. First, missingness could be made part of the definition of early non-response. This approach could be taken if, in actual practice, a missed clinic visit is clinically viewed as non-response (often the case in substance abuse treatment). One way to operationalize this is to classify all participants with missing response status at a given decision point as non-responders and assign each such participant his/her randomized treatment option at the next clinic visit; this approach could be labeled “non-responding until proven responding” (see the sketch below). The opposite approach, “responding until proven non-responding,” could also be employed, whereby a participant missing the responder/non-responder status is classified as a responder for purposes of subsequent treatment assignment. Another option in the case of a missing week-12 visit is to devise an approach that relies on data that would be readily available to the clinician in practice, including previous response to treatment and current clinical status up to the point of missingness, to determine responder status; this option requires investigators to decide how to summarize the observed history of treatment response, including how much historical data to use. Second, missingness can be treated separately from non/responder status, with a separate treatment offered altogether. This second approach may be more appropriate than the first if the second-stage treatment options are simply not feasible for subjects exhibiting this type of missingness. Importantly, no matter which approach is used, the choice of subsequent treatments in the presence of missingness should be well specified and fixed prior to the trial. The SMART pilot will allow the investigative team to train in applying the chosen approach and to assess whether it is clinically feasible and scientifically relevant.
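As a small illustration, the R sketch below encodes the “non-responding until proven responding” convention using the week-12 CGI-I criterion from our running example; the names and the cut-off are assumptions made for illustration only.

    # Classify week-12 status, treating a missing CGI-I as non-response
    # ("non-responding until proven responding"); illustrative names/cut-off.
    classify_week12 <- function(cgi_i) {
      ifelse(!is.na(cgi_i) & cgi_i <= 2, "responder", "non-responder")
    }

    classify_week12(c(1, 4, NA))
    # "responder" "non-responder" "non-responder"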

Other Potential Tailoring Variables

The investigative team must also decide which additional potential tailoring variables should be collected. Potential tailoring variables include both baseline patient characteristics and time-varying measures (e.g., treatment adherence or side effects) that might be useful in tailoring treatment to the patient. In the childhood anxiety example, investigators may want (in the full-scale SMART) to explore whether patient characteristics and baseline measures might be used to tailor initial treatment, and whether patient characteristics, baseline measures, and outcomes of initial treatment (collected prior to the subsequent critical decision point) may be used to tailor the second treatment. A time-varying tailoring variable can be measured at a single point (e.g., the week-12 clinic visit) or may be a cumulative summary of treatment response up to that point in time. Potential tailoring variables should be simple and easy to use (minimal burden) in actual clinical practice (for example, short forms of instruments) and collectable by the treating clinician. The SMART pilot can be used to try out such instruments, items, or questions under consideration for tailoring. Although these variables do not affect the conduct of the full-scale trial (unlike the primary tailoring variable), they could prove important when the data are analyzed or lead to the creation of new variables useful for tailoring treatment in future studies.

The use of tailoring variables represents an important departure from standard randomized trials, even those designed with an interest in understanding moderators that predict treatment response,[44] because standard trials rarely consider outcomes of initial treatment, such as symptom, side-effect, and adherence measures, as candidate variables for selecting subsequent treatment. This is one important reason why SMARTs more closely mirror clinical practice and will ultimately lead to information that is more clinically relevant to the practicing clinician.

Identifying Unanticipated Tailoring Variables

Just as the SMART pilot can be used to practice the measurement of new tailoring variables identified a priori, as described above, it may also help identify unanticipated variables that could be useful for tailoring treatment and could then be measured in more detail in a full-scale trial. Focus groups or structured exit interviews scheduled during and after the SMART pilot will likely be helpful in uncovering new and potentially important tailoring variables. Such focus groups can also be conducted with non-clinician members of the treatment team (e.g., research assistants, project coordinators). For example, in the context of our example SMART, families who are difficult to schedule, arrive late to visits, require repeated reminders, rush through paperwork, or are otherwise challenging to work with may benefit more from one treatment than another, compared with families who are highly compliant and easier to manage.

Evaluation (or SMART) Assessments versus Treatment (or ATS) Assessments

In a SMART, a clear distinction is made between research assessments made for purposes of data analysis to evaluate the effectiveness of the ATSs (data used in evaluation) and assessments made as part of the ATSs to inform subsequent treatment assignment (data used in tailoring). Keeping these assessments distinct is not entirely unique to SMARTs; in standard two-arm RCTs it is equally important to differentiate between information gathered and shared as part of treatment and information gathered for purposes of evaluating treatment effectiveness. However, in SMARTs this distinction deserves extra emphasis because the ATSs embedded within the SMART could unknowingly become ill-defined if evaluation data were implicitly or explicitly used in the determination of early non/response during the conduct of the trial.

In our example, the week-12 response status is assessed by the treating clinician as part of the ATS (data used in tailoring). Because the aim of the SMART study is to inform actual clinical practice, it is acceptable for the week-12 response status (used to inform subsequent treatment randomizations) to be an unblinded clinician evaluation. Other assessment measures are also collected at the week-12 visit, however, and are used to determine the effectiveness of the treatments (data used in evaluation); the key difference is that these latter assessments are not part of the embedded ATSs. If possible, it is important to use blinded independent evaluators (clinicians not involved in the provision of treatment) to collect the outcome measures that will be used to evaluate the effectiveness of the ATSs or their components. It is equally important to ensure that only data used in tailoring (i.e., as part of the embedded ATSs) are used to determine subsequent treatment changes. The SMART pilot provides staff an opportunity to prepare and practice these assessment methods and the strategies for keeping the two types of assessments separate and distinct. Fundamentally, this is about understanding the distinction between the SMART itself and the ATSs embedded within it.

Staff Acceptability and Fidelity to Changes in Treatment

A properly executed SMART requires careful staff fidelity to the changes in treatment provided over time as dictated by the study design. This may be challenging because (1) clinical researchers accustomed to standard randomized trials may have little experience with the sequenced treatments that are an explicit part of the SMART research protocol and (2) following the SMART protocol may limit the use of clinical judgment. Prior to a full-scale SMART, a pilot SMART can be used to identify concerns clinicians may have about the sequence of treatments offered and about the assessment of early response/non-response. The pilot can be used to develop training procedures to enhance clinician fidelity to both the research protocol and the treatment strategies, and to ensure that the clinical team has the training and expertise needed to carry out the SMART successfully. For instance, in our example SMART, suppose that a child is randomized to receive SERT as first-stage treatment and that prior to week 12, say at week 10, the treating clinician is concerned that the child is worsening and insists that the child be moved immediately to the next stage of treatment. Is this an indication that the definition and timing of non-response should be revised prior to the full-scale SMART? Do staff members need training in how to manage such emergent clinical situations in a consistent manner? Can something be learned from this situation that will refine and improve the sequence of treatments? A SMART pilot can be used to identify when and where staff flexibility is warranted, to develop fidelity measures for its continued assessment, and to obtain staff feedback about the timing of treatment switches and augmentations.

Participant Concerns about Changes in Treatment

The SMART pilot can also be used to assess whether the treatment changes specified in the SMART are acceptable to participants and whether the new treatment options being offered are clinically feasible. Understanding participant concerns about the treatment sequences may lead to modifications that enhance their efficacy, acceptability, and feasibility. To inform these concerns, the SMART pilot may include additional survey items, exit interviews, or focus groups with participants to better understand, from their perspective, what was useful about the sequence of treatments offered, the transitions between treatments, and any concerns about acceptability. Questions may include: “How was your experience when you transitioned from a psychiatrist to a psychologist?” “How was your experience when you participated in the CBT sessions after having come off your medication?” “Did you find that the concerns you expressed during your sessions with the psychiatrist were also understood by the psychologist?” “Was the rationale for the treatment change adequate?” This information will aid in the execution of the full-scale trial and inform treatment delivery and refinement of the treatment strategies. It may also suggest additional measures to investigate as tailoring variables.

Ethical Considerations and Consent Procedures

SMART studies may be more acceptable to participants than RCTs of a fixed, non-adaptive treatment. With few exceptions (such as in response to adverse events), standard RCTs of fixed, non-adaptive treatments offer no alternative treatment within the context of the trial for participants who are not responding well. In contrast, consider the SMART in Figure 1, in which a second treatment is offered to any participant who is not responding well. Of course, no guarantee can be made prior to a SMART that a change or augmentation in treatment (among first-stage non-responders) will result in improved outcomes; however, in a SMART such changes or augmentations are examined (and could possibly lead to improved outcomes), whereas in fixed-treatment RCTs this alternative is usually unavailable. Further, as in Figure 1, a SMART can involve potentially less burdensome maintenance or step-down treatment options for responding participants.

From the investigator's point of view, participants in a SMART are randomized to a number of pre-specified ATSs (see Table 1). From the study participant's point of view, they are offered a sequence of treatments over time. A participant who is randomized to ATS1 in the SMART in Figure 1, for example, may receive either the treatment sequence (SERT, SERT) or the treatment sequence (SERT, SERT+CBT), depending on their early non/response status. Since interventions offered to SMART participants are treatment sequences—rather than fixed non-adaptive treatments, as in most standard RCTs—this aspect of the SMART design may require consent procedures or language different from those used in a standard RCT. A SMART pilot study can be used to practice these changes in the language typically used in standard RCT consent forms.

Re-randomization does not imply that re-consent is necessary. As described in the Randomization Procedure section above, the actual allocation procedure used may be one that performs the randomizations up-front or in real-time (sequentially). Regardless of the allocation procedure used, SMART participants provide consent up-front to be part of the entire study and to be assigned to one of the embedded ATSs (and therefore receive one of the embedded treatment sequences), just as in a standard RCT participants consent up-front to be part of the study and to be assigned to one of the fixed treatments. A key part of this, of course, is that participants have a clear understanding (as part of the up-front consent procedures) of the types of treatment sequences which they may be offered during the course of the SMART. Another key part of this is that investigators understand that a SMART is not a combination/packaging of separate sub-studies, one per randomization; rather, a SMART is itself one study with multiple randomizations.

Indeed, re-consenting SMART participants at the second stage may conflate second-stage treatment drop-out with study consent and have unintended consequences for study drop-out. A SMART participant may not find the second-stage treatments acceptable or helpful and, as a consequence, drop out of treatment; this does not mean the participant drops out of the study. This is no different from a participant in a standard RCT who stops attending CBT sessions after week 5, for example, but continues to provide outcome assessments (that is, a treatment drop-out but not a study drop-out). The problem with re-consenting SMART participants prior to second-stage treatment initiation is that, beyond being unnecessary as described above, it may have the unintended consequence of encouraging study drop-out (leading to missing outcomes) among participants who would otherwise have been only treatment drop-outs (and possibly continued to provide outcome assessments). The SMART pilot study can further be used to ensure that study consent is obtained up-front rather than sequentially, to avoid these concerns.

5. How Many Participants Are Necessary for a SMART Pilot?

As discussed above, the primary aim of a pilot study is to examine the feasibility of carrying out a future larger-scale trial, rather than examining the clinical impact of a proposed set of treatments. Correspondingly, the sample size for a SMART pilot study should be based on a feasibility aim, rather than on detecting an effect size.

To ensure that the investigative team can assess feasibility, a sufficient number of participants must appear in each of the subgroups of the SMART. One way to accomplish this is to size the pilot study so that, with probability at least k, more than m participants will fall into each of the non-responder subgroups B and E in Figure 1. More formally, the total sample size N can be chosen such that

\[
\Pr(M_B > m \text{ and } M_E > m) > k, \tag{1}
\]

where the probability is taken over repeated pilots of size N, and $M_B$ and $M_E$ are random variables denoting the numbers of subjects who fall into non-responder subgroups B and E, respectively. Assuming that one-half of the available participants are allocated to each first- and second-stage treatment option, we have

\[
M_j = \frac{1}{2} \sum_{i=1}^{N/2} \left(1 - R_{ji}\right) \quad \text{for } j = B, E, \tag{2}
\]

where $R_{ji}$ is a dummy indicator that equals 1 if subject $i$ is a responder and 0 if subject $i$ is a non-responder. (To ensure approximately equal numbers of participants at each randomization during the execution of both the pilot and the full-scale SMART, investigators may consider permuted-block randomization (e.g., blocks such as AABB, ABBA, BAAB, …) or more sophisticated minimization allocation procedures.) Note that the random variable $V_j = \sum_{i=1}^{N/2} (1 - R_{ji})$ has a binomial distribution of size $N^* = N/2$ with probability $q_j$, the true (unknown) rate of non-response to first-stage treatment $j$. For simplicity (and to be conservative; see below), we further assume $q = q_B = q_E$ (equal rates of non-response to the two first-stage treatment options). Since $M_j = V_j/2$, so that $M_j > m$ exactly when $V_j > 2m$, and since the two first-stage arms are randomized independently, display (1) is equivalent to

\[
\Pr(V > 2m)^2 > k, \tag{3}
\]

where $V$ has a binomial distribution of size $N^*$ with probability $q$. Note that subgroups B and C will contain (approximately) the same number of subjects, as will subgroups E and F, which is why it suffices to focus on subgroups B and E rather than on all four non-responder subgroups. Further, we focus on the non-responder subgroups B and E, rather than on the responder subgroups, because in practice (with non-response rates in the typical range of 35% to 65% for SMARTs) if subgroups B and E contain at least m participants with high probability, then so will subgroups A and D, respectively; intuitively, this is because responders are not re-randomized and therefore are not split further.

Three steps are required to use display (3) to determine the sample size for the pilot study. First, investigators supply m, a guess at the common non-response rate q, and the desired k (say, 80% or 90%). Second, binomial upper-tail probabilities $\Pr(V > 2m)$ are evaluated for increasing sizes $N^*$ (beginning with $N^* = 1$), each with success probability q; this produces a non-decreasing sequence of probabilities. Third, choose the $N^*$ corresponding to the first (smallest) probability whose square exceeds k, and set $N = 2N^*$. These calculations are easily done using any statistical software package. An R[45] function that performs these calculations is available for download on the Penn State University Methodology Center website (http://methcenter.psu.edu).
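For readers who prefer to script the calculation themselves, the short R function below implements the three steps above. It is our own sketch of the calculation implied by display (3), not the Methodology Center's code; it reproduces the worked example given next.

    # Smallest total N such that Pr(V > 2m)^2 > k, with V ~ Binomial(N/2, q);
    # a sketch of the three-step procedure described in the text.
    smart_pilot_n <- function(m, q, k) {
      Nstar <- 1
      repeat {
        p <- pbinom(2 * m, size = Nstar, prob = q,
                    lower.tail = FALSE)       # Pr(V > 2m)
        if (p^2 > k) return(2 * Nstar)        # total pilot size N = 2 * N*
        Nstar <- Nstar + 1
      }
    }

    smart_pilot_n(m = 3, q = 0.50, k = 0.90)  # 42, matching the example below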

Table 3 shows suggested values of N for k = 0.80, 0.90; m = 2, 3, 4, 5; and q varying from 0.35 to 0.65. As an example, suppose the investigative team agrees that m = 3 children in each of the small non-responder subgroups is sufficient to ensure familiarity with the research protocol and treatment delivery, to identify potential problems, and to address the concerns described in Section 4 above. Suppose further that the team would like this to happen in the pilot with k = 90% probability. If the team expects a non-response rate of q = 50% at the end of week 12, then a pilot study for the SMART in Figure 1 would require approximately 42 participants. The key input needed is the anticipated non-response rate; this can be based on existing studies in the relevant topic area.

Table 3
Sample sizes required for piloting SMART studies of the type shown in Figure 1. N is chosen such that, with probability k and early non-response rate q, a minimum number of participants m will fall into each of the non-responder subgroups B and E.

The non-response rate q is unknown prior to a SMART pilot. Although the investigators may be able to find somewhat similar studies with somewhat similar treatments and participants, it is likely that those participants and first-stage treatments were not identical to the ones proposed. Thus, it may be useful to use a value of q smaller than the actual guess so that N is chosen conservatively. In addition, the calculations above assume that the non-response rate q is identical for both first-line treatments. In practice, this assumption is unlikely to hold (indeed, investigators may hypothesize that one of the first-line treatments will lead to better short-term outcomes). In this case, we recommend that investigators set q to the smaller of the two anticipated non-response rates, again to be conservative in the choice of pilot sample size. Further, as in any study (pilot or full-scale), it is useful to inflate N by a guess at the study drop-out/attrition rate. For instance, in the example above, if the team expects an overall 10% study drop-out/attrition rate by the end of the study, then the pilot study sample size should be 47 (i.e., 42 / (1 – 0.10), rounded up) instead of 42.

6. Summary

Adaptive treatment strategies (ATS) hold much promise for operationalizing and informing the type of adaptive, sequential treatment decisions made to address chronic conditions in “real-world” clinical settings. Sequential multiple assignment randomized trials (SMARTs) have been developed explicitly for the purpose of developing such ATSs. A small number of SMARTs have been designed and completed, or are currently under way, in clinical research. Despite this, SMARTs are still new to many researchers, and questions remain concerning how to design them appropriately. In this article, we present a number of design considerations, unique to SMARTs, that are best addressed within the context of a small, but useful, pilot study in preparation for a full-scale SMART. To motivate and illustrate these considerations, we discuss an example SMART that addresses how to treat children and adolescents with anxiety disorders using medication, CBT, or their combination (including how to treat them following non-response to an initial first-line treatment). This article should serve as a useful guide for clinical trial investigators interested in planning a SMART study to develop or optimize an ATS.

Acknowledgments

Funding was provided by NIMH grants K23-MH-075843-04 (Compton), R01-MH-080015 (Murphy), K23-MH-090216 and a New York State Psychiatric Institute Research Associate Award (Gunlicks-Stoessel), and NIDA grant P50-DA-010075 (Murphy). Drs. Almirall, Compton, Gunlicks-Stoessel, Duan, and Murphy report no financial or potential conflicts of interest. The authors thank John Walkup and Joel Sherrill for thoughtful comments on an earlier draft of this manuscript. In addition, we thank three reviewers whose comments helped improve this manuscript significantly, including helping it reach a broader audience of clinicians and biostatisticians alike.

Abbreviations used

SMART: sequential multiple assignment randomized trial
ATS: adaptive treatment strategy
SERT: sertraline medication
CBT: cognitive behavioral therapy
COMB: combination of SERT + CBT
GAD: generalized anxiety disorder
