NCBI Bookshelf. A service of the National Library of Medicine, National Institutes of Health.

Hong H, Carlin BP, Chu H, et al. A Bayesian Missing Data Framework for Multiple Continuous Outcome Mixed Treatment Comparisons [Internet]. Rockville (MD): Agency for Healthcare Research and Quality (US); 2013 Jan.

## OA Data

We reviewed publications in English after 1979 that examined physical therapy interventions for community-dwelling adults with knee pain secondary to osteoarthritis. A total of 4,266 references were retrieved.^{12} After screening out studies that contained no eligible exposure, target population, outcomes, or associative hypothesis tested, 422 references were included in our review. Knee pain, disability, quality of life, and functional outcomes after physical therapy interventions were reported in 193 RCTs; 84 of those met the study inclusion/exclusion criteria given in the next paragraph. Because definitions of physical therapy interventions and outcomes varied dramatically among studies, only a small proportion of comparisons met these criteria.

Inclusion/exclusion criteria involved the following aspects. First, comparators could include no active treatment, usual care (education), sham stimulation (placebo), or another physical therapy intervention (that is, active-active trials were not excluded). Eligible patient-centered outcomes were knee pain, disability, quality of life, perceived health status, and global assessments of treatment effectiveness. The target population was adults with knee pain secondary to knee osteoarthritis in outpatient settings, including home-based therapy. Chronic OA was defined as meeting diagnostic criteria and having symptoms of OA for >2 months. We excluded populations with knee OA who had knee arthroplasty on the “study limb” within 6 months before the study, osteonecrosis, acute knee injuries, inflammatory arthritis, or arthritis secondary to systemic disease, as well as physical therapy treatments combined with drug treatments. Since the same inclusion and exclusion criteria were applied to all included studies, we assume that the study populations are similar to each other.

For the present analysis, we selected the pain and disability outcomes as primary and secondary outcomes, respectively, resulting in the inclusion of 54 RCTs. Table 1 displays the data from these 54 RCTs, comprising aggregated continuous outcomes (sample mean and standard deviation [SD]) measuring the level of pain and disability after physical therapies using various standard scores. The OA data compare eight physical therapies (low intensity diathermy, high intensity diathermy, electrical stimulation, aerobic exercise, aquatic exercise, strength exercise, proprioception exercise, and ultrasound treatment) and three reference therapies (no treatment, placebo, and education). Under proprioception exercise, we also included tai chi and balance exercise. Most studies reported treatment outcomes at a single followup time, but when a study investigated outcomes at multiple followup times, we selected the one most commonly reported for that treatment. To measure the pain outcome, the Western Ontario and McMaster Universities Osteoarthritis Index (WOMAC), Visual Analogue Scale (VAS), Arthritis Impact Measurement Scale (AIMS), and other standard scores were used. For the disability outcome, the measurement tools included the WOMAC total, Medical Outcome Study (MOS) 36-Item Short-Form Health Survey (SF-36 physical function), AIMS, Health Assessment Questionnaire (HAQ), and Knee Injury and Osteoarthritis Outcome Score (KOOS). Although these scores do not share the same scale and differ in a few details, in general they measure outcomes equivalently, and all of their scales cover the same qualitative ranges (from “no pain” to “extreme pain” for pain measurements, and from “no impairment” to “profound impairment” for disability).
The scores they yield also tend to be highly correlated when reported for the same subjects.^{16-18} Because the scores' different scales make their values incomparable, we rescaled the mean scores to range from 0 to 10, where small values indicate better condition, and call this the rescaled score. We also recalculated the SDs based on the transformation of the mean score, and call this the rescaled SD. We remark that we have no reason to doubt the appropriateness of a linear transformation here, but our methods apply equally well under nonlinear transformations if these are clinically more appropriate.
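The linear rescaling described above is straightforward to sketch in code. The function below is ours, not the report's actual implementation; the instrument's native range and directionality are supplied as assumed inputs.

```python
def rescale(mean, sd, lo, hi, higher_is_worse=True):
    """Linearly map a score from its native range [lo, hi] onto [0, 10],
    where small rescaled values indicate better condition.

    The SD is multiplied by the absolute slope of the linear map, so a
    direction flip changes the mean but not the rescaled SD.
    """
    slope = 10.0 / (hi - lo)
    if higher_is_worse:
        new_mean = slope * (mean - lo)
    else:  # instruments where higher raw scores mean better condition
        new_mean = 10.0 - slope * (mean - lo)
    return new_mean, slope * sd

# e.g., a 0-100 VAS pain mean of 45 (SD 20) becomes 4.5 (SD 2.0)
print(rescale(45.0, 20.0, 0.0, 100.0))
```

A reversed-direction instrument such as SF-36 physical function (higher = better) would use `higher_is_worse=False`, so that small rescaled values still indicate better condition.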

Among the 54 studies, 51 measure the pain outcome, 26 measure the disability outcome, and 23 include both outcomes. Figure 1 exhibits the trial network among therapies for each outcome. The size of each node represents the number of studies investigating the therapy, the thickness of each edge denotes the total number of samples for the comparison, and the numbers on the edges indicate the numbers of studies investigating the comparison. For example, for the pain outcome, five studies investigate the comparison between no treatment and proprioception exercise, yet this edge is thinner than the edge between education and strength exercise, which has only three studies, because the latter comparison involves more total subjects. The network features are similar for both outcomes, but we have limited information on the disability outcome, with fewer connections between therapies and smaller total sample sizes overall than for the pain outcome.

## Likelihood

In MTCs, we must carefully distinguish between the terms *treatment* and *arm*. The former refers to a drug or device being tested, while the latter refers to the data on patients randomized to a particular drug or device in a *single* study. We must also distinguish between *reference* and *baseline* treatments. The reference treatment is a standard control treatment (often placebo, or simply no treatment) which can be compared with other active treatments. In our OA data, we select “no treatment” as the reference treatment among three possibilities (no treatment, education, and placebo). The baseline treatment is defined as the treatment assigned to the control arm *in each study*. That is, each study has its own baseline treatment, which is often the same as the reference treatment, but could differ. In this report, we assume there is no inconsistency, defined as a discrepancy between treatment effects arising from direct and indirect comparisons.^{8}

Suppose we are comparing *K* treatments from *I* studies in terms of *L* outcomes. For each continuous outcome, we assume that the data for a specific outcome from each study follow a normal distribution. That is,

$${\bar{y}}_{ikl} \sim \mathrm{N}\left({\Delta}_{ikl},\ {\sigma}_{ikl}^{2}/{n}_{ikl}\right),$$

where *ȳ _{ikl}* is the observed sample mean of the measurements, Δ* _{ikl}* is the unknown true population mean, ${\sigma}_{ikl}^{2}$ is the known sample variance, and *n _{ikl}* is the number of subjects in the *k*^{th} treatment arm of the *i*^{th} study with respect to the *l*^{th} continuous outcome. For simplicity, we take *k* = 1 as the reference treatment. Generally, in meta-analysis, we cannot estimate within-study correlations because we have only aggregated data.^{19} We therefore assume the *ȳ _{ikl}* are independent across arms and outcomes in study *i*, since within-study correlations are not observed in every study.
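As a concrete illustration, the log-likelihood contribution of one arm-level sample mean follows directly from the normal form above. This sketch in Python is ours (the report's models were fit in WinBUGS); the function name is hypothetical.

```python
import math

def loglik_arm(ybar, delta, sigma, n):
    """Normal log-likelihood of an observed arm-level sample mean:
    ybar_ikl ~ N(Delta_ikl, sigma_ikl^2 / n_ikl)."""
    var = sigma ** 2 / n  # variance of the sample mean, not of one subject
    return -0.5 * (math.log(2 * math.pi * var) + (ybar - delta) ** 2 / var)

# The sample mean is far more precise than a single observation: with
# n = 100 and sigma = 2, its standard error is 2 / sqrt(100) = 0.2.
```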

## Existing Lu and Ades-Style Model

### Fixed Effects Model

For meta-analysis, a fixed effects model, assuming no variability between studies, can easily be implemented. Following Lu and Ades,^{7}^{, }^{8} the model can be written as

$$\Delta_{ikl} = \alpha_{iBl} + \eta_{Bkl}, \qquad (1)$$

where *B* indicates the baseline treatment in each study *i*. Here, α* _{iBl}* is the effect of the baseline treatment and *η _{Bkl}* is the mean difference between treatment *k* and the baseline treatment (*B*) for outcome *l* in study *i* (so that *η _{BBl}* = 0). However, we have to be careful in interpreting α* _{iBl}* when the baseline treatment is not always the same. We define *d _{kl}* as the mean difference between treatment *k* and the reference treatment for outcome *l*, with *d*_{1}* _{l}* = 0. Thus, *η _{Bkl}* can be calculated as *d _{kl}* − *d _{Bl}*, and we infer the treatment effects in terms of *d _{kl}*; that is, we assign a prior distribution to *d _{kl}*, rather than *η _{Bkl}*. We denote this model as the Lu and Ades (LA)-style fixed effects model (LAFE). In this approach, it is hard to interpret the baseline treatment effect α* _{iBl}* because not all studies have the same baseline treatment.

### Random Effects Model

Next, in order to allow variability between studies, we introduce random effects, *δ _{iBkl}*, replacing the *η _{Bkl}*. Specifically, model (1) is respecified as

$$\Delta_{ikl} = \alpha_{iBl} + \delta_{iBkl}, \qquad (2)$$

where we can assume homogeneous variance across random effects for all arms, i.e.,

$$\delta_{iBkl} \sim \mathrm{N}\left(d_{kl} - d_{Bl},\ \tau_{l}^{2}\right). \qquad (3)$$

Here, *δ _{iBkl}* is 0 when *k* = *B*, and *τ _{l}* is the standard deviation of the random effects for each outcome *l*. We denote this model as the Lu and Ades-style homogeneous random effects model (LAREhom). For multi-arm trials, Lu and Ades provide a between-arm-contrast correlation of 0.5, as a consequence of homogeneous variance and their consistency equations.^{8} The *δ _{iBkl}* in (3) are then replaced by a vector **δ**_{il} that follows a multivariate normal distribution with dimension equal to the number of arms in study *i* minus one, for each outcome *l*.

## Allowing for Missing Data and Correlations Between Outcomes

### Contrast-Based Approach

We denote a model that parameterizes *relative* effects (e.g., the *η _{Bkl}* and *δ _{iBkl}* in (1) and (2), respectively) as a *contrast-based* (CB) model. Lu and Ades-style models use such a CB approach. Note that the mean effect difference between treatment *k* and the reference treatment in terms of outcome *l* (*d _{kl}*) is the parameter of interest in CB models. In MTCs it is common that the number of treatments compared in the *i*^{th} study is less than the complete collection of *K* treatments. Since each study contributes to the likelihood for a different set of treatments, using the observed measurements only can complicate estimating the covariance matrix for the **δ**_{il}, and lead to difficulties in prior assignment and parameter inference. In addition, it is plausible that researchers select study arms based on the trials conducted previously, what statisticians call “nonignorable missingness.”^{15} In this case, ignoring the missing treatment arms can potentially lead to biased parameter estimates.

To remedy this, we assume that all studies can in principle contain every treatment as their arms, but in practice much of this information is missing for various reasons. Under this assumption, all studies can always have a common (though possibly missing) baseline treatment, *B* = 1, and the distribution for the random effects *δ _{iBkl}* in (3) can be replaced with a matrix form as follows:

$$\mathbf{\delta}_{il} \sim \mathrm{MVN}\left(\mathbf{d}_{l},\ \mathbf{\Sigma}{}_{l}^{\mathit{\text{Trt}}}\right), \qquad (4)$$

where **δ**_{il} = (*δ _{i}*_{12}* _{l}*, …, *δ _{i}*_{1}* _{Kl}*)^{T}, **d**_{l} = (*d _{2l}*, …, *d _{Kl}*)^{T}, and $\mathbf{\Sigma}{}_{l}^{\mathit{\text{Trt}}}$ is a (*K* − 1) × (*K* − 1) unstructured covariance matrix, for *l* = 1, …, *L*. Note that since *δ _{i}*_{11}* _{l}* and *d*_{1}* _{l}* are always 0, they are not included in **δ**_{il} and **d**_{l}. Here, $\mathbf{\Sigma}{}_{l}^{\mathit{\text{Trt}}}$ captures all the random contrasts' relations among treatments for each outcome *l*. We refer to this model as a contrast-based random effects model assuming independence between outcomes (CBRE1).

To allow correlations among outcomes, the distribution of **δ**_{il} in (4) needs to be respecified as

$$\mathbf{\delta}_{ik} \sim \mathrm{MVN}\left(\mathbf{d}_{k},\ \mathbf{\Sigma}{}_{k}^{\mathit{\text{Out}}}\right), \qquad (5)$$

where **δ**_{ik} = (*δ _{i}*_{1}* _{k}*_{1}, …, *δ _{i}*_{1}* _{kL}*)^{T}, **d**_{k} = (*d _{k}*_{1}, …, *d _{kL}*)^{T}, and $\mathbf{\Sigma}{}_{k}^{\mathit{\text{Out}}}$ is an *L* × *L* unstructured covariance matrix, for *k* = 2, …, *K*. In this model, we assume independent random contrasts between treatments but incorporate the correlation structure of those contrasts between outcomes through $\mathbf{\Sigma}{}_{k}^{\mathit{\text{Out}}}$. We call this model CBRE2. Alternatively, we can also use the same **∑**^{Out} for all *k*, if such an assumption is sensible.

In this approach, we always have the same length of vector **δ**_{il} or **δ**_{ik} in each study *i*, and incorporate all sources of uncertainty by considering unobserved arms as missing data to be imputed by our MCMC algorithm using Gibbs-Metropolis sampling. For example, suppose Study 1 compares treatments 1, 2, and 3, giving information about two contrasts, *δ _{i}*_{12}* _{l}* and *δ _{i}*_{13}* _{l}*, whereas Study 2 compares only treatments 1 and 2, and Study 3 includes only treatments 1 and 3. We can impute the missing contrasts *δ _{i}*_{13}* _{l}* and *δ _{i}*_{12}* _{l}* in Studies 2 and 3, respectively, by using the information about these contrasts observed in Study 1. The reference treatment effect, α* _{iBl}* in (2), is uninterpretable in the LA models, since each study can have a different baseline treatment. However, in our CB approach, α* _{iBl}* becomes meaningful because the baseline treatment is the same (*B* = 1) across all studies.

Although we only introduced the LA homogeneous random effects model, a heterogeneous random effects model can be applied with rigorous construction of covariance matrices to satisfy the positive definiteness condition under the consistency assumption.^{20} However, our approach does not lead to this same set of consistency equations; the imputation allows us to independently estimate all possible contrasts in every study.
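To illustrate the imputation idea in the simplest bivariate case, a missing contrast's full conditional given the observed contrast under a bivariate normal random-effects distribution is again normal, with mean and variance given by the usual conditioning formulas. The sketch below is ours and stands in for one Gibbs step; the report's actual sampler is the WinBUGS implementation, and all numbers shown are hypothetical.

```python
import random

def impute_missing_contrast(obs, mu_obs, mu_mis, var_obs, var_mis, cov, rng):
    """Draw a missing treatment contrast from its full conditional given the
    observed contrast, when the pair is bivariate normal with means
    (mu_obs, mu_mis), variances (var_obs, var_mis), and covariance cov.
    This mimics one Gibbs imputation step for an unobserved arm."""
    cond_mean = mu_mis + (cov / var_obs) * (obs - mu_obs)
    cond_var = var_mis - cov ** 2 / var_obs
    return rng.gauss(cond_mean, cond_var ** 0.5)

# A study observed the 1-vs-2 contrast (-1.0) but not the 1-vs-3 contrast:
rng = random.Random(1)
draw = impute_missing_contrast(obs=-1.0, mu_obs=-0.5, mu_mis=-0.8,
                               var_obs=1.0, var_mis=1.0, cov=0.6, rng=rng)
```

Information flows through the covariance term: the stronger the correlation between the contrasts, learned from studies that observe both, the more the observed contrast shifts and sharpens the imputation.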

### Arm-Based Approach

The CB method estimates the treatment contrasts; say, the mean difference between treatment *k* and the reference treatment. However, the approach's singular focus on relative treatment effects leads to several limitations. First, although we may resolve the incomparable baseline treatment problem by imputing such missing arms in our CB models, LA models still need complex model parameterizations for those studies with incomparable baseline treatments. Second, the interpretation of correlations between treatments or outcomes with respect to relative effects can be difficult. For example, we cannot directly calculate the correlation between treatments via the correlation between differences of treatment effects. Furthermore, our CB model restricts the variance of a baseline effect to always be smaller than that of other treatments. That is, the variance of the population mean of the baseline treatment, Δ* _{iBl}*, is Var(α* _{iBl}*), whereas for other treatments we have Var(α* _{iBl}*) + Var(*δ _{iBkl}*), which is never smaller than Var(α* _{iBl}*).

As an alternative, we introduce an *arm-based* (AB) approach^{10}^{, }^{21} by respecifying mean structure (2) as

$$\Delta_{ikl} = \mu_{kl} + v_{ikl}, \qquad (6)$$
_{ikl}where *μ _{kl}* is the fixed mean effect of treatment

*k*with respect to outcome

*l*and

*v*is the study-specific random effect. In this approach, we estimate the

_{ikl}*absolute*treatment effect size,

*μ*, not the relative effect size,

_{kl}*d*.

_{kl}If we begin by assuming independent random effects between outcomes, then the random effects *v _{ikl}* in (6) can be structured as (

*v*

_{i}_{1}

*,…,*

_{l}*v*)

_{iKl}*∼ MVN $(\mathbf{0},{\mathbf{\Lambda}}_{l}^{\mathit{\text{Trt}}})$ with ${\mathbf{\Lambda}}_{l}^{\mathit{\text{Trt}}}$ a*

^{T}*K*×

*K*unstructured covariance matrix having relations of random effects between treatments, for

*l*= 1, …,

*L*. We denote this model as ABRE1. Alternatively, we can allow dependence of random effects between outcomes but independence between treatments by defining (

*v*

_{ik}_{1}, …,

*v*)

_{ikL}*∼ MVN $(\mathbf{0},{\mathbf{\Lambda}}_{k}^{\mathit{\text{Out}}})$ where ${\mathbf{\Lambda}}_{k}^{\mathit{\text{Out}}}$ is a*

^{T}*L*×

*L*unstructured covariance matrix having relations between outcomes, for

*k*= 1, …,

*K*. We refer to this model as ABRE2. Again, we can also use the same

**Λ**

*for all*

^{Out}*k*when it is reasonable to do so.

The parameters in arm-based models permit more straightforward interpretation, especially in estimating a pure treatment effect. However, these models do require strong assumptions regarding the similarity and exchangeability of all populations, in order to preserve the randomization and permit meaningful clinical inference. Note that in AB models, there is no restriction on the variances of random effects because all of our covariance matrices are unstructured. That is, AB models are less constrained, but thus have a slightly larger number of parameters than CB models.

## Choice of Priors

Lu and Ades assume a noninformative prior on each parameter, in order to let the data dominate the posterior calculation. For *α _{iBl}* and *d _{kl}*, a normal distribution with mean 0 and variance 100^{2} is used, and a Uniform(0.01, 10) prior is assigned to τ in LAREhom. In all CB models, we assume α* _{iBl}* follows a $\mathrm{N}({\mathrm{\text{a}}}_{l},{\xi}_{l}^{2})$ rather than a N(0, 100^{2}) distribution, where a* _{l}* is the mean reference treatment effect, with noninformative priors for a* _{l}* and ξ* _{l}*; namely, N(0, 100^{2}) and Uniform(0.01, 10), respectively. Throughout all CB and AB models, the fixed effects (*d _{kl}* and *μ _{kl}*, respectively) follow a N(0, 100^{2}) distribution, while the inverse covariance matrices follow a Wishart(**Ω**, γ) distribution having mean γ**Ω**^{−1}, with the matrix dimension usually chosen for the degrees of freedom parameter γ because it is the smallest value that will still yield a proper prior.^{22} We can select **Ω** to be γ times a prior guess for the covariance matrix (**Ω**_{0}). Since we do not know the true covariance matrices, we begin with a vague Wishart prior having **Ω**_{0} = $\left(\begin{array}{ccc}5& \dots & 0\\ \vdots & \ddots & \vdots \\ 0& \dots & 5\end{array}\right)$, and later investigate more informative Wishart priors in a sensitivity analysis.
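The centering implied by this choice can be checked with a little arithmetic: with **Ω** = γ**Ω**_{0}, the Wishart prior mean γ**Ω**^{−1} reduces to **Ω**_{0}^{−1}, so the prior guess for the covariance matrix is exactly **Ω**_{0}. A minimal sketch for the diagonal case (the function name and dimension are ours, for illustration only):

```python
def wishart_prior_mean(omega0_diag, gamma):
    """Diagonal of the prior mean of a precision matrix under
    Sigma^{-1} ~ Wishart(Omega, gamma) with mean gamma * Omega^{-1}.
    Choosing Omega = gamma * Omega0 centers the prior precision on
    Omega0^{-1}, i.e., the prior covariance guess is Omega0."""
    omega_diag = [gamma * w for w in omega0_diag]  # Omega = gamma * Omega0
    return [gamma / w for w in omega_diag]         # diag of gamma * Omega^{-1}

# With Omega0 = diag(5, 5, 5) and gamma = 3 (a hypothetical 3x3 case),
# the prior mean precision is diag(1/5, 1/5, 1/5):
print(wishart_prior_mean([5.0, 5.0, 5.0], 3))
```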

## Decisionmaking

Regarding Bayesian model choice, we adopt the Deviance Information Criterion (DIC).^{22}^{,}^{23} DIC is a generalization of the Akaike Information Criterion for hierarchical models, and is the sum of $\bar{D}$, a measure of goodness of fit, and *P _{D}*, a measure of model complexity. For all CB and AB models we implement, we insist that only the observed data contribute to the calculation of $\bar{D}$.^{24}

We can identify the best treatments based on a reasonable measurement of the effect size.^{25} For instance, we can calculate the probability of being the best or second best treatment, which we call the “Best12” probability. Suppose Δ* _{kl}* is the marginal mean effect of outcome *l* under treatment *k*, modeled from (2) using the posterior of *d _{kl}* and the posterior mean of α* _{i}*_{1}* _{l}* across studies, instead of *δ _{iBkl}* and α* _{iBl}*, in CB models. For AB models, we can obtain Δ* _{kl}* by plugging in the posterior of *μ _{kl}* in (6), noting that the prior mean of *v _{ikl}* is 0. Denoting the data on outcome *l* by **y**_{l}, we then define the “Best12” probability under each outcome as

$$P_{kl}\{\text{Best12} \mid \mathbf{y}_{l}\} = \Pr\{\mathrm{rank}(\Delta_{kl}) = 1 \text{ or } 2 \mid \mathbf{y}_{l}\}. \qquad (7)$$
_{l}To integrate these univariate probabilities over all the outcomes and obtain one omnibus measure of “best,” we propose an overall, weighted score denoted by *S _{k}*. Suppose all measurements have the same directionality, that is, small values indicate better condition in all outcomes, our overall score is defined as

*S*= ∑

_{k}*Δ*

_{l}w_{l}*,*

_{kl}where *w _{l}* is the weight for outcome

*l*, and

*∑*

_{l}*w*= 1. This score can be used to obtain overall Best12 probabilities by replacing Δ

_{l}*by*

_{kl}*S*in (7). The weights can be chosen by physicians or public health professionals based on their preferences (say, for weighting safety versus efficacy).
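Computationally, Best12 probabilities are just rank counts over posterior draws, and the weighted score is a per-draw linear combination. The sketch below is ours, with purely illustrative numbers, and assumes smaller values indicate better condition:

```python
def best12_probability(samples):
    """Estimate Pr{treatment k is ranked 1st or 2nd} for each treatment from
    posterior samples, where samples[s][k] is the effect of treatment k at
    MCMC iteration s and smaller values indicate better condition."""
    n_trt = len(samples[0])
    hits = [0] * n_trt
    for draw in samples:
        order = sorted(range(n_trt), key=lambda k: draw[k])
        for k in order[:2]:          # best and second best in this draw
            hits[k] += 1
    return [h / len(samples) for h in hits]

def overall_score(delta, weights):
    """Weighted omnibus score S_k = sum_l w_l * Delta_kl; weights sum to 1.
    delta[k] holds treatment k's effects across the L outcomes."""
    return [sum(w * d for w, d in zip(weights, row)) for row in delta]

# Three treatments, two posterior draws (illustrative numbers only):
draws = [[0.1, 0.5, 0.3], [0.2, 0.1, 0.6]]
print(best12_probability(draws))  # -> [1.0, 0.5, 0.5]
```

Overall Best12 probabilities follow by applying `best12_probability` to the per-draw `overall_score` values instead of a single outcome's effects.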

## Simulation Study Settings

In this simulation, we generate 1,000 data pairs (*ȳ _{ik}*_{1}, *ȳ _{ik}*_{2}) and fit the LAREhom, CBRE2, and ABRE2 models to investigate how the missingness in our design affects 5 percent two-sided Type I error, power, and the rates of incorrect decisions when the correlation between outcomes is incorporated into the models (CBRE2 and ABRE2) or not (LAREhom). Figure 2 illustrates the design of the simulated complete and partially missing data. For the “complete” data, we generate artificial data from 40 studies having two treatments and two outcomes, featuring moderate positive correlation between outcomes but independence between arms. In panel (b), we drop 20 studies from the first outcome; that is, we mimic our OA data, in which only half the studies report the disability outcome. For simplicity, we assume that every study has a sample size of 100 and a standard deviation of 2 for every arm.

To sample the partially missing data, we compare results under the missing completely at random (MCAR), missing at random (MAR), and missing not at random (MNAR) mechanisms. The MCAR mechanism assumes that the missingness does not depend on the data, so we choose 20 studies at random and make *ȳ _{i}*_{11} and *ȳ _{i}*_{21} missing for those studies. The MAR mechanism assumes that the missingness depends only on the observed data, but not on the missing data, whereas MNAR missingness can depend on both observed and unobserved data. To generate partially missing data under the MAR and MNAR mechanisms, we first calculate the ‘probability of missing’ (*p _{i,mis}*) for study *i* by applying a logit model with the observed or missing data as covariates. Here *ȳ _{i}*_{12} and *ȳ _{i}*_{22} are considered observed data, and *ȳ _{i}*_{11} and *ȳ _{i}*_{21} are missing data, since they are not fully observed in our design. We use the following two logit models:

$$\text{MAR:}\quad \mathrm{logit}(p_{i,\mathit{\text{mis}}}) = 2 + \bar{y}_{i12} - \bar{y}_{i22},$$

$$\text{MNAR:}\quad \mathrm{logit}(p_{i,\mathit{\text{mis}}}) = -4 - \bar{y}_{i11} + \bar{y}_{i22}.$$

The coefficients are selected to yield a mean *p _{i,mis}* of about 30 to 40 percent. Given the *p _{i,mis}*, we generate the missingness indicator vector repeatedly until exactly 20 studies are selected as missing.
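The missingness generators above can be sketched directly; the code below is ours (function names are hypothetical), assuming the linear predictors shown feed a standard logit link and mirroring the repeat-until-20 selection scheme:

```python
import math
import random

def missing_prob_mar(ybar_i12, ybar_i22):
    """MAR logit model: logit(p_mis) = 2 + ybar_i12 - ybar_i22
    (depends only on outcomes that are always observed)."""
    x = 2.0 + ybar_i12 - ybar_i22
    return 1.0 / (1.0 + math.exp(-x))

def missing_prob_mnar(ybar_i11, ybar_i22):
    """MNAR logit model: logit(p_mis) = -4 - ybar_i11 + ybar_i22
    (depends on ybar_i11, which is itself subject to missingness)."""
    x = -4.0 - ybar_i11 + ybar_i22
    return 1.0 / (1.0 + math.exp(-x))

def draw_missing_indicators(probs, n_missing, rng):
    """Repeatedly sample Bernoulli indicators until exactly n_missing
    studies are flagged as missing."""
    while True:
        flags = [rng.random() < p for p in probs]
        if sum(flags) == n_missing:
            return flags
```

A usage example: `draw_missing_indicators([0.35] * 40, 20, random.Random(0))` returns a 40-vector flagging exactly the 20 studies whose first-outcome arms are then deleted.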

For the true parameters, $\left({\mu}_{11}^{*},{\mu}_{21}^{*},{\mu}_{12}^{*},{\mu}_{22}^{*}\right)=\left(0,0,0,3\right)$ is chosen in (6), with the superscript * indicating the truth, yielding ${d}_{21}^{*}=0$ and ${d}_{22}^{*}=3$ in the LAREhom and CBRE models. We calculate Type I error in terms of the parameter *d*_{21} in the three models. To estimate power at two particular alternatives, we select $\left({\mu}_{11}^{*},{\mu}_{21}^{*},{\mu}_{12}^{*},{\mu}_{22}^{*}\right)=\left(0,1,0,3\right)\phantom{\rule{0.2em}{0ex}}\text{and}\phantom{\rule{0.2em}{0ex}}(0,2,0,3)$, giving ${d}_{21}^{*}=1\phantom{\rule{0.2em}{0ex}}\text{and}\phantom{\rule{0.2em}{0ex}}2$, respectively, which we denote as “Power1” and “Power2.” We also calculate the rate of incorrectly selecting the best treatment, given as $\text{Pr}(\widehat{{\mu}_{11}}>\widehat{{\mu}_{21}})$, under the Power1 and Power2 scenarios, because the truth is ${\mu}_{11}^{*}<{\mu}_{21}^{*}$. This rate should be around 0.5 under the Type I error setting.

For the random effect parameters in (6), we generate them from
$\left(\begin{array}{l}{\nu}_{i11}^{\mathit{\text{AB}}}\\ {\nu}_{i21}^{\mathit{\text{AB}}}\end{array}\right)\sim \mathit{\text{MVN}}\left(\left(\begin{array}{l}0\\ 0\end{array}\right),\left(\begin{array}{cc}1& {\rho}_{\mathit{\text{AB}}}^{\ast}\\ {\rho}_{\mathit{\text{AB}}}^{\ast}& 1\end{array}\right)\right)$ and
$\left(\begin{array}{l}{\nu}_{i12}^{\mathit{\text{AB}}}\\ {\nu}_{i22}^{\mathit{\text{AB}}}\end{array}\right)\sim \mathit{\text{MVN}}\left(\left(\begin{array}{l}0\\ 0\end{array}\right),\left(\begin{array}{cc}3& 3{\rho}_{\mathit{\text{AB}}}^{\ast}\\ 3{\rho}_{\mathit{\text{AB}}}^{\ast}& 3\end{array}\right)\right)$, which on the CB scale corresponds to
$\left(\begin{array}{l}{\nu}_{i21}^{\mathit{\text{CB}}}\\ {\nu}_{i22}^{\mathit{\text{CB}}}\end{array}\right)\sim \mathit{\text{MVN}}\left(\left(\begin{array}{l}0\\ 0\end{array}\right),\left(\begin{array}{cc}2& 3{\rho}_{\mathit{\text{AB}}}^{\ast}\\ 3{\rho}_{\mathit{\text{AB}}}^{\ast}& 2\end{array}\right)\right)$. Here, the superscripts AB and CB on *v _{ikl}* and *ρ*^{*} indicate the model used. From the covariance matrix of random effects in the CB model, we can easily calculate the true correlation in the CB model, ${\rho}_{\mathit{\text{CB}}}^{\ast}=\frac{3}{2}\phantom{\rule{0.2em}{0ex}}{\rho}_{\mathit{\text{AB}}}^{\ast}$. To ensure a positive definite covariance matrix for the random effects in the CB model, ${\rho}_{\mathit{\text{AB}}}^{\ast}$ should therefore lie between $-\frac{2}{3}$ and $\frac{2}{3}$. We set ${\rho}_{\mathit{\text{AB}}}^{\ast}=0.6\phantom{\rule{0.2em}{0ex}}\text{and}\phantom{\rule{0.2em}{0ex}}0.0$, which induces ${\rho}_{\mathit{\text{CB}}}^{\ast}=0.9\phantom{\rule{0.2em}{0ex}}\text{and}\phantom{\rule{0.2em}{0ex}}0.0$, respectively.
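The induced CB correlation and the positive definiteness bound can be checked numerically from the CB covariance matrix above (off-diagonal 3ρ*_{AB} divided by variance 2). This one-line check is ours, for illustration:

```python
def induced_cb_correlation(rho_ab):
    """CB-scale correlation implied by the CB covariance matrix above:
    covariance 3*rho_AB over variance 2 gives rho_CB = 1.5 * rho_AB.
    Values of rho_AB outside [-2/3, 2/3] would push |rho_CB| past 1,
    i.e., a non-positive-definite CB covariance matrix."""
    if not -2.0 / 3.0 <= rho_ab <= 2.0 / 3.0:
        raise ValueError("rho_AB outside [-2/3, 2/3]: CB covariance "
                         "matrix not positive definite")
    return 1.5 * rho_ab
```

For the values used in the simulation, `induced_cb_correlation(0.6)` gives 0.9 and `induced_cb_correlation(0.0)` gives 0.0.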

For the OA data analysis, WinBUGS is used to generate two parallel chains of 50,000 MCMC samples after a 50,000-sample burn-in. To check MCMC convergence, we used standard diagnostics, including trace plots and lag 1 sample autocorrelations. The WinBUGS codes are now publicly available at www.biostat.umn.edu/~brad/software.html.

We used the R2WinBUGS package^{26} in R to perform our simulation studies, calling WinBUGS^{27} 1,000 times from R, once for each simulated data set. In each case, we obtain 20,000 samples after a 20,000-sample burn-in, collect the medians of the parameters across the 1,000 simulated datasets, and then estimate Type I error and power.
