

# PERFORMANCE GUARANTEES FOR INDIVIDUALIZED TREATMENT RULES

^{*}Supported by NIH grants R01 MH080015 and P50 DA10075.

## Abstract

Because many illnesses show heterogeneous response to treatment, there is increasing interest in individualizing treatment to patients [11]. An *individualized treatment rule* is a decision rule that recommends treatment according to patient characteristics. We consider the use of clinical trial data in the construction of an individualized treatment rule leading to highest mean response. This is a difficult computational problem because the objective function is the expectation of a weighted indicator function that is non-concave in the parameters. Furthermore there are frequently many pretreatment variables that may or may not be useful in constructing an optimal individualized treatment rule yet cost and interpretability considerations imply that only a few variables should be used by the individualized treatment rule. To address these challenges we consider estimation based on *l*_{1} penalized least squares. This approach is justified via a finite sample upper bound on the difference between the mean response due to the estimated individualized treatment rule and the mean response due to the optimal individualized treatment rule.

**Keywords and phrases:** decision making, *l*_{1} penalized least squares, Value

## 1. Introduction

Many illnesses show heterogeneous response to treatment. For example, a study on schizophrenia [12] found that patients who take the same antipsychotic (olanzapine) may have very different responses. Some may have to discontinue the treatment due to serious adverse events and/or acutely worsened symptoms, while others may experience few if any adverse events and have improved clinical outcomes. Results of this type have motivated researchers to advocate the individualization of treatment to each patient [16, 24, 11]. One step in this direction is to estimate each patient’s risk level and then match treatment to risk category [5, 6]. However, this approach is best used to decide whether to treat; otherwise it assumes the knowledge of the best treatment for each risk category. Alternately, there is an abundance of literature focusing on predicting each patient’s prognosis under a particular treatment [10, 28]. Thus an obvious way to individualize treatment is to recommend the treatment achieving the best predicted prognosis for that patient. In general the goal is to use data to construct individualized treatment rules that, if implemented in future, will optimize the mean response.

Consider data from a single-stage randomized trial involving several active treatments. A natural first procedure for constructing the optimal individualized treatment rule is to maximize an empirical version of the mean response over a class of treatment rules (assuming larger responses are preferred). As will be seen, this maximization is computationally difficult because the mean response of a treatment rule is the expectation of a weighted indicator that is discontinuous and non-concave in the parameters. To address this challenge we make a substitution. That is, instead of directly maximizing the empirical mean response to estimate the treatment rule, we use a two-step procedure that first estimates a conditional mean and then derives the estimated treatment rule from this estimated conditional mean. As will be seen in Section 3, even if the optimal treatment rule is contained in the space of treatment rules considered by the substitute two-step procedure, the estimator derived from the two-step procedure may not be consistent. However, if the conditional mean is modeled correctly, then the two-step procedure consistently estimates the optimal individualized treatment rule. This motivates consideration of rich conditional mean models with many unknown parameters. Furthermore there are frequently many pretreatment variables that may or may not be useful in constructing an optimal individualized treatment rule, yet cost and interpretability considerations imply that fewer rather than more variables should be used by the treatment rule. This consideration motivates the use of *l*_{1} penalized least squares (*l*_{1}-PLS).

We propose to estimate an optimal individualized treatment rule using a two-step procedure that first estimates the conditional mean response using *l*_{1}-PLS with a rich linear model and second derives the estimated treatment rule from the estimated conditional mean. For brevity, throughout, we call this two-step procedure the *l*_{1}-PLS method. We derive several finite sample upper bounds on the difference between the mean response to the optimal treatment rule and the mean response to the estimated treatment rule. All of the upper bounds hold even if our linear model for the conditional mean response is incorrect, and to our knowledge they are, up to constants, the best available. We use the upper bounds in Section 3 to illuminate the potential mismatch between using least squares in the two-step procedure and the goal of maximizing the mean response. The upper bounds in Section 4.1 involve a minimized sum of the approximation error and estimation error; both errors result from the estimation of the conditional mean response. We shall see that *l*_{1}-PLS estimates a linear model that minimizes this approximation plus estimation error sum among a set of suitably sparse linear models.

If the part of the model for the conditional mean involving the treatment effect is correct, then the upper bounds imply that, although a surrogate two-step procedure is used, the estimated treatment rule is consistent. The upper bounds provide a convergence rate as well. Furthermore in this setting the upper bounds can be used to inform how to choose the tuning parameter involved in the *l*_{1}-penalty to achieve the best rate of convergence. As a by-product, this paper also contributes to existing literature on *l*_{1}-PLS by providing a finite sample prediction error bound for the *l*_{1}-PLS estimator in the random design setting without assuming the model class contains or is close to the true model.

The paper is organized as follows. In Section 2, we formulate the decision making problem. In Section 3, for any given decision, e.g. individualized treatment rule, we relate the reduction in mean response to the excess prediction error. In Section 4, we estimate an optimal individualized treatment rule via *l*_{1}-PLS and provide a finite sample upper bound on the maximal reduction in optimal mean response achieved by the estimated rule. In Section 5, we consider a data dependent tuning parameter selection criterion. This method is evaluated using simulation studies and illustrated with data from the Nefazodone-CBASP trial [13]. Discussions and future work are presented in Section 6.

## 2. Individualized treatment rules

We use upper case letters to denote random variables and lower case letters to denote values of the random variables. Consider data from a randomized trial. On each subject we have the pretreatment variables *X* ∈ 𝒳, treatment *A* taking values in a finite, discrete treatment space 𝒜, and a real-valued response *R* (assuming large values are desirable). An *individualized treatment rule* (ITR) *d* is a deterministic decision rule from 𝒳 into the treatment space 𝒜.

Denote the distribution of (*X, A, R*) by *P*. This is the distribution of the clinical trial data; in particular, denote the known randomization distribution of *A* given *X* by *p*(·|*X*). The likelihood of (*X, A, R*) under *P* is then *f*_{0}(*x*)*p*(*a*|*x*)*f*_{1}(*r*|*x, a*), where *f*_{0} is the unknown density of *X* and *f*_{1} is the unknown density of *R* conditional on (*X, A*). Denote expectations with respect to the distribution *P* by an *E*. For any ITR *d* : 𝒳 → 𝒜, let *P*^{d} denote the distribution of (*X, A, R*) in which *d* is used to assign treatments. Then the likelihood of (*X, A, R*) under *P*^{d} is *f*_{0}(*x*)1_{a=d(x)}*f*_{1}(*r*|*x, a*). Denote expectations with respect to the distribution *P*^{d} by an *E*^{d}. The *Value* of *d* is defined as *V*(*d*) = *E*^{d}(*R*). An *optimal ITR*, *d*_{0}, is a rule that has the maximal Value, i.e.

$${d}_{0}\in \underset{d}{arg\,max}\,V(d),$$

where the argmax is over all possible decision rules. The Value of *d*_{0}, *V*(*d*_{0}), is the *optimal Value*.

Assume *P*[*p*(*a*|*X*) > 0] = 1 for all *a* ∈ 𝒜 (i.e. all treatments in 𝒜 are possible for all values of *X* a.s.). Then *P*^{d} is absolutely continuous with respect to *P* and a version of the Radon-Nikodym derivative is *dP*^{d}/*dP* = 1_{a=d(x)}/*p*(*a*|*x*). Thus the Value of *d* satisfies

$$V(d)={E}^{d}(R)=E\left[\frac{{1}_{A=d(X)}}{p(A\mid X)}R\right].$$(2.1)

Our goal is to estimate *d*_{0}, i.e. the ITR that maximizes (2.1), using data from distribution *P*. When *X* is low dimensional and the best rule within a simple class of ITRs is desired, empirical versions of the Value can be used to construct estimators [21, 27]. However if the best rule within a larger class of ITRs is of interest, these approaches are no longer feasible.
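
As a concrete illustration, an empirical (inverse probability weighted) version of the Value in (2.1) can be computed directly from trial data. The generative model and the two candidate rules below are hypothetical, not from the paper; this is a minimal sketch:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical randomized trial: X ~ U[-1, 1], A in {-1, 1} with known
# randomization probability p(a|x) = 1/2, and mean response
# Q0(X, A) = X * A, so the rule d(x) = sign(x) is optimal.
n = 100_000
X = rng.uniform(-1, 1, n)
A = rng.choice([-1, 1], n)
p = np.full(n, 0.5)
R = X * A + rng.normal(0, 1, n)

def value_ipw(d, X, A, R, p):
    """Empirical Value of rule d: E_n[ 1{A = d(X)} R / p(A|X) ]."""
    return np.mean((A == d(X)) * R / p)

v_opt = value_ipw(lambda x: np.sign(x), X, A, R, p)         # analytically E|X| = 1/2
v_fixed = value_ipw(lambda x: np.ones_like(x), X, A, R, p)  # analytically E[X] = 0
print(v_opt, v_fixed)
```

Maximizing this empirical Value directly over a parameterized class of rules is the computationally hard step discussed above: the indicator 1_{A=d(X)} is discontinuous and non-concave in the parameters of *d*.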

Define *Q*_{0}(*X*, *A*) ≜ *E*(*R*|*X*, *A*) (*Q*_{0}(*x*, *a*) is sometimes called the "Quality" of treatment *a* at observation *x*). It follows from (2.1) that for any ITR *d*,

$$V(d)=E\left[{Q}_{0}(X,d(X))\right].$$

Thus *V*(*d*_{0}) = *E*[*Q*_{0}(*X, d*_{0}(*X*))] ≤ *E*[max_{a∈𝒜} *Q*_{0}(*X, a*)]. On the other hand, by the definition of *d*_{0},

$$V({d}_{0})\ge E\left[\underset{a\in \mathcal{A}}{max}{Q}_{0}(X,a)\right].$$

Hence an optimal ITR satisfies *d*_{0}(*X*) ∈ arg max_{a∈𝒜} *Q*_{0}(*X, a*) a.s.

## 3. Relating the reduction in Value to excess prediction error

The above argument indicates that the estimated ITR will be of high quality (i.e. have high Value) if we can estimate *Q*_{0} accurately. In this section, we justify this by providing a quantitative relationship between the Value and the prediction error.

Because 𝒜 is a finite, discrete treatment space, given any ITR *d*, there exists a square integrable function *Q* : 𝒳 × 𝒜 → ℝ for which *d*(*X*) ∈ arg max_{a} *Q*(*X, a*) a.s. Let *L*(*Q*) ≜ *E*[*R* − *Q*(*X*, *A*)]^{2} denote the prediction error of *Q* (also called the mean quadratic loss). Suppose that *Q*_{0} is square integrable and that the randomization probability satisfies *p*(*a*|*x*) ≥ *S*^{−1} for an *S* > 0 and all (*x, a*) pairs. Murphy [23] showed that

$$V({d}_{0})-V(d)\le 2{S}^{1/2}{\left[L(Q)-L({Q}_{0})\right]}^{1/2}.$$(3.1)

Intuitively, this upper bound means that if the excess prediction error of *Q* (i.e. *E*(*R* − *Q*)^{2} − *E*(*R* − *Q*_{0})^{2}) is small, then the reduction in Value of the associated ITR *d* (i.e. *V*(*d*_{0}) − *V*(*d*)) is small. Furthermore the upper bound provides a rate of convergence for an estimated ITR. For example, suppose *Q*_{0} is linear, that is, *Q*_{0} = Φ(*X, A*)*θ*_{0} for a given vector-valued basis function Φ on 𝒳 × 𝒜 and an unknown parameter *θ*_{0}. Suppose we use a correct linear model for *Q*_{0} (here "linear" means linear in the parameters), say the model 𝒬 = {Φ(*X, A*)*θ* : *θ* ∈ ℝ^{dim(Φ)}} or a linear model containing 𝒬, with the dimension of the parameter fixed in *n*. If we estimate *θ*_{0} by least squares and denote the estimator by ${\hat{\mathit{\theta}}}_{n}$, then the prediction error of $\hat{Q}=\mathrm{\Phi}{\hat{\mathit{\theta}}}_{n}$ converges to *L*(*Q*_{0}) at rate 1/*n* under mild regularity conditions. This together with inequality (3.1) implies that the Value obtained by the estimated ITR, $\hat{d}(X)\in arg\,{max}_{a}\hat{Q}(X,a)$, will converge to the optimal Value at rate at least $1/\sqrt{n}$.

In the following theorem, we improve this upper bound in two aspects. First, we show that an upper bound with exponent larger than 1/2 can be obtained under a margin condition, which implies a faster rate of convergence. Second, it turns out that the upper bound need only depend on one term in the function *Q*; we call this the treatment effect term, *T*. For any square integrable *Q*, the associated treatment effect term is defined as *T*(*X*, *A*) ≜ *Q*(*X*, *A*) − *E*[*Q*(*X*, *A*)|*X*]. Note that *d*(*X*) ∈ arg max_{a} *T*(*X, a*) = arg max_{a} *Q*(*X, a*) a.s. Similarly, the true treatment effect term is given by

$${T}_{0}(X,A)\triangleq {Q}_{0}(X,A)-E\left[{Q}_{0}(X,A)\mid X\right].$$

*T*_{0}(*x, a*) is the centered effect of treatment *A* = *a* at observation *X* = *x*; *d*_{0}(*X*) ∈ arg max_{a} *T*_{0}(*X, a*).

#### Theorem 3.1

Suppose p(a|x) ≥ S^{−1} for a positive constant S for all (x, a) pairs. Assume there exist constants C > 0 and α ≥ 0 such that

$$\mathbf{P}\left(\underset{a\in \mathcal{A}}{max}{T}_{0}(X,a)-\underset{a\in \mathcal{A}\backslash arg{max}_{a\in \mathcal{A}}{T}_{0}(X,a)}{max}{T}_{0}(X,a)\le \epsilon \right)\le C{\epsilon}^{\alpha}$$(3.3)

for all positive ε. Then for any ITR d : 𝒳 → 𝒜 and square integrable function Q : 𝒳 × 𝒜 → ℝ such that d(X) ∈ arg max_{a∈𝒜} Q(X, a) a.s., we have

$$V({d}_{0})-V(d)\le {C}^{\prime}{\left[L(Q)-L({Q}_{0})\right]}^{(1+\alpha )/(2+\alpha )}$$(3.4)

and

$$V({d}_{0})-V(d)\le {C}^{\prime}{\left[E{(T-{T}_{0})}^{2}\right]}^{(1+\alpha )/(2+\alpha )},$$(3.5)

where C′ = (2^{2+3α}S^{1+α}C)^{1/(2+α)}.

The proof of Theorem 3.1 is in Appendix A.1.

##### Remarks

- We set the second maximum in (3.3) to −∞ if for an *x*, *T*_{0}(*x, a*) is constant in *a* and thus the set 𝒜∖arg max_{a∈𝒜} *T*_{0}(*x, a*) = ∅.
- Condition (3.3) is similar to the margin condition in classification [25, 18, 32]; in classification this assumption is often used to obtain sharp upper bounds on the excess 0 − 1 risk in terms of other surrogate risks [1]. Here max_{a∈𝒜} *T*_{0}(*x, a*) − max_{a∈𝒜∖arg max *T*_{0}(*x, a*)} *T*_{0}(*x, a*) can be viewed as the "margin" of *T*_{0} at observation *X* = *x*. It measures the difference in mean responses between the optimal treatment(s) and the best suboptimal treatment(s) at *x*. For example, suppose *X* ~ *U*[−1, 1], *P*(*A* = 1|*X*) = *P*(*A* = −1|*X*) = 1/2 and *T*_{0}(*X, A*) = *XA*. Then the margin condition holds with *C* = 1/2 and *α* = 1. Note the margin condition does not exclude multiple optimal treatments for any observation *x*. However, when *α* > 0, it does exclude suboptimal treatments that yield a conditional mean response very close to the largest conditional mean response for a set of *x* with nonzero probability.
- The larger the *α*, the larger the exponent (1 + *α*)/(2 + *α*) and thus the stronger the upper bounds in (3.4) and (3.5). However the margin condition is unlikely to hold for all *ε* if *α* is very large. An alternate margin condition and upper bound are as follows. *Suppose p*(*a*|*x*) ≥ *S*^{−1} *for all* (*x, a*) *pairs. Assume there is an ε* > 0 *such that* $$\mathbf{P}\left(\underset{a\in \mathcal{A}}{max}{T}_{0}(X,a)-\underset{a\in \mathcal{A}\backslash arg{max}_{a\in \mathcal{A}}{T}_{0}(X,a)}{max}{T}_{0}(X,a)<\epsilon \right)=0.$$(3.6) *Then V*(*d*_{0}) − *V*(*d*) ≤ 4*S*[*L*(*Q*) − *L*(*Q*_{0})]/*ε and V*(*d*_{0}) − *V*(*d*) ≤ 4*SE*(*T* − *T*_{0})^{2}/*ε*. The proof is essentially the same as that of Theorem 3.1 and is omitted. Condition (3.6) means that *T*_{0} evaluated at the optimal treatment(s) minus *T*_{0} evaluated at the best suboptimal treatment(s) is bounded below by a positive constant for almost all *X* observations. If *X* assumes only a finite number of values, then this condition always holds, because we can take *ε* to be the smallest difference in *T*_{0} when evaluated at the optimal treatment(s) and the suboptimal treatment(s) (note that if *T*_{0}(*x, a*) is constant in *a* for some observation *X* = *x*, then all treatments are optimal for that observation).
- Inequality (3.5) cannot be improved in the sense that choosing *T* = *T*_{0} yields zero on both sides of the inequality. Moreover an inequality in the opposite direction is not possible, since each ITR is associated with many non-trivial T-functions. For example, suppose *X* ~ *U*[−1, 1], *P*(*A* = 1|*X*) = *P*(*A* = −1|*X*) = 1/2 and *T*_{0}(*X, A*) = (*X* − 1/3)^{2}*A*. The optimal ITR is *d*_{0}(*X*) = 1 a.s. Consider *T*(*X, A*) = *θA*. Then maximizing *T*(*X, A*) over *a* yields the optimal ITR as long as *θ* > 0. This means that the left hand side (LHS) of (3.5) is zero, while the right hand side (RHS) is always positive no matter what value *θ* takes.
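
The *XA* example in the remarks above is easy to check numerically: the margin at *x* is 2|*x*|, and *P*(2|*X*| ≤ *ε*) = min(*ε*/2, 1) ≤ *ε*/2, matching *C* = 1/2 and *α* = 1. A quick Monte Carlo sketch (illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, 200_000)

# For T0(X, A) = X * A with A in {-1, 1}, the margin at x is
# max_a T0(x, a) - (best suboptimal value) = |x| - (-|x|) = 2|x|.
margin = 2 * np.abs(X)

# Check the margin condition P(margin <= eps) <= C * eps^alpha
# with C = 1/2 and alpha = 1, for several eps.
for eps in [0.1, 0.5, 1.0, 2.0]:
    prob = np.mean(margin <= eps)
    assert prob <= 0.5 * eps + 0.01  # small Monte Carlo slack
    print(eps, prob)
```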

Theorem 3.1 supports the approach of minimizing the estimated prediction error to estimate *Q*_{0} or *T*_{0} and then maximizing this estimator over *a* to obtain an ITR. It is natural to expect that even when the approximation space used in estimating *Q*_{0} or *T*_{0} does not contain the truth, this approach will provide the best (highest Value) of the considered ITRs. Unfortunately this does not occur, due to the mismatch between the loss functions (weighted 0–1 loss and quadratic loss). This mismatch is indicated by the last remark above. More precisely, note that the approximation space, say 𝒬 for *Q*_{0}, places implicit restrictions on the class of ITRs that will be considered. In effect the class of ITRs is 𝒟 = {*d*(*X*) ∈ arg max_{a} *Q*(*X, a*) : *Q* ∈ 𝒬}. It turns out that minimizing the prediction error may not result in the ITR in 𝒟 that maximizes the Value. This occurs when the approximation space does not provide a treatment effect term close to the treatment effect term in *Q*_{0}. In the following toy example, the optimal ITR *d*_{0} belongs to 𝒟, yet the prediction error minimizer over 𝒬 does not yield *d*_{0}.

### A toy example

Suppose *X* is uniformly distributed on [−1, 1], *A* is binary {−1, 1} with probability 1/2 each and is independent of *X*, and *R* is normally distributed with mean *Q*_{0}(*X, A*) = (*X* − 1/3)^{2}*A* and variance 1. It is easy to see that the optimal ITR satisfies *d*_{0}(*X*) = 1 a.s. and *V*(*d*_{0}) = 4/9. Consider the approximation space 𝒬 = {*Q*(*X, A*; *θ*) = (1, *X, A, XA*)*θ* : *θ* ∈ ℝ^{4}} for *Q*_{0}. Thus the space of ITRs under consideration is 𝒟 = {*d*(*X*) = *sign*(*θ*_{3} + *θ*_{4}*X*) : *θ*_{3}, *θ*_{4} ∈ ℝ}. Note that *d*_{0} ∈ 𝒟 since *d*_{0}(*X*) can be written as *sign*(*θ*_{3} + *θ*_{4}*X*) for any *θ*_{3} > 0 and *θ*_{4} = 0; *d*_{0} is the best treatment rule in 𝒟. However, minimizing the prediction error *L*(*Q*) over 𝒬 yields *Q*^{*}(*X, A*) = (4/9 − 2*X*/3)*A*. The ITR associated with *Q*^{*} is *d*^{*}(*X*) = arg max_{a∈{−1,1}} *Q*^{*}(*X, a*) = *sign*(2/3 − *X*), which has lower Value than *d*_{0} ($V({d}^{\ast})=E\left[{\scriptstyle \frac{{1}_{A(2/3-X)>0}R}{1/2}}\right]=29/81<V({d}_{0})$).
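
The mismatch is easy to reproduce numerically: an ordinary least squares fit of the working model recovers coefficients close to the population minimizer *Q*^{*}(*X, A*) = (4/9 − 2*X*/3)*A*, and the implied rule *sign*(2/3 − *X*) has true Value near 29/81 ≈ 0.358 instead of *V*(*d*_{0}) = 4/9 ≈ 0.444. A minimal simulation sketch (not from the paper):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200_000
X = rng.uniform(-1, 1, n)
A = rng.choice([-1, 1], n)
R = (X - 1/3) ** 2 * A + rng.normal(0, 1, n)

# Least squares over the working model Q(X, A; theta) = (1, X, A, XA) theta.
Phi = np.column_stack([np.ones(n), X, A, X * A])
theta, *_ = np.linalg.lstsq(Phi, R, rcond=None)
print(theta)  # population minimizer is (0, 0, 4/9, -2/3)

def true_value(d):
    # V(d) = E[(X - 1/3)^2 * d(X)] for X ~ U[-1, 1], via a fine grid.
    x = np.linspace(-1.0, 1.0, 400_001)
    return np.mean((x - 1/3) ** 2 * d(x))

# Rule implied by the fit: d*(X) = sign(theta_3 + theta_4 * X).
v_star = true_value(lambda x: np.sign(theta[2] + theta[3] * x))
print(v_star)  # close to 29/81, below V(d0) = 4/9
```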

## 4. Estimation via *l*_{1}-penalized least squares

To deal with the mismatch between minimizing the prediction error and maximizing the Value discussed in the prior section, we consider a large linear approximation space for *Q*_{0}. Since overfitting is likely (due to the potentially large number of pretreatment variables and/or large approximation space for *Q*_{0}) we use penalized least squares (see Section S.1 of the supplementary material for further discussion of the overfitting problem). Furthermore we use *l*_{1} penalized least squares (*l*_{1}-PLS, [31]) as the *l*_{1} penalty does some variable selection and as a result will lead to ITRs that are cheaper to implement (fewer variables to collect per patient) and easier to interpret. See Section 6 for the discussion of other potential penalization methods.

Let ${\{({X}_{i},{A}_{i},{R}_{i})\}}_{i=1}^{n}$ represent i.i.d. observations on *n* subjects in a randomized trial. For convenience, we use *E*_{n} to denote the associated empirical expectation (i.e. ${E}_{n}f={\sum}_{i=1}^{n}f({X}_{i},{A}_{i},{R}_{i})/n$ for any real-valued function *f* on 𝒳 × 𝒜 × ℝ). Let 𝒬 := {*Q*(*X, A*; *θ*) = Φ(*X, A*)*θ*, *θ* ∈ ℝ^{J}} be the approximation space for *Q*_{0}, where Φ(*X, A*) = (φ_{1}(*X, A*), …, φ_{J}(*X, A*)) is a 1 by *J* vector composed of basis functions on 𝒳 × 𝒜, *θ* is a *J* by 1 parameter vector, and *J* is the number of basis functions (for clarity here *J* will be fixed in *n*; see Appendix A.2 for results with *J* increasing as *n* increases). The *l*_{1}-PLS estimator of *θ* is

$${\hat{\mathit{\theta}}}_{n}\in arg\underset{\mathit{\theta}\in {\mathbb{R}}^{J}}{min}\left[{E}_{n}{\left(R-\mathrm{\Phi}(X,A)\mathit{\theta}\right)}^{2}+{\lambda}_{n}\sum _{j=1}^{J}{\hat{\sigma}}_{j}\mid {\theta}_{j}\mid \right],$$(4.1)

where ${\hat{\sigma}}_{j}={[{E}_{n}{\phi}_{j}{(X,A)}^{2}]}^{1/2}$, *θ*_{j} is the *j*^{th} component of *θ* and *λ*_{n} is a tuning parameter that controls the amount of penalization. The weights ${\hat{\sigma}}_{j}$'s are used to balance the scale of different basis functions; these weights were used in Bunea et al. [4] and van de Geer [33]. In some situations, it is natural to penalize only a subset of coefficients and/or use different weights in the penalty; see Section S.2 of the supplementary material for required modifications. The resulting estimated ITR satisfies

$${\hat{d}}_{n}(X)\in arg\underset{a\in \mathcal{A}}{max}\mathrm{\Phi}(X,a){\hat{\mathit{\theta}}}_{n}.$$(4.2)
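
The two-step *l*_{1}-PLS procedure can be sketched as follows. Because the penalty weights each |θ_j| by σ̂_j = (E_n φ_j²)^{1/2}, rescaling each basis column by σ̂_j reduces the criterion to a plain lasso, which a simple coordinate descent solves. The solver and the simulated trial below are an illustrative sketch, not the authors' implementation:

```python
import numpy as np

def soft(z, t):
    """Soft-thresholding operator."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def l1_pls(Phi, R, lam, n_iter=200):
    """Minimize E_n(R - Phi theta)^2 + lam * sum_j sigma_j |theta_j|
    by coordinate descent on columns rescaled to unit empirical norm."""
    n, J = Phi.shape
    sigma = np.sqrt(np.mean(Phi ** 2, axis=0))
    Z = Phi / sigma                  # now E_n Z_j^2 = 1 for every column
    b = np.zeros(J)
    resid = R.astype(float).copy()   # invariant: resid = R - Z b
    for _ in range(n_iter):
        for j in range(J):
            resid += Z[:, j] * b[j]
            b[j] = soft(np.mean(Z[:, j] * resid), lam / 2)
            resid -= Z[:, j] * b[j]
    return b / sigma                 # back to the original scale

# Illustrative trial: 10 pretreatment variables, but only A and X1 * A
# carry the treatment effect, T0(X, A) = (0.5 + X1) * A.
rng = np.random.default_rng(3)
n, p = 2000, 10
X = rng.uniform(-1, 1, (n, p))
A = rng.choice([-1, 1], n)
R = 1.0 + X[:, 0] + (0.5 + X[:, 0]) * A + rng.normal(0, 1, n)

Phi = np.column_stack([np.ones(n), X, A, X * A[:, None]])
theta = l1_pls(Phi, R, lam=0.1)

# Step two: with A in {-1, 1}, the estimated rule picks the sign of the
# A-part of the fitted model, d_n(x) = sign(theta_A + x' theta_XA).
d_hat = np.sign(theta[p + 1] + X @ theta[p + 2:])
print(np.mean(d_hat == np.sign(0.5 + X[:, 0])))  # agreement with d0
```

The lasso shrinkage makes the treatment effect coefficients slightly biased toward zero, but the sign of the fitted effect, and hence the rule, is largely unaffected.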

### 4.1. Performance guarantee for the *l*_{1}-PLS

In this section we provide finite sample upper bounds on the difference between the optimal Value and the Value obtained by the *l*_{1}-PLS estimator in terms of the prediction errors resulting from the estimation of *Q*_{0} and *T*_{0}. These upper bounds guarantee that if *Q*_{0} (or *T*_{0}) is consistently estimated, the estimator of *d*_{0} will be consistent and will inherit a rate of convergence from the rate of convergence of the estimator of *Q*_{0} (or *T*_{0}). Perhaps more importantly, the finite sample upper bounds provided below do *not* require the assumption that either *Q*_{0} or *T*_{0} is consistently estimated. Thus each upper bound includes approximation error as well as estimation error. The estimation error decreases with decreasing model sparsity and increasing sample size. An "oracle" model for *Q*_{0} (or *T*_{0}) minimizes the sum of these two errors among suitably sparse linear models (see remark 2 after Theorem 4.3 for a precise definition of the oracle model). In finite samples, the upper bounds imply that ${\hat{d}}_{n}$, the ITR produced by the *l*_{1}-PLS method, will have Value roughly as if the *l*_{1}-PLS method detects the sparsity of the oracle model and then estimates from the oracle model using ordinary least squares (see remark 3 below).

Define the prediction error minimizer ${\mathit{\theta}}^{\ast}\in {\mathbb{R}}^{J}$ by

$${\mathit{\theta}}^{\ast}\in arg\underset{\mathit{\theta}\in {\mathbb{R}}^{J}}{min}L(\mathrm{\Phi}\mathit{\theta}).$$(4.3)

For expositional simplicity assume that *θ*^{*} is unique, and define the sparsity of any *θ* ∈ ℝ^{J} by its *l*_{0} norm, ||*θ*||_{0} (see Appendix A.2 for a more general setting, where *θ*^{*} is not unique and a laxer definition of sparsity is used). As discussed above, for finite *n*, instead of estimating *θ*^{*}, the *l*_{1}-PLS estimator ${\hat{\mathit{\theta}}}_{n}$ estimates a parameter, ${\mathit{\theta}}_{n}^{\ast \ast}$, possessing small prediction error but with controlled sparsity. For any bounded function *f* on 𝒳 × 𝒜, let ||*f*||_{∞} ≜ sup_{x,a} |*f*(*x*, *a*)|. ${\mathit{\theta}}_{n}^{\ast \ast}$ lies in the set of parameters Θ_{n} defined by

where
${\sigma}_{j}={(E{\phi}_{j}^{2})}^{1/2}$, and *η*, *β* and *U* are positive constants that will be defined in Theorem 4.1.

The first two conditions in (4.4) restrict Θ_{n} to *θ*'s with controlled distance in sup norm and with controlled distance in prediction error via first order derivatives (note that $\mid E[{\phi}_{j}\mathrm{\Phi}(\mathit{\theta}-{\mathit{\theta}}^{\ast})]/{\sigma}_{j}\mid \phantom{\rule{0.16667em}{0ex}}=\phantom{\rule{0.16667em}{0ex}}\mid \partial L(\mathrm{\Phi}\mathit{\theta})/\partial {\theta}_{j}-\phantom{\rule{0.16667em}{0ex}}\partial L(\mathrm{\Phi}{\mathit{\theta}}^{\ast})/\partial {\theta}_{j}^{\ast}\mid /2{\sigma}_{j})$. The third condition restricts Θ_{n} to sparse *θ*'s. Note that as *n* increases this sparsity requirement becomes laxer, ensuring that *θ*^{*} ∈ Θ_{n} for sufficiently large *n*.

When Θ_{n} is non-empty, ${\mathit{\theta}}_{n}^{\ast \ast}$ is given by

$${\mathit{\theta}}_{n}^{\ast \ast}\in arg\underset{\mathit{\theta}\in {\mathrm{\Theta}}_{n}}{min}\left[L(\mathrm{\Phi}\mathit{\theta})+3{\left|\right|\mathit{\theta}\left|\right|}_{0}{\lambda}_{n}^{2}/\beta \right].$$(4.5)

Note that ${\mathit{\theta}}_{n}^{\ast \ast}$ is at least as sparse as *θ*^{*} since, by (4.3), $L(\mathrm{\Phi}\mathit{\theta})+3{\left|\right|\mathit{\theta}\left|\right|}_{0}{\lambda}_{n}^{2}/\beta >L(\mathrm{\Phi}{\mathit{\theta}}^{\ast})+3{\left|\right|{\mathit{\theta}}^{\ast}\left|\right|}_{0}{\lambda}_{n}^{2}/\beta $ for any *θ* such that ||*θ*||_{0} > ||*θ*^{*}||_{0}.

The following theorem provides a finite sample performance guarantee for the ITR produced by the *l*_{1}-PLS method. Intuitively, this result implies that if *Q*_{0} can be well approximated by the sparse linear representation ${\mathit{\theta}}_{n}^{\ast \ast}$ (so that both $L(\mathrm{\Phi}{\mathit{\theta}}_{n}^{\ast \ast})-L({Q}_{0})$ and ${\left|\right|{\mathit{\theta}}_{n}^{\ast \ast}\left|\right|}_{0}$ are small), then ${\hat{d}}_{n}$ will have Value close to the optimal Value in finite samples.

#### Theorem 4.1

Suppose p(a|x) ≥ S^{−1} for a positive constant S for all (x, a) pairs and the margin condition (3.3) holds for some C > 0, α ≥ 0 and all positive ε. Assume

- the error terms ε_{i} = R_{i} − Q_{0}(X_{i}, A_{i}), i = 1, …, n, are independent of (X_{i}, A_{i}), i = 1, …, n, and are i.i.d. with E(ε_{i}) = 0 and $E[{\mid {\epsilon}_{i}\mid}^{l}]\le {\scriptstyle \frac{l!}{2}}{c}^{l-2}{\sigma}^{2}$ for some c, σ^{2} > 0 and all l ≥ 2;
- there exist finite, positive constants U and η such that max_{j=1,…,J} ||φ_{j}||_{∞}/σ_{j} ≤ U and ||Q_{0} − Φ**θ**^{*}||_{∞} ≤ η; and
- E[(φ_{1}/σ_{1}, …, φ_{J}/σ_{J})^{T}(φ_{1}/σ_{1}, …, φ_{J}/σ_{J})] is positive definite, and the smallest eigenvalue is denoted by β.

Consider the estimated ITR ${\hat{d}}_{n}$ defined by (4.2) with tuning parameter

$${\lambda}_{n}\ge k\sqrt{\frac{log(Jn)}{n}},$$(4.6)

where k = 82 max{c, σ, η}. Let Θ_{n} be the set defined in (4.4). Then for any n ≥ 24U^{2} log(Jn) for which Θ_{n} is non-empty, we have, with probability at least 1 − 1/n, that

$$V({d}_{0})-V({\hat{d}}_{n})\le {C}^{\prime}{\left[\underset{\mathit{\theta}\in {\mathrm{\Theta}}_{n}}{min}\left(L(\mathrm{\Phi}\mathit{\theta})-L({Q}_{0})+\frac{3{\left|\right|\mathit{\theta}\left|\right|}_{0}{\lambda}_{n}^{2}}{\beta}\right)\right]}^{(1+\alpha )/(2+\alpha )},$$(4.7)

where C′ = (2^{2+3α}S^{1+α}C)^{1/(2+α)}.

The result follows from inequality (3.4) in Theorem 3.1 and inequality (4.10) in Theorem 4.3. Similar results in a more general setting can be obtained by combining (3.4) with inequality (A.7) in Appendix A.2.

##### Remarks

- Note that ${\mathit{\theta}}_{n}^{\ast \ast}$ is the minimizer of the upper bound on the RHS of (4.7) and that ${\mathit{\theta}}_{n}^{\ast \ast}$ is contained in the set { ${\mathit{\theta}}_{n}^{\ast ,(m)}$ : *m* ⊂ {1, …, *J*}}. Each ${\mathit{\theta}}_{n}^{\ast ,(m)}$ satisfies ${\mathit{\theta}}_{n}^{\ast ,(m)}=arg{min}_{\{\mathit{\theta}\in {\mathrm{\Theta}}_{n}:{\theta}_{j}=0\phantom{\rule{0.16667em}{0ex}}\text{for}\phantom{\rule{0.16667em}{0ex}}\text{all}\phantom{\rule{0.16667em}{0ex}}j\notin m\}}L(\mathrm{\Phi}\theta )$; that is, ${\mathit{\theta}}_{n}^{\ast ,(m)}$ minimizes the prediction error of the model indexed by the set *m* (i.e. the model {Σ_{j∈m} φ_{j}θ_{j} : θ_{j} ∈ ℝ}) within Θ_{n}. For each ${\mathit{\theta}}_{n}^{\ast ,(m)}$, the first term in the upper bound in (4.7) (i.e. $L(\mathrm{\Phi}{\mathit{\theta}}_{n}^{\ast ,(m)})-L({Q}_{0})$) is the approximation error of the model indexed by *m* within Θ_{n}. As in van de Geer [33], we call the second term, $3{\left|\right|{\mathit{\theta}}_{n}^{\ast ,(m)}\left|\right|}_{0}{\lambda}_{n}^{2}/\beta $, the estimation error of the model indexed by *m*. To see why, first put ${\lambda}_{n}=k\sqrt{log(Jn)/n}$. Then, ignoring the log(*n*) factor, the second term is a function of the sparsity of model *m* relative to the sample size *n*. Up to constants, the second term is a "tight" upper bound for the estimation error of the OLS estimator from model *m*, where "tight" means that the convergence rate in the bound is the best known rate. Note that ${\mathit{\theta}}_{n}^{\ast \ast}$ is the parameter that minimizes the sum of the two errors over all models; such a model (the model corresponding to ${\mathit{\theta}}_{n}^{\ast \ast}$) is called an oracle model. By using the *l*_{1}-PLS method, we pay a factor of log(*n*) in the estimation error and, in exchange, the *l*_{1}-PLS estimator behaves roughly as if it knew the sparsity of the oracle model and as if it were estimated from the oracle model using OLS. Thus the log(*n*) factor can be viewed as the price paid for not knowing the sparsity of the oracle model and thus having to conduct model selection. See remark 2 after Theorem A.1 for the precise definition of the oracle model and its relationship to ${\mathit{\theta}}_{n}^{\ast \ast}$.
- Suppose *λ*_{n} = *o*(1). Then in large samples the estimation error term $3{\left|\right|\mathit{\theta}\left|\right|}_{0}{\lambda}_{n}^{2}/\beta $ is negligible. In this case, ${\mathit{\theta}}_{n}^{\ast \ast}$ is close to *θ*^{*}. When the model Φ*θ*^{*} approximates *Q*_{0} sufficiently well, we see that setting *λ*_{n} equal to its lower bound in (4.6) provides the fastest rate of convergence of the upper bound to zero. More precisely, suppose *Q*_{0} = Φ*θ*^{*} (i.e. *L*(Φ*θ*^{*}) − *L*(*Q*_{0}) = 0). Then inequality (4.7) implies that *V*(*d*_{0}) − *V*(${\hat{d}}_{n}$) ≤ *O*_{p}((log *n*/*n*)^{(1+α)/(2+α)}). A convergence in mean result is presented in Corollary 4.1.
- In finite samples, the estimation error $3{\left|\right|\mathit{\theta}\left|\right|}_{0}{\lambda}_{n}^{2}/\beta $ is nonnegligible. The argument of the minimum in the upper bound (4.7), ${\mathit{\theta}}_{n}^{\ast \ast}$, minimizes the prediction error among parameters with controlled sparsity. In remark 2 after Theorem 4.3, we discuss how this upper bound is a tight upper bound for the OLS estimator from an oracle model in the step-wise model selection setting. In this sense, inequality (4.7) implies that the decision rule produced by the *l*_{1}-PLS method will have a reduction in Value roughly as if it knew the sparsity of the oracle model and were estimated from the oracle model using OLS.
- Assumptions 1–3 in Theorem 4.1 are employed to derive the finite sample prediction error bound for the *l*_{1}-PLS estimator ${\hat{\mathit{\theta}}}_{n}$ defined in (4.1). Below we briefly discuss these assumptions. Assumption 1 implicitly implies that the error terms do not have heavy tails. This condition is often assumed to show that the sample mean of a variable is concentrated around its true mean with high probability. It is easy to verify that this assumption holds if each *ε*_{i} is bounded. Moreover, it also holds for some commonly used error distributions that have unbounded support, such as the normal or double exponential. Assumption 2 is also used to show the concentration of the sample mean around the true mean. It is possible to replace the boundedness condition by a moment condition similar to Assumption 1. This assumption requires that all basis functions and the difference between *Q*_{0} and its best linear approximation are bounded. Note that we do not assume 𝒬 to be a good approximation space for *Q*_{0}. However, if Φ*θ*^{*} approximates *Q*_{0} well, *η* will be small, which will result in a smaller upper bound in (4.7). In fact, in the generalized result (Theorem A.1) we allow *U* and *η* to increase in *n*. Assumption 3 is employed to avoid collinearity. In fact, we only need $$E{[\mathrm{\Phi}({\mathit{\theta}}^{\prime}-\mathit{\theta})]}^{2}{\left|\right|\mathit{\theta}\left|\right|}_{0}\phantom{\rule{0.16667em}{0ex}}\ge \beta {\left(\sum _{j\in {M}_{0}(\mathit{\theta})}{\sigma}_{j}\mid {\theta}_{j}^{\prime}-{\theta}_{j}\mid \right)}^{2},$$(4.8) for *θ*, *θ*′ belonging to a subset of ℝ^{J} (see Assumption A.3), where *M*_{0}(*θ*) ≜ {*j* = 1, …, *J* : *θ*_{j} ≠ 0}. Condition (4.8) has been used in van de Geer [33]. This condition is also similar to the restricted eigenvalue assumption in Bickel et al. [3], in which *E* is replaced by *E*_{n} and a fixed design matrix is considered. Clearly, Assumption 3 is a sufficient condition for (4.8). In addition, condition (4.8) is satisfied if the correlation |*E*φ_{j}φ_{k}|/(*σ*_{j}*σ*_{k}) is small for all *k* ∈ *M*_{0}(*θ*), *j* ≠ *k* and a subset of *θ*'s (similar results in a fixed design setting have been proved in Bickel et al. [3]; the condition on correlation is also known as the "mutual coherence" condition in Bunea et al. [4]). See Bickel et al. [3] for other sufficient conditions for (4.8).

The above upper bound for *V*(*d*_{0}) − *V*(${\hat{d}}_{n}$) involves *L*(Φ*θ*) − *L*(*Q*_{0}), which measures how well the conditional mean function *Q*_{0} is approximated by 𝒬. As we have seen in Section 3, the quality of the estimated ITR only depends on the estimator of the treatment effect term *T*_{0}. Below we provide a strengthened result in the sense that the upper bound depends only on how well we approximate the treatment effect term.

First we identify terms in the linear model that approximate *T*_{0} (recall that *T*_{0}(*X*,*A*)  *Q*_{0}(*X*,*A*) − *E*[*Q*_{0}(*X*,*A*)|*X*]). Without loss of generality, we rewrite the vector of basis functions as Φ(*X, A*) = (Φ^{(1)}(*X*), Φ^{(2)}(*X, A*)), where Φ^{(1)} = ( _{1}(*X*), …,  _{J(1)}(*X*)) is composed of all components in Φ that do not contain *A*, and Φ^{(2)} = ( _{J(1)+1}(*X, A*), …,  _{J}(*X, A*)) is composed of all components in Φ that contain *A*. Since *A* takes only finite values and the randomization distribution *p*(*a|x*) is known, we can code *A* so that *E*[Φ^{(2)}(*X, A*)^{T}*|X*] = **0** a.s. (see Section 5.2 and Appendix A.3 for examples). For any ***θ*** = (*θ*_{1}*,* …, *θ*_{J})^{T} ∈ ^{J}, denote ***θ***^{(1)} = (*θ*_{1}, …, *θ*_{J(1)})^{T} and ***θ***^{(2)} = (*θ*_{J(1)+1}, …, *θ*_{J})^{T}. Then Φ^{(1)}***θ***^{(1)} approximates *E*(*Q*_{0}(*X, A*)|*X*) and Φ^{(2)}***θ***^{(2)} approximates *T*_{0}.
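As a concrete check (with illustrative names of our own, not from the paper), when *A* ∈ {−1, 1} is assigned by uniform randomization, the conditional mean of every basis function containing *A* can be computed exactly from the known randomization distribution and equals zero:

```python
# Sketch: with A coded as +/-1 under uniform randomization p(a|x) = 1/2,
# every basis function containing A has conditional mean zero given X.
# The names phi2 and conditional_mean_phi2 are illustrative only.
def phi2(x, a):
    # treatment-effect basis functions: (a, x*a)
    return [a, x * a]

def conditional_mean_phi2(x, p=0.5):
    # E[phi2(X, A) | X = x] computed exactly over the randomization distribution
    mean = [0.0, 0.0]
    for a, prob in [(1, p), (-1, 1 - p)]:
        for j, v in enumerate(phi2(x, a)):
            mean[j] += prob * v
    return mean

print(conditional_mean_phi2(0.7))  # -> [0.0, 0.0]
```

For any *x* the result is exactly zero, which is precisely the coding requirement *E*[Φ^{(2)}(*X*, *A*)^{T}|*X*] = **0** used above.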

The following theorem implies that if the treatment effect term *T*_{0} can be well approximated by a sparse representation, then * _{n}* will have Value close to the optimal Value.

#### Theorem 4.2

Suppose p(a|x) ≥ S^{−1} for a positive constant S for all (x, a) pairs and the margin condition (3.3) holds for some C > 0, α ≥ 0 and all positive ε. Assume E[Φ^{(2)}(X, A)^{T}|X] = **0** a.s. Suppose Assumptions 1 – 3 in Theorem 4.1 hold. Let  _{n} be the estimated ITR with λ_{n} satisfying condition (4.6). Let Θ_{n} be the set defined in (4.4). Then for any n ≥ 24U^{2} log(Jn) for which Θ_{n} is non-empty, we have, with probability at least 1 − 1/n, that

$$V({d}_{0})-V({\widehat{d}}_{n})\le {C}^{\prime}{\left(\underset{\mathit{\theta}\in {\mathrm{\Theta}}_{n}}{min}\left[E{({\mathrm{\Phi}}^{(2)}{\mathit{\theta}}^{(2)}-{T}_{0}(X,A))}^{2}+\frac{3{\left|\right|\mathit{\theta}\left|\right|}_{0}{\lambda}_{n}^{2}}{\beta}\right]\right)}^{(1+\alpha )/(2+\alpha )},$$(4.9)

where C′ = (2^{2+3α}S^{1+α}C)^{1/(2+α)}.

The result follows from inequality (3.5) in Theorem 3.1 and inequality (4.11) in Theorem 4.3.

##### Remarks

- Inequality (4.9) improves inequality (4.7) in the sense that it guarantees a small reduction in Value of * _{n}* as long as the treatment effect term *T*_{0} is well approximated by a sparse linear representation; it does not require that the entire conditional mean function *Q*_{0} be well approximated. In many situations *Q*_{0} may be very complex, but *T*_{0} could be very simple. This means that *T*_{0} is much more likely to be well approximated than *Q*_{0} (indeed, if there is no difference between treatments, then *T*_{0} ≡ 0).
- Inequality (4.9) cannot be improved in the sense that if there is no treatment effect (i.e. *T*_{0} ≡ 0), then both sides of the inequality are zero. This result implies that minimizing the penalized empirical prediction error indeed yields high Value (at least asymptotically) if *T*_{0} can be well approximated.

The following asymptotic result follows from Theorem 4.2. Note that when *E*[Φ^{(2)}(*X, A*)^{T}*|X*] = **0** a.s. (see Section 5 for examples), *E*(Φ***θ*** − *Q*_{0})^{2} = *E*(Φ^{(1)}***θ***^{(1)} − *E*(*Q*_{0}|*X*))^{2} + *E*(Φ^{(2)}***θ***^{(2)} − *T*_{0})^{2}. Thus the estimation of the treatment effect term *T*_{0} is asymptotically separated from the estimation of the main effect term *E*(*Q*_{0}*|X*). In this case, Φ^{(2)}***θ***^{(2),*} is the best linear approximation of the treatment effect term *T*_{0}, where ***θ***^{(2),*} is the vector of components in ***θ***^{*} corresponding to Φ^{(2)}.
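The separation claimed here follows from a one-line calculation: the cross term in the expansion of the prediction error has conditional mean zero given *X* (a sketch, using the definitions above):

```latex
% Cross term vanishes: E[Phi^(2)(X,A)^T | X] = 0 a.s. and E[T_0(X,A) | X] = 0,
% while Phi^(1)theta^(1) - E(Q_0|X) is a function of X alone.
\begin{aligned}
E(\Phi\theta - Q_0)^2
&= E\left[\left(\Phi^{(1)}\theta^{(1)} - E(Q_0 \mid X)\right)
        + \left(\Phi^{(2)}\theta^{(2)} - T_0\right)\right]^2 \\
&= E\left(\Phi^{(1)}\theta^{(1)} - E(Q_0 \mid X)\right)^2
 + E\left(\Phi^{(2)}\theta^{(2)} - T_0\right)^2 \\
&\quad + 2\,E\left[\left(\Phi^{(1)}\theta^{(1)} - E(Q_0 \mid X)\right)
   \underbrace{E\left(\Phi^{(2)}\theta^{(2)} - T_0 \,\middle|\, X\right)}_{=\,0}\right].
\end{aligned}
```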

#### Corollary 4.1

Suppose p(a|x) ≥ S^{−1} for a positive constant S for all (x, a) pairs and the margin condition (3.3) holds for some C > 0, α ≥ 0 and all positive ε. Assume E[Φ^{(2)}(X, A)^{T}|X] = **0** a.s. In addition, suppose Assumptions 1 – 3 in Theorem 4.1 hold. Let _{n} be the estimated ITR with tuning parameter
${\lambda}_{n}={k}_{1}\sqrt{log(Jn)/n}$ for a constant k_{1} ≥ 82 max{c, σ, η}. If T_{0}(X, A) = Φ^{(2)}**θ**^{(2),*}, then

This result provides a guarantee on the convergence rate of *V* (* _{n}*) to the optimal Value. More specifically, it means that if *T*_{0} is correctly approximated, then the Value of * _{n}* will converge to the optimal Value in mean at a rate at least as fast as (log *n*/*n*)^{(1+α)/(2+α)} with an appropriate choice of *λ*_{n}.

### 4.2. Prediction error bound for the l_{1}-PLS estimator

In this section we provide a finite sample upper bound for the prediction error of the *l*_{1}-PLS estimator * _{n}*. This result is needed to prove Theorem 4.1. Furthermore, this result strengthens the existing literature on the *l*_{1}-PLS method in prediction. Finite sample prediction error bounds for the *l*_{1}-PLS estimator in the random design setting have been provided in Bunea et al. [4] for quadratic loss, van de Geer [33] mainly for Lipschitz loss, and Koltchinskii [15] for a variety of loss functions. With regard to quadratic loss, Koltchinskii [15] requires that the response *Y* be bounded, while both Bunea et al. [4] and van de Geer [33] assumed the existence of a sparse ***θ*** ∈ ^{J} such that *E*(Φ***θ*** − *Q*_{0})^{2} is upper bounded by a quantity that decreases to 0 at a certain rate as *n* → ∞ (by permitting *J* to increase with *n*, so that Φ depends on *n* as well). We improve these results in the sense that we do not make such assumptions (see Appendix A.2 for results when Φ, *J* are indexed by *n* and *J* increases with *n*).

As in the prior sections, the sparsity of ***θ*** is measured by its *l*_{0} norm, ||***θ***||_{0} (see Appendix A.2 for proofs with a laxer definition of sparsity). Recall that the parameter ${\mathit{\theta}}_{n}^{\ast \ast}$ defined in (4.5) has small prediction error and controlled sparsity.

#### Theorem 4.3

Suppose Assumptions 1–3 in Theorem 4.1 hold. For any η_{1} ≥ 0, let  _{n} be the l_{1}-PLS estimator defined by (4.1) with tuning parameter λ_{n} satisfying condition (4.6). Let Θ_{n} be the set defined in (4.4). Then for any n ≥ 24U^{2} log(Jn) for which Θ_{n} is non-empty, we have, with probability at least 1 − 1/n, that

$$L(\mathrm{\Phi}{\widehat{\mathit{\theta}}}_{n})-L({Q}_{0})\le \underset{\mathit{\theta}\in {\mathrm{\Theta}}_{n}}{min}\left[L(\mathrm{\Phi}\mathit{\theta})-L({Q}_{0})+\frac{3{\left|\right|\mathit{\theta}\left|\right|}_{0}{\lambda}_{n}^{2}}{\beta}\right].$$(4.10)

Furthermore, suppose E[Φ^{(2)}(X, A)^{T}|X] = **0** a.s. Then with probability at least 1 − 1/n,

$$E{[{\mathrm{\Phi}}^{(2)}{\widehat{\mathit{\theta}}}_{n}^{(2)}-{T}_{0}(X,A)]}^{2}\le \underset{\mathit{\theta}\in {\mathrm{\Theta}}_{n}}{min}\left[E{({\mathrm{\Phi}}^{(2)}{\mathit{\theta}}^{(2)}-{T}_{0}(X,A))}^{2}+\frac{3{\left|\right|\mathit{\theta}\left|\right|}_{0}{\lambda}_{n}^{2}}{\beta}\right].$$(4.11)
The results follow from Theorem A.1 in Appendix A.2 with *ρ* = 0, *γ*= 1/8, *η*_{1} = *η*_{2} = *η*, *t* = log 2*n* and some simple algebra (notice that Assumption 3 in Theorem 4.1 is a sufficient condition for Assumptions A.3 and A.4).

##### Remarks

Inequality (4.11) provides a finite sample upper bound on the mean square difference between *T*_{0} and its estimator. This result is used to prove Theorem 4.2. The remarks below discuss how inequality (4.10) contributes to the *l*_{1}-penalization literature in prediction.

- The conclusion of Theorem 4.3 holds for all choices of *λ*_{n} that satisfy (4.6). Suppose *λ*_{n} = *o*(1); then $L(\mathrm{\Phi}{\mathit{\theta}}_{n}^{\ast \ast})-L(\mathrm{\Phi}{\mathit{\theta}}^{\ast})\to 0$ as *n* → ∞ (since ||***θ***^{*}||_{0} is bounded). Then (4.10) implies that *L*(Φ* _{n}*) − *L*(Φ***θ***^{*}) → 0 in probability. To achieve the best rate of convergence, the equality sign should be taken in (4.6).
- Note that ${\mathit{\theta}}_{n}^{\ast \ast}$ minimizes $L(\mathrm{\Phi}\mathit{\theta})-L({Q}_{0})+3{\left|\right|\mathit{\theta}\left|\right|}_{0}{\lambda}_{n}^{2}/\beta $. Below we demonstrate that the minimum of $L(\mathrm{\Phi}\mathit{\theta})-L({Q}_{0})+3{\left|\right|\mathit{\theta}\left|\right|}_{0}{\lambda}_{n}^{2}/\beta $ can be viewed as the approximation error plus a “tight” upper bound of the estimation error of an “oracle” in the stepwise model selection framework (when “=” is taken in (4.6)). Here “tight” means the convergence rate in the bound is the best known rate, and the “oracle” is defined as follows. Let *m* denote a non-empty subset of the index set {1, …*, J*}. Then each *m* represents a model which uses a non-empty subset of { _{1}*,* …*,*  _{J}} as basis functions (there are 2^{J} − 1 such subsets). Define ${\widehat{\mathit{\theta}}}_{n}^{(m)}=arg{min}_{\{\mathit{\theta}\in {\mathbb{R}}^{J}:{\theta}_{j}=0\phantom{\rule{0.16667em}{0ex}}\text{for}\phantom{\rule{0.16667em}{0ex}}\text{all}\phantom{\rule{0.16667em}{0ex}}j\notin m\}}{E}_{n}{(R-\mathrm{\Phi}\mathit{\theta})}^{2}$ and ***θ***^{*,(m)} = arg min_{{θ∈^{J}: θ_{j} = 0 for all j∉m}} *L*(Φ***θ***). In this setting, an ideal model selection criterion will pick the model *m*^{*} such that $L(\mathrm{\Phi}{\widehat{\mathit{\theta}}}_{n}^{({m}^{\ast})})={inf}_{m}L(\mathrm{\Phi}{\widehat{\mathit{\theta}}}_{n}^{(m)})$; ${\widehat{\mathit{\theta}}}_{n}^{({m}^{\ast})}$ is referred to as an “oracle” in Massart [20]. Note that the excess prediction error of each ${\widehat{\mathit{\theta}}}_{n}^{(m)}$ can be written as$$L(\mathrm{\Phi}{\widehat{\mathit{\theta}}}_{n}^{(m)})-L({Q}_{0})=[L(\mathrm{\Phi}{\mathit{\theta}}^{\ast ,(m)})-L({Q}_{0})]+[L(\mathrm{\Phi}{\widehat{\mathit{\theta}}}_{n}^{(m)})-L(\mathrm{\Phi}{\mathit{\theta}}^{\ast ,(m)})],$$where the first term is called the approximation error of model *m* and the second term is the estimation error. It can be shown [2] that for each model *m* and *x*_{m} > 0, with probability at least 1 − exp(−*x*_{m}),$$L(\mathrm{\Phi}{\widehat{\mathit{\theta}}}_{n}^{(m)})-L(\mathrm{\Phi}{\mathit{\theta}}^{\ast ,(m)})\le \mathit{constant}\times \left(\frac{{x}_{m}+\mid m\mid log(n/\mid m\mid )}{n}\right)$$under appropriate technical conditions, where *|m|* is the cardinality of the index set *m*. To our knowledge this is the best rate known so far. Taking *x*_{m} = log *n* + |*m*| log *J* and using the union bound argument, we have with probability at least 1 − *O*(1/*n*),$$\begin{array}{l}L(\mathrm{\Phi}{\widehat{\mathit{\theta}}}_{n}^{({m}^{\ast})})-L({Q}_{0})\\ =\underset{m}{min}\left([L(\mathrm{\Phi}{\mathit{\theta}}^{\ast ,(m)})-L({Q}_{0})]+L(\mathrm{\Phi}{\widehat{\mathit{\theta}}}_{n}^{(m)})-L(\mathrm{\Phi}{\mathit{\theta}}^{\ast ,(m)})\right)\\ \le \underset{m}{min}\left([L(\mathrm{\Phi}{\mathit{\theta}}^{\ast ,(m)})-L({Q}_{0})]+\mathit{constant}\times \frac{\mid m\mid log(Jn)}{n}\right)\\ =\underset{\mathit{\theta}}{min}\left([L(\mathrm{\Phi}\mathit{\theta})-L({Q}_{0})]+\mathit{constant}\times \frac{{\left|\right|\mathit{\theta}\left|\right|}_{0}log(Jn)}{n}\right).\end{array}$$(4.12)On the other hand, take *λ*_{n} so that condition (4.6) holds with “=”. Then (4.10) implies that, with probability at least 1 − 1/*n*,$$L(\mathrm{\Phi}{\widehat{\mathit{\theta}}}_{n})-L({Q}_{0})\le \underset{\mathit{\theta}\in {\mathrm{\Theta}}_{n}}{min}\left([L(\mathrm{\Phi}\mathit{\theta})-L({Q}_{0})]+\mathit{constant}\times \frac{{\left|\right|\mathit{\theta}\left|\right|}_{0}log(Jn)}{n}\right),$$which is essentially (4.12) with the constraint ***θ*** ∈ Θ_{n}. (The “*constant*” in the above inequalities may take different values.) Since ***θ*** = ${\mathit{\theta}}_{n}^{\ast \ast}$ minimizes the approximation error plus a tight upper bound for the estimation error of the oracle within ***θ*** ∈ Θ_{n}, we refer to ${\mathit{\theta}}_{n}^{\ast \ast}$ as an oracle.
*l*_{1}penalty behaves similarly as the*l*_{0}penalty. Note thatminimizes the empirical prediction error,_{n}*E*(_{n}*R*− Θ)*θ*^{2}, plus an*l*_{1}penalty whereas*θ*^{**}(*u*) minimizes the prediction error_{n}*L*(Φ) plus an*θ**l*_{0}penalty. We provide an intuitive connection between these two quantities. First note that*E*(_{n}*R*− Φ)*θ*^{2}estimates*L*(Φ) and*θ*estimates_{j}*σ*. We use “≈” to denote this relationship. Thus_{j}$${E}_{n}{(R-\mathrm{\Phi}\mathit{\theta})}^{2}+{\lambda}_{n}\sum _{j=1}^{J}{\widehat{\sigma}}_{j}\mid {\theta}_{i}\mid \approx L(\mathrm{\Phi}\mathit{\theta})+{\lambda}_{n}\sum _{j=1}^{J}{\sigma}_{j}\mid {\theta}_{j}\mid \le L(\mathrm{\Phi}\mathit{\theta})+{\lambda}_{n}\sum _{j=1}^{J}{\sigma}_{j}\mid {\widehat{\theta}}_{n,j}-{\theta}_{j}\mid +{\lambda}_{n}\sum _{j=1}^{J}{\sigma}_{j}\mid {\widehat{\theta}}_{n,j}\mid ,$$(4.13)whereis the_{n,j}*j*component of_{th}. In Appendix B we show that for any_{n}Θ*θ*, ${\lambda}_{n}{\sum}_{j=1}^{J}{\sigma}_{j}\mid {\widehat{\theta}}_{n,j}-{\theta}_{j}\mid $ is upper bounded by ${\left|\right|\mathit{\theta}\left|\right|}_{0}{\lambda}_{n}^{2}/\beta $ up to a constant with a high probability. Thus_{n}minimizes (4.13) and_{n}*θ*^{**}(*u*) roughly minimizes an upper bound of (4.13)._{n} - The constants involved in the theorem can be improved; we focused on readability as opposed to providing the best constants.

## 5. A Practical Implementation and an Evaluation

In this section we develop a practical implementation of the *l*_{1}-PLS method, compare this method to two commonly used alternatives and lastly illustrate the method using the motivating data from the Nefazodone-CBASP trial [13].

A realistic implementation of the *l*_{1}-PLS method should use a data-dependent method to select the tuning parameter *λ*_{n}. Since the primary goal is to maximize the Value, we select *λ*_{n} to maximize a cross-validated Value estimator. For any ITR *d*, it is easy to verify that *E*[(*R* − *V*(*d*))1_{A=d(X)}/*p*(*A*|*X*)] = 0. Thus an unbiased estimator of *V*(*d*) is

[21] (recall that the randomization distribution *p*(*a|X*) is known). We split the data into 10 roughly equal-sized parts; then for each *λ*_{n} we apply the *l*_{1}-PLS based method on each 9 parts of the data to obtain an ITR, and estimate the Value of this ITR using the remaining part; the *λ*_{n} that maximizes the average of the 10 estimated Values is selected. Since the Value of an ITR is noncontinuous in the parameters, this usually results in a set of candidate *λ*_{n}’s achieving the maximal Value. In the simulations below the resulting *λ*_{n} is nonunique in around 97% of the data sets. If necessary, as a second step we reduce the set of *λ*_{n}’s by including only the *λ*_{n}’s leading to the ITRs using the least number of variables. In the simulations below this second criterion effectively reduced the number of candidate *λ*_{n}’s in around 25% of the data sets; however, multiple *λ*_{n}’s still remained in around 90% of the data sets. This is not surprising since the Value of an ITR only depends on the relative magnitudes of the parameters in the ITR. In the third step we select the *λ*_{n} that minimizes the 10-fold cross-validated prediction error estimator from the remaining candidate *λ*_{n}’s; that is, minimization of the empirical prediction error is used as a final tie breaker.
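The three-step selection just described can be sketched as follows. The Value estimate uses the inverse-probability-weighted form *E*_{n}[*R* 1_{A=d(X)}/*p*(*A*|*X*)], which is unbiased by the displayed identity (since *E*[1_{A=d(X)}/*p*(*A*|*X*)] = 1); the function names and the pre-computed cross-validation summaries are hypothetical, not from the paper:

```python
# Sketch of the three-step tuning-parameter selection described above.
# The per-lambda cross-validated summaries (mean Value, number of variables
# used, CV prediction error) are assumed to be precomputed elsewhere.
def ipw_value(R, A, dX, pAX):
    # inverse-probability-weighted Value estimator: E_n[R * 1{A = d(X)} / p(A|X)]
    n = len(R)
    return sum(r / p for r, a, d, p in zip(R, A, dX, pAX) if a == d) / n

def select_lambda(candidates):
    # candidates: list of tuples (lam, cv_value, n_vars, cv_pred_err)
    best_value = max(c[1] for c in candidates)
    step1 = [c for c in candidates if c[1] == best_value]   # 1) maximal CV Value
    fewest = min(c[2] for c in step1)
    step2 = [c for c in step1 if c[2] == fewest]            # 2) fewest variables
    return min(step2, key=lambda c: c[3])[0]                # 3) min CV prediction error

cands = [(0.1, 2.0, 5, 1.3), (0.5, 2.0, 2, 1.1), (1.0, 2.0, 2, 1.2), (2.0, 1.7, 1, 1.0)]
print(select_lambda(cands))  # -> 0.5
```

In the toy candidate list, three values of λ tie on Value, two of those tie on the number of variables, and the prediction error breaks the final tie, mirroring the 97%/25%/90% tie pattern reported above.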

### 5.1. Simulations

A first alternative to *l*_{1}-PLS is ordinary least squares (OLS). The estimated ITR is * _{OLS}*(*X*) ∈ arg max_{a∈} Φ(*X, a*) _{OLS}, where  _{OLS} is the OLS estimator of ***θ***. A second alternative is called “prognosis prediction” [14]. Usually this method employs multiple data sets, each of which involves one active treatment. Then the treatment that is associated with the best predicted prognosis [14] is selected. We implement this method by estimating *E*(*R|X, A* = *a*) via least squares with *l*_{1} penalization for each treatment group (each *a*) separately. The tuning parameter involved in each treatment group is selected by minimizing the 10-fold cross-validated prediction error estimator. The resulting ITR satisfies * _{PP}*(*X*) ∈ arg max_{a∈} *Ê*(*R|X, A* = *a*), where the subscript “PP” denotes prognosis prediction.

For simplicity we consider binary *A*. All three methods use the same number of data points and the same number of basis functions, but use these data points/basis functions differently. *l*_{1}-PLS and OLS use all *J* basis functions to conduct estimation with all *n* data points, whereas the prognosis prediction method splits the data into the two treatment groups and uses *J*/2 basis functions to conduct estimation with the *n*/2 data points in each of the two treatment groups. To ensure the comparison is fair across the three methods, the approximation model for each treatment group is consistent with the approximation model used in both *l*_{1}-PLS and OLS (e.g. if *Q*_{0} is approximated by (1*, X, A, XA*)***θ*** in *l*_{1}-PLS and OLS, then in prognosis prediction we approximate *E*(*R|X, A* = *a*) by (1*, X*)***θ***_{PP} for each treatment group). We do not penalize the intercept coefficient in either prognosis prediction or *l*_{1}-PLS.
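The weighted *l*_{1} penalty in criterion (4.1) reduces to an ordinary lasso after rescaling each basis function by its empirical norm σ̂_{j}. The sketch below illustrates this reduction with a bare-bones coordinate-descent solver; the names (`lasso_cd`, `l1_pls`) are our own, this is not the authors' implementation, and the unpenalized intercept discussed above is omitted:

```python
# Minimal sketch of the l1-PLS criterion: minimize
#   E_n (R - Phi theta)^2 + lam * sum_j sigma_hat_j * |theta_j|.
# Rescaling column j by sigma_hat_j turns this into an ordinary lasso.
import math

def lasso_cd(X, y, lam, n_iter=200):
    # ordinary lasso via coordinate descent on (1/n)||y - X b||^2 + lam ||b||_1
    n, p = len(X), len(X[0])
    b = [0.0] * p
    for _ in range(n_iter):
        for j in range(p):
            # partial residual excluding coordinate j
            r = [y[i] - sum(X[i][k] * b[k] for k in range(p) if k != j)
                 for i in range(n)]
            rho = sum(X[i][j] * r[i] for i in range(n)) / n
            z = sum(X[i][j] ** 2 for i in range(n)) / n
            # soft-thresholding update
            b[j] = math.copysign(max(abs(rho) - lam / 2, 0.0), rho) / z
    return b

def l1_pls(Phi, R, lam):
    n, p = len(Phi), len(Phi[0])
    sig = [math.sqrt(sum(Phi[i][j] ** 2 for i in range(n)) / n) for j in range(p)]
    Xs = [[Phi[i][j] / sig[j] for j in range(p)] for i in range(n)]
    b = lasso_cd(Xs, R, lam)
    return [b[j] / sig[j] for j in range(p)]  # undo the rescaling

Phi = [[1, 1], [1, -1], [-1, 1], [-1, -1]]   # toy design with orthonormal columns
R = [2.0, 0.0, 0.0, -2.0]
print(l1_pls(Phi, R, 0.5))  # -> [0.75, 0.75]
```

With orthonormal columns the soft-thresholding shrinks each unpenalized coefficient (here 1.0) by λ/2, which is why the toy call returns 0.75 for both coordinates.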

The three methods are compared using two criteria: 1) Value maximization; and 2) simplicity of the estimated ITRs (measured by the number of variables/basis functions used in the rule).

We illustrate the comparison of the three methods using 4 examples selected to reflect three scenarios; please see Section S.3 of the supplementary material for 4 further examples.

- There is no treatment effect (i.e. *Q*_{0} is constructed so that *T*_{0} = 0; example 1). In this case, all ITRs yield the same Value, so the simplest rule is preferred.
- There is a treatment effect and the treatment effect term *T*_{0} is correctly modeled (example 4 for large *n*, and example 2). In this case, minimizing the prediction error will yield the ITR that maximizes the Value.
- There is a treatment effect and the treatment effect term *T*_{0} is misspecified (example 4 for small *n*, and example 3). In this case, there might be a mismatch between prediction error minimization and Value maximization.

The examples are generated as follows. The treatment *A* is generated uniformly from {−1, 1} independent of *X* and the response *R*. The response *R* is normally distributed with mean *Q*_{0}(*X, A*). In examples 1–3, *X* ~ *U* [−1, 1]^{5} and we consider three simple examples for *Q*_{0}. In example 4, *X* ~ *U* [0, 1] and we use a complex *Q*_{0}, where *Q*_{0}(*X,* 1) and *Q*(*X,* −1) are similar to the blocks function used in Donoho and Johnstone [8]. Further details of the simulation design are provided in Appendix A.3.
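A minimal sketch of such a generative model, in the style of examples 1–3 (the particular *Q*_{0} below is a hypothetical stand-in, not one of the paper's examples):

```python
# Illustrative data generator: X ~ U[-1,1]^5, A uniform on {-1, 1}
# independent of X, and R ~ N(Q0(X, A), 1).
import random

def generate(n, q0, seed=0):
    rng = random.Random(seed)
    data = []
    for _ in range(n):
        x = [rng.uniform(-1, 1) for _ in range(5)]
        a = rng.choice([-1, 1])
        r = rng.gauss(q0(x, a), 1.0)   # response with conditional mean Q0(X, A)
        data.append((x, a, r))
    return data

q0 = lambda x, a: 1 + x[0] + a * (0.5 - x[1])   # hypothetical Q0 with a treatment effect
sample = generate(8, q0)
```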

We consider two types of approximation models for *Q*_{0}. In examples 1–3, we approximate *Q*_{0} by (1*, X, A, XA*)***θ***. In example 4, we approximate *Q*_{0} by Haar wavelets. The number of basis functions may increase as *n* increases (we index *J*, Φ and ***θ***^{*} by *n* in this case). Plots of *Q*_{0}(*X, A*) and the associated best wavelet fits ${\mathrm{\Phi}}_{n}(X,A){\mathit{\theta}}_{n}^{\ast}$ are provided in Figure 1.

Fig 1. *Q*_{0}(*X, A*) (left), *Q*_{0}(*X, A*) and the associated best wavelet fit when *J*_{n} = 8 (middle), and *Q*_{0}(*X, A*) and the associated best wavelet fit when *J*_{n} = 128 (right) (example 4).

For each example, we simulate data sets of sizes *n* = 2^{k} for *k* = 5*,* …, 10. 1000 data sets are generated for each sample size. The Value of each estimated ITR is evaluated via Monte Carlo using a test set of size 10,000. The Value of the optimal ITR is also evaluated using the test set.
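Evaluating the Value of a fixed rule on a large test set can be sketched as follows (with a hypothetical *Q*_{0} and hypothetical rules; since responses have conditional mean *Q*_{0}, *V*(*d*) = *E*[*Q*_{0}(*X*, *d*(*X*))]):

```python
# Sketch: Monte Carlo evaluation of the Value of a fixed ITR on a test set.
import random

def mc_value(d, q0, n_test=10000, seed=1):
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_test):
        x = [rng.uniform(-1, 1) for _ in range(5)]
        total += q0(x, d(x))   # mean response under the rule's recommendation
    return total / n_test

q0 = lambda x, a: a * x[0]                 # hypothetical treatment effect only
d_opt = lambda x: 1 if x[0] > 0 else -1    # optimal rule for this Q0
d_all = lambda x: 1                        # always recommend treatment 1
print(mc_value(d_opt, q0) > mc_value(d_all, q0))  # -> True
```

For this toy *Q*_{0} the optimal rule attains *E*|*X*_{1}| = 0.5 while always treating attains about 0, so the Monte Carlo estimates separate clearly.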

Simulation results are presented in Figure 2. When the approximation model is of high quality, all methods produce ITRs with similar Value (see examples 1 and 2, and example 4 for large *n*). However, when the approximation model is poor, the *l*_{1}-PLS method may produce the highest Value (see example 3). Note that in example 3 settings in which the sample size is small, the Value of the ITR produced by the *l*_{1}-PLS method has a larger median absolute deviation (MAD) than the other two methods. One possible reason is that, due to the mismatch between maximizing the Value and minimizing the prediction error, the Value estimator plays a strong role in selecting *λ*_{n}. The non-smoothness of the Value estimator combined with the mismatch results in very different *λ*_{n}’s, and thus the estimated decision rules vary greatly from data set to data set in this example. Nonetheless, the *l*_{1}-PLS method is still preferred after taking the variation into account; indeed *l*_{1}-PLS produces ITRs with higher Value than both OLS and PP in around 46%, 55% and 67% of data sets of sizes *n* = 32*,* 64 and 128, respectively. Furthermore, in general the *l*_{1}-PLS method uses far fewer variables for treatment assignment than the other two methods. This is expected because the OLS method does not have variable selection functionality and the PP method will use all variables that are predictive of the response *R*, whereas the use of the Value in selecting the tuning parameter in *l*_{1}-PLS discounts variables that are only useful in predicting the response (and less useful in selecting the best treatment).

### 5.2. Nefazodone-CBASP trial example

The Nefazodone-CBASP trial was conducted to compare the efficacy of several alternate treatments for patients with chronic depression. The study randomized 681 patients with non-psychotic chronic major depressive disorder (MDD) to either Nefazodone, cognitive behavioral-analysis system of psychotherapy (CBASP) or the combination of the two treatments. Various assessments were taken throughout the study, among which the score on the 24-item Hamilton Rating Scale for Depression (HRSD) was the primary outcome. Low HRSD scores are desirable. See Keller et al. [13] for more detail of the study design and the primary analysis.

In the data analysis, we use a subset of the Nefazodone-CBASP data consisting of 656 patients for whom the response HRSD score was observed. In this trial, pairwise comparisons show that the combination treatment resulted in significantly lower HRSD scores than either of the single treatments. There was no overall difference between the single treatments.

We use *l*_{1}-PLS to develop an ITR. In the analysis the HRSD score is reverse coded so that higher is better. We consider 50 pretreatment variables *X* = (*X*_{1},…, *X*_{50}). Treatments are coded using contrast coding of dummy variables *A* = (*A*_{1}, *A*_{2}), where *A*_{1} = 2 if the combination treatment is assigned and −1 otherwise and *A*_{2} = 1 if CBASP is assigned, −1 if nefazodone and 0 otherwise. The vector of basis functions, Φ(*X*, *A*), is of the form (1, *X*, *A*_{1}, *XA*_{1}, *A*_{2}, *XA*_{2}). So the number of basis functions is *J* = 153. As a contrast, we also consider the OLS method and the PP method (separate prognosis prediction for each treatment). The vector of basis functions used in PP is (1, *X*) for each of the three treatment groups. Neither the intercept term nor the main treatment effect terms in *l*_{1}-PLS or *PP* is penalized (see Section S.2 of the supplementary material for the modification of the weights * _{j}* used in (4.1)).
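As a check of the contrast coding above (assuming, for illustration only, equal randomization probabilities across the three arms), both treatment codes have mean zero, so the basis functions containing *A*_{1} or *A*_{2} are orthogonal to the main effect terms:

```python
# Hypothetical check of the contrast coding described in the text:
# A1 = 2 for the combination treatment, -1 otherwise;
# A2 = 1 for CBASP, -1 for nefazodone, 0 for the combination.
arms = ["combination", "nefazodone", "CBASP"]

def code(arm):
    a1 = 2 if arm == "combination" else -1
    a2 = {"CBASP": 1, "nefazodone": -1, "combination": 0}[arm]
    return a1, a2

# under equal (1/3) randomization probabilities, both codes average to zero
e_a1 = sum(code(arm)[0] for arm in arms) / 3.0
e_a2 = sum(code(arm)[1] for arm in arms) / 3.0
print(e_a1, e_a2)  # -> 0.0 0.0
```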

The ITR given by the *l*_{1}-PLS method recommends the combination treatment to all patients (so none of the pretreatment variables enter the rule). On the other hand, the PP method produces an ITR that uses 29 variables. If the rule produced by PP were used to assign treatment for the 656 patients in the trial, it would recommend the combination treatment for 614 patients and nefazodone for the other 42 patients. The OLS method uses all 50 variables. If the ITR produced by OLS were used to assign treatment for the 656 patients in the trial, it would recommend the combination treatment for 429 patients, nefazodone for 145 patients and CBASP for the other 82 patients.

## 6. Discussion

Our goal is to construct a high quality ITR that will benefit future patients. We considered an *l*_{1}-PLS based method and provided a finite sample upper bound on *V* (*d*_{0}) − *V* (* _{n}*), the reduction in Value of the estimated ITR.

The use of an *l*_{1} penalty allows us to consider a large model for the conditional mean function *Q*_{0} yet permits a sparse estimated ITR. In fact, many other penalization methods such as SCAD [9] and *l*_{1} penalty with adaptive weights (adaptive Lasso; [37]) also have this property. We choose the non-adaptive *l*_{1} penalty to represent these methods. Interested readers may justify other PLS methods using similar proof techniques.

The high probability finite sample upper bounds (i.e. (4.7) and (4.9)) cannot be used to construct a prediction/confidence interval for *V* (*d*_{0}) − *V* (* _{n}*) due to the unknown quantities in the bound. How to develop a tight computable upper bound to assess the quality of * _{n}* is an open question.

We used cross validation with Value maximization to select the tuning parameter involved in the *l*_{1}-PLS method. As compared to the OLS method and the PP method, this method may yield higher Value when *T*_{0} is misspecified. However, since only the Value is used to select the tuning parameter, this method may produce a complex ITR whose Value is only slightly higher than that of a much simpler ITR. In this case, the simpler rule may be preferred due to interpretability and the cost of collecting the variables. Investigation of a tuning parameter selection criterion that trades off the Value with the number of variables in an ITR is needed.

This paper studied a one stage decision problem. However, it is evident that some diseases require time-varying treatment. For example, individuals with a chronic disease often experience a waxing and waning course of illness. In these settings the goal is to construct a sequence of ITRs that tailor the type and dosage of treatment through time according to an individual’s changing status. There is an abundance of statistical literature in this area [29, 30, 22, 23, 26, 17, 34, 35]. Extension of the least squares based method to the multi-stage decision problem has been presented in Murphy [23]. The performance of *l*_{1} penalization in this setting is unclear and worth investigation.

## Acknowledgments

The authors thank Martin Keller and the investigators of the Nefazodone-CBASP trial for use of their data. The authors also thank John Rush, MD, for the technical support and Bristol-Myers Squibb for helping fund the trial. Finally, the authors thank Eric B. Laber and Peng Zhang for valuable comments.

## APPENDIX

#### A.1. Proof of Theorem 3.1

For any ITR *d*:  → , denote Δ*T*_{d}(*X*)  max_{a∈} *T*_{0}(*X*, *a*) − *T*_{0}(*X*, *d*(*X*)). Using arguments similar to those in Section 2, we have *V*(*d*_{0}) − *V*(*d*) = *E*(Δ*T*_{d}). If *V*(*d*_{0}) − *V*(*d*) = 0, then (3.4) and (3.5) automatically hold. Otherwise, *E*(Δ*T*_{d})^{2} ≥ (*E*Δ*T*_{d})^{2} > 0. In this case, for any *ε* > 0, define the event

Then Δ*T*_{d} ≤ (Δ*T*_{d})^{2}/*ε* on the event ${\mathrm{\Omega}}_{\epsilon}^{C}$. This, together with the fact that Δ*T*_{d} ≤ (Δ*T*_{d})^{2}/*ε* + *ε*/4, implies

where the last inequality follows from the margin condition (3.3). Choosing *ε* = (4*E*(Δ*T*_{d})^{2}/*C*)^{1/(2+α)} to minimize the above upper bound yields

Next, for any *d* and *Q* such that *d*(*X*) ∈ arg max_{a∈} *Q*(*X*, *a*), and decomposing *Q*(*X*, *A*) into *W*(*X*) + *T*(*X*, *A*),

where the last inequality follows from the fact that neither |max_{a∈} *T*_{0}(*X*, *a*) − max_{a∈} *T*(*X*, *a*)| nor |*T*(*X*, *d*(*X*)) − *T*_{0}(*X*, *d*(*X*))| is larger than max_{a∈} |*T*(*X*, *a*) − *T*_{0}(*X, a*)|. Since *p*(*a*|*x*) ≥ *S*^{−1} for all (*x*, *a*) pairs, we have

Inequality (3.5) follows by substituting (A.2) into (A.1) and setting *W*(*X*) = *E*[*Q*(*X*, *A*)|*X*]. Inequality (3.4) follows by setting *W*(*X*) = 0 and noticing that Δ*T*_{d}(*X*) = max_{a∈} *Q*_{0}(*X*, *a*) − *Q*_{0}(*X*, *d*(*X*)).

#### A.2. Generalization of Theorem 4.3

In this section, we present a generalization of Theorem 4.3 where *J* may depend on *n* and the sparsity of any ***θ*** ∈ ^{J} is measured by the number of “large” components in ***θ***, as described in Zhang and Huang [36]. In this case, *J*, Φ and the prediction error minimizer ***θ***^{*} are denoted as *J*_{n}, Φ_{n} and ${\mathit{\theta}}_{n}^{\ast}$, respectively. All relevant quantities and assumptions are re-stated below.

_{n}Let |*M*| denote the cardinality of any index set *M* {1,…, *J _{n}*}. For any

*θ*^{Jn}and constant

*ρ*≥ 0, define

Then *M _{ρλn}* (

**) is the smallest index set that contains only “large” components in**

*θ***. |**

*θ**M*(

_{ρλn}**)| measures the sparsity of**

*θ***. It is easy to see that when**

*θ**ρ*= 0,

*M*

_{0}(

**) is the index set of nonzero components in**

*θ***and |**

*θ**M*

_{0}(

**)| = ||**

*θ***||**

*θ*_{0}. Moreover,

*M*(

_{ρλn}**) is an empty set if and only if**

*θ***=**

*θ***0**.

Let [ ${\mathit{\theta}}_{n}^{\ast}$] be the set of most sparse prediction error minimizers in the linear model, i.e.

Note that [${\mathit{\theta}}_{n}^{\ast}$] depends on *ρλ*_{n}.

To derive the finite sample upper bound for *L*(Φ_{n}* _{n}*), we need the following assumptions.

##### Assumption A.1

The error terms ε_{i}, i = 1,…, n are independent of (X_{i}, A_{i}), i = 1,…, n and are i.i.d. with E(ε_{i}) = 0 and
$E[{\mid {\epsilon}_{i}\mid}^{l}]\le {\scriptstyle \frac{l!}{2}}{c}^{l-2}{\sigma}^{2}$ for some c, σ^{2} > 0 for all l ≥ 2.

##### Assumption A.2

For all n ≥ 1,

- there exists an 1 ≤ U
_{n}< ∞ such that max_{j=1,…,Jn}||_{j}||_{∞}/σ_{j}≤ U_{n}, where ${\sigma}_{j}\triangleq {(E{\phi}_{j}^{2})}^{1/2}$. - there exists an 0 < η
_{1,n}< ∞, such that ${sup}_{\mathit{\theta}\in [{\mathit{\theta}}_{n}^{\ast}]}{\left|\right|{Q}_{0}-{\mathrm{\Phi}}_{n}\mathit{\theta}\left|\right|}_{\infty}\phantom{\rule{0.16667em}{0ex}}\le {\eta}_{1,n}$.

For any 0 ≤ *γ* < 1/2, *η*_{2,n} ≥ 0 (which may depend on *n*) and tuning parameter *λ*_{n}, define

_{n}##### Assumption A.3

For any n ≥ 1, there exists a β_{n} > 0 such that

for all $\mathit{\theta}\in {\mathrm{\Theta}}_{n}^{o}\backslash \{\mathbf{0}\}$ and $\stackrel{\sim}{\mathit{\theta}}\in {\mathbb{R}}^{{J}_{n}}$ satisfying
${\sum}_{j\in \{1,\dots ,{J}_{n}\}\backslash {M}_{\rho {\lambda}_{n}}(\mathit{\theta})}{\sigma}_{j}\mid {\stackrel{\sim}{\theta}}_{j}\mid \phantom{\rule{0.16667em}{0ex}}\le {\scriptstyle \frac{2\gamma +5}{1-2\gamma}}({\sum}_{j\in {M}_{\rho {\lambda}_{n}}(\mathit{\theta})}\mid {\stackrel{\sim}{\theta}}_{j}-{\theta}_{j}\mid +\phantom{\rule{0.16667em}{0ex}}\rho \mid {M}_{\rho {\lambda}_{n}}(\mathit{\theta})\mid {\lambda}_{n})$.

When $E({\mathrm{\Phi}}_{n}^{(2)}{(X,A)}^{T}\mid X)=\mathbf{0}$ a.s. (${\mathrm{\Phi}}_{n}^{(2)}$ is defined in Section 4.1), we need an extra assumption to derive the finite sample upper bound for the mean square error of the treatment effect estimator, $E[{{\mathrm{\Phi}}_{n}^{(2)}{\widehat{\mathit{\theta}}}_{n}^{(2)}-{T}_{0}(X,A)]}^{2}$ (recall that *T*_{0}(*X*, *A*)  *Q*_{0}(*X*, *A*) − *E*[*Q*_{0}(*X*, *A*)|*X*]).

##### Assumption A.4

For any n ≥ 1, there exists a β_{n} > 0 such that

for all $\mathit{\theta}\in {\mathrm{\Theta}}_{n}^{o}\backslash \{\mathbf{0}\}$ and $\stackrel{\sim}{\mathit{\theta}}\in {\mathbb{R}}^{{J}_{n}}$ satisfying
${\sum}_{j\in \{1,\dots ,{J}_{n}\}\backslash {M}_{\rho {\lambda}_{n}}(\mathit{\theta})}{\sigma}_{j}\mid {\stackrel{\sim}{\theta}}_{j}\mid \phantom{\rule{0.16667em}{0ex}}\le {\scriptstyle \frac{2\gamma +5}{1-2\gamma}}({\sum}_{j\in {M}_{\rho {\lambda}_{n}}(\mathit{\theta})}\mid {\stackrel{\sim}{\theta}}_{j}-{\theta}_{j}\mid +\phantom{\rule{0.16667em}{0ex}}\rho \mid {M}_{\rho {\lambda}_{n}}(\mathit{\theta})\mid {\lambda}_{n})$, where

is the smallest index set that contains only large components in ***θ***^{(2)}.

Note that here, for simplicity, we assume that Assumptions A.3 and A.4 hold with the same value of *β*_{n}. Without loss of generality, we can always choose a small enough *β*_{n} so that *ρβ*_{n} ≤ 1 for a given *ρ*.

For any *t* > 0, define

Note that we allow *U*_{n}, *η*_{1,n}, *η*_{2,n} and ${\beta}_{n}^{-1}$ to increase as *n* increases. However, if these quantities are small, the upper bound in (A.7) will be tighter.

##### Theorem A.1

Suppose Assumptions A.1 and A.2 hold. For any given 0 ≤ γ < 1/2, η_{2,n} > 0, ρ ≥ 0 and t > 0, let _{n} be the l_{1}-PLS estimator defined in (4.1) with tuning parameter

Suppose Assumption A.3 holds with ρβ_{n} ≤ 1. Let Θ_{n} be the set defined in (A.4) and assume Θ_{n} is non-empty. If

then with probability at least $1-exp(-{k}_{n}^{\prime}n)-exp(-t)$, we have

where
${k}_{n}^{\prime}=13{(1-2\gamma )}^{2}/[6(27{U}_{n}^{2}-10\gamma -22)]$ and K_{n} = [40γ(12β_{n}ρ + 2γ + 5)]/[(1 − 2γ)(2γ + 19)] + 130(12β_{n}ρ + 2γ + 5)^{2}/[9(2γ + 19)^{2}].

Furthermore, suppose
$E({\mathrm{\Phi}}_{n}^{(2)}{(X,A)}^{T}\mid X)=\mathbf{0}$ a.s. If Assumption A.4 holds with ρβ_{n} ≤ 1, then with probability at least
$1-exp(-{k}_{n}^{\prime}n)-exp(-t)$, we have

where ${K}_{n}^{\prime}=20(12{\beta}_{n}\rho +2\gamma +5)\{\gamma /[(1-2\gamma )(7-6{\beta}_{n}\rho )]+[3(1-2\gamma ){\beta}_{n}\rho +10(2\gamma +5)]/[9{(2\gamma +19)}^{2}]\}$.

##### Remark

- Note that *K*_{n} is upper bounded by a constant under the assumption *β*_{n}*ρ* ≤ 1. In the asymptotic setting when *n* → ∞ and *J*_{n} → ∞, (A.7) implies that with probability tending to 1, $L({\mathrm{\Phi}}_{n}{\widehat{\mathit{\theta}}}_{n})-L({\mathrm{\Phi}}_{n}{\mathit{\theta}}_{n}^{\ast})\to 0$ if (i) $\mid {M}_{\rho {\lambda}_{n}}({\mathit{\theta}}_{n}^{\ast})\mid {\lambda}_{n}^{2}/{\beta}_{n}=o(1)$, (ii) ${U}_{n}^{2}log{J}_{n}/n\le {k}_{1}$ and $\mid {M}_{\rho {\lambda}_{n}}({\mathit{\theta}}_{n}^{\ast})\mid \phantom{\rule{0.16667em}{0ex}}\le {k}_{2}{\beta}_{n}\sqrt{n/({U}_{n}^{2}log{J}_{n})}$ for some sufficiently small positive constants *k*_{1} and *k*_{2}, and (iii) ${\lambda}_{n}\ge {k}_{3}max\{1,{\eta}_{1,n}+{\eta}_{2,n}\}\sqrt{log{J}_{n}/n}$ for a sufficiently large constant *k*_{3}, where ${\mathit{\theta}}_{n}^{\ast}\in [{\mathit{\theta}}_{n}^{\ast}]$ (take *t* = log *J*_{n}).
*U*and_{n}*η*_{1}may increase as_{,n}*n*increases. This relaxation allows the use of basis functions for which the sup norm max||_{j}||_{j}is increasing in_{∞}*n*(e.g. the wavelet basis used in example 4 of the simulation studies).Assumption A.3 is a generalization of condition (4.8) (which has been discussed in remark 4 following Theorem Theorem 4.1)) to the case where*J*may increase in_{n}*n*and the sparsity of a parameter is measured by the number of “large” components as described at the beginning of this section. This condition is used to avoid the collinearity problem. It is easy to see that when*ρ*= 0 and*β*is fixed in_{n}*n*, this assumption simplifies to condition (4.8).Assumption A.4 puts a strengthened constraint on the linear model of the treatment effect part, as compared to Assumption A.3. This assumption, together with Assumption A.3, is needed in deriving the upper bound for the mean square error of the treatment effect estimator. It is easy to verify that if $E[{\mathrm{\Phi}}_{n}^{T}{\mathrm{\Phi}}_{n}]$ is positive definite, then both A.3 and A.4 hold. Although the result is about the treatment effect part, which is asymptotically independent of the main effect of*X*(when $E[{\mathrm{\Phi}}_{n}^{(2)}(X,A)\mid X]=\mathbf{0}$ a.s.), we still need Assumption A.3 to show that the cross product term ${E}_{n}[({\mathrm{\Phi}}_{n}^{(1)}{\widehat{\mathit{\theta}}}_{n}^{(1)}-{\mathrm{\Phi}}_{n}^{(1)}{\mathit{\theta}}^{(1)})({\mathrm{\Phi}}_{n}^{(2)}{\widehat{\mathit{\theta}}}_{n}^{(2)}-{\mathrm{\Phi}}_{n}^{(2)}{\mathit{\theta}}^{(2)})]$ is upper bounded by a quantity converging to 0 at the desired rate. We may use a really poor model for the main effect part*E*(*Q*_{0}(*X*,*A*)|*X*) (e.g. ${\mathrm{\Phi}}_{n}^{(1)}\equiv 1$), and Assumption A.4 implies Assumption A.3 when*ρ*= 0. This poor model only effects the constants involved in the result. 
When the sample size is large (so that*λ*is small), the estimated ITR will be of high quality as long as_{n}*T*_{0}is well approximated.
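To make the estimator concrete, the l_{1}-PLS criterion minimized in (4.1) can be computed by coordinate descent with soft-thresholding. The sketch below is not the authors' implementation: the function name, stopping rule, and use of a plain (unweighted) l_{1} penalty are our illustrative assumptions; any per-coefficient penalty weights in (4.1) would enter the soft-threshold in the same way.

```python
import numpy as np

def l1_pls(Phi, R, lam, n_sweeps=200):
    """Coordinate-descent minimizer of the l1-penalized least squares
    criterion  E_n[(R - Phi @ theta)^2] + lam * ||theta||_1.
    Phi: (n, J) design matrix of basis functions; R: (n,) responses."""
    n, J = Phi.shape
    theta = np.zeros(J)
    col_sq = (Phi ** 2).mean(axis=0)   # E_n[phi_j^2], assumed positive
    resid = R.copy()                   # current residual R - Phi @ theta
    for _ in range(n_sweeps):
        for j in range(J):
            # E_n[phi_j * (partial residual excluding coordinate j)]
            rho = (Phi[:, j] * resid).mean() + col_sq[j] * theta[j]
            # soft-thresholding solves the one-dimensional penalized problem
            new = np.sign(rho) * max(abs(rho) - lam / 2, 0.0) / col_sq[j]
            resid += Phi[:, j] * (theta[j] - new)
            theta[j] = new
    return theta
```

At a minimizer the first-order conditions give max_{j} |*E*_{n}[(*R* − Φθ̂)Φ_{j}]| ≤ λ/2, which is exactly the property exploited in Lemma A.1 below; with an approximation space such as (1, *X*, *A*, *XA*), the estimated ITR then recommends the treatment *a* maximizing the fitted treatment-effect part.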

##### Proof

For any **θ** ∈ Θ_{n}, define the events

Then there exists a ${\mathit{\theta}}^{o}\in [{\mathit{\theta}}_{n}^{\ast}]$ such that

where the first equality follows from the fact that *E*[(*R* − Φ_{n}**θ**^{o})Φ_{n,j}] = 0 for *j* = 1,…, *J*_{n} and any ${\mathit{\theta}}^{o}\in [{\mathit{\theta}}_{n}^{\ast}]$, and the last inequality follows from the definition of ${\mathrm{\Theta}}_{n}^{o}$.

Based on Lemma A.1 below, we have that on the event Ω_{1} ∩ Ω_{2}(**θ**) ∩ Ω_{3}(**θ**),

Similarly, when $E[{\mathrm{\Phi}}_{n}^{(2)}{(X,A)}^{T}\mid X]=\mathbf{0}$, by Lemma A.2, we have that on the event Ω_{1} ∩ Ω_{2}(**θ**) ∩ Ω_{3}(**θ**),

The conclusion of the theorem follows from the union probability bounds for the events Ω_{1}, Ω_{2}(**θ**) and Ω_{3}(**θ**) provided in Lemmas A.3, A.4 and A.5.

Below we state the lemmas used in the proof of Theorem A.1. The proofs of the lemmas are given in Section S.3 of the supplementary material.

##### Lemma A.1

Suppose Assumption A.3 holds with ρβ_{n} ≤ 1. Then for any **θ** ∈ Θ_{n}, on the event Ω_{1} ∩ Ω_{2}(**θ**) ∩ Ω_{3}(**θ**), we have

and

##### Remark

This lemma implies that ${\widehat{\mathit{\theta}}}_{n}$ is close to each **θ** ∈ Θ_{n} on the event Ω_{1} ∩ Ω_{2}(**θ**) ∩ Ω_{3}(**θ**). The intuition is as follows. Since ${\widehat{\mathit{\theta}}}_{n}$ minimizes (4.1), the first order conditions imply that max_{j} |*E*_{n}(*R* − Φ_{n}${\widehat{\mathit{\theta}}}_{n}$)Φ_{n,j}| ≤ *λ*_{n}/2. A similar property holds for **θ** on the event Ω_{1} ∩ Ω_{3}(**θ**). Assumption A.3 together with event Ω_{2}(**θ**) ensures that there is no collinearity in the *n* × *J*_{n} design matrix ${({\mathrm{\Phi}}_{n}({X}_{i},{A}_{i}))}_{i=1}^{n}$. These two aspects guarantee the closeness of ${\widehat{\mathit{\theta}}}_{n}$ to **θ**.

##### Lemma A.2

Suppose
$E[{\mathrm{\Phi}}_{n}^{(2)}{(X,A)}^{T}\mid X]=\mathbf{0}$ a.s. and Assumption A.4 holds with ρβ_{n} ≤ 1. Then for any **θ** ∈ Θ_{n}, on the event Ω_{1} ∩ Ω_{2}(**θ**) ∩ Ω_{3}(**θ**), we have

and

##### Lemma A.3

Suppose Assumption A.2(a) and inequality (A.6) hold. Then $\mathbf{P}({\mathrm{\Omega}}_{1}^{C})\le exp(-{k}_{n}^{\prime}n)$, where ${k}_{n}^{\prime}=13{(1-2\gamma )}^{2}/[6(27{U}_{n}^{2}-10\gamma -22)]$.

##### Lemma A.4

Suppose Assumption A.2(a) holds. Then for any t > 0 and **θ** ∈ Θ_{n}, **P**({Ω_{2}(**θ**)}^{C}) ≤ 2 exp(−t)/3.

##### Lemma A.5

Suppose Assumptions A.1 and A.2 hold. For any t > 0, if λ_{n} satisfies condition (A.5), then for any **θ** ∈ Θ_{n}, we have **P**({Ω_{3}(**θ**)}^{C}) ≤ 2 exp(−t)/3.

#### A.3. Design of simulations in Section 5.1

In this section, we present the detailed simulation design of the examples used in Section 5.1. These examples satisfy all assumptions listed in the theorems (this is easy to verify for examples 1–3; validity of the assumptions for example 4 is addressed in the remark after example 4). In addition, Θ_{n} defined in (A.4) is non-empty as long as *n* is sufficiently large. (Note that the constants involved in Θ_{n} can be improved and are not especially meaningful; we focused on a presentable result instead of finding the best constants.)

In examples 1–3, *X* = (*X*_{1},…, *X*_{5}) is uniformly distributed on [−1, 1]^{5}. The treatment *A* is generated independently of *X*, uniformly from {−1, 1}. Given *X* and *A*, the response *R* is generated from a normal distribution with mean *Q*_{0}(*X*, *A*) = 1 + 2*X*_{1} + *X*_{2} + 0.5*X*_{3} + *T*_{0}(*X*, *A*) and variance 1. We consider the following three examples for *T*_{0}.

1. *T*_{0}(*X*, *A*) = 0 (i.e. there is no treatment effect).
2. *T*_{0}(*X*, *A*) = 0.424(1 − *X*_{1} − *X*_{2})*A*.
3. *T*_{0}(*X*, *A*) = 0.446 *sign*(*X*_{1})(1 − *X*_{1})^{2}*A*.

Note that in each example *T*_{0}(*X*, *A*) is equal to the treatment effect term, *Q*_{0}(*X*, *A*) − *E*[*Q*_{0}(*X*, *A*)|*X*]. We approximate *Q*_{0} by the linear model class {(1, *X*, *A*, *XA*)**θ** : **θ** ∈ ℝ^{12}}. Thus in examples 1 and 2 the treatment effect term *T*_{0} is correctly modeled, while in example 3 it is misspecified.
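The generative model for examples 1–3 is simple to reproduce. A minimal sketch (the function name and its interface are ours, not from the paper):

```python
import numpy as np

def generate_trial(n, example, rng):
    """Simulate one data set from the generative model of examples 1-3:
    X ~ Uniform[-1,1]^5, A ~ Uniform{-1,1} independent of X,
    R | X, A ~ N(Q0(X, A), 1) with Q0 = 1 + 2X1 + X2 + 0.5X3 + T0(X, A)."""
    X = rng.uniform(-1.0, 1.0, size=(n, 5))
    A = rng.choice([-1, 1], size=n)
    if example == 1:
        T0 = np.zeros(n)                                      # no treatment effect
    elif example == 2:
        T0 = 0.424 * (1 - X[:, 0] - X[:, 1]) * A              # linear effect
    else:
        T0 = 0.446 * np.sign(X[:, 0]) * (1 - X[:, 0])**2 * A  # nonlinear effect
    Q0 = 1 + 2*X[:, 0] + X[:, 1] + 0.5*X[:, 2] + T0
    R = Q0 + rng.standard_normal(n)
    return X, A, R
```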

The parameters in examples 2 and 3 are chosen to reflect a medium effect size according to Cohen’s d index. When there are two treatments, the Cohen’s d effect size index is defined as the standardized difference in mean responses between two treatment groups, i.e.

Cohen [7] tentatively defined the effect size as “small” if the Cohen’s d index is 0.2, “medium” if the index is 0.5 and “large” if the index is 0.8.
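As a sanity check (ours, not the paper's), a quick Monte Carlo run under the example 2 model confirms that the constant 0.424 yields a Cohen's d close to 0.5 when d is computed with the usual pooled standard deviation:

```python
import numpy as np

# Simulate example 2: R = 1 + 2X1 + X2 + 0.5X3 + 0.424(1 - X1 - X2)A + N(0,1).
rng = np.random.default_rng(1)
n = 200_000
X = rng.uniform(-1.0, 1.0, size=(n, 5))
A = rng.choice([-1, 1], size=n)
R = (1 + 2*X[:, 0] + X[:, 1] + 0.5*X[:, 2]
     + 0.424*(1 - X[:, 0] - X[:, 1])*A + rng.standard_normal(n))

# Cohen's d: standardized difference in mean responses between the two groups.
r1, r0 = R[A == 1], R[A == -1]
pooled_sd = np.sqrt((r1.var(ddof=1) + r0.var(ddof=1)) / 2)
d = (r1.mean() - r0.mean()) / pooled_sd  # close to 0.5 (medium effect size)
```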

In example 4, *X* is uniformly distributed on [0, 1]. Treatment *A* is generated independently of *X* uniformly from {−1, 1}. The response *R* is generated from a normal distribution with mean *Q*_{0}(*X*, *A*) and variance 1, where
${Q}_{0}(X,1)={\sum}_{j=1}^{8}{\vartheta}_{(1),j}{1}_{X<{u}_{(1),j}},{Q}_{0}(X,-1)={\sum}_{j=1}^{8}{\vartheta}_{(-1),j}{1}_{X<{u}_{(-1),j}}$, and the *ϑ*’s and *u*’s are parameters specified in (A.12). The effect size is small.

We approximate *Q*_{0} by Haar wavelets

where *h*_{0}(*x*) = 1_{x∈[0,1]} and *h*_{lk}(*x*) = 2^{l/2}(1_{2^{l}x∈[k+1/2,k+1)} − 1_{2^{l}x∈[k,k+1/2)}) for *l* = 0,…, *l̄*_{n}. We choose *l̄*_{n} = 3 log_{2} *n*/4 − 2. For a given *l* and sample ${({X}_{i},{A}_{i},{R}_{i})}_{i=1}^{n}$, *k* takes integer values from 2^{l} min_{i} *X*_{i} to 2^{l} max_{i} *X*_{i} − 1. Then *J*_{n} = 2^{3 log_{2} n/4} ≤ *n*^{3/4}.

##### Remark

In example 4, we allow the number of basis functions *J*_{n} to increase with *n*. The corresponding theoretical result can be obtained by combining Theorem 3.1 and Theorem A.1. Below we verify the assumptions used in the theorems.

Theorem 3.1 requires that the randomization probability satisfy *p*(*a*|*x*) ≥ *S*^{−1} for a positive constant *S* for all (*x*, *a*) pairs, and that the margin condition (3.3) or (3.6) hold. According to the generative model, we have *p*(*a*|*x*) = 1/2, and condition (3.6) holds.

Theorem A.1 requires that Assumptions A.1 – A.4 hold and that Θ_{n} defined in (A.4) be non-empty. Since we consider normal error terms, Assumption A.1 holds. Note that the basis functions used in the Haar wavelet are orthogonal. It is also easy to verify that Assumptions A.3 and A.4 hold with *β*_{n} = 1, and that Assumption A.2 holds with *U*_{n} = *n*^{3/8}/2 and ${\eta}_{1,n}\le \mathit{constant}+\mathit{constant}\times {\left|\right|{\mathit{\theta}}_{n}^{\ast}\left|\right|}_{0}$ (since each $\mid {\phi}_{j}{\mathit{\theta}}_{n,j}^{\ast}\mid \phantom{\rule{0.16667em}{0ex}}=\phantom{\rule{0.16667em}{0ex}}\mid {\phi}_{j}E({\phi}_{j}R)\mid \phantom{\rule{0.16667em}{0ex}}\le \mathit{constant}\phantom{\rule{0.16667em}{0ex}}\times \mid {\phi}_{j}\mid E\mid {\phi}_{j}\mid \phantom{\rule{0.16667em}{0ex}}\le O(1)$). Since *Q*_{0} is piecewise constant, we can also verify that ${\left|\right|{\mathit{\theta}}_{n}^{\ast}\left|\right|}_{0}\phantom{\rule{0.16667em}{0ex}}\le O(logn)$. Thus for sufficiently large *n*, Θ_{n} is non-empty and (A.6) holds. The RHS of (A.5) converges to zero as *n* → ∞.

## References
