J Mach Learn Res. Author manuscript; available in PMC Sep 29, 2010.
Published in final edited form as:
J Mach Learn Res. May 1, 2010; 11: 1771–1798.
PMCID: PMC2947451
NIHMSID: NIHMS232200

Bayesian Learning in Sparse Graphical Factor Models via Variational Mean-Field Annealing

Abstract

We describe a class of sparse latent factor models, called graphical factor models (GFMs), and relevant sparse learning algorithms for posterior mode estimation. Linear, Gaussian GFMs have sparse, orthogonal factor loadings matrices that, in addition to sparsity of the implied covariance matrices, also induce conditional independence structures via zeros in the implied precision matrices. We describe the models and their use for robust estimation of sparse latent factor structure and data/signal reconstruction. We develop computational algorithms for model exploration and posterior mode search, addressing the hard combinatorial optimization involved in the search over a huge space of potential sparse configurations. A mean-field variational technique coupled with annealing is developed to successively generate “artificial” posterior distributions that, at the limiting temperature in the annealing schedule, define required posterior modes in the GFM parameter space. Several detailed empirical studies and comparisons to related approaches are discussed, including analyses of handwritten digit image and cancer gene expression data.

Keywords: Annealing, Graphical factor models, Variational mean-field method, MAP estimation, Sparse factor analysis, Gene expression profiling

1. Introduction

Bayesian sparse modelling in multivariate analysis is of increasing interest in applications as diverse as the life sciences, economics and information science, and is driving a need for effective computational methods for learning model structure, i.e. sparse configurations. Parallel developments of sparse latent factor models (e.g. West, 2003; Griffiths and Ghahramani, 2006; Lucas et al., 2006; Wang et al., 2007; Archambeau and Bach, 2009; Carvalho et al., 2008; Guan and Dy, 2009; Rai and Daumé, 2009) and inherently sparsely structured graphical models (e.g. Jordan, 1999, 2004; Dobra et al., 2004; Jones et al., 2005; Carvalho and West, 2007) have explored Bayesian computations using a range of stochastic and deterministic search methods. With a view to scaling to higher dimensions and identification of regions of interest in model structure space, efficient and effective computation remains a challenge. We describe a previously undeveloped class of sparse graphical factor models (GFMs)—a subclass of linear, Gaussian latent factor models with sparse factor loadings that also induce sparse conditional independencies. In this context, we develop a computational technique for posterior mode evaluation using a hybrid of the variational mean-field method (Attias, 1999; Wainwright and Jordan, 2008) and annealing-based optimization.

As a previously unexplored class of sparse (linear, Gaussian) factor models, the intrinsic graphical structure of the GFM arises from use of an orthogonal factor loadings matrix and appropriate scaling of its columns, together with the usual diagonal covariance matrix for latent factors (with no loss of generality). We show that this generally induces zero elements in the precision matrix of the GFM, as well as the covariance matrix. In particular, the zero entries in the covariance matrix have corresponding zeros in the precision matrix. We also show that covariance matrices of fitted values (i.e., “data reconstructions”) from such a model have the same sparse structure, and demonstrate aspects of robustness of the model in inferring variable-latent factor relationships in the presence of outliers. These properties are not shared in general by sparse factor models that lack the graphical structure on variables, nor of course by non-sparse approaches. These intrinsic properties of the GFM, along with relationships with earlier studies on sparse factor analyses, are discussed in Section 2.

Our variational mean-field annealing algorithm (VMA2) addresses the combinatorial optimization involved in aiming to compute approximate posterior modes for GFM parameters in the context of the huge space of zero/non-zero potential patterns in factor loadings. Using a prescribed schedule of decreasing temperatures, VMA2 successively generates tempered “artificial” posteriors that, at the limiting zero temperature, yield posterior modes for both GFM parameters and the 0/1 loadings indicators. Defined via an artificial, dynamic regularization on the posterior entropy of configured sparse structures, VMA2 is developed in Section 3.

Section 4 provides additional algorithmic details, including prior modelling for evaluating degrees of sparseness, and a stochastic variant of VMA2 for higher-dimensional problems is described in Section 5. Performance and comparisons on artificial data appear in Section 6. Section 7 summarizes extensive, detailed empirical comparisons with related approaches in analyses of hand-written digit images and cancer gene expression data. Section 8 concludes with brief additional comments. A range of detailed supplementary materials, including extended discussion of the gene expression studies and R code, is accessible from http://daweb.ism.ac.jp/~yoshidar/anneals/.

2. Sparse Graphical Factor Models

2.1 GFM Form

Observed sample vectors xi ∈ R^p in p-dimensional feature space are each linearly related to independent, unobserved Gaussian latent factor vectors λi ∈ R^k with additive Gaussian noise. We are interested in sparse variable-factor relationships so that the bipartite mapping λ → x is sparse, with the underlying p×k matrix of coefficients—the factor loadings matrix—having a number of zero elements; the p×k binary matrix Z defines this configured sparsity pattern. We use a sparse, orthogonal loading matrix and diagonal covariance matrices for both latent factors and residuals; the model is mathematically identified in the usual sense in factor analysis (Anderson, 2003).

With Z as the p × k binary matrix with elements zgj such that variable g is related to factor j if and only if zgj = 1, the GFM is

x_i = \Psi^{1/2}\Phi_Z\lambda_i + \nu_i \quad \text{with} \quad \lambda_i \sim N(\lambda_i \mid 0, \Delta) \ \text{and} \ \nu_i \sim N(\nu_i \mid 0, \Psi)
(1)

where: (a) the factor loading matrix Ψ^{1/2}Φ_Z has Φ_Z ≡ Φ ∘ Z with ∘ representing the elementwise product; (b) Φ_Z is orthogonal, i.e., Φ_Z′Φ_Z = I_k; (c) the factors have diagonal covariance matrix Δ = diag(δ1,…, δk); and (d) the idiosyncratic Gaussian noise (or residual) νi is independent of λi and has covariance matrix Ψ = diag(ψ1,…, ψp). The implied covariance matrix of the sampling model, Σ, and the corresponding precision matrix, Σ^{−1}, are

\Sigma = \Psi^{1/2}\{I + \Phi_Z \Delta \Phi_Z'\}\Psi^{1/2} \quad \text{and} \quad \Sigma^{-1} = \Psi^{-1/2}\{I - \Phi_Z T \Phi_Z'\}\Psi^{-1/2}
(2)

where T = diag(τ1,…, τk) with τj = δj/(1 + δj) (j = 1 : k). In general, sparse loading matrices induce some zero elements in the covariance matrix whether or not they are orthogonal, but not in the implied precision matrix. In the GFM here, however, a sparse factor model also induces off-diagonal zeros in Σ^{−1}. Zeros in the precision matrix define a conditional independence or graphical model, hence the GFM terminology. In (2), the patterns of sparsity (locations of zero entries) in the covariance and precision matrices are the same. The set of variables associated with one specific factor forms a clique in the induced graphical model, with sets of variables that have non-zero loadings on any two factors lying in the separating subgraph between the corresponding cliques. Hence, we have a natural and appealing framework in which sparse factor models and graphical models are reconciled and consistent.
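
To make the shared-sparsity property in (2) concrete, the following short base R sketch (toy loadings, Δ and Ψ values chosen purely for illustration, not taken from the paper's supplementary code) builds a small GFM and checks that the implied covariance and precision matrices are zero in the same off-diagonal positions.

phi1 <- c(1, 1,  1, 0) / sqrt(3)          # factor 1 loads on variables 1-3
phi2 <- c(0, 1, -1, 1) / sqrt(3)          # factor 2 loads on variables 2-4
PhiZ <- cbind(phi1, phi2)                 # orthogonal: t(PhiZ) %*% PhiZ = I_2
Delta <- diag(c(1.5, 0.8))                # factor variances
Psi   <- diag(c(0.2, 0.1, 0.3, 0.25))     # idiosyncratic noise variances
Tmat  <- Delta %*% solve(diag(2) + Delta) # tau_j = delta_j / (1 + delta_j)

Sigma <- sqrt(Psi) %*% (diag(4) + PhiZ %*% Delta %*% t(PhiZ)) %*% sqrt(Psi)
Omega <- solve(sqrt(Psi)) %*% (diag(4) - PhiZ %*% Tmat %*% t(PhiZ)) %*% solve(sqrt(Psi))

all.equal(Omega, solve(Sigma))            # the precision matrix agrees with (2)
round(Sigma, 3); round(Omega, 3)          # entry (1,4) is zero in both matrices

Here variable 1 loads only on factor 1 and variable 4 only on factor 2, so entry (1,4) is zero in both Σ and Σ^{−1}, mirroring the clique structure described above.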

2.2 Some Model Attributes

In general, a non-orthogonal factor model with the sparse loading matrix W—a sparse extension of probabilistic PCA (Bishop, 1999, 2006)—has the form

x_i = W\lambda_i + \nu_i \quad \text{with} \quad \lambda_i \sim N(0, I) \ \text{and} \ \nu_i \sim N(0, \Psi).

The GFM arises when a singular value decomposition is applied to the scaled factor loading matrix Ψ^{−1/2}W = Φ_ZΔ^{1/2}R, with the k × k orthogonal matrix R being removed. This non-orthogonal model defines a Bayes optimal reconstruction of the data via the fitted values (or extracted signal)

\hat{x}(x_i) := W E[\lambda_i \mid x_i] = W W'(W W' + \Psi)^{-1} x_i.

Then, asymptotically,

\frac{1}{n}\sum_{i=1}^n \hat{x}(x_i)\hat{x}(x_i)' \ \overset{p}{\longrightarrow}\ \mathrm{Cov}[\hat{x}(x_i)] = W W'(W W' + \Psi)^{-1} W W'
(3)

and this is generally a non-sparse matrix (no zero entries) even though W is sparse. This is an inconsistency in the sense that data reconstructions should be expected to share the dominant patterns of covariance sparsity evident in the original covariance matrix Cov[xi] = WW′ + Ψ. In the GFM, however, Cov[x̂(xi)] = Ψ^{1/2}Φ_Z G Φ_Z′Ψ^{1/2} where G is diagonal with entries δj²/(1 + δj). In such cases, Cov[x̂(xi)] is sparse and shares the same zero elements as Cov[xi].

Another feature of the GFM relates to a robustness property acquired by the implied graphical structure. Consider an example of 4 variables xi = (xi1, xi2, xi3, xi4)′ and 2 factors λi = (λi1, λi2)′ with two cliques in the conditional independence graph; {xi1, xi2, xi3} ← λi1 and {xi2, xi3, xi4} ← λi2 (see Figure 1). The graph defines the decomposition of the joint density p(xi1, xi2, xi3, xi4) = p(xi1 | xi2, xi3)p(xi2, xi3 | xi4)p(xi4) or p(xi1, xi2, xi3, xi4) = p(xi4 | xi2, xi3)p(xi2, xi3 | xi1)p(xi1). This implies that the presence of one or more outliers in an isolated feature variable associated with a single factor clique, i.e. xi1 or xi4, has no effect on the other isolated variable, xi4 or xi1 respectively, once the intermediate variables xi2 and xi3 are given. Then, the parameters involved in p(xi1) or p(xi4), for instance the loading components and the noise variances corresponding to the isolated variable, can be estimated independently of the impact of outliers in xi4 or xi1. The numerical experiment in Section 7.1 highlights this robustness property in terms of data compression/restoration tasks, with comparison to other sparse factor models.

Figure 1
Graphical model structure of an example GFM.

2.3 Likelihood, Priors and Posterior

Denote by Θ the full set of parameters Θ = {Φ, Δ, Ψ}. Our computations aim to explore model structures Z and corresponding posterior modes of parameters Θ under the posterior p(Z, Θ | X) using specified priors and based on the n observations forming the columns of the p × n data matrix X.

Likelihood function

The likelihood function is

p(X \mid Z, \Theta) \propto |\Psi|^{-n/2}\, |I - T|^{n/2}\, \operatorname{etr}\!\big(-S\Psi^{-1}/2 + \Psi^{-1/2} S \Psi^{-1/2} \Phi_Z T \Phi_Z'/2\big)
(4)

where etr(A) = exp(trace(A)) for any square matrix A, and S is the sample sum-of-square matrix S = XX′ with elements sgh. In (4), the factor loadings appear only in the last term and form the important statistic

\operatorname{trace}(\Psi^{-1/2} S \Psi^{-1/2} \Phi_Z T \Phi_Z') = \sum_{j=1}^k \tau_j\, \phi_{zj}' \Psi^{-1/2} S \Psi^{-1/2} \phi_{zj}
(5)

where ϕzj is column j of ΦZ, or ϕzj = ϕj ∘ zj where ϕj is column j of Φ and zj is column j of Z.

Priors on Θ and Z

Priors over non-zero factor loadings may reflect substantive a priori knowledge if available, and will then be inherently context specific. For examples here, however, we use uniform priors p(Θ | Z) for exposition. Note that, on the critical factor loadings elements Φ, this involves a uniform on the hypersphere defined by the orthogonality constraint that is then simply conditioned (by setting implied elements of Φ to zero) as we move across candidate models Z.

Concerning the sparse structure Z, we adopt independent priors on the binary variates zgj with logit(Pr(zgj = 1 | ζgj)) = −ζgj/2, where logit(p) = log(p/(1−p)) and the parameters ζgj are assigned hyperpriors and included in the overall parameter set later. Beta priors are obvious alternatives to this; the logit leads to a minor algorithmic simplification, but otherwise the choice is arbitrary. Using beta priors can be expected to lead to modest differences, if any of practical relevance, in many cases, and users are free to explore variants. The critical point is that including Bayesian inference on these p × k sparsity-determining quantities leads to “self-organization” as their posterior distributions concentrate on larger or smaller values. Examples in Section 6 highlight this.
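
As a small worked illustration of this prior (the ζ values below are arbitrary), inverting the logit gives the implied prior inclusion probabilities directly:

prior_prob <- function(zeta) 1 / (1 + exp(zeta / 2))   # since logit(Pr(z_gj = 1)) = -zeta_gj/2
round(prior_prob(c(0, 1.5, 3, 5)), 3)                   # 0.500 0.321 0.182 0.076

Larger ζgj therefore push the prior towards exclusion (zgj = 0), encouraging sparser configurations.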

2.4 MAP Estimation for (Θ,Z) in GFMs

Conditional on the p×k matrix of sparsity control hyperparameters ζ whose elements are the ζgj, it follows that posterior modes of (Θ, Z) maximize

2\log p(Z, \Theta \mid X, \zeta) = 2\log p(\Theta \mid Z) - \sum_{g=1}^p \sum_{j=1}^k z_{gj}\zeta_{gj} - \sum_{g=1}^p \big(n\log\psi_g + s_{gg}\psi_g^{-1}\big) + \sum_{j=1}^k \big(n\log(1-\tau_j) + \tau_j\, \phi_{zj}'\Psi^{-1/2} S \Psi^{-1/2}\phi_{zj}\big).
(6)

The first two terms in (6) arise from the specified priors for Θ and Z, respectively. The quadratic form in the last term is ϕ_zj′Ψ^{−1/2}SΨ^{−1/2}ϕ_zj = ϕ_j′S(z_j, Ψ)ϕ_j for each j, where the key p × p matrices S(z_j, Ψ) have elements (S(z_j, Ψ))gh given by

(S(z_j, \Psi))_{gh} = z_{gj}\, z_{hj}\, s_{gh}\, (\psi_g \psi_h)^{-1/2}, \quad \text{for } g, h = 1\colon p.
(7)

The (relative) signal-to-noise ratios τj = δj / (1 + δj) control the roles played by the last term in (6).

Optimizing (6) over Θ and Z involves many discrete variables and the consequent combinatorial computational challenge. Greedy hill-climbing approaches will get stuck at improper local solutions, often and quickly. The VMA2 method in Section 3 addresses this.

2.5 Links to Previous Sparse Factor Modelling and Learning

In the MAP estimation defined by (6), there are evident connections with traditional sparse principal component analyses (sparse PCA; Jolliffe et al. (2003); Zou et al. (2006); d’Aspremont et al. (2007)). If Ψ = I and Δ = I, the last likelihood term in (6) is the pooled variance of projections, i.e. Σ_{j=1}^k ϕ_j′S(z_j, I)ϕ_j, constructed from the k sparse loading vectors. This is the central statistic optimized in many sparse PCAs. Differences among existing sparse PCAs arise in the way they regulate degrees of sparseness and whether or not orthogonality is imposed on the loading vectors.

The direct sparse PCA of d’Aspremont et al. (2007) imposes an upper bound d > 0 on the cardinality of zj (the number of non-zero elements), with a resulting semidefinite program of computational complexity O(p^4 log(p)). The applicability of that approach is therefore limited to problems with p rather small. Such cardinality constraints can be regarded as suggestive of structure for the prior distribution on ζ in our model.

The SCoTLASS algorithm of Jolliffe et al. (2003) uses ℓ1-regularization on loading vectors, later extended to SPCA using elastic nets by Zou et al. (2006). Recently, Mairal et al. (2009) presented an ℓ1-based dictionary learning method for sparse coding that aims to explore sparsity in the factor-sample mapping rather than in factor-variable relations. Setting Laplace-like prior distributions on scale loadings is a counterpart of ℓ1-based penalization (Jolliffe et al., 2003; Zou et al., 2006). However, our model-based perspective aims for a more probabilistic analysis, with advantages in probabilistic assessment of the appropriate dimension of the latent factor space as well as flexibility in the determination of effective degrees of sparseness via the additional parameters ζ. Beyond the preceding studies, ℓ1-regularization has been widely employed to construct sparse latent factor analyses. Archambeau and Bach (2009) developed a general class of sparse latent factor analyses involving sparse probabilistic PCA (Guan and Dy, 2009) and a sparse variant of probabilistic canonical correlation analysis. A key idea of Archambeau and Bach (2009) is to place the automatic relevance determination (ARD) prior of Mackay (1995) on each loading component, and to apply a variational mean-field learning method.

Key advances in Bayesian sparse factor analysis build on non-parametric Bayesian modelling in Griffiths and Ghahramani (2006) and Rai and Daumé (2009), and developments in Carvalho et al. (2008) stemming from the original sparse Bayesian models in West (2003). Carvalho et al. develop MCMC and stochastic search methods for posterior exploration. MCMC posterior sampling can be effective but is hugely challenged as the dimensions of data and factor variables increase. Our focus here is MAP evaluation with a view to scaling to increasingly large dimensions, and we leave open the opportunities for future work on MCMC methods in GFMs.

Most importantly, as remarked in Section 2.2, the GFM differs from some of the foregoing models in the conditional independence graphical structures induced. This characteristic contributes to preserving sparse structure in the data compression/reconstruction process and also to robustness against outliers. We leave further comparative discussion to Section 7.1, where we evaluate some of the foregoing methods relative to the sparse GFM analysis in an image processing study.

3. Variational Mean-Field Annealing for MAP Search

3.1 Basic Principle

Relative to (6), consider the class of extended objective functions

\mathcal{G}_T(\Theta, \omega) = \sum_{Z \in \mathcal{Z}} \omega(Z) \log p(X, Z, \Theta \mid \zeta) \; - \; T \sum_{Z \in \mathcal{Z}} \omega(Z) \log \omega(Z)
(8)

where ω(Z)—the sparsity configuration probability—represents any distribution over Z ∈ 𝒵 that may depend on (X, Θ, ζ), and where T ≥ 0. This modifies the original criterion (6) by taking the expectation of log p(X, Z, Θ | ζ) with respect to ω(Z)—the expected complete-data log-likelihood in the context of the EM algorithm—and by the inclusion of Shannon’s entropy of ω(Z) with the temperature multiplier T.

Now, view (8) as a criterion to maximize over (Θ, ω) jointly for any given T. The following is a key result:

Proposition 1

For any given parameters Θ and temperature T, (8) is maximized with respect to ω at

\omega_T(Z) \propto p(Z \mid X, \Theta, \zeta)^{1/T}.
(9)

Proof

See the Appendix.

For any given Θ, a large T leads to ωT (Z) being rather diffuse over sparse configurations Z so that iterative optimization—alternating between Θ and ω—will tend to move more easily and freely around the high-dimensional space Z. This suggests annealing beginning with the temperature T large and successively reducing towards zero. We note that:

  • As T → 0, ωT(Z) converges to a distribution degenerate at the conditional mode Ẑ(Θ, ζ) of p(Z | X, Θ, ζ), so that
  • joint maximization of GT(Θ, ω) would approach the global maximum of the exact posterior p(Θ, Z | X, ζ) as T → 0.

The notion of the annealing operation is to move the successively-generated solutions for Θ and ωT(Z) gradually, exploiting annealing to escape local mode traps. Note that, for any given tempered posterior (9), the expectation in the first term of (8) is virtually impossible to compute exactly due to the combinatorial explosion. In what follows, we introduce VMA2 as a mean-field technique coupled with annealing-based optimization to overcome this central computational difficulty.
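
A toy numerical illustration of Proposition 1 and the annealing idea, using a hypothetical posterior over four configurations (the probabilities below are arbitrary): tempering flattens the distribution at large T and collapses it onto the mode as T approaches zero.

temper <- function(post, Temp) { w <- post^(1 / Temp); w / sum(w) }
post <- c(0.05, 0.15, 0.55, 0.25)      # a hypothetical exact posterior over 4 configurations
round(temper(post, 5),    3)           # high temperature: nearly uniform, easy to move around
round(temper(post, 1),    3)           # T = 1: the exact posterior itself
round(temper(post, 0.05), 3)           # T -> 0: mass collapses onto the posterior mode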

3.2 VMA2 based on Factorized, Tempered Posteriors

To define and implement a specific algorithm, we constrain the otherwise arbitrary “artificial configuration probabilities” ω, and do so using a construction that induces analytic tractability. We specify the simplest, factorized form

\omega(Z) = \prod_{g=1}^p \prod_{j=1}^k \omega(z_{gj}) := \prod_{g=1}^p \prod_{j=1}^k \omega_{gj}^{z_{gj}} (1 - \omega_{gj})^{1 - z_{gj}}

in the same way as conventional Variational Bayes (VB) procedures do. In this GFM context, the resulting optimization is eased using this independence relaxation as it gives rise to tractability in computing the conditional expectation in the first term of (8).

If T = 1, and given the factorized ω, the objective function G1 exactly agrees with the free energy, which bounds the posterior marginal as

\log \sum_{Z \in \mathcal{Z}} p(X, \Theta, Z \mid \zeta) \ \geq\ \mathcal{G}_1(\Theta, \omega).

The lower-bound G1 is the criterion that the conventional VB methods aim to maximize (Wainwright and Jordan, 2008). This indicates that any solutions corresponding to the VB inference can be obtained by stopping the cooling schedule at T = 1 in our method. Similar ideas have, of course, been applied in deterministic annealing EM and annealed VB algorithms (e.g. Ueda and Nakano, 1998). These methods exploit annealing schemes to escape from local traps during coordinate-basis updates in aiming to define variational approximations of posteriors.

Even with this relaxation, maximization over ω(Z) cannot be done for all elements of Z simultaneously and so is approached sequentially—sequencing through each ωgj in turn while conditioning on the others. For any given T this yields the optimizing value given by

\omega_{gj}(T) \propto \exp\Big\{\frac{1}{T} \sum_{Z_{C\setminus\{g,j\}}} \prod_{(h,l) \neq (g,j)} \omega(z_{hl}) \log p(z_{gj} = 1, X, Z_{C\setminus\{g,j\}}, \Theta \mid \zeta)\Big\}
(10)

where C denotes the collection of all indices (g, j) for the p features and k factor variables, C \ {g, j} is the set of paired indices (h, l) such that (h, l) ≠ (g, j), and Z_{C\{g,j}} stands for the set of the z_{hl} other than z_{gj}.

Starting with ωgj ≈ 1/2 at an initial large value of T, (10) gradually concentrates to a point mass as T decays slowly to zero:

\hat{z}_{gj} := \lim_{T\to 0} \omega_{gj}(T) = \begin{cases} 1, & \text{if } \displaystyle\sum_{Z_{C\setminus\{g,j\}}} \prod_{(h,l)\neq(g,j)} \omega(z_{hl}) \log \frac{p(z_{gj}=1, X, Z_{C\setminus\{g,j\}}, \Theta \mid \zeta)}{p(z_{gj}=0, X, Z_{C\setminus\{g,j\}}, \Theta \mid \zeta)} > 0, \\ 0, & \text{otherwise.} \end{cases}

It remains true that, at the limiting zero temperature, the global maximum of GT(Θ, ω) is attained by the set of p × k point masses at the global posterior mode of p(Θ, Z | X, ζ). This is seen trivially as follows: (i) As T → 0, and with the non-factorized ω in (8), we have the limiting value

\sup_Z \log p(X, \Theta, Z \mid \zeta) = \sup_\omega \mathcal{G}_0(\Theta, \omega)
(11)

with the point mass ω(Z) = δ_Ẑ(Z) at the location Ẑ of the global maximum, with elements (Ẑ)gj = ẑgj. Further, (ii) any point mass δ_Ẑ(Z) is representable by a fully factorized product of p × k point masses, δ_Ẑ(Z) = Π_{g,j} δ_{ẑgj}(zgj).

It is stressed that the coordinate-basis updates (10) cannot, of course, guarantee convergence to the global optimum even with prescribed annealing. Nevertheless, VMA2 represents a substantial advance in its ability to move more freely and escape local mode traps. We also note the generality of the idea, beyond factor models and also potentially using penalty functions other than entropy.

4. Sparse Learning in Graphical Factor Models

4.1 MAP Algorithm

Computations alternate between conditional maximization steps for ω and Θ while reducing the temperature T. At each step, the value of the objective function (8) is refined, until convergence when the temperature reaches zero. Specifically:

  1. Set a cooling schedule τ = {T1,…, Td} of length d where Td = 0;
  2. Set ζ;
  3. Initialize Θ;
  4. Initialize ω(Z);
  5. i ← 0;
  6. while ({the loop is not converged} ∧ {i ≤ d})
  7. i ← i + 1;
  8. Compute configuration probabilities ωgj (Ti);
  9. Optimize with respect to each column ϕj (j = 1 : k) of Φ in turn under full-conditioning;
  10. Optimize with respect to Δ under full-conditioning;
  11. Optimize with respect to Ψ under full-conditioning;
  12. Optimize with respect to ζ under full-conditioning;
  13. end while

We now summarize key components in the iterative, annealed computation defined above.

4.2 Sparse Configuration Probabilities

First consider maximization with respect to each sparse configuration probability ωgj conditional on all others. We note that the first term in (8) involves the expectation over Z with respect to the probabilities ω, denoted by Eω[·]. Accordingly, for the key terms S(zj, Ψ) we have

E_\omega[S(z_j, \Psi)] = \Omega_j \circ (\Psi^{-1/2} S \Psi^{-1/2}) \quad \text{with} \quad (\Omega_j)_{gh} = \begin{cases} \omega_{gj}, & \text{if } g = h, \\ \omega_{gj}\,\omega_{hj}, & \text{otherwise.} \end{cases}
(12)

Introduce the notation Ψ^{−1/2}SΨ^{−1/2} = (s1(Ψ),…, sp(Ψ)) to represent the p columns of the scaled sample sum-of-squares matrix here, and define the p-vector

\tilde{\omega}_{gj} = (\omega_{1j}, \ldots, \omega_{g-1,j},\ 1,\ \omega_{g+1,j}, \ldots, \omega_{pj})'.

Then, the partial derivative of (8) with respect to ωgj conditional on Θ and the other configuration probabilities leads to

\operatorname{logit}(\omega_{gj}(T)) = H_{gj}(\zeta_{gj})/T \quad \text{where} \quad H_{gj}(\zeta_{gj}) := \tau_j\, \phi_{gj}\, (\phi_j \circ \tilde{\omega}_{gj})'\, s_g(\Psi) - \zeta_{gj}.
(13)

This directly yields the conditional maximizer for ωgj in terms of the tempered negative energy Hgjgj)/T . As the temperature T is reduced towards zero, the resulting estimate tends towards 0 or 1 according to the sign of Hgj(ζgj).
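
A minimal R sketch of this update for a single factor column is given below; it is an illustration (not the vma2gfm() implementation described later), and all inputs are assumed to be the current values from the surrounding coordinate steps.

update_omega_col <- function(S_scaled,   # p x p matrix Psi^{-1/2} S Psi^{-1/2}
                             phi_j,      # p-vector, column j of Phi
                             omega_j,    # current omega_gj values for column j
                             tau_j,      # signal-to-noise ratio delta_j / (1 + delta_j)
                             zeta_j,     # p-vector of sparsity hyperparameters
                             Temp) {     # current temperature T > 0
  p <- length(phi_j)
  for (g in seq_len(p)) {
    w_tilde    <- omega_j; w_tilde[g] <- 1                  # the vector omega~_gj of (13)
    H_gj       <- tau_j * phi_j[g] * sum(phi_j * w_tilde * S_scaled[, g]) - zeta_j[g]
    omega_j[g] <- plogis(H_gj / Temp)                       # logit(omega_gj) = H_gj / T
  }
  omega_j
}

Here plogis is the base-R inverse logit, so each ωgj is set directly from the tempered negative energy Hgj(ζgj)/T.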

4.3 Conditional Optimization over Φ

The terms in (8) that involve Φ are simply the expectation of the quadratic forms in the last term of (6), with the term for each column ϕj involving the key matrices S(zj, Ψ) defined in (7), for each j = 1 : k. At each step through the overall optimization algorithm in Section 4.1, we sequence through these columns of the loadings matrix in turn conditioning on the previously optimized values of all other columns. In the context of the overall iterative MAP algorithm, this yields global optimization over Φ as T → 0.

Conditional optimization then reduces to the following: for each j = 1 : k, sequence through each column ϕj in turn and at each step

\text{maximize}_{\phi_j} \ \phi_j' E_\omega[S(z_j, \Psi)]\, \phi_j \quad \text{subject to} \quad \phi_j'\phi_j = 1 \ \text{and} \ \phi_m'\phi_j = 0 \ \text{for } m \neq j,\ m = 1\colon k.
(14)

The optimization conditions on the most recently updated values of all other columns m ≠ j at each step, and is performed as one sweep at line 9 of the algorithm in Section 4.1. Column order can be chosen randomly or systematically each time while still maintaining convergence. In this step, we stress that the original orthogonality condition Φ_Z′Φ_Z = I is modified to Φ′Φ = I in (14). It remains the case that iteratively refined estimates obtained from (14) satisfy the original condition at the limiting zero temperature, given the sparsity induced in Eω[S(zj, Ψ)], as detailed in the mathematical derivations in the supplementary material.

The specific computations required for the conditional optimization in (14) are as follows (with supporting details in the Appendix). Note that the central matrices Eω[S(zj, Ψ)] required here are trivially available from equation (12).

  1. Compute the p × (k − 1) matrix Φ(−j) = {ϕm}m≠j by simply deleting column j from Φ;
  2. Compute the p × p projection matrix Nj = Ip − Φ(−j)Φ(−j)′;
  3. Compute the eigenvector φj corresponding to the most dominant eigenvalue of NjEω[S(zj, Ψ)]Nj;
  4. Compute the required optimal vector ϕj = Njφj/‖Njφj‖.

This procedure solves (14) by optimizing over an eigenvector already constrained by the orthogonality conditions. Here Nj is the projector onto the orthogonal complement of the column space of Φ(−j), so NjEω[S(zj, Ψ)]Nj defines the projection of Eω[S(zj, Ψ)] onto that orthogonal space, and the eigenvectors φj lie in that space. It remains to ensure that the computed value ϕj is of unit length, via the normalization in step 4. Selecting the eigenvector with maximum eigenvalue ensures the conditional maximization in (14).
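
The four steps can be written compactly in R as in the following sketch, assuming the other columns of Φ are currently orthonormal; this is an illustration rather than the authors' implementation.

update_phi_col <- function(ES_j,          # p x p matrix E_omega[S(z_j, Psi)] from (12)
                           Phi_minus_j) { # p x (k-1) matrix: the other columns of Phi
  p  <- nrow(ES_j)
  Nj <- diag(p) - Phi_minus_j %*% t(Phi_minus_j)   # projector onto the orthogonal complement
  M  <- Nj %*% ES_j %*% Nj
  v  <- eigen(M, symmetric = TRUE)$vectors[, 1]    # eigenvector of the dominant eigenvalue
  w  <- Nj %*% v
  as.vector(w / sqrt(sum(w^2)))                    # unit-length phi_j, orthogonal to the others
}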

4.4 Conditional Optimization over Δ

The variances δj of the latent factors appear in equations (6) and (8) in the sum over j = 1 : k of terms

-n\log(1 + \delta_j) + \delta_j (1 + \delta_j)^{-1}\, \phi_j' E_\omega[S(z_j, \Psi)]\, \phi_j.
(15)

This is unimodal in δj with maximizing value

\hat{\delta}_j = \max\{0,\ n^{-1}\phi_j' E_\omega[S(z_j, \Psi)]\, \phi_j - 1\},
(16)

and so the update at line 10 of the MAP algorithm of Section 4.1 computes these values in parallel for each factor j = 1 : k. Note that this may generate zero values, indicating removal of the corresponding factors from the model, and so inducing an intrinsic ability to prune redundant factors in a model specified initially with a larger, encompassing value of k. The configured sparse structure drives this pruning; any specific factor j that is inherently very sparse generates a smaller value of the projected “variance explained” ϕ_j′Eω[S(z_j, Ψ)]ϕ_j, and so can lead to δ̂j = 0 as a result.
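
A short illustrative sketch of this parallel update, assuming the matrices Eω[S(zj, Ψ)] have been precomputed from (12) and stored in a list:

update_delta <- function(ES_list, Phi, n) {
  sapply(seq_len(ncol(Phi)), function(j) {
    v <- drop(t(Phi[, j]) %*% ES_list[[j]] %*% Phi[, j])   # projected "variance explained"
    max(0, v / n - 1)                                      # a zero prunes factor j, as in (16)
  })
}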

4.5 Conditional Optimization over Ψ

The diagonal noise covariance matrix Ψ appears in the objective function of equation (8) in terms that can be re-expressed as

-n\log|\Psi| - \operatorname{trace}(S\Psi^{-1}) + \sum_{j=1}^k \tau_j \operatorname{trace}\!\big(\phi_j\phi_j' \Psi^{-1/2}(\Omega_j \circ S)\Psi^{-1/2}\big)

where τj = δj/(1 + δj) for each j. Differentiating this with respect to Ψ−1/2 yields the gradient equation:

n\,\operatorname{diag}^{-1}(\Psi^{1/2}) - \operatorname{diag}^{-1}(S\Psi^{-1/2}) + \sum_{j=1}^k \tau_j\, \operatorname{diag}^{-1}\!\big(\phi_j\phi_j'\Psi^{-1/2}(\Omega_j \circ S)\big) = 0,
(17)

where diag−1(A) denotes the vector of the diagonal elements in A. Iterative solution of this non-linear equation in Ψ can be performed via the reduced implicit equation

\operatorname{diag}^{-1}(\Psi) = n^{-1} \operatorname{diag}^{-1}\!\Big(\Big\{I_p - \sum_{j=1}^k \tau_j\, (\phi_j\phi_j') \circ (\Psi^{1/2}\Omega_j\Psi^{-1/2})\Big\} S\Big).
(18)
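
The implicit equation (18) suggests a simple fixed-point iteration; the following is a minimal illustrative sketch (not the supplementary vma2gfm() code), assuming Omega_list holds the Ω_j matrices of (12) and using a fixed number of inner iterations.

update_psi <- function(psi, Phi, Omega_list, tau, S, n, n_iter = 20) {
  p <- nrow(Phi); k <- ncol(Phi)
  for (it in seq_len(n_iter)) {
    scale <- outer(sqrt(psi), 1 / sqrt(psi))      # elementwise factors of Psi^{1/2} Omega_j Psi^{-1/2}
    A <- diag(p)
    for (j in seq_len(k)) {
      A <- A - tau[j] * (Phi[, j] %o% Phi[, j]) * (scale * Omega_list[[j]])
    }
    psi <- diag(A %*% S) / n                      # one pass of the implicit equation (18)
  }
  psi
}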

4.6 Degrees of Sparseness

The prior over the logistic hyperparameters ζ = {ζgj} defining the Bernoulli probabilities for the zgj is important in encouraging relevant degrees of sparseness. Extending the model via a hierarchical prior for these parameters enables adaptation to data in evaluating relevant degrees of sparseness. A simple first class of priors is used here, taking the ζgj to be conditionally independent and drawn from the positive-part Gaussian distribution N+(ζgj | μ, σ) for some specified mean and variance (μ, σ). The annealing search can now be extended to include ζ, simply embedding conditional optimization of (8) under this prior within each step of the iterative search. The conditional independence structure of the model easily yields unique solutions for each of the ζgj in parallel as values satisfying

\omega_{gj} = \frac{\exp(-\zeta_{gj}/2)}{1 + \exp(-\zeta_{gj}/2)} - \frac{\zeta_{gj} - \mu}{2\sigma}.
(19)

Solutions to (19) are trivially computed iteratively. Evidently, as ωgj approaches 0 or 1, the solution for ζgj is shifted towards the corresponding boundary.
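
For illustration, each ζgj can be obtained by one-dimensional root finding on (19) as displayed above (a reconstruction of that stationarity condition); uniroot is base R, and the clamp at zero is an assumed treatment of the positive-part prior boundary.

solve_zeta <- function(omega_gj, mu, sigma) {
  f <- function(z) plogis(-z / 2) - (z - mu) / (2 * sigma) - omega_gj
  if (f(0) <= 0) return(0)                          # boundary of the positive-part prior
  uniroot(f, lower = 0, upper = mu + 20 * sigma)$root
}
sapply(c(0.05, 0.5, 0.95), solve_zeta, mu = 3, sigma = 6)   # zeta shrinks as omega grows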

As mentioned earlier, the choice of this logit/truncated normal prior is a subjective preference and could be replaced by others, such as beta priors. Again, we expect that this would typically lead to modest differences, if any of practical relevance, in many cases.

5. A Stochastic Search Variant for Large p

In problems with larger numbers of variables, the computations quickly become challenging, especially in view of the repeated eigen-decompositions required for updating the factor loading matrix. In our examples and experiments, analysis with dimensions p ~ 500 would be feasible using our own R code (vma2gfm() available from the supplementary web site), but computation time then rapidly increases with increasing p. More efficient low-level coding will speed this up, but nevertheless it is of interest to explore additional opportunities for increasing the efficiency of the MAP search.

To reduce the computational time, we explore a stochastic variant of the original deterministic VMA2 that uses realized Z matrices drawn from the current, conditional configuration probabilities ωgj(T) at each stage of the search process. The realized binary matrix Ẑ = [ẑ1,…, ẑk] replaces the full matrix Eω[S(zj, Ψ)] with a sparse alternative S(ẑj, Ψ). In larger, very sparse problems, this greatly reduces the computing time, as each eigen-decomposition can be computed based only on the components related to nonzero ẑgj values. This leads to a stochastic annealing search with all other steps unchanged. We also have the additional benefit of the introduced randomness aiding in potentially moving away from suboptimal solutions. It should be stressed that this is an optional complement to the deterministic algorithm, and one that may be used for an initial period to enable swifter initial iterations from arbitrary starting values, prior to switching to the deterministic annealing once in the region of a posterior mode.

The modified search procedure over ϕj in equation (14) is:

  1. Draw a set of binary values ẑgj, g = 1,…, p, according to the current configuration probabilities ωgj(T);
  2. Define the set of active variables by Aj = {g | g ∈ 1 : p, ẑgj = 1}; denote by ϕj,{Aj} the sub-vector of ϕj for only the active variables, and S{Aj}(ẑj, Ψ) the submatrix of S(ẑj, Ψ) whose rows and columns correspond to only the active variables;
  3. Solve the reduced optimization conditional on Aj, via:
    \text{maximize}_{\phi_{j,\{A_j\}}} \ \phi_{j,\{A_j\}}' S_{\{A_j\}}(\hat{z}_j, \Psi)\, \phi_{j,\{A_j\}} \quad \text{subject to} \quad \|\phi_{j,\{A_j\}}\|^2 = 1 \ \text{and} \ \phi_{m,\{A_j\}}'\phi_{j,\{A_j\}} = 0 \ \text{for } m \neq j;
  4. Update the full p-vector ϕj with elements ϕj,{Aj} for the active variables and all other elements zero.

For example, in a problem with p = 5000 but sparseness of the order of 5%, the Aj will involve a few hundred active variables, and eigenvalue decomposition will then be performed on matrices of that order rather than 5000 × 5000. We note also that this strategy requires a modification to the update operation for the configuration probabilities: the ωgj will be updated at any one step only for the current indices g ∈ Aj, keeping the remaining zgj at values previously obtained.
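
A minimal illustrative sketch of one such stochastic column update (assumed inputs; it presumes the active set is typically larger than the number of retained factors, as in the sparse, large-p settings discussed here):

stochastic_phi_update <- function(S_scaled,   # p x p matrix Psi^{-1/2} S Psi^{-1/2}
                                  Phi,        # current p x k loading matrix
                                  j,          # index of the column being updated
                                  omega_j) {  # current configuration probabilities for column j
  p <- nrow(Phi)
  z <- rbinom(p, 1, omega_j)                  # step 1: realize z_j from the omega_gj(T)
  A <- which(z == 1)                          # step 2: active variable set A_j
  if (length(A) == 0) return(Phi[, j])        # nothing drawn: keep the current column
  PhiA <- Phi[A, -j, drop = FALSE]            # other columns restricted to the active set
  Nj   <- diag(length(A))
  if (ncol(PhiA) > 0) {
    Q  <- qr.Q(qr(PhiA))                      # orthonormal basis of their span on A_j
    Nj <- Nj - Q %*% t(Q)                     # projector enforcing the reduced orthogonality
  }
  M    <- Nj %*% S_scaled[A, A, drop = FALSE] %*% Nj
  v    <- Nj %*% eigen((M + t(M)) / 2, symmetric = TRUE)$vectors[, 1]
  phi  <- rep(0, p)                           # step 4: zeros outside the active set
  phi[A] <- v / sqrt(sum(v^2))
  phi
}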

6. Experimental Results on Synthetic Data

6.1 Visual Tracking of Annealing Process with a Toy Problem

The first experiment shows how the VMA2 method can solve the combinatorial optimization. Consider 3 variables and 1 factor, so that xi = (ϕ1 ∘ z1)λ1i + νi where all parameters except ϕ1 are fixed as Ψ = I and Δ = I. The likelihood function in (4) is then p(X | Z, Θ) ∝ exp(ϕ_z1′Sϕ_z1/2). Assume that the true edge z31 = 1, indicating xi3 ← λi1, is known, but z11 = 1 and z21 = 0 are treated as unknown. Then, with the prior for z11 and z21 as logit(Pr(z11 = 1)) = logit(Pr(z21 = 1)) = −1.5, we explored values for ϕ1 and ωg1, g = 1, 2, based on an artificial data set drawn from the GFM, so as to refine GT(Θ, ω) under the factorized ω(Z) = ω(z11)ω(z21).

We can map the surface GT(Θ, ω) over (ω11, ω21) when Θ is set at the optimized value Θ̂ for each (ω11, ω21). The bottom-right panel of Figure 2 displays a contour plot of G0(Θ̂, ω). The maximum point lies in one of the four corners corresponding to ωg1 ∈ {0, 1}, and the global MAP estimate has ω11 = 1 and ω21 = 0.

Figure 2
Display of evolving GT(Θ̂, ω) in the annealing process (from T = 2 to T = 0) with contour plots. The black circle in each panel indicates the maximum point, and that corresponding to T = 0 in the panel on the bottom-right corner ...

Figure 2 also shows a tracking result of the VMA2 search process starting from T = 2 and stopping at T = 0. The change in GT(Θ̂, ω) and the corresponding maximizing values of (ω11, ω21) can be monitored through the contour plots at selected temperatures. Starting from the initial values, ω11 ≈ 0.5 and ω21 ≈ 0.5, at the highest temperature, the successively-generated maximum points gradually come closer to the global optimum (ω11 = 1 and ω21 = 0) as the annealing process proceeds. At higher temperatures, GT(Θ̂, ω) is unimodal. In the overall search, the tempered criterion begins to become bimodal after the trajectory moves into regions close to the global maximum.

This simple illustrative example highlights the key to success in the search: moving the trajectory of solutions closer to the global maximum in earlier phases of the cooling schedule, before the tempered criterion function exhibits substantial multimodality. Looking ahead, we may be able to raise the power of the annealing search by, for example, using dynamic control of the cooling schedule or more general penalty functions for ω.

6.2 Snapshot of Algorithm with 30 Variables and 4 Factors

In what follows, we show some simulation studies to provide insights and comparisons. The data sets have n = 100 data points drawn from the GFM with p = 30 and ktrue = 4, and with Ψ = 0.05I and Δ = diag(1.5, 1.2, 1.0, 0.8). The zgj were independently generated with Pr(zgj = 1) = 0.3, yielding roughly 70% sparsity; then, non-zero elements of Φ were generated as independent standard normal variates, following which ΦZ was constrained to orthogonality.

To explore sensitivity to the chosen temperature schedule for annealing, experiments were run using three settings:

  • (Log-inverse decay) Ti = 3/log2(i + 1) for i = 1,…, 6999, and T7000 = 0
  • (Linear decay) Ti = 3 − 6 × 10^−3 × (i − 1) for i = 1,…, 1999, and T2000 = 0
  • (Power decay) Ti = 3 × 0.99^(i−1) for i = 1,…, 1999, and T2000 = 0

For each, we evaluated the resulting MAP search in terms of comparison with the true model and computational efficiency, in each case using a model with redundant factor dimension k = 8.
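
For concreteness, the three schedules can be generated in R as follows (a small sketch based on the reconstructed decay rates above; clamping the linear schedule at zero once it reaches zero is an assumption):

T_log    <- c(3 / log2((1:6999) + 1), 0)                # log-inverse decay, 7000 temperatures
T_linear <- c(pmax(3 - 6e-3 * (0:1998), 0), 0)          # linear decay, 2000 temperatures
T_power  <- c(3 * 0.99^(0:1998), 0)                     # power decay, 2000 temperatures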

6.3 Annealing with Fixed Hyper-parameters

First analyses fixed ζgj = c and were run repeatedly across a grid of values c ∈ [0, 5]. Figure 3 summarizes the evaluation of the receiver operating characteristics (ROC) for the three cooling schedules. The true positive rate (TPR) and false positive rate (FPR) were computed based on the correspondences between estimated and true values of the zgj. For comparison, we used standard PCA, extracting the dominant 4 eigenvectors and setting entries below a threshold (in absolute value) to zero; sliding the threshold towards zero gives a range of truncated loadings vectors in the PCA that define the ROC curve for this approach. The resulting ROC curve, shown in the left panel, is very near to the 45° line, comparing very poorly with the annealed GFM; for the latter, each ROC curve indicates rather accurate identification of the sparse structure and the curves differ in small ways only as a function of cooling schedule. The choice of cooling schedule can, however, have a more marked influence on results if initialized at temperatures that are too low.

Figure 3
ROC for threshold PCA assuming a known, true k = 4 factors (left), compared to VMA2 estimation of the GFM under the three cooling schedules and with k = 8 (right). TPR (vertical) and FPR (horizontal) were calculated according to TP/P and FP/N where P ...

6.4 Inference on Degrees of Sparseness

A second analysis uses the sparsity prior p(ζgj) = N+(ζgj | μ, σ) with μ = 3 and σ = 6, and adopts the log-inverse cooling schedule. As shown in the right panel of Figure 4, the analysis realized a reasonable control of FNR (15.4%) and FPR (0%), inducing a slightly less sparse solution than the true structure. The GFM analysis automatically prunes the redundant factors, identifying the true model dimension. Figure 5 displays a snapshot of evolving configuration probabilities ωgj and hyper-parameters ζgj during the annealing schedule, demonstrating convergence over 2000 steps. At around Ti ≈ 0.45, all the configuration probabilities corresponding to the four redundant factors reached zero.

Figure 4
Result of the VMA2 estimation using the log-inverse rate cooling in analysis of synthetic data. (Left) Precision and factor loadings matrix used for generating the synthetic data (p = 30, ktrue = 4). Non-zero elements are colored black. (Right) Estimated ...
Figure 5
Convergence trajectories of the ζgj (upper) and ωgj (lower) in analysis of synthetic data over 2000 steps of annealed MAP estimation.

We further evaluated sensitivity to the choice of cooling schedules; in addition to the previous three cooling schedules, we compared with:

  • (Log-inverse decay) Ti = 0.7/log2(i + 1) for i = 1,…, 6999, and T7000 = 0
  • (Linear decay) Ti = 0.7 − 6 × 10^−3 × (i − 1) for i = 1,…, 1999, and T2000 = 0
  • (Power decay) Ti = 0.7 × 0.99^(i−1) for i = 1,…, 1999, and T2000 = 0

The initial temperatures are reduced from 3 to 0.7. Figure 6 shows the variations of TPR and FPR in the use of the six cooling schedules, evaluated in 20 analyses with replicated synthetic models and data sets. The left and center panels indicate significant dominance of the annealing starting from the higher initial temperatures. Performance in identifying model structure seriously degrades when using a temperature schedule that starts too low, and the sensitivity to schedule is very limited when beginning with reasonably high initial temperatures.

Figure 6
Performance tests on 20 synthetic data sets for different cooling schedules and comparison between the GFM and a sparse PCA (SPCA). For each panel, TPR (black) and FPR (blue) are plotted (vertical axis) against the 20 replicate simulations of artificial ...

The right panel in Figure 6 shows TPR and FPR for the sparse PCA (SPCA) proposed by Zou et al. (2006), evaluated on the same 20 data sets using the R code spca() available at CRAN (http://cran.r-project.org/). With spca(), we can specify the number of nonzero elements (cardinality) in each column of the factor loading matrix. We executed spca() supplying the true cardinality as well as the known factor dimension ktrue = 4. The figure indicates better performance of the GFM annealed from a high initial temperature than the sparse PCA, and this is particularly notable in that the GFM analysis uses k = 8 and involves no a priori knowledge on the degree of sparseness. It is important to note that this comparison is biased in favour of the GFM, since the data were drawn from a GFM with an orthogonal loading matrix, whereas SPCA does not make orthogonality assumptions. In Section 7.1, we provide deeper comparisons among several existing sparse factor analyses based on image processing in hand-written digit recognition.

6.5 Computing Time Questions

Figure 7 shows the CPU times required for the execution of the GFM analyses as above, repeated with increasing dimension p ∈ {100, 200, 300, 500, 700, 1000}. The data sets were again generated from GFMs with 4 factors and roughly 70% sparseness. We then performed the VMA2 using a linear decay cooling of length 2000, and using both deterministic and stochastic annealing in a model with k = 8. The deterministic algorithm was not used for p ≥ 500 due to substantial increase in CPU times; this was eased via use of the stochastic search algorithm.

Figure 7
(Left) CPU times (in seconds; Intel(R) Core(TM)2 Duo processor, 2.60Ghz) versus model dimension p for the stochastic VMA2. For the deterministic VMA2, we terminated the tests with the data larger than p = 300. Execution times for the deterministic algorithm ...

7. Real Data Applications

7.1 Application: Hand-written Digit Recognition

We evaluate GFM in pattern recognition analyses of hand-written digit images, and make comparisons to three existing methods: (i) SPCA (Zou et al., 2006), (ii) sparse probabilistic PCA with ARD prior (Archambeau and Bach, 2009), and (iii) MCMC-driven sparse factor analysis (West, 2003; Carvalho et al., 2008). These three methods are all based on models with non-orthogonal sparse loading matrices. The training data set was made from 16 × 16 gray-scale images of 100 digits (i.e. n = 100, p = 256) of ‘3’ that were randomly drawn from the postal zip code data available at http://www-stat.stanford.edu/~tibs/ElemStatLearn/ (Hastie et al., 2001). To evaluate robustness of the four approaches, we added artificial outliers to 15 pixels (features) for about 5% of the original 100 images. Some of the contaminated images are shown in the top-left panels of Figure 8.

Figure 8
Comparison between GFM and the three alternative methods ((i)-(iii)) in the data reconstruction of outlier hand-written digit images. For the implementation of the sparse probabilistic PCA with ARD prior (ARD-PPCA), we prepared our own R function which ...

For the non-probabilistic method, i.e. (i) SPCA, we performed data reconstruction in the standard manner: x̂(xi) = WW′xi with W the matrix of sparse, non-orthogonal loading vectors. In applications of (ii) and (iii), which are inherently driven by probabilistic models, data reconstruction was made via the posterior mean x̂(xi) = WE[λi | xi] using the obtained sparse loading matrix W. For all the methods, setting factor dimensions to k = 10, we explored sparse estimates so that the degree of sparseness becomes approximately 30% (see Figure 8). For SPCA, we use the same number of non-zero elements in each loading vector as in the estimated GFM. The GFM was estimated using VMA2 with a fixed value for ζ and a linear cooling schedule of length 2000.

A set of 100 test data samples was created from the 100 samples above by adding outliers, drawn from a uniform distribution, to randomly-chosen pixels with probability 0.2. Performance of the four approaches to data compression/reconstruction was assessed via mean squared error (MSE) between the x̂(xi) and the true, original test images without the outliers. The right four panels in Figure 8 show some digit images reconstructed by each method with the corresponding original/contaminated test data. The reconstruction errors for the training and test instances are also summarized in the figure. For the results of (ii) and (iii)—the non-orthogonal probabilistic analyses—the reconstructed digits were only vaguely outlined. Such poor reconstructions arise partly from effects of the outliers spreading from pixel to pixel along the complete graph defined by the non-sparse precision matrix. This empirical result indicates the vulnerability of unrestricted sparse factor models in the presence of outliers. In the reconstructions of the test instances, the GFM captured characteristics of the original digits with the highest accuracy among the methods. SPCA attained the second highest accuracy in terms of MSE. These observations highlight the substantial merit of using sparse linear mappings in data reconstruction. The GFM and SPCA limit the propagation of outliers to within particular factor cliques, so that most pixels in the other, non-adjacent factor cliques could be restored clearly.

7.2 Application: Breast Cancer Gene Expression Study

Data and GFM Analysis

Latent factor models are increasingly used in microarray-based gene expression studies in both basic biological and clinical settings, such as cancer genomics. An example breast cancer gene expression study here further illustrates the practical relevance of GFM structure and adds to comparisons with other approaches. In addition to the summary details below, a much extended discussion of both statistical and biological aspects is available in supporting material at the first author’s web site (see link below).

Among the goals of most such studies are identification of multiple factors that may represent underlying activity of biological pathways and provide opportunities for improved predictive models with estimated latent factors for physiological and clinical outcomes (e.g. Carvalho et al., 2008; Chang et al., 2009; Hirose et al., 2008; Lucas et al., 2006, 2009; West, 2003; Yoshida et al., 2004, 2006). Here we discuss an example application of our sparse GFM in analysis of data from previously published breast cancer studies (Cheng et al., 2006; Huang et al., 2003; Pittman et al., 2004; West et al., 2001).

The GFM approach was applied to a sample of gene expression microarray data collected in the CODeX breast cancer genomics study (Carvalho et al., 2008; Pittman et al., 2004; West et al., 2001) at the Sun Yat-Sen Cancer Center, Taipei, during 2001-2004. In addition to summary expression indices from Affymetrix Human Genome U95 arrays, the data set includes immunohistochemistry (IHC) tests for key hormonal receptor proteins in clinical prognostics: ERBB2 (Her2) and estrogen receptor (ER). The IHC measures are discrete: ER negative (ER=0), ER positive with low/high-level expression (ER=1 and ER=2), Her2 negative (Her2=0), and Her2 positive with low/high-level expression (Her2=1 and Her2=2). We performed analysis of p = 996 genes whose expression levels, on a log2 (fold change) scale, exceed a median level of 7 and show a range of at least 3-fold change across the tumors. The data set, including the expression data and the IHC hormonal measures, is available on-line as supplementary material.

The annealed estimation of the GFM was run with k = 25, μ = 7 and σ = 10. The cooling schedule was prescribed by a linearly-decreasing sequence of 2000 temperatures under which the decay rate and initial temperature were set to 0.006 and 3, respectively. The GFM analysis identified 19 factors, pruning from the specified maximum of k = 25. Heatmaps of gene expression for genes identified in each of the factors appear in Figure 9 with the identified sparse pattern of the loadings matrix.

Figure 9
Identified factor probes (left) and sparse structure (right; binary matrix). In each image of the left panel, expression signatures of the probes associated with each factor are depicted across 138 samples (ordered along horizontal axis).

Evaluation and Annotation of Inferred Factors

To investigate potential biological connections of the factors, we evaluated enrichment of the functional annotations shared by genes in each factor through the Gene Ontology (GO). This exploration revealed associations between some factors and GO biological processes; the complete and detailed results, including tables of the GO enrichment analyses for each factor and detailed biological descriptions, are available from the web site of supporting information.

Factors related to ER

Figure 10 displays boxplots of fitted values of the factor scores for each sample, plotted across all 19 factors and stratified by levels of each of the clinical ER and Her2 (0/1/2) categories. For each sample i, the posterior mean of the factor vector, namely λ̂i = (Ik + Δ)^{−1}ΔΦ_Z′Ψ^{−1/2}xi, is evaluated at the estimated model, providing the fitted values displayed. We note strong association of ER status to factors 8 (GO: hormone metabolic process), 9 (GO: glucose metabolic process, negative regulation of MAPK activity), 12 (GO: C21-steroid hormone metabolic process), 14 (GO: apoptotic program, positive regulation of caspase activity), 18 (GO: M phase of meiotic cell cycle) and 19 (GO: regulation of Rab protein signal transduction). These clear relationships of ER status to multiple factors with intersecting but also distinct biological pathway annotations are consistent with the known complexity of the broader ER network, as estrogen receptor-induced signaling impacts multiple cellular growth and developmentally related downstream target genes and strongly defines expression factors linked to breast cancer progression.

Figure 10
Boxplots of fitted values of breast tumor-specific factor scores, stratified by protein IHC determinations of clinical ER status (upper) and Her2 status (lower) in their 0/1/2 categories.

Her2 status and oncogenomic recombination hotspot on 17q12

Figure 10 indicates factor 16 as strongly associated with Her2 status (0, 1) versus 2. Factor 16 significantly loads on only 7 genes, which include STARD3, GRB7 and two probe sets on the locus of ERBB2 (which encodes Her2). This is consistent with earlier gene expression studies that have consistently identified a single expression pattern related to Her2 and a very small number of additional genes, and that have found the “low Her2 positive” level (1) to be generally comparable to negatives. Interestingly, we note that STARD3, GRB7 and ERBB2 are all located on the same chromosomal locus 17q12, known as the PPP1R1B-STARD3-TCAP-PNMT-PERLD1-ERBB2-MGC14832-GRB7 locus. This locus has been reported in many studies (e.g. Katoh and Katoh, 2004) as an oncogenomic recombination hotspot that is frequently amplified in breast tumor cells, and the purely exploratory (i.e., unsupervised) GFM analysis clearly identifies the “Her2 factor” as strongly reflective of increased expression of genes in this hotspot, consistent with the amplification inducing Her2 positivity.

Comparison to non-sparse analysis

Finally, we show a comparison to non-sparse traditional PCA. Supplementary Figures 1 and 2 show the estimated factors (principal components) corresponding to the most dominant 19 eigenvalues, stratified by the levels of ER and Her2. The PCA failed to capture the existing factor relevant to Her2-specific phenotypes in the analysed data. Note that the foregoing sparse analysis identified the Her2-relevant factor only through the 7 non-zero loadings. Indeed, our post-analysis found that the data set contains very few genes exhibiting significant fold change across the Her2 phenotypes. The non-sparse analysis captures many irrelevant features through redundant nonzero loadings. The failure of PCA signifies the importance of sparse modelling in handling high-dimensional data with inherently sparse structure.

8. Additional Comments

The novel graphical property of GFMs provides a nice reconciliation of sparse covariance models with sparse precision models—sparse latent factor analysis and graphical models, respectively. Some of the practical benefits of this arise from the ability of GFM to define data reconstructions exhibiting the same patterns of covariances as the model/data predict, and the potential to induce robustness to outliers relative to non-graphical factor models, whether sparse or not. Some theoretical questions remain about precise conditions under which the sparsity patterns of covariance and precision matrices are guaranteed to agree in general sparse Gaussian factor models other than the GFM form. Additionally, extensions to integrate non-parametric Bayesian model components for factors, following Carvalho et al. (2008), are of clear future interest.

The ability of the VMA2 to aid in the identification of model structure in sparse GFM, and to provide an additional computational strategy and tools to address the inherently challenging combinatorial optimization problem, has been demonstrated in our examples. Scaling to higher dimensional models is enabled by relaxation of the direct deterministic optimization viewpoint, with stochastic search components that promote greater exploration of model space and can speed up search substantially. Nevertheless, moving to higher dimensions will require new, creative computational implementations, such as using distributed computing, that will themselves require novel methodological concepts.

The annealed search methodology evidently will apply in other contexts beyond factor models. At one level, sparse factor models are an instance of problems of variable selection in multivariate regression, in which the regression predictors (feature variables) are themselves unknown (i.e., are the factors). The annealed entropy approach is therefore in principle applicable to problems involving regression model search and uncertainty in general classes of linear or nonlinear multivariate regression with potentially many predictor variables. Beyond this, the same can be said about potential uses in other areas of graphical modelling involving structural inference of directed or undirected graphical models, and also in multivariate time series problems where some of the sparse structure may relate to relationships among variables over time.

We also remark on generalizations of the basic form of VMA2 here that might utilize penalty functions other than the Shannon entropy used above. The central idea of VMA2 is the design of a temperature-controlled iterative optimization that converges to the joint posterior mode of model parameters and sparse structure indicators. The entropy formulation used in our GFM context was inspired by the form of the posterior itself, but similar algorithms—with the same convergence property—could be designed using other forms. This, along with computational efficiency questions and applications in models beyond the sparse GFM framework, and also potential extensions to consider heavy-tailed or Bayesian nonparametric distributions for latent factors and/or residuals (e.g. Carvalho et al., 2008), are open areas for future research.

Supplementary Material

Web site and examples

code

spca code

synthetic data

Acknowledgments

The authors are grateful to the Action Editor and three anonymous referees for their detailed and most constructive comments on the original version of this paper. Elements of the research reported here were developed while Ryo Yoshida was visiting SAMSI and Duke University during 2008-09. Aspects of the research of Mike West were partially supported by grants from the U.S. National Science Foundation (DMS-0342172) and National Institutes of Health (NCI U54-CA-112952). The research of Ryo Yoshida was partially supported by the Japan Science and Technology Agency (JST) under the Core Research for Evolutional Science and Technology (CREST) program. Any opinions, findings and conclusions or recommendations expressed in this work are those of the authors and do not necessarily reflect the views of the NSF or NIH.

Appendix

Proof of Proposition 1

Rescale the objective function (8) by multiplying by the inverse temperature 1/T:

\frac{1}{T}\mathcal{G}_T(\Theta, \omega) = \sum_{Z\in\mathcal{Z}} \omega(Z) \log p(X, Z, \Theta \mid \zeta)^{1/T} - \sum_{Z\in\mathcal{Z}} \omega(Z)\log\omega(Z).

An upper-bound of this modified criterion is derived as follows:

\frac{1}{T}\mathcal{G}_T(\Theta, \omega) = \sum_{Z\in\mathcal{Z}} \omega(Z) \log \frac{p(Z \mid X, \Theta, \zeta)^{1/T}\, p(X, \Theta \mid \zeta)^{1/T}}{\omega(Z)} = \sum_{Z\in\mathcal{Z}} \omega(Z)\log\frac{p(Z \mid X, \Theta, \zeta)^{1/T}}{\omega(Z)\sum_{Z'\in\mathcal{Z}} p(Z' \mid X, \Theta, \zeta)^{1/T}} + K_0 \ \leq\ K_0.

In the second equality, the terms not involving ω(Z) are collected in K0 = log p(X, Θ | ζ)^{1/T} + log Σ_{Z′∈𝒵} p(Z′ | X, Θ, ζ)^{1/T}. The first term in the second line is the negative of the Kullback-Leibler divergence between ω(Z) and the normalized tempered posterior distribution. The minimum value (zero) of this Kullback-Leibler divergence is attained if and only if

\omega(Z) = \frac{p(Z \mid X, \Theta, \zeta)^{1/T}}{\sum_{Z'\in\mathcal{Z}} p(Z' \mid X, \Theta, \zeta)^{1/T}},

as required.

Derivation: Optimization over Φ

Let ρj, j ∈ {1,…, k}, be the Lagrange multipliers enforcing the restrictions in (14). We write down the Lagrangian:

\phi_j' E_\omega[S(z_j, \Psi)]\, \phi_j - \rho_j(\|\phi_j\|^2 - 1) - \sum_{m\neq j}\rho_m\, \phi_m'\phi_j.
(20)

Differentiation of (20) with respect to ϕj yields

E_\omega[S(z_j, \Psi)]\, \phi_j - \rho_j \phi_j - \sum_{m\neq j}\rho_m \phi_m = 0.
(21)

In order to solve this equation, the first step is to find a closed-form solution for the vector of Lagrange multipliers, ρ(−j) = {ρm}m≠j ∈ R^{k−1}. Multiplying (21) from the left by each ϕm′, m ≠ j, we have the k − 1 equations

\[
\phi_m^\top E_\omega[S(z_j,\Psi)]\,\phi_j \;-\; \sum_{m'\neq j}\rho_{m'}\,\phi_m^\top\phi_{m'} \;=\; 0 \qquad \text{for } m \text{ such that } m \neq j.
\]

This yields the matrix representation

\[
\Phi_{(-j)}^\top E_\omega[S(z_j,\Psi)]\,\phi_j \;-\; \Phi_{(-j)}^\top\Phi_{(-j)}\,\rho_{(-j)} \;=\; 0,
\]

which in turn leads to the solution for $\rho_{(-j)}$ as

\[
\rho_{(-j)} \;=\; \left(\Phi_{(-j)}^\top\Phi_{(-j)}\right)^{-1}\Phi_{(-j)}^\top E_\omega[S(z_j,\Psi)]\,\phi_j.
\]

Substituting this into the original equation (21) yields the eigenvalue equation

\[
N_j\,E_\omega[S(z_j,\Psi)]\,\phi_j \;-\; \rho_j\phi_j \;=\; 0 \qquad \text{with} \qquad N_j = I - \Phi_{(-j)}\Phi_{(-j)}^\top.
\tag{22}
\]

Now consider the alternative, symmetrized eigenvalue equation

\[
N_j\,E_\omega[S(z_j,\Psi)]\,N_j\,\varphi_j \;-\; \rho_j\varphi_j \;=\; 0.
\tag{23}
\]

Since Nj is idempotent, left-multiplication of (23) by Nj yields

\[
N_j\,E_\omega[S(z_j,\Psi)]\,N_j\,\varphi_j \;-\; \rho_j\,N_j\,\varphi_j \;=\; 0,
\]

which is equivalent to the required equation (22) when $\phi_j = N_j\varphi_j$.
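As a concrete illustration of this update, the sketch below is a hypothetical helper (not the authors' implementation), assuming the expectation $E_\omega[S(z_j,\Psi)]$ is available as a symmetric matrix S_j and that the remaining loadings columns in Phi_minus_j are orthonormal. It solves the symmetrized eigenproblem (23) with numpy and maps the result back via $\phi_j = N_j\varphi_j$; selecting the eigenvector with the largest eigenvalue is an assumption here, corresponding to maximizing the quadratic criterion.

```python
import numpy as np

def update_phi_j(S_j, Phi_minus_j):
    """Candidate update of phi_j via the symmetrized eigenproblem (23).

    S_j         : (p, p) symmetric matrix standing in for E_omega[S(z_j, Psi)].
    Phi_minus_j : (p, k-1) matrix whose columns are the remaining loadings
                  vectors, assumed orthonormal.
    """
    p = S_j.shape[0]
    # Projector onto the orthogonal complement of span(Phi_minus_j); idempotent.
    N_j = np.eye(p) - Phi_minus_j @ Phi_minus_j.T
    # Symmetrized operator appearing in equation (23).
    M = N_j @ S_j @ N_j
    eigvals, eigvecs = np.linalg.eigh(M)
    varphi = eigvecs[:, -1]               # eigenvector with the largest eigenvalue
    phi_j = N_j @ varphi                  # map back: phi_j = N_j varphi_j
    return phi_j / np.linalg.norm(phi_j)  # enforce the unit-norm restriction
```

In a full implementation this step would sit inside the iterative VMA2 loop, with S_j recomputed from the current variational expectations at each temperature in the annealing schedule.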

Contributor Information

Ryo Yoshida, Department of Statistical Modeling, Institute of Statistical Mathematics, Minato-ku, Tokyo 106-8569, Japan.

Mike West, Department of Statistical Science, Duke University, Durham, NC 27708-0251, USA.

References

  • Anderson TW. An Introduction to Multivariate Statistical Analysis. 3. Wiley-Interscience; New Jersey: 2003.
  • Archambeau C, Bach F. Sparse probabilistic projections. In: Koller D, Schuurmans D, Bengio Y, Bottou L, editors. Advances in Neural Information Processing Systems. Vol. 21. Cambridge, MA: MIT Press; 2009. pp. 73–80.
  • Attias H. Inferring parameters and structure of latent variable models by variational Bayes. Proceedings of the 15th Conference on Uncertainty in Artificial Intelligence (UAI99) 1999:21–30.
  • Bishop CM. Probabilistic principal component analysis. Journal of the Royal Statistical Society (Series B) 1999;61:611–622.
  • Bishop CM. Pattern Recognition and Machine Learning. 1. Springer; Singapore: 2006.
  • Carvalho CM, West M. Dynamic matrix-variate graphical models. Bayesian Analysis. 2007;2:69–98.
  • Carvalho CM, Lucas JE, Wang Q, Chang JT, Nevins JR, West M. High-dimensional sparse factor modelling: Applications in gene expression genomics. Journal of the American Statistical Association. 2008;103:1438–1456. [PMC free article] [PubMed]
  • Chang J, Carvalho CM, Mori S, Bild A, Wang Q, West M, Nevins JR. Decomposing cellular signaling pathways into functional units: A genomic strategy. Molecular Cell. 2009;34:104–114. [PMC free article] [PubMed]
  • Cheng SH, West M, Horng CF, Huang E, Pittman J, Dressman H, Tsou MH, Chen CM, Tsai SY, Jian JJ, Nevins JR, Liu MC, Huang AT. Genomic prediction of loco-regional recurrence following mastectomy in breast cancer. Journal of Clinical Oncology. 2006;24:4594–4602. [PubMed]
  • d’Aspremont A, El Ghaoui L, Jordan MI, Lanckriet GRG. A direct formulation for sparse PCA using semidefinite programming. SIAM Review. 2007;49(3):434–448.
  • Dobra A, Jones B, Hans C, Nevins JR, West M. Sparse graphical models for exploring gene expression data. Journal of Multivariate Analysis. 2004;90:196–212.
  • Griffiths TL, Ghahramani Z. Infinite latent feature models and the Indian buffet process. In: Weiss Y, Schölkopf B, Platt J, editors. Advances in Neural Information Processing Systems. Vol. 18. Cambridge, MA: MIT Press; 2006. pp. 475–482.
  • Guan Y, Dy J. Sparse probabilistic principal component analysis. Proceedings of AISTATS 2009. 2009;5:185–192.
  • Hastie T, Tibshirani R, Friedman J. The Elements of Statistical Learning: Data Mining, Inference, and Prediction. New York: Springer; 2001.
  • Hirose O, Yoshida R, Imoto S, Yamaguchi R, Higuchi T, Charnock-Jones DS, Print C, Miyano S. Statistical inference of transcriptional module-based gene networks from time course gene expression profiles by using state space models. Bioinformatics. 2008;24(7):932–942. [PubMed]
  • Huang E, Chen S, Dressman HK, Pittman J, Tsou MH, Horng CF, Bild A, Iversen ES, Liao M, Chen CM, West M, Nevins JR, Huang AT. Gene expression predictors of breast cancer outcomes. The Lancet. 2003;361:1590–1596. [PubMed]
  • Jolliffe IT, Trendafilov NT, Uddin M. A modified principal component technique based on the lasso. Journal of Computational and Graphical Statistics. 2003;12(3):531–547.
  • Jones B, Dobra A, Carvalho CM, Hans C, Carter C, West M. Experiments in stochastic computation for high-dimensional graphical models. Statistical Science. 2005;20:388–400.
  • Jordan MI, editor. Learning in Graphical Models. Cambridge MA: MIT Press; 1999.
  • Jordan MI. Graphical models. Statistical Science. 2004;19:140–155.
  • Katoh M, Katoh M. Evolutionary recombination hotspot around GSDML-GSDM locus is closely linked to the oncogenomic recombination hotspot around the PPP1R1B-ERBB2-GRB7 amplicon. International Journal of Oncology. 2004;24:757–763. [PubMed]
  • Lucas JE, Carvalho CM, Wang Q, Bild AH, Nevins JR, West M. Sparse statistical modelling in gene expression genomics. In: Müller P, Do KA, Vannucci M, editors. Bayesian Inference for Gene Expression and Proteomics. Cambridge University Press; 2006. pp. 155–176.
  • Lucas JE, Carvalho CM, Chi J-TA, West M. Cross-study projections of genomic biomarkers: An evaluation in cancer genomics. PLoS ONE. 2009;4(2):e4523. [PMC free article] [PubMed]
  • MacKay DJC. Probable networks and plausible predictions - a review of practical Bayesian methods for supervised neural networks. Network: Computation in Neural Systems. 1995;6:469–505.
  • Mairal J, Bach F, Ponce J, Sapiro G. Online dictionary learning for sparse coding. Proceedings of the 26th Annual International Conference on Machine Learning (ICML2009); New York, NY, USA. 2009. pp. 689–696. ACM.
  • Pittman J, Huang E, Dressman H, Horng CF, Cheng SH, Tsou MH, Chen CM, Bild A, Iversen ES, Huang AT, Nevins JR, West M. Integrated modeling of clinical and gene expression information for personalized prediction of disease outcomes. Proceedings of the National Academy of Sciences. 2004;101:8431–8436. [PMC free article] [PubMed]
  • Rai P, Daumé H. The infinite hierarchical factor regression model. In: Koller D, Schuurmans D, Bengio Y, Bottou L, editors. Advances in Neural Information Processing Systems. Vol. 21. Cambridge, MA: MIT Press; 2009. pp. 1321–1328.
  • Ueda N, Nakano R. Deterministic annealing EM algorithm. Neural Networks. 1998;11:271–282. [PubMed]
  • Wainwright MJ, Jordan MI. Graphical models, exponential families, and variational inference. Foundations and Trends in Machine Learning. 2008;1:1–305.
  • Wang Q, Carvalho CM, Lucas JE, West M. BFRM: Bayesian factor regression modelling. Bulletin of the International Society for Bayesian Analysis. 2007;14(2):4–5.
  • West M. Bayesian factor regression models in the “large p, small n” paradigm. In: Bernardo JM, Bayarri MJ, Berger JO, Dawid AP, Heckerman D, Smith AFM, West M, editors. Bayesian Statistics. Vol. 7. Oxford University Press; 2003. pp. 723–732.
  • West M, Blanchette C, Dressman H, Huang E, Ishida S, Zuzan H, Spang R, Marks JR, Nevins JR. Predicting the clinical status of human breast cancer utilizing gene expression profiles. Proceedings of the National Academy of Sciences. 2001;98:11462–11467. [PMC free article] [PubMed]
  • Yoshida R, Higuchi T, Imoto S. A mixed factors model for dimension reduction and extraction of a group structure in gene expression data. Proceedings of the 2004 IEEE Computational Systems Bioinformatics Conference. 2004:161–172. [PubMed]
  • Yoshida R, Higuchi T, Imoto S, Miyano S. Arraycluster: an analytic tool for clustering, data visualization and module finder on gene expression profiles. Bioinformatics. 2006;22(12):1538–1539. [PubMed]
  • Zou H, Hastie T, Tibshirani R. Sparse principal component analysis. Journal of Computational and Graphical Statistics. 2006;15(2):265–286.