EURASIP J Bioinform Syst Biol. 2009; 2009(1): 195712.
Published online Mar 4, 2009. doi: 10.1155/2009/195712
PMCID: PMC3171421

Clustering of Gene Expression Data Based on Shape Similarity

Abstract

A method for gene clustering from expression profiles using shape information is presented. Conventional clustering approaches such as K-means assume that genes with similar functions have similar expression levels and hence allocate genes with similar expression levels into the same cluster. However, genes with similar function often exhibit similarity in signal shape even though their expression magnitudes can be far apart. Therefore, this investigation studies clustering according to signal shape similarity. The shape information is captured in the form of normalized and time-scaled forward first differences, which are then clustered by a variational Bayes algorithm combined with a non-Bayesian (Silhouette) cluster statistic. The statistic shows an improved ability to identify the correct number of clusters and to assign genes to the appropriate clusters. Based on initial results for both generated test data and Escherichia coli microarray expression data, and initial validation of the Escherichia coli results, it is shown that the method has promise in being able to better cluster time-series microarray data according to shape similarity.

1. Introduction

Investigating the genetic structure and metabolic functions of organisms is an important yet demanding task. Genetic actions and interactions, and how genes control and are controlled, are determined and/or inferred from data from many sources. One of these sources is time-series microarray data, which measures the dynamic expression of genes across an entire organism. Many methods of analyzing such data have been presented and used. One popular method, especially for time-series data, is gene-based profile clustering [1]. This method groups genes with similar expression profiles in order to find genes with similar functions or to relate genes with dissimilar functions across different pathways occurring simultaneously.

There has been much work on clustering time-series data, and clustering can be based on either similarity of expression magnitude or the shape of expression dynamics. Clustering methods include hierarchical and partitional types (such as K-means, fuzzy K-means, and mixture modeling) [2]. Each method has its strengths and weaknesses. Hierarchical techniques do not produce clusters per se; rather, they produce trees or dendrograms, from which clusters can be built by cutting the output structure at various levels. Hierarchical techniques can be computationally expensive, require relatively smooth data, and/or be unable to "recover" from a poor guess; that is, the method is unable to reverse itself and recalculate from a prior clustering step. They also often require manual intervention to properly delineate the clusters. Finally, the clusters themselves must be well defined: noisy data with ill-defined boundaries between clusters usually results in a poor cluster set.

Partitional clustering techniques strive to group data vectors (in this case, gene expression profiles) into clusters such that the data in a particular cluster are more similar to each other than to data in other clusters. Partitional clustering can be done on the data itself or on spline representations of the data [3, 4]. In either case, square-error techniques such as K-means are often used. K-means is computationally efficient, although it is only guaranteed to converge to a local minimum of the within-cluster variance. It must also know the number of clusters in advance; there is no provision for determining an unknown number of clusters other than repeatedly running the algorithm with different cluster numbers, which for large datasets can be very time consuming. Further, as with hierarchical methods, K-means is best suited to clusters that are compact and well separated; it performs poorly with overlapping clusters. Finally, it is sensitive to noise and has no provision for accounting for such noise through a probabilistic model or the like. A related technique, fuzzy K-means, attempts to mimic the idea of posterior cluster-membership probability through a concept of "degree of membership." However, this method is not computationally efficient and requires at least an a priori estimate of the degree of membership for each data point. Also, the number of clusters must be supplied a priori, or a separate algorithm must be used to determine the optimum number of clusters. Another similar method is agglomerative clustering [5]. Model-based techniques go beyond fuzzy K-means and actually attempt to model the underlying distributions of the data; these methods maximize the likelihood of the data given the proposed model [4, 6].

More recently, much attention has been given to clustering based on expression profile shape (or trajectory) rather than absolute levels. Kim et al. [7] show that genes with similar function often exhibit similarity in signal shape even though the expression magnitudes can be far apart. Therefore, expression shape is a more important indication of similar gene function than expression magnitude.

The same clustering methods mentioned above can also be applied on the basis of shape similarity. An excellent example of a tree-based algorithm using shape similarity as a criterion can be found in [8]. While the results of that investigation proved fruitful, it should be noted that the data used in the study resulted in well-defined clusters; further, the clustering was done manually once the dendrogram was created. Möller-Levet et al. [9] used fuzzy K-means to cluster time-series microarray data using shape similarity as a criterion. However, the number of clusters was known beforehand; no separate optimization method was used to find the proper number of clusters. Balasubramaniyan et al. [10] used a similarity measure over time-shifted profiles to find local (short-time-scale) similarities. Phang et al. [11] used a simple shape decomposition together with a nonparametric Kruskal-Wallis test to group the trajectories. Finally, Tjaden [12] used a K-means-related method with error information incorporated intrinsically in the algorithm.

A common difficulty with these approaches is determining the optimal number of clusters. There have been numerous studies and surveys over the years aimed at finding optimal methods for unsupervised clustering of data; see, for example, [13–20]. Different methods achieve different results, and no single method appears to be optimal in a global sense. The problem is essentially a model selection problem. It is well known that Bayesian methods provide the optimal framework for selecting models, though a complete treatment is analytically intractable in most cases. In this paper, a Bayesian approach based on the Variational Bayes Expectation Maximization (VBEM) algorithm is proposed to determine the number of clusters, and it is demonstrated to perform better than the MDL and BIC criteria.

In this study, the goal was to find clusters of genes with similar functions, that is, coregulated genes, using time-series microarray data. As a result, we chose to cluster genes based on signal shape information. Specifically, signal shape information is derived from the normalized, time-scaled forward first differences of the time-sequence data. This information is then passed to a Variational Bayes Expectation Maximization (VBEM, [21]) algorithm, which performs the clustering. Unlike K-means, VBEM is a probabilistic method derived within the Bayesian statistical framework and has been shown to provide better performance. Further, when paired with an external clustering statistic such as the Silhouette statistic [22], the VBEM algorithm can also determine the optimal number of clusters.

The rest of the paper is organized as follows. In Section 2 the problem is discussed in more detail, the underlying model is developed, and the algorithm is presented. In Section 3 the results of our evaluation of the algorithm against both simulated and real time-series data are shown. Also presented are comparisons between the algorithm and K-means clustering, both methods using several different criteria for making clustering decisions. Conclusions are summarized in Section 4. Finally, Appendices A, B, and C present a more detailed derivation of the algorithm.

2. Method

2.1. Problem Statement and Method

Given the microarray dataset of $G$ genes, $\mathbf{x}_g = (x_{g,1}, \ldots, x_{g,T})$ for $g = 1, \ldots, G$, where $T$ is the number of time points, that is, the columns in the microarray, it is desired to cluster the gene expressions based on signal shape. The clustering is not known a priori; therefore not only must individual genes be assigned to relevant clusters, but the number of clusters itself must also be determined.

The clustering is based on expression-level shape rather than magnitude. The shape information is captured by the first-order time difference. However, because the shapes of the gene expression profiles are obscured by the widely varying levels manifested in the data, the time difference must be computed on expression levels brought to the same scale and dynamic range. Motivated by these observations, the proposed algorithm has three steps. In the first step, the expression data is rescaled. In the second step, the signal shape information is captured by calculating the first-order time difference. In the last step, clustering is performed on the time-difference data using a Variational Bayes Expectation Maximization (VBEM) algorithm. In the following, each step is discussed in detail.

2.2. Initial Data Transformation

Each gene sequence was rescaled by subtracting the sequence's mean value from each of its samples, resulting in sequences with zero mean. This operation was intended to mitigate the widely different expression levels in the profile data. By resetting all genes to zero-mean sequences, the overall shape of each sequence could be better identified without the complication of comparing genes of different magnitudes.

After this, the resulting sequences were normalized such that the maximum absolute value of each sequence was 1. Expression changes between related genes can be large or small; if two genes are related, that relationship should be recoverable regardless of the amplitude of change. By renormalizing the data in this manner, the amplitudes of both large-change and small-change genes were placed on the same order of magnitude.

Mathematically, the above operation can be expressed by

$$\tilde{x}_{g,i} = \frac{x_{g,i} - \bar{x}_g}{\max_{1 \le j \le T} \left| x_{g,j} - \bar{x}_g \right|}, \quad i = 1, \ldots, T, \tag{1}$$

where $\bar{x}_g$ represents the mean of $\mathbf{x}_g$.
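As a concrete illustration, the following is a minimal Python/NumPy sketch of this transformation; the function name and the guard for flat profiles are our own additions, not taken from the paper's implementation.

    import numpy as np

    def rescale_profile(x):
        """Zero-mean, max-abs-normalized copy of one expression profile (eq. (1))."""
        x = np.asarray(x, dtype=float)
        centered = x - x.mean()          # subtract the sequence mean
        peak = np.max(np.abs(centered))  # largest absolute deviation
        if peak == 0.0:                  # flat profile: leave it as all zeros
            return centered
        return centered / peak           # now max |value| == 1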

2.3. Extraction of Shape Information and Time Scaling

To extract shape information from time-varying gene expression, the derivative of the expression trajectory is considered. Since we are dealing with discrete sequences, differences must be used rather than analytical derivatives. To characterize the shape of each sequence, a simple first-difference scheme was used: the magnitude difference between the succeeding point and the point under consideration, divided by the time difference between those points. The data was taken nonuniformly over a period of approximately 100 minutes, with sample intervals varying from 7 to 50 minutes. As the transformation in (1) already scales the data to the range $[-1, 1]$, further compressing that scale by nearly two orders of magnitude over some time stretches was deemed neither prudent nor necessary. Therefore, the time difference was expressed in hours to prevent this unneeded range compression. The resulting sequences were used as data for clustering.

Mathematically, this operation can be written as

$$y_{g,i} = \frac{\tilde{x}_{g,i+1} - \tilde{x}_{g,i}}{t_{g,i+1} - t_{g,i}}, \quad i = 1, \ldots, T-1, \tag{2}$$

where $\mathbf{t}_g = (t_{g,1}, \ldots, t_{g,T})$ is the length-$T$ vector of time points (in hours) associated with gene $g$, $\tilde{\mathbf{x}}_g$ is the vector of transformed time-series data (from (1)) associated with gene $g$, and $\mathbf{y}_g = (y_{g,1}, \ldots, y_{g,T-1})$ is the resulting vector of first differences associated with gene $g$.
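A matching sketch for the difference step, under the assumption that sample times are supplied in minutes and converted to hours as described:

    import numpy as np

    def shape_features(x_tilde, t_minutes):
        """Forward first differences of a transformed profile, per eq. (2):
        amplitude differences divided by the time step expressed in hours."""
        x = np.asarray(x_tilde, dtype=float)
        t_hours = np.asarray(t_minutes, dtype=float) / 60.0
        return np.diff(x) / np.diff(t_hours)   # length T-1 shape vector

For example, shape_features(rescale_profile(x), [0, 7, 20, 50, 100]) would produce the clustering features for one gene; the sample times here are illustrative, not the study's actual schedule.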

Figure 1 shows an example pair of sequences using contrived data. These two sequences are visually related in shape, but their mean values are greatly different. A K-means clustering would place these two sequences in different clusters. By transforming the data, the similarity of the two sequences is enhanced, and the clustering algorithm can then place them in the same cluster. Figure 2 shows the same two sequences after data transformation.

Figure 1
Dissimilar expression levels with similar shape.
Figure 2
Normalized differences: the same two sequences after transformations.

2.4. Clustering

Once the sequence of first differences was calculated for each gene, clustering was performed on the first-difference vectors $\mathbf{y}_g$. To this end, a VBEM algorithm was developed. Before presenting that development, a general discussion of VBEM is in order.

An important problem in Bayesian inference is determining the best model for a set of data from among many competing models. The problem itself can be stated fairly compactly. Given a set of data $Y$, the marginal likelihood of that data given a particular model $m$ can be expressed as

$$p(Y \mid m) = \int p(Y, Z, \theta \mid m)\, dZ\, d\theta, \tag{3}$$

where $Z$ and $\theta$ are, respectively, the latent variables and the model parameters. The integration is taken over both latent variables and parameters in order to prevent overfitting, as a model with many parameters would naturally be able to fit a wider variety of datasets than a model with few parameters.

Unfortunately, this integral is not easily solved. The VBEM method approximates it by introducing a free distribution $q(Z, \theta)$ and taking the logarithm of the above integral. If $q(Z, \theta)$ has support everywhere that $p(Z, \theta \mid Y, m)$ does, we can construct a lower bound to the integral using Jensen's inequality:

$$\ln p(Y \mid m) = \ln \int q(Z, \theta)\, \frac{p(Y, Z, \theta \mid m)}{q(Z, \theta)}\, dZ\, d\theta \;\ge\; \int q(Z, \theta) \ln \frac{p(Y, Z, \theta \mid m)}{q(Z, \theta)}\, dZ\, d\theta. \tag{4}$$

Maximizing this lower bound with respect to the free distribution results in $q(Z, \theta) = p(Z, \theta \mid Y, m)$, the joint posterior. Since the normalizing constant is not known, this posterior cannot be calculated exactly. Therefore another simplification is made: the free distribution is assumed to factor as $q(Z, \theta) = q_Z(Z)\, q_\theta(\theta)$. The inequality then becomes

$$\ln p(Y \mid m) \;\ge\; \int q_Z(Z)\, q_\theta(\theta) \ln \frac{p(Y, Z, \theta \mid m)}{q_Z(Z)\, q_\theta(\theta)}\, dZ\, d\theta \;=\; \mathcal{F}_m(q_Z, q_\theta). \tag{5}$$

Maximizing this functional $\mathcal{F}_m$ is equivalent to minimizing the KL divergence between $q_Z(Z)\, q_\theta(\theta)$ and the joint posterior $p(Z, \theta \mid Y, m)$. The distributions $q_Z$ and $q_\theta$ are coupled and must be iterated until they converge.

With the above discussion in mind, we now develop the model that our VBEM algorithm is based on. Given $K$ clusters in total, let $z_g \in \{1, \ldots, K\}$ denote the cluster label of gene $g$. Then we assume that, given $z_g = k$, the transformed expression profile for gene $g$ follows a Gaussian distribution, that is,

$$p(\mathbf{y}_g \mid z_g = k) = \mathcal{N}\!\left( \mathbf{y}_g;\; \boldsymbol{\mu}_k,\; \sigma_k^2 \mathbf{I} \right), \tag{6}$$

where $\boldsymbol{\mu}_k$ is the mean and $\sigma_k^2$ is the variance of the $k$th Gaussian cluster. Since both $\boldsymbol{\mu}_k$ and $\sigma_k^2$ are unknown parameters, a Normal-Inverse-Gamma prior distribution is assigned as

$$p(\boldsymbol{\mu}_k, \sigma_k^2) = \mathcal{N}\!\left( \boldsymbol{\mu}_k;\; \mu_0 \mathbf{1},\; \frac{\sigma_k^2}{\kappa} \mathbf{I} \right) \mathrm{IG}\!\left( \sigma_k^2;\; a, b \right), \tag{7}$$

where $\mu_0$, $\kappa$, $a$, and $b$ are the known parameters of the prior distribution ($\mathbf{1}$ denotes the all-ones vector and $\mathbf{I}$ the identity matrix). Furthermore, a multinomial prior is assigned for the cluster label $z_g$ as

$$p(z_g = k) = \pi_k, \tag{8}$$

where $\pi_k$ is the prior probability that gene $g$ belongs to the $k$th cluster and $\sum_{k=1}^{K} \pi_k = 1$. The vector $\boldsymbol{\pi} = (\pi_1, \ldots, \pi_K)$ further assumes a priori the Dirichlet distribution

$$p(\boldsymbol{\pi}) = \mathrm{Dir}\!\left( \boldsymbol{\pi};\; \lambda_1, \ldots, \lambda_K \right), \tag{9}$$

where $\lambda_1, \ldots, \lambda_K$ are the known parameters of the distribution. Given the transformed expressions of the $G$ genes, $Y = \{\mathbf{y}_1, \ldots, \mathbf{y}_G\}$, the stated two tasks are equivalent to estimating $K$, the total number of clusters, and $z_g$ for all $G$ genes.

A Bayesian framework is adopted for estimating both $K$ and $z_g$, which are calculated by the maximum a posteriori criterion as

$$\hat{K} = \arg\max_{K}\, p(Y \mid K), \qquad \hat{z}_g = \arg\max_{z_g}\, p(z_g \mid Y, \hat{K}), \tag{10}$$

where $p(Y \mid K)$ is the marginal likelihood given that the model has $K$ clusters, and $p(z_g \mid Y, \hat{K})$ is the a posteriori probability of $z_g$ when the total number of clusters is $\hat{K}$.

Unfortunately, multiple unknown nuisance parameters remain at this point: the cluster means $\boldsymbol{\mu}_k$, variances $\sigma_k^2$, and prior probabilities $\pi_k$ for $k = 1, \ldots, K$ all still need to be found. Doing so requires a marginalization procedure over all the unknowns, which is intractable for the unknown cluster labels $z_g$. Therefore, a VBEM scheme is adopted for estimating the necessary distributions.

2.5. VBEM Algorithm

Given the development above, $p(Y \mid K)$ can be expressed as

$$p(Y \mid K) = \int \sum_{\mathbf{z}} p(Y, \mathbf{z}, \theta \mid K)\, d\theta, \tag{11}$$

where $\mathbf{z} = (z_1, \ldots, z_G)$ collects the cluster labels and $\theta$ denotes the vector of unknown parameters $\{\boldsymbol{\mu}_k, \sigma_k^2, \pi_k\}_{k=1}^{K}$. Notice that the summation in (11) runs over all $K^G$ possible label assignments, so its complexity increases exponentially with the number of genes. We therefore resort to approximating this integration by variational EM. First, a lower bound is constructed for the expression in (11); the ultimate aim is to maximize this lower bound. The expression for the lower bound can be written

$$\ln p(Y \mid K) \;\ge\; \int q_\theta(\theta) \sum_{\mathbf{z}} q_{\mathbf{z}}(\mathbf{z}) \ln \frac{p(Y, \mathbf{z}, \theta \mid K)}{q_\theta(\theta)\, q_{\mathbf{z}}(\mathbf{z})}\, d\theta \;=\; \mathcal{F}_K(q_{\mathbf{z}}, q_\theta), \tag{12}$$

where, as above, the inequality derives from Jensen's inequality. The free distributions $q_{\mathbf{z}}(\mathbf{z})$ and $q_\theta(\theta)$ are introduced as approximations to the unknown posteriors $p(\mathbf{z} \mid Y, K)$ and $p(\theta \mid Y, K)$. The $q$ distributions are chosen so as to maximize the lower bound. Using variational derivatives and an iterative coordinate ascent procedure, we find

VBE Step:

$$q_{\mathbf{z}}^{(t+1)}(\mathbf{z}) = \frac{1}{C_{\mathbf{z}}} \exp\!\left[ \int q_\theta^{(t)}(\theta) \ln p(Y, \mathbf{z} \mid \theta, K)\, d\theta \right]. \tag{13}$$

VBM Step:

$$q_\theta^{(t+1)}(\theta) = \frac{1}{C_\theta}\, p(\theta \mid K) \exp\!\left[ \sum_{\mathbf{z}} q_{\mathbf{z}}^{(t+1)}(\mathbf{z}) \ln p(Y, \mathbf{z} \mid \theta, K) \right], \tag{14}$$

where $t$ and $t+1$ index iterations and $C_{\mathbf{z}}$ and $C_\theta$ are normalizing constants to be determined. Because of the integration in (13), the prior $p(\theta \mid K)$ must be chosen carefully in order for $q_{\mathbf{z}}^{(t+1)}$ to have an analytic expression. Choosing $p(\theta \mid K)$ as a member of the conjugate exponential family satisfies this condition. Note that $q_{\mathbf{z}}(\mathbf{z})$ is an approximation to the posterior distribution $p(\mathbf{z} \mid Y, K)$ and therefore can be used to obtain the estimate of $z_g$.

2.6. Summary of VBEM Algorithm

The VBEM algorithm is summarized as follows.

(1) Initialization:

(i) initialize the prior hyperparameters ($\mu_0$, $\kappa$, $a$, $b$, and $\lambda_1, \ldots, \lambda_K$) and the initial distribution $q_\theta^{(0)}$.

Iterate until the lower bound converges:

(2) VBE step:

(i) for $g = 1, \ldots, G$,

(ii) calculate $q_{z_g}^{(t+1)}(z_g)$ using (A.1) in Appendix A,

(iii) end $g$.

(3) VBM step:

(i) for $k = 1, \ldots, K$,

(ii) calculate $q_\theta^{(t+1)}(\theta)$ using (B.1) in Appendix B,

(iii) end $k$.

(4) Lower bound:

(i) calculate $\mathcal{F}_K$ using (C.1) in Appendix C.

End iteration.
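To make the alternation concrete, the following is a simplified Python sketch of the loop: isotropic Gaussian clusters, standard conjugate Normal-Inverse-Gamma/Dirichlet updates, and a fixed iteration count in place of the lower-bound convergence test. It illustrates the scheme above rather than reproducing the exact (A.1)-(C.1) expressions, and all names and default hyperparameters are our own.

    import numpy as np
    from scipy.special import digamma

    def vbem_cluster(Y, K, n_iter=100, seed=0,
                     mu0=0.0, kappa0=1.0, a0=1.0, b0=1.0, lam0=1.0):
        """Approximate q(z_g = k) for a K-component isotropic Gaussian
        mixture with Normal-Inverse-Gamma and Dirichlet priors."""
        rng = np.random.default_rng(seed)
        G, D = Y.shape
        R = rng.dirichlet(np.ones(K), size=G)        # random initial q(z)
        for _ in range(n_iter):
            # VBM step: update q(theta) from the current responsibilities.
            Nk = R.sum(axis=0) + 1e-10               # effective cluster sizes
            ybar = (R.T @ Y) / Nk[:, None]           # weighted cluster means
            kap = kappa0 + Nk
            m = (kappa0 * mu0 + Nk[:, None] * ybar) / kap[:, None]
            a = a0 + 0.5 * Nk * D
            sq = ((Y[None, :, :] - ybar[:, None, :]) ** 2).sum(axis=2)  # K x G
            b = (b0 + 0.5 * (R.T * sq).sum(axis=1)
                 + 0.5 * kappa0 * Nk / kap * ((ybar - mu0) ** 2).sum(axis=1))
            lam = lam0 + Nk
            # VBE step: update q(z) from expectations under q(theta).
            E_ln_pi = digamma(lam) - digamma(lam.sum())
            E_ln_prec = digamma(a) - np.log(b)       # E[ln(1/sigma^2)]
            dist = ((Y[:, None, :] - m[None, :, :]) ** 2).sum(axis=2)   # G x K
            lnR = E_ln_pi + 0.5 * D * E_ln_prec - 0.5 * ((a / b) * dist + D / kap)
            lnR -= lnR.max(axis=1, keepdims=True)    # numerical stabilization
            R = np.exp(lnR)
            R /= R.sum(axis=1, keepdims=True)        # normalize each gene's q(z)
        return R

Hard cluster assignments can then be read off as R.argmax(axis=1).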

2.7. Choice of the Optimum Number of Clusters

The Bayesian formulation of (11) suggests using the number of clusters that maximizes the marginal likelihood or, in the context of VBEM, the lower bound $\mathcal{F}_K$. Instead of basing the determination of the number of clusters solely on $\mathcal{F}_K$, four different criteria are investigated in this work: (a) the lower bound $\mathcal{F}_K$ used within the VBEM algorithm (labelled KL), (b) the Bayes Information Criterion [23], (c) the Silhouette statistic performed on clusters built from transformed data, and (d) the Silhouette statistic performed on clusters built from raw data. The VBEM lower bound $\mathcal{F}_K$ is discussed above; the BIC and Silhouette criteria are discussed below.

2.8. Bayes Information Criterion (BIC)

The Bayes Information Criterion (BIC, [23]) is an asymptotic approximation to the Bayes Factor, which itself is an average likelihood ratio similar to the maximum likelihood ratio. As the Bayes Factor is often difficult to calculate, the BIC offers a less-intensive approximation. Subject to the assumptions of large data size and exponential-family prior distributions, maximizing the BIC is equivalent to maximizing the integrated likelihood function. The BIC can be written as

$$\mathrm{BIC} = \ln p(Y \mid \hat{\theta}) - \frac{d}{2} \ln n, \tag{15}$$

where $p(Y \mid \hat{\theta})$ is the likelihood function of data $Y$ given the parameter estimates $\hat{\theta}$, $d$ is the size (dimensionality) of the parameter set $\theta$, and $n$ is the sample size. The term $\frac{d}{2} \ln n$ is a penalty term discouraging more complex models.
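As a quick illustration, (15) is a one-line computation once a log-likelihood is available; the parameter count d is model-specific, and the count suggested in the comment below is an assumption for the Gaussian mixture used here, not a value taken from the paper.

    import numpy as np

    def bic(loglik, d, n):
        """Eq. (15): penalized log-likelihood; the candidate model
        maximizing this value is preferred."""
        return loglik - 0.5 * d * np.log(n)

    # For a K-cluster isotropic Gaussian mixture over D-dimensional vectors,
    # one plausible count of free parameters (an assumption) is
    #   d = K * (D + 1) + (K - 1)   # K means, K variances, K-1 mixing weights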

2.9. Silhouette Statistic

The Silhouette statistic (Sil, [22]) uses the squared difference between a data vector and all other data vectors in all clusters. For any particular data vector $i$ belonging to cluster $A$, let $a(i)$ be the average squared difference between data vector $i$ and all other vectors in cluster $A$. Let $b(i)$ be the minimum, over all other clusters $C \ne A$, of the average squared distance between data vector $i$ and the vectors of cluster $C$. Then the Silhouette statistic for data vector $i$ is

$$s(i) = \frac{b(i) - a(i)}{\max\{a(i), b(i)\}}. \tag{16}$$

It is quickly seen that the range of this statistic is $[-1, 1]$. A value close to 1 means the data vector is very probably assigned to the correct cluster, while a value close to $-1$ means the data vector is very probably assigned to the wrong cluster. A value near 0 is a neutral evaluation.
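For reference, a small Python sketch of (16), using squared Euclidean distances as the dissimilarity just as the text describes; the paper itself used MATLAB's built-in silhouette function, and this stand-in assumes at least two nonempty clusters.

    import numpy as np

    def silhouette_values(X, labels):
        """Per-vector Silhouette statistic of eq. (16), with squared
        Euclidean distances as dissimilarities."""
        X, labels = np.asarray(X, float), np.asarray(labels)
        d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=2)  # pairwise
        s = np.zeros(len(X))
        for i in range(len(X)):
            same = labels == labels[i]
            same[i] = False
            if not same.any():                 # singleton cluster: s(i) = 0
                continue
            a = d2[i, same].mean()             # within-cluster average
            b = min(d2[i, labels == c].mean()  # nearest other cluster
                    for c in np.unique(labels) if c != labels[i])
            s[i] = (b - a) / max(a, b)
        return s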

3. Results

We illustrate the method using simulated expression data and with microarray data available online.

3.1. Simulation Study

In order to test the ability of VBEM to properly cluster data of similar shape but dissimilar mean level and scale, several datasets were constructed. These datasets were intended to resemble a set of time-series microarray data. Each consisted of a vector of 5 data points, corresponding to what might be seen from a microarray for a single gene over 5 time samples. Identical assumptions were used to produce these datasets; namely, that the inherent clusters within the data were based upon a mean vector of values for a particular cluster, that each cluster may have subclusters exhibiting a mean shift and/or a scale change from the mean vector, and that the data within a cluster randomly varied about that mean vector (plus any mean shift and scale change). All sets of sample data shared the characteristics shown in Table 1. For example, a test "gene" of cluster "dms" would be a random length-5 vector drawn from a Gaussian distribution centered on that cluster's basis vector, with a particular standard deviation (defined below); this random vector would then be scaled by 0.25 and shifted in value by a constant offset.

Table 1
Basis vectors for clusters in sample datasets.

The datasets constructed from these basis vectors differed in number of data vectors per subcluster (and thus the total number of data vectors), and the standard deviation used to vary the individual vector values about their corresponding basis vectors. Generally speaking, the standard deviation vectors were constructed to be approximately 25% of the mean vector for the "low-noise" sets, and approximately 50% of the mean vector for the "high-noise" sets.
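The generation procedure can be sketched as follows. The basis vectors, shifts, and scales of Table 1 are not reproduced here, so the vectors passed to this hypothetical helper are placeholders; shuffling the rows mirrors the extra step described for the test data below.

    import numpy as np

    def make_test_set(basis, n_rep, noise_frac, seed=0):
        """Replicate each basis vector n_rep times with Gaussian noise whose
        standard deviation is noise_frac (e.g., 0.25 or 0.50) of the basis."""
        rng = np.random.default_rng(seed)
        rows, truth = [], []
        for k, mu in enumerate(basis):
            mu = np.asarray(mu, dtype=float)
            sd = noise_frac * np.abs(mu)       # ~25% or ~50% of the mean vector
            rows.append(rng.normal(mu, sd, size=(n_rep, mu.size)))
            truth += [k] * n_rep
        X, y = np.vstack(rows), np.array(truth)
        perm = rng.permutation(len(X))         # shuffle to break the ordering
        return X[perm], y[perm]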

3.2. "Low-Noise" Test Datasets

Two datasets were constructed using standard deviation vectors approximately 25% of the relevant mean vector. Table 2 shows the standard deviation vectors used. Each subcluster in Table 1 was replicated several times, randomly varying about the mean vector with a Gaussian distribution whose standard deviation is given in Table 2. Test set 1 had 5 replicates per subcluster (e.g., a1–a5, cs1–cs5); test set 2 had 99 replicates per subcluster, yielding a correspondingly larger total number of data vectors.

Table 2
Standard deviation vectors for clusters in "low-noise" sample datasets.

3.3. "High-Noise" Test Datasets

Because of the need to test the robustness of the clustering and prediction algorithms in the presence of higher amounts of noise, six datasets were constructed using standard deviation vectors approximately 50% of the relevant mean vector. Table 3 shows the standard deviation vectors used. As with the "low-noise" sets, each subcluster in Table 1 was replicated several times, randomly varying about the mean vector with a Gaussian distribution, this time with a standard deviation as given in Table 3. Table 4 shows the number of replicates produced for each dataset. For the test data, an added step was performed that would normally not be applied to actual data: since the test data was produced in already-clustered order, the vectors (rows) were randomly shuffled to break up this clustering.

Table 3
Standard deviation vectors for clusters in "high-noise" sample datasets.
Table 4
Subcluster replicates and total vector sizes for "high-noise" datasets.

3.4. Test Types and Evaluation Measures

To evaluate the ability of VBEM to properly cluster the datasets, two test sequences were conducted. First, the data was clustered using VBEM in a "controlled" fashion; that is, the number of clusters was assumed to be known and passed to the algorithm. Second, the algorithm was tested in an "uncontrolled" fashion; that is, the number of clusters was unknown, and the algorithm had to predict the number of clusters given the data. During the uncontrolled tests, a K-means algorithm was also run against the data as a comparison.

The VBEM algorithm as currently implemented requires an initial (random) probability matrix for the distribution of genes to clusters, given a value for $K$. Therefore, for each dataset, 55 trials were conducted, each trial having a different initial matrix.

Also, each trial begins with an initial clustering of genes. As currently implemented, this initialization is performed using a K-means algorithm, which attempts to cluster the data such that the sum of squared differences within each cluster is minimized. Depending on the starting position, this clustering may change. In MATLAB, the built-in K-means algorithm has several options, including how many trials (from different starting points) are conducted to produce a "minimum" sum-squared distance, how many iterations are allowed per trial to reach a stable clustering, and how clusters that become "empty" during the clustering process are handled. For these tests, the K-means algorithm conducted 100 trials of its own per initial probability matrix (outputting the clustering with the smallest sum-squared distance), had a limit of 100 iterations, and created a "singleton" cluster when a cluster became empty.
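In Python, an equivalent initialization could look like the sketch below, using scikit-learn's KMeans in place of MATLAB's; the parameter names differ, and scikit-learn relocates empty clusters internally, which is roughly analogous to the "singleton" option described above.

    import numpy as np
    from sklearn.cluster import KMeans

    def init_responsibilities(Y, K, restarts=100, iters=100):
        """Hard K-means labels converted to a one-hot matrix, usable as the
        initial gene-to-cluster probability matrix for VBEM."""
        km = KMeans(n_clusters=K, n_init=restarts, max_iter=iters,
                    random_state=0)
        labels = km.fit_predict(Y)
        R = np.zeros((len(Y), K))
        R[np.arange(len(Y)), labels] = 1.0
        return R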

As mentioned above, the choice of the optimum $K$ was conducted using four different calculations. The first used the estimate of the VBEM lower bound; the second used the BIC equation. In both cases, the optimum $K$ for a particular trial was the value beyond which the criterion decreased as $K$ was increased. This does not mean the values used to determine the optimum $K$ were the absolute maxima for the parameter within that trial; in fact, they usually were not. The overall optimum $K$ for a particular criterion was the maximum value over the number of trials. The third and fourth criteria made use of the Silhouette statistic, one using the clusters of transformed data and one using the corresponding clusters of raw data. We used the built-in Silhouette function contained within MATLAB for our calculations. To find the optimum $K$, the mean Silhouette value over all data vectors in a clustering was calculated for each value of $K$; the value of $K$ that maximized this mean was chosen as the optimum.
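Putting the pieces together, the Silhouette-based selection can be sketched as a loop over candidate values of K, reusing the hypothetical vbem_cluster and silhouette_values helpers from the earlier sketches; X_sil is the raw data for SilR or the transformed differences for SilT.

    import numpy as np

    def choose_K(X_sil, Y_fit, k_range=range(3, 16)):
        """Return the K whose clustering maximizes the mean Silhouette."""
        best_K, best_mean = None, -np.inf
        for K in k_range:
            labels = vbem_cluster(Y_fit, K).argmax(axis=1)  # hard assignments
            if len(np.unique(labels)) < 2:                  # degenerate fit
                continue
            mean_sil = silhouette_values(X_sil, labels).mean()
            if mean_sil > best_mean:
                best_K, best_mean = K, mean_sil
        return best_K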

To evaluate the actual clustering, a misclassification rate was calculated for each trial cluster. Since the "ground-truth" clustering was known a priori, this rate can be calculated as a sum of probabilities derived from the original data and the clustering results:

$$R_{s,t} = 1 - \sum_{j} P\!\left( \hat{c}_j \mid c_j \right) P\!\left( c_j \right), \tag{17}$$

where $P(\hat{c}_j \mid c_j)$ is the probability that computed cluster $\hat{c}_j$ corresponds to a priori cluster $c_j$ given that $c_j$ is in fact the correct cluster, $P(c_j)$ is the probability of a priori cluster $c_j$ occurring, and $R_{s,t}$ refers to the misclassification rate using statistic $s$ (KL, BIC, or one of the two Silhouette variants) for trial $t$. The summation lies in the range $[0, 1]$ and is equal to 1 (so that $R_{s,t} = 0$) only when the number of clusters is properly predicted and the calculated clusters match the a priori clusters. Thus, both under- and overprediction of clusters were penalized.
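A simple way to compute such a rate in practice is sketched below: predicted clusters are matched one-to-one to ground-truth clusters with the Hungarian algorithm, and the misclassification rate is one minus the matched fraction. This is a standard stand-in consistent with the definition in (17) rather than a transcription of it; note that over- or underpredicting the number of clusters is penalized automatically, since unmatched clusters contribute no correct assignments.

    import numpy as np
    from scipy.optimize import linear_sum_assignment

    def misclassification_rate(true_labels, pred_labels):
        """1 minus the best-case fraction of correctly assigned vectors
        under a one-to-one matching of predicted to true clusters."""
        true_labels = np.asarray(true_labels)
        pred_labels = np.asarray(pred_labels)
        t, p = np.unique(true_labels), np.unique(pred_labels)
        C = np.zeros((len(t), len(p)))
        for i, a in enumerate(t):
            for j, b in enumerate(p):
                C[i, j] = np.sum((true_labels == a) & (pred_labels == b))
        rows, cols = linear_sum_assignment(-C)   # maximize matched counts
        return 1.0 - C[rows, cols].sum() / len(true_labels)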

For the "controlled" test sequences, the combinations of VBEM + KL (V/KL), VBEM + BIC (V/BIC), VBEM + Silhouette (transformed data) (V/SilT), and VBEM + Silhouette (raw data) (V/SilR) all properly chose the optimum clustering for the two "low-noise" datasets, in all cases with no misclassification. For the six "high-noise" sets, V/KL and V/BIC were completely unable to choose the optimum clustering (lowest misclassification rate). In the case of V/SilT, the algorithm-chosen optimum was rarely the true optimum (2 out of 6 datasets). However, the chosen optimum was always very nearly optimal. Finally, V/SilR chose the optimum clustering 5 out of 6 datasets. The algorithm-chosen optimal clustering for both V/SilT and V/SilR showed a misclassification rate of 6 percent or less, while the misclassification rates for V/KL and V/BIC were often in the range of 15–35 percent. Figure Figure33 summarizes this data.

Figure 3
Misclassification rate versus N, high-noise data, K fixed.

For the "uncontrolled" tests, the above 4 algorithms were tested with the number of clusters unknown. Further, K-means clustering with Silhouette statistic (KM/SilT and KM/SilR) was also conducted for comparison. The results for the 6 "high-noise" datasets are summarized below.

Figure 4 shows a summary plot of the predicted number of clusters K versus dataset size N for all combinations. Note that V/SilR correctly identified the true number of clusters for all datasets. Also note that KM/SilT, KM/SilR, and V/SilT predicted the true number, or a value adjacent to it, for all datasets except test set 3. However, even though V/SilR correctly identified the true number for this dataset, it had equivalent optimum values at other candidate numbers as well, including 15. Given the poor performance of all combinations for this dataset, this suggests that for high-noise data such as this, the dataset size is insufficient to give good results.

Figure 4
K(pred) versus N, high-noise data.

V/KL and V/BIC both performed poorly on all datasets, in most cases overpredicting the number of clusters. As can be seen in Figure 4, this overprediction tended to increase with dataset size N. V/BIC resulted in a lower overprediction than V/KL.

Figure 5 shows a summary plot of misclassification rate versus dataset size N for the VBEM versus K-means comparison using Silhouette statistics only (both raw and difference). This plot shows the superior performance of V/SilR even more dramatically: while the misclassification rates for KM/SilT, KM/SilR, and V/SilT were generally on the order of 10–20%, V/SilR was very stable, generally between 3 and 4%.

Figure 5
Misclassification rate versus N, high-noise data, K unknown.

3.5. Test Results Conclusion

The VBEM algorithm can correctly cluster shape-based data even in the presence of fairly high amounts of noise when paired with the Silhouette statistic computed on the raw-data clusters (V/SilR). Further, V/SilR is robust in correctly predicting the number of clusters in the presence of noise. Its misclassification rate is superior to K-means using Silhouette statistics, as well as to VBEM using all other statistics. Because of this, V/SilR was expected to be the algorithm of choice for the experimental microarray data. However, to maintain the comparison, all four VBEM/statistic algorithms were tested.

3.6. Experimental E. coli Expression Data

The proposed approach for gene clustering on shape similarity was tested using time-series data from the University of Oklahoma E. coli Gene Expression Database resident at their Bioinformatics Core Facility (OUBCF) [24]. The exploration concentrated on the wild-type MG1655 strain during exponential growth on glucose. The data available consisted of 5 time-series log-ratio samples of 4389 genes.

The initial tests were run against genes identified as being from metabolic categories. Specifically, genes identified in the E. coli K-12 Entrez Genome database at the National Center for Biotechnology Information, US National Library of Medicine, National Institutes of Health (http://www.ncbi.nlm.nih.gov/) [25] (NIH) as being in categories C, G, E, F, H, I, and/or Q were chosen.

Because of the short sequence lengths, any gene with even a single invalid data point was removed from the set. With only 5 time samples to work with in each gene sequence, even a single missing point would have significant ramifications in the final output. The final set of genes used for testing numbered 1309.
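A sketch of this filtering step, assuming invalid points are encoded as NaN in the expression matrix (the database's actual missing-value encoding may differ):

    import numpy as np

    def drop_incomplete(expr, gene_ids):
        """Remove every gene whose profile contains any invalid (NaN) sample."""
        expr = np.asarray(expr, dtype=float)
        keep = ~np.isnan(expr).any(axis=1)       # True where all points valid
        kept_ids = [g for g, ok in zip(gene_ids, keep) if ok]
        return expr[keep], kept_ids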

In implementing the VBEM algorithm, initial values were first assigned to the prior hyperparameters. The algorithm was set to iterate until the change in the lower bound decreased below a small convergence threshold or became negative (in which case the prior iteration was taken as the end value), or until 200 iterations had been reached, whichever came first. The optimal number of clusters was arrived at by multiple runs of the algorithm with the predefined number of clusters $K$ varying from 3 to 15. The optimum $K$ was chosen in the same manner as for the test data sequences.

Figure 6 shows a summary of the final result of the algorithm. Each subfigure shows the mean shapes clustered by the particular algorithm/statistic combination. As can be seen from the figure, V/KL resulted in an overclassification of structure in the data, while the other three algorithms gave more consistent results. Accordingly, the V/KL clusters were removed from further analysis.

Figure 6
Mean data shapes. (a) V/KL, (b) V/BIC, (c) V/SilT, (d) V/SilR.

3.7. Validation of E. coli Expression Data Results

We validated the results of our tests using Gene Ontology (GO) enrichment analysis. To this end, the genes used in the analysis were tagged with their respective GO categories and analyzed within each cluster for overrepresentation of certain categories versus the "background" level of the population (in this case, the entire set of metabolic genes used). Again, the Entrez Genome database at NIH was used for the GO annotation information. As most of the entries enriched were from the Biological Process portion of the ontology, the analysis was restricted to those terms.

To perform the analysis, the software package Cytoscape (http://www.cytoscape.org/) [26] was used. Cytoscape offers access to a wide variety of plug-in analysis packages, including a GO enrichment analysis tool, BiNGO, which stands for Biological Network Gene Ontology (http://www.psb.ugent.be/cbd/papers/BiNGO/) [27].

To evaluate the clusters, we modified an approach used by Yuan and Li [28] to score the clusters based on the information content of the GO terms and the likelihood of enrichment ($p$-values). Unlike [28], however, a distance metric was not included in the calculations: because of the large cluster sizes involved, such distance calculations would have exacted a high computational overhead. Rather, the simpler approach of forming subclusters of adjacent enriched terms was chosen; that is, if two GO terms had a relationship to each other and were both enriched, they were placed in the same subcluster and their scores multiplied by the number of terms in the subcluster. Also, a large portion of the score of any term shared across more than one cluster was subtracted. This method rewarded large subclusters while penalizing numerous small subclusters and overlapping terms.

The scoring equation for a cluster $C$, consisting of $J$ subclusters $S_j$ each of size $n_j$, is given as

$$\mathrm{Score}(C) = \sum_{j=1}^{J} (n_j - 1) \sum_{g \in S_j} I_g \left( -\ln p^{\mathrm{val}}_g \right) \;-\; \gamma \sum_{g \in \mathrm{sh}(C)} m_g\, I_g \left( -\ln p^{\mathrm{val}}_g \right), \tag{18}$$

where $p_g$ is the probability of GO term $g$ being selected, $\ln p_g$ is the negative of the information content $I_g$ of the GO term, $p^{\mathrm{val}}_g$ is the enrichment $p$-value of GO term $g$, $\mathrm{sh}(C)$ is the set of terms in $C$ that also appear in other clusters, $m_g$ is the number of clusters sharing term $g$, and $\gamma$ is the (large) portion of a shared term's score that is deducted. Large subclusters are rewarded by larger values of $n_j$; subtracting 1 from $n_j$ compensates for the "baseline" score value, that is, the score a cluster would achieve if no terms were connected. The final term in the equation is the devaluation of any GO term shared by more than one cluster.

Given that the algorithm was expected to group related functions together, the expectation for the GO analysis was the creation of large, highly connected subclusters within each main gene cluster. Ideally, one such subcluster would subsume the entire cluster; however, a small number of large subclusters within each cluster would still validate the algorithm. The scoring equation (18) greatly rewards large, highly connected subclusters; in fact, given a cluster, the score is maximized by having all GO terms within that cluster connected within a single subcluster.
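A sketch of this scoring follows. The per-term score (information content times the negative log of the enrichment p-value) and the shared-term deduction fraction are stated as explicit assumptions; the paper's exact constants may differ.

    import math

    def cluster_score(subclusters, shared, frac=0.75):
        """Score one cluster per the description around eq. (18).
        subclusters: list of subclusters, each a list of enriched terms
        given as (p_term, p_value) pairs; shared: list of ((p_term,
        p_value), m) pairs for terms appearing in m > 1 clusters;
        frac: assumed portion of a shared term's score to deduct."""
        def term_score(p_term, p_value):
            # information content x enrichment strength (assumed form)
            return -math.log(p_term) * -math.log(p_value)
        score = sum((len(terms) - 1) * sum(term_score(*t) for t in terms)
                    for terms in subclusters)   # (n_j - 1) connectivity reward
        score -= frac * sum(m * term_score(*t) for t, m in shared)
        return score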

Figures 7, 8, and 9 show the results of the clustering using the three algorithms. Subclusters have been outlined for ease of identification. In some instances, nonenriched GO terms (colored white) have been removed for clarity. Visually, V/SilR is the best choice of the three: it has fewer overall clusters, and each cluster generally has fewer subclusters than V/SilT or V/BIC.

Figure 7
GO clusters resulting from V/SilR.
Figure 8
GO clusters resulting from V/SilT.
Figure 9
GO clusters resulting from V/BIC.

The clusters were scored using (18). Table 5 shows a summary of this analysis. As can be seen, V/SilR (3 clusters) far outscored both V/SilT (4 clusters) and V/BIC (5 clusters), in both aggregate and average cluster scores. Therefore, the conclusion is that V/SilR provides the best clustering performance.

Table 5
Summary scores from E. coli data analysis

4. Conclusion

Four combinations of VBEM algorithm and cluster statistics were tested. One of these, VBEM combined with the Silhouette statistic performed on the raw data clusters, clearly outperformed the other three in both simulated and real data tests. This method definitely shows promise in clustering time-series microarray data according to profile shape.

Appendices

A. Calculation of VBE Step

Let us assume we are on iteration $t+1$ and have $q_\theta^{(t)}$, with its posterior parameters, available from iteration $t$. Then,

$$q_{z_g}^{(t+1)}(z_g = k) = \frac{1}{C_g} \exp\!\left[ \psi(\lambda_k') - \psi\!\left( \sum_{k'=1}^{K} \lambda_{k'}' \right) + \frac{D}{2} \left( \psi(a_k') - \ln b_k' \right) - \frac{1}{2} \left( \frac{a_k'}{b_k'} \left\| \mathbf{y}_g - \boldsymbol{\mu}_k' \right\|^2 + \frac{D}{\kappa_k'} \right) \right], \tag{A.1}$$

where $C_g$ is the normalizing constant ensuring

$$\sum_{k=1}^{K} q_{z_g}^{(t+1)}(z_g = k) = 1, \tag{A.2}$$

where $D = T - 1$ is the length of each difference vector ($T$: number of time samples), $G$ is the number of genes (index $g$), $\psi(\cdot)$ is the digamma function, and all primed parameters are calculated from the VBM step.

B. Calculation of VBM Step

Now we assume we have $q_{\mathbf{z}}^{(t+1)}$ from the prior VBE step. Writing $r_{gk} = q_{z_g}^{(t+1)}(z_g = k)$, the variational posterior over the parameters retains the conjugate form

$$q_\theta^{(t+1)}(\theta) = \prod_{k=1}^{K} \mathrm{NIG}\!\left( \boldsymbol{\mu}_k, \sigma_k^2;\; \boldsymbol{\mu}_k', \kappa_k', a_k', b_k' \right) \cdot \mathrm{Dir}\!\left( \boldsymbol{\pi};\; \lambda_1', \ldots, \lambda_K' \right), \tag{B.1}$$

where $N_k = \sum_{g=1}^{G} r_{gk}$; $\bar{\mathbf{y}}_k = N_k^{-1} \sum_{g=1}^{G} r_{gk}\, \mathbf{y}_g$; $\kappa_k' = \kappa + N_k$; $\boldsymbol{\mu}_k' = (\kappa \mu_0 \mathbf{1} + N_k \bar{\mathbf{y}}_k)/\kappa_k'$; $a_k' = a + N_k D/2$; $b_k' = b + \frac{1}{2} \sum_{g=1}^{G} r_{gk} \left\| \mathbf{y}_g - \bar{\mathbf{y}}_k \right\|^2 + \frac{\kappa N_k}{2 \kappa_k'} \left\| \bar{\mathbf{y}}_k - \mu_0 \mathbf{1} \right\|^2$; $\lambda_k' = \lambda_k + N_k$; $\mathrm{NIG}$: Normal-Inverse-Gamma distribution; $\mathrm{Dir}$: Dirichlet distribution.

C. Calculation of Lower Bound $\mathcal{F}_K$

Once $q_{\mathbf{z}}^{(t+1)}$ and $q_\theta^{(t+1)}$ have been calculated, we calculate the lower bound using the following decomposition:

$$\mathcal{F}_K = \mathcal{L}_Y + \mathcal{L}_\theta + \mathcal{L}_{\mathbf{z}}, \tag{C.1}$$

$$\mathcal{L}_Y = \mathbb{E}_{q_{\mathbf{z}} q_\theta}\!\left[ \ln p(Y, \mathbf{z} \mid \theta, K) \right], \tag{C.2}$$

$$\mathcal{L}_\theta = -\mathrm{KL}\!\left( q_\theta(\theta) \,\middle\|\, p(\theta \mid K) \right), \tag{C.3}$$

$$\mathcal{L}_{\mathbf{z}} = -\sum_{\mathbf{z}} q_{\mathbf{z}}(\mathbf{z}) \ln q_{\mathbf{z}}(\mathbf{z}), \tag{C.4}$$

where $\mathrm{KL}(\cdot \,\|\, \cdot)$ denotes the Kullback-Leibler divergence and the expectations are taken with respect to the current $q$ distributions.

Acknowledgment

This work is supported in part by NSF Grant CCF-0546345. Dr. Tim Lilburn has been instrumental with his assistance and guidance.

References

[1] Jiang D, Tang C, Zhang A. Cluster analysis for gene expression data: a survey. IEEE Transactions on Knowledge and Data Engineering. 2004;16(11):1370–1386. doi:10.1109/TKDE.2004.68.
[2] Asyali MH, Colak D, Demirkaya O, Inan MS. Gene expression profile classification: a review. Current Bioinformatics. 2006;1(1):55–73. doi:10.2174/157489306775330615.
[3] Bar-Joseph Z, Gerber GK, Gifford DK, Jaakkola TS, Simon I. Continuous representations of time-series gene expression data. Journal of Computational Biology. 2003;10(3-4):341–356. doi:10.1089/10665270360688057.
[4] Ma P, Castillo-Davis CI, Zhong W, Liu JS. A data-driven clustering method for time course gene expression data. Nucleic Acids Research. 2006;34(4):1261–1269. doi:10.1093/nar/gkl013.
[5] Rueda L, Bari A, Ngom A. Clustering time-series gene expression data with unequal time intervals. In: Transactions on Computational Systems Biology X. Vol. 5410 of Lecture Notes in Computer Science. Berlin, Germany: Springer; 2008. pp. 100–123.
[6] Yuan Y, Li C-T. Unsupervised clustering of gene expression time series with conditional random fields. pp. 571–576.
[7] Kim K, Zhang S, Jiang K, et al. Measuring similarities between gene expression profiles through new data transformations. BMC Bioinformatics. 2007;8, article 29:1–14.
[8] Wen X, Fuhrman S, Michaels GS, et al. Large-scale temporal gene expression mapping of central nervous system development. Proceedings of the National Academy of Sciences of the United States of America. 1998;95(1):334–339. doi:10.1073/pnas.95.1.334.
[9] Möller-Levet CS, Klawonn F, Cho K-H, Yin H, Wolkenhauer O. Clustering of unevenly sampled gene expression time-series data. Fuzzy Sets and Systems. 2005;152(1):49–66. doi:10.1016/j.fss.2004.10.014.
[10] Balasubramaniyan R, Hüllermeier E, Weskamp N, Kämper J. Clustering of gene expression data using a local shape-based similarity measure. Bioinformatics. 2005;21(7):1069–1077. doi:10.1093/bioinformatics/bti095.
[11] Phang TL, Neville MC, Rudolph M, Hunter L. Trajectory clustering: a non-parametric method for grouping gene expression time courses, with applications to mammary development. pp. 351–362.
[12] Tjaden B. An approach for clustering gene expression data with error information. BMC Bioinformatics. 2006;7, article 17:1–15.
[13] Ben-Hur A, Elisseeff A, Guyon I. A stability based method for discovering structure in clustered data. pp. 6–17.
[14] Dimitriadou E, Dolničar S, Weingessel A. An examination of indexes for determining the number of clusters in binary data sets. Psychometrika. 2002;67(1):137–159. doi:10.1007/BF02294713.
[15] Dudoit S, Fridlyand J. A prediction-based resampling method for estimating the number of clusters in a dataset. Genome Biology. 2002;3(7):1–21.
[16] Tibshirani R, Walther G, Hastie T. Estimating the number of clusters in a data set via the gap statistic. Journal of the Royal Statistical Society, Series B. 2001;63(2):411–423. doi:10.1111/1467-9868.00293.
[17] Sun H, Sun M. Trail-and-error approach for determining the number of clusters. pp. 229–238. (Lecture Notes in Computer Science).
[18] Wild DL, Rasmussen CE, Ghahramani Z. A Bayesian approach to modeling uncertainty in gene expression clusters.
[19] Xu Y, Olman V, Xu D. Minimum spanning trees for gene expression data clustering. Genome Informatics. 2001;12:24–33.
[20] Yan M, Ye K. Determining the number of clusters using the weighted gap statistic. Biometrics. 2007;63(4):1031–1037. doi:10.1111/j.1541-0420.2007.00784.x.
[21] Beal MJ, Ghahramani Z. The variational Bayesian EM algorithm for incomplete data: with application to scoring graphical model structures. pp. 453–464.
[22] Rousseeuw PJ. Silhouettes: a graphical aid to the interpretation and validation of cluster analysis. Journal of Computational and Applied Mathematics. 1987;20:53–65. doi:10.1016/0377-0427(87)90125-7.
[23] Schwarz G. Estimating the dimension of a model. Annals of Statistics. 1978;6(2):461–464. doi:10.1214/aos/1176344136.
[24] The University of Oklahoma E. coli Gene Expression Database, http://chase.ou.edu/oubcf/
[25] The Entrez Genome Database, Escherichia coli K-12 data. National Center for Biotechnology Information, National Library of Medicine, National Institutes of Health. http://www.ncbi.nlm.nih.gov/
[26] Shannon P, Markiel A, Ozier O, et al. Cytoscape: a software environment for integrated models of biomolecular interaction networks. Genome Research. 2003;13(11):2498–2504. doi:10.1101/gr.1239303.
[27] Maere S, Heymans K, Kuiper M. BiNGO: a Cytoscape plugin to assess overrepresentation of gene ontology categories in biological networks. Bioinformatics. 2005;21(16):3448–3449. doi:10.1093/bioinformatics/bti551.
[28] Yuan Y, Li C-T. Probabilistic framework for gene expression clustering validation based on gene ontology and graph theory. pp. 625–628.
