- BMC Bioinformatics, v.12; 2011 (PMC3056725)

# Multiclass classification of microarray data samples with a reduced number of genes

Leonardo Ornella,^{1,2} Pilar Bulacio,^{1,2} and Laura Angelone^{1,2}

^{1}CIFASIS-Conicet Institute, Bv. 27 de Febrero 210 Bis, Rosario, Argentina

^{2}Facultad de Cs. Exactas e Ingeniería, Riobamba 245 Bis, National University of Rosario, Argentina

Corresponding author.

## Abstract

### Background

Multiclass classification of microarray data samples with a reduced number of genes is a rich and challenging problem in Bioinformatics research. The problem gets harder as the number of classes is increased. In addition, the performance of most classifiers is tightly linked to the effectiveness of mandatory gene selection methods. Critical to gene selection is the availability of estimates about the maximum number of genes that can be handled by any classification algorithm. Lack of such estimates may lead to either computationally demanding explorations of a search space with thousands of dimensions or classification models based on gene sets of unrestricted size. In the former case, unbiased but possibly overfitted classification models may arise. In the latter case, biased classification models unable to support statistically significant findings may be obtained.

### Results

A novel bound on the maximum number of genes that can be handled by binary classifiers in binary mediated multiclass classification algorithms of microarray data samples is presented. The bound suggests that high-dimensional binary output domains might favor the existence of accurate and sparse binary mediated multiclass classifiers for microarray data samples.

### Conclusions

A comprehensive experimental work shows that the bound is indeed useful to induce accurate and sparse multiclass classifiers for microarray data samples.

## Background

A number of multiclass classification methods for microarray data have been developed in recent years [1,2]. However, their ability to scale with the number of classes and to provide accurate and sparse multiclass classification models essentially free of model selection-bias remain challenging issues [3,4]. Sparse multiclass classification models of microarray data samples are useful; they involve a reduced number of input genes and thus are easy to compute with and to interpret [5].

In this paper, a new gene selection method, valid for binary mediated multiclass classification approaches to microarray data samples and able to implicitly model a gene selection sparsity constraint, is presented. We rely on output coding [6] methods allowing the binary reduction of *M*-multiclass classification into *n* binary classification tasks. We assume a model of independent genes, independent binary classifiers and a principle of information content equipartition among binary classifiers to derive a bound on the maximum number of genes that can be handled by binary classifiers in binary mediated multiclass classification approaches to microarray data samples. The derived bound scales with the inverse of *n*, thus providing a way to tackle the computational complexity of finding accurate and sparse multiclass classification models of microarray data samples: just increase the number *n* of binary classifiers and perform *bounded* optimum gene selection on lists of predictive genes for individual binary classifiers. In other words, the blessing face of dimensionality might be the solution to the problem of accurate and sparse multiclass classifiers of microarray data samples; we just need to guarantee the induction of a large number *n* of independent binary classifiers. However, the induction of a large number *n* of independent binary classifiers by means of output coding methods may be hard to achieve when training data is scarce, as in microarray data analysis. Hence, we may be forced to accept the best *n* with regard to the key independence factor [7,8] of general output coding methods. Provided the best *n* is sufficiently large, the design of accurate and sparse multiclass classifiers of microarray data samples becomes feasible.

Output coding embodies the design of the well-known One Against All (OAA) [9] multiclass classifiers, allowing the division of *M*-multiclass classification problems into *n* = *M* binary classification tasks, each binary task dealing with the problem of discriminating a given class against the others. A further generalization of OAA classifiers leads to the design of Error Correcting Output Coding (ECOC) classifiers [10,11], allowing the division of *M*-multiclass classification problems into *n* binary classification tasks, *n* being determined by the size of some error correcting code. ECOC classifiers can then be used to explore the feasibility of accurate and sparse multiclass classifiers of microarray data samples by letting *n* approach infinity. In this paper, the recently introduced [12] class of ECOC classifiers based on LDPC codes [13] is considered. Hence, ECOC classifiers based on LDPC codes of size *n* up to 15·*log*_{2}*M* and OAA classifiers of size *n* = *M* are evaluated. For OAA as well as ECOC classifiers, binary linear Support Vector Machine (SVM) [14] classifiers are assumed. For the purposes of selecting the most important genes at core SVMs, univariate ranking information [15] based on the widely used S2N metric [16-18] is assumed. Using the above setting, a complete experimental protocol is presented for the design of accurate and sparse multiclass classifiers for microarray data samples essentially free of model selection-bias [19-22]. Our approach is evaluated on 8 benchmark microarray datasets. Experimental results confirm the feasibility of the proposed method.

## Results and Discussion

### An upper bound on the number of genes per binary classifier

How much information can a set of *p* independent genes convey about a set of *M* phenotypes? Being aware of such a fundamental limitation could be crucial in the design of accurate and sparse multiclass classifiers of microarray data samples. Let *S* be a microarray dataset comprising *q* samples from *M* ≥ 3 classes, each sample defined by the gene expression measurements of *p* genes (*p* ≫ *q*). Hence, the average information content per class sample in *S* can be upper bounded by *H_{M}* = *log*_{2}*M*.

In addition, let us assume that genes behave as a collection of *p* independent identically distributed *binary* random variables, i.e., a kind of probabilistic boolean model of gene expression is considered [23]. Hence, each gene is in state 1 (expressed) with probability *f* and in state 0 (not expressed) with probability 1 - *f*, each state representing gene activity above or below some threshold for an effect. Thus, in this model of gene expression, each gene conveys on average *H*(*f*) = - *f* · log_{2} *f* - (1 - *f*) · log_{2}(1 - *f*) bits of information. Furthermore, let us assume an output coding strategy over *S* able to induce *n* independent binary datasets and corresponding binary classifiers. Hence, under a principle of information content equipartition, ${H}_{b}={H}_{M}/n=\frac{lo{g}_{2}M}{n}$ bits of information will be available at each binary classifier. Finally, let us assume that each binary classifier is allowed to select a fraction *Q* of the complete set of genes. Hence, after the selection of *Q* · *p* genes, at most *Q* · *p* · *H*(*f*) bits of information will be available at each binary classifier and this quantity cannot exceed *H_{b}*. Solving *Q* · *p* · *H*(*f*) ≤ *H_{b}* for the maximum admissible fraction of genes yields

$${Q}_{max}=\frac{lo{g}_{2}M}{n\cdot p\cdot H(f)}\phantom{\rule{2em}{0ex}}\text{(1)}$$

Eq. 1 nicely estimates the maximum fraction of genes (*Q_{max}*) that can be selected by any binary classifier in terms of the main parameters characterizing any binary mediated multiclass classification problem plus an unknown parameter *f*. To estimate *f*, we now turn to the problem of estimating the probability *f* that a biased coin will come up heads in a sequence of *q* independent coin tosses provided *k* heads have been observed. The maximum likelihood estimate of *f*, i.e., the value of *f* with the largest probability for the observed data, is given by *k*/*q*. To obtain *k*, we just need to count the number of expressed genes across the collection of *q* samples. However, aiming to obtain a more general bound, we would like to avoid overwhelming data dependent counts. If we further assume that averages of gene expression over a sufficiently large population of individuals are equal to averages over many genes, i.e., an ergodic behavior of genes [24] is considered, the fraction *f* should equal the fraction of genes *k**/*p* that are expressed in any individual. Assuming that *k**/*p* < 0.5 (otherwise not expressed genes can be considered) and recalling that *H*(*f*) is a monotonically increasing function in [0, 0.5], we get *H*(*k*/*q*) ≈ *H*(*k**/*p*) ≥ *H*(1/*p*) and the following upper bound on *Q* (*Q_{max}*) can be derived

$${Q}_{max}=\frac{lo{g}_{2}M}{n\cdot p\cdot H\left(\frac{1}{p}\right)}\phantom{\rule{2em}{0ex}}\text{(2)}$$

Overall, Eq. 2 suggests that the computational complexity of finding sparse multiclass classifiers of microarray data samples could be overcome with the induction of a large number *n* of *independent* binary classifiers, a requirement which gets easier to satisfy as the number of training samples increases. The evolution of *Q_{max}* with respect to *n* on the benchmark microarray datasets used in this paper is shown in Figure 1. Before moving on to the next subsection, we notice that a more formal derivation of *Q_{max}* is given in the Appendix.
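The bound is cheap to evaluate. The following minimal Python sketch (not part of the original experimental protocol; the *M* and *p* values are taken from the GCM dataset description in the Methods section) computes *Q_{max}* and the corresponding maximum number of genes per binary classifier for a few classifier sizes *n*:

```python
import math

def binary_entropy(f):
    """Shannon entropy H(f), in bits, of a binary random variable."""
    return -f * math.log2(f) - (1 - f) * math.log2(1 - f)

def q_max(M, p, n):
    """Eq. 2: maximum fraction of genes selectable per binary classifier."""
    return math.log2(M) / (n * p * binary_entropy(1.0 / p))

# Illustrative GCM-like setting: M = 14 classes, p = 16063 genes.
M, p = 14, 16063
for n in (M, 5 * 4, 15 * 4):  # OAA size, and two ECOC sizes (log2(14) rounds to 4)
    print(n, q_max(M, p, n), round(p * q_max(M, p, n)))
```

As expected, the bound shrinks as *n* grows, so larger output codes force sparser binary classifiers.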

### Bounded optimum S2N gene selection

For a fixed *n*, we now face the problem of finding the optimum number of genes in the list of the top *p* · *Q_{max}*(*n*) most discriminative genes for each binary classifier. Such an optimum will follow from a partial search scheme and thus we provide no guarantee of identifying the optimal gene set [25]. But as *n* increases, finding such an optimum implies finding a sparse representation of a high dimensional feature space from a small number of training samples. Because sparsity is a key structural property of most genomic studies involving disease classification, we conjecture that the proposed gene selection method could indeed be a solution to the problem of designing accurate and sparse multiclass classifiers of microarray data samples.

Letting *n* approach infinity cannot be realized in practice. Hence, some bounded exploration of the *n* dimension must be assumed in advance. In this paper, the exploration of the *n* dimension from *n_{min}* = *log*_{2}*M* + 2 up to *n_{max}* = 15·*log*_{2}*M* is considered. Notice that *n* = 15·*log*_{2}*M* + 1 is not considered; it would entail the use of parity codes only able to detect (but not correct) binary classifier errors. For practical *n* ranges, the exhaustive exploration of the *p* · *Q_{max}*(*n*) most important genes for each binary classifier may still be too computationally demanding. Thus, a multi-scale resolution approach for the *Q* dimension was devised. Firstly, the *Q* dimension was coarsely quantized with a base 10 logarithmic scale, i.e., *Q* ∈ [0.001, 0.01, 0.1, 1] was assumed. Secondly, each logarithmic segment, except the last one, was linearly quantized into 10 equal parts; the last logarithmic segment was quantized into 100 equal parts. Finally, genes at each binary classifier were ranked according to their *S*2*N* value (see Methods for details) with respect to the response variable and mapped to the formerly quantized *Q* dimension for further selection. As a result, for a fixed computational budget, more computational effort can be put into the exploration of highly discriminative genes, i.e., top ranking genes, than into those of poor discriminative power.
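The multi-scale quantization above can be sketched as follows. This is one hypothetical reading of the scheme (each logarithmic segment split linearly, the last one more finely); the exact grid used by the authors may differ at segment boundaries:

```python
import numpy as np

def multiscale_q_grid():
    """Candidate Q values over the base-10 logarithmic segments
    [0.001, 0.01], [0.01, 0.1] and [0.1, 1]: the first two segments
    are split linearly into 10 parts, the last into 100 parts."""
    grid = []
    grid.extend(np.linspace(0.001, 0.01, 10, endpoint=False))
    grid.extend(np.linspace(0.01, 0.1, 10, endpoint=False))
    grid.extend(np.linspace(0.1, 1.0, 100))
    return np.array(grid)
```

The grid is dense near small *Q*, so most candidate models involve very few genes.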

### Results on Real Data

We first note that the application of the Shapiro-Wilk test to the empirical distributions of performance measures (classification error, overall fraction of selected genes and gene selection stability) of either ECOC or OAA classifiers frequently rejected the null hypothesis of normally distributed data at the 0.05 *α *level of significance, thus justifying the use of the more conservative Kolmogorov-Smirnov (KS) and Mann-Whitney (MW) U tests.

Table 1 shows the classification performance of OAA and ECOC classifiers of size *n* up to *η*·*log*_{2}*M* (*η* = 5, 10, 15) over 200 Montecarlo 4:1 train-test partitions. Regardless of the *η* choice, ECOC and OAA classifiers attain comparable classification performance in 5 out of 8 datasets (*p* > 0.3, two-sided KS tests). Stochastic orderings favorable to OAA classifiers are observed in the SRBCT, NCI60 and GCM datasets (*p* < 0.05, one-sided KS tests; one-sided MW tests consistent). In particular, OAA classifiers perform remarkably well on the hard NCI60 and GCM datasets.

Table 2 shows the overall number of genes selected by OAA and ECOC classifiers of size *n* up to *η*·*log*_{2}*M* (*η* = 5, 10, 15) under bounded optimum *S*2*N* gene selection over 200 Montecarlo 4:1 train-test partitions. Moving from *η* = 5 to *η* = 15 gradually reduces the dimensionality of ECOC classifiers. The strongest reduction effect occurs when moving from *η* = 5 to *η* = 10, suggesting *η* = 10 as a practical upper limit for the exploration of the *n* dimension with ECOC classifiers. However, the extent of ECOC dimensionality reduction is insufficient to improve upon naive OAA classifiers. Regardless of the *η* choice, significant differences in the number of genes selected by ECOC and OAA classifiers are observed in all datasets (*p* < 0.05, two-sided KS tests). Stochastic orderings favorable to ECOC classifiers are observed in the Lymphoma and NCI60 datasets (*p* > 0.2, one-sided KS tests; *p* < 0.01, one-sided MW tests).

Table 3 shows the stability of gene selection attained by OAA and ECOC classifiers of size up to *η*·*log*_{2}*M* (*η* = 5, 10, 15) under bounded optimum *S*2*N* gene selection over 200 Montecarlo 4:1 train-test partitions. Regardless of the *η* choice, significant differences in the stability of gene selection attained by ECOC and OAA classifiers are observed (*p* < 2.2*e* - 16, two-sided KS tests). Stochastic orderings favorable to ECOC classifiers are observed in the Lymphoma, SRBCT and Su datasets (*p* > 0.9, one-sided KS tests; *p* < 2.2*e* - 16, one-sided MW tests); ambiguous orderings are observed in the Brain, GCM RM and GCM datasets. Remarkably, the stability of gene selection attained by ECOC classifiers is only slightly reduced when moving from *η* = 5 to *η* = 15.

For the sake of completeness, we also report the performance of OAA and ECOC classifiers of size at most *η*·*log*_{2}*M* (*η* = 5, 10, 15) on two benchmark microarray datasets with a public train-test partition (see Table 4). Results agree with the trends of the classification error observed in Montecarlo evaluations. Although both ECOC and OAA classifiers seem to be highly effective on the GCM RM dataset, suggesting that ECOC classifiers may be worth exploring in such a case, only OAA classifiers perform well on the GCM dataset. The GCM RM dataset is just a subsample of the GCM dataset to which a more robust preprocessing protocol has been applied, so that fewer samples, fewer classes and fewer genes than in the original dataset are involved. These results therefore raise the question of to what extent specific preprocessing protocols could be affecting the strength of gene selection attainable with ECOC classifiers.

## Conclusions

The divide and conquer approach to the design of multiclass classifiers for microarray data samples which we have presented offers the hope that accurate and sparse multiclass classifiers can be constructed without incurring undesirable forms of gene selection bias hidden in the selection of optimal gene subsets of restricted or unrestricted size [26]. Generalized binary reductions of *M*-multiclass classification problems into *n* binary classification tasks and bounded explorations of the resulting gene spaces are advised to accomplish this objective. At each binary classifier, the maximum number of genes that can be selected scales with the inverse of *n*, thus providing a way to accomplish optimum gene selection at affordable computational cost, provided *n* is sufficiently large.

In this paper, the power of OAA and ECOC binary reductions in the design of accurate and sparse multiclass classifiers for microarray data samples has been evaluated. Without loss of generality, we have restricted ourselves to the class of ECOC classifiers based on LDPC codes, linear SVM binary classifiers and univariate *S*2*N* gene selection. Experimental results show that dimensionality exchange between the input and output domains of binary mediated multiclass classifiers of microarray data samples is indeed possible: the larger the size of candidate ECOC classifiers, the greater the chance of selecting smaller sets of genes. Although promising, the dimensionality reduction performance exhibited by ECOC (LDPC) classifiers is not enough to definitively improve upon naive OAA classifiers, which remain the best practical option.

From an overall view, experimental results suggest that improving the dimensionality reduction ratio of OAA classifiers with ECOC classifiers may not be as easy as it seems. We note, however, that a consensus approach to gene selection and classification over a set of diverse ECOC classifiers under bounded optimum gene selection could finally boost their dimensionality reduction factor beyond that of OAA classifiers. Briefly, provided individual ECOC solutions are good enough compared to OAA classifiers, a consensus approach to gene selection over a set of diverse ECOC classifiers should preserve the most relevant genes and reject a great proportion of irrelevant ones. Since ECOC classifiers based on LDPC codes seem to be close neighbors of their OAA counterparts, this hypothesis will be the focus of future research. Finally, further dimensionality reduction improvements may still be attainable with more elaborate forms of gene selection like SVM-RFE [27].

Overall, our results provide evidence that bounded optimum gene selection in high dimensional binary output domains induced by either OAA or ECOC classifiers may be a solution for the problem of accurate multiclass classification of microarray data samples based on a reduced number of genes.

## Methods

To keep the paper self-contained, in this section we briefly review the design of ECOC classifiers based on LDPC codes and then proceed to describe the benchmark microarray data and the main points of our experimental protocol. The introduction of error correcting codes in the design of ECOC classifiers aims at the automatic recovery of binary classifier errors leading to erroneous multiclass predictions. For this purpose, an ECOC code must first be defined. An ECOC code is a binary matrix of size *M* by *n*, the *i*-*th* row defining the binary encoding for the *i*-*th* class label, *i* = 1,..., *M*, and the *j*-*th* column defining the binary split to be learned by the *j*-*th* core binary classifier, *j* = 1,..., *n*. Since codewords of length $n={\scriptscriptstyle \frac{\lceil (lo{g}_{2}M)\rceil}{R}}$, 0 < *R* < 1, are required for redundantly encoding *k* = *log*_{2}*M* bits of useful class label information, ECOC classifiers entail output designs of logarithmic complexity with respect to *M*, which can be an advantage when *M* is rather large [28]. As noted by [29], ECOC classifiers based on random ECOC codes are asymptotically Bayes optimal, i.e., they approximate the minimum possible misclassification error, provided core binary classifiers are Bayes classifiers themselves. As noted by [30], the SVM paradigm efficiently approximates the Bayes classification rule. Hence, core binary classifiers were implemented with linear SVMs, a class of binary classifiers that finds the hyperplane that best separates training samples having different class memberships [31], the trade-off between model complexity and empirical error being determined by the constant complexity hyperparameter *C* > 0. However, regarding the construction of the ECOC coding matrix, we decided to use LDPC codes instead of random codes.
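To make the output coding idea concrete, here is a minimal Python sketch of the OAA special case: the coding matrix is the *M* × *M* identity, and a multiclass prediction is recovered by minimum Hamming distance decoding of the binary classifiers' outputs. Hard decoding is shown for simplicity; the paper itself uses soft decoding of SVM outputs:

```python
import numpy as np

def oaa_code(M):
    """One-Against-All coding matrix: M codewords of length n = M."""
    return np.eye(M, dtype=int)

def decode(code, r):
    """Assign the class whose codeword is closest, in Hamming distance,
    to the vector r of binary classifier outputs."""
    return int(np.argmin(np.abs(code - r).sum(axis=1)))

code = oaa_code(4)
print(decode(code, np.array([0, 1, 0, 0])))  # -> 1
```

An ECOC classifier replaces the identity with a redundant *M* × *n* matrix, so that some binary classifier errors can be corrected rather than merely tolerated.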

A key problem with conventional ECOC classifiers based on random codes is that randomness inhibits the systematic control of independence between binary classifiers as *n* approaches infinity. A possible way to overcome this problem is to construct large ECOC classifiers from a number of small ECOC classifiers connected via shared binary classifiers. Small constituent ECOC classifiers able to locally control the key independence factor regardless of the size *n* of the overall ECOC classifier can be easily designed, for example with simple parity codes. Provided the connectivity profile of constituent ECOC classifiers and binary classifiers remains sparse, the overall ECOC design can be nicely interpreted in terms of the design of LDPC codes.

Briefly, LDPC codes are linear block codes obtained from sparse *random* bipartite graphs subject to sparsity constraints allowing a divide and conquer interpretation of generated ECOC classifiers [12]. Let *G* be a bipartite graph with *n* left nodes (called message nodes) and *m* right nodes (called check nodes). If the *n* message nodes are associated with the *n* coordinates of codewords **c** defined as those vectors (*c*_{1},..., *c_{n}*) satisfying the constraint that, for every check node, the sum of its neighboring positions among the message nodes is zero, then *G* models a linear code of size *n* which can protect at least *k* = *n* - *m* bits of information and whose structure can be dissected into *m* simple parity codes. In addition, if the connectivity profile of *G* is sparse, i.e., each codeword bit is constrained by *j* ≪ *m* parity codes and each parity code constrains *u* ≪ *n* codeword bits, then the corresponding linear code turns out to be an LDPC code. The sparsity of the graph structure is a key property in the design of efficient LDPC decoding algorithms for a variety of channel models. A channel model subsumes our prior knowledge about the statistics of binary errors. In this paper, the iterative message passing decoding algorithm described in [13] for the Additive White Gaussian Noise channel is used. A factor graph [32] model of a typical LDPC code is shown in Figure 2.

The construction of ECOC classifiers based on LDPC codes is straightforward once the bipartite graph model of the underlying LDPC code is given. In factor graph terms, we just need to associate right message nodes with ideal binary classifier predictions *c_{i}* and left check nodes with constituent ECOC classifiers constructed from simple parity codes. To complete the factor graph model of an ECOC-LDPC classifier, message nodes *r_{i}* modeling practical binary classifier predictions and check nodes *f_{i}* modeling prior statistical knowledge about pairs (*c_{i}*, *r_{i}*) ("channel functions") must be introduced. A request for an ECOC prediction on a set of input features **x** starts with the computation of a corrupted codeword **r**(**x**) by the set of *n* binary classifiers. Assuming a suitable channel model specified by check nodes *f_{i}*, the corrupted codeword **r**(**x**) is given to an iterative message passing decoding algorithm for the computation of a hopefully good estimate $\widehat{c}(x)$ of the unknown codeword **c**(**x**) encoding the unknown class label *y* associated with **x**. Remarkably, the computation of $\widehat{c}(x)$ can be fully described as a message passing algorithm over the ECOC-LDPC factor graph. In addition to convenient graphical $\widehat{c}(x)$ computation, ECOC-LDPC factor graphs also allow for the seamless integration of general bounded gene selection strategies. We just need to add message nodes *x_{k}*, *k* = 1,..., *p*, modeling gene expression behavior, check nodes *L_{i}*, *i* = 1,..., *n*, modeling practical binary classifiers, and a sparse connectivity profile ensuring that at each *L_{i}* the number *v* of incident edges (selected genes) is no more than $p\cdot {Q}_{max}\approx {\scriptscriptstyle \frac{lo{g}_{2}M}{n\cdot H({\scriptscriptstyle \frac{1}{p}})}}$, in agreement with Eq. 2.
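The parity-check view can be sketched in a few lines of Python. The matrix `H` below is a small hypothetical parity-check matrix for a toy code (not one of the codes used in the paper), with *n* = 8 message nodes and *m* = 4 check nodes, so *k* = *n* - *m* = 4 information bits; a vector is a codeword exactly when every check node's neighborhood sums to zero modulo 2:

```python
import numpy as np

# Hypothetical sparse parity-check matrix: 4 check nodes over 8 message nodes.
H = np.array([
    [1, 1, 0, 1, 0, 0, 0, 1],
    [0, 1, 1, 0, 1, 0, 1, 0],
    [1, 0, 1, 0, 0, 1, 0, 1],
    [0, 0, 0, 1, 1, 1, 1, 0],
], dtype=int)

def is_codeword(c):
    """c is a codeword iff H·c = 0 (mod 2), i.e., all parity checks hold."""
    return not np.any(H.dot(c) % 2)

print(is_codeword(np.zeros(8, dtype=int)))  # the all-zero vector: True
```

In the ECOC-LDPC construction, each row of `H` plays the role of one constituent parity-code ECOC classifier, and decoding amounts to message passing over the corresponding bipartite graph.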

**The architecture of an ECOC-LDPC classifier under bounded gene selection**. Right squares represent constituent ECOC classifiers induced from simple parity codes, left squares represent practical binary classifiers, rectangles represent "channel" functions, ...

### Microarray Datasets

Eight cancer microarray data sets were used in the evaluation of binary mediated multiclass classification with bounded optimum *S*2*N *gene selection. The **Lymphoma **dataset [33] consists of 62 samples of a specialized cDNA chip spanning *M *= 3 subtypes of Diffuse large B-cell lymphoma, each sample defined by the expression of *p *= 4026 genes. Samples in the Lymphoma dataset are highly imbalanced: 42 samples of diffuse large B-cell lymphoma, 9 of follicular lymphoma and 11 of chronic lymphocytic leukemia. Original data is available at http://llmpp.nih.gov/lymphoma/data/figure1. In this study, a preprocessed dataset version compiled by [34] based on [35] was used.

The Small Round Blue Cell Tumors (**SRBCT**) dataset [36] consists of 63 samples of a specialized cDNA chip spanning *M *= 4 subtypes of small round blue cell tumors of childhood, each sample defined by the expression of *p *= 2308 genes. Samples are distributed as follows: 12 samples of neuroblastoma, 20 samples of rhabdomyosarcoma, 8 samples of non-Hodgkin lymphoma and 23 samples of the Ewing family of tumors. In this study, a preprocessed dataset version available at http://research.nhgri.nih.gov/microarray/Supplement/index.html was used.

The **Brain **dataset [37] consists of 42 samples of the Affymetrix HuGeneFL chip spanning *M *= 5 tumor classes of the central nervous system, each sample defined by the expression of *p *= 5597 genes. Samples are distributed as follows: 10 medulloblastomas, 10 malignant gliomas, 10 atypical teratoid/rhabdoid tumors (AT/RTs), 8 primitive neuro-ectodermal tumors (PNETs) and 4 human cerebella. In this study, the original dataset version (Dataset A) was used. Expression values based on average difference units were computed using the Affymetrix GENECHIP MAS 4.0 analysis software. This dataset is available at http://www.broadinstitute.org/mpr/CNS/.

The **NCI60 **dataset [35] consists of 61 samples of a specialized cDNA chip spanning *M *= 8 tumor classes, each sample defined by the expression of *p *= 5244 genes. Samples are distributed as follows: 7 breast, 5 central nervous system, 7 colon, 6 leukemia, 8 melanoma, 9 non-small cell lung carcinoma, 6 ovarian and 9 renal tumors. Original data is available at http://genome-www.stanford.edu/nci60. In this study, a preprocessed dataset version compiled by [34] based on [35] was used.

The **Staunton **dataset [38] consists of 60 samples of the Affymetrix Hu6800 chip spanning *M *= 9 classes of tumors, each sample defined by the expression of *p *= 5726 genes. Expression values based on average difference units were computed using the Affymetrix GENECHIP MAS 4.0 analysis software. In this study, a preprocessed dataset version compiled by [1], involving the rescaling of gene expression measurements to the interval [0, 1], was used. This dataset is available at http://www.gems-system.org/.

The **Su **dataset [39] consists of 174 samples of the Affymetrix U95a chip spanning *M *= 11 classes of tumors, each sample defined by the expression values of *p *= 12533 genes. Expression values based on average difference units were computed using the Affymetrix GENECHIP MAS 4.0 analysis software. In this study, a preprocessed dataset version compiled by [1], involving the rescaling of gene expression values to the interval [0, 1], was used. This dataset is available at http://www.gems-system.org/.

The **GCM **dataset [18] consists of 190 samples of the Affymetrix Hu6800 and Hu35K chips spanning *M *= 14 classes of primary tumors, each sample defined by the expression values of *p *= 16063 genes. Expression values based on average difference units were computed using the Affymetrix GENECHIP MAS 4.0 analysis software. This dataset, which comes with a public train-test partition involving *q *= 144 samples for training and 46 for test, is available at http://www.broadinstitute.org/cgi-bin/cancer/datasets.cgi.

The **GCM RM **dataset [40] consists of 123 samples of the Affymetrix Hu6800 chip spanning *M *= 11 classes of tumors, each sample defined by the expression values of *p *= 7129 genes. This dataset was derived from the GCM dataset with the purpose of improving multiclass classification with variability estimates of repeated gene expression measurements. Hence, expression values were computed with the more robust log scale multi-array analysis (RMA) measure. This dataset, which comes with a public train-test partition involving *q *= 96 samples for training and 27 for test, is available at http://expression.washington.edu/publications/kayee/shrunken_centroid/.

### Experimental Protocol

Optimum bounded gene selection over OAA and ECOC multiclass classifiers based on linear SVMs was evaluated on 8 publicly available microarray datasets (*M* ∈ {3, 4, 5, 8, 9, 11, 14}). Aiming at a systematic evaluation of the *n* dimension, we restricted ourselves to the class of ECOC classifiers based on LDPC codes. For both OAA and ECOC classifiers, binary classifier decisions were fused by means of soft-decoding techniques. Hence, OAA classifiers based on hinge loss decoding of SVM outputs and ECOC classifiers based on LDPC codes able to perform soft iterative decoding of SVM outputs were used. Owing to the constraint *p* ≫ *q*, which highly limits the diversity between induced binary classifiers, just one iterative decoding loop was allowed. The Java Weka library version 3.4.10 [41] was used to provide the implementations of OAA multiclass and binary linear SVM classifiers. An extension of the Weka library was developed to implement ECOC classifiers based on LDPC codes and bounded optimum gene selection for both OAA and ECOC classifiers.

#### Assessing the classification performance

The classification performance of OAA and ECOC multiclass classifiers was evaluated by means of a randomized strategy. Based on [42] and [35], 200 Montecarlo 4:1 ($\frac{4}{5}$ for training and $\frac{1}{5}$ for testing) partitions of the available data were considered. For those datasets with a public train-test partition, the specific train-test evaluation was additionally performed. The following performance metrics were considered: the test error rate, the number of binary classifiers, the number of genes per binary classifier, the overall number of selected genes and the stability of gene selection. Briefly, stability of gene selection measures how closely multiple classification models resemble each other; models may be close to each other in terms of error, but can be distant in terms of their form (the identity of selected genes) [43]. Thus, stability of gene selection is an important requirement for ensuring reliable conclusions in microarray data analysis [44,45]. Stability of gene selection with respect to changes in the training data was measured by means of Salton's cosine coefficient [46]. Let *A_{i}* and *A_{j}* respectively denote the sets of genes selected by classifier *A* in partitions *i* and *j*, *i* ≠ *j*. Hence, the similarity between sets *A_{i}* and *A_{j}* according to Salton's coefficient is given by ${\scriptscriptstyle \frac{\#\text{}genes\text{\hspace{0.17em}}in\text{\hspace{0.17em}}both\text{\hspace{0.17em}}{A}_{i}\text{\hspace{0.17em}}and\text{\hspace{0.17em}}{A}_{j}}{\sqrt{\#\text{}genes\text{\hspace{0.17em}}in\text{\hspace{0.17em}}{A}_{i}}\cdot \sqrt{\#\text{}genes\text{\hspace{0.17em}}in\text{\hspace{0.17em}}{A}_{j}}}}$. Using 200 random train-test partitions leads to 200 · 199/2 pairwise similarity measurements from which the mean stability of gene selection can be reported.

#### Searching the best parameters

Regarding the honest computation [47] of the best *n* and *Q*(*n*) parameters for ECOC classifiers, a two-stage optimization approach based on nested 10-Fold CV loops was performed. At each train-test partition, the complexity hyperparameter *C* of binary linear SVM classifiers was held constant at 1 and the best (*n*, *Q*(*n*)) pair was estimated by a nested 10-Fold CV error minimization loop in the current training dataset over the grid [*n _{min}*, *n _{max}*] × (0, *Q _{max}*], with *n _{min}* = *log*_{2} *M* + 2, *n _{max}* = *η* · *log*_{2} *M*, *η* = 5, 10, 15. Regarding the exploration of the *Q* dimension, the *S*2*N* metric was used for inducing ordered lists of genes at each binary classifier. Briefly, the class discrimination ability of the *j*-*th* gene at each binary classifier under the *S*2*N* metric, denoted as *S*2*N*(*j*), is defined as follows

$S2N(j)=\frac{\mu {(j)}_{+}-\mu {(j)}_{-}}{\sigma {(j)}_{+}+\sigma {(j)}_{-}}$

where *μ*(*j*)_{+}, *μ*(*j*)_{-} and *σ*(*j*)_{+}, *σ*(*j*)_{-} denote the means and standard deviations of the *j*-*th* gene in positive and negative examples in the current (binary) training set. The *g* most important genes under the *S*2*N* metric are defined as the first *g*/2 and the last *g*/2 genes in the ranked list of genes. For a fixed number *n* of binary classifiers, optimum bounded gene selection requires the estimation of the optimum number of genes *g*(*n*), or its fractional equivalent $Q(n)=\frac{g(n)}{p}$, in the list of *p* · *Q _{max}*(*n*) most important genes. Such a threshold can be estimated by a nested 10-Fold CV loop in the current training set using the multiscale resolution approach described in the Results section. The process must be repeated for each candidate *n* in the range [*n _{min}*, *n _{max}*]. Afterwards, the best performing (*n*, *Q*(*n*)) pair can be reported. In case of multiple solutions, the one involving the largest *n*, i.e., the smallest *Q*(*n*), is selected.
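A minimal sketch of *S*2*N*-based gene ranking, keeping the first *g*/2 and last *g*/2 positions of the ranked list as described above. The data matrix, labels and the `top_genes` helper are illustrative assumptions, not the paper's Weka extension.

```python
import numpy as np

def s2n(X, y):
    """Signal-to-noise score per gene: (mu+ - mu-) / (sigma+ + sigma-),
    computed on a binary training set (rows are samples, columns are genes)."""
    pos, neg = X[y == 1], X[y == 0]
    return (pos.mean(axis=0) - neg.mean(axis=0)) / (pos.std(axis=0) + neg.std(axis=0))

def top_genes(X, y, g):
    """Most important g genes: the first g/2 and the last g/2 positions
    of the gene list ranked by decreasing S2N score."""
    order = np.argsort(s2n(X, y))[::-1]          # highest scores first
    return np.concatenate([order[: g // 2], order[-(g // 2):]])

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 100))                   # 20 samples, 100 hypothetical genes
X[:10, 0] += 3.0                                 # make gene 0 discriminative for class +
y = np.array([1] * 10 + [0] * 10)
print(top_genes(X, y, 4))                        # gene 0 should head the list
```

Taking genes from both ends of the ranking captures markers that are over-expressed in either of the two binary classes.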

An additional nested loop of 10-Fold CV was performed to optimize the complexity hyperparameter *C* of linear SVMs. Although it would have been better to jointly optimize (*n*, *Q*(*n*), *C*), this would have been prohibitively expensive computationally. Alternatively, the two-step optimization strategy described in [9] was used. Hence, we first set (*n*, *Q*(*n*)) at the best pair of values found at *C* = 1, and then decreased and increased *C* until no improvement was observed for three consecutive steps in nested 10-Fold CV loops. The best performing *C* along with the best performing (*n*, *Q*(*n*)) pair at *C* = 1 were then used as input parameters for the construction of the best ECOC classifier on the current training set and its subsequent evaluation on the testing set. Notice that the final performance estimate obtained by this procedure is free of selection bias because each original testing set is used only once to estimate the performance of a single classification model that was built using training data exclusively. Except for the preselection of *n* = *M*, a similar approach was used to estimate the best *Q*(*M*) and the best *C* for OAA classifiers. Table 5 shows the central tendency and the variation of the best *C* for ECOC and OAA classifiers over 200 Montecarlo 4:1 train-test partitions. Results suggest that *C* = 1 is indeed a reasonable initial guess.
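The decrease/increase search for *C* can be sketched as follows. The multiplicative step factor and the `cv_error` callable are assumptions for illustration; in the paper the score being minimized is a nested 10-Fold CV error estimate.

```python
def tune_c(cv_error, c0=1.0, factor=2.0, patience=3):
    """Sketch of the two-step C search: starting from C = 1, walk C down and
    then up by a constant factor, stopping a direction after `patience`
    consecutive steps without improvement of the CV error estimate."""
    best_c, best_e = c0, cv_error(c0)
    for step in (1.0 / factor, factor):          # first decrease, then increase
        c, bad = c0, 0
        while bad < patience:
            c *= step
            e = cv_error(c)
            if e < best_e:
                best_c, best_e, bad = c, e, 0
            else:
                bad += 1
    return best_c

# Toy error surface with its minimum at C = 8 (illustrative only).
print(tune_c(lambda c: (c - 8.0) ** 2))  # 8.0
```

Because only one hyperparameter is swept after (*n*, *Q*(*n*)) is frozen, the search costs a handful of extra CV runs instead of a full three-dimensional grid.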

#### Assessing the statistical significance of results

To assess the statistical significance of observed differences between performance measures of ECOC and OAA classifiers, we invoke the concept of first-order stochastic dominance [48] developed in the context of international economics [49]. Let *F* and *G* denote the cumulative distribution functions of two comparison groups regarding the study of some performance measure, e.g., the gene selection stability of ECOC and OAA classifiers. First-order stochastic dominance of *F* with respect to *G* is defined as *F*(*z*) - *G*(*z*) ≤ 0 uniformly in *z*, with strict inequality for some *z*. Since this considers all moments of the distributions, it is a stricter test of stability differences than just comparing mean levels of stability. In order to implement first-order stochastic dominance analysis, nonparametric two-sided and one-sided Kolmogorov-Smirnov (KS) tests [50] will be used. The KS test looks for differences in two distributions, both in terms of shape and location. Although the KS test has good power for testing general differences in distributions and not just in their central tendencies, it is less sensitive than the t-test if data is normal. Considering this issue, normality of distributions was analyzed first by means of the Shapiro-Wilk test [50,51]. The two-sided KS statistic tests the hypothesis that both distributions are identical; the null and alternative hypotheses can be expressed as:

${H}_{0}:F(z)=G(z)\text{ for all }z\quad \text{vs.}\quad {H}_{a}:F(z)\ne G(z)\text{ for some }z$

By contrast, the one-sided test of stochastic dominance of *F* over *G* (the distribution associated with *F* lies to the right of that associated with *G*) can be formulated as:

${H}_{0}:F(z)-G(z)\le 0\text{ for all }z\quad \text{vs.}\quad {H}_{a}:F(z)-G(z)>0\text{ for some }z$

Similarly, the one-sided test of stochastic dominance of *G* over *F* (the distribution associated with *F* lies to the left of that associated with *G*) can be formulated as:

${H}_{0}:G(z)-F(z)\le 0\text{ for all }z\quad \text{vs.}\quad {H}_{a}:G(z)-F(z)>0\text{ for some }z$

Hence, in order to conclude that *F* (*G*) stochastically dominates *G* (*F*) we need to reject the null hypothesis for the two-sided test, but not reject the null for the corresponding one-sided test. The test statistics for the two- and one-sided tests are, respectively:

$D=\sqrt{\frac{uv}{N}}\underset{z}{max}\left|{F}_{u}(z)-{G}_{v}(z)\right|\quad (7)$

${D}^{-}=\sqrt{\frac{uv}{N}}\underset{z}{max}\left({G}_{v}(z)-{F}_{u}(z)\right)\quad (8)$

${D}^{+}=\sqrt{\frac{uv}{N}}\underset{z}{max}\left({F}_{u}(z)-{G}_{v}(z)\right)\quad (9)$

where *u *and *v *respectively denote the sample sizes from the empirical distributions of *F *and *G *and *N *= *u *+ *v*.

Hence, to test whether ECOC classifiers can attain better classification performance than OAA classifiers, the two-sided *D* (Eq. 7) and the one-sided *D*^{-} (Eq. 8) statistics were used (the *alternative* parameter of the *ks.test* function in the *stats* R package set to "two.sided" and "less", respectively). A similar approach was used to assess the statistical significance of the differences between the overall fraction of genes selected by ECOC and OAA classifiers. Finally, to assess the statistical significance of stability differences between ECOC and OAA classifiers, the *D* (Eq. 7) and the *D*^{+} (Eq. 9) statistics were used (the *alternative* parameter of the *ks.test* function in the *stats* R package set to "two.sided" and "greater", respectively). One-sided KS tests were supplemented with one-sided Mann-Whitney U (MW) tests for analyzing the difference between the medians of two groups. A criterion alpha level of 0.05 was used for all statistical tests.
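The scaled statistics of Eqs. 7-9 can be computed directly from the two empirical CDFs evaluated on the pooled sample points. This is an illustrative sketch, not the *ks.test*-based R analysis used in the paper; the normal samples standing in for the two comparison groups are hypothetical.

```python
import numpy as np

def ks_stats(x, y):
    """Two-sample KS statistics D, D-, D+ (Eqs. 7-9), evaluated on the
    pooled sample points and scaled by sqrt(u*v/N)."""
    x, y = np.sort(x), np.sort(y)
    u, v = len(x), len(y)
    z = np.concatenate([x, y])
    F = np.searchsorted(x, z, side="right") / u   # empirical CDF of group F
    G = np.searchsorted(y, z, side="right") / v   # empirical CDF of group G
    scale = np.sqrt(u * v / (u + v))
    return (scale * np.abs(F - G).max(),          # D   (two-sided)
            scale * (G - F).max(),                # D-  (F to the right of G)
            scale * (F - G).max())                # D+  (F to the left of G)

rng = np.random.default_rng(1)
a = rng.normal(0.0, 1.0, 200)        # hypothetical measurements, group F
b = rng.normal(0.5, 1.0, 200)        # group G, shifted to the right
D, D_minus, D_plus = ks_stats(a, b)
print(D, D_minus, D_plus)
```

Since group F lies to the left of group G here, *D*^{+} carries essentially all of the two-sided discrepancy while *D*^{-} stays near zero.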

## Appendix

### A more formal derivation of an upper bound for the number of genes per binary classifier

We consider the problem of designing accurate and sparse binary mediated multiclass classifiers for microarray data samples. In this context, accuracy is mainly determined by the power of the error correction code defining the multiclass to binary mapping and sparsity is mainly determined by the efficacy of gene selection algorithms used at the binary classification level. A natural question that arises in this system is what amount of information genes can transfer to the multiclass classifier output as the number *p* of genes grows. Knowing such a limitation may play a crucial role in the design of effective gene selection algorithms, which could significantly reduce their search spaces. Shannon's Information Theory concepts [52] can provide some useful insights into this fundamental question. In particular, the concept of mutual information (MI) can be used to evaluate the information content of a subset of genes with regard to individual binary output classes and the information content of a set of binary output classes with regard to the target multiclass output class. The use of MI for general multiclass classification problems can be motivated by Fano's inequality [53], which gives a lower bound for the probability of error *p _{e}* when estimating a discrete random variable *y* ∈ {*c*_{1},..., *c _{M}*} from another random variable **x** ∈ {0, 1}^{*p*} as a function of their MI *I*(*y*, **x**):

$H({p}_{e})+{p}_{e}\cdot lo{g}_{2}(M-1)\ge H(y)-I(y,\mathbf{x})$

where ${p}_{e}=P(\hat{y}\ne y)$, *H*(*y*) is the Shannon entropy of *y*, $\widehat{y}=g(x)$ is a discrete random variable used to estimate *y* and $y\to x\to \widehat{y}$ is the Markov chain modeling the overall classification process. Let us now consider the Markov chains $\text{y}\to \text{x}\stackrel{{T}_{i}}{\to}{\text{v}}_{i}\stackrel{{L}_{i}}{\to}{r}_{i}$ and *y* → **x** → **r** modeling the prediction of a target output class *y* ∈ {*c*_{1},..., *c _{M}*} from genes **x** ∈ {0, 1}^{*p*} by the mediation of binary output classes **r** = (*r _{i}*), each *r _{i}* modeling the binary output class of a classifier *L _{i}* on a subset of genes **v**_{*i*} ∈ {0, 1}^{*g*}, *g* < *p*, extracted by a gene selection algorithm *T _{i}* on genes **x**, *i* = 1,..., *n*. By Fano's inequality, minimizing *p _{e}* requires the maximization of *I*(*y*, **x**) = *H*(*y*) - *H*(*y*|**x**). Since *y* is fixed, we have *I*(*y*, **x**) ≤ *H*(*y*) ≤ *log*_{2} *M*. On the other hand, by the data processing inequality [54], we have *I*(*y*, **r**) ≤ *I*(*y*, **x**). In other words, the maximization of *I*(*y*, **x**) requires the choice of an error correcting output code such that *I*(*y*, **r**) is maximized. In addition, let **r** be a set of *n* i.i.d. random variables *r _{i}*. Thus, we have *I*(*y*, **r**) = Σ_{*i*} *I*(*y*, *r _{i}*) and $I(y,{r}_{i})\le \frac{lo{g}_{2}M}{n}$. Again by the data processing inequality, we have *I*(**v**_{*i*}, *r _{i}*) ≤ *I*(*y*, *r _{i}*). If we further assume that *T _{i}* is a gene selection algorithm able to select just the genes relevant to *r _{i}*, i.e., *H*(**v**_{*i*} | *r _{i}*) = 0, we have *I*(**v**_{*i*}, *r _{i}*) = *H*(**v**_{*i*}) - *H*(**v**_{*i*} | *r _{i}*) = *H*(**v**_{*i*}). Finally, let the genes in **v**_{*i*} be a set of *g* i.i.d. binary random variables. Thus, we have *H*(**v**_{*i*}) = *H*(*T _{i}*(**x**)) = *Q* · *p* · *H*(*f*), where *Q* is the fraction of genes relevant to *r _{i}* and *H*(*f*) is the binary entropy function measuring the information content of a generic gene, which is expressed with probability *f* and not expressed with probability 1 - *f*. Hence, the following upper bound on the fraction of genes *Q* that can be handled by any binary classifier in a binary mediated multiclass classifier for microarray data samples is obtained:

$Q\le \frac{lo{g}_{2}M}{n\cdot p\cdot H(f)}$
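The closing bound can be explored numerically; the parameter values below (*M* = 14 classes, *p* = 10000 genes, *f* = 0.5) are hypothetical and chosen only to illustrate how the admissible fraction of genes shrinks as the code length *n* grows.

```python
import math

def binary_entropy(f):
    """Binary entropy H(f) = -f*log2(f) - (1-f)*log2(1-f), in bits."""
    return -f * math.log2(f) - (1 - f) * math.log2(1 - f)

def q_bound(M, n, p, f=0.5):
    """Upper bound Q <= log2(M) / (n * p * H(f)) on the fraction of genes
    that any binary classifier of an n-bit code for M classes can handle."""
    return math.log2(M) / (n * p * binary_entropy(f))

# Longer codes (larger n) force sparser binary classifiers (smaller Q).
for n in (4, 20, 60):
    print(n, q_bound(14, n, 10000))
```

This mirrors the paper's conclusion that high-dimensional binary output domains favor sparse binary mediated multiclass classifiers.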

## Authors' contributions

ET devised the study, set up and performed simulation experiments, and drafted the manuscript. LO contributed to the design of simulation experiments, to the statistical analysis of experimental results and to the manuscript. PB and LA contributed to the design of simulation experiments, to organize experimental results and to the manuscript. All authors read and approved the final manuscript.

## Acknowledgements

The authors would like to thank Javier De Las Rivas, member of the CIC, CSIC/USAL, Spain, for providing initial access to computational resources. The authors would also like to thank anonymous reviewers for their helpful comments. ET's, LO's, PB's and LA's work was supported by projects PICT No. 02226, SECYT, Argentina and Red Sudamericana e Iberoamericana de Bioinformática, PROSUL CNPq 011/2008, Brasil.

## References

- Statnikov A, Aliferis C, Tsamardinos I, Hardin D, Levy S. A comprehensive evaluation of multicategory classification methods for microarray gene expression cancer diagnosis. Bioinformatics. 2005;21:631–643. doi: 10.1093/bioinformatics/bti033. [PubMed] [Cross Ref]
- Liu KH, Xu CG. A genetic programming-based approach to the classification of multiclass microarray datasets. Bioinformatics. 2009;25:331–337. doi: 10.1093/bioinformatics/btn644. [PubMed] [Cross Ref]
- Li T, Zhang C, Ogihara M. A comparative study of feature selection and multiclass classification methods for tissue classification based on gene expression. Bioinformatics. 2004;20(15):2429–2437. doi: 10.1093/bioinformatics/bth267. [PubMed] [Cross Ref]
- Statnikov A, Wang L, Aliferis C. A comprehensive comparison of random forests and support vector machines for microarray-based cancer classification. BMC Bioinformatics. 2008;9:319. doi: 10.1186/1471-2105-9-319. [PMC free article] [PubMed] [Cross Ref]
- Fan J, Fan Y. High dimensional classification using features annealed independence rules. Ann Statist. 2008. [PMC free article] [PubMed]
- Allwein EL, Schapire RE, Singer Y. ICML '00: Proceedings of the Seventeenth International Conference on Machine Learning. San Francisco, CA, USA: Morgan Kaufmann Publishers Inc; 2000. Reducing Multiclass to Binary: A Unifying Approach for Margin classifiers; pp. 9–16.
- Guruswami V, Sahai A. COLT '99: Proceedings of the twelfth annual conference on Computational learning theory. USA: ACM Press; 1999. Multiclass learning, boosting, and error-correcting codes; pp. 145–155.
- Masulli F, Valentini G. Dependence among Codeword Bits Errors in ECOC Learning Machines: An Experimental Analysis. Multiple classifier Systems. 2001. pp. 158–167.
- Rifkin R, Klautau A. In Defense of One-Vs-All classification. Journal of Machine Learning Research. 2004;5:101–141.
- Dietterich TG, Bakiri G. In: Proceedings of the Ninth AAAI National Conference on Artificial Intelligence. Dean TL, Mckeown K, editor. Menlo Park, CA: AAAI Press; 1991. Error-correcting output codes: a general method for improving multiclass inductive learning programs; pp. 572–577.
- Rifkin R. PhD thesis. Massachusetts Institute of Technology; 2002. Everything old is new again: A fresh look at historical approaches in machine learning.
- Tapia E, Bulacio P, Angelone L. Recursive ECOC classification. Pattern Recognition Letters. 2010;31(3):210–215. doi: 10.1016/j.patrec.2009.09.031. [Cross Ref]
- Mackay DJC. Good error-correcting codes based on very sparse matrices. Information Theory, IEEE Transactions on. 1999;45(2):399–431. doi: 10.1109/18.748992. [Cross Ref]
- Vapnik V. The nature of statistical learning theory (Information Science and Statistics) Springer; 1999.
- Saeys Y, Inza I, Larranaga P. A review of feature selection techniques in bioinformatics. Bioinformatics. 2007;23(19):2507–2517. doi: 10.1093/bioinformatics/btm344. [PubMed] [Cross Ref]
- Slonim DK, Tamayo P, Mesirov JP, Golub TR, Lander ES. Class prediction and discovery using gene expression data. Recomb. 2000. pp. 263–272.
- Furey T, Cristianini N, Duffy N, Bednarski D, Schummer M, Haussler D. Support vector machine classification and validation of cancer tissue samples using microarray expression data. Bioinformatics. 2000;16(10):906–914. doi: 10.1093/bioinformatics/16.10.906. [PubMed] [Cross Ref]
- Yeang CH, Ramaswamy S, Tamayo P, Mukherjee S, Rifkin RM, Angelo M, Reich M, Lander E, Mesirov J, Golub T. Molecular classification of multiple tumor types. Bioinformatics. 2001;17(Suppl 1) [PubMed]
- Furlanello C, Serafini M, Merler S, Jurman G. Entropy-based gene ranking without selection bias for the predictive classification of microarray data. BMC Bioinformatics. 2003;4:54. doi: 10.1186/1471-2105-4-54. [PMC free article] [PubMed] [Cross Ref]
- Dupuy A, Simon R. Critical review of published microarray studies for cancer outcome and guidelines on statistical analysis and reporting. J Natl Cancer Inst. 2007;99:147–157. doi: 10.1093/jnci/djk018. [PubMed] [Cross Ref]
- Lee S. Mistakes in validating the accuracy of a prediction classifier in high-dimensional but small-sample microarray data. Stat Methods Med Res. 2008;17:635–642. doi: 10.1177/0962280207084839. [PubMed] [Cross Ref]
- Aliferis CF, Statnikov A, Tsamardinos I, Schildcrout JS, Shepherd BE, Harrell FE. Factors influencing the statistical power of complex data analysis protocols for molecular signature development from microarray data. PLoS ONE. 2009;4:3:e4922. doi: 10.1371/journal.pone.0004922. [PMC free article] [PubMed] [Cross Ref]
- Shmulevich I, Dougherty ER, Kim S, Zhang W. Probabilistic Boolean networks: a rule-based uncertainty model for gene regulatory networks. Bioinformatics. 2002;18(2):261–274. doi: 10.1093/bioinformatics/18.2.261. [PubMed] [Cross Ref]
- Huang S. Non-genetic heterogeneity of cells in development: more than just noise. Development. 2009;136(23):3853–3862. doi: 10.1242/dev.035139. [PMC free article] [PubMed] [Cross Ref]
- Tsamardinos I, Aliferis CF. Towards Principled Feature Selection: Relevancy, Filters and Wrappers. in Proceedings of the Ninth International Workshop on Artificial Intelligence and Statistics. 2003.
- Zhu J, McLachlan G, Jones LBT, Wood I. On selection biases with prediction rules formed from gene expression data. Journal of Statistical Planning and Inference. 2008;138(2):374–386. doi: 10.1016/j.jspi.2007.06.003. [Cross Ref]
- Guyon I, Weston J, Barnhill S, Vapnik V. Gene Selection for Cancer Classification using Support Vector Machines. Machine Learning. 2002;46(1-3):389–422. doi: 10.1023/A:1012487302797. [Cross Ref]
- Berger A. Error-Correcting Output Coding for Text classification. In Proceedings of IJCAI-99 Workshop on Machine Learning for Information Filtering. 1999.
- James G, Hastie T. The Error Coding Method and PICTs. Journal of Computational and Graphical Statistics. 1998;7(3):377–387. doi: 10.2307/1390710. [Cross Ref]
- Lin Y. Support vector machines and the Bayes rule in classification. Data Mining and Knowledge Discovery. 2002;6:259–275. doi: 10.1023/A:1015469627679. [Cross Ref]
- Cristianini N, Shawe-Taylor J. An introduction to support vector machines: and other kernel-based learning methods. 1. Cambridge University Press; 2000.
- Kschischang FR, Frey BJ, Loeliger HA. Factor graphs and the sum-product algorithm. Information Theory, IEEE Transactions on. 2001;47(2):498–519. doi: 10.1109/18.910572. [Cross Ref]
- Alizadeh A, Eisen M, Davis R, Ma C, Lossos I, Rosenwald A, Boldrick J, Sabet H, Tran T, Yu X, Powell J, Yang L, Marti G, Moore T, Hudson J Jr, Lu L, Lewis D, Tibshirani R, Sherlock G, Chan W, Greiner T, Weisenburger D, Armitage J, Warnke R, Levy R, Wilson W, Grever M, Byrd J, Botstein D, Brown P, Staudt L. Distinct types of diffuse large B-cell lymphoma identified by gene expression profiling. Nature. 2000;403(6769):503–11. doi: 10.1038/35000501. [PubMed] [Cross Ref]
- Dettling M. BagBoosting for tumor classification with gene expression data. Bioinformatics. 2003;20(18):3583. doi: 10.1093/bioinformatics/bth447. [PubMed] [Cross Ref]
- Dudoit S, Fridlyand J, Speed TP. Comparison of Discrimination Methods for the Classification of Tumors Using Gene Expression Data. Journal of the American Statistical Association. 2002;97(457):77–87. doi: 10.1198/016214502753479248. [Cross Ref]
- Khan J, Wei J, Ringner M, Saal L, Ladanyi M, Westermann F, Berthold F, Schwab M, Antonescu C, Peterson C, Meltzer P. Classification and diagnostic prediction of cancers using gene expression profiling and artificial neural networks. Nat Med. 2001;7(6):673–679. doi: 10.1038/89044. [PMC free article] [PubMed] [Cross Ref]
- Pomeroy S, Tamayo P, Gaasenbeek M, Sturla L, Angelo M, Mclaughlin M, Kim J, Goumnerova L, Black P, Lau C, Allen J, Zagzag D, Olson J, Curran T, Wetmore C, Biegel J, Poggio T, Mukherjee S, Rifkin R, Califano A, Stolovitzky G, Louis D, Mesirov J, Lander E, Golub T. Prediction of central nervous system embryonal tumour outcome based on gene expression. Nature. 2002;415(6870):436–442. doi: 10.1038/415436a. [PubMed] [Cross Ref]
- Staunton JE, Slonim DK, Coller HA, Tamayo P, Angelo MJ, Park J, Scherf U, Lee JK, Reinhold WO, Weinstein JN, Mesirov JP, Lander ES, Golub TR. Chemosensitivity prediction by transcriptional profiling. Proc Natl Acad Sci USA. 2001;98:10787–10792. doi: 10.1073/pnas.191368598. [PMC free article] [PubMed] [Cross Ref]
- Su AI, Welsh JB, Sapinoso LM, Kern SG, Dimitrov P, Lapp H, Schultz PG, Powell SM, Moskaluk CA, Frierson HF, Hampton GM. Molecular Classification of Human Carcinomas by Use of Gene Expression Signatures. Cancer Res. 2001;61(20):7388–7393. [PubMed]
- Yeung K, Bumgarner R. Multiclass classification of microarray data with repeated measurements: application to cancer. Genome Biol. 2003;4(12):R83. doi: 10.1186/gb-2003-4-12-r83. [PMC free article] [PubMed] [Cross Ref]
- Witten I, Frank E. Data mining: Practical machine learning tools and techniques with Java implementations. Morgan Kaufmann; 1999.
- Azuaje F. Genomic data sampling and its effect on classification performance assessment. BMC Bioinformatics. 2003;4:5. doi: 10.1186/1471-2105-4-5. [PMC free article] [PubMed] [Cross Ref]
- Breiman L. Statistical Modeling: The Two Cultures. Statistical Science. 2001;16(3):199–215. doi: 10.1214/ss/1009213726. [Cross Ref]
- Abeel T, Helleputte T, Van de Peer Y, Dupont P, Saeys Y. Robust biomarker identification for cancer diagnosis with ensemble feature selection methods. Bioinformatics. 2010;26(3):392–398. doi: 10.1093/bioinformatics/btp630. [PubMed] [Cross Ref]
- Qiu X, Xiao Y, Gordon A, Yakovlev A. Assessing stability of gene selection in microarray data analysis. BMC Bioinformatics. 2006;7:50. doi: 10.1186/1471-2105-7-50. [PMC free article] [PubMed] [Cross Ref]
- Salton G. Automatic text processing: the transformation, analysis, and retrieval of information by computer. USA: Addison-Wesley Longman Publishing Co., Inc; 1989.
- Ambroise C, Mclachlan GJ. Selection bias in gene extraction on the basis of microarray gene-expression data. Proc Natl Acad Sci USA. 2002;99(10):6562–6566. doi: 10.1073/pnas.102102699. [PMC free article] [PubMed] [Cross Ref]
- Hadar J, Russell WR. Rules for Ordering Uncertain Prospects. American Economic Review. 1969;59:25–34.
- Delgado MA, Farinas JC, Ruano S. Firm productivity and export markets: a non-parametric approach. Journal of International Economics. 2002;57(2):397–422. doi: 10.1016/S0022-1996(01)00154-4. [Cross Ref]
- Hollander M, Wolfe DA. Nonparametric Statistical Methods. 2. Wiley-Interscience; 1999.
- Shapiro SS, Wilk MB. An analysis of variance test for normality (complete samples) Biometrika. 1965;52(3-4):591–611.
- Shannon CE. A Mathematical Theory of Communication. The Bell System Technical Journal. 1948;27:379–423, 623–656.
- Fano RM. Transmission of information: a statistical theory of communications. M.I.T. Press & Wiley, London; 1961.
- Cover TM, Thomas JA. Elements of Information Theory. Wiley-Interscience; 1991.
