Bioinformatics. 2008 Sep 1; 24(17): 1881–1888.
Published online 2008 Jul 16. doi:  10.1093/bioinformatics/btn347
PMCID: PMC2519157

Automated annotation of Drosophila gene expression patterns using a controlled vocabulary


Motivation: Regulation of gene expression in space and time restricts a gene's expression to specific subsets of cells during development. Systematic determination of the spatiotemporal dynamics of gene expression plays an important role in understanding the regulatory networks driving development. An atlas of the gene expression patterns of the fruit fly Drosophila melanogaster has been created by whole-mount in situ hybridization, documenting the dynamic changes of gene expression patterns during Drosophila embryogenesis. The spatial and temporal patterns of gene expression are integrated through anatomical terms from a controlled vocabulary that links intermediate tissues developed from one another. Currently, the terms are assigned to patterns manually. However, the number of patterns generated by high-throughput in situ hybridization is rapidly increasing. It is, therefore, tempting to approach this problem with computational methods.

Results: In this article, we present a novel computational framework for annotating gene expression patterns using a controlled vocabulary. In the currently available high-throughput data, annotation terms are assigned to groups of patterns rather than to individual images. We propose to extract invariant features from images, and construct pyramid match kernels to measure the similarity between sets of patterns. To exploit the complementary information conveyed by different features and incorporate the correlation among patterns sharing common structures, we propose efficient convex formulations to integrate the kernels derived from various features. The proposed framework is evaluated by comparing its annotation with that of human curators, and promising performance in terms of F1 score has been reported.

Contact: jieping.ye@asu.edu

Supplementary information: Supplementary data are available at Bioinformatics online.


Detailed knowledge of the expression and interaction of genes is crucial to deciphering the mechanisms underlying cell-fate specification and tissue differentiation. DNA microarrays and RNA in situ hybridization are two primary methods for monitoring gene expression levels on a large scale. Microarrays provide a quantitative overview of the relative changes of expression levels of a large number of genes, but they do not often document the spatial information on individual genes. In contrast, RNA in situ hybridization uses gene-specific probes and can determine the spatial patterns of gene expression precisely. Recent high-throughput investigations have yielded spatiotemporal information for thousands of genes in organisms such as Drosophila (Lécuyer et al., 2007; Tomancak et al., 2002) and mouse (Carson et al., 2005; Lein et al., 2006). These data have the potential to provide significant insights into the functions and interactions of genes (Kumar et al., 2002; Samsonova et al., 2007).

The fruit fly Drosophila melanogaster is one of the model organisms in developmental biology, and its patterns of gene expression have been studied extensively (Arbeitman et al., 2002; Campos-Ortega and Hartenstein, 1997; Lécuyer et al., 2007; Tomancak et al., 2002). The comprehensive atlas of spatial patterns of gene expression during Drosophila embryogenesis has been created by in situ hybridization techniques, and the patterns are documented in the form of digital images (Grumbling et al., 2006; Harmon et al., 2007; Tomancak et al., 2002; Van Emden et al., 2006). Comparative analysis of gene expression pattern images can potentially reveal new genetic interactions and yield insights into the complex regulatory networks governing embryonic development (Estrada et al., 2006; Kumar et al., 2002; Peng and Myers, 2004; Tomancak et al., 2002).

To facilitate pattern comparison and searching, the images of Drosophila gene expression patterns are annotated with anatomical and developmental ontology terms using a controlled vocabulary (Grumbling et al., 2006; Tomancak et al., 2002). The basic requirement for annotation is to assign a unique term, not only for each terminally differentiated embryonic structure, but also for the developmental intermediates that correspond to it. Four general classes of terms, called anlage in statu nascendi, anlage, primordium and organ (ordered in terms of developmental time), are used in the annotation. Such an elaborate naming scheme describes a developing ‘path’, starting from the cellular blastoderm stage until organs are formed, which documents the dynamic process of Drosophila embryogenesis. Due to the overwhelming complexity of this task, the images are currently annotated manually by human experts. However, the number of available images produced by high-throughput in situ hybridization is now rapidly increasing (Gurunathan et al., 2004; Kumar et al., 2002; Peng and Myers, 2004; Tomancak et al., 2007; Ye et al., 2006). It is, therefore, tempting to design computational methods for the automated annotation of gene expression patterns.

The automated annotation of Drosophila gene expression patterns was originally considered difficult due to the lack of a large reference dataset from which to learn. Moreover, the ‘variation in morphology and incomplete knowledge of the shape and position of various embryonic structures’ have made this task more elusive (Tomancak et al., 2002). We attempt to address this problem by building on advanced tools developed recently in the computer vision and machine learning communities, and on the large set of annotated data available from the Berkeley Drosophila Genome Project (BDGP) (Tomancak et al., 2002). Several challenging questions need to be addressed when approaching this problem computationally. As stated in Tomancak et al. (2002), the first challenge is that the same embryonic structure can appear in different shapes and positions due to the distortions introduced by the image acquisition process. Fortunately, recent advances in object recognition research have led to robust methods that can detect interest regions and extract features from these regions that are invariant to a class of local transformations. These two correlated lines of research have now reached some maturity [see Mikolajczyk et al. (2005) and Mikolajczyk and Schmid (2005) for an overview].

The second challenge of this task lies in the data representation. The embryogenesis of Drosophila has been divided into six discrete stage ranges (1–3, 4–6, 7–8, 9–10, 11–12 and 13–16) in the BDGP high-throughput study (Tomancak et al., 2002). Gene expression patterns are documented collectively by a group of images in a specific stage range. Similarly, annotation terms are also associated with a group of patterns sharing a subset of the named structures (Fig. 1). These attributes of the existing biological data pose challenges, because traditional machine learning tools require that each object in question be represented by a feature vector of fixed length. It is challenging to encode the variable number of images in a group into a fixed-length vector. The existing approach (Zhou and Peng, 2007) is based on the simplifying assumption that the terms are associated with individual images instead of image groups. Kernel methods developed in machine learning are a class of versatile tools for learning from unconventional data types, since they only require that the similarity between objects be abstracted into the so-called kernel matrix (Schölkopf and Smola, 2002). Kernels between various data types, e.g., strings, trees, graphs, and sets of vectors, have been proposed in the literature (Grauman and Darrell, 2005; Kondor and Jebara, 2003; Schölkopf et al., 2004). We propose to extract a number of locally invariant features from each gene expression pattern image, and to compute kernels between sets of images based on the pyramid match algorithm (Grauman and Darrell, 2007a).

Fig. 1.
Sample image sets and the associated terms in the BDGP database in two stage ranges. Only images taken from lateral view with the anterior to the left are shown.

A recent comprehensive study shows that when local features are used to compute kernels between images, a combination of multiple feature types tends to yield better results than even the most discriminative individual feature type (Zhang et al., 2007). This motivates us to extract multiple feature types from each image and obtain multiple kernel matrices, one for each feature type. Thus, the final challenge for automated gene expression pattern annotation is to develop methods that can combine the multiple kernel matrices effectively. Automated methods for combining multiple kernel matrices, called multiple kernel learning (MKL), have been studied in machine learning recently. In such a framework, the optimal kernel matrix is obtained as a convex combination of a set of predefined candidate kernel matrices, and the coefficients of the combination are computed by optimizing certain criteria. Methods for MKL have been proposed in the contexts of binary-class (Lanckriet et al., 2004a) and multi-class classification (Zien and Ong, 2007), and they have been applied successfully to various biological problems (De Bie et al., 2007; Lanckriet et al., 2004b). For the problem of gene expression pattern annotation, a variable number of terms from the controlled vocabulary can be assigned to a group of patterns. Hence, this problem belongs to the more general framework of multi-label learning. We propose hypergraph-based methods (Agarwal et al., 2006; Zhou et al., 2007) to project and combine the multiple kernel matrices for multi-label data. The proposed formulation captures the correlation among patterns sharing a common embryonic structure by including them in a common hyperedge of the hypergraph. We also show that kernel canonical correlation analysis (Hardoon et al., 2004) is a special case of the proposed formulation. The overall flowchart of the proposed framework is depicted in Figure 2.

Fig. 2.
Illustration of the proposed framework for annotating gene expression patterns. We extract multiple types of features from image groups and construct multiple kernels using the pyramid match algorithm. The multiple kernels are then combined to annotate the gene expression patterns.

We discuss feature generation and kernel construction in Section 2. The proposed formulation for multi-label multiple kernel learning is presented in Section 3. We report the results on gene expression pattern annotation in Section 4 and conclude this article with future work in Section 5.


In this section, we present our methods for extracting features from gene expression pattern images and constructing kernels between sets of patterns.

2.1 Feature generation

There are two primary methods for extracting features. When the images are not well-aligned, a covariant region detector is first applied to the images to detect interest regions, and a local descriptor is then used to extract features from the detected regions. An alternative approach is to apply the local descriptor on a dense regular grid instead of on interest regions (Grauman and Darrell, 2007b; Lazebnik et al., 2006). This approach is motivated by the bag-of-words model from the text-modeling literature, and it has achieved competitive performance in image applications (Fei-Fei and Perona, 2005). Since the images in our FlyExpress (Van Emden et al., 2006) database are already well-aligned, we take the second approach in this article (Fig. 2). Instead of tuning the local descriptor and grid size manually, we apply several popular local descriptors on regular grids of different sizes, and rely on the MKL framework to select the appropriate local descriptors and grid sizes. More details on feature generation are given in Section 4.

2.2 Pyramid match kernels

In kernel methods, a symmetric function is called a ‘kernel function’ if it satisfies Mercer's condition (Schölkopf and Smola, 2002). When applied to a finite number of samples in practice, this condition amounts to requiring that the kernel matrix be positive semidefinite. The wide applicability of kernel methods stems from the fact that, through the kernel trick, they only require the similarities between objects to be characterized.

The pyramid match algorithm (Grauman and Darrell, 2005, 2007a, b) computes kernels for variable-sized sets of feature vectors. The main idea of this approach is to convert sets of features into multi-dimensional, multi-resolution histograms, and then compute the similarity between the corresponding histograms based on histogram intersections. The final similarity between two sets of vectors is computed as a weighted sum of the similarities over the histogram levels. This similarity approximates the similarity of the best partial matching between the feature sets. The resulting similarity matrix based on this measure is provably positive definite, and it can be used in existing kernel-based learning algorithms. Details on the pyramid match algorithm can be found in the Supplementary Material.
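The core of this idea can be sketched in a few lines of Python. This is a simplified illustration only: it assumes uniformly sized bins whose sides double at each level, whereas the vocabulary-guided variant used later adapts the bins to the data, and the normalization used to make the kernel matrix valid is omitted. All function names here are ours, not the authors'.

```python
from collections import Counter

def histogram(points, cell):
    """Quantize d-dimensional points into a uniform grid with the given cell side."""
    return Counter(tuple(int(c // cell) for c in p) for p in points)

def intersection(h1, h2):
    """Histogram intersection: total mass shared between two histograms."""
    return sum(min(n, h2[b]) for b, n in h1.items())

def pyramid_match(X, Y, levels=4):
    """Approximate partial-match similarity between two feature sets.

    Counts the matches that first appear at each resolution level and
    weights them by 1 / 2^i, so coarser (looser) matches count less.
    """
    prev = 0.0
    score = 0.0
    for i in range(levels):
        cell = 2 ** i                       # bin side doubles at each level
        inter = intersection(histogram(X, cell), histogram(Y, cell))
        score += (inter - prev) / (2 ** i)  # newly matched pairs at this level
        prev = inter
    return score
```

Because new matches at coarse levels receive exponentially smaller weight, the score approximates the optimal partial matching between the two sets without ever computing it explicitly.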

The pyramid match algorithms proposed in Grauman and Darrell (2005, 2007a, b) treat the sets of features to be matched as orderless. In some applications, the spatial layout of features within a set may convey critical discriminative information. Lazebnik et al. (2006) proposed the spatial pyramid matching algorithm to perform pyramid matching in the 2D image space, thus taking the spatial information into account directly. The main idea of this approach is to quantize the local features in images into a number of discrete types by applying clustering algorithms, and then place a multi-resolution histogram pyramid over the 2D image. It is also possible to integrate geometric information directly into the original pyramid match algorithm by adding the image coordinates as two additional dimensions of each feature vector (Lazebnik et al., 2006; Grauman, K., 2007, private communication), and we adopt this approach in this article. Note that the original pyramid match algorithms were proposed to match two images; we extend them to match two sets of images.


In this section, we present a multi-label, multiple kernel learning formulation based on hypergraph for integrating the kernel matrices derived from various local descriptors. Results in Section 4 show that the integrated kernels yield better performance than that of the best individual kernel.

3.1 Hypergraph spectral learning

A hypergraph generalizes a traditional graph by allowing edges, known as ‘hyperedges’, to connect more than two vertices, thus capturing the joint relationship among multiple vertices. We construct a hypergraph for the collection of gene expression patterns in question, in which each pattern is represented as a vertex. To document the joint similarity among patterns annotated with a common term, we construct one hyperedge for each term in the vocabulary and include all patterns annotated with that term in the hyperedge. Hence, the number of hyperedges in this hypergraph equals the number of terms in the vocabulary.
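This construction amounts to building a vertex-by-hyperedge incidence matrix from the annotations. A minimal sketch follows; `incidence_matrix` is a hypothetical helper of ours, assuming annotations are given as one set of terms per image group:

```python
import numpy as np

def incidence_matrix(annotations, vocabulary):
    """Build the |patterns| x |terms| hypergraph incidence matrix H,
    where H[v, e] = 1 iff pattern v is annotated with term e, i.e.
    vertex v belongs to hyperedge e."""
    H = np.zeros((len(annotations), len(vocabulary)))
    index = {term: j for j, term in enumerate(vocabulary)}
    for v, terms in enumerate(annotations):
        for t in terms:
            H[v, index[t]] = 1.0
    return H
```

The column sums of H give the hyperedge sizes, i.e. how many patterns share each term.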

The Laplacian is commonly used to learn from a graph (Chung, 1997). To learn from a hypergraph, one can either define a hypergraph Laplacian directly, or expand the hypergraph into a traditional graph for which the Laplacian is constructed. Since it has been shown that the Laplacians defined in both ways are similar (Agarwal et al., 2006), we use the expansion-based approaches in this article. The star and clique expansions are two commonly used schemes for expanding hypergraphs. Following spectral graph embedding theory (Chung, 1997), we project the patterns into a low-dimensional space in which patterns sharing a common term are close to each other. When formulated in the kernel-induced feature space, this can be achieved by solving the following optimization problem:

$$\max_{B}\ \mathrm{trace}\left(B^{T}KCKB\right)\quad\text{subject to}\quad B^{T}\left(K^{2}+\lambda K\right)B=I,\qquad(1)$$

where K∈ℝn × n is the kernel matrix, n is the number of image sets, C=I−ℒ in which ℒ is the normalized Laplacian matrix derived from the hypergraph, B is the coefficient matrix for reconstructing the projection in feature space and λ>0 is the regularization parameter.
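The normalized Laplacian ℒ can also be computed directly from the incidence matrix, following Zhou et al. (2007); the star- and clique-expansion Laplacians used here are similar to this direct definition (Agarwal et al., 2006). A sketch, assuming every vertex belongs to at least one hyperedge (`normalized_laplacian` is our name for the helper):

```python
import numpy as np

def normalized_laplacian(H, w=None):
    """Normalized hypergraph Laplacian of Zhou et al. (2007):
    L = I - Dv^{-1/2} H W De^{-1} H^T Dv^{-1/2},
    where Dv and De hold the vertex and hyperedge degrees and W the
    hyperedge weights. Assumes every vertex lies in >= 1 hyperedge."""
    n, m = H.shape
    w = np.ones(m) if w is None else np.asarray(w, float)
    de = H.sum(axis=0)                  # hyperedge degrees (sizes)
    dv = H @ w                          # weighted vertex degrees
    Dv = np.diag(1.0 / np.sqrt(dv))
    S = Dv @ H @ np.diag(w / de) @ H.T @ Dv
    return np.eye(n) - S
```

The matrix C = I − ℒ used in the objective is then simply the similarity term S above.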

Kernel canonical correlation analysis (kCCA) (Hardoon et al., 2004) is a widely used method for dimensionality reduction. It can be shown that kCCA involves the following optimization problem:

$$\max_{B}\ \mathrm{trace}\left(B^{T}KY^{T}\left(YY^{T}\right)^{-1}YKB\right)\quad\text{subject to}\quad B^{T}\left(K^{2}+\lambda K\right)B=I,\qquad(2)$$

where Y is the label matrix; this corresponds to Equation (1) with C replaced by Y<sup>T</sup>(YY<sup>T</sup>)<sup>−1</sup>Y. Thus, kCCA is a special case of our proposed formulation based on hypergraph.

3.2 A convex formulation

It follows from the theory of reproducing kernels (Schölkopf and Smola, 2002) that the kernel K in Equation (1) uniquely determines a mapping of the patterns to some feature space. Thus, kernel selection (learning) is one of the central issues in kernel methods. Following the multiple kernel learning framework (Lanckriet et al., 2004a), we propose to obtain an optimal kernel matrix by integrating multiple kernel matrices constructed from various features, that is, K=∑j=1pθj Kj where {Kj}j=1p are the p kernels constructed from various local descriptors and {θj}j=1p are the weights satisfying ∑j=1p θj trace(Kj)=1. We show that the optimal weights that maximize the objective function in Equation (1) can be obtained by solving a semi-infinite linear program (SILP) (Hettich and Kortanek, 1993) in which a linear objective is optimized subject to an infinite number of linear constraints. This is summarized in the following theorem (Proof given in the Supplementary Material):

THEOREM 3.1. —

Given a set of p kernel matrices K1, …, Kp, the optimal kernel matrix, in the form of a linear combination of the given p kernel matrices that maximizes the objective function in Equation (1), can be obtained by solving the following SILP problem:

$$\max_{\gamma\in\mathbb{R},\ \theta\in\mathbb{R}^{p}}\ \gamma$$

$$\text{subject to}\quad \theta\ge 0,\quad \theta^{T}r=1,\quad \sum_{j=1}^{p}\theta_{j}S_{j}(\Psi)\ge\gamma\ \ \text{for all}\ \Psi\in\mathbb{R}^{n\times k},$$

where S<sub>j</sub>(Ψ), for j=1,…,p, is a function of Ψ that does not depend on θ (its explicit form is derived in the Supplementary Material), Ψ=[ψ<sub>1</sub>,…,ψ<sub>k</sub>], r=(r<sub>1</sub>,…,r<sub>p</sub>)<sup>T</sup>, and r<sub>j</sub>=trace(K<sub>j</sub>).

The SILP formulation proposed in Theorem 3.1 can be solved by the so-called ‘column generation’ technique, and more details can be found in the Supplementary Material.
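Separately from the SILP that determines the weights, the trace-normalized convex combination itself is straightforward. The sketch below is ours, not the authors' code; `combine_kernels` is a hypothetical helper that rescales arbitrary nonnegative weights so that the constraint ∑<sub>j</sub> θ<sub>j</sub> trace(K<sub>j</sub>)=1 holds:

```python
import numpy as np

def combine_kernels(kernels, theta):
    """Convex combination K = sum_j theta_j K_j, with the weights
    rescaled so that sum_j theta_j * trace(K_j) = 1."""
    kernels = [np.asarray(K, float) for K in kernels]
    theta = np.maximum(np.asarray(theta, float), 0.0)  # weights must be nonnegative
    scale = sum(t * np.trace(K) for t, K in zip(theta, kernels))
    theta = theta / scale
    return sum(t * K for t, K in zip(theta, kernels))
```

The rescaling makes the candidate kernels comparable regardless of their original scale, which is the purpose of the trace constraint.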


In this section, we apply the proposed framework for annotating gene expression patterns. We use a collection of images obtained from the FlyExpress database (Van Emden et al., 2006), which contains standardized and aligned images. All the images used are taken from lateral view with the anterior to the left. The size of each raw image is 128 × 320.

4.1 Experimental setup

We apply nine local descriptors on regular grids of two different sizes on each image. The nine local descriptors are SIFT, shape context, PCA-SIFT, spin image, steerable filters, differential invariants, complex filters, moment invariants and cross-correlation. These local descriptors are commonly used for object recognition (more details can be found in Mikolajczyk and Schmid, 2005). The grids we use are 16 and 32 pixels in radius and spacing (Fig. 2), producing 133 and 27 local features per image, respectively.
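The feature counts above are consistent with placing descriptor centers at interior grid points only, one spacing step away from the image border. Under that assumption (ours; the text does not state the border handling explicitly), the grid can be generated as:

```python
def grid_centers(height, width, spacing):
    """Centers of a regular grid with the given spacing, keeping
    points one spacing step away from the image border."""
    return [(y, x)
            for y in range(spacing, height, spacing)
            for x in range(spacing, width, spacing)]
```

For a 128 × 320 image this yields a 7 × 19 grid (133 points) at 16-pixel spacing and a 3 × 9 grid (27 points) at 32-pixel spacing, matching the counts reported above.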

It is known that local textures are important discriminative features of gene expression pattern images, and features constructed from filter banks and raw pixel intensities are effective in capturing such information (Varma and Zisserman, 2003). We, therefore, apply Gabor filters with different wavelet scales and filter orientations on each image to obtain global features of 384 and 2592 dimensions. We also sample the pixel values of each image using a bilinear technique, and obtain features of 10240, 2560 and 640 dimensions. The resulting features are called ‘global features’.

After generating the features, we apply the vocabulary-guided pyramid match algorithm (Grauman and Darrell, 2007a) to construct kernels between image sets. A total of 23 kernel matrices (2 grid sizes × 9 local descriptors + 2 Gabor + 3 pixel) are obtained. Then, the proposed MKL formulation is employed to obtain the optimal integrated kernel matrix. The performance of kernel matrices (either single or integrated) is evaluated by applying the support vector machine (SVM) for each term and treating image sets annotated with this term as positive, and all other image sets as negative. We extract different numbers of terms from the FlyExpress database and use various numbers of image sets annotated with the selected terms for the experiments.
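The per-term evaluation protocol can be sketched with scikit-learn's SVC on a precomputed kernel. This is an illustrative sketch, not the authors' code: `annotate` and its argument layout are our assumptions, and each term is assumed to have both positive and negative training examples.

```python
import numpy as np
from sklearn.svm import SVC

def annotate(K, labels, train, test, C=1.0):
    """One-vs-rest annotation with a precomputed kernel: one SVM per
    term, with image sets carrying the term as positive and all
    other image sets as negative.

    K      : (n, n) kernel matrix between all image sets
    labels : (n, t) binary matrix, labels[i, j] = 1 iff set i has term j
    """
    K_tr = K[np.ix_(train, train)]
    K_te = K[np.ix_(test, train)]   # rows: test sets, cols: train sets
    preds = np.zeros((len(test), labels.shape[1]), dtype=int)
    for j in range(labels.shape[1]):
        clf = SVC(kernel="precomputed", C=C)
        clf.fit(K_tr, labels[train, j])
        preds[:, j] = clf.predict(K_te)
    return preds
```

Because the kernel is precomputed, the same routine works whether K is a single pyramid match kernel or an integrated combination of several.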

Precision and recall are two commonly used criteria for evaluating the performance of multi-label classification systems (Datta et al., 2008). For each term, let Π and Λ denote the indices of patterns that are annotated with this term by the proposed framework and by human curators in BDGP, respectively. Then, precision and recall for this term are defined as P=|Π∩Λ|/|Π| and R=|Π∩Λ|/|Λ|, respectively, where |·| denotes set cardinality. The F1 score is the harmonic mean of precision and recall: F1=(2 × P × R)/(P+R). To measure performance across multiple terms, we use both the macro F1 (the average of F1 across all terms) and the micro F1 (F1 computed from the sum of per-term contingency tables) scores, which are commonly used in text and image applications (Datta et al., 2008). In each case, the entire dataset is randomly partitioned into training and test sets with ratio 1:1. This process is repeated 10 times, and the averaged performance is reported. We report the performance of each individual kernel and compare it with methods based on multi-instance learning on a dataset of 10 terms and 1000 image sets in the Supplementary Material. Results indicate that kernels constructed from the SIFT and PCA-SIFT descriptors yield the highest performance.
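These definitions can be made concrete with a small helper (hypothetical code of ours; each term maps to the set of pattern indices annotated with it):

```python
def f1_scores(predicted, truth):
    """Macro and micro F1 over terms.

    predicted, truth: dicts mapping each term to the set of pattern
    indices annotated with it (by the method / by the curators).
    Uses the identity F1 = 2*TP / (2*TP + FP + FN)."""
    per_term, tp_all, fp_all, fn_all = [], 0, 0, 0
    for term in truth:
        P, T = predicted.get(term, set()), truth[term]
        tp, fp, fn = len(P & T), len(P - T), len(T - P)
        denom = 2 * tp + fp + fn
        per_term.append(2 * tp / denom if denom else 0.0)
        tp_all, fp_all, fn_all = tp_all + tp, fp_all + fp, fn_all + fn
    macro = sum(per_term) / len(per_term)                 # average of per-term F1
    micro = 2 * tp_all / (2 * tp_all + fp_all + fn_all)   # F1 of pooled counts
    return macro, micro
```

Macro F1 weights every term equally, while micro F1 weights every annotation decision equally, so rare terms influence the two scores very differently.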

4.2 Annotation results

We apply the proposed formulations (star, clique and kCCA) to combine the various kernel matrices derived from different local descriptors. The performance of multiple kernel learning based on the soft margin 1-norm SVM (SVM1) criterion proposed in Lanckriet et al. (2004a) is also reported. Since the SVM1 formulation is only applicable to binary-class problems, we apply it for each term by treating image sets annotated with this term as positive and all other image sets as negative. To demonstrate the effectiveness of the proposed formulation for integrating kernels, we also report results obtained by combining the candidate kernels with uniform weights, along with the performance of the best individual kernel (among the 23 kernels) for each dataset. To compare with the existing method proposed in Zhou and Peng (2007), we extract wavelet features from images and apply the min-redundancy max-relevance feature selection algorithm to select a subset of features. As was done in Zhou and Peng (2007), we assign terms to individual images and apply linear discriminant analysis to annotate each image; this setup does not consider the image group information. The annotation results measured by F1 score, precision and recall are summarized in Tables 1–4.

Table 1.
Performance of integrated kernels on gene expression pattern annotation in terms of macro F1 score
Table 2.
Performance of integrated kernels on gene expression pattern annotation in terms of micro F1 score
Table 3.
Performance of integrated kernels on gene expression pattern annotation in terms of precision
Table 4.
Performance of integrated kernels on gene expression pattern annotation in terms of recall

It can be observed from the results that, in terms of both macro and micro F1 scores, the kernels integrated by either star or clique expansion achieve the highest performance on all but one of the datasets. This shows that the proposed formulation is effective in combining multiple kernels and potentially exploits the complementary information contained in different kernels. For all datasets, the integrated kernels outperform the best individual kernel. In terms of precision and recall, our results indicate that SVM1 and Uniform achieve higher precision than the proposed formulations, but both yield significantly lower recall. On the other hand, the best individual kernel produces slightly higher recall than the proposed formulations, but yields significantly lower precision. Note that precision and recall are competing criteria, and one can always achieve a perfect score on one at the price of the other. Hence, the proposed formulation achieves a good balance between precision and recall, as indicated by the F1 scores. Note that the best individual kernel (BIK) can have both higher precision and higher recall than the proposed formulation, since we report the highest precision and the highest recall among all of the candidate kernels separately; the BIK for precision and the BIK for recall may not correspond to the same kernel. For all four measures, the proposed formulations significantly outperform the method proposed in Zhou and Peng (2007). This shows that the annotation performance can be improved by considering the image group information.

Figure 3 shows some annotation results obtained by clique expansion for sample patterns in each stage range. Note that the pyramid match algorithm can compute kernels between variable-sized sets of images; thus, terms can be predicted for image sets of any size. Overall, the proposed computational framework achieves promising performance on annotating gene expression patterns. At the same time, we realize that the current framework suffers from some potential limitations. By comparing the BDGP terms and the predicted terms for patterns in stage ranges 7–8 and 9–10, we can see that the structures related to endoderm are predicted correctly, while some of those related to mesoderm are prone to error. This may be because, when viewed laterally, structures related to mesoderm are more likely to be hidden than those related to endoderm. This phenomenon becomes clearer when we examine the results for stage range 13–16 in Figures 3 and 4. As shown in Figure 4, there are a total of five images in this set in the original BDGP database. Among these five images, only two (the first and third) are taken from the lateral view and hence are used in our experiments. The second and the fourth images are taken from the ventral view, and the fifth image is taken from the dorsal view. The structure ventral midline can only be documented by images taken from the ventral view, as can be seen from the second and the fourth images in Figure 4. Since we only use images from the lateral view, it can be seen from Figure 3 that the proposed framework cannot predict this term correctly. This problem can potentially be solved by using images taken from other views, such as ventral and dorsal. However, the incorporation of images with multiple views may complicate the computational procedure and thus requires special care.

Fig. 3.
Annotation results for sample patterns in the six stage ranges. BDGP terms denote terms that are assigned by human curators in the Berkeley Drosophila Genome Project (Tomancak et al., 2002), and predicted terms denote terms predicted by the proposed computational framework.
Fig. 4.
The original five images in stage range 13–16 from BDGP. The first and the third images are taken from lateral view; the second and the fourth images are taken from ventral view; the fifth image is taken from dorsal view. Only the first and the third images are used in our experiments.

To evaluate the scalability of the proposed formulations, we vary the number of terms and the number of image sets, and compare the change in computation time. On a machine with a Pentium 4 3.40 GHz CPU and 1 GB of RAM, when the number of terms is increased from 20 to 60 on a dataset of 500 image sets, the computation time increases from approximately 4 s to 11 s. In terms of the number of image sets, datasets of 1500 and 2000 image sets with 60 terms take around 3 and 4 min, respectively.


In this article, we have presented a computational framework for annotating gene expression patterns of Drosophila. We propose to extract invariant features from gene expression pattern images and construct kernels between these sets of features. To integrate multiple kernels effectively, we propose multi-label, multiple kernel learning formulations based on hypergraph. Experimental evaluation shows that the integrated kernels consistently outperform the best individual kernel. Currently, the annotation of patterns by human curators requires multiple passes, and the proposed framework can be used as a preprocessing step whose annotation is further refined by human curators.

In future work, we plan to perform a detailed analysis of the weights obtained by the MKL formulation, and investigate how they are related to the relevance of each kernel. Our experimental results show that features extracted on smaller grids tend to yield better results. However, computational resource limitations prevent the use of a grid size smaller than 16 pixels. We plan to explore ways to overcome this problem. Retrieving gene expression patterns by combining information from images and annotations is an interesting and challenging research issue. The proposed framework can assign a probability of associating each term to each image, producing a probability vector for unannotated images from various high-throughput experiments. Such information can potentially be exploited to facilitate pattern retrieval. Detailed analysis of the annotation results produced by the proposed framework indicates that integration of gene expression pattern images taken from multiple views can potentially improve the annotation performance. In this case, the current pyramid match algorithms need to be adapted so that only images taken from the same view are matched. It can be seen from the third and fifth images in Figure 4 that the annotation terms can also be associated with partial patterns in each image. These partial patterns have been removed in our FlyExpress database (Fig. 3), so these terms cannot be predicted correctly by the proposed framework. We plan to explore ways to incorporate these partial patterns in the future.

Supplementary Material

[Supplementary Data]


We thank Kristen Grauman, John Lee and Zhi-Hua Zhou for fruitful discussions, Hanchuan Peng for the feature selection code, Kristi Garboushian for editorial support, and Bernard Van Emden for help with access to the gene expression data.

Funding: This work is supported in part by research grants from National Institutes of Health under No. HG002516 and National Science Foundation under No. IIS-0612069.

Conflict of Interest: none declared.


Articles from Bioinformatics are provided here courtesy of Oxford University Press