Deep Multiview Learning to Identify Population Structure with Multimodal Imaging

Proc IEEE Int Symp Bioinformatics Bioeng. 2020 Oct;2020:308-314. doi: 10.1109/bibe50027.2020.00057. Epub 2020 Dec 16.

Abstract

We present an effective deep multiview learning framework to identify population structure using multimodal imaging data. Our approach is based on canonical correlation analysis (CCA). We propose to use deep generalized CCA (DGCCA) to learn a shared latent representation of non-linearly mapped and maximally correlated components from multiple imaging modalities with reduced dimensionality. In our empirical study, this representation is shown to capture more variance in the original data than conventional generalized CCA (GCCA), which applies only linear transformations to the multi-view data. Furthermore, subsequent cluster analysis on the new feature set learned by DGCCA identifies a promising population structure in an Alzheimer's disease (AD) cohort. Genetic association analyses of the clustering results demonstrate that the shared representation learned by DGCCA yields a population structure with a stronger genetic basis than several competing feature learning methods.
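To make the linear baseline concrete: GCCA (in its MAX-VAR formulation) seeks a shared representation G that is maximally correlated with a linear projection of each view; DGCCA replaces each raw view with the output of a per-view neural network before this step. The sketch below is a minimal, hedged numpy implementation of linear GCCA only, not the authors' code; the function name `gcca`, the ridge term `reg`, and the random example views are illustrative assumptions.

```python
import numpy as np

def gcca(views, k, reg=1e-8):
    """Linear GCCA (MAX-VAR): find a shared representation G (n, k)
    maximizing total correlation with linear projections of each view.
    views: list of (n_samples, d_i) arrays. Returns G and per-view maps U_i.
    NOTE: illustrative sketch; DGCCA would first pass each view through a
    learned nonlinear network f_i and apply this step to f_i(X_i)."""
    n = views[0].shape[0]
    M = np.zeros((n, n))
    for X in views:
        Xc = X - X.mean(axis=0)                    # center each view
        C = Xc.T @ Xc + reg * np.eye(X.shape[1])   # regularized Gram matrix
        M += Xc @ np.linalg.solve(C, Xc.T)         # projection onto view's column space
    # Top-k eigenvectors of the summed projection matrices give the shared G.
    vals, vecs = np.linalg.eigh(M)                 # ascending eigenvalues
    G = vecs[:, ::-1][:, :k]
    Us = []
    for X in views:
        Xc = X - X.mean(axis=0)
        C = Xc.T @ Xc + reg * np.eye(X.shape[1])
        Us.append(np.linalg.solve(C, Xc.T @ G))    # least-squares map: view -> G
    return G, Us
```

In the paper's pipeline, the shared representation (here G) would then be fed to a clustering algorithm such as k-means to derive the candidate population structure.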

Keywords: Deep learning; deep generalized canonical correlation analysis; image-driven population structure; multimodal imaging; multiview learning.