Genomics pipelines and data integration: challenges and opportunities in the research setting

Expert Rev Mol Diagn. 2017 Mar;17(3):225-237. doi: 10.1080/14737159.2017.1282822. Epub 2017 Jan 25.

Abstract

The emergence and mass utilization of high-throughput (HT) technologies, including sequencing technologies (genomics) and mass spectrometry (proteomics, metabolomics, lipidomics), have allowed geneticists, biologists, and biostatisticians to bridge the gap between genotype and phenotype on a massive scale. These new technologies have brought rapid advances in our understanding of cell biology, evolutionary history, and microbial environments, and are increasingly providing new insights and applications towards clinical care and personalized medicine.

Areas covered: The very success of this industry also translates into daunting big data challenges for researchers and institutions that extend beyond the traditional academic focus of algorithms and tools. The main obstacles revolve around analysis provenance, management of massive datasets, ease of use of software, and interpretability and reproducibility of results.

Expert commentary: The authors review the challenges associated with implementing bioinformatics best practices at large scale, and highlight the opportunity to establish bioinformatics pipelines that incorporate data tracking and auditing, enabling greater consistency and reproducibility in basic research, translational, and clinical settings.
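As a concrete illustration of the data tracking and auditing idea raised in the expert commentary (a minimal sketch, not taken from the article itself), the short Python example below wraps a single pipeline step and writes a JSON provenance record capturing the exact command, input checksums, and timestamps. The samtools sort command and the file names are hypothetical placeholders; any step of an ExomeSeq or RNAseq workflow could be wrapped in the same way.

    # Minimal provenance-tracking sketch (illustrative only, not from the article).
    import hashlib
    import json
    import subprocess
    from datetime import datetime, timezone
    from pathlib import Path


    def sha256(path: Path) -> str:
        """Checksum a file so the exact data used in a run can be audited later."""
        digest = hashlib.sha256()
        with path.open("rb") as fh:
            for chunk in iter(lambda: fh.read(1 << 20), b""):
                digest.update(chunk)
        return digest.hexdigest()


    def run_step(cmd: list[str], inputs: list[Path], output: Path) -> None:
        """Run one pipeline command and write a provenance record beside its output."""
        input_sums = {str(p): sha256(p) for p in inputs}  # checksum inputs as used
        started = datetime.now(timezone.utc).isoformat()
        subprocess.run(cmd, check=True)  # raise if the tool exits non-zero
        record = {
            "command": cmd,
            "inputs": input_sums,
            "output": str(output),
            "started": started,
            "finished": datetime.now(timezone.utc).isoformat(),
        }
        Path(str(output) + ".provenance.json").write_text(json.dumps(record, indent=2))


    if __name__ == "__main__":
        # Hypothetical step: sort an alignment file with samtools and log its provenance.
        unsorted_bam = Path("sample.bam")
        sorted_bam = Path("sample.sorted.bam")
        run_step(
            ["samtools", "sort", "-o", str(sorted_bam), str(unsorted_bam)],
            inputs=[unsorted_bam],
            output=sorted_bam,
        )

The design choice here is simply that every output file is accompanied by a machine-readable record of how it was produced, which is one way the consistency and reproducibility goals described above could be made auditable.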

Keywords: ExomeSeq; High throughput sequencing; RNAseq; analysis provenance; bioinformatics best practices; bioinformatics pipelines; genomic data management; reproducible computational research; variant calling.

Publication types

  • Review
  • Research Support, Non-U.S. Gov't
  • Research Support, N.I.H., Extramural

MeSH terms

  • Computational Biology* / instrumentation
  • Computational Biology* / methods
  • Computational Biology* / trends
  • Genetic Research*
  • Genomics* / instrumentation
  • Genomics* / methods
  • Genomics* / trends