Gigascience. 2018 Jun 1;7(6). doi: 10.1093/gigascience/giy052.

Optimized distributed systems achieve significant performance improvement on sorted merging of massive VCF files.

Author information

1. Department of Computer Sciences, Emory University, Atlanta, GA 30322, USA.
2. Department of Medical Informatics, Emory University School of Medicine, Atlanta, GA 30322, USA.
3. Department of Human Genetics, Emory University School of Medicine, Atlanta, GA 30322, USA.
4. Department of Medicine, University of California, San Francisco, San Francisco, CA 94143, USA.
5. Department of Epidemiology, Johns Hopkins Bloomberg School of Public Health, Baltimore, MD 21205, USA.
6. Department of Biostatistics, Johns Hopkins Bloomberg School of Public Health, Baltimore, MD 21205, USA.
7. Department of Medicine, Johns Hopkins University, Baltimore, MD 21224, USA.
8. Department of Medicine, University of Colorado Denver, Aurora, CO 80045, USA.
9. Department of Biomedical Informatics, Stony Brook University, Stony Brook, NY 11794, USA.
10. Department of Biostatistics, Emory University, Atlanta, GA 30322, USA.

Abstract

Background:

Sorted merging of genomic data is a common operation in many sequencing-based studies; it involves sorting and merging genomic data from different subjects by genomic location. In particular, merging a large number of variant call format (VCF) files is frequently required in large-scale whole-genome sequencing or whole-exome sequencing projects. Traditional single-machine methods become increasingly inefficient as the number of files grows because of excessive computation time and input/output (I/O) bottlenecks. Distributed systems, and more recently cloud-based systems, offer an attractive solution. However, carefully designed and optimized workflow patterns and execution plans (schemas) are required to take full advantage of the increased computing power while overcoming these bottlenecks to achieve high performance.
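To make the single-machine baseline concrete, the following is a minimal Python sketch of a traditional multiway merge: k coordinate-sorted VCF files are streamed through a lazy min-heap merge keyed on (chromosome, position). The chromosome ordering, file paths, and function names are illustrative assumptions, not details from the paper, and a complete VCF merge would additionally combine per-subject genotype columns at each shared site.

    import heapq

    # Illustrative chromosome ordering (an assumption, not from the paper).
    CHROM_ORDER = {c: i for i, c in enumerate(
        [str(n) for n in range(1, 23)] + ["X", "Y", "MT"])}

    def records(path):
        """Yield ((chrom_rank, position), raw_line) from a coordinate-sorted VCF."""
        with open(path) as fh:
            for line in fh:
                if line.startswith("#"):
                    continue  # skip meta-information and header lines
                chrom, pos, _ = line.split("\t", 2)
                yield (CHROM_ORDER[chrom.lstrip("chr")], int(pos)), line

    def multiway_merge(vcf_paths, out_path):
        """Stream k sorted VCFs through heapq.merge, a lazy k-way merge."""
        streams = [records(p) for p in vcf_paths]
        with open(out_path, "w") as out:
            for _, line in heapq.merge(*streams, key=lambda item: item[0]):
                out.write(line)

    # Hypothetical usage:
    # multiway_merge(["subject1.vcf", "subject2.vcf"], "merged_body.vcf")

With many large files, this approach is bound by a single machine's disk and CPU, which is precisely the computation and I/O bottleneck that distributed schemas are designed to remove.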

Findings:

In this study, we custom-design optimized schemas for three Apache big data platforms, Hadoop (MapReduce), HBase, and Spark, to perform sorted merging of a large number of VCF files. All three schemas adopt a divide-and-conquer strategy, splitting the merging job into sequential phases/stages whose subtasks are conquered in an ordered, parallel, and bottleneck-free way (see the sketch after this paragraph). In two illustrative examples, we test the performance of our schemas on merging multiple VCF files into either a single TPED or a single VCF file, benchmarking them against traditional single/parallel multiway-merge methods, a message passing interface (MPI)-based high-performance computing (HPC) implementation, and the popular VCFTools.
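As an illustration of this divide-and-conquer pattern, and not the paper's actual Spark schema, here is a hedged PySpark sketch: variant records from many per-subject VCFs are keyed by genomic location, range-partitioned so that each partition covers a contiguous genomic interval, sorted within partitions in parallel, and written as globally ordered output. The HDFS paths, partition count, and chromosome ordering are assumptions for illustration.

    from pyspark import SparkContext

    # Illustrative chromosome ordering (an assumption, as above).
    CHROM_ORDER = {c: i for i, c in enumerate(
        [str(n) for n in range(1, 23)] + ["X", "Y", "MT"])}

    def parse(line):
        """Key each VCF data line by (chrom_rank, position)."""
        chrom, pos, _ = line.split("\t", 2)
        return (CHROM_ORDER[chrom.lstrip("chr")], int(pos)), line

    sc = SparkContext(appName="vcf-sorted-merge")

    # Divide: read all per-subject VCFs as one distributed collection of
    # keyed records (the input glob is a hypothetical location).
    records = (sc.textFile("hdfs:///vcf/*.vcf")
                 .filter(lambda l: not l.startswith("#"))
                 .map(parse))

    # Conquer: sortByKey samples the keys to build range-partition
    # boundaries, so each of the 64 partitions holds a contiguous genomic
    # range and is sorted locally in parallel; the output parts then
    # concatenate into one globally sorted stream (ordered, parallel, and
    # free of a single-node bottleneck).
    (records.sortByKey(numPartitions=64)
            .map(lambda kv: kv[1])
            .saveAsTextFile("hdfs:///out/merged_vcf"))

This sketch covers only the global sort-and-write backbone; producing a single merged VCF or TPED would additionally require joining the per-subject genotype fields at each site, for example with a cogroup step over the same keys.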

Conclusions:

Our experiments suggest that all three schemas either deliver a significant improvement in efficiency or exhibit much better strong and weak scalability than traditional methods. Our findings provide generalized, scalable schemas for performing sorted merging of genetic and genomic data using these Apache distributed systems.

PMID: 29762754
PMCID: PMC6007233
DOI: 10.1093/gigascience/giy052
[Indexed for MEDLINE]
Free PMC Article
