Bioinformatics. 2019 Mar 1;35(5):760-768. doi: 10.1093/bioinformatics/bty733.

SpaRC: scalable sequence clustering using Apache Spark.

Author information

Department of Computer Science, School of Computer Science, Florida State University, Tallahassee, FL, USA.
US Department of Energy, Joint Genome Institute, Walnut Creek, CA, USA.
Environmental Genomics and Systems Biology Division, Lawrence Berkeley National Laboratory, Berkeley, CA, USA.
Pacific Biosciences Inc, Menlo Park, CA, USA.
School of Natural Sciences, University of California at Merced, Merced, CA, USA.



Whole-genome shotgun-based next-generation transcriptomics and metagenomics studies often generate 100-1000 GB of sequence data derived from tens of thousands of different genes or microbial species. Assembly of these datasets requires trade-offs between scalability and accuracy: current assembly methods optimized for scalability often sacrifice accuracy, and vice versa. An ideal solution would both scale well and produce optimal accuracy for individual genes or genomes.


Here we describe an Apache Spark-based scalable sequence clustering application, SparkReadClust (SpaRC), that partitions reads based on their molecule of origin to enable downstream assembly optimization. SpaRC produces high clustering performance on transcriptomes and metagenomes from both short- and long-read sequencing technologies. It achieves near-linear scalability with input data size and number of compute nodes. SpaRC runs in both cloud-computing and HPC environments without modification while delivering similar performance. Our results demonstrate that SpaRC provides a scalable solution for clustering billions of reads from next-generation sequencing experiments, and that Apache Spark represents a cost-effective platform with rapid development/deployment cycles for similar large-scale sequence data analysis problems.
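The abstract does not spell out SpaRC's clustering algorithm, but the core idea of partitioning reads by molecule of origin can be illustrated with a common heuristic: treat two reads as related if they share a k-mer, and take connected components as clusters. The following is a minimal single-machine sketch of that idea in plain Python using union-find; it is an illustration only, not SpaRC's actual distributed implementation, and the function names, the k-mer-sharing criterion, and the parameter `k` are all assumptions for the example.

```python
from collections import defaultdict

def kmers(seq, k):
    """Yield all overlapping k-mers of a sequence."""
    for i in range(len(seq) - k + 1):
        yield seq[i:i + k]

def cluster_reads(reads, k=5):
    """Group reads into clusters: two reads land in the same cluster
    if they share at least one k-mer (transitively, via union-find)."""
    parent = list(range(len(reads)))

    def find(x):
        # Path-halving find.
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    def union(a, b):
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[rb] = ra

    # Invert: map each k-mer to the reads containing it (the "shuffle"
    # step a Spark version would distribute), then union those reads.
    index = defaultdict(list)
    for rid, seq in enumerate(reads):
        for km in set(kmers(seq, k)):
            index[km].append(rid)
    for rids in index.values():
        for other in rids[1:]:
            union(rids[0], other)

    # Collect final clusters by root.
    clusters = defaultdict(list)
    for rid in range(len(reads)):
        clusters[find(rid)].append(rid)
    return list(clusters.values())

# Reads 0/1 share the k-mer TACGT; reads 2/3 share GCCCC.
print(cluster_reads(["ACGTACGT", "TACGTTTT", "GGGGCCCC", "GCCCCAAA"], k=5))
```

In a Spark version, the k-mer-to-reads index would be built with a `flatMap`/`groupByKey` over a read RDD, and the connected-components step would run iteratively over the resulting read graph; the sketch above collapses both stages onto one machine.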

