Faster Genome Analysis Enabling Clinical Application, Population-Scale Translational Research
Genome analysis pipelines are getting faster, and these computational advances promise to alleviate the notorious analysis bottleneck that has challenged clinical adoption of genome sequencing. For genome sequencing to achieve widespread clinical relevance, time to results must be cut significantly; and to enable the next wave of understanding about the genetic origins of disease, these pipelines must also be robust enough to accommodate population-scale datasets of tens of thousands of genomes. Experts believe the technology to overcome these analysis challenges is now entering the marketplace.

As next-generation sequencing (NGS) instruments become more commonplace in laboratories and churn out raw data at ever-faster rates, access to scalable analysis tools grows even more critical. Optimized analysis workflows are the missing link, able to transform big data into clinically actionable information or scientific discoveries. Getting from raw base-pair data to a report of pathogenic variants requires multiple computational steps: alignment, deduplication, realignment, recalibration, and variant discovery. The resulting variant call format (VCF) file then requires tertiary analysis to match variants with clinically relevant information. Current analysis approaches can take weeks to complete, and they demand bioinformatics expertise and computing infrastructure whose cost can exceed that of actually generating the sequencing data. Taken […]
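The secondary-analysis steps listed above can be pictured as a simple chain of stages, each consuming the previous stage's output. The sketch below is purely illustrative: the stage functions, the `run_pipeline` orchestrator, and the mock sample record are hypothetical stand-ins, not any vendor's API, and real tools (aligners, duplicate markers, variant callers) operate on BAM/VCF files rather than dictionaries.

```python
# Hypothetical sketch: a secondary-analysis pipeline modeled as a chain of
# stage functions. Each stage tags a small dict standing in for real read/BAM
# data; a production pipeline would invoke external tools at each step.

def align(sample):
    # Map raw reads to the reference genome.
    return {**sample, "aligned": True}

def deduplicate(sample):
    # Flag PCR/optical duplicate reads so they are not double-counted.
    return {**sample, "dedup": True}

def realign(sample):
    # Locally realign reads around indels to reduce alignment artifacts.
    return {**sample, "realigned": True}

def recalibrate(sample):
    # Adjust base quality scores using empirical error patterns.
    return {**sample, "recalibrated": True}

def call_variants(sample):
    # Emit candidate variants; a real pipeline would write a VCF file here.
    return {**sample, "vcf": sample["id"] + ".vcf"}

PIPELINE = [align, deduplicate, realign, recalibrate, call_variants]

def run_pipeline(sample):
    # Run every stage in order, threading the sample record through.
    for stage in PIPELINE:
        sample = stage(sample)
    return sample

result = run_pipeline({"id": "sampleA", "reads": "raw.fastq"})
print(result["vcf"])  # sampleA.vcf
```

The VCF produced at the end of this chain is what tertiary analysis then annotates against clinical knowledge bases, which is where the interpretation bottleneck described in the article begins.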