Quake: quality-aware detection and correction of sequencing errors
© Kelley et al.; licensee BioMed Central Ltd. 2010
Received: 7 September 2010
Accepted: 29 November 2010
Published: 29 November 2010
We introduce Quake, a program to detect and correct errors in DNA sequencing reads. Using a maximum likelihood approach incorporating quality values and nucleotide specific miscall rates, Quake achieves the highest accuracy on realistically simulated reads. We further demonstrate substantial improvements in de novo assembly and SNP detection after using Quake. Quake can be used for any size project, including more than one billion human reads, and is freely available as open source software from http://www.cbcb.umd.edu/software/quake.
Massively parallel DNA sequencing has become a prominent tool in biological research [1, 2]. The high throughput and low cost of second-generation sequencing technologies have allowed researchers to address an ever-larger set of biological and biomedical problems. For example, the 1000 Genomes Project is using sequencing to discover all common variations in the human genome. The Genome 10K Project plans to sequence and assemble the genomes of 10,000 vertebrate species. Sequencing is now being applied to a wide variety of tumor samples in an effort to identify mutations associated with cancer [5, 6]. Common to all of these projects is the paramount need to accurately sequence the sample DNA.
DNA sequence reads from Illumina sequencers, one of the most successful of the second-generation technologies, range from 35 to 125 bp in length. Sequence fidelity is high, but errors still occur; these are primarily substitution errors, at rates of 0.5-2.5% (as we show in our experiments), and they rise in frequency toward the 3' ends of reads. Sequencing errors complicate analysis, which normally requires that reads be aligned to each other (for genome assembly) or to a reference genome (for detection of mutations). Mistakes during the overlap computation in genome assembly are costly: missed overlaps may leave gaps in the assembly, while false overlaps may create ambiguous paths or improperly connect remote regions of the genome. In genome re-sequencing projects, reads are aligned to a reference genome, usually allowing for a fixed number of mismatches due to either SNPs or sequencing errors. In most cases, the reference genome and the genome being newly sequenced will differ, sometimes substantially. Variable regions are more difficult to align because mismatches from both polymorphisms and sequencing errors occur, but if errors can be eliminated, more reads will align and the sensitivity for variant detection will improve.
Fortunately, the low cost of second-generation sequencing makes it possible to obtain highly redundant coverage of a genome, which can be used to correct sequencing errors in the reads before assembly or alignment. Various methods have been proposed to use this redundancy for error correction; for example, the EULER assembler counts the number of appearances of each oligonucleotide of size k (hereafter referred to as k-mers) in the reads. For sufficiently large k, almost all single-base errors alter k-mers overlapping the error to versions that do not exist in the genome. Therefore, k-mers with low coverage, particularly those occurring just once or twice, usually represent sequencing errors. For the purpose of our discussion, we will refer to high coverage k-mers as trusted, because they are highly likely to occur in the genome, and low coverage k-mers as untrusted. Based on this principle, we can identify reads containing untrusted k-mers and either correct them so that all k-mers are trusted or simply discard them. The latest instance of EULER determines a coverage cutoff to separate low and high coverage k-mers using a mixture model of Poisson (low) and Gaussian (high) distributions, and corrects reads with low coverage k-mers by making nucleotide edits to the read that reduce the number of low coverage k-mers until all k-mers in the read have high coverage. A number of related methods have been proposed to perform this error correction step, all guided by the goal of finding the minimum number of single base edits (edit distance) to the read that make all k-mers trusted [11–14].
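As a toy illustration of this k-mer counting and trusted/untrusted classification scheme (not Quake's actual implementation, which must handle billions of k-mers compactly), consider the following Python sketch:

```python
from collections import Counter

def kmer_counts(reads, k):
    """Count every k-mer (length-k substring) across a set of reads."""
    counts = Counter()
    for read in reads:
        for i in range(len(read) - k + 1):
            counts[read[i:i + k]] += 1
    return counts

def classify(counts, cutoff):
    """Split k-mers into trusted (count >= cutoff) and untrusted."""
    trusted = {km for km, c in counts.items() if c >= cutoff}
    return trusted, set(counts) - trusted

# A single-base error (final T->A in the second read) produces the
# low-coverage k-mer ACGA, which falls below the cutoff.
counts = kmer_counts(["ACGTACGT", "ACGTACGA", "ACGTACGT"], k=4)
trusted, untrusted = classify(counts, cutoff=2)
```

Here the erroneous k-mer ACGA appears only once, while genuine k-mers such as ACGT are observed repeatedly and are classified as trusted.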
In addition, a few alternative approaches to error correction should be mentioned. Past methods intended for Sanger sequencing involve multiple sequence alignments of reads, rendering them infeasible for short read datasets [15–17]. More recently, a generalized suffix tree of the reads was shown to be an effective data structure for detecting and correcting errors in short reads [18, 19]. De Bruijn graph-based short read assemblers [10, 11, 13, 20, 21] perform substantial error correction of reads in the de Bruijn graph. For example, short dead-end paths are indicative of a sequencing error at the end of a read and can be removed, and 'bubbles', where a low coverage path briefly diverges from and then reconnects to high coverage nodes, are indicative of sequencing errors in the middle of a read and can be merged. Finally, a number of methods have been proposed to cluster reads and implicitly correct sequencing errors in data where the targets vary in abundance, such as sequencing of small RNAs or 16S rRNA [22–25].
Although methods that search for the correct read based on minimizing edit distance will mostly make the proper corrections, edit distance is an incomplete measure of relatedness. First, each position in a sequencing read is assigned a quality value, which defines the probability that the base call represents the true base. Though questions have been raised about the degree to which quality values exactly define the probability of error, newer methods for assigning them to base calls demonstrate substantial improvements [27–31], and for our purpose of error correction, the quality values are useful even if they only rank one base as more likely to be an error than another. We should prefer to edit a read at these lower quality bases where errors are more likely, but edit distance treats all bases the same regardless of quality. Furthermore, specifics of the Illumina technology cause certain miscalls to be more likely than others. For example, bases are called by analysis of fluorescent output from base-incorporating chemical reactions, and A and C share a red detection laser while G and T share a green detection laser. Thus, A and C are more likely to be mistaken for each other than for G or T. Edit distance treats all error substitutions as equally likely.
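Quality values follow the standard Phred convention, in which a quality value Q encodes an error probability of 10^(-Q/10). This conversion can be written as:

```python
def phred_to_perror(q):
    """Probability that a base call with Phred quality q is wrong:
    P(error) = 10^(-Q/10)."""
    return 10 ** (-q / 10)

# Quality 20 corresponds to a 1% chance of error; quality 30 to 0.1%.
low, high = phred_to_perror(20), phred_to_perror(30)
```

An error-correction method can therefore rank the positions within a read by these probabilities, preferring to edit the positions most likely to be wrong.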
In this paper, we introduce a new algorithm called Quake to correct substitution errors in sets of DNA sequencing reads produced as part of >15× coverage sequencing projects, which has become commonplace thanks to the efficiency of second-generation sequencing technologies. Quake uses the k-mer coverage framework, but incorporates quality values and rates of specific miscalls computed from each sequencing project. In addition, Quake incorporates a new method to choose an appropriate coverage cutoff between trusted k-mers (those that are truly part of the genome) and erroneous k-mers based on weighting k-mer counts in the reads using the quality values assigned to each base. On simulated data using quality values from real reads, Quake is more accurate than previous methods, especially with relatively long Illumina reads. Correcting reads guided by edit distance alone, without the use of quality values, results in many more improperly corrected reads. These reads are then chimeric, containing sequence from two distinct areas of the genome, which can be a major problem for assembly software.
Finally, we explore the impact of error correction with Quake on two important bioinformatics applications - de novo assembly and detection of variations with respect to a reference genome. Even a sophisticated assembler such as Velvet, which performs its own error correction using the assembly graph, benefits from pre-processing the reads with Quake. SOAPdenovo, a parallel assembler capable of assembling mammalian-size datasets, also produces better assemblies after error correction. For variant detection, correcting errors before mapping reads to a reference genome results in more reads aligned to SNP locations and more SNPs discovered. Note that Quake and other correction methods that rely on coverage of k-mers are inappropriate for applications where low coverage does not necessarily indicate a sequencing error, such as metagenomics, RNA-Seq, and ChIP-Seq.
Results and discussion
The two goals of error correction are to cleanly separate reads with errors from reads without errors and to properly correct the reads with errors. To assess Quake's ability to accurately complete these tasks, we simulated sequencing reads with errors from finished genomes (using an approach comparable to the 'Maq simulate' program) and compared Quake's corrections to the true reference. For each dataset, we categorized reads and their corrections into four outcomes. As positive outcomes, we counted the number of reads that were properly corrected to their original state or trimmed such that no errors remained. As negative outcomes, we counted the number of reads mis-corrected, producing a false sequence, or left uncorrected even though they contained errors. Reads were simulated by choosing a position in the reference genome, using the quality values from an actual Illumina sequencing read, and changing the nucleotides according to the probabilities defined by those quality values. Dohm et al. measured the bias in Illumina-specific nucleotide-to-nucleotide miscall rates by sequencing reads from Helicobacter acinonychis and Beta vulgaris, aligning them to high quality reference genomes, and counting the number of each type of mismatch in the alignments. At simulated errors, we changed the nucleotide according to these frequencies.
To compare Quake's accuracy to that of previous error correction programs, we corrected the reads using EULER, Shrec, and SOAPdenovo on a four core 2.4 GHz AMD Opteron machine. Quake and the other k-mer based correction tools used k = 15. SOAPdenovo's error correction module does not contain a method to choose the cutoff between trusted and untrusted k-mers, so we tried a few appropriate values and report the best results. We similarly tried multiple values for Shrec's strictness parameter, which is used to help differentiate true and error reads via coverage. These are very sensitive parameters, and leaving them to the user is a critical limitation of these programs. In contrast, EULER and Quake determine their parameters automatically from the data.
[Table: error correction results for simulated 36 bp E. coli reads, including the number of error reads kept by each program]
[Table: error correction results for simulated 124 bp E. coli reads, including the number of error reads kept by each program]
To demonstrate Quake's ability to scale to larger genomes, we simulated 325 million 124 bp reads from the 249 Mbp human chromosome 1 (version hg19), which provided 34× coverage after trimming. Due to the larger size of the sequencing target, we counted and corrected 18-mers in the reads. Of the 15.23 M reads containing errors, Quake corrected 12.83 M (84.2%) and trimmed another 0.82 M (5.4%) to a correct prefix. Because we could not successfully run SOAPdenovo using 18-mers, we corrected using 17-mers, a reasonable choice given that the authors of that software chose 17-mers to correct reads for the entire human genome. Quake corrected 11% more reads than SOAPdenovo, reduced mis-corrections by 64%, and kept 15% fewer error reads. EULER produced very poor correction results, for example, correcting less than half as many reads as Quake with more mis-corrections and error reads kept. On a dataset this large, Shrec required more memory than our largest computer (256 GB).
Relative to the 124 bp simulated reads from E. coli, Quake's attempted corrections were accurate at a lower rate (99.02%) and Quake kept more error reads in the dataset (1.11 M, 7.27%). This is because the human genome contains far more repetitive elements than E. coli, such as the LINE and SINE retrotransposon families. The more repetitive the genome is, the greater the chance that a sequencing error will merely change one trusted k-mer to another trusted k-mer, hiding the error. To quantify this property of the two genomes, we computed the percentage of all possible single base mutations to k-mers in each genome that create k-mers also existing in the genome. In E. coli 536, this is true for 2.25% of 15-mer mutations, and in chromosome 1 of the human genome, it is true for 13.8% of 18-mer mutations. Increasing the k-mer size does little to alleviate the problem, as 11.1% of 19-mer mutations are still problematic. Nevertheless, allowing a small percentage of error reads may not be terribly problematic for most applications. For example, genome assemblers will notice the lower coverage on the paths created by these reads and clean them out of the assembly graph.
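The 'hidden error' computation described above can be sketched as follows; this brute-force version enumerates all 3k single-base mutations of every distinct k-mer and is only practical for small genomes:

```python
def hidden_error_fraction(genome, k):
    """Fraction of single-base mutations to the genome's k-mers that
    produce a k-mer also present in the genome, i.e. errors that a
    k-mer based corrector cannot detect."""
    kmers = {genome[i:i + k] for i in range(len(genome) - k + 1)}
    hidden = total = 0
    for km in kmers:
        for i in range(k):
            for b in "ACGT":
                if b == km[i]:
                    continue  # not a mutation
                total += 1
                if km[:i] + b + km[i + 1:] in kmers:
                    hidden += 1
    return hidden / total
```

A more repetitive genome yields a higher fraction, matching the trend reported for E. coli versus human chromosome 1.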
Having established the accuracy of Quake for error correction on simulated data, we measured the impact of Quake on genome assembly by assembling the reads before and after error correction. One assembly is better than another if it is more connected and more accurately represents the sequenced genome. To measure connectedness, we counted the number of contigs and scaffolds in the assembly larger than 50 bp as well as the N50 and N90 for each, which is the contig/scaffold size for which 50% (90%) of the genome is contained in contigs/scaffolds of equal or larger size. Fewer contigs/scaffolds and larger N50 and N90 values signify that the reads have been more effectively merged into large genomic sequences. In addition, we counted the number of reads included in the assembly because greater coverage generally leads to better accuracy in consensus calling. When a reference genome was available, we used it to validate the correctness of the assembly. We aligned all scaffolds to the reference using MUMmer and considered scaffolds that did not align for their entire length (ignoring 35 bp on each end) at >95% identity to be mis-assembled. We also counted the number of single base differences between the reference and otherwise properly assembled scaffolds. Finally, we computed the percentage of reference nucleotides covered by some aligning scaffold.
Velvet is a widely used de Bruijn graph-based assembler that performs error correction by identifying graph motifs that signify sequencing errors, but does not use a stand-alone error correction module like EULER or SOAPdenovo. Thus, we hypothesized that Quake would help Velvet produce better assemblies. To test this hypothesis, we corrected and assembled 152× (20.8 M reads) coverage of 36 bp reads from E. coli K12 substrain MG1655 [SRA:SRX000429]. We used Velvet's option for automatic computation of expected coverage and chose the de Bruijn graph k-mer size that resulted in the best assembly based on the connectedness and correctness statistics discussed above.
[Table: Velvet E. coli assembly statistics before and after error correction]
We also measured Quake's impact on a larger assembly with longer reads by assembling 353.7 M Illumina reads, all of them 124 bp in length, from the alfalfa leafcutting bee Megachile rotundata, with an estimated genome size of 300 Mbp. (Contact the corresponding author for details on data access.) Assembly was performed with SOAPdenovo using a de Bruijn graph k-mer size of 31 and the '-R' option to resolve small repeats. Assembly of the raw uncorrected reads was quite poor because of the very low quality suffixes of many of the 124 bp reads. Thus, we compare assembly of quality trimmed reads (performed as described above), reads corrected using Quake, and trimmed reads corrected with SOAPdenovo's own error correction module. Quake and SOAPdenovo corrected using 18-mers and a coverage cutoff of 1.0.
[Table: SOAPdenovo Megachile rotundata assembly statistics for trimmed, Quake-corrected, and SOAPdenovo-corrected reads]
A second application of short reads that benefits from error correction is detection of variations, such as single nucleotide polymorphisms (SNPs). In such experiments, the genome from which the reads are sequenced differs from a reference genome to which the reads are compared. The first step is to align the reads to the reference genome using specialized methods that will only allow a few mismatches between the read and reference, such as up to two mismatches in a recent study. A read containing a SNP will start with one mismatch already, and any additional differences from the reference due to sequencing errors will make alignment difficult (see Figure 1). Furthermore, the distribution of SNPs in a genome is not uniform, and clusters of SNPs tend to appear. Reads from such regions may contain multiple SNPs. If these reads contain any sequencing errors, they will not align, causing the highly polymorphic region to be overlooked.
To explore the benefit that error correction with Quake may have on SNP detection, we randomly sampled reads representing 35× coverage from the E. coli K12 reads used above. To call SNPs, we aligned the reads to a related reference genome (E. coli 536 [GenBank: NC_008253]) with Bowtie using two different modes. We first mapped reads allowing up to two mismatches to resemble the SNP calling pipeline in a recent, large study. We also mapped reads using Bowtie's default mode, which allows mismatches between the reference and read until the sum of the quality values at those mismatches exceeds 70. We called SNPs using the SAMtools pileup program, requiring a Phred-style base call quality ≥40 and a coverage of ≥3 aligned reads. Having a reliable reference genome for both strains of E. coli allowed us to compare the SNPs detected using the reads to SNPs detected by performing a whole genome alignment. To call SNPs using the reference genomes, we used the MUMmer utility dnadiff, which aligns the genomes with MUMmer, identifies the optimal alignment for each region, and enumerates SNPs in aligning regions. We treat these SNPs as the gold standard (though there may be some false positives in improperly aligned regions) in order to compute recall and precision statistics for the read-based SNP calls.
[Table: E. coli SNP calling results, comparing uncorrected versus Quake-corrected reads under the two-mismatch and quality-aware alignment modes]
In order to demonstrate the ability of Quake to scale to larger datasets and benefit re-sequencing studies of humans, we corrected 1.7 billion reads from a Korean individual [SRA:SRA008175]. This set includes 1.2 B 36 bp reads and 504 M 75 bp reads. Quake corrected 206 M (11.9%) of these reads, trimmed an additional 75.3 M (4.4%), and removed 344 M (19.9%). Before and after error correction, we aligned the reads to the human genome (NCBI build 37) and called SNPs with Bowtie allowing two mismatches and SAMtools as described above (though requiring the diploid genotype to have quality ≥40 implicitly requires coverage ≥4). Because some putative SNPs had read coverage indicative of a repeat, we filtered out locations with read coverage greater than three times the median coverage of 19, leaving 3,024,283 SNPs based on the uncorrected reads. After error correction, we found 3,083,481 SNPs, an increase of 2.0%. The mean coverage of these SNPs was 20.1 reads, an increase of 4.8% over the coverage of these locations in the alignments of uncorrected reads, which should provide greater accuracy. Thus, Quake helps detect more SNPs in larger diploid genomes as well.
As sequencing becomes more prevalent in biological research, researchers will want to examine and compare the quality of an instance (single lane, machine run, or whole project) of data generation. Error correction with Quake provides two simple measures of data quality in the number of reads corrected and the number of reads removed. Furthermore, Quake allows the user to search for biases in the data like those described above using bundled analysis scripts on the log of all corrections made. Thus, researchers can detect and characterize problems and biases in their data before downstream analyses are performed.
The low cost and high throughput of second-generation sequencing technologies are changing the face of genome research. Despite the many advantages of the new technology, sequencing errors can easily confound analyses by introducing false polymorphisms and fragmenting genome assemblies. The Quake system detects and corrects sequencing errors by using the redundancy inherent in the sequence data. Our results show that Quake corrects more reads more accurately than previous methods, which in turn leads to more effective downstream analyses.
One way Quake improves over prior correction methods is q-mer counting, which uses the quality values assigned to each base to weight each k-mer. The coverage distributions of error and true k-mers cannot be separated perfectly according to their number of appearances due to high coverage errors and low coverage genomic regions. Yet the choice of a cutoff determining which k-mers will be trusted in the correction stage can have a significant effect on downstream applications like genome assembly.
Weighting k-mer appearances by quality puts more distance between the two distributions because erroneous k-mers generally have lower quality than true k-mers. Furthermore, with q-mers, the cutoff value separating the two distributions no longer needs to be an integer. For example, at low coverage we might use 0.95 as a cutoff, such that k-mers that appear once with high quality bases would be trusted, but those with lower quality would not. Such fine-grained cutoff selection is impossible with simple k-mer counting.
Quake includes a sophisticated model of sequencing errors that allows the correction search to examine sets of corrections in order of decreasing likelihood, thus correcting the read more accurately. The model also helps to better identify reads with multiple sets of equally good corrections, which allows the system to avoid mis-correcting and creating a chimeric read. At a minimum, quality values should be included in error correction as a guide to the likely locations of sequencing errors. In each dataset we examined, the rates at which each nucleotide was mis-called to other nucleotides were not uniform and often varied according to quality. Adjusting for these rates provides further improvements in error correction, and distinguishes our method.
We expect Quake will be useful to researchers interested in a number of downstream applications. Correcting reads with Quake improves genome assembly by producing larger and more accurate contigs and scaffolds with the assemblers Velvet and SOAPdenovo. Error correction removes many of the false paths in the assembly graphs caused by errors and helps the assembler detect overlaps between reads that would otherwise have been missed. Eliminating erroneous k-mers also significantly reduces the size of the assembly graph, which for large genomes may be the difference between being able to store the graph in a computer's memory or not. In a re-sequencing application, correcting reads with Quake allows Bowtie to align many more reads to locations in the reference genome where there are one or more SNPs. Reads containing variants already have differences from the reference genome; correcting additional differences caused by sequencing errors makes these reads easier to align and thus available as input for the SNP calling program. Finally, Quake offers a unique perspective into the quality of the data from a sequencing experiment. The proportions of reads corrected, trimmed, and removed are useful statistics with which experiments can be compared and data quality can be monitored. The output log of corrections can be mined for troubling biases.
On microbial-sized genomes, error correction with Quake is fast and unobtrusive for the researcher. On larger datasets, such as a human re-sequencing, it is computationally expensive and requires substantial resources. For the Korean individual reads, we counted k-mers on a 20-core computer cluster running Hadoop, which required two to three days. For error correction, the data structure used to store trusted k-mers requires 4^k bits, which is 32 GB for the human genome when k = 19. Thus, the correction stage of Quake is best run on a large shared memory machine, where correction is parallelized across multiple threads using OpenMP. Running on 16 cores, this took a few days for the Korean individual dataset. Future work will explore alternative ways to perform this step that would require less memory. This would allow correction to be parallelized across a larger computer cluster and made more accessible to researchers without a large shared memory machine.
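The 4^k-bit structure is a plain bit vector indexed by the 2-bit encoding of each k-mer. Quake itself is not written in Python; the sketch below only illustrates the idea and its memory arithmetic (for k = 19, 4^19 bits = 2^38 bits = 32 GB):

```python
class TrustedSet:
    """Bit vector over all 4^k possible k-mers, using 4^k bits total."""
    CODE = {"A": 0, "C": 1, "G": 2, "T": 3}

    def __init__(self, k):
        self.k = k
        # 4^k bits = 4^k / 8 bytes (at least one byte for tiny k).
        self.bits = bytearray(max(4 ** k // 8, 1))

    def _index(self, kmer):
        """2-bit encode a k-mer into an integer in [0, 4^k)."""
        idx = 0
        for base in kmer:
            idx = (idx << 2) | self.CODE[base]
        return idx

    def add(self, kmer):
        i = self._index(kmer)
        self.bits[i >> 3] |= 1 << (i & 7)

    def __contains__(self, kmer):
        i = self._index(kmer)
        return bool(self.bits[i >> 3] & (1 << (i & 7)))
```

Membership tests are O(k) with no per-k-mer pointer overhead, which is why the structure is compact but demands 4^k bits regardless of how many k-mers are actually trusted.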
k-mer based error correction programs are significantly affected by the cutoff separating true and error k-mers. Improvements in k-mer classification, such as the q-mer counting introduced by Quake, improve the accuracy of error correction. Coverage biases in second-generation sequencing technologies, which are largely inexplicable outside of the effect of local GC content, add to the difficulty. Further characterization of these biases would allow better modeling of k-mer coverage and better classification of k-mers as true or error. In more repetitive genomes, the probability increases that a k-mer that is an artifact of an error actually does occur in the genome. Such k-mers are not really misclassified, but may cause Quake to ignore a sequencing error. To improve error correction in these cases, the local context of the k-mer in the sequencing reads must be taken into account. Though this was done for Sanger read error correction [15–17], it is not currently computationally feasible for high throughput datasets containing many more reads.
Quake's model for sequencing errors takes into account substantial information about which types of substitution errors are more likely. We considered using Quake to re-estimate the probability of a sequencing error at each quality value before using the quality values for correction. Doing so is difficult because Quake detects many reads that have errors for which it cannot find a valid set of corrections and pinpoint the errors' locations. If Quake re-estimated quality value error probabilities without considering these reads, the error probabilities would be underestimated. Additionally, the benefit of re-estimation is minimal because quality values are mainly used to determine the order in which sets of corrections are considered. Alternatively, passing on more information from the base calling stage, such as the probability that each individual nucleotide is the correct one, would be very helpful. Quake's error model could be made more specific, the need to learn nucleotide specific error rates would be alleviated, and more accurate error correction could be expected.
Quake detects and corrects errors in sequencing reads by using k-mer coverage to differentiate k-mers trusted to be in the genome and k-mers that are untrustworthy artifacts of sequencing errors. For reads with untrusted k-mers, Quake uses the pattern of trusted and untrusted k-mers to localize the errors and searches for the set of corrections with maximum likelihood that make all k-mers trusted. The likelihood of a set of corrections to a read is defined by a probabilistic model of sequencing errors incorporating the read's quality values as well as the rates at which nucleotides are miscalled as different nucleotides. Correction proceeds by examining changes to the read in order of decreasing likelihood until a set of changes making all k-mers trusted is discovered and found to be sufficiently unambiguous.
For an approximately 5 Mbp genome such as E. coli, we set k to 15, and for the approximately 3 Gbp human genome, we set k to 19 (rounding down for computational reasons). For the human genome, counting all 19-mers in the reads is not a trivial task, requiring >100 GB of RAM to store the k-mers and counts, many of which are artifacts of sequencing errors. Instead of executing this computation on a single large memory machine, we harnessed the power of many small memory machines working in parallel on different batches of reads. We execute the analysis using Hadoop to monitor the workflow, and also to sum together the partial counts computed on individual machines using an extension of the MapReduce word counting algorithm. The Hadoop cluster used in these experiments contains 10 nodes, each with a dual-core 3.2 GHz Intel Xeon processor, 4 GB of RAM, and 367 GB of local disk (20 cores, 40 GB RAM, 3.6 TB local disk in total).
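The distributed counting step follows the classic MapReduce word-count pattern: each machine emits partial k-mer counts for its batch of reads, and the partial counts are summed. A minimal single-machine sketch of that logic (not the actual Hadoop job):

```python
from collections import Counter

def map_count(read_batch, k):
    """Mapper: partial k-mer counts for one batch of reads."""
    partial = Counter()
    for read in read_batch:
        for i in range(len(read) - k + 1):
            partial[read[i:i + k]] += 1
    return partial

def reduce_counts(partials):
    """Reducer: sum the partial counts from every batch."""
    total = Counter()
    for partial in partials:
        total.update(partial)
    return total
```

Because addition of counts is associative and commutative, batches can be processed in any order on any machine and merged afterward, which is exactly what makes the computation parallelizable.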
In order to better differentiate true k-mers and error k-mers, we incorporate the quality values into k-mer counting. The number of appearances of low coverage true k-mers and high copy error k-mers may be similar, but we expect the error k-mers to have lower quality base calls. Rather than increment a k-mer's coverage by one for every occurrence, we increment it by the product of the probabilities that the base calls in the k-mer are correct as defined by the quality values. We refer to this process as q-mer counting. q-mer counts approximate the expected coverage of a k-mer over the error distribution specified by the read's quality values. By counting q-mers, we are able to better differentiate between true k-mers that were sequenced to low coverage and error k-mers that occurred multiple times due to bias or repetitive sequence.
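A minimal sketch of q-mer counting, assuming Phred qualities so that P(correct) = 1 - 10^(-Q/10):

```python
def qmer_counts(reads, quals, k):
    """Weight each k-mer occurrence by the probability that all k of
    its base calls are correct, rather than incrementing by 1."""
    counts = {}
    for read, qual in zip(reads, quals):
        p_ok = [1 - 10 ** (-q / 10) for q in qual]
        for i in range(len(read) - k + 1):
            weight = 1.0
            for p in p_ok[i:i + k]:
                weight *= p
            kmer = read[i:i + k]
            counts[kmer] = counts.get(kmer, 0.0) + weight
    return counts
```

An occurrence made entirely of quality-40 calls contributes nearly a full count, while the same k-mer composed of quality-10 calls contributes only 0.9^k, so erroneous low-quality occurrences are discounted.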
In theory, the true k-mer coverage distribution should be Poisson, but Illumina sequencing has biases that add variance. Instead, we model true k-mer coverage as Gaussian to allow a free parameter for the variance. k-mers that occur multiple times in the genome due to repetitive sequence and duplications also complicate the distribution. We found that k-mer copy number in various genomes has a 'heavy tail' (meaning the tail of the distribution is not exponentially bounded) that is approximated well by the Zeta distribution, which has a single shape parameter. Our full model for true k-mer coverage is to sample a copy number from a Zeta distribution, and then sample a coverage from a Gaussian distribution with mean and variance proportional to the chosen copy number.
The error k-mer coverage distribution has been previously modeled as Poisson. In data we examined, this distribution also has a heavy tail, which could plausibly be explained if certain sequence motifs were more prone to errors than others due to sequence composition or other variables of the sequencing process. Additionally, by counting q-mers, we have real values rather than the integers that Poisson models. We examined a few options and chose the Gamma distribution with free shape and scale parameters to model error q-mer counts.
Finally, we include a mixture parameter to determine which of the two distributions a k-mer coverage will be sampled from. We fit the parameters of this mixture model by maximizing the likelihood function over the q-mer counts using the BFGS algorithm, implemented as the optim function in the statistical language R. Figure 3 shows an example fit to 76× coverage of E. coli. Using the optimized model, we compute the likelihood ratio of error k-mer to true k-mer at various coverages and set the cutoff to correspond to the appropriate ratio.
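With fitted parameters in hand, the cutoff can be found by scanning coverage values for the point where the likelihood ratio of the error model to the true model drops below the chosen ratio. The sketch below simplifies the true-k-mer model to a single Gaussian (omitting the Zeta copy-number tail) and uses invented parameter values purely for illustration:

```python
import math

def gamma_pdf(x, shape, scale):
    """Gamma density (error q-mer count model)."""
    return (x ** (shape - 1) * math.exp(-x / scale)
            / (math.gamma(shape) * scale ** shape))

def norm_pdf(x, mu, sigma):
    """Gaussian density (single-copy true k-mer coverage model)."""
    return (math.exp(-0.5 * ((x - mu) / sigma) ** 2)
            / (sigma * math.sqrt(2 * math.pi)))

def coverage_cutoff(pi_err, shape, scale, mu, sigma, ratio=1.0):
    """Smallest coverage where the error/true likelihood ratio falls
    below `ratio`; q-mer counts above it are trusted."""
    x = 0.1
    while x < mu:
        lr = (pi_err * gamma_pdf(x, shape, scale)
              / ((1 - pi_err) * norm_pdf(x, mu, sigma)))
        if lr < ratio:
            return x
        x += 0.1
    return x
```

Because q-mer counts are real-valued, the returned cutoff need not be an integer, which is the fine-grained behavior described above.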
A few more complications are worth noting. If the untrusted k-mers reach the edge of the read, there may be more sequencing errors at the edge, so we must extend the region to the edge, as in Figure 4b. In this case and in the case of multiple nearby sequencing errors, we may also benefit from considering every base covered by the right-most trusted k-mer and left-most trusted k-mer to be correct, and trimming the region as in Figure 4c. Because this heuristic is sensitive to misclassified k-mers, we first try to correct in the region shown in Figure 4c, but if no valid set of corrections is found, we try again with the larger region in Figure 4b. Finally, in longer reads we often see clusters of untrusted k-mers that do not overlap. We perform this localizing procedure and correction on each of these clusters separately. Altogether, these heuristics for localizing the error in a read vastly decrease the runtime of the algorithm compared to considering corrections across the entire read.
Sequencing error probability model
Modeling sequencing errors with E allows for biases in base substitution that are known to exist for the Illumina platform. For example, one study found that A to C was the most frequent error, likely because A and C are detected by one laser while G and T are detected by another. Making the substitution distribution conditional on the quality value allows this substitution bias to vary at different qualities, which has been observed for Sanger sequencing and which we observe here for Illumina (see Figure 2). Although some work has modeled error distributions conditionally on the position of the nucleotide in the read, we assume that quality values capture this sequencing-cycle effect. Recent base-calling algorithms explicitly incorporate this effect on fluorescence intensity measurements and generate quality values that satisfy our assumption [27–31].
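The quality- and nucleotide-conditional model can be written down compactly. In this sketch, `E` is a hypothetical quality-indexed substitution matrix that would in practice be estimated from the data; the function names are ours, not Quake's:

```python
BASES = "ACGT"

def phred_to_prob(q):
    """Convert a Phred quality score to an error probability."""
    return 10 ** (-q / 10)

def observe_prob(actual, observed, q, E):
    """P(observed base | actual base, quality q).
    E[q] is an assumed 4x4 substitution matrix (zero diagonal, rows
    summing to 1) giving, conditional on quality q, the probability
    that a miscalled `actual` base is reported as each other base."""
    p_err = phred_to_prob(q)
    if observed == actual:
        return 1 - p_err
    return p_err * E[q][BASES.index(actual)][BASES.index(observed)]
```

Conditioning E on q is what lets the A-to-C style bias strengthen or fade across quality values, as described above.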
Once we can assign a likelihood to a set of corrections and localize the error(s) to a specific region of the read, we must search for the maximum-likelihood set of corrections such that all k-mers in the corrected read are trusted. We refer to a set of corrections as valid if all resulting k-mers are trusted. To limit the search space, we consider only sets of corrections for which the ratio of the likelihood of the corrected read to that of the original read is above a fixed threshold (default 10^-6).
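This search can be sketched as a best-first exploration over per-position substitutions, pruned by the likelihood-ratio threshold. Here `is_trusted` and `base_probs` are assumed inputs, and the whole function is an illustrative simplification of Quake's search, not its implementation:

```python
import heapq
import math

def correct_region(read, region, k, is_trusted, base_probs, ratio=1e-6):
    """Best-first search for the highest-likelihood corrected read whose
    k-mers are all trusted, substituting only within `region`.
    base_probs[i][b] = assumed probability the true base at i is b;
    is_trusted(kmer) -> bool. Returns the corrected read or None."""
    start, end = region

    def valid(seq):
        return all(is_trusted(seq[i:i + k]) for i in range(len(seq) - k + 1))

    # Heap keyed on the negative log likelihood ratio vs. the original
    # read, so the most likely candidate is always expanded first.
    heap = [(0.0, start, read)]
    while heap:
        nllr, pos, seq = heapq.heappop(heap)
        if math.exp(-nllr) < ratio:
            continue            # prune: too unlikely relative to original
        if valid(seq):
            return seq          # first valid pop is the most likely one
        if pos >= end:
            continue
        obs = read[pos]
        for b in "ACGT":
            delta = math.log(base_probs[pos][obs]) - math.log(base_probs[pos][b])
            child = seq if b == obs else seq[:pos] + b + seq[pos + 1:]
            heapq.heappush(heap, (nllr + delta, pos + 1, child))
    return None
```

Because candidates are popped in likelihood order, the first valid set found is also the best one, which is what makes the threshold pruning safe.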
In practice, we make a few adjustments to this procedure. Reads from repeats may have multiple sets of valid corrections separated by a small likelihood difference, making the true correction ambiguous. Therefore, we continue past the point of finding a valid set of corrections to ensure that another valid set does not exist within a certain likelihood threshold (default 0.1). As described, the algorithm would devote the large majority of its computational effort to the lowest-quality reads, which have many potential sets of corrections to consider. In order to balance correction sensitivity with speed, we pre-screen the error region and immediately abandon a read if its error region is filled with low-quality base calls. More specifically, in our experiments we found that regions containing ≥13 positions with a probability of error >1% were difficult or impossible to correct quickly, and these reads are abandoned without further effort. For regions containing ≥9 such positions, we increase the likelihood ratio threshold to 10^-3 so that only a limited number of corrections is considered before giving up.
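The pre-screening rule reduces to a simple count of low-quality positions. The function below is a sketch using the thresholds quoted in the text; its name and return convention are ours:

```python
def prescreen(quals, region, hard=13, soft=9):
    """Count positions in `region` with error probability >1%
    (Phred quality below 20) and decide how to proceed:
    abandon hopeless reads, tighten the likelihood-ratio threshold
    for borderline ones, use the default otherwise."""
    lo, hi = region
    low_q = sum(1 for q in quals[lo:hi] if 10 ** (-q / 10) > 0.01)
    if low_q >= hard:
        return "abandon", None
    if low_q >= soft:
        return "correct", 1e-3   # stricter threshold: give up sooner
    return "correct", 1e-6       # default likelihood-ratio threshold
```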
In order to run Quake on very large datasets (for example, containing billions of reads), we must be able to determine very quickly whether a set of corrections makes all k-mers trusted. We accomplish this by mapping all 4^k k-mers to an index in a bit array that is set to one if the k-mer is trusted and zero otherwise. For 15-mers this bit array uses just 128 MB of space, while it requires 32 GB for 19-mers, which are needed for larger genomes. If memory usage must be reduced, a Bloom filter could be used to hash the trusted k-mers in <4 GB at the expense of occasional false positive queries.
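The bit-array membership structure can be sketched directly: encoding each base in 2 bits maps a k-mer to a unique index among the 4^k possibilities. The class below is an illustration of the idea, not Quake's data structure:

```python
class KmerBitArray:
    """One bit per possible k-mer, indexed by a 2-bit-per-base
    encoding: set means trusted. For k = 15 the array occupies
    4^15 bits = 128 MB, matching the figure in the text."""
    CODE = {"A": 0, "C": 1, "G": 2, "T": 3}

    def __init__(self, k):
        self.k = k
        self.bits = bytearray((4 ** k + 7) // 8)

    def index(self, kmer):
        """Pack the k-mer into an integer, 2 bits per base."""
        i = 0
        for base in kmer:
            i = (i << 2) | self.CODE[base]
        return i

    def add(self, kmer):
        i = self.index(kmer)
        self.bits[i >> 3] |= 1 << (i & 7)

    def __contains__(self, kmer):
        i = self.index(kmer)
        return bool(self.bits[i >> 3] & (1 << (i & 7)))
```

Each trust query is a constant-time bit test, which is what keeps the validity check cheap inside the correction search.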
SNP: single nucleotide polymorphism.
Thanks to Gene Robinson at the University of Illinois for permission to use preliminary sequence data from Megachile rotundata. Thanks to Héctor Corrada Bravo and Darryl Reeves for helpful discussion and feedback.
This work was supported by NIH grants R01-LM006845 and R01-GM083873.
- Shendure J, Ji H: Next-generation DNA sequencing. Nat Biotechnol. 2008, 26: 1135-1145. 10.1038/nbt1486.
- Hawkins R, Hon G, Ren B: Next-generation genomics: an integrative approach. Nat Rev Genet. 2010, 11: 476-486.
- Siva N: 1000 Genomes project. Nat Biotechnol. 2008, 26: 256.
- Haussler D, O'Brien S, Ryder O, Barker F, Clamp M, Crawford A, Hanner R, Hanotte O, Johnson W, McGuire J, Miller W, Murphy R, Murphy W, Sheldon F, Sinervo B, Venkatesh B, Wiley E, Allendorf F, Amato G, Baker C, Bauer A, Beja-Pereira A, Bermingham E, Bernardi G, Bonvicino C, Brenner S, Burke T, Cracraft J, Diekhans M, Edwards S, et al: Genome 10K: a proposal to obtain whole-genome sequence for 10,000 vertebrate species. J Hered. 2009, 100: 659-674. 10.1093/jhered/esp086.
- Mardis E: Cancer genomics identifies determinants of tumor biology. Genome Biol. 2010, 11: 211. 10.1186/gb-2010-11-5-211.
- Robison K: Application of second-generation sequencing to cancer genomics. Brief Bioinform. 2010, 11: 524-534. 10.1093/bib/bbq013.
- Palmer L, Dejori M, Bolanos R, Fasulo D: Improving de novo sequence assembly using machine learning and comparative genomics for overlap correction. BMC Bioinformatics. 2010, 11: 33. 10.1186/1471-2105-11-33.
- Trapnell C, Salzberg S: How to map billions of short reads onto genomes. Nat Biotechnol. 2009, 27: 455-457. 10.1038/nbt0509-455.
- Pevzner P, Tang H, Waterman M: An Eulerian path approach to DNA fragment assembly. Proc Natl Acad Sci USA. 2001, 98: 9748. 10.1073/pnas.171285098.
- Chaisson M, Brinza D, Pevzner P: De novo fragment assembly with short mate-paired reads: does the read length matter? Genome Res. 2009, 19: 336. 10.1101/gr.079053.108.
- Butler J, MacCallum I, Kleber M, Shlyakhter IA, Belmonte MK, Lander ES, Nusbaum C, Jaffe DB: ALLPATHS: de novo assembly of whole-genome shotgun microreads. Genome Res. 2008, 18: 810-820. 10.1101/gr.7337908.
- Shi H, Schmidt B, Liu W, Muller-Wittig W: A parallel algorithm for error correction in high-throughput short-read data on CUDA-enabled graphics hardware. J Comput Biol. 2010, 17: 603-615. 10.1089/cmb.2009.0062.
- Li R, Zhu H, Ruan J, Qian W, Fang X, Shi Z, Li Y, Li S, Shan G, Kristiansen K, Li S, Yang H, Wang J, Wang J: De novo assembly of human genomes with massively parallel short read sequencing. Genome Res. 2010, 20: 265-272. 10.1101/gr.097261.109.
- Yang X, Dorman K, Aluru S: Reptile: representative tiling for short read error correction. Bioinformatics. 2010, 26: 2526-2533. 10.1093/bioinformatics/btq468.
- Gajer P, Schatz M, Salzberg S: Automated correction of genome sequence errors. Nucleic Acids Res. 2004, 32: 562. 10.1093/nar/gkh216.
- Batzoglou S, Jaffe D, Stanley K, Butler J, Gnerre S, Mauceli E, Berger B, Mesirov J, Lander E: ARACHNE: a whole-genome shotgun assembler. Genome Res. 2002, 12: 177. 10.1101/gr.208902.
- Tammi MT, Arner E, Kindlund E, Andersson B: Correcting errors in shotgun sequences. Nucleic Acids Res. 2003, 31: 4663-4672. 10.1093/nar/gkg653.
- Schröder J, Schröder H, Puglisi SJ, Sinha R, Schmidt B: SHREC: a short-read error correction method. Bioinformatics. 2009, 25: 2157-2163. 10.1093/bioinformatics/btp379.
- Salmela L: Correction of sequencing errors in a mixed set of reads. Bioinformatics. 2010, 26: 1284. 10.1093/bioinformatics/btq151.
- Zerbino D, Birney E: Velvet: algorithms for de novo short read assembly using de Bruijn graphs. Genome Res. 2008, 18: 821. 10.1101/gr.074492.107.
- Simpson J, Wong K, Jackman S, Schein J, Jones S, Birol I: ABySS: a parallel assembler for short read sequence data. Genome Res. 2009, 19: 1117. 10.1101/gr.089532.108.
- Qu W, Hashimoto S, Morishita S: Efficient frequency-based de novo short-read clustering for error trimming in next-generation sequencing. Genome Res. 2009, 19: 1309-1315. 10.1101/gr.089151.108.
- Wijaya E, Frith M, Suzuki Y, Horton P: Recount: expectation maximization based error correction tool for next generation sequencing data. Genome Inform. 2009, 23: 189-201.
- Zagordi O, Geyrhofer L, Roth V, Beerenwinkel N: Deep sequencing of a genetically heterogeneous sample: local haplotype reconstruction and read error correction. J Comput Biol. 2010, 17: 417-428. 10.1089/cmb.2009.0164.
- Quince C, Lanzen A, Curtis T, Davenport R, Hall N, Head I, Read L, Sloan W: Accurate determination of microbial diversity from 454 pyrosequencing data. Nat Methods. 2009, 6: 639-641. 10.1038/nmeth.1361.
- Dohm JC, Lottaz C, Borodina T, Himmelbauer H: Substantial biases in ultra-short read data sets from high-throughput DNA sequencing. Nucleic Acids Res. 2008, 36: e105. 10.1093/nar/gkn425.
- Bravo H, Irizarry R: Model-based quality assessment and base-calling for second-generation sequencing data. Biometrics. 2010, 66: 665-674. 10.1111/j.1541-0420.2009.01353.x.
- Kao W, Stevens K, Song Y: BayesCall: a model-based base-calling algorithm for high-throughput short-read sequencing. Genome Res. 2009, 19: 1884. 10.1101/gr.095299.109.
- Erlich Y, Mitra P, de la Bastide M, McCombie W, Hannon G: Alta-Cyclic: a self-optimizing base caller for next-generation sequencing. Nat Methods. 2008, 5: 679. 10.1038/nmeth.1230.
- Kircher M, Stenzel U, Kelso J: Improved base calling for the Illumina Genome Analyzer using machine learning strategies. Genome Biol. 2009, 10: R83. 10.1186/gb-2009-10-8-r83.
- Rougemont J, Amzallag A, Iseli C, Farinelli L, Xenarios I, Naef F: Probabilistic base calling of Solexa sequencing data. BMC Bioinformatics. 2008, 9: 431. 10.1186/1471-2105-9-431.
- Quake. [http://www.cbcb.umd.edu/software/quake]
- Perl Artistic License. [http://www.perl.com/pub/a/language/misc/Artistic.html]
- Li H, Ruan J, Durbin R: Mapping short DNA sequencing reads and calling variants using mapping quality scores. Genome Res. 2008, 18: 1851. 10.1101/gr.078212.108.
- Cordaux R, Batzer M: The impact of retrotransposons on human genome evolution. Nat Rev Genet. 2009, 10: 691-703. 10.1038/nrg2640.
- Myers E: Toward simplifying and accurately formulating fragment assembly. J Comput Biol. 1995, 2: 275-290. 10.1089/cmb.1995.2.275.
- Kurtz S, Phillippy A, Delcher A, Smoot M, Shumway M, Antonescu C, Salzberg S: Versatile and open software for comparing large genomes. Genome Biol. 2004, 5: R12. 10.1186/gb-2004-5-2-r12.
- Pleasance ED, Cheetham RK, Stephens PJ, McBride DJ, Humphray SJ, Greenman CD, Varela I, Lin MLL, Ordonez GR, Bignell GR, Ye K, Alipaz J, Bauer MJ, Beare D, Butler A, Carter RJ, Chen L, Cox AJ, Edkins S, Kokko-Gonzales PI, Gormley NA, Grocock RJ, Haudenschild CD, Hims MM, James T, Jia M, Kingsbury Z, Leroy C, Marshall J, Menzies A, et al: A comprehensive catalogue of somatic mutations from a human cancer genome. Nature. 2010, 463: 191-196. 10.1038/nature08658.
- Kendal W: An exponential dispersion model for the distribution of human single nucleotide polymorphisms. Mol Biol Evol. 2003, 20: 579. 10.1093/molbev/msg057.
- Langmead B, Trapnell C, Pop M, Salzberg S: Ultrafast and memory-efficient alignment of short DNA sequences to the human genome. Genome Biol. 2009, 10: R25. 10.1186/gb-2009-10-3-r25.
- Li H, Handsaker B, Wysoker A, Fennell T, Ruan J, Homer N, Marth G, Abecasis G, Durbin R, 1000 Genome Project Data Processing Subgroup: The Sequence Alignment/Map format and SAMtools. Bioinformatics. 2009, 25: 2078. 10.1093/bioinformatics/btp352.
- Ahn SM, Kim TH, Lee S, Kim D, Ghang H, Kim DS, Kim BC, Kim SY, Kim WY, Kim C, Park D, Lee YS, Kim S, Reja R, Jho S, Kim CG, Cha JY, Kim KH, Lee B, Bhak J, Kim SJ: The first Korean genome sequence and analysis: full genome sequencing for a socio-ethnic group. Genome Res. 2009, 19: 1622-1629. 10.1101/gr.092197.109.
- Hadoop. [http://hadoop.apache.org]
- Dagum L, Menon R: OpenMP: an industry-standard API for shared-memory programming. IEEE Comput Sci Eng. 1998, 5: 46-55. 10.1109/99.660313.
- Dean J, Ghemawat S: MapReduce: simplified data processing on large clusters. Commun ACM. 2008, 51: 107-113. 10.1145/1327452.1327492.
- Chin F, Leung H, Li W, Yiu S: Finding optimal threshold for correction error reads in DNA assembling. BMC Bioinformatics. 2009, 10 (Suppl 1): S15. 10.1186/1471-2105-10-S1-S15.
- Johnson N, Kemp A, Kotz S: Univariate Discrete Distributions. 2005, New York: Wiley-Interscience.
- R: A Language and Environment for Statistical Computing. [http://www.R-project.org]
- Li M, Nordborg M, Li LM: Adjust quality scores from alignment and improve sequencing accuracy. Nucleic Acids Res. 2004, 32: 5183-5191. 10.1093/nar/gkh850.
- Li R, Li Y, Fang X, Yang H, Wang J, Kristiansen K, Wang J: SNP detection for massively parallel whole-genome resequencing. Genome Res. 2009, 19: 1124. 10.1101/gr.088013.108.
- Hastie T, Tibshirani R, Friedman J: The Elements of Statistical Learning. 2009, Berlin: Springer.
This article is published under license to BioMed Central Ltd. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.