Compact hash table
The hash table used by Kraken 2 to store minimizer/LCA key-value pairs is very similar to a traditional hash table that would use linear probing for collision resolution, with some modifications. Kraken 2’s compact hash table (CHT) uses a fixed-size array of 32-bit hash cells to store key-value pairs. Within a cell, the number of bits used to store the value of the key-value pair will vary depending on the number of bits needed to represent all unique taxonomy ID numbers found in the reference sequence library; this was 17 bits with the standard Kraken 2 database in September 2018. The value is stored in the least significant bits of the hash cell and must be a positive integer. Values of 0 represent empty cells. Within the remaining bits of the hash cell, the most significant bits of the key’s hash code (a compact hash code) are stored. Searching for a key K in the CHT is done by computing the hash code of the key h(K) then linearly scanning the table array starting at position h(K) mod |T| (where |T| is the number of cells in the array) for a matching key. Examples of this search process—including both key/value insertion and querying—are shown in Additional file 2: Figure S3. In Kraken 2, the hash function h used is the finalization function from MurmurHash3 [26].
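To make the layout concrete, the following is a minimal sketch (not Kraken 2’s actual code) of a 32-bit cell with a 17-bit value field and the linear-probe lookup described above; the MurmurHash3 finalization function uses its published constants, while the structure and all other names are illustrative assumptions.

```cpp
#include <cstdint>
#include <vector>

// MurmurHash3's 64-bit finalization function, used as h() in the text.
uint64_t murmur3_finalizer(uint64_t key) {
  key ^= key >> 33;
  key *= 0xff51afd7ed558ccdULL;
  key ^= key >> 33;
  key *= 0xc4ceb9fe1a85ec53ULL;
  key ^= key >> 33;
  return key;
}

// Sketch of a compact hash table: 32-bit cells, the low `value_bits` bits
// hold the taxon ID (0 = empty cell), and the remaining high bits hold a
// compact hash code (the most significant bits of the key's hash code).
struct CompactHashTable {
  std::vector<uint32_t> cells;   // |T| cells; assumed non-empty
  int value_bits = 17;           // e.g., 17 bits for the standard DB (2018)

  uint32_t value_of(uint32_t cell) const {
    return cell & ((1u << value_bits) - 1);
  }
  uint32_t compact_code(uint64_t hash) const {
    // Keep the most significant (32 - value_bits) bits of the hash code.
    return static_cast<uint32_t>(hash >> (64 - (32 - value_bits)));
  }

  // Returns the stored value for key, or 0 if no cell with a matching
  // compact hash code is found before an empty cell is reached.
  uint32_t get(uint64_t key) const {
    uint64_t h = murmur3_finalizer(key);
    size_t idx = h % cells.size();               // start at h(K) mod |T|
    uint32_t code = compact_code(h);
    for (size_t probes = 0; probes < cells.size(); ++probes) {
      uint32_t cell = cells[idx];
      if (value_of(cell) == 0) return 0;                       // empty: not found
      if ((cell >> value_bits) == code) return value_of(cell); // "matching" key
      idx = (idx + 1) % cells.size();                          // linear probing
    }
    return 0;
  }
};
```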
Compacting hash codes in this way allows Kraken 2 to use 32 bits for a key-value pair, a reduction compared to the 96 bits used by Kraken 1 (64 bits for key, 32 for value) (Fig. 1c). But it also creates a new way in which keys can “collide,” in turn impacting the accuracy of queries to the CHT. Two distinct keys can be treated as identical by the CHT if they share the same compact hash code and their starting search positions are close enough to cause a linear probe to encounter a stored matching compact hash code before an empty cell is found. This property gives the CHT its probabilistic nature, in that two types of false-positive query results are possible: either (a) a key that was not inserted can be reported as present in the table or (b) the values of two keys can be confused with each other. In Kraken 2, the former error is indeed a false positive, whereas the latter results in a less specific LCA being assigned to the minimizer (Additional file 2: Figure S3). The probability of either of these errors is < 1% with Kraken 2’s default load factor of 70% (Additional file 2: Figure S4). The adverse effect on read-level classification is further mitigated by the algorithm Kraken 2 uses to combine information from across the read, which is unchanged from Kraken 1 and utilizes information from all k-mers in a sequence to counteract low-frequency erroneous LCA values that could be returned by a key-value store.
The probabilistic nature and comparisons involving parts of a key’s hash code make the CHT similar to the counting quotient filter (CQF) described by Pandey et al. [24]. Like the CQF, Kraken 2’s CHT features high locality of memory access during an individual query due to the linear probing that the CHT employs. Unlike the CQF, however, our CHT does not allow the full hash code to be recovered from a stored value (the CQF’s remainder), and so we are unable to resize a CHT once it is instantiated. Our CHT also admits a possibility of error that the CQF does not: two keys that do not share the same full hash code but do share a truncated hash code will be treated as identical. The CQF can avoid such “soft” hash collisions.
Internal taxonomy of a Kraken 2 database
While Kraken 1 used the taxonomy provided by the user without modification, Kraken 2 makes some modifications to its internal representation of the taxonomy that cause that representation to differ from the user-provided taxonomy. First, Kraken 2 finds a minimal set of nodes in the user-provided taxonomy. This minimal set consists of all nodes to which a reference sequence is assigned, as well as all of those nodes’ ancestors; edges between nodes in this set remain as they were in the user-provided taxonomy, maintaining the tree structure in the internal representation. Kraken 2 then assigns the nodes in the minimal set sequentially increasing internal taxonomy ID numbers using a breadth-first search (BFS) beginning at the root, with the root having an internal ID number of 1. This BFS guarantees that ancestor nodes will have smaller internal ID numbers than their descendants; an example of this numbering is shown in Additional file 2: Figure S3. Kraken 2 stores a mapping of its internal taxonomy ID numbers to the external taxonomy ID numbers to make its results more easily interpretable, and performs all output using the external taxonomy ID numbers.
Kraken 2’s use of this internal taxonomy representation allows for easier computation of the LCA of two nodes, because the ID numbers themselves give information about the nodes’ relative depths in the tree, while the National Center for Biotechnology Information (NCBI) taxonomy IDs lack this property. The internal taxonomy representation also allows Kraken 2 to use the minimal number of bits to store taxonomy ID numbers, giving maximal space to the compact hash codes and reducing the probability of CHT errors (or “hash table collisions,” as we describe elsewhere in this paper).
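A minimal sketch of these two ideas (BFS numbering, and LCA computation exploiting the fact that every ancestor has a smaller internal ID) is shown below; the container choices and names are illustrative assumptions rather than Kraken 2’s own code.

```cpp
#include <cstdint>
#include <queue>
#include <unordered_map>
#include <vector>

// Assign internal IDs 1, 2, 3, ... by BFS from the root, so that every
// ancestor receives a smaller ID than all of its descendants.
std::unordered_map<uint64_t, uint64_t> AssignInternalIds(
    uint64_t root,
    const std::unordered_map<uint64_t, std::vector<uint64_t>>& children) {
  std::unordered_map<uint64_t, uint64_t> internal_id;
  std::queue<uint64_t> q;
  q.push(root);
  uint64_t next_id = 1;
  while (!q.empty()) {
    uint64_t node = q.front();
    q.pop();
    internal_id[node] = next_id++;
    auto it = children.find(node);
    if (it != children.end())
      for (uint64_t child : it->second) q.push(child);
  }
  return internal_id;
}

// LCA over internal IDs: parent[x] is the internal ID of x's parent
// (parent[1] == 0 for the root). Because ancestors always have smaller IDs,
// repeatedly walking the larger-numbered node up the tree converges on the LCA.
uint64_t Lca(uint64_t a, uint64_t b, const std::vector<uint64_t>& parent) {
  if (a == 0 || b == 0) return a ? a : b;  // 0 used as an "unset" sentinel
  while (a != b) {
    if (a > b) a = parent[a];
    else       b = parent[b];
  }
  return a;
}
```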
A Kraken 2 database consists of a CHT and this internal taxonomy representation. Typical databases will be built using the NCBI taxonomy [27], but users can override this default to create custom databases for atypical use cases.
Minimizer-based subsampling
In contrast to Kraken 1’s use of all k-mers in the standard use case, Kraken 2 subsamples the set of genomic substrings and inserts only the distinct minimizers into its database (Fig. 1b). We define the ℓ bp minimizer of a k-mer (ℓ ≤ k) to be the lexicographically smallest canonical ℓ-mer found within the k-mer. An ℓ-mer is called canonical if it is lexicographically less than or equal to its reverse complement. Note that if k = ℓ, no subsampling occurs and Kraken 2 inserts the same substrings into its data structure that Kraken 1 would. Additionally, as the difference between k and ℓ grows, fewer substrings are inserted into the CHT, reducing its size along with Kraken 2’s memory usage and runtime (Fig. 1d, Additional file 1: Table S2). The default values for Kraken 2, k = 35 and ℓ = 31, were determined after the analysis of the parameter sweep results we show in Additional file 1: Table S2.
Kraken 2 determines which ℓ-mers are minimizers by using a sliding window minimum algorithm, in contrast to Kraken 1’s implementation, which examined each k-mer anew. This allows for a faster determination of minimizers, as less work is required when moving from one k-mer to the next overlapping k-mer (in terms of computational complexity, the new approach uses an average of O(1) time to calculate a new minimizer vs. Θ(k) time with the older algorithm). The sliding window minimum calculation uses a double-ended queue (or “deque”) in which canonicalized candidate ℓ-mers are inserted at the back, along with the candidates’ positions in the original sequence. As a new candidate is encountered, enqueued candidates are removed from the back of the deque until the candidate at the back has a smaller value than the new candidate (as determined by lexicographical ordering). The new candidate is then pushed onto the back of the deque.
Once a k-mer’s worth of ℓ-mers has been processed in this way, the front of the deque contains the minimizer of that k-mer. This property is then maintained while scanning subsequent bases by removing the front element of the deque if it is from a position in the original sequence that is no longer within the current k-mer. In this way, the front element of the deque always holds the minimizer of the k-mer currently being examined.
We further augmented the sliding window algorithm to include the exclusive or (XOR) shuffling operation from Kraken 1. This operation serves to permute the ordering of the ℓ-mers when calculating minimizers and helps to avoid a bias toward low-complexity ℓ-mers when selecting the minimizer of a k-mer [4,15]. To shuffle, we calculate the XOR value of the ℓ-mer and a predefined constant and use this value as the “candidate” that is put in the deque. When the original ℓ-mer value is needed again, the operation is reversed by XORing a second time with the same constant.
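A sketch of the deque-based window minimum, including the XOR toggle, is given below. It operates on already-encoded canonical ℓ-mer values; the toggle constant and all names are illustrative assumptions, and details such as ambiguous bases and skipping duplicate minimizers are omitted.

```cpp
#include <cstddef>
#include <cstdint>
#include <deque>
#include <utility>
#include <vector>

// Sliding window minimum over canonical l-mers: given the encoded l-mers of
// a sequence in order, emit the minimizer of each k-mer (each window of
// k - l + 1 consecutive l-mers). The deque is kept increasing from front to
// back, so its front always holds the current window's minimum.
std::vector<uint64_t> WindowMinimizers(const std::vector<uint64_t>& lmers,
                                       size_t k, size_t l) {
  const uint64_t kToggle = 0xe37e28c4271b5a2dULL;  // illustrative XOR constant
  const size_t window = k - l + 1;                 // l-mers per k-mer
  std::deque<std::pair<uint64_t, size_t>> dq;      // (toggled value, position)
  std::vector<uint64_t> minimizers;

  for (size_t pos = 0; pos < lmers.size(); ++pos) {
    uint64_t candidate = lmers[pos] ^ kToggle;     // permute the ordering
    // Remove candidates from the back until the back is smaller than the
    // new candidate, then push the new candidate onto the back.
    while (!dq.empty() && dq.back().first >= candidate) dq.pop_back();
    dq.emplace_back(candidate, pos);
    // Drop the front if its position has fallen outside the current k-mer.
    if (dq.front().second + window <= pos) dq.pop_front();
    if (pos + 1 >= window)                         // a full k-mer has been seen
      minimizers.push_back(dq.front().first ^ kToggle);  // undo the toggle
  }
  return minimizers;
}
```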
Spaced seed usage
Spaced k-mers, a similar concept to spaced seeds, have been shown to improve the ability to classify reads within the Kraken framework [28]. Kraken 2 uses a simple spaced seed approach where a user specifies an integer s when building a database that indicates how many positions in the minimizer will be masked (i.e., not considered when searching). Beginning with the next-to-rightmost position in the minimizer, every other position is masked until s positions have been masked. For example, if s = 3 and ℓ = 12, the positions in the bit string 1111 1101 0101 with a “0” would be masked. When using Kraken 2, Kraken 1’s classification results can be most closely approximated by setting k = ℓ = 31 and s = 0, as these settings will avoid any minimizer-based subsampling and spaced seed usage. Kraken 2’s default value for s is 7 and was determined after the analysis of the parameter sweep results we show in Additional file 1: Table S2.
The canonical ℓ-mers that are minimizer candidates are masked with the spaced seed mask prior to their insertion into the deque for the sliding window calculation. By performing canonicalization of the minimizer candidates prior to applying the spaced seed mask, we ensure the result is the same whether applied to the ℓ-mer or its reverse complement.
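A sketch of the mask construction and its application follows, assuming a 2-bit-per-base encoding (so masking a base position clears two bits); the function names are illustrative assumptions.

```cpp
#include <cstdint>

// Build a spaced seed mask for an l-base minimizer encoded at 2 bits/base:
// starting at the next-to-rightmost base, every other base is masked until
// s bases have been masked. A set bit in the mask means "keep this bit".
// For s = 3 and l = 12, the per-base pattern is 1111 1101 0101, as in the text.
uint64_t SpacedSeedMask(int l, int s) {
  uint64_t mask = (1ULL << (2 * l)) - 1;      // keep all l bases (assumes l < 32)
  for (int i = 0; i < s; ++i) {
    int base_pos = 2 * i + 2;                 // base positions 2, 4, 6, ... from the right
    mask &= ~(3ULL << (2 * (base_pos - 1)));  // clear that base's two bits
  }
  return mask;
}

// Applied to the already-canonicalized l-mer, so an l-mer and its reverse
// complement yield the same masked value.
uint64_t ApplySpacedSeed(uint64_t canonical_lmer, uint64_t mask) {
  return canonical_lmer & mask;
}
```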
Kraken 1’s sensitivity was governed by the value of k (the length of the searched substring). By comparison, the use of spaced seeds and minimizer-based subsampling means that Kraken 2’s sensitivity will be largely governed by ℓ − s (the number of compared bases in Kraken 2’s searched substring). Thus, increasing s will generally increase sensitivity while decreasing positive predictive value (Fig. 1e, Additional file 1: Table S2).
Hash-based subsampling
Kraken 2 estimates the required capacity of the hash table given the k, ℓ, and s values chosen along with the sequence data in a database’s reference genomic library. Some users will not have access to large memory computers, and therefore, this estimate may be greater than the maximum possible hash table size that they can work with. To aid such users, Kraken 2 allows them to specify a maximum size when building a database. If the estimated required capacity is larger than the maximum requested size, then the minimizers will be subsampled further using a hash function. Given an estimated required capacity S′ and a maximum user-specified capacity of S (S < S′), we can calculate the value f = S/S′, which is the fraction of available minimizers that the user will be able to hold in their database. A minimum allowable hash value of v = (1 − f)∙M can also be calculated, where M is the maximum value output by hash function h. Any minimizer in the reference library with a hash code less than v will not be inserted into the hash table. This value v is also provided to the classifier so that only minimizers with hash codes greater than or equal to v will be used to probe the hash table, saving the search failure runtime penalty that would be incurred by searching for minimizers guaranteed not to be in the hash table.
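In code form, the threshold and the resulting filter might look like the following sketch, where M is taken to be the largest 64-bit value and the minimizer’s hash code is assumed to have been computed with the hash function h described earlier; the names are illustrative assumptions.

```cpp
#include <cstdint>
#include <limits>

// Minimum allowable hash value v = (1 - f) * M, where f = S / S' is the
// fraction of distinct minimizers the user has room to store and M is the
// maximum value the hash function can output (here, the largest uint64_t).
uint64_t MinimumAcceptableHash(double requested_capacity,   // S
                               double estimated_capacity) { // S'
  double f = requested_capacity / estimated_capacity;
  double M = static_cast<double>(std::numeric_limits<uint64_t>::max());
  return static_cast<uint64_t>((1.0 - f) * M);
}

// The same filter is applied at build time (skip insertion) and at
// classification time (skip the query): keep only hash codes >= v.
bool PassesSubsamplingFilter(uint64_t minimizer_hash_code, uint64_t v) {
  return minimizer_hash_code >= v;
}
```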
Evaluation of k-mer level discordance rates
At a k-mer level, there are two main types of discordance between Kraken 1’s and Kraken 2’s results: those caused by two distinct k-mers sharing the same minimizer (a “minimizer collision”) and those caused by two distinct minimizers being indistinguishable by the CHT (a “hash table collision”). Minimizer collisions are not always damaging. When such a collision occurs between k-mers from very closely related genomes, it might detect true homology even in the face of single nucleotide polymorphisms and/or sequencing error. That said, minimizer collisions between k-mers from distantly related genomes could produce either elevated LCA values (if both genomes are in the reference library) or incorrectly classified k-mers (if one of the genomes is not in the reference library). Hash table collisions are a consequence of the probabilistic nature of the CHT and can also cause either elevated LCA values or incorrectly classified k-mers (Additional file 2: Figure S3). We note that these different discordant results are all at a k-mer level and may not always affect a query sequence’s classification, because many k-mers’ worth of data are used to classify a query sequence; aside from slight modifications to handle the subsampling methods we use in Kraken 2, the classification method of Kraken 2 is identical to that of Kraken 1.
We wished to estimate the rate at which these collisions would cause discordance at a k-mer level between the Kraken 1 and Kraken 2 results. To do so, we selected a specific bacterial genome for which we had neighboring genomes at each taxonomic rank from species to phylum. The selected genome was our “reference sequence,” and eight others were progressively more taxonomically distant from the reference sequence. We list the nine genomes used in these experiments in Additional file 1: Table S6. We additionally created a synthetic genome with 4 Mbp of uniformly random DNA. Together, these ten sequences formed a set of “query sequences” and were the basis for our evaluation of collision rates. For these experiments, we used the default Kraken 2 values of k = 35, ℓ = 31, and s = 7, unless otherwise noted.
To determine the rates of discordance caused by minimizer collisions, we compared each of the ten query sequences’ k-mers to the set of reference sequence k-mers. For each sequence, the minimizer collision rate is the proportion of distinct k-mers in a query sequence that (a) are not in the set of reference sequence k-mers and (b) share a minimizer with a reference sequence k-mer. The various sequences’ minimizer collision rates are summarized in Additional file 1: Table S7. We hypothesized that the minimizer collision rate would be influenced by the length of the minimizer used, due to the length’s direct relationship to the number of possible minimizers. To test this, we repeated the minimizer collision rate estimation experiment focusing on the reference genome and using the random synthetic genome as the sole query sequence. Setting k = 35 and s = 0, we varied the ℓ parameter from 8 to 31. Minimizer lengths greater than 15 had collision rates under 1%. Minimizer lengths greater than 22 had 0 collisions. The full results are shown in Additional file 2: Figure S5.
To determine the rates of discordance caused by hash table collisions, we compared each of the ten query sequences’ minimizers to a CHT populated with the reference sequence minimizers. The CHT was created with a load factor of 70% and 15 bits reserved for the truncated hash code (the same parameters used in Kraken 2’s standard database in September 2018). For each sequence, the hash table collision rate is the proportion of distinct minimizers in a query sequence that (a) are not minimizers in the set of reference sequence minimizers and (b) are reported by the CHT as being inserted in the hash table. The various sequences’ hash table collision rates are summarized in Additional file 1: Table S8. To investigate the impact of load factor and truncated hash code size on hash table collision rates, we repeated the hash table collision rate experiment, but focused only on the reference genome and used the random synthetic genome as the sole query sequence. We used the same default values of k, ℓ, and s as before (35, 31, and 7, respectively) and calculated hash table collision rates while varying both the load factor and truncated hash code size. The impact of these two parameters on hash table collision rates is shown in Additional file 2: Figure S4. The parameters adopted for Kraken 2’s default mode had an error rate of 0.016%, consistent with the results seen when comparing genomes of different species (Additional file 1: Table S8).
Processing of a standard genomic reference library
The CHT’s modest memory requirements, and the additional savings yielded by minimizer-based subsampling, allow more reference genomic data to be included in Kraken 2’s standard reference library. Whereas Kraken 1’s default database had data from archaeal, bacterial, and viral genomes, Kraken 2’s default database additionally includes the GRCh38 assembly of the human genome [29] and the “UniVec_Core” subset of the UniVec database [30]. We include these in Kraken 2’s default database to allow for easier classification of human microbiome reads and more accurate classification of reads containing vector sequences.
Additionally, we have implemented masking of low-complexity regions of reference sequences in Kraken 2, using the “dustmasker” [31] (for nucleotide sequences) and “segmasker” [32] (for protein sequences) tools from NCBI. Using the tools’ default settings, nucleotide and protein sequences are checked for low-complexity regions, and any identified regions are masked and not processed further during Kraken 2’s database build. In this manner, we seek to reduce false positives resulting from these low-complexity sequences, similar to the build process for Centrifuge [1].
Populating the Kraken 2 hash table
Kraken 2 begins building a CHT by first estimating the number of distinct minimizers present in the reference library for the selected values of k, ℓ, and s. This is done through a form of zeroth frequency moment estimation [33], in which Kraken 2 creates a small set structure implemented with a traditional hash table. Into this set Q, we insert only the distinct minimizers that satisfy the criterion h(m) mod F < E, where h(m) is the hash code of the minimizer m and E ≪ F (in practice, Kraken 2 uses E = 4 and F = 1024). We then estimate the total number of distinct minimizers by multiplying the number of satisfying distinct minimizers (|Q|) by F/E. This form of estimation requires storing in memory only a fraction (approximately E/F) of all distinct minimizers and allows us to quickly set the capacity of our CHT properly without needing to first store all elements in it.
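A sketch of this estimator with the E and F values given above is shown below; the hash function is the h used elsewhere in this section, and the container and names are illustrative assumptions.

```cpp
#include <cstdint>
#include <unordered_set>
#include <vector>

uint64_t murmur3_finalizer(uint64_t key);  // h(), as in the CHT sketch above

// Estimate the number of distinct minimizers: keep only those whose hash
// code satisfies h(m) mod F < E, count the distinct survivors, and scale
// that count by F / E.
uint64_t EstimateDistinctMinimizers(const std::vector<uint64_t>& minimizers) {
  const uint64_t E = 4, F = 1024;
  std::unordered_set<uint64_t> q;                       // the set Q from the text
  for (uint64_t m : minimizers) {
    if (murmur3_finalizer(m) % F < E) q.insert(m);
  }
  return static_cast<uint64_t>(q.size()) * (F / E);     // |Q| * F / E
}
```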
After estimating the number of distinct minimizers D = |Q|(F/E) present in the reference library, Kraken 2 then allocates memory for a CHT containing D/0.7 hash table cells. We selected the divisor of 0.7 so that the resultant hash table will have approximately 30% of its cells remaining empty after the CHT is populated (i.e., the CHT will have a load factor of 70%). As stated earlier, the cells of this table are 32 bits each, so the total memory required for Kraken 2’s CHT is 32D/0.7 bits, or 4D/0.7 bytes.
Kraken 2 then proceeds to scan each genome in the reference library. Each genome must be associated with a taxonomic ID number so that Kraken 2 can calculate LCA values; genomes without associated taxonomy IDs are therefore not processed by Kraken 2. For a minimizer M in a genome G, Kraken 2 attempts to insert a key-value pair containing M (key) and the taxonomic ID T (value) associated with G into the CHT. If the CHT does not report that M was previously inserted, then the <M, T> key-value pair will be inserted, indicating that the LCA of M is currently T. If M was previously inserted into the CHT, with LCA value T*, then its associated LCA value is updated to equal the LCA of T and T*. All minimizers are processed in this way; once the reference library’s minimizers are all processed, the LCA values are properly set for each of the minimizers and the database build is complete. The LCA operation is both commutative and associative, facilitating parallel index construction.
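The build loop can be sketched as follows; an ordinary map stands in for the CHT to keep the example short, the LCA helper is the one sketched earlier, and all names are illustrative assumptions.

```cpp
#include <cstdint>
#include <unordered_map>
#include <vector>

uint64_t Lca(uint64_t a, uint64_t b, const std::vector<uint64_t>& parent);  // as sketched earlier

// For each minimizer of a genome, either insert <minimizer, taxid> or
// replace the stored value with LCA(stored value, taxid). An ordinary map
// stands in for the CHT in this sketch.
void AddGenomeMinimizers(const std::vector<uint64_t>& genome_minimizers,
                         uint64_t genome_taxid,
                         const std::vector<uint64_t>& parent,
                         std::unordered_map<uint64_t, uint64_t>* table) {
  for (uint64_t m : genome_minimizers) {
    auto it = table->find(m);
    if (it == table->end())
      (*table)[m] = genome_taxid;                            // first time seen: LCA is T
    else
      it->second = Lca(it->second, genome_taxid, parent);    // update to LCA(T, T*)
  }
}
```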
Classification of a sequence fragment with Kraken 2
Kraken 2 classifies sequence fragments similarly to Kraken 1, with modifications to accommodate minimizer- and hash-based subsampling. For each k-mer in an input sequence, Kraken 2 finds its minimizer and, if it is distinct from the previous k-mer’s minimizer, uses it as a key to probe the CHT. If the minimizer matches a key in the CHT, Kraken 2 considers the associated LCA value to be the k-mer’s LCA (Fig. 1b). Classification then proceeds in the same manner as in Kraken 1, taking note of how many k-mer hits mapped to each taxon, constructing a pruned classification tree, and using the leaf of the maximally scoring root-to-leaf path of that tree to classify the sequence [4]. If hash-based subsampling was used to build the CHT, each minimizer has its hash code compared against the table’s minimum allowable hash code, and minimizers with lower-than-allowed hash codes are not searched against the CHT. Any k-mer containing an ambiguous nucleotide code is also not searched against the CHT.
We note that although Kraken 2 only uses the minimizer to query the CHT, the LCA found via this query is assigned by Kraken 2 to the k-mer rather than only the minimizer. This means that a stretch of n overlapping k-mers that share a minimizer will all be assigned the same LCA value by Kraken 2 and that n hits to that LCA will be part of the classification tree, even though only one distinct minimizer was present among the k-mers.
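As a sketch of the final scoring step (which follows Kraken 1’s published algorithm [4]), each taxon with hits is scored by the total hits along its root-to-taxon path, and the best-scoring taxon is reported; score thresholds and tie-breaking are omitted, and the names are illustrative assumptions.

```cpp
#include <cstdint>
#include <unordered_map>
#include <vector>

// Given a fragment's per-taxon hit counts (internal taxon ID -> number of
// k-mers whose minimizer's LCA was that taxon), score each hit taxon by the
// total hits on its root-to-taxon path and return the best-scoring taxon.
uint64_t ClassifyFromHitCounts(const std::unordered_map<uint64_t, uint64_t>& hits,
                               const std::vector<uint64_t>& parent) {
  uint64_t best_taxon = 0, best_score = 0;
  for (const auto& kv : hits) {
    uint64_t taxon = kv.first, score = 0;
    // Sum hits over the taxon itself and all of its ancestors.
    for (uint64_t node = taxon; node != 0; node = parent[node]) {
      auto it = hits.find(node);
      if (it != hits.end()) score += it->second;
    }
    if (score > best_score) { best_score = score; best_taxon = taxon; }
  }
  return best_taxon;  // 0 means the fragment is left unclassified
}
```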
Parsing of input files
Previous work by Langmead et al. [17] has shown the importance of removing parsing work from critical sections, i.e., portions of the program that can be executed by only one thread at a time. Kraken 2 uses two different methods to defer the majority of parsing work from the critical section to thread-local execution. The first method (referred to as “batch deferred” parsing by Langmead et al.) reads a set number of lines (40,000 in Kraken 2) of input into a thread-local buffer within the critical section and then parses the input within a single thread’s execution. This method is used to read paired-end FASTQ input, where the lengths of a fragment’s mates can differ and reading a consistent number of lines from both input files is necessary to ensure a thread is working with complete mate pairs. For FASTA or single-end FASTQ input, Kraken 2 instead uses a more efficient method that reads a set number of bytes (3 MB in Kraken 2) of input into a thread-local buffer within the critical section and continues reading input into that buffer until a record boundary is found, at which point the thread leaves the critical section and parses its input. These modifications allow Kraken 2 to use multiple threads more efficiently than Kraken 1 did (Additional file 1: Table S4).
Translated search
To perform a translated search, Kraken 2X first builds a database from a set of reference proteins in the same manner that Kraken 2 does for nucleotide sequences. The usual alphabet of 20 amino acids is reduced to 15 using the 15-character alphabet of Solis [34]; we add a single additional value representing selenocysteine, pyrrolysine, and translation termination (stop codons). This gives us 16 characters in our reduced alphabet, allowing us to represent a character with 4 bits. Minimizers of reference proteins are calculated using the same methods as for nucleotide sequences (i.e., using spaced seeds if requested and a sliding window minimum algorithm), but reverse complements are not calculated and by default k = 15, ℓ = 12, and s = 0.
When searching against a protein minimizer database, Kraken 2X translates all six reading frames of the input query DNA sequences into the reduced amino acid alphabet. Minimizers from all six frames are pooled and used to query the CHT, and therefore, all contribute to the Kraken 2X classification of a query sequence.
Generation of data for strain exclusion experiments
We downloaded the reference genome and protein data used for the strain exclusion experiments from NCBI in January 2018 from the archaeal, bacterial, and viral domains. We also downloaded the taxonomy from NCBI at this same time. Using the taxonomy ID information for each sequence, we obtained a set of all taxonomy IDs represented by the reference genomes. From this set, we selected a subset of “eligible strains” that had both two sister subspecies taxa and two sister species taxa present in the set of reference genomes. We selected this subset by examining only those nucleotide sequences with the phrase “complete genome” in their FASTA record header but excluding those that were plasmids or second or third chromosomes. In this manner, we sought to ensure we did not count a genome multiple times due to multiple sequences being associated with that genome. From the eligible strain subset, 40 prokaryotic taxonomy IDs and 10 viral taxonomy IDs were selected arbitrarily to be the strains of origin for our experiments. The strains selected are listed in Additional file 1: Table S3.
After selecting the taxonomy IDs that represented the strains of origin, we gathered all of the nucleotide sequences we had downloaded—including chromosome and plasmid sequences excluded from our examination when creating the eligible strain subset—into a single file and did the same for the protein sequences. For both the nucleotide and protein files, we placed sequences with taxonomy IDs that were outside the strains of origin into a strain exclusion reference file. Then, for each taxonomy ID in our strain of origin set, we created a single “strain reference” file containing all nucleotide sequences that were associated with that taxonomy ID.
We used Mason 2 [35] to simulate 100-bp paired-end Illumina sequence data from our strains of origin, with 500,000 fragments being simulated from each strain. When simulating the reads, we used the default options for simulating sequencing errors with Mason 2’s “mason_simulator” command. These defaults introduce sequencing errors at rates of 0.4% for mismatches, 0.005% for insertions, and 0.005% for deletions. We combined simulated reads from the strains of origin into a single set of read data. We also shuffled the order of the fragments in this set to control for ordering effects that might affect runtime.
Execution of strain exclusion experiments
To evaluate the accuracy and computational performance of Kraken 2, we compared it to Kraken 1 and several other programs. In selecting these programs, we concentrated on two main properties. First, because Kraken’s principal aim is to provide high-speed taxonomic sequence classification, we looked for taxonomic sequence classification tools with high classification speed (within approximately an order of magnitude of Kraken 1). Second, because our experiments rely on holding the reference data fixed across programs, we selected tools that allow the underlying reference sequence set and taxonomy to be customized using whole-genome reference data. These two requirements led to our selection of KrakenUniq, CLARK, Centrifuge, and Kaiju as comparator programs. We note that these requirements exclude an accuracy evaluation against programs that are not taxonomic sequence classifiers (i.e., that do not output a mapping of individual sequences to taxa). Sequence abundance estimation programs (which map taxa to sequence counts or frequencies), such as Bracken, and population abundance estimation programs (which map taxa to organism counts or frequencies), such as MetaPhlAn [36], answer related but different problems from those addressed by our comparator set. For example, Bracken does not actually change any of the taxonomic labels associated with the sequenced fragments but rather adjusts the fragment counts associated with low-rank taxa. We also note that although MetaPhlAn does, as part of its operation, classify a small proportion of reads that map to marker genes, this proportion can be less than 10% of reads [6] in whole-genome shotgun metagenomic experiments (such as ours), and thus, MetaPhlAn would yield far lower per-sequence sensitivity relative to the tools in our comparison.
In brief, we used the nucleotide search-based classification programs (Kraken 1, KrakenUniq, Kraken 2, CLARK, and Centrifuge) to build a strain-exclusion database from reference genomes, and we used the translated search-based classification programs (Kraken 2X and Kaiju) to build a strain-exclusion database from reference protein sequences. We compared Kraken 2 and Kraken 2X (both using the code base from Kraken 2.0.8) against Kraken 1.1.1, KrakenUniq 0.5.6, CLARK 1.2.4, Centrifuge 1.0.3-beta, and Kaiju 1.5.0. Because CLARK requires a rank to be specified at the time of building a database, and our evaluations center on genus-rank accuracy, we built a CLARK database for the genus rank for our evaluation work in this paper.
Classifiers received the simulated read data as paired-end FASTQ input. To evaluate runtime and memory usage, we sought to eliminate the performance impact of reading or writing from disk or from a network storage location. To accomplish this, we copied simulated read data and classifier databases onto a random access memory (RAM) filesystem and directed the classifiers to read input from and write output to that RAM filesystem.
Accuracy was evaluated on a smaller subset of the simulated data containing 1000 fragments per genome of origin or 50,000 fragments in total. To obtain processing speed and memory usage information, we ran each classifier using 16 threads on 25 million sequences’ worth of simulated read data. We used the taskset command to restrict each classifier to the appropriate number of processors (e.g., “taskset -c 0-15” was used with our 16 thread experiments); this ensures that a classifier that uses an external process to aid in its execution has that process’ runtime properly counted against its runtime here. The “/usr/bin/time -v” command provided us with elapsed wall clock time and maximum resident set size data (memory usage) for each experiment and allowed us to verify that no major page faults were incurred by a classifier during its execution (the absence of which indicates minimal disk- or network-related input/output effects on the runtime). Classifiers were run on a computer with 32 Xeon 2.3 GHz CPUs (16 hyperthreaded cores) and 244 GB of RAM.
Evaluation of accuracy in strain exclusion experiments
We evaluated the accuracy of each classifier at a per-fragment level with respect to a particular taxonomic rank. Each fragment had a known true subspecies taxon of origin, which implied a true taxon of origin at both the species and genus ranks, the ranks at which we measured accuracy. We now describe how we counted true-positive (TP), false-negative (FN), vague positive (VP), and false-positive (FP) results at the genus and species levels. We describe this at the genus level specifically, but the analogous procedure was also used at the species level. For a given true genus of origin, a TP classification is a classification at that genus or at a descendant of that genus. Because we excluded the strains of origin from our reference databases, we expected all classifiers to make incorrect strain-level classifications, and so we allow classifications at descendants of the true genus to be judged as TP. We define an FN classification as a failure of a classifier to assign any classification to a sequence and a VP classification as a classification at an ancestor of the true genus of origin. Finally, we define an FP classification as a classification that is incorrect, that is, not at the true genus of origin nor at an ancestor or descendant of that genus. These four categories are mutually exclusive, and all fragments run through a classifier will have their classification (or lack thereof) fall into exactly one of these categories.
These categories are different from those typically used for binary classification problems; they are used here because these methods can make classifications that are not at leaves of the taxonomic tree but are still correct. For example, a classification of an Escherichia coli fragment as Escherichia would be evaluated as TP for genus-rank accuracy, but as VP for species-rank accuracy. Classification of that same fragment as Vibrio would be evaluated as FP at any rank below class (because the LCA of Vibrio and Escherichia is the class taxon Gammaproteobacteria) and would be evaluated as TP for the class rank and above.
Using these categories, we define rank-level sensitivity as the proportion of input fragments that were true-positive classifications, or TP/(TP + VP + FN + FP). We define rank-level positive predictive value (PPV) as the proportion of classifications that were true positives (excluding vague positives), or TP/(TP + FP). Along with these definitions of rank-level sensitivity and PPV, we also define an F1-measure as the harmonic mean of those two values.
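These definitions can be made concrete with a short sketch that categorizes a single fragment’s call and computes the rank-level metrics; the ancestor test assumes the internal-ID ordering described earlier, and all names are illustrative assumptions.

```cpp
#include <cstdint>
#include <vector>

// True iff `a` is `b` itself or an ancestor of `b`, walking parent pointers
// over internal IDs (ancestors have smaller IDs, as described earlier).
bool IsAncestorOrSelf(uint64_t a, uint64_t b, const std::vector<uint64_t>& parent) {
  while (b > a && b != 0) b = parent[b];
  return a == b;
}

enum class Outcome { TP, VP, FN, FP };

// Categorize one fragment's call against the true taxon at the rank of
// interest (e.g., the true genus): TP = call at that taxon or a descendant,
// VP = call at a strict ancestor, FN = no call, FP = anything else.
Outcome Categorize(uint64_t call, uint64_t true_taxon,
                   const std::vector<uint64_t>& parent) {
  if (call == 0) return Outcome::FN;
  if (IsAncestorOrSelf(true_taxon, call, parent)) return Outcome::TP;
  if (IsAncestorOrSelf(call, true_taxon, parent)) return Outcome::VP;
  return Outcome::FP;
}

// Rank-level sensitivity, PPV, and F1 from the four counts.
struct Metrics { double sensitivity, ppv, f1; };
Metrics ComputeMetrics(double tp, double vp, double fn, double fp) {
  Metrics m;
  m.sensitivity = tp / (tp + vp + fn + fp);
  m.ppv = tp / (tp + fp);
  m.f1 = 2.0 * m.sensitivity * m.ppv / (m.sensitivity + m.ppv);
  return m;
}
```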
Evaluation of thread scaling efficiency
To evaluate Kraken 1’s and Kraken 2’s ability to efficiently use multiple threads, we performed an experiment using the strain exclusion databases and simulated read data described previously in this section. We ran both Kraken 1 and Kraken 2 on the same data using 1, 4, and 16 threads. The two programs were each run once with the data treated as paired-end reads and once with the data treated as single-end reads. Read data and Kraken database files were all placed on a RAM filesystem, and the “taskset” command was used to limit the classifier programs to only as many cores as the number of threads being used. These conditions mirror those of our main strain exclusion experiments, with only the number of threads varying between the runs of the classifiers. The results for this experiment are shown in Additional file 1: Table S4. In short, Kraken 2 exhibits superior speedup with respect to the number of threads allocated compared to Kraken 1; this is especially true for paired-end reads.
FDA-ARGOS experimental concordance evaluation
The FDA-ARGOS (dAtabase for Reference Grade micrObial Sequences) project provides sequencing experiments for many microbial isolates [22]. We used NCBI’s Sequence Read Archive [37] to find all 1392 experiments related to the FDA-ARGOS project (accession PRJNA231221). Because some tools are unable to properly process reads of differing lengths, we selected only the 263 experiments that were run on an Illumina HiSeq 4000 instrument and produced 151-bp reads. We then randomly selected one experiment from each genus to download and used reservoir sampling to select a subset of 10,000 paired-end fragments from each selected experiment. We also removed experiments for which our strain-exclusion reference genome set did not contain a reference genome of the same species as the sequenced isolate. These steps yielded 25 experiments’ worth of data, for a total of 250,000 paired-end fragments. Using the strain-exclusion databases created earlier, we then used each classifier to classify the data and examined the percentage of each experiment’s fragments that were classified.
Because the FDA-ARGOS data are from real sequencing experiments, several factors could explain discordance between a classifier’s results and the experiments’ assigned taxa, including the evolutionary distance between sequences and reference data, low-quality sequencing runs, and contamination. The true causes of such discordance may not be discernable, and even when they are, they often require an in-depth examination of the sequencing and reference data. For these reasons, we do not report sensitivity and PPV for these data because we cannot be certain of the true taxonomic origin of each individual fragment of real sequencing data. Rather, we evaluated the concordance of the SRA-assigned taxa with the fragments’ classifications at the genus rank and report for each classifier the following quantities: (a) the percentage of fragments with a concordant classification at the genus rank, (b) the percentage of fragments with a discordant classification at the genus rank, (c) the percentage of fragments with a classification of an ancestor of the SRA-assigned genus taxon, and (d) the percentage of fragments that were not classified. The results of this concordance evaluation are provided in full in Additional file 1: Table S5.
Parameter sweeps
We examined various values for parameters to ensure Kraken 2’s default parameters would provide an advantageous balance of accuracy, classification speed, and memory usage. Specifically, we looked at parameters relating to minimizer-based subsampling (k and ℓ), hash-based subsampling (f = S/S′), and spaced seed usage (s). For Kraken 2, we performed two parameter sweeps, with one focused on minimizer-based subsampling and one focused on hash-based subsampling. The first parameter sweep looked at values for ℓ in the interval [25, 31], values for k in the interval [ℓ, ℓ + 10], and values for s in the interval [0, 7]; the second parameter sweep looked at values of ℓ in the interval [25, 31], fixed k = ℓ, values for f in the set {0.125, 0.25, 0.5}, and values for s in the interval [0, 7]. We also performed a third parameter sweep, focused on translated search (Kraken 2X), where we looked at values for ℓ in the interval [11, 15], values for k in the interval [ℓ, ℓ + 3], and values for s in the interval [0, 3].
Each parameter sweep used the strain exclusion data that we previously created to build databases, and we used the same accuracy and timing methods for these databases that we did in the cross-classifier comparison. The results of the first two parameter sweeps, run on nucleotide databases, are provided in Additional file 1: Table S2, while the results of the third parameter sweep, run on protein databases, are provided in Additional file 1: Table S9. We note that the parameter sweeps yielded a large number of parameter combinations giving approximately the same, near-optimal levels of accuracy. This suggests performance is not overly sensitive to particular parameter settings.
Evaluation of database sizes of Kraken 1 and Kraken 2
We began by shuffling the reference DNA sequences in our strain exclusion set and recording the total number of bases in each sequence. We modified Kraken 2’s capacity estimator to report an estimate of the number of distinct minimizers after each sequence was processed, rather than only after all sequences were processed. Finally, we ran the capacity estimator twice on the shuffled genomic data, once with k = 31, ℓ = 31, s = 0 (corresponding to Kraken 1’s defaults—effectively counting the number of distinct k-mers) and again with k = 35, ℓ = 31, s = 7 (Kraken 2’s defaults).
The size of a Kraken 1 database is a function of the number of distinct k-mers in the reference data. If there are X distinct k-mers, the size of Kraken 1’s database.kdb (sorted list of k-mer/LCA pairs) file will be 1072 + 12X bytes; the 1072-byte term is the size of the Jellyfish/Kraken header data, and 12 bytes are used for each k-mer/LCA pair. The database.idx (minimizer offset index) file is 8,589,934,608 bytes, a function of Kraken 1’s default minimizer length of 15. The full database size is the sum of the sizes of those two files.
Similarly, the size of a Kraken 2 hash table is a function of the estimate of the number of distinct minimizers in the reference data. If there are an estimated Y distinct minimizers, Kraken 2’s hash table will be 32 + ⌊4Y/0.7⌋ bytes in size (representing 32 bytes of metadata and using 4 bytes per cell and a load factor of 0.7).
We used the estimates of the numbers of distinct k-mers and distinct minimizers to calculate the database sizes of Kraken 1 and Kraken 2 for successively larger subsets of the strain exclusion set. The results of this evaluation are shown in Additional file 2: Figure S1, with raw data available in Additional file 1: Table S10.
Reviewing the results when all genomic sequences were added, we find that the number of distinct k-mers is approximately 3.1 times the number of distinct minimizers for the settings we selected for Kraken 1 and Kraken 2. It is not possible to draw a direct relationship between the number of distinct k-mers or minimizers and the number of sequence bases processed. For example, homology between similar strains and species will cause the number of distinct k-mers/minimizers to grow more slowly than the total number of bases. Examining the linear-term coefficients from the database-size expressions (12X and 4Y/0.7) indicates a Kraken 2 database will be approximately 15% of the size of a Kraken 1 database of the same reference data; this is because X ≈ 3.1Y, and (4/0.7)/(12 × 3.1) ≈ 0.15. When we examine the full reference set, the 15% estimate is consistent with the ratio of Kraken 2’s hash table size (10.456 GB) to Kraken 1’s database.kdb file size (77.490 − 8.589 = 68.901 GB), which is 10.456/68.901 = 0.152.
Bracken experiments on strain exclusion data
We first generated Bracken metadata from each of the Kraken 1 and Kraken 2 reference libraries used in the strain exclusion experiments. We then used Bracken to estimate genus- and species-level abundance from the Kraken 1 and Kraken 2 classification results on the prokaryotic strain exclusion read data. Due to the low sequence similarity between our simulated viral reads and the strain-exclusion reference data, none of the nucleotide search programs exhibited high sensitivity on these reads, including Kraken 1 and Kraken 2. Such low classification rates prevent Bracken from inferring taxonomy for a large proportion of the viral reads. Additionally, the taxonomy for viruses has several examples where species are not grouped by ancestry and lack similarity in both gene organization and genomic sequence [38]. For these reasons, we chose to exclude the simulated viral reads from our analysis of Bracken.
For overall evaluation of the accuracy of Bracken in these strain exclusion experiments, we calculated the mean absolute percentage error (MAPE):
$$ \mathrm{MAPE} = \frac{100\%}{n} \sum_{x=1}^{n} \left| \frac{T_x - S_x}{T_x} \right| $$
where Sx is the estimated number of reads and Tx is the true number of reads for taxon x. In this strain exclusion experiment, n = 40, the number of distinct prokaryotic species (and also of distinct genera) in the sample, and Tx = 1000 for each taxon.
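A short sketch of this calculation follows (illustrative names; true and estimated read counts are supplied per taxon).

```cpp
#include <cmath>
#include <vector>

// Mean absolute percentage error between true read counts T and estimated
// read counts S over n taxa, as defined in the equation above.
double Mape(const std::vector<double>& true_counts,
            const std::vector<double>& estimated_counts) {
  double sum = 0.0;
  const size_t n = true_counts.size();
  for (size_t x = 0; x < n; ++x)
    sum += std::fabs((true_counts[x] - estimated_counts[x]) / true_counts[x]);
  return 100.0 * sum / static_cast<double>(n);
}
```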