Assembling genomes using short-read sequencing technology
© BioMed Central Ltd 2010
Published: 28 January 2010
Gigabase-scale genome assemblies are now feasible using short-read sequencing technology, bringing the cost of such projects below the million-dollar mark.
Moore's law is often used as a predictor in the informatics field for the growth of processing power based on the increase in the number of transistors in integrated circuits. It states that, according to the historical trend, this number doubles roughly every 2 years. A similar trend manifests itself in the number of base pairs deposited in the GenBank database, which had a mere 680,338 base pairs (bp) in its December 1982 release. Twenty-seven years later, that number reached 110,118,557,163 bp in its core repository, and 158,317,168,385 bp in the Whole Genome Shotgun sequencing project repository. This increase corresponds to a doubling roughly every 17 months over 3 decades. If this trend is sustained, by the mid-21st century we will have enough sequencing data to cover the genomes of the entire projected human population of 9 billion with more than fivefold redundancy, and have several exabases (10^18 bp) remaining to sequence other species.
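The doubling time quoted above can be checked with a back-of-envelope calculation. The sketch below assumes perfectly exponential growth between the December 1982 release and the combined 2009 totals for the core and whole-genome shotgun repositories; the figures are those quoted in the text.

```python
# Back-of-envelope check of the GenBank doubling time, assuming
# perfectly exponential growth over the 27-year interval.
import math

bp_1982 = 680_338                             # December 1982 release
bp_2009 = 110_118_557_163 + 158_317_168_385   # core + WGS repositories
years = 27

doublings = math.log2(bp_2009 / bp_1982)
months_per_doubling = years * 12 / doublings
print(round(months_per_doubling, 1))  # roughly 17 months
```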
This gap between the growth rates of computing power and sequencing throughput is exerting considerable strain on the development of bioinformatics tools to process the sequencing data generated. Hence, we need ever faster and more accurate algorithms to bridge this widening gap, much as media-specific compression algorithms such as those used by MP3 and DVD bridged the gap between the digital media revolution and its storage requirements. This article focuses on three large and two smaller de novo sequencing projects, all published within the last 6 months, with a special emphasis on the recently published giant panda genome, which used a so-called next-generation sequencing (next-gen) platform from Illumina.
Of the three major contenders in the next-gen sequencing field, the 454 platform from Roche generates the longest reads, and so its data are suited for de novo sequencing studies. However, it is also the most expensive per sequenced base to operate. The SOLiD platform from ABI sequences dinucleotides in color space rather than individual nucleotides. In color space representation, each of the 16 dinucleotides is assigned to one of four dyes. Each nucleotide is interrogated twice, which can improve accuracy, but the fact that each dye is shared by four dinucleotides complicates analysis. Hence, although less expensive to run, the SOLiD platform has mostly been used for resequencing studies. The Illumina platform is on a par with SOLiD in throughput and sequencing cost. However, it generates short-sequence data in nucleotide space and so is suitable for de novo sequencing. Although all three platforms were originally marketed for resequencing, with increasing read lengths, improving quality, and the development of protocols for paired-end reads, they are all now being used in de novo sequencing studies as well [1, 2].
Three genome projects recently published their results on the assembly and analysis of gigabase-scale genomes. For two of these, the B73 maize genome and the domestic horse genome, researchers took the more conventional approach of sequencing clones using capillary technology. In contrast, researchers on the third project - the panda genome - exclusively used Illumina's short-read technology to sequence the complete genome.
The B73 maize genome project followed the approach used by the original human genome project, using a physical map to select a minimum bacterial artificial chromosome (BAC) tiling path, then sequencing and assembling the selected clones to construct the Zea mays ssp. mays L. genome. The high prevalence of repeat elements, constituting about 85% of the 10-chromosome, 2.3-gigabase genome, necessitated this rather conservative strategy. The project team assembled the 4× to 6× coverage data from capillary (Sanger) sequencing of a BAC library of 16,848 clones using Phrap, confirmed the assembly by BAC end sequencing, and refined it by sequencing 63 fosmid clones. The resulting assembly contains 125,325 contigs (61,161 scaffolds) with a contig (scaffold) N50 of 40 kb (76 kb), reconstructing 89% of the genome. N50 denotes a weighted median: for a given assembly, half the genome is contained in contigs at least as large as the N50. The estimated cost of the project, excluding the bioinformatics cost, is around US$30 million.
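The N50 statistic used throughout these comparisons is straightforward to compute: sort contig lengths in decreasing order and walk down until half the total assembly length is covered. A minimal sketch with a toy contig set:

```python
# Minimal N50: the length of the contig at which the running total,
# over contigs sorted from largest to smallest, first reaches half
# the total assembly length.
def n50(lengths):
    total = sum(lengths)
    running = 0
    for length in sorted(lengths, reverse=True):
        running += length
        if running * 2 >= total:
            return length

# toy assembly: total = 100 bp, so half = 50 bp
print(n50([40, 25, 15, 10, 5, 5]))  # 40 + 25 = 65 >= 50, so N50 = 25
```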
The project team for the domestic horse genome reported the second version of the draft Equus caballus genome, which has 31 pairs of autosomes and one pair of sex chromosomes. The genome length is estimated to be between 2.5 and 2.7 Gb. Sampling the genome of a Thoroughbred mare, the team generated three clone libraries - with 4-kb inserts, 10-kb inserts, and 40-kb fosmids - yielding sequence fold-coverages of 4.96×, 1.42× and 0.40×, respectively, on the capillary sequencing platform, for a total of 6.8× coverage. To improve the contiguity of the draft assembly, the team used end sequences of 314,972 BACs derived from a half-brother of the sequenced mare. The horse genome was assembled with Arachne 2.0 to obtain a contig (scaffold) N50 of 112 kb (46 Mb), with about 46% of the assembled genome in repetitive sequences. The use of a whole-genome shotgun approach reduced the cost of this project to half that of the maize project.
The above two projects used capillary sequencing data. In contrast, the giant panda genome project used Illumina sequencing data with an average read length of 52 bp and 73× coverage to assemble the Ailuropoda melanoleuca genome, which, at an estimated 2.4-2.5 Gb, is of comparable length to the other two genomes. The assembly was performed in two stages using SOAPdenovo. In the first stage, the project team used paired-end sequencing data from 26 fragment libraries with nominal fragment sizes ranging from 110 bp to 570 bp. In the second stage, they used the pairing information from these libraries and from 11 long insert libraries of lengths 2 kb, 5 kb and 10 kb in successive iterations to scaffold the initial contigs. The resulting draft assembly is reported to have a contig (scaffold) N50 of 40 kb (1.3 Mb), reconstructing an estimated 92% of the genome. They also report that 36% of the panda genome is composed of transposable elements. The estimated cost of sequencing for this project is well under $1 million, making it 25 to 50 times more cost-efficient than the B73 maize and horse genome projects.
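As a rough consistency check of the numbers above, the read count implied by 73× coverage of a ~2.4-Gb genome with 52-bp reads follows from coverage = reads × read length / genome size. The figures are from the text; the arithmetic below is only a back-of-envelope sketch.

```python
# Rough read count implied by the panda project's quoted parameters:
# coverage = reads * read_length / genome_size, solved for reads.
genome_size = 2_400_000_000   # ~2.4 Gb, lower end of the estimate
read_length = 52              # average read length in bp
coverage = 73                 # sequence fold-coverage

reads_needed = coverage * genome_size / read_length
print(f"{reads_needed:.2e}")  # on the order of 3.4 billion reads
```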
[Table: Assembly statistics for the maize, horse, panda, blue-stain fungus (G. clavigera†) and P. syringae† genomes and their cost, listing sequencing platform (Sanger, 454, Illumina), number of contigs, number of scaffolds, and estimated sequencing cost.]
Arguably, even if state-of-the-art sequencing protocols and bioinformatics tools are used, genomes with high repeat content, such as B73 maize, may still not yield to short-read sequencing. However, if the success and the quality of the paradigm used by the giant panda genome project team are validated and reproduced, new de novo sequencing projects for complex genomes will benefit from the reduction in cost as well as the time efficiencies offered by the short-read technologies.
The enabling paradigm behind the de novo assembly of the giant panda genome is based on a de Bruijn graph representation of short sequence overlaps. A de Bruijn graph is a directed graph in which vertices are strings of length k and edges represent overlaps of k-1 symbols, or nucleotides in the case of genome sequences. This approach was introduced to the field by Pevzner and coworkers with the Euler software, and was made popular by the software Velvet. The first application of the technology to mammalian-sized genomes was demonstrated by Simpson et al. using ABySS.
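A toy construction of such a graph, following the definition above (k-mers as vertices, edges for (k-1)-base overlaps between consecutive k-mers in a read), might look like the following sketch; the read set and k are illustrative:

```python
# Toy de Bruijn graph: vertices are k-mers, and an edge records the
# (k-1)-base overlap between consecutive k-mers observed in a read.
from collections import defaultdict

def de_bruijn(reads, k):
    graph = defaultdict(set)
    for read in reads:
        for i in range(len(read) - k):
            graph[read[i:i + k]].add(read[i + 1:i + k + 1])
    return graph

g = de_bruijn(["ACGTC", "CGTCA"], k=3)
# "ACG" -> {"CGT"}: the two 3-mers share the 2-mer "CG";
# overlapping reads contribute to the same vertices and edges.
print(dict(g))
```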
These tools produce first-pass draft assemblies using a de Bruijn graph, followed by contig merging using paired-end information. For the latter stage, several groups have developed alternative ways of using the information in the read pairs. The ALLPATHS algorithm uses the paired-end information in layers, starting with the large-fragment libraries to build 20-kb regions, called neighborhoods, around unique contigs, called seeds. The short-fragment pairs are then used to assemble the neighborhood, including the repetitive regions between the seeds. The panda assembly also used a similar layered approach to the fragment libraries, but started with the shorter-fragment libraries and proceeded to the longer ones.
The authors of Velvet suggest in a subsequent paper that shorter-fragment libraries may be unnecessary. They argue that the distance between two nearby contigs can be calculated by comparing their distances, estimated using a large-fragment library, to a third, more distant contig: the distance between the two nearby contigs is simply the difference between their distances to the distant contig.
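The argument reduces to simple arithmetic; the distances below are hypothetical:

```python
# Sketch of the Velvet authors' argument: if a large-fragment library
# places contigs A and B relative to the same distant contig C, the
# A-B gap is the difference of the two estimates; no short-fragment
# pairs are needed. The distances are hypothetical.
dist_a_to_c = 4200  # estimated from large-fragment read pairs
dist_b_to_c = 3900  # estimated from large-fragment read pairs

dist_a_to_b = dist_a_to_c - dist_b_to_c
print(dist_a_to_b)  # 300 bp between the two nearby contigs
```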
In ABySS, multiple libraries of different-sized fragments are considered simultaneously. Distances between pairs of contigs are estimated using each fragment library on its own, and the most accurate distance estimates between contig pairs, which typically come from the library with the smallest fragments that span each distance, are retained. After smaller contigs have been merged into larger ones, cases that could not be resolved in previous iterations are reconsidered.
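That per-pair selection can be sketched as picking, for each contig pair, the library whose estimate has the smallest standard error; the library names, distances and errors below are hypothetical:

```python
# Hedged sketch of per-pair distance selection: keep the estimate with
# the smallest standard error, which typically comes from the smallest
# library that spans the gap. All values are hypothetical.
estimates = {            # library -> (distance in bp, standard error)
    "200bp": (310, 15),
    "2kb":   (295, 60),
    "10kb":  (330, 180),
}

best_lib, (dist, err) = min(estimates.items(), key=lambda kv: kv[1][1])
print(best_lib, dist)  # the 200bp library wins with the tightest error
```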
[Figure: Effect of the choice of k-mer size on the single-end contig N50 (bp) for the giant panda assembly using ABySS 1.1.0.]
The five genomes noted in this article have different levels of completeness, and the cost estimates we report are based on a number of assumptions and on the summary numbers reported in the respective studies. Furthermore, they exclude any costs related to the bioinformatics activities. As such, the sequencing costs are not directly comparable. Nevertheless, at face value, a pattern emerges that favors the short-read technology. This is not news, certainly, as it is the underlying premise of the next-gen platforms, yet the short-read assembly studies cited show that bioinformatics is catching up with the pace of data generation by these platforms. Thus, with software tools maturing and experimental protocols being refined, the number of genomes assembled with short reads will increase, and their size will expand.