Computational and statistical approaches to analyzing variants identified by exome sequencing
© BioMed Central Ltd 2011
Published: 14 September 2011
New sequencing technology has enabled the identification of thousands of single nucleotide polymorphisms in the exome, and many computational and statistical approaches to identify disease-association signals have emerged.
From quantitative trait locus mapping and linkage analysis to genome-wide association studies (GWASs), genetic markers have been used to locate causal genes underlying Mendelian and complex traits with impressive success: the molecular basis for nearly 3,000 Mendelian disorders is known  and over 4,500 single nucleotide polymorphisms (SNPs) have been associated with a variety of human traits and complex diseases . These studies rely on linkage with the disease-causing variant and, by their very nature, indirect genetic marker studies have limitations. The causal variant or gene remains unknown for the majority of the 4,500 SNPs associated with complex disease and for over 3,500 Mendelian disorders. New sequencing-based studies have emerged and are poised to change genetic mapping fundamentally by enabling the direct identification of causal sequence variants in a single experiment. We will no longer have to rely on linkage with the disease-causing variant; instead, by obtaining full sequence data for all genes we can now directly test for association with disease. As we have learned in the past few years, however, there is a great deal of human genetic variation  and finding the causal variant among thousands of candidates can be difficult.
Here we review the computational and statistical approaches that have emerged for managing these data in this rapidly exploding field. First, we briefly review the process for identifying variants in next-generation sequencing (NGS) studies and then discuss strategies for identifying the causal variant in Mendelian disorders among the total number of variants identified. We also discuss strategies for identifying the causal gene(s) in complex diseases among all genes in the genome, before outlining some challenges facing current exome sequencing studies.
NGS methods have been developed that harness massively parallel DNA sequencing  and enable large-scale sequencing projects that have applications ranging from cataloging genetic diversity on a population level  to identifying a disease-causing variant in a single individual, which might lead to directed therapy . Most large-scale medical sequencing projects so far have focused on the protein-coding region of the genome (the 'exome'). This has been driven in part by cost (whole genome sequencing is still relatively expensive for large sample sizes), biology (most known examples of disease-causing variants alter the protein sequence), and practical considerations (there is currently little consensus on interpreting non-coding genetic variation).
Various methods have been developed to select a subset of the genome for sequencing, but only solid-phase hybridization and liquid-phase hybridization have been commercially applied for selecting the entire human exome as the target for sequencing. After target enrichment, sequencing is performed using various NGS technologies, including reversible terminator reactions, sequencing by ligation, pyrosequencing and real-time sequencing. These generate millions of short sequence copies, or reads, tiled across the portions of the reference genome that were targeted. Although numerous algorithms have been developed to align NGS reads to the reference genome (Bowtie, Short Oligonucleotide Analysis Package (SOAP) and Blat-like Fast Accurate Search Tool (BFAST), among others), most sequencing projects use Mapping and Assembly with Qualities (MAQ) or the Burrows-Wheeler Aligner (BWA) because of their computational efficiency and multi-platform compatibility. The resulting alignment is then inspected for positions that differ from the human reference sequence, which are identified as SNPs.
As with alignment tools, many algorithms have been developed to identify a high-quality set of variants in NGS projects. Most current SNP discovery tools rely on the calculation of genotype likelihoods at each position , defined as the probability of observing the given sequencing data (base calls and base quality scores) at that position given a set of underlying genotypes. Bayesian posterior probabilities can then be calculated for each potential genotype . Two popular tools for SNP discovery in NGS data that are easily incorporated into data-processing pipelines are SAMtools  and the Genome Analysis Toolkit UnifiedGenotyper [14, 15]. Other tools have been developed to exploit aspects of specific types of NGS technologies (optimizing base quality estimates from pyrosequences, for example) [16–18] or low-coverage sequencing data [18, 19].
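The genotype-likelihood calculation described above can be sketched in Python. This is a minimal illustration of the idea, not the model used by SAMtools or the UnifiedGenotyper: it assumes reads are independent and, for simplicity, that a base-call error at a biallelic site always produces the other allele.

```python
import math

def genotype_log_likelihoods(bases, quals, ref="A", alt="G"):
    """log P(data | genotype) for the three diploid genotypes at one site,
    under a simplified model: independent reads, and a base-call error is
    assumed to produce the other allele."""
    logL = {}
    for genotype, alt_fraction in ((ref + ref, 0.0), (ref + alt, 0.5), (alt + alt, 1.0)):
        ll = 0.0
        for base, q in zip(bases, quals):
            e = 10 ** (-q / 10.0)  # phred-scaled quality -> error probability
            p_if_alt = (1 - e) if base == alt else e  # P(base | sampled allele is alt)
            p_if_ref = (1 - e) if base == ref else e  # P(base | sampled allele is ref)
            ll += math.log(alt_fraction * p_if_alt + (1 - alt_fraction) * p_if_ref)
        logL[genotype] = ll
    return logL

def genotype_posteriors(logL, priors):
    """Bayesian posteriors: P(genotype | data) is proportional to
    P(data | genotype) * P(genotype)."""
    unnorm = {g: math.exp(ll) * priors[g] for g, ll in logL.items()}
    total = sum(unnorm.values())
    return {g: v / total for g, v in unnorm.items()}

# 10 reads at one site, 5 reference and 5 alternate, all at quality Q30:
# the heterozygous genotype should dominate the posterior.
logL = genotype_log_likelihoods(["A"] * 5 + ["G"] * 5, [30] * 10)
posteriors = genotype_posteriors(logL, {"AA": 0.985, "AG": 0.010, "GG": 0.005})
```

Even with a strong prior favoring the homozygous reference genotype, ten balanced Q30 reads make the heterozygous call overwhelming; real callers work in log space throughout and model strand, mapping quality and error correlations, which this sketch omits.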
By applying the appropriate tool one can identify a set of positions in the sequencing data that differ from the reference sequence, along with an indication of genotype quality. Typically 15,000 to 20,000 variants are discovered per exome, with variation in this number arising from differences in exome target definitions [20–23] (a target set with fewer genes or exons would be expected to yield fewer total variants) and in ancestry (individuals of African ancestry have more variants per exome than individuals of European ancestry, for example). By contrast, about 3 million SNPs per genome are discovered using whole-genome sequencing because of the larger sequencing target (whole-genome sequencing targets about 3 Gb, whereas the typical exome target is about 33 Mb). To facilitate the processing and sharing of these large datasets, the Variant Call Format (VCF) text file format is emerging as the accepted format for reporting sequence variation from NGS projects, and the SAM/BAM file format is routinely being used for storing and sharing raw NGS data.
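As a concrete illustration, a VCF data line is a tab-separated record with eight fixed fields, optionally followed by genotype columns. The record below is invented for illustration; only the field layout follows the VCF specification.

```python
# A single (invented) VCF v4 data line; the eight fixed fields are
# CHROM, POS, ID, REF, ALT, QUAL, FILTER, and INFO.
line = "1\t866511\trs60722469\tC\tT\t258.6\tPASS\tAC=2;AF=1.00;AN=2"

fixed = ["CHROM", "POS", "ID", "REF", "ALT", "QUAL", "FILTER", "INFO"]
record = dict(zip(fixed, line.split("\t")))

# Here the INFO field is a semicolon-separated list of KEY=VALUE annotations
# (allele count, allele frequency, allele number).
info = dict(item.split("=") for item in record["INFO"].split(";"))
```

In practice one would use an established VCF parsing library rather than splitting fields by hand (real INFO fields also contain valueless flags), but the example shows why a simple line-oriented text format has been easy for pipelines to adopt.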
Because even a single base-pair change can be associated with disease, SNP discovery algorithms must robustly distinguish true variation from sequencing errors. This challenge is magnified in exome sequencing projects, in which discovering rare variants is often the goal. NGS has an inherently higher per-base error rate than Sanger sequencing but is generally thought to compensate for these errors with much higher coverage (most NGS experiments for disease association generate an average of greater than 20- to 30-fold coverage). Despite this degree of coverage, however, the higher error rate of NGS can introduce false-positive associations if cases and controls have differential coverage depths. In large-scale sequencing projects aimed at discovering rare variants associated with complex disease, differential coverage between cases and controls should be one of the quality control metrics (of potentially many); however, a standardized quality control approach to NGS data has not yet emerged.
Exome sequencing has been successfully used to find the causal variant in several Mendelian disorders, such as Miller syndrome  (a rare autosomal recessive disorder characterized by craniofacial abnormalities), Kabuki syndrome  (an autosomal dominant form of mental retardation with facial abnormalities), and many others . It is emerging as an attractive method for disease-gene mapping in Mendelian traits when linkage studies have been inconclusive or impossible  (often owing to low numbers of affected individuals) or when looking for causal de novo mutations [20, 28]. Successful studies have typically analyzed fewer than ten individuals and often only affected individuals have been sequenced. These small studies are underpowered for detecting association using currently available association tests and use a different analytic approach for novel gene discovery compared with methods developed for the analysis of complex diseases.
To further narrow the search, investigators have imposed a recessive model of disease when the pedigree suggests this mode of inheritance, requiring a putative causal variant to be present in a homozygous state for all individuals (while absent in public databases), or for individuals to be compound heterozygotes in the putative gene (carrying two separate variants in the same gene), which can reduce the list to a single variant or gene [20, 22, 23]. This has been successfully performed in at least 11 studies of recessive disorders, with as few as one sequenced individual: a single individual with Perrault syndrome (ovarian dysgenesis with sensorineural deafness) was found to have two separate non-synonymous variants in HSD17B4, a gene that is involved in peroxisomal fatty acid β-oxidation.
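The recessive filtering logic described above can be sketched as follows. The data layout (one dict per variant with a gene name, a genotype call and a public-database flag) and the gene names other than HSD17B4 are invented for illustration; note that without phase information, two heterozygous variants in one gene are only a possible compound heterozygote.

```python
from collections import defaultdict

def recessive_candidate_genes(variants):
    """Genes consistent with a recessive model in one individual: a novel
    homozygous variant, or two or more novel heterozygous variants
    (a possible compound heterozygote; phase is unknown)."""
    by_gene = defaultdict(list)
    for v in variants:
        if not v["in_public_db"]:  # exclude previously catalogued variants
            by_gene[v["gene"]].append(v)
    return {
        gene
        for gene, vs in by_gene.items()
        if any(v["genotype"] == "hom" for v in vs)
        or sum(v["genotype"] == "het" for v in vs) >= 2
    }

variants = [
    {"gene": "HSD17B4", "genotype": "het", "in_public_db": False},
    {"gene": "HSD17B4", "genotype": "het", "in_public_db": False},
    {"gene": "GENE_X",  "genotype": "het", "in_public_db": False},  # single het: excluded
    {"gene": "GENE_Y",  "genotype": "hom", "in_public_db": True},   # known variant: excluded
]
candidates = recessive_candidate_genes(variants)
```

With several affected individuals, the same logic is applied per exome and the per-individual candidate sets are intersected, which is what reduces the list so dramatically.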
These simple filtering techniques may not be sufficient, however, and additional approaches might be needed to further narrow the search. An example of this was the use of an identity by descent analysis in a sequencing study to discover the cause of hyperphosphatemia mental retardation syndrome . After common variants were excluded from the list of shared variants among three affected individuals, 14 candidate genes were left; of these, however, only two were found in regions of the exome that were inferred to be identical by descent. PIGV (encoding phosphatidylinositol glycan class V), a gene that is involved in the synthesis of glycosyl-phosphatidylinositol, was identified as the causal gene after the final two candidate genes were sequenced in additional families. Our guess is that after the 'low-hanging fruit' are found, additional novel methods incorporating techniques from population and statistical genetics will be needed to identify causal genes in sequencing projects in which the answer is not immediately apparent.
In contrast to the autosomal recessive model of disease, there have been fewer published examples of novel gene association with autosomal dominant disorders (only four have been published to date), perhaps highlighting the relative difficulty in finding such causal genes with exome sequencing. The general approach in the dominant model also relies on filtering a list of nonsynonymous variants to exclude those previously identified in either public databases or shared control exomes, and it requires affected individuals to be heterozygous for the same variant or to be heterozygous for different variants in the same gene. As a proof of principle for exome sequencing in gene discovery for Mendelian disorders, the exomes of four individuals with Freeman-Sheldon syndrome (a rare autosomal dominant disorder previously known to arise from mutations in myosin heavy chain 3, MYH3) were sequenced in one of the first publications detailing exome sequencing of multiple individuals. MYH3 was identified as the only gene containing non-synonymous variants in all four individuals while being absent from dbSNP and other control exomes.
All exome sequencing studies for gene discovery in Mendelian disorders have relied on the assumption of complete penetrance. Under this assumption, they exclude variants from consideration if present in public catalogs of human genetic variation or unpublished datasets. As these databases expand, however, disease-causing variants might appear in one or more publicly available datasets. The limitation of requiring absence from these datasets is also apparent when one allows for a genetic model of incomplete penetrance (that is, if the phenotype is present in only some fraction of carriers). In the future such a filtering strategy might need to specify a minor allele frequency threshold in such datasets as opposed to requiring complete absence. The converse of penetrance (the probability of observing a phenotype given a genotype) is detectance (the probability of observing a genotype given a phenotype), and almost all exome sequencing studies for Mendelian disorders have relied on a model of complete detectance. The causal gene for Kabuki syndrome, however, was found only after allowing for incomplete detectance, and might not have been identified as MLL2 (mixed lineage leukemia 2) if the discovery panel had not been so enriched for carriers (90% of the discovery panel carried a loss-of-function variant in MLL2 compared with 60% of the replication panel). In the future, better tests will be needed that incorporate incomplete penetrance and detectance. It is clear, however, that such tests must account for gene length, as longer genes harbor more variants simply by virtue of their size and would otherwise dominate the results.
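A frequency-threshold filter of the kind suggested above might look like this sketch; the 0.1% threshold is an arbitrary illustration, not a recommended value.

```python
def passes_frequency_filter(db_allele_frequency, threshold=0.001):
    """Retain a variant if it is absent from public databases (frequency
    None) or present below a minor allele frequency threshold, rather than
    requiring complete absence -- a requirement that breaks down under
    incomplete penetrance as public catalogs grow."""
    return db_allele_frequency is None or db_allele_frequency < threshold

# Unseen, very rare, and common database frequencies, respectively.
kept = [passes_frequency_filter(f) for f in (None, 0.0004, 0.02)]
```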
The simplest approach to analyzing variants from exome sequencing data is to examine each one individually for association with the given phenotype. For example, dichotomous traits (myocardial infarction, diabetes, schizophrenia, and so on) can be analyzed using the χ² test for contingency tables, Fisher's exact test, Cochran-Armitage test for trend, or logistic regression. These methods test for an enrichment of the 'risk' allele in cases or controls (if seen more frequently in controls, it would be deemed a 'protective' allele). An example would be finding a variant present in 3% of cases but only 1% of controls. Whether this overrepresentation is statistically significant depends on the total number of individuals in the study and the required level of statistical stringency. Quantitative traits (such as blood lipid levels, body mass index or height) can be analyzed by linear regression. By definition, rare variants have low population frequency, and the statistical power to detect association with a phenotype is low for modestly sized studies. For example, assuming 10% disease prevalence, in a study with 1,000 cases and 1,000 controls, there is 2% power to detect an association for a rare variant (minor allele frequency of 0.5%), with a threefold effect at the genome-wide significance level of 5 × 10⁻⁸.
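The 3%-versus-1% example above can be made concrete with a Pearson chi-square test on a 2x2 carrier table, implemented here from scratch; for 1 degree of freedom the chi-square survival function can be written with erfc.

```python
import math

def chi2_2x2(a, b, c, d):
    """Pearson chi-square statistic for the 2x2 table [[a, b], [c, d]];
    with 1 degree of freedom the p-value is erfc(sqrt(statistic / 2))."""
    n = a + b + c + d
    stat = n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))
    return stat, math.erfc(math.sqrt(stat / 2))

# 30 of 1,000 cases carry the variant (3%) versus 10 of 1,000 controls (1%).
stat, p = chi2_2x2(30, 970, 10, 990)
```

Here p is roughly 1.4 × 10⁻³: nominally significant, but several orders of magnitude short of the genome-wide threshold of 5 × 10⁻⁸, which illustrates why single rare variants are so hard to detect in modestly sized studies.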
Groups of variants can be analyzed together in an attempt to improve power. In whole genome sequencing, a sliding window can be used to group variants, whereas in exome sequencing the natural unit of grouping is one gene. Alternative splicing can complicate this analysis, however, as a single variant might belong to multiple transcripts of the same gene with different functional effects (a variant might be classified as synonymous for one transcript and missense for another, for example). To extend the single variant tests above, single-SNP P-values from multiple variants can be combined by Fisher's  or Stouffer's  methods. Variants can also be combined in multiple logistic or linear regression models. However, because these simple approaches still essentially test each variant separately and then combine evidence from multiple variants, the results must be adjusted for many degrees of freedom, which will limit the power of these approaches.
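Stouffer's method, one of the combination approaches mentioned above, converts each P-value to a Z-score, sums the Z-scores scaled by the square root of their number, and converts back. A minimal sketch using only the Python standard library:

```python
import math
from statistics import NormalDist

def stouffer_combined_p(pvalues):
    """Combine one-sided P-values by Stouffer's Z-score method:
    Z = sum(Phi^-1(1 - p_i)) / sqrt(k), combined p = 1 - Phi(Z)."""
    nd = NormalDist()
    z = sum(nd.inv_cdf(1 - p) for p in pvalues) / math.sqrt(len(pvalues))
    return 1 - nd.cdf(z)

# Two variants in the same gene, each with p = 0.05.
combined = stouffer_combined_p([0.05, 0.05])
```

Two independent P-values of 0.05 combine to about 0.01, stronger evidence than either alone; Fisher's method instead sums -2 ln p_i and refers the sum to a chi-square distribution with 2k degrees of freedom.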
Given the large amount of human genetic variation, it would not be surprising to find neutral variants in a causal gene. Therefore, selecting a subset of variants for regression can improve the power to detect an association. For example, synonymous variants are typically discarded because they are less likely to be causal. Shrinkage and regularization regression methods such as LASSO , ridge regression , and stepwise regression have been proposed for association studies. In these methods, the regression model is fitted while accounting for the cost of adding each additional variable to the model. Other approaches, such as logic regression  and the method proposed by Han and Pan , use data-driven combinations of variants to select variables for regression.
Another approach to increasing power is to collapse multiple rare variants together for analysis. The framework of these tests involves collapsing all variants across a unit (each gene being a unit, for example) together so that even if variants are individually rare, they might be jointly present in sufficient frequency to be used in a univariate test. When used for dichotomous traits, collapsing methods test whether the overall burden of rare variants is higher in cases than controls. For example, CAST  examines the differences in the number of individuals with one or more rare variants between cases and controls, and the CMC test  is based on comparison of non-synonymous rare variants between cases and controls. These tests rely on designating a set of variants as 'rare' for inclusion, and it is not surprising that altering this definition can greatly influence the association results. Unfortunately there is little guidance in this area and allele frequency thresholds of 1% or 5% are commonly (and arbitrarily) chosen. An alternative approach has been developed that uses the data to select the best variants. The variable-threshold test  finds the frequency threshold that best discriminates cases from controls. Similarly, RareCover  aims to find the optimal set of variants to collapse together. Although there have been no published complex-disease exome sequencing studies, these tests have been applied to candidate gene sequencing results [58, 60].
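A CAST-style collapsing test can be sketched as follows: each individual is reduced to a carrier indicator (one or more rare variants in the gene), and carrier counts are compared between cases and controls, here with a one-sided Fisher's exact test computed from the hypergeometric distribution. The carrier counts are invented for illustration.

```python
from math import comb

def fisher_exact_one_sided(a, b, c, d):
    """P(X >= a) under the hypergeometric null for the 2x2 table
    [[a, b], [c, d]] (enrichment of carriers among cases)."""
    n_cases, n_controls, carriers = a + b, c + d, a + c
    total = comb(n_cases + n_controls, carriers)
    return sum(
        comb(n_cases, k) * comb(n_controls, carriers - k)
        for k in range(a, min(n_cases, carriers) + 1)
    ) / total

def cast_style_test(case_variant_counts, control_variant_counts):
    """Collapse rare variants per gene: compare the number of individuals
    carrying >= 1 rare variant between cases and controls."""
    a = sum(1 for n in case_variant_counts if n >= 1)
    c = sum(1 for n in control_variant_counts if n >= 1)
    return fisher_exact_one_sided(
        a, len(case_variant_counts) - a, c, len(control_variant_counts) - c
    )

# 20 of 100 cases versus 5 of 100 controls carry a rare variant in the gene.
p = cast_style_test([1] * 20 + [0] * 80, [1] * 5 + [0] * 95)
```

Because the test sees only the carrier indicator, its result depends entirely on which variants were designated 'rare' in the first place, which is exactly the sensitivity to frequency thresholds noted above.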
An alternative to the collapsing methods involves aggregation, which aims to summarize the information from many variants while appropriately weighting the contribution of each variant. Although collapsing methods discard variants that are considered unlikely to be causal, aggregation methods aim to include the full frequency spectrum of alleles (rare and common) in the association test. The weighted-sum statistic weights variants according to allele frequency (rare variants are given stronger weighting) because of an assumption that functional variants of large effect are kept at a low population frequency by purifying selection. Weighting variants by apparent effect size is also effective and is implemented in KBAC and the test described by Ionita-Laza et al. These tests have been applied to candidate gene sequencing results.
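The allele-frequency weighting used by the weighted-sum statistic can be illustrated as follows. This follows the Madsen-Browning idea of estimating variant frequencies in controls and down-weighting common variants, with details such as the exact frequency smoothing simplified for the sketch.

```python
import math

def frequency_weights(control_genotypes):
    """One weight per variant, w_j = 1 / sqrt(2n * q_j * (1 - q_j)), where
    q_j is a smoothed minor allele frequency estimated from the n control
    individuals (genotypes coded as 0/1/2 minor-allele counts). Rarer
    variants receive larger weights."""
    n = len(control_genotypes)
    n_variants = len(control_genotypes[0])
    weights = []
    for j in range(n_variants):
        minor_count = sum(g[j] for g in control_genotypes)
        q = (minor_count + 1) / (2 * n + 2)  # smoothing avoids q = 0
        weights.append(1 / math.sqrt(2 * n * q * (1 - q)))
    return weights

def genetic_score(genotypes, weights):
    """An individual's aggregated, frequency-weighted variant score."""
    return sum(g * w for g, w in zip(genotypes, weights))

# Two variants observed in 4 controls: variant 0 unseen, variant 1 seen once,
# so variant 0 gets the larger weight.
controls = [[0, 0], [0, 1], [0, 0], [0, 0]]
w = frequency_weights(controls)
```

In the full weighted-sum test, each individual's score is computed this way and case scores are compared with control scores by a rank-sum statistic, with significance assessed by permutation.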
The association of genotype with phenotype can be confounded by various factors such as ancestry, age and sex. Methods that can directly account for such covariates can be advantageous in discerning the causal effect of genetic variants. When a test does not directly accommodate covariates, regressing the genotype and phenotype on the covariate and using the residuals for the association analysis can remove the effect of the covariate on the phenotype.
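The residual approach described above can be sketched with a simple one-covariate least-squares regression; the small age/phenotype dataset is invented for illustration.

```python
def residuals(y, x):
    """Residuals of y after ordinary least-squares regression on a single
    covariate x; these can replace y in a downstream association test when
    the test itself cannot accommodate covariates."""
    n = len(y)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    beta = sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y)) \
        / sum((xi - mean_x) ** 2 for xi in x)
    alpha = mean_y - beta * mean_x
    return [yi - (alpha + beta * xi) for xi, yi in zip(x, y)]

# A phenotype that depends linearly on age leaves essentially zero residual
# signal once age is regressed out.
age = [30, 40, 50, 60]
phenotype = [2 * a + 1 for a in age]
adjusted = residuals(phenotype, age)
```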
The effects of genetic variants can be neutral, protective or detrimental for a given disease trait. Many existing methods test for a frequency differential of variants between cases and controls, and a mixture of positive and negative effects will adversely affect these tests. For example, PCSK9 (encoding proprotein convertase subtilisin/kexin type 9), a gene associated with cholesterol levels and coronary artery disease, contains both risk-lowering loss-of-function variants and gain-of-function variants that increase risk. Testing for a difference in the aggregate of these alleles in either cases or controls would not be expected to yield significant results, as cases will be enriched for risk variants and controls will be enriched for protective variants, effectively canceling each other out in the sum total. Methods that account for a mixture of directions of effects can be more powerful in such scenarios, and several tests explicitly account for bidirectionality of effects (Additional file 1). The prevalence of genes with variants having bidirectional effects is currently unknown, but loss-of-function variants are expected to be more abundant in the general population and this bidirectional effect may be less apparent for sequencing studies not focusing on phenotypic extremes. Regardless, it is likely that multiple genes in a common pathway would have alleles with bidirectional effects, and if a collapsing method is used to group variants across a pathway, these tests will become increasingly useful.
Several studies have shown that using functional information improves the power to detect association [58, 65–67]. For protein-coding variants this can include the predicted effect on protein function, using programs such as SIFT [33, 68], PolyPhen [32, 69], Panther [70, 71], MutationAssessor , SNAP  and PupaSuite . For non-coding variants, evolutionary conservation and functional effects can be assessed using programs such as PhyloP , PhastCons , SCONE  and SiPhy .
The statistical power of the methods to test for association with rare variants has not been systematically analyzed. Although articles that describe novel association tests usually provide power comparisons with previous methods, these calculations are prone to being performed under specific assumptions about the genetic architecture of the trait that often favor the test being introduced and might not be representative of human traits in general [79, 80]. Extending the results from theoretical studies and early sequencing studies of candidate genes [41, 42, 82] would suggest that approximately 10,000 exomes are needed to achieve genome-wide significance for complex traits (in which a Bonferroni-corrected P-value for 20,000 genes would require P < 2.5 × 10⁻⁶). Even the most powerful of the methods available for analyzing sequencing data will not lower these requirements substantially. It would not be surprising, then, if the first exome sequencing association studies are underpowered and exome sequencing needs to be replicated with additional sequencing or genotyping (or both).
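The gene-based significance threshold quoted above is simply a Bonferroni correction over the number of genes tested:

```python
# Family-wise error rate of 0.05 split across ~20,000 gene-based tests
# gives the exome-wide significance threshold quoted in the text.
alpha = 0.05
n_genes = 20_000
threshold = alpha / n_genes  # 2.5e-6
```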
The decision regarding the use of specific tests will depend on many factors, including study design (if the trait is quantitative or dichotomous), the assumption of the underlying genetics (whether only rare variants or both rare and common variants are expected to contribute to disease, whether protective and risk variants are expected), and pragmatic considerations (which test is available for use). Most importantly, different tests are powered to detect associations for different aspects of genetic architectures (number of affected loci, associated population frequencies, or associated effect sizes and directions) [79, 84, 85]. Currently, no software suite contains more than a small number of tests and input formats vary between available software packages, which complicates applying multiple tests to the same study. In the future we expect multiple tests to be implemented in available software suites.
Numerous tests have been developed for analyzing sequencing data (Additional file 1). Running a large battery of these tests comes at the cost, however, of having to correct for multiple hypothesis testing, as well as potential confusion over inconsistent results (a gene can be highly ranked in one test and not significant in another, for instance). Regardless of the test, unless rare variants have a surprisingly large phenotypic effect on complex diseases, achieving sufficient statistical power will require large studies. DNA sequencing costs will continue to decrease, however, and adequately sized studies might soon be performed (simulations suggest that 5,000 cases and 5,000 controls would provide adequate power to detect association for rare variants with modest effect). Combining results from different studies on the same phenotype is an attractive intermediate option (as has been seen with increasingly larger GWAS meta-analyses). This will probably prove more challenging than GWAS meta-analysis, however, as differences in results from multiple sequencing centers (perhaps with different sequencing technologies or different exome target definitions, for example) can introduce significant technical artifacts. Once putative variants have been discovered, the replication strategy for exome studies will depend on the genetic architecture discovered in the analysis. Disease-associated low-frequency polymorphisms can be verified with follow-up genotyping. If the phenotype is caused by a collection of singleton variants, however, further sequencing in additional individuals will be needed and might prove expensive (especially if multiple genes are being considered or if genes are large or have many exons).
The growing number of exome sequencing studies demonstrates the power of this approach in mapping genes involved in Mendelian phenotypes. The overall success rate of this approach is uncertain, however, as publication bias makes it unclear how many studies fail to identify a causal locus by exome sequencing. Non-allelic heterogeneity, regulatory variation and structural variation underlying phenotypes all pose challenges for sequencing-based discovery of Mendelian genes. It is possible that new statistical and computational methods will increase the already impressive success rate of exome sequencing studies for Mendelian disorders.
Although we are still awaiting the completion of the first exome sequencing studies focusing on complex phenotypes, the early studies will probably be underpowered because current sequencing costs prohibit the adequately sized samples discussed above (10,000 samples). Owing to this lack of power, the first studies may not result in the discovery of numerous novel loci involved in traits of medical relevance. We believe that the enthusiasm for sequencing studies should not be diminished, however, because this technology has already shown great promise in the field of Mendelian disorders and sequencing costs will continue to decline, leading to adequately powered studies for complex traits. Technology already allows for the complete characterization of genetic diversity. The success of complex trait genetic research will now be determined by our ability to interpret the data and assemble sufficiently large well-phenotyped clinical populations.
This work was supported in part by National Institutes of Health grants R01-MH084676 and R01-GM078598. NS was supported in part by National Institutes of Health Training Grant T32-HL07604-25, Brigham and Women's Hospital, Division of Cardiovascular Medicine.