Integrating sequence, evolution and functional genomics in regulatory genomics
© BioMed Central Ltd 2009
Published: 30 January 2009
With genome analysis expanding from the study of genes to the study of gene regulation, 'regulatory genomics' utilizes sequence information, evolution and functional genomics measurements to unravel how regulatory information is encoded in the genome.
Sequencing and functional genomics have not only led to a better understanding of genes and their expression collectively, but also to a renewal of interest in how transcriptional regulation is encoded in the 'noncoding' part of the genome. For many years, the state of the art had been to collect observed transcription factor binding sites (TFBSs) in DNA, use them to build a description of the factor's binding motif, for example in the form of a positional weight matrix [1, 2], and then scan a putative regulatory region for hits to this motif. Promoter-prediction programs were, and still are, used to linearly scan genomic sequence for putative markers of promoters, such as CpG islands and/or TATA box motifs, Inr, and so on [3, 4]. Following experimental advances, many more elements of this 'linear' code for transcription - such as histone positioning, histone modifications and DNA methylation - are currently being studied. Large-scale sequencing has enabled the comparative study of genomes, which in turn helps identify regulatory sequences. Functional genomics data, in particular gene-expression data, are showing us the consequences of transcriptional activation and have propelled the quest to find regulatory sequences shared between coexpressed groups of genes. This review will attempt to summarize the past few years' progress in integrating these approaches for the purpose of identifying regulatory sequence elements and their function.
Positional weight matrices (PWMs) have for many years been the workhorse of TFBS annotation. A set of experimentally determined binding sites for a transcription factor is aligned and the distribution of bases in each position of the binding site yields the weights in the PWM. There are two major databases for eukaryotic PWMs: TRANSFAC  and JASPAR . PWMs and their application are reviewed in . Two operations in conjunction with PWMs are important. First, coming up with the alignment for the PWM may be non-trivial. For example, in a landmark paper, Bucher  derived a PWM for the TATA box by extracting the relevant sequence alignment from a set of promoters he had collected . Algorithms like the one used are still being improved and will be briefly summarized in the section Motif discovery.
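As a sketch of this construction, the following Python fragment (an illustration, not code from TRANSFAC or JASPAR) takes a toy ungapped alignment of binding sites, counts bases per position, and converts the counts into log-odds weights against a uniform background; the pseudocount and the example sites are arbitrary choices:

```python
import math

def build_pwm(sites, pseudocount=0.5, background=None):
    """Build a log2-odds position weight matrix from aligned binding sites.

    sites: equal-length strings over the DNA alphabet.
    Returns one dict of weights per motif position.
    """
    background = background or {b: 0.25 for b in "ACGT"}
    n = len(sites)
    pwm = []
    for i in range(len(sites[0])):
        counts = {b: pseudocount for b in "ACGT"}
        for site in sites:
            counts[site[i]] += 1
        total = n + 4 * pseudocount
        pwm.append({b: math.log2((counts[b] / total) / background[b])
                    for b in "ACGT"})
    return pwm

# Toy alignment of TATA-box-like sites (illustrative, not Bucher's data)
sites = ["TATAAA", "TATATA", "TATAAA", "TATAAT"]
pwm = build_pwm(sites)
```

A position where all sites agree (such as the initial T here) receives a strongly positive weight, while unobserved bases are penalized; the pseudocount keeps the weights finite.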
Second, because the PWM is meant to be a descriptor for a TFBS, a method is required to identify the predicted binding sites. Scanning a sequence with a PWM in search of predicted binding sites seems simple. Yet because of the notoriously low information content of an individual binding motif, a search over a long sequence region will inevitably turn up large numbers of probably false-positive results. Computationally, the program MATCH  uses predefined cutoffs for the determination of binding sites, whereas patser , ProfileStats , and matrix-scan  determine the statistics of PWM matching under different background models. Weight-matrix-based, biophysical models of transcription-factor binding  constitute an alternative to the ad hoc definition of a matching score. They allow the design of learning algorithms [15–17] and can be validated against experimental measurements .
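A minimal cutoff-based scan in the style of these tools might look as follows; the matrix and cutoff here are invented for illustration, and a real application would also scan the reverse strand and calibrate the cutoff against a background model:

```python
def pwm_score(pwm, word):
    """Log-odds score of one sequence window against the matrix."""
    return sum(col[b] for col, b in zip(pwm, word))

def scan(seq, pwm, cutoff):
    """Return (position, score) for every window scoring >= cutoff."""
    w = len(pwm)
    hits = []
    for i in range(len(seq) - w + 1):
        s = pwm_score(pwm, seq[i:i + w])
        if s >= cutoff:
            hits.append((i, s))
    return hits

# Hand-made log-odds matrix for an illustrative 4-bp TATA-like motif
pwm = [{"A": -2.0, "C": -2.0, "G": -2.0, "T": 1.5},
       {"A": 1.5, "C": -2.0, "G": -2.0, "T": -1.0},
       {"A": -2.0, "C": -2.0, "G": -2.0, "T": 1.5},
       {"A": 1.5, "C": -2.0, "G": -2.0, "T": -1.0}]

hits = scan("GGTATACCTATAGG", pwm, cutoff=4.0)
# Lowering the cutoff quickly floods the output with weak matches -
# the false-positive problem discussed above.
```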
When selecting the most appropriate algorithm for predicting binding sites, one needs to distinguish between the two application scenarios of either predicting target genes for a factor with a given PWM, or predicting which factors, represented by their PWMs, might bind upstream of a gene. For the first task, a biophysical approach like TRAP , which determines the likelihood that a factor binds somewhere in a, say, promoter region, appears to be most appropriate. After proper statistical normalization, this algorithm can also determine a ranking of which factors might bind in a given sequence region . The program matrix-scan from the RSAT package  is most appropriate for determining actual binding sites for a factor, while also supporting the detection of regions enriched in putative binding sites.
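The biophysical alternative can be caricatured in a few lines: instead of counting windows above a cutoff, every window contributes a Boltzmann-weighted term, yielding a single, cutoff-free affinity for the whole region. This is only a simplified TRAP-like sketch under assumed parameters; the published TRAP model works with mismatch energies and fitted scaling constants:

```python
import math

# Hand-made log-odds matrix for an illustrative 4-bp TATA-like motif
pwm = [{"A": -2.0, "C": -2.0, "G": -2.0, "T": 1.5},
       {"A": 1.5, "C": -2.0, "G": -2.0, "T": -1.0},
       {"A": -2.0, "C": -2.0, "G": -2.0, "T": 1.5},
       {"A": 1.5, "C": -2.0, "G": -2.0, "T": -1.0}]

def region_affinity(seq, pwm, beta=1.0):
    """Sum exp(beta * score) over every window: a single, cutoff-free
    affinity for the whole region (simplified biophysical idea)."""
    w = len(pwm)
    return sum(math.exp(beta * sum(col[b] for col, b in zip(pwm, seq[i:i + w])))
               for i in range(len(seq) - w + 1))

# A region containing the motif accrues far more affinity than one without
with_site = region_affinity("GGTATAGG", pwm)
without = region_affinity("GGCCGCGG", pwm)
```

Because every window contributes, weak sites are not discarded outright, which is what makes this style of score suitable for ranking whole promoter regions rather than calling individual sites.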
The false-positive problem, however, is inherent, and remedying it requires additional information. This is usually provided through better motif descriptions, through the combination of motifs into cis-regulatory modules, and by considering the evolutionary conservation of binding sites.
In 1998, by aligning large genome fragments between mouse and human, Fickett and Wasserman observed that noncoding regions contain conserved fragments of a few kilobases that are enriched in cis-acting elements [22, 23]. Inspection of aligned regulatory regions typically yields a picture where ungapped conserved elements (describable by a PWM) occur at slightly different spacing in a set of sequences. Such an element is frequently called a cis-regulatory module (CRM). An early application of the concept can be found in Gailus-Durner et al. , where promoters were characterized by their arrangement of binding sites. Again, the problem of identifying CRMs comes in different flavors. In a single sequence the clustering of predicted TFBSs may lead to the definition of a CRM. This was shown in the work by Berman et al. , who identified distal enhancer elements in the genome of Drosophila melanogaster by locating sequence windows with a high number of TFBSs. Similar windowed counting methods have been used with good success in the analysis of Drosophila early development [26, 27]. Improved probabilistic variants search for clusters that are significant according to some statistical model [28–33]. A complementary approach uses probabilistic models that consider the likelihoods of binding sites and their distances [34–37].
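The windowed-counting heuristic reduces, in essence, to the following sketch; the window width, hit threshold, and hit positions are all invented for illustration (the original studies used biologically motivated values):

```python
def dense_windows(hit_positions, seq_len, window=700, min_hits=3):
    """Start positions of windows holding at least min_hits predicted
    TFBSs - the windowed-counting CRM heuristic in its simplest form."""
    hits = sorted(hit_positions)
    return [start for start in range(seq_len - window + 1)
            if sum(start <= p < start + window for p in hits) >= min_hits]

# Invented hit positions along a 1-kb region: three clustered sites
# near the start and one isolated site at position 900
crm_starts = dense_windows([10, 50, 100, 900], 1000, window=200, min_hits=3)
```

Overlapping qualifying windows would, in practice, be merged into one candidate CRM; the isolated site contributes no window on its own.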
The problem of finding CRMs from a single sequence is naturally extended to multiple sequences. The resulting methods work on the binding sites that lie on aligned DNA; hence they depend on the correctness of the alignments, which can be unreliable around the patchily conserved CRMs . The multiple-sequence methods can also be divided into combinatorial and probabilistic ones [37, 39–42]. The enhancer element locator (EEL) [43, 44] is a recent CRM prediction tool that takes an orthologous pair of genes from two organisms (say, human and mouse) and searches through the DNA flanking the two genes to locate conserved clusters of TFBSs. The binding sites that might belong to such clusters are computationally determined, using some given collection of PWMs. The clusters are evaluated and ranked using a scoring scheme that gives bonuses and penalties as implied by the underlying biochemically motivated model of CRM structure and evolution. EEL finds the clusters with highest total score using an algorithm that is similar to the Smith-Waterman algorithm for local alignments. The significance analysis of the EEL scores can be based on generalized Karlin-Altschul statistics for local alignment scores . Entire genomes with up to 1 Mbp flanking regions for orthologous pairs of genes can be analyzed by EEL in reasonable time. Some EEL predictions have successfully been verified to have biological function in the mouse using in situ hybridizations and transgenic reporter assays [43, 46]. The method of Blanco et al.  aligns sequences of binding sites in much the same way as EEL, but their alignment is global instead of local. The global alignment is not able to locate novel CRMs from long sequences but requires exact knowledge of the location of the regulatory element. Another recent method  combines binding-site annotation and DNA alignment into a single procedure.
Whereas phylogenetic footprinting aims at delineating conserved patterns in orthologous regulatory elements, a complementary logic holds that coexpressed groups of genes from one organism might share regulatory elements, which mediate the coexpression. In a first step, the coexpressed groups of genes need to be determined, typically from microarray-generated gene-expression data. Many methods for this purpose have been proposed, starting with the work of Eisen et al. , who used single linkage clustering. The literature on clustering is extensive, and specialized algorithms for gene-expression data have been proposed by Sharan and Shamir , and Tamayo et al. , among others. Graphical methods have also proved useful in identifying clusters and associations [52, 53].
A number of databases are available to retrieve or submit microarray data. In particular, the National Center for Biotechnology Information (NCBI) runs the Gene Expression Omnibus (GEO) , the European Bioinformatics Institute (EBI) maintains the ArrayExpress (AE) database  and Stanford University hosts the Stanford Microarray Database (SMD) . Based on AE, the ArrayExpress Warehouse  was established, allowing queries based on a range of gene annotations, including gene symbols, GO terms and disease associations. Coexpressed clusters can be defined by determining which expression profiles in an experiment are significantly correlated with a 'seed gene', which supplies a blueprint for the cluster.
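The seed-gene procedure can be sketched as follows; the correlation threshold and the toy profiles are invented, and a real analysis would additionally assess the statistical significance of each correlation:

```python
import math

def pearson(x, y):
    """Pearson correlation of two equal-length expression profiles."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def seed_cluster(profiles, seed_gene, r_min=0.9):
    """Genes whose profile correlates with the seed gene's profile
    at r >= r_min: a seed-based coexpressed cluster."""
    ref = profiles[seed_gene]
    return [g for g, prof in profiles.items()
            if g != seed_gene and pearson(prof, ref) >= r_min]

# Toy profiles over four conditions (values invented)
profiles = {"seed":  [1.0, 2.0, 3.0, 4.0],
            "geneA": [2.0, 4.0, 6.0, 8.0],   # perfectly correlated
            "geneB": [4.0, 3.0, 2.0, 1.0],   # anticorrelated
            "geneC": [1.0, 1.0, 2.0, 1.0]}   # weakly correlated
cluster = seed_cluster(profiles, "seed")
```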
The inclusion of protein links in AE opens up the possibility of identifying human and mouse genes encoding transcription factors. This allows the definition of a group of expression profiles that is not only coherently expressed but is also seeded by a transcription factor. Then, for the transcription factor, all the probe sets significantly correlated to its expression are pooled if they also exhibit significant differential expression for the same experimental factor. Probe sets present on the metazoan Affymetrix microarrays stored in the warehouse have been mapped to ENSEMBL gene entries along with their functional annotations, facilitating the linking of results from protein sequence searches to the expression data stored in the warehouse. An application of this mapping information is the identification of transcription factor expression profiles, initiating the generation of clusters of coexpressed genes.
Whereas phylogenetic footprinting spots conserved, probably orthologous, patterns, a coexpressed group lets one ask whether the genes in the group are also co-regulated. Motifs that might be responsible for this assumed co-regulation can then be searched for. In one approach, no assumptions about known transcription factors binding in the promoter regions of the genes are made - motifs are searched for de novo. An early example of such a method was used by Bucher . Today, discovered motifs can a posteriori be compared with databases of known motifs, to check whether they are likely to be bound by known transcription factors or are completely novel. The motif-discovery problem can be addressed by various algorithmic approaches, which take as input a set of sequences and return de novo predicted motifs. The approaches can roughly be subdivided into two classes: matrix-based and string-based pattern discovery.
Matrix-based pattern discovery algorithms evaluate a large number of possible alignments between fragments of the input sequences, and attempt to return the alignment (summarized as a position-specific scoring matrix) that optimizes some scoring function. Historically, the first approach was developed by Gary Stormo's group: their program Consensus relies on a greedy algorithm, which progressively incorporates sequences to build matrices with maximal information content [11, 57]. The Gibbs sampling strategy, initially developed to detect protein domains [58, 59], has been adapted to the discovery of transcription factor binding motifs in promoters of coexpressed genes [60–62]. More recent versions of Gibbs sampling [63, 64] support background models based on Markov chains, which take into account the higher-order dependencies between adjacent residues in biological sequences. The program MEME implements an expectation-maximization algorithm using multiple starting seeds in order to sample a large number of possible motifs [65, 66]. String-based pattern discovery is based on the statistical detection of over-represented oligonucleotides [67–75] or of dyads - that is, spaced pairs of oligonucleotides . Many of these algorithms have been compared and tested by Tompa et al. , and many have also been adapted to detect phylogenetic footprints in promoters of orthologous genes, as in the programs Footprinter , PhyloCon , PhyME , OrthoMEME , PhyloGibbs  and Footprint-analysis . A problem common to these approaches is the need to specify a theoretical background model, which usually does not fully capture the complexity and heterogeneity of real promoter sequences.
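As an illustration of the Gibbs sampling strategy, the following minimal sampler searches for a single ungapped motif of fixed width under a uniform background; unlike the published tools, it is a teaching sketch with no Markov-chain background model and no convergence diagnostics, and the input sequences are invented:

```python
import random

def gibbs_motif(seqs, w, iters=2000, seed=0):
    """Minimal Gibbs sampler for one ungapped motif of width w."""
    rng = random.Random(seed)
    pos = [rng.randrange(len(s) - w + 1) for s in seqs]  # random starts
    for _ in range(iters):
        k = rng.randrange(len(seqs))          # hold one sequence out
        # Profile (with pseudocounts) from the other sequences' motifs
        counts = [{b: 1.0 for b in "ACGT"} for _ in range(w)]
        for m, s in enumerate(seqs):
            if m != k:
                for j in range(w):
                    counts[j][s[pos[m] + j]] += 1
        total = len(seqs) - 1 + 4
        # Resample the held-out position proportionally to likelihood
        s = seqs[k]
        weights = []
        for i in range(len(s) - w + 1):
            p = 1.0
            for j in range(w):
                p *= (counts[j][s[i + j]] / total) / 0.25  # uniform bg
            weights.append(p)
        pos[k] = rng.choices(range(len(weights)), weights=weights)[0]
    return [s[p:p + w] for s, p in zip(seqs, pos)]

# Toy input: each sequence carries a planted TTGACA-like site
seqs = ["CGATTGACAGTACGTT",
        "GGGTACTTGACACCAT",
        "ATTGACATTCGGAACG",
        "CCGTAGCATTGACAGT"]
motifs = gibbs_motif(seqs, 6)
```

Each iteration re-estimates the profile without one sequence and then resamples that sequence's motif position, so strong shared motifs become self-reinforcing over the iterations.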
Patterns discovered among the regulatory regions of a coexpressed cluster of genes will typically be PWMs, like those derived from known TFBSs. The catch, however, is that inspection of PWMs for real binding sites has taught us not to expect well defined patterns with high information content. On the other hand, patterns as badly defined as the real ones can easily be extracted from any set of upstream regions, provided the region searched is large enough. This demonstrates an inherent limitation of the ability to identify patterns from co-regulated genes. Nevertheless, in yeast there has been considerable success with de novo identification of motifs . Pattern identification in Drosophila has profited greatly from the many sequences that are now available in conjunction with fairly well defined enhancer regions made up of clearly discernible regulatory elements. Vertebrate regulatory sequences, however, seem to be much harder to identify. Difficulties stem from the difficulty of narrowing down the sequence regions in which to look for patterns, and probably from the patchy nature of vertebrate CRMs.
There are several ways of narrowing down the sequence regions to be searched for regulatory patterns. First, systematic identification of complete cDNAs together with new technologies such as cap-analysis gene expression (CAGE) tags have led to highly accurate identification of human and mouse transcription start sites . Therefore, focusing on a promoter sequence is less guesswork today than it was a few years ago. One price to be paid, though, is the increased complexity of promoter definition resulting from the insight that alternative promoters for a gene are more the rule than the exception. It thus seems appropriate to study several promoters per gene. The notion of an enhancer used to be biologically defined, but with more and more complete genomic sequences available, enhancers tend to be identified with highly conserved noncoding regions. Systematic mapping of DNase I hypersensitive sites is also contributing to pinpointing enhancer regions. However, with the identification of transcriptional start sites that are far upstream from the translational start, it is becoming increasingly difficult to distinguish clearly between an enhancer and a promoter.
Even when one knows where to look for regulatory modules, identification of a CRM that might be responsible for co-regulation of a group of genes is hard, and methods development is a very active field of research. In general, available methods put the emphasis either on de novo pattern discovery or on a dictionary-based approach. A dictionary of patterns may contain PWMs from the existing databases, patterns identified through systematic phylogenetic footprinting [86, 87], or the output from de novo pattern identification. The dictionary serves to search for associations between sequence motifs and gene clusters. Often, this association is formalized as a statistical over-representation: that is, that the genes in the clusters contain a certain motif more often than expected.
The prototypic approach to over-representation is the use of the hypergeometric distribution to quantify the probability that, within a large set, two subsets have an overlap exceeding a certain size. Hughes et al.  apply this to the set of predicted target genes containing a certain motif and a cluster of genes. Clearly, if the overlap between the two sets is large, this hints at a biological role for the motif. Dieterich et al.  analyze human cell-cycle data and quantify the occurrence of motifs upstream of genes that peak in particular phases of the cell cycle. Many more procedures try to solve the problem of finding novel CRMs similar to a dictionary of CRMs [89–92]. Good results have been achieved in flies, and to some extent in mammals [23, 27, 90, 93, 94]. Bussemaker  has pioneered linear models to explain expression data in terms of the occurrence of motifs in the upstream regions of the genes. A recent paper by Bulyk and co-workers  relates over-represented CRMs to their target genes in human myogenic differentiation. Other algorithms identify common CRMs from a set of co-regulated genes without assuming a given dictionary [33, 97–101].
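The hypergeometric test referred to above is straightforward to state in code; the gene counts in the example are invented:

```python
from math import comb

def hypergeom_pvalue(N, K, n, k):
    """P(overlap >= k) when a cluster of n genes is drawn from N genes,
    K of which carry the motif (one-sided over-representation test)."""
    return sum(comb(K, i) * comb(N - K, n - i)
               for i in range(k, min(K, n) + 1)) / comb(N, n)

# Hypothetical numbers: 6000 genes, 300 predicted targets of a motif,
# a coexpressed cluster of 50 genes, 10 of which are predicted targets
p = hypergeom_pvalue(6000, 300, 50, 10)
```

With an expected overlap of only 2.5 genes under the null, an observed overlap of 10 yields a small p-value, hinting at a biological role for the motif in that cluster.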
Some methods find CRMs by distinguishing the regulatory DNA regions from neutrally evolved sequences and from sequences conserved for a reason other than transcriptional regulation [102–105]. These methods are universal in the sense that they do not need prior transcription factor binding information. The output provides the putative regulatory sequences but gives no clue of the transcription factors binding them. The tissue specificity of the regulators can be tested in the wet lab or predicted using other computational methods [106, 107].
The availability of gene-expression data in conjunction with chromatin immunoprecipitation and DNA microarray (ChIP-chip) and sequence data has led to many attempts to construct entire gene regulatory networks, rather than predicting particular regulatory connections. These efforts are known as 'reverse engineering' of networks, but are beyond the scope of this review.
What are the eventual chances of producing systematic genomic annotation of regulatory elements in the human genome? Impressive progress is clearly being made. The ORegAnno database  collects binding-site information for many organisms and the Encode project  has provided a plethora of regulatory information, such as transcription start sites, DNase I hypersensitive sites, histone-modification data and more, much of which is available in the Ensembl Regulatory Build . The extension of Encode to the entire human genome and to model organisms will provide significantly more of this kind of information. However, the obvious discrepancy between the large number of transcription factors and the comparatively small number of PWMs in the databases makes it clear that the best we can hope for at the moment is an annotation with regulatory elements. The actual binding factor may remain unknown, although the sequence element can be used as a predictor for expression in a certain tissue or under certain conditions. Extensive transcription factor binding experiments such as ChIP-chip, ChIP followed by DNA sequencing (ChIP-seq), protein-binding arrays, and DNA adenine methyltransferase tagging of TFBSs (DamID) will hopefully help in establishing better links between experimental data and computationally derived data. Likewise, the predicted CRMs constitute a huge resource of hypotheses waiting to be tested experimentally.
Support from EU FP6 NoE BioSapiens (contract number LSHG-CT-2003-503265) is gratefully acknowledged.