Statistical tests for differential expression in cDNA microarray experiments
© BioMed Central Ltd 2003
Published: 17 March 2003
Extracting biological information from microarray data requires appropriate statistical methods. The simplest statistical method for detecting differential expression is the t test, which can be used to compare two conditions when there is replication of samples. With more than two conditions, analysis of variance (ANOVA) can be used, and the mixed ANOVA model is a general and powerful approach for microarray experiments with multiple factors and/or several sources of variation.
Gene-expression microarrays hold tremendous promise for revealing the patterns of coordinately regulated genes. Because of the large volume and intrinsic variation of the data obtained in each microarray experiment, statistical methods have been used as a way to systematically extract biological information and to assess the associated uncertainty. Here, we review some widely used methods for testing differential expression among conditions. For these purposes, we assume that the data to be used are of good quality and have been appropriately transformed (normalized) to ensure that experimentally introduced biases have been removed [1, 2]. See Box 1 for a glossary of terms. For other aspects of microarray data analysis, please refer to recent reviews on experimental design [3, 4] and cluster analysis.
A simple microarray experiment may be carried out to detect the differences in expression between two conditions. Each condition may be represented by one or more RNA samples. Using two-color cDNA microarrays, samples can be compared directly on the same microarray or indirectly by hybridizing each sample with a common reference sample [4, 6]. The null hypothesis being tested is that there is no difference in expression between the conditions; when conditions are compared directly, this implies that the true ratio between the expression of each gene in the two samples should be one. When samples are compared indirectly, the ratios between the test sample and the reference sample should not differ between the two conditions. It is often more convenient to use logarithms of the expression ratios than the ratios themselves because effects on intensity of microarray signals tend to be multiplicative; for example, doubling the amount of RNA should double the signal over a wide range of absolute intensities. The logarithm transformation converts these multiplicative effects (ratios) into additive effects (differences), which are easier to model; the log ratio when there is no difference between conditions should thus be zero. If a single-color expression assay is used - such as the Affymetrix system - we are again considering a null hypothesis of no expression-level difference between the two conditions, and the methods described in this article can also be applied directly to this type of experiment.
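The multiplicative-to-additive argument can be sketched in a few lines; the two-channel intensities below are hypothetical:

```python
import math

# Microarray effects are multiplicative: doubling the RNA roughly
# doubles the signal. Taking logarithms converts ratios (multiplicative
# effects) into differences (additive effects).
red, green = 2000.0, 500.0            # hypothetical channel intensities
ratio = red / green                   # 4.0, a four-fold difference
log_ratio = math.log2(red) - math.log2(green)   # log(A/B) = log A - log B
no_change = math.log2(1.0)            # a ratio of 1 maps to a log ratio of 0
```

On the log2 scale a four-fold difference becomes a difference of 2, and "no change" sits at zero regardless of absolute intensity, which is what makes log ratios convenient to model.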
A distinction should be made between RNA samples obtained from independent biological sources - biological replicates - and those that represent repeated sampling of the same biological material - technical replicates. Ideally, each condition should be represented by multiple independent biological samples in order to conduct statistical tests. If only technical replicates are available, statistical testing is still possible but the scope of any conclusions drawn may be limited. If both technical and biological replicates are available, for example if the same biological samples are measured twice each using a dye-swap assay, the individual log ratios of the technical replicates can be averaged to yield a single measurement for each biological unit in the experiment. Callow et al. describe an example of a biologically replicated two-sample comparison, and our group provides an example with technical replication. More complicated settings that involve multiple layers of replication can be handled using the mixed-model analysis of variance techniques described below.
The simplest method for identifying differentially expressed genes is to evaluate the log ratio between two conditions (or the average of ratios when there are replicates) and consider all genes that differ by more than an arbitrary cut-off value to be differentially expressed [10–12]. For example, if the cut-off value chosen is a two-fold difference, genes are taken to be differentially expressed if the expression under one condition is over two-fold greater or less than that under the other condition. This test, sometimes called 'fold' change, is not a statistical test, and there is no associated value that can indicate the level of confidence in the designation of genes as differentially expressed or not differentially expressed. The fold-change method is subject to bias if the data have not been properly normalized. For example, an excess of low-intensity genes may be identified as being differentially expressed because their fold-change values have a larger variance than the fold-change values of high-intensity genes [13, 14]. Intensity-specific thresholds have been proposed as a remedy for this problem.
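A minimal sketch of the fold-change rule, assuming log2 ratios so that a cutoff of 1 corresponds to a two-fold change (the gene names and values are made up for illustration):

```python
def fold_change_calls(log2_ratios, cutoff=1.0):
    """Flag genes whose average log2 ratio exceeds the cutoff in
    magnitude. cutoff=1.0 on the log2 scale corresponds to a two-fold
    change. `log2_ratios` maps gene -> list of replicate log2 ratios."""
    calls = {}
    for gene, values in log2_ratios.items():
        avg = sum(values) / len(values)
        calls[gene] = abs(avg) >= cutoff
    return calls

# Hypothetical data: geneA is roughly two-fold up, geneB is unchanged.
calls = fold_change_calls({"geneA": [1.2, 0.9, 1.1],
                           "geneB": [0.1, -0.2, 0.05]})
```

Note that the cutoff is arbitrary and the rule attaches no confidence measure to either call, which is exactly the criticism made above.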
The t test is a simple, statistically based method for detecting differentially expressed genes (see Box 2 for details of how it is calculated). In replicated experiments, the error variance (see Box 1) can be estimated for each gene from the log ratios, and a standard t test can be conducted for each gene; the resulting t statistic can be used to determine which genes are significantly differentially expressed (see below). This gene-specific t test is not affected by heterogeneity in variance across genes because it only uses information from one gene at a time. It may, however, have low power because the sample size - the number of RNA samples measured for each condition - is small. In addition, the variances estimated from each gene are not stable: for example, if the estimated variance for one gene is small, by chance, the t value can be large even when the corresponding fold change is small. It is possible to compute a global t test, using an estimate of error variance that is pooled across all genes, if it is assumed that the variance is homogeneous between different genes [16, 17]. This is effectively a fold-change test because the global t test ranks genes in an order that is the same as fold change; that is, it does not adjust for individual gene variability. It may therefore suffer from the same biases as a fold-change test if the error variance is not truly constant for all genes.
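As a rough sketch (not the exact formulation in Box 2), the gene-specific t statistic for replicated log ratios tests whether their mean differs from zero; the replicate values below are hypothetical:

```python
import math
from statistics import mean, stdev

def gene_t(log_ratios):
    """One-sample t statistic for one gene: the mean log ratio divided
    by its estimated standard error. The variance estimate comes from
    this gene alone, so heterogeneity across genes does not bias it."""
    n = len(log_ratios)
    se = stdev(log_ratios) / math.sqrt(n)   # gene-specific error estimate
    return mean(log_ratios) / se

t = gene_t([0.8, 1.1, 0.9, 1.2])   # consistently ~2-fold up across replicates
```

With only a handful of replicates the standard error in the denominator is itself noisy, which is the instability problem described above.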
As noted above, the error variance (the square root of which gives the denominator of the t tests) is hard to estimate and subject to erratic fluctuations when sample sizes are small. More stable estimates can be obtained by combining data across all genes, but these are subject to bias when the assumption of homogeneous variance is violated. Modified versions of the t test (Box 2) find a middle ground that is both powerful and less subject to bias.
In the 'significance analysis of microarrays' (SAM) version of the t test (known as the S test), a small positive constant is added to the denominator of the gene-specific t test. With this modification, genes with small fold changes will not be selected as significant; this removes the problem of stability mentioned above. The regularized t test combines information from gene-specific and global average variance estimates by using a weighted average of the two as the denominator for a gene-specific t test. The B statistic proposed by Lonnstedt and Speed is a log posterior odds ratio of differential expression versus non-differential expression; it allows for gene-specific variances but it also combines information across many genes and thus should be more stable than the t statistic (see Box 2 for details).
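The effect of the SAM-style modification can be sketched as follows; the constant s0 = 0.5 is purely illustrative, whereas SAM estimates it from the data:

```python
import math
from statistics import mean, stdev

def gene_t(log_ratios):
    """Ordinary gene-specific t statistic (for comparison)."""
    n = len(log_ratios)
    return mean(log_ratios) / (stdev(log_ratios) / math.sqrt(n))

def s_statistic(log_ratios, s0=0.5):
    """SAM-style S statistic: a small positive constant s0 is added to
    the denominator of the gene-specific t, so a gene whose variance is
    tiny by chance no longer produces a huge statistic."""
    n = len(log_ratios)
    se = stdev(log_ratios) / math.sqrt(n)
    return mean(log_ratios) / (se + s0)

flat = [0.05, 0.06, 0.05, 0.06]    # tiny fold change, even tinier variance
t_flat = gene_t(flat)              # large (~19) purely because se is tiny
s_flat = s_statistic(flat)         # stays modest (~0.11)
```

The constant dominates the denominator when the gene-specific standard error is very small, so genes with negligible fold changes no longer top the ranking.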
The t and B tests based on log ratios can be found in the Statistics for Microarray Analysis (SMA) package; the S test is available in the SAM software package; and the regularized t test is in the Cyber T package. In addition, the Bioconductor has a collection of various analysis tools for microarray experiments. Additional modifications of the t test are discussed by Pan.
After a test statistic is computed, it is convenient to convert it to a p-value. Genes with p-values falling below a prescribed level (the 'nominal level') may be regarded as significant. Reporting p-values as a measure of evidence allows some flexibility in the interpretation of a statistical test by providing more information than a simple dichotomy of 'significant' or 'not significant' at a predefined level. Standard methods for computing p-values are by reference to a statistical distribution table or by permutation analysis. Tabulated p-values can be obtained for standard test statistics (such as the t test), but they often rely on the assumption that the errors in the data are normally distributed. Permutation analysis involves shuffling the data and does not require such assumptions. If permutation analysis is to be used, the experiment must be large enough that a sufficient number of distinct shuffles can be obtained. Ideally, the labels that identify which condition is represented by each sample are shuffled to simulate data from the null distribution. A minimum of about six replicates per condition (yielding a total of 924 distinct permutations) is recommended for a two-sample comparison. With multiple conditions, fewer replicates are required. If the experiment is too small, permutation analysis can be conducted by shuffling residual values across genes (see Box 1), under the assumption of homogeneous variance [6, 25].
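A minimal sketch of an exact permutation test for a two-sample comparison, using a hypothetical difference-in-means statistic; note the combinatorics behind the six-replicates recommendation, since C(12, 6) = 924:

```python
import math
from itertools import combinations
from statistics import mean

def permutation_p(group_a, group_b):
    """Exact permutation p-value for a difference in means: relabel the
    samples in every possible way and count how often the absolute mean
    difference is at least as extreme as the one observed."""
    pooled = group_a + group_b
    n_a, n = len(group_a), len(group_a) + len(group_b)
    observed = abs(mean(group_a) - mean(group_b))
    hits = total = 0
    for idx in combinations(range(n), n_a):
        chosen = set(idx)
        a = [pooled[i] for i in chosen]
        b = [pooled[i] for i in range(n) if i not in chosen]
        total += 1
        if abs(mean(a) - mean(b)) >= observed - 1e-12:
            hits += 1
    return hits / total

# Six replicates per condition give C(12, 6) = 924 distinct relabelings.
assert math.comb(12, 6) == 924

# Hypothetical data with a clear shift: only the two extreme
# relabelings (out of C(6, 3) = 20) are as extreme as observed.
p = permutation_p([1.0, 2.0, 3.0], [4.0, 5.0, 6.0])
```

Because no distributional table is consulted, the smallest attainable p-value is limited by the number of distinct relabelings, which is why very small experiments cannot support condition-label permutation.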
When we conduct a single hypothesis test, we may commit one of two types of errors. A type I or false-positive error occurs when we declare a gene to be differentially expressed when in fact it is not. A type II or false-negative error occurs when we fail to detect a differentially expressed gene. A statistical test is usually constructed to control the type I error probability, and we achieve a certain power (which is equal to one minus the type II error probability) that depends on the study design, sample size, and precision of the measurements. In a microarray experiment, we may conduct thousands of statistical tests, one for each gene, and a substantial number of false positives may accumulate. The following are some of the methods available to address this problem, which is called the problem of multiple testing.
One approach to multiple testing is to control the family-wise error rate (FWER), which is the probability of accumulating one or more false-positive errors over a number of statistical tests. This is achieved by increasing the stringency that we apply to each individual test. In a list of differentially expressed genes that satisfy an FWER criterion, we can have high confidence that there will be no errors in the entire list. The simplest FWER procedure is the Bonferroni correction: the nominal significance level is divided by the number of tests. The permutation-based one-step correction and the Westfall and Young step-down adjustment provide FWER control and are generally more powerful but more computationally demanding than the Bonferroni procedure. FWER criteria are very stringent, and they may substantially decrease power when the number of tests is large.
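The Bonferroni correction amounts to one line of arithmetic; the p-values below are hypothetical:

```python
def bonferroni(p_values, alpha=0.05):
    """Bonferroni control of the FWER: each of the m tests is assessed
    at the adjusted level alpha / m."""
    m = len(p_values)
    return [p <= alpha / m for p in p_values]

# With m = 3 tests the per-test threshold drops to 0.05 / 3 ~ 0.0167.
calls = bonferroni([0.00001, 0.004, 0.03])
```

With thousands of genes the per-test threshold becomes extremely small, which is why FWER control can be too stringent for microarray-scale testing.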
An alternative approach to multiple testing considers the false-discovery rate (FDR), which is the proportion of false positives among all of the genes initially identified as being differentially expressed - that is, among all the rejected null hypotheses [28, 29]. An arguably more appropriate variation, the positive false-discovery rate (pFDR), was proposed by Storey. It multiplies the FDR by a factor of π0, which is the estimated proportion of non-differentially expressed genes among all genes. Because π0 is between 0 and 1, the estimated pFDR is smaller than the FDR. The FDR is typically computed after a list of differentially expressed genes has been generated. Software for computing FDR and related quantities can be found at [32, 33]. Unlike a significance level, which is determined before looking at the data, FDR is a post-data measure of confidence. It uses information available in the data to estimate the proportion of false positive results that have occurred. In a list of differentially expressed genes that satisfies an FDR criterion, one can expect that a known proportion of these will represent false positive results. FDR criteria allow a higher rate of false positive results and thus can achieve more power than FWER procedures.
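For concreteness, here is a sketch of the Benjamini-Hochberg step-up procedure, one widely used way of controlling the FDR (the p-values are hypothetical):

```python
def bh_reject(p_values, q=0.05):
    """Benjamini-Hochberg step-up procedure: sort the m p-values, find
    the largest rank k with p_(k) <= (k / m) * q, and reject the k
    hypotheses with the smallest p-values. This controls the expected
    proportion of false positives among the rejections at level q."""
    m = len(p_values)
    order = sorted(range(m), key=lambda i: p_values[i])
    k_max = 0
    for rank, i in enumerate(order, start=1):
        if p_values[i] <= rank / m * q:
            k_max = rank
    reject = [False] * m
    for i in order[:k_max]:
        reject[i] = True
    return reject

# Thresholds for m = 5, q = 0.05 are 0.01, 0.02, 0.03, 0.04, 0.05.
calls = bh_reject([0.001, 0.008, 0.039, 0.041, 0.27])
```

Unlike the Bonferroni threshold, the BH threshold grows with the rank of the ordered p-value, so it rejects more hypotheses and buys power at the cost of tolerating a controlled fraction of false positives.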
When there are more than two conditions in an experiment, we cannot simply compute ratios; a more general concept of relative expression is needed. One approach that can be applied to cDNA microarray data from any experimental design is to use an analysis of variance (ANOVA) model (Box 3a) to obtain estimates of the relative expression (VG) for each gene in each sample [6, 34]. In the microarray ANOVA model, the expression level of a gene in a given sample is computed relative to the weighted average expression of that gene over all samples in the experiment (see Box 3a for statistical details). We note that the microarray ANOVA model is not based on ratios but is applied directly to intensity data; the difference between two relative expression values can be interpreted as the mean log ratio for comparing two samples (as log A - log B = log(A/B), where log A and log B are two relative expression values). Alternatively, if each sample is compared with a common reference sample, one can use normalized ratios directly. This is an intuitive but less efficient approach to obtaining relative expression values than using the ANOVA estimates. Direct estimates of relative expression can also be obtained from single-color expression assays [35, 36].
The set of estimated relative expression values, one for each gene in each RNA sample, is a derived data set that can be subjected to a second level of analysis. There should be one relative expression value for each gene in each independent sample. The distinction between technical replication and biological replication should be kept in mind when interpreting results from the analysis of a derived data set. If inference is being made on the basis of biological replicates and there is also technical replication in the experiment, the technical replicates should be averaged to yield a single value for each independent biological unit. The derived data can be analyzed on a gene-by-gene basis using standard ANOVA methods to test for differences among conditions. For example, our group has used a derived data set to test for expression differences between natural populations of fish.
The classical ANOVA F test is a generalization of the t test that allows for the comparison of more than two samples (Box 3). The F test is designed to detect any pattern of differential expression among several conditions by comparing the variation among replicated samples within and between conditions. As with the t test, there are several variations on the F test (Box 3b). The gene-specific F test (F1), a generalization of the gene-specific t test, is the usual F test and it is computed on a gene-by-gene basis. As with t tests, we can also assume a common error variance for all genes and thus arrive at the global variance F test (F3). A middle ground is achieved by the F2 test, analogous to the regularized t test; this uses a weighted combination of global and gene-specific variance estimates in the denominator. Nominal p-values can be obtained for the F1 test from standard tables, but the F2 and F3 statistics do not follow the tabulated F distribution and critical values should be established by permutation analysis.
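A sketch of the gene-specific F test (F1) for one gene measured under three conditions: the between-condition mean square is compared with the within-condition mean square (the relative expression values are hypothetical):

```python
from statistics import mean

def f_statistic(groups):
    """One-way ANOVA F statistic for one gene: between-condition mean
    square over within-condition mean square. `groups` is a list of
    lists, one per condition, of relative expression values."""
    k = len(groups)                            # number of conditions
    n = sum(len(g) for g in groups)            # total number of samples
    grand = mean([x for g in groups for x in g])
    ss_between = sum(len(g) * (mean(g) - grand) ** 2 for g in groups)
    ss_within = sum((x - mean(g)) ** 2 for g in groups for x in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Three conditions, three biological replicates each; the second
# condition is clearly shifted upward relative to the other two.
f = f_statistic([[0.1, 0.2, 0.15], [1.0, 1.1, 0.9], [0.0, -0.1, 0.1]])
```

With two conditions this statistic reduces to the square of the two-sample t statistic, which is the sense in which the F test generalizes the t test.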
Among these tests, the F3 test is the most powerful, but it is also subject to the same potential biases as the fold-change test. In our experience, F2 has power comparable to F3 but it has a lower FDR than either F1 or F3. It is possible to derive a version of the B statistic for the case of multiple conditions. This could provide an alternative approach to combine variance estimates across genes in the context of multiple samples. Any of these tests can be applied to a derived data set of relative expression values to make comparisons among two or more conditions.
The results of all three F statistics can be summarized simultaneously using a volcano plot, but with a slight twist when there are more than two samples. The standard deviation of the relative expression values is plotted on the x axis instead of plotting log fold change; the resulting volcano plot (Figure 1b) is similar to the right-hand half of a standard volcano plot (Figure 1a).
The process of creating a derived data set and computing the F tests described above can be integrated in one step by applying our fixed-effects ANOVA model [20, 35]; further discussion is provided by Lee et al. The fixed-effects model assumes independence among all observations and only one source of random variation. Depending on the experimental design, this source of variation could be technical, as in our study, or biological if applied to data as was done by Callow et al. Although it is applicable to many microarray experiments, the fixed-effects model does not allow for multiple sources of variation, nor does it account for correlation among the observations that arises as a consequence of different layers of variation. Test statistics from the fixed-effects model are constructed using the lowest level of variation in the experiment: if a design includes both biological and technical replication, tests are based on the technical variance component. If there are replicated spots on the microarrays, the lowest level of variance will be the within-array measurement error. This is rarely appropriate for testing, and the statistical significance of results using within-array error may be artificially inflated. To avoid this problem, replicated spots from the same array can be 'collapsed' by taking the sum or average of their raw intensities. This does not fully utilize the available information, however, and we recommend application of the mixed-effects ANOVA model, described below.
In a complex microarray experiment, the set of conditions may have some structure. For example, Jin et al. consider eight conditions in a 2 by 2 by 2 factorial design with the factors sex, age, and genotype. There is no biological replication here, but information about biological variance is available because of the factorial design. In other experiments, both biological and technical replicates are included. For example, we considered samples of five fish from each of three populations, and each fish was assayed on two microarrays with duplicated spots. In this study, the conditions of interest are the populations from which the fish were sampled; the fish are biological replicates, and there are two nested levels of technical replication, arrays and spots within arrays. To use fully the information available in experiments with multiple factors and multiple layers of sampling, we require a sophisticated statistical modeling approach.
The mixed model treats some of the factors in an experimental design as random samples from a population. In other words, we assume that if the experiment were to be repeated, the same effects would not be exactly reproduced but that similar effects would be drawn from a hypothetical population of effects. We therefore model these factors as sources of variance.
In a mixed model for two-color microarrays (Box 3c), the gene-specific array effect (AG in Box 3a) is treated as a random factor. This captures an important component of technical variation. If the same clone is printed multiple times on each array, we should include additional random factors for spot (S) and labeling (L) effects. Consider an array with duplicate spots of each clone. Four measurements are obtained for each clone, two in the red channel and two in the green channel. Measurements obtained on the same spot (one red and one green) will be correlated because they share common variation in the spot size. Measurements obtained in the same color (both red or both green) will be correlated because they share variation through a common labeling reaction. Failure to account for these correlations can result in underestimation of technical variance and inflated assessments of statistical significance.
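The consequence of a shared spot effect can be illustrated with a small simulation; the variance components below are invented for illustration, not estimated from any real experiment:

```python
import random
from statistics import mean

random.seed(1)   # deterministic illustration

# Each spot contributes a common random effect to both the red and the
# green measurement made on it; residual noise is independent per channel.
n = 2000
red, green = [], []
for _ in range(n):
    spot = random.gauss(0.0, 1.0)             # shared spot-size variation
    red.append(spot + random.gauss(0.0, 0.3))  # red-channel measurement
    green.append(spot + random.gauss(0.0, 0.3))  # green-channel measurement

# Sample correlation between the two channels across spots.
mr, mg = mean(red), mean(green)
cov = sum((r - mr) * (g - mg) for r, g in zip(red, green)) / (n - 1)
var_r = sum((r - mr) ** 2 for r in red) / (n - 1)
var_g = sum((g - mg) ** 2 for g in green) / (n - 1)
corr = cov / (var_r * var_g) ** 0.5
# Expected correlation is 1.0 / (1.0 + 0.3**2) ~ 0.92: far from zero.
```

Treating such measurements as independent replicates would understate the technical variance and overstate significance, which is the failure mode the mixed model is designed to avoid.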
In experiments with multiple factors, the VG term in the ANOVA model is expanded to have a structure that reflects the experimental design at the level of the biological replicates, that is, independent biological samples obtained from the same conditions such as two mice of the same sex and strain. This may include both fixed and random components. Biological replicates should be treated as a random factor and will be included in the error variance of any tests that make comparisons among conditions. This provides a broad-sense inference (see Box 1) that applies to the biological population from which replicate samples were obtained [3, 39].
The components of variation attributable to each random factor in a mixed model can be estimated by any of several methods, of which restricted maximum likelihood (see Box 1) is the most widely used. The presence of random effects in a model can influence the estimation of other effects, including the relative expression values; these will tend to 'shrink' toward zero slightly. This effectively reduces the bias in the extremes of estimated relative expression values.
In the fixed-effects ANOVA model, there is only one variance term and all factors in the model are tested against this variance. In mixed-model ANOVA, there are multiple levels of variance (biological, array, spot, and residual), and the question becomes which level we should use for the testing. The answer depends on what type of inference scope is of interest. If the interest is restricted to the specific materials and procedures used in the experiment, a narrow-sense inference, which applies only to the biological samples used in the experiment, can be made using technical variance. In most instances, however, we will be interested in a broader sense of inference that includes the biological population from which our material was sampled. In this case, all relevant sources of variance should be considered in the test. Constructing an appropriate test statistic using the mixed model can be tricky and falls outside the scope of the present discussion, but software tools are available that can be applied to compute appropriate F statistics, such as MAANOVA and SAS. Variations analogous to the F2 and F3 statistics are available in the MAANOVA software package.
In conclusion, fold change is the simplest method for detecting differential expression, but the arbitrary nature of the cutoff value, the lack of statistical confidence measures, and the potential for biased conclusions all detract from its appeal. The t test based on log ratios and variations thereof provide a rigorous statistical framework for comparing two conditions and require replication of samples within each condition. When there are more than two conditions to compare, a more general approach is provided by the application of ANOVA F tests. These may be computed from derived sets of estimated relative expression values or directly through the application of a fixed-effects ANOVA model. The mixed ANOVA model provides a general and powerful approach to allow full utilization of the information available in microarray experiments with multiple factors and/or a hierarchy of sources of variation. Modifications of both t tests and F tests are available to address the problems of gene-to-gene variance heterogeneity and small sample size.