Statistical methods for ranking differentially expressed genes

Abstract

In the analysis of microarray data the identification of differential expression is paramount. Here I outline a method for finding an optimal test statistic with which to rank genes with respect to differential expression. Tests of the method show that it allows generation of top gene lists that give few false positives and few false negatives. Estimation of the false-negative as well as the false-positive rate lies at the heart of the method.

Background

Microarray technology has revolutionized modern biological research by permitting the simultaneous study of genes comprising a large part of the genome. The blessings stemming from this are accompanied by the curse of the high dimensionality of the data output. The main objective of this article is to explore one method for ranking genes in order of their likelihood of being differentially expressed. Top gene lists that give few false positives and few false negatives are the output. As the interest is mainly in ranking for the purpose of generating top gene lists, issues such as the calculation of p-values and correction for multiple tests are of secondary importance.

Microarrays have an important role in finding novel drug targets; the thinking that guides the design and interpretation of such experiments has been expressed by Lonnstedt and Speed [1]: "The number of genes selected would depend on the size, aim, background and follow-up plans of the experiment." Often, interest is restricted to so-called 'druggable' target classes, thus thinning out the set of eligible genes considerably. It is generally sensible to focus attention first on druggable targets with smaller p-values (where the p-value is the probability of obtaining at least the same degree of differential expression by pure chance) before proceeding to ones with larger p-values. In general, p-values have the greatest impact on decisions regarding target selection by providing a preliminary ranking of the genes. This is not to say that multiplicity should never be taken into account, or that the method presented here replaces correction for multiplicity. On the contrary, the approach provides a basis for such calculations (see Additional data files).

The approach presented here could be applied to different types of test statistics, but one particular type of recently proposed statistic will be used. In Tusher [2] a methodology based on a modified t-statistic is described:

d = diff/(S + S0)     (1)

where diff is an effect estimate, for example a group mean difference, S is a standard error, and S0 is a regularizing constant. This formulation is quite general and includes, for example, the estimation of a contrast in an ANOVA. Setting S0 = 0 yields a t-statistic. The constant S0, called the fudge constant, is found by removing the trend in d as a function of S in moving windows across the data. The technical details are outlined in [3]. The statistic calculated in this way will be referred to as SAM. The basic idea behind d is to eliminate some false positives among genes with low values of S (see Figure 1).
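To fix ideas, the statistic (1) for a two-group comparison might be computed in R along the following lines. This is a minimal sketch assuming a pooled standard error; the function name is chosen for illustration and this is not the samroc implementation itself.

    ## Modified t-statistic d = diff/(S + S0) of equation (1) for a
    ## two-group comparison; x is the genes-by-arrays matrix, grp the
    ## vector of group labels, s0 the fudge constant (s0 = 0 gives a t-statistic).
    d.stat <- function(x, grp, s0 = 0) {
      g <- unique(grp)
      x1 <- x[, grp == g[1], drop = FALSE]
      x2 <- x[, grp == g[2], drop = FALSE]
      n1 <- ncol(x1); n2 <- ncol(x2)
      diff <- rowMeans(x1) - rowMeans(x2)            # effect estimate
      s2p <- ((n1 - 1) * apply(x1, 1, var) +
              (n2 - 1) * apply(x2, 1, var)) / (n1 + n2 - 2)
      S <- sqrt(s2p * (1/n1 + 1/n2))                 # standard error of diff
      diff / (S + s0)
    }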

Figure 1

The effect of S0. With real microarray data the absolute value of the t-statistic often shows erratic behavior for small values of the standard error S, with an increased risk of false positives. By choosing the constant S0 in equation (1) wisely one can alleviate this problem. In the right panel we see that the statistic d in equation (1) downplays the importance of some of the genes with low standard error, compared to the t-statistic (left panel). Data from Golub [16] were used, and S0 was chosen as the 5% percentile of S values (see also Discussion).

It is more relevant to optimize the statistic with respect to the false-positive and false-negative rates, and this is the basic idea behind the new approach: jointly minimize the number of genes falsely declared positive and the number falsely declared negative by optimizing over a range of values of the significance level α and the fudge constant S0. How well this is achieved can be judged by a receiver operating characteristic (ROC) curve, which displays the number of false positives against the number of false negatives, expressed as proportions of the total number of genes.

An alternative to the statistic (1) is d = diff/√(S0² + S²), or d = diff/√(wS0² + (1 - w)S²) for some weight w, which is basically the statistic proposed in Baldi and Long [4]. Its performance appears to be very similar to that of (1) (data not shown). A software implementation in R code, the function samroc within the package SAG [5, 6], is available from [7].

Results

The criterion

A comparison of methods in terms of their ROC curves is presented in Lonnstedt and Speed [1]. A method whose ROC curve lies below that of another (has a smaller ordinate for a given abscissa) is preferred (Figure 2). A method with a better ROC curve, in this sense, will produce top lists with more differentially expressed genes (DEGs) and fewer non-DEGs, and will consequently leave out fewer DEGs. Furthermore, such a method will give higher average ranks to the DEGs, if the ranking is such that a high rank means more evidence of differential expression. Superiority in terms of average ranks is a weaker assertion than superiority in terms of ROC curves (see Additional data files for a proof). If methods are to be compared with respect to their ROC curves, then the estimation procedure should find parameter estimates that optimize the ROC curve. This section suggests a goodness criterion based on the ROC curve.

Figure 2

ROC curve. This graph displays the number of false negatives (differentially expressed genes (DEGs) not included) versus the number of false positives (non-DEGs included) found on top lists of increasing sizes, expressed as proportions of the total number of genes. The distance C gives an optimal value of equation (2). A method whose ROC curve lies below that of another method is preferable, as it will give more DEGs and fewer non-DEGs on any top list of any size, as explained in the Additional data files. Hence method 1 is preferable to method 2.

The false discovery rate (FDR) may be defined as the proportion of false positives among the significant genes (see [2]). The false-positive rate (FP) may be defined as the number of false positives among the significant genes divided by the total number of genes. Similarly, we define the false-negative rate (FN) as the number of false negatives among the nonsignificant genes divided by the total number of genes, the true-positive rate (TP) as the number of true positives divided by the total number of genes, and the true-negative rate (TN) as the number of true negatives divided by the total number of genes.

In Table 1 the relations involving these quantities are displayed. For instance, the proportion of unchanged genes (non-DEGs), p0, equals the sum of the true-negative and false-positive rates, p0 = TN + FP, and the proportion of significant genes at significance level α equals the sum of the true-positive and false-positive rates, p(α) = TP + FP. It is intuitive that the criterion to be minimized should be an increasing function of FP and FN: any top list produced should have many DEGs and few non-DEGs.

Table 1 The unknown distribution of true and false positives and negatives

                      Non-DEG    DEG        Total
    Significant       FP         TP         p(α)
    Not significant   TN         FN         1 - p(α)
    Total             p0         1 - p0     1

Assume that we can, for every combination of values of the significance level α and the fudge constant S0, calculate (FP, FN). The goodness criterion is then formulated in terms of the distance of the point (FP, FN) to the origin (the point corresponding to no false positives and no false negatives; see Figure 2), which in mathematical symbols may be put as

C = √(FP² + FN²)     (2)
The optimal value of (α, S0) is the one that minimizes (2). For practical reasons it is not possible to carry out this minimization over every combination, so the suggestion is to estimate the criterion over a lattice of (α, S0) values and pick the best combination.
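In R, the lattice search could be sketched as follows; est.FP and est.FN stand for the plug-in estimates derived below and are hypothetical helper names, not part of the published samroc code.

    ## Search a lattice of (alpha, S0) values for the pair minimizing
    ## the estimated criterion of equation (2).
    samroc.grid <- function(est.FP, est.FN, alphas, s0s) {
      grid <- expand.grid(alpha = alphas, s0 = s0s)
      grid$C <- mapply(function(a, s0)
                         sqrt(est.FP(a, s0)^2 + est.FN(a, s0)^2),  # equation (2)
                       grid$alpha, grid$s0)
      grid[which.min(grid$C), ]                      # optimal (alpha, S0)
    }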

If one has an assessment of the relative importance of FP and FN, this may be reflected in a version of criterion (2) that incorporates a weight λ: Cλ = √(λ²FP² + FN²). The choice λ = (1 - p0)/p0 corresponds to another type of ROC curve, which displays the true-positive proportion TP/(1 - p0) against the false-positive proportion FP/p0 (see Additional data files). Other goodness criteria are possible, such as the sum of FP and FN or the area under the curve in Figure 2. For more details and other approaches see, for example, [8, 9].

Calculating p-values

Using the permutation method to simulate the null distribution (no change), we can obtain a p-value for a two-sided test, as detailed below. Loosely speaking, in each loop of the simulation algorithm the group labels are randomly rearranged so that random groups are formed, the test statistic is calculated for this arrangement, and its value is compared with the observed one. The extremeness of the observed test statistic is judged by counting how many times more extreme values are obtained from the null distribution.

The data matrix X has genes in rows and arrays in columns. Consider the vector of group labels fixed. The permutation method consists of repeatedly permuting the columns (equivalent to rearranging the group labels), thus obtaining a matrix X*, and calculating the test statistic for each gene and each permutation. Let d(j)*k be the value of the statistic of the jth gene in the kth permutation of columns. Then the p-value for gene i equals

p(i) = #{(j, k): |d(j)*k| ≥ |d(i)|, j = 1, ..., M, k = 1, ..., B}/(MB)     (3)
where M is the number of genes, d(i) the observed statistic for gene i, B the number of permutations and '#' denotes the cardinality of the set [2, 10, 11]. In words, this gives the relative frequency of randomly generated test statistics with an absolute value that exceeds the observed value of gene i. The formula (3) combines the permutation method in [2] and the p-value calculation in [10]. These p-values are such that a more extreme value of the test statistic will yield a lower p-value.
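A direct, if not maximally efficient, R rendering of formula (3) might read as follows; stat.fun is any gene-wise statistic such as (1), and the B permutations are sampled at random rather than enumerated.

    ## Permutation p-values as in equation (3): pool null statistics over
    ## all genes and permutations, then count exceedances in absolute value.
    perm.pvalues <- function(x, grp, stat.fun, B = 200) {
      d.obs <- abs(stat.fun(x, grp))
      d.null <- replicate(B, abs(stat.fun(x, sample(grp))))  # M x B matrix
      sapply(d.obs, function(d) mean(d.null >= d))  # #{|d*| >= |d(i)|}/(MB)
    }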

Given the significance level α (p-values less than α are considered significant), the proportion of the genes considered differentially expressed is

p(α) = #{i: p(i) < α}/M     (4)
which is the relative frequency of genes with a p-value less than α.

The current version of samroc uses the estimate

p0 = #{i: q25 < d(i) < q75}/(0.5M)     (5)

where qX is the X% percentile of the permuted values d* (compare [3]). This estimate makes use of the fact that the genes whose test statistics fall in the inter-quartile range will be predominantly unchanged ones. More material on this matter is in the Additional data files.
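In R, the estimate (5) could be sketched as below, with d.obs the observed statistics and d.null the pooled permutation values; the truncation at 1 is a natural safeguard, not something prescribed above.

    ## Estimate of p0, equation (5): proportion of observed statistics
    ## inside the quartile range of the null values d*, divided by 0.5
    ## (the null probability of that interval).
    p0.estimate <- function(d.obs, d.null) {
      q <- quantile(d.null, c(0.25, 0.75))
      min(1, mean(d.obs > q[1] & d.obs < q[2]) / 0.5)
    }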

Estimating FP

Going via results for the FDR in Storey [12] (see also [13, 14]), it is possible to derive the estimate

FP = p0α

which is the proportion of unchanged genes multiplied by the probability that such a gene produces a significant result. For a derivation see the Additional data files.

Estimating FN

From Table 1 one obtains, as outlined in the Additional data files,

FN = 1 - p0(1 - α) - p(α)     (6)

To get an intuitive feel for this equality, note that the second term is the proportion of unchanged genes multiplied by the probability of such a gene not being significant, which estimates TN, and that the third term corresponds to the positives (TP + FP). Subtracting these two categories from the whole leaves the FN.
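The two estimates then combine with (4) into a few lines of R (a sketch; p denotes the vector of permutation p-values and p0 the estimate (5)):

    ## Plug-in estimates of the false-positive and false-negative rates
    ## at significance level alpha.
    fp.fn.estimate <- function(p, p0, alpha) {
      p.alpha <- mean(p < alpha)             # equation (4)
      FP <- p0 * alpha                       # proportion unchanged x P(significant)
      FN <- 1 - p0 * (1 - alpha) - p.alpha   # equation (6)
      c(FP = FP, FN = max(0, FN))            # truncate small negative values
    }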

Estimating the criterion

The entities we need for the optimisation are given by the plug-in estimates

FP = p0α

and

FN = 1 - p0(1 - α) - p(α),

with p(α) and p0 estimated as in (4) and (5).
A scatter plot of the estimate of the criterion √(FP² + FN²) versus the true value is shown in Figure 3, and reveals a good level of accuracy.

Figure 3

Estimates of the criterion. The true and the estimated value of the goodness criterion √(FP² + FN²). Using data from the simulated cDNA distributions, the true FP and FN were calculated and then estimated. Finally, the goodness criterion was calculated and displayed in a scatter plot, showing a good correspondence between estimate and estimand.

Tests

A detailed account of the results is given in the Additional data files, where datasets, data preprocessing, analysis and results are described in enough detail to enable the results to be reproduced.

When testing methods in this field it is difficult to find suitable data for which something is known about the true status of the genes. If one chooses to simulate, the distributions may not be entirely representative of a real-life situation. If one can find non-proprietary real-life data, the knowledge of which genes are truly changed may be uncertain. Adequate evidence of good performance must therefore be gathered under different and reproducible conditions.

In the comparison, samroc, the t-test, Wilcoxon, the Bayesian method in [1], and SAM [2] competed. By the t-test I mean the unequal-variance t-test t = (mean1 - mean2)/√(s1²/n1 + s2²/n2) for sample means mean1 and mean2 and sample variances s1², s2². The Wilcoxon rank sum test is based on the sum Ws = R1 + ... + Rn1 of the ranks of the observations in one of the groups [15]. The Bayesian method calculates the posterior odds of genes being changed (available as the functions stat.bay.est.lin in the R package SAG and stat.bayesian in the R package sma [1, 5]). The method starts from the assumption of a joint a priori distribution of the effect estimate and the standard error; the former is assumed normal and the latter inverse Gamma.
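For a single gene, the two classical competitors can be written out in a few lines of base R; the sketch below mirrors the formulas just given (the function name is for illustration only).

    ## Welch's unequal-variance t-statistic and the Wilcoxon rank sum
    ## for one gene; y is the expression vector, grp the group labels.
    gene.tests <- function(y, grp) {
      g <- unique(grp)
      y1 <- y[grp == g[1]]; y2 <- y[grp == g[2]]
      t.w <- (mean(y1) - mean(y2)) /
             sqrt(var(y1)/length(y1) + var(y2)/length(y2))
      r <- rank(c(y1, y2))
      Ws <- sum(r[seq_along(y1)])            # rank sum Ws = R1 + ... + Rn1
      c(t = t.w, Ws = Ws)
    }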

Simulated cDNA data

The normal distributions modeled after the real-life cDNA data used in Baldi and Long [4] provided a testing ground for the methods (Table 2). In each simulation two groups of four arrays each were created. Three datasets with 1%, 5% and 10% DEGs were generated from the normal distributions. In all cases samroc and the t-test coincided (S0 = 0) and were the best methods in terms of ROC curves; theory predicts that the t-test is optimal in this situation (see Additional data files). When the data were exponentiated, giving rise to lognormal distributions, samroc again came out best, followed by the Bayes method, with the t-test falling behind this time. Figure 4 gives a graphical presentation of the results in terms of ROC curves.

Figure 4

Simulated normal and log-normal data. (a) Normal distribution, 1% DEGs. As expected with independent normally distributed observations, the t-test performs quite well, and is matched by samroc, which in this case equals the equal-variance t-test. The Bayes method has problems with these data, with SAM and Wilcoxon somewhere in between these extremes. (b) Lognormal distribution, 1% DEGs. samroc may have a slight advantage for shorter lists, whereas the Bayes method is better for longer lists, where the number of false positives is larger. The other three methods lag behind, but not by much. (c) Normal distribution, 5% DEGs. The t-test and samroc coincide; samroc is now equivalent to the equal-variance t-test, which behaves in the same way as the unequal-variance t-test in this case. (d) Lognormal distribution, 5% DEGs. The difference between methods is smaller when the data are exponentiated. However, samroc has the edge for a wide range of cutoffs, though the Bayes method catches up when more genes are selected. The other three methods struggle to avoid last spot. (e) Normal distribution, 10% DEGs. Again samroc and the t-test coincide, and the Bayes method has problems with normal data. SAM is also lagging behind, while the other three are very close together. (f) Lognormal distribution, 10% DEGs. samroc comes out well; Wilcoxon has the worst performance, SAM and the t-test are scarcely better, while the Bayes method is intermediate.

Table 2 The simulated normal distributions, defined by their means and standard deviations

Oligonucleotide leukemia data

The data on two types of leukemia, ALL and AML, appeared in Golub et al. [16, 17]. Samples of both types were hybridized to 38 arrays. In [17], 50 genes were identified as DEGs using statistical analysis of data from the full set of arrays. For these data it is impossible to calculate a ROC curve as the DEGs are unknown. Instead, performance was assessed in terms of the average rank of the 50 genes, after all genes were ranked by their likelihood of being DEGs according to each of the methods. Using just three arrays from each of the two groups, samroc gave the best results, followed by SAM (Table 3). This means that a necessary but not sufficient condition for the superiority of samroc in terms of ROC curves is satisfied (see Additional data files).

Table 3 Ranking based on the leukemia data [16]

Affymetrix spiking experiment data

In this test, data generated by Affymetrix in an experiment where 14 transcripts were spiked in at known quantities (Table 4) [18, 19] were used. Starting from three arrays in each of two groups of arrays that differ in 14 probe sets (genes), further datasets with 154 and 714 DEGs were generated by a bootstrap procedure, sketched below. Thus there were three datasets with roughly 0.1%, 1% and 5% DEGs. In two of these three settings samroc performed best; in one case (0.1%) SAM and the Bayes method were better. Figure 5 gives a graphical presentation of these results in terms of ROC curves.
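The bootstrap construction, described in the legend of Figure 5 as adding permuted residuals to group means, might look as follows in R; the function name is hypothetical and the details of the original script may differ.

    ## Create one additional 'changed' gene from a spiked gene: keep the
    ## two group means but attach randomly permuted residuals.
    make.deg <- function(y, grp) {
      means <- ave(y, grp)                   # group mean for every array
      res <- y - means                       # residuals
      means + sample(res)                    # permuted residuals added back
    }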

Figure 5

Spike data. (a) Spike data with 14 DEGs. Three arrays from each group in the Affymetrix spike experiment were used. The granularity of Wilcoxon shows up here as a lack of performance; at this sample size Wilcoxon is not an option. SAM starts off optimistically and then falls back as the lists become longer and the false positives more numerous, and samroc and Bayes catch up with it. In particular, the latter performs strongly. (b) Spike data with 154 DEGs. The spike data with an added 140 changed genes, obtained by adding permuted residuals to group means for the 14 spiked genes, generating three arrays per group. This makes the percentage of DEGs just above 1%. We see that samroc improves considerably compared with (a), and now shows the best performance for a wide range of top-list sizes. (c) Spike data with 714 DEGs. The spike data with an added 700 changed genes, obtained in the same way, again with three arrays per group. This makes the percentage of DEGs just above 5%. Now samroc takes the lead, and when the false positives reach roughly 10% it is passed by the Bayes method. This point corresponds to a top list of roughly 1,400 genes.

Table 4 Affymetrix spike experiment

Discussion

Whether to look at data on a log scale or not is a tricky question, and is beyond the scope of this article. However, the best performance by the tests considered was achieved when data were lognormal (see Additional data files). Normal, lognormal and real-life data were all included in order to supply a varied testing ground.

As pointed out in [20], the Bayes statistic is, for ranking purposes, equivalent to a penalized t-statistic tp = (mean1 - mean2)/√(a1 + S²), where a1 is a scale parameter related to the a priori distribution of the standard error. This means that it is, at least in form, closely related to the t-test, SAM and samroc. SAM, on the other hand, chooses as its fudge constant the percentile of the S values that minimizes the coefficient of variation of the median absolute deviation of the test statistic, computed over a number of percentiles of S [3]. It is interesting to note how differently the three related statistics (the Bayes method, SAM and samroc) turn out in practice.

One clue to why this difference occurs emerges when comparing the denominators of SAM/samroc and Bayes more closely. First square the denominators of (1) and the representation of Bayes above. We obtain (a + S)² = a² + 2aS + S² for (1) and a1 + S² for Bayes (where generally a1 ≥ a²). For large values of S the former will exceed the latter. This means that SAM/samroc will downplay the importance of the results for high-expressing genes in a way that the Bayes method does not.

But there is also another difference. The Bayes method seems to perform best when the number of false positives is allowed to grow rather large. The constant a1 corresponds to a large percentile of the distribution of the S² values (see Additional data files). Whereas the constant in SAM will generally be rather small, often the 5-10% percentile of the S values, the constant in the Bayes method will correspond to at least the 40% percentile of the S² values. It seems that using a large percentile gives good performance when the number of false positives grows large. This observation is consistent with the one made in Lonnstedt and Speed [1] that the particular version of SAM which always uses the 90% percentile will pass the Bayes method when the number of false positives is allowed to grow large. samroc will in general make use of a smaller percentile, although it shows greater spread between datasets in the values chosen, as a result of its adaptation to the features specific to the data at hand.

Samroc is the only method that makes explicit use of the number of changed genes in the ranking. If one has reason to believe, for example from studying expression (3), that there are very few DEGs (<< 1%), then samroc is probably not the first choice; SAM or the Bayes method is likely more useful in these situations. If, on the other hand, the number of DEGs is reasonably large, samroc is conjectured to take precedence over SAM, and to be more robust than the Bayes method. Furthermore, one can argue that the kind of experiments undertaken in drug discovery will more often than not end in comparisons in which the biological systems show vast differences in a large number of genes, mostly as a downstream effect of some shock to the system.

The proposed method comes out as good as or better than the original SAM statistic in most tests performed. The samroc statistic is robust and flexible in that it can address any problem that fits a linear model. The methodology adjusts the fudge constant adaptively and achieves improved performance. The algorithm gives fewer false positives and fewer false negatives in many situations, and was never much worse than the best test statistic in any circumstance. However, a typical run with real-life data takes several hours on a desktop computer. To make the methodology better suited for production use, it would be a good investment to translate part or all of the R code into C.

To improve on standard univariate tests one must make use of the fact that data are available on a large number of related tests. One way of achieving this goal has been shown in this paper. The conclusion is that it is possible and sensible to calibrate the test with respect to estimates of the false-positive and false-negative rates.

Additional data files

A zip file (Additional data file 1) containing the R package SAG for retrieval, preparation and analysis of data from the Affymetrix GeneChip and the R script (Additional data file 2) are available with the online version of this article. An appendix (Additional data file 3) giving further details of the statistical methods and the samroc algorithm is also available as a PDF file.

References

  1. Lonnstedt I, Speed TP: Replicated microarray data. Stat Sinica. 2002, 12: 31-46.

  2. Tusher VG, Tibshirani R, Chu G: Significance analysis of microarrays applied to the ionizing radiation response. Proc Natl Acad Sci USA. 2001, 98: 5116-5121. 10.1073/pnas.091062498.

  3. Chu G, Narasimhan B, Tibshirani R, Tusher VG: SAM Version 1.12: user's guide and technical document. [http://www-stat.stanford.edu/~tibs/SAM/]

  4. Baldi P, Long AD: A Bayesian framework for the analysis of microarray expression data: regularized t-test and statistical inferences of gene changes. Bioinformatics. 2001, 17: 509-519. 10.1093/bioinformatics/17.6.509.

  5. The Comprehensive R Archive Network. [http://www.cran.r-project.org]

  6. Ihaka R, Gentleman R: R: a language for data analysis and graphics. J Comput Graph Stat. 1996, 5: 299-314.

  7. Supplementary files: SAG and simulation script. [http://home.swipnet.se/pibroberg]

  8. Lovell DR, Dance CR, Niranjan M, Prager RW, Dalton KJ: Ranking the effect of different features on the classification of discrete valued data. In Engineering Applications of Neural Networks. 1996, Kingston on Thames, London, 487-494.

  9. Genovese C, Wasserman L: Operating characteristics of the FDR procedure. 2001, Technical Report. Pittsburgh, PA: Carnegie Mellon University

  10. Dudoit S, Yang YH, Speed TP, Callow MJ: Statistical methods for identifying differentially expressed genes in replicated cDNA microarray experiments. Stat Sinica. 2002, 12: 111-140.

  11. Davison AC, Hinkley DV: Bootstrap Methods and their Application. 1997, Cambridge, UK: Cambridge University Press

  12. Storey JD: A direct approach to false discovery rates. 2001, Technical Report. Stanford, CA: Stanford University

  13. Efron B, Tibshirani R, Storey JD, Tusher VG: Empirical Bayes analysis of a microarray experiment. J Am Stat Assoc. 2001, 96: 1151-1160. 10.1198/016214501753382129.

  14. Benjamini Y, Hochberg Y: Controlling the false discovery rate: a practical and powerful approach to multiple testing. J R Stat Soc B. 1995, 57: 289-300.

  15. Lehmann EL: Nonparametrics: Statistical Methods Based on Ranks. 1975, San Francisco, CA: Holden-Day

  16. Golub TR, Slonim DK, Tamayo P, Huard C, Gaasenbeek M, Mesirov JP, Coller H, Loh ML, Downing JR, Caligiuri MA, et al: Molecular classification of cancer: class discovery and class prediction by gene expression monitoring. Science. 1999, 286: 531-537. 10.1126/science.286.5439.531.

  17. Whitehead Institute Center for Genome Research: Cancer Genomics Publications Data Sets. [http://www-genome.wi.mit.edu/cgi-bin/cancer/datasets.cgi]

  18. Speed Group Microarray Page - Affymetrix data analysis. [http://www.stat.berkeley.edu/users/terry/zarray/Affy]

  19. Affymetrix. [http://www.affymetrix.com]

  20. Smyth GK, Yang YH, Speed TP: Statistical issues in cDNA microarray data analysis. [http://www.stat.berkeley.edu/users/terry/zarray/Html/papersindex.html]

  21. Bioconductor software for bioinformatics. [http://www.bioconductor.org]

  22. Callow MJ, Dudoit S, Gong EL, Speed TP, Rubin EM: Microarray expression profiling identifies genes with altered expression in hdl-deficient mice. Genome Res. 2000, 10: 2022-2029. 10.1101/gr.10.12.2022.

  23. Arfin SM, Long AD, Ito T, Tolleri L, Riehle MM, Paegle ES, Hatfield GW: Global gene expression profiling in Escherichia coli K12: the effect of integration host factor. J Biol Chem. 2000, 275: 29672-29684. 10.1074/jbc.M002247200.

  24. Lehmann EL: Testing Statistical Hypotheses. 1959, New York: Wiley

  25. Pan W: A comparative review of statistical methods for discovering differentially expressed genes in replicated microarray experiments. Bioinformatics. 2002, 18: 546-554. 10.1093/bioinformatics/18.4.546.

  26. Irizarry RA, Hobbs B, Speed TP: Exploration, normalization, and summaries of high density oligonucleotide array probe level data. [http://www.stat.berkeley.edu/users/terry/zarray/Affy/GL_Workshop/genelogic2001.html]

  27. DNA-Chip Analyzer (dChip). [http://www.dchip.org]

  28. Li C, Wong WH: Model-based analysis of oligonucleotide arrays: expression index computation and outlier detection. Proc Natl Acad Sci USA. 2001, 98: 31-36. 10.1073/pnas.011404098.

  29. Newton MA, Kendziorski CM, Richmond CS, Blattner FR, Tsui KW: On differential variability of expression ratios: improving statistical inference about gene expression changes from microarray data. J Comput Biol. 2001, 8: 37-52. 10.1089/106652701300099074.

  30. Tsodikov A, Szabo A, Jones D: Adjustments and measures of differential expression for microarray data. Bioinformatics. 2002, 18: 251-260. 10.1093/bioinformatics/18.2.251.

  31. Feller W: An Introduction to Probability Theory and Its Applications, Volume 2. 1971, New York: Wiley

  32. SMAWEHI: An R Library for statistical microarray analysis. [http://bioinf.wehi.edu.au/smawehi/index.html]

Acknowledgements

Ingrid Lonnstedt generously made available the code for a linear models version of stat.bay.est. Comments from Terry Speed are gratefully acknowledged, among them the suggestion to use spike data and ROC curves in the evaluation, as is the input from Brian Middleton and Witte Koopmann.

Author information

Corresponding author

Correspondence to Per Broberg.

Electronic supplementary material

Additional data file 1: A zip file containing the R package SAG for retrieval, preparation and analysis of data from the Affymetrix GeneChip (ZIP 73 KB)

Additional data file 2: The R script (TXT 973 bytes)

Additional data file 3: An appendix giving further details of the statistical methods and the samroc algorithm (PDF 207 KB)

About this article

Cite this article

Broberg, P. Statistical methods for ranking differentially expressed genes. Genome Biol 4, R41 (2003). https://doi.org/10.1186/gb-2003-4-6-r41
