Open Access

How many replicates of arrays are required to detect gene expression changes in microarray experiments? A mixture model approach

Genome Biology 2002, 3:research0022.1

DOI: 10.1186/gb-2002-3-5-research0022

Received: 27 December 2001

Accepted: 11 March 2002

Published: 22 April 2002



It has been recognized that replicates of arrays (or spots) may be necessary for reliably detecting differentially expressed genes in microarray experiments. However, the often-asked question of how many replicates are required has barely been addressed in the literature. In general, the answer depends on several factors: a given magnitude of expression change, a desired statistical power (that is, probability) to detect it, a specified Type I error rate, and the statistical method being used to detect the change. Here, we discuss how to calculate the number of replicates in the context of applying a nonparametric statistical method, the normal mixture model approach, to detect changes in gene expression.


The methodology is applied to a data set containing expression levels of 1,176 genes in rats with and without pneumococcal middle-ear infection. We illustrate how to calculate the power functions for 2, 4, 6 and 8 replicates.


The proposed method is potentially useful in designing microarray experiments to discover differentially expressed genes. The same idea can be applied to other statistical methods.


Microarrays are used to measure the (relative) expression levels of thousands of genes (or expressed sequence tags). A comparison of gene expression in cells or tissues from two conditions may provide useful information on important biological processes or functions [1,2]. The challenge now is how to detect those genuine changes from noisy data. It is now known that simply using fold changes, as in the earlier days, is unreliable and inefficient [3,4]. More sophisticated statistical methods are called for. Many proposals have appeared in the literature [3,4,5,6,7,8,9,10]. In particular, it has been noticed that it may be necessary to design an experiment that uses multiple arrays (or multiple spots on each array) containing multiple measurements for each gene under each condition. One reason is that because of a high noise-to-signal ratio, a single array may not provide enough information that can be reliably extracted [11]. More important, multiple measurements from each gene make it possible to assess the potentially different variability of genes. The problem then seems to fall within the traditional two-sample comparison in statistics. Two of the best known two-sample statistical tests are the two-sample t-test and the Wilcoxon test (or equivalently, Mann-Whitney test). The t-test is parametric and is based on the assumption that the gene-expression levels have normal distributions. In contrast, the Wilcoxon test is nonparametric and is based on the ranks of observed gene-expression levels. Although the t-test is robust to departures from normality and the Wilcoxon test does not depend on the normality assumption, the problem is that under non-normal situations the t-test may be too conservative, and hence, as with the Wilcoxon test, may have too low power, especially when the sample size is small, which is the case for most microarray experiments. These points have been verified in two case studies using real data [8,12]. 
In a class of nonparametric approaches [5,9,10], a version of the two-sample t-statistic is used, but its null distribution is estimated nonparametrically rather than assumed to be a t-distribution. In addition, some earlier studies have suggested that the variability of gene expression may be related to the mean expression [3,4,6]. This implies that the t-statistic being used should allow unequal variances for the two samples.

An important and natural question often asked by biologists is how many replicates are required. For microarray experiments, unlike many other experimental contexts, this issue has rarely been discussed in the literature. To our knowledge, the only exception is the work by Black and Doerge [13], which, however, covers the situation where parametric statistical methods are applied to detect expression changes. In this paper, we discuss the problem when a nonparametric method, the normal mixture model approach [10], is used to detect differential expression; to facilitate sample size calculations, the formulation is slightly changed from its original form in [10]. Nonparametric methods of microarray data analysis were pioneered by Efron, Tibshirani and co-workers [5,9]. They take advantage of the presence of replicates and thus can impose much weaker modeling assumptions. For instance, the parametric methods of Black and Doerge [13] depend on the assumption of a log-normal or gamma distribution for gene-expression levels, whereas the mixture model approach makes no such distributional assumption and directly estimates distributions related to random errors. Note that modeling the distribution of random errors has advantages over directly modeling expression levels, and is a common practice in applied statistics. For example, gene-expression levels may be correlated (for example, as a result of coexpression of some genes), whereas random errors can more reasonably be assumed to be independent. This is similar to modeling longitudinal data using a linear mixed-effects model [14]: the responses from each subject (corresponding to a group of coregulated genes here) are in general correlated, but the measurement errors from the same subject can be considered independent after incorporating a random-subject effect in the model. Note that the random effect cancels out of the t-statistic for each gene.
Our proposal here also shows an attractive feature of the mixture model approach, as compared to the other two nonparametric approaches [5,9], because it is still unclear how the sample size/power calculation can be done in the other two approaches.

The problem of calculating the number of replicates required in a microarray experiment is similar to that of sample size/power calculations in clinical trials and other experimental designs; the (to-be-determined) sample size in microarray experiments refers to the number of replicates, whereas the number of genes is not an issue here. As usual, we assume that the replicates are (approximately) independent of each other, whether they are drawn from the same individual or from multiple individuals. In general, the required sample size depends on several factors: the true magnitude of the change of gene expression (say, d), the desired statistical power (that is, probability) (β) to detect the change, and the specified Type I error rate (α). The problem of how to calculate the number of replicates for any given triplet (d, β, α) is equivalent to that of how the power depends on the pair (d, α) and the number of replicates, which is what we consider in this paper.

The proposed method is not restricted to any specific microarray technology. From now on, the expression level can refer to a summary measure of relative red-to-green-channel intensities in a fluorescence-labeled cDNA array, a radioactive intensity of a radiolabeled cDNA array (as used in the example later), or a summary difference of the perfect match (PM) and mismatch (MM) scores from an oligonucleotide array. The gene-expression levels may have been suitably preprocessed, including dimension reduction, data normalization and data transformation [5,15,16,17,18].

Results and discussion

A statistical model

We consider a generic situation in which, for each gene i, i = 1, 2,..., N, we have (relative) expression levels X1i,..., X mi from m microarrays under condition 1, and Y1i,..., Y mi from m arrays under condition 2. We need to assume that m is an even integer. A general statistical model is assumed for the gene expression data:

X ji = μ(1),i + ε ji ,    Y li = μ(2),i + e li ,

where μ(1),i and μ(2),i are the mean expression levels for gene i under the two conditions respectively, and ε ji and e li are independent random errors with mean 0 and variances σ²(1),i and σ²(2),i,

for any j = 1,..., m, l = 1,..., m and i = 1,..., N. It is assumed that the standardized random errors ε ji/σ(1),i and e li/σ(2),i are randomly taken respectively from one of two (not necessarily equal) distributions that are symmetric about their mean 0. Note that the above assumption is on the distributions of the random errors, not on those of the gene-expression levels (that is, X ji and Y li ); it is often reasonable, and similar assumptions are common in other statistical applications. In addition, we do not assume that the expression levels of all the genes have an equal variance, because some previous studies [3,4,6] have found that the variance σ²(c),i (for c = 1,2) of gene-expression levels may depend on the mean expression μ(c),i. Also, we do not even need to assume that σ²(1),i = σ²(2),i unless μ(1),i = μ(2),i.

A goal is to detect all genes with μ(1),i ≠ μ(2),i. This can be accomplished through statistical hypothesis testing.

A test statistic

To test the null hypothesis H0: μ(1),i = μ(2),i, we use a t-type test statistic or score

Note that the mean and variance of Z i are

whereas the mean E(Z i ) = 0 under H0. Hence, it can be seen that a large absolute value of Z i , |Z i |, gives evidence against H0. As the number of arrays (that is, m) increases, the variance of the test statistic Z i decreases. Hence, it is possible to reject H0 (that is, detect differential expression for gene i) with any E(Z i ) ≠ 0 if m is large enough. In other words, if the Type I error rate and other parameters are fixed, then the statistical power of the test will increase as m increases. This is the key point that motivates the discussion on sample size calculations.
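
Because the display formula for Z i did not survive extraction, the following is only a sketch of the usual unequal-variance t-type score that the surrounding text describes; the exact form used in the paper (for example, with smoothed SDs substituted for raw sample SDs, as discussed later) may differ in detail.

```python
import numpy as np

def z_score(x, y):
    """t-type score for one gene: difference of sample means scaled by the
    unequal-variance standard error. x and y hold the m replicate
    expression levels under conditions 1 and 2 respectively."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    se = np.sqrt(x.var(ddof=1) / len(x) + y.var(ddof=1) / len(y))
    return (x.mean() - y.mean()) / se
```

With more replicates the standard error shrinks, so the same mean difference yields a larger |Z i |, which is the monotone power gain described above.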

To determine the cut-off point for |Z i | to reject H0, we need to know or estimate the distribution of Z i under H0, the null distribution f0. In a parametric approach, based on some full distributional assumptions for X ji and Y ji , one may derive the null distribution f0, such as in a two-sample t-test. However, the validity of such a parametric method critically depends on the correctness of assumed distributions, which of course is not guaranteed. Here, we consider a nonparametric approach: a finite normal mixture model is used to estimate f0 nonparametrically.

Estimating the null distribution

There may be various ways to estimate the null distribution f0. For instance, using expression levels of some housekeeping genes that are known to have non-differential expression, one can construct their Z i scores and then estimate f0 using the obtained Z i scores. In practice, however, there may be only a small number of or no housekeeping genes in a given experiment. Here, following the basic idea in a class of nonparametric methods [5,9,10], we construct a null score z i for each gene and then use these null scores to estimate f0 nonparametrically. The null score is constructed from the same observed gene expression data as used in Z i :

Under the assumption that ε ji and e li have symmetric distributions, ε ji and -ε ji have the same distribution, and e li and -e li have the same distribution. Thus, by comparing the form of z i with that of Z i , we know that the distribution of z i is exactly f0, the null distribution of Z i (under H0). Note that under H0, μ(1),i = μ(2),i, and hence σ(1),i = σ(2),i (since we assume that σ(c),i only depends on μ(c),i); then

Thus z i and Z i have the same distribution f0 under H0. We use all z i values across all genes to estimate f0.
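
The display for the null score z i (formula (2)) was also lost in extraction. The sketch below shows one construction consistent with the text: flip the signs of half of the replicates within each condition (this is why m must be even), so that the means μ(1),i and μ(2),i cancel and, by the symmetry assumption, the resulting score has the null distribution f0. The exact formula in the paper may differ in detail.

```python
import numpy as np

def null_score(x, y):
    """Null score for one gene: difference the first and second halves of
    the replicates within each condition, so the condition means cancel and
    only (sign-flipped) random errors remain; scale by the same standard
    error as the Z score. Assumes an even number m of replicates."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    m, h = len(x), len(x) // 2
    num = (x[:h].sum() - x[h:].sum()) / m + (y[:h].sum() - y[h:].sum()) / m
    se = np.sqrt(x.var(ddof=1) / m + y.var(ddof=1) / len(y))
    return num / se
```

Shifting either condition by a constant leaves this score unchanged, which is exactly the cancellation of μ(1),i and μ(2),i that makes its distribution free of the true mean change.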

In practice, σ(c),i (for c = 1, 2) are unknown and can be estimated using the sample standard deviations (SDs) s(c),i. Although the sample SD s(c),i is asymptotically unbiased, when m and n are small s(c),i may not be stable, and some modifications may be necessary. In any case, substituting σ(c),i by any suitable estimate, we can calculate the z i and Z i scores, on the basis of which we can estimate f0 and f respectively. By comparing f0 and f, we can gain insight into genes with altered expression (that is, μ(1),i ≠ μ(2),i).

We assume that all the z i values for i = 1,..., N are a random sample from f0; thus we can use the observed z i values to estimate f0. Pan et al. [10] proposed estimating f0 using a finite normal mixture model [19]. Specifically, it is assumed that

where φ(z; a r , V r ) denotes the density function of a normal distribution N(a r , V r ) with mean a r and variance V r , and the π r values are mixing proportions. Ωg0 represents all the unknown parameters {(π r , a r , V r ): r = 1,..., g0} in a g0-component mixture model. A normal mixture is essentially nonparametric, flexible, easy to use, and has stable tail probabilities.

A mixture model can be fitted by maximum likelihood using the expectation-maximization (EM) algorithm [19,20,21]. The number of components can be selected adaptively using the Akaike Information Criterion (AIC) [22] or the Bayesian Information Criterion (BIC) [23]: one first fits a series of models with various values of g0, then picks the g0 corresponding to the first local minimum of the AIC or BIC [24]. Some empirical studies seem to favor the use of the BIC [24].
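
A minimal sketch of this fitting step, with a hand-rolled one-dimensional EM standing in for the authors' S-Plus implementation: fit mixtures with increasing g0, compute the BIC from the log-likelihood, and keep the g0 at the first local minimum.

```python
import numpy as np

def em_normal_mixture(z, g, n_iter=200, seed=0):
    """Fit a g-component univariate normal mixture by EM.
    Returns (mixing proportions, means, variances, log-likelihood)."""
    rng = np.random.default_rng(seed)
    z = np.asarray(z, float)
    pi = np.full(g, 1.0 / g)
    mu = rng.choice(z, size=g, replace=False)  # initialize at data points
    var = np.full(g, z.var())

    def densities():
        return pi * np.exp(-(z[:, None] - mu) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)

    for _ in range(n_iter):
        dens = densities()
        resp = dens / dens.sum(axis=1, keepdims=True)      # E-step
        nk = resp.sum(axis=0)                              # M-step
        pi, mu = nk / len(z), resp.T @ z / nk
        # variance floor avoids degenerate components
        var = np.maximum((resp * (z[:, None] - mu) ** 2).sum(axis=0) / nk,
                         1e-6 * z.var())
    return pi, mu, var, np.log(densities().sum(axis=1)).sum()

def bic(loglik, g, n):
    """BIC = -2 log L + p log n, with p = 3g - 1 free parameters
    (g means, g variances, g - 1 mixing proportions)."""
    return -2 * loglik + (3 * g - 1) * np.log(n)
```

One would fit g0 = 1, 2, 3,... and stop at the first g0 whose BIC exceeds that of the previous fit; the analogous AIC uses penalty 2p instead of p log n.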

Determining the cut-off point

Once we obtain an estimate of the null distribution f0, we can determine the cut-off point of the rejection region for testing H0. In general, as for a two-sample test, the rejection region is selected in the tails of f0 because, under the null hypothesis, Z i should be close to the center of f0, whereas if there is differential expression for gene i, Z i is likely to fall in one of the two tails of f0. The specific choice may depend on the goal of the analysis. For example, if we are only interested in detecting upregulated genes, we can choose the rejection region in the right tail of f0. Our proposed method works for any specified way of determining the rejection region. As f0 should be symmetric about its mean 0, and often we are interested in both up- and downregulated genes, we propose to take the rejection region in the two tails of f0, {z : f0(z) < C}, where the constant C > 0 is the cut-off point and depends on the specified (gene-specific) Type I error rate α. As usual, C > 0 is chosen such that the rejection rate under H0 is exactly α:

where Φ(·; a, V) is the cumulative distribution function corresponding to φ(·; a, V). Using a numerical algorithm, such as the bisection method [25], we can solve the above equation to obtain C for any given α.
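
The cut-off computation can be sketched as follows. Since f0 is symmetric about 0, the low-density region {z : f0(z) < C} is assumed here to take the equivalent two-tailed form |z| > t, and bisection solves for the t whose rejection rate under the mixture equals α (the mixture-of-CDFs expression mirrors the structure of formula (3)).

```python
from math import erf, sqrt

def Phi(z, a, V):
    """Cumulative distribution function of N(a, V)."""
    return 0.5 * (1.0 + erf((z - a) / sqrt(2.0 * V)))

def cutoff(pi, a, V, alpha, lo=0.0, hi=50.0, tol=1e-10):
    """Bisection for the two-tailed cut-off t whose rejection rate under the
    mixture null f0 = sum_r pi_r N(a_r, V_r) equals alpha:
        sum_r pi_r [Phi(-t; a_r, V_r) + 1 - Phi(t; a_r, V_r)] = alpha.
    The rate is decreasing in t, so bisection on [lo, hi] converges."""
    def rate(t):
        return sum(p * (Phi(-t, m, v) + 1.0 - Phi(t, m, v))
                   for p, m, v in zip(pi, a, V))
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if rate(mid) > alpha else (lo, mid)
    return 0.5 * (lo + hi)
```

For a single standard normal component and α = 0.05, this recovers the familiar two-sided cut-off 1.96.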

For microarray data, because we are testing H0 for each gene, the multiple testing problem arises and some control over it is necessary. Usually we can use Bonferroni's method. For instance, if we want to maintain the genome-wide Type I error rate at the usual 5% level, then the Bonferroni-adjusted gene-specific (that is, test-specific) Type I error rate is α = 0.05/N, where N is the total number of genes to be tested.

Once C is determined, we can calculate the power as a function of d, the magnitude of the expression change targeted to be detected. Note that

is the difference of the coefficients of variation under the two conditions. If σ(1),i = σ(2),i, d can be interpreted as the change of the mean expression levels from condition 1 to condition 2. Otherwise, it can be regarded as the difference of the (variation-)standardized mean expression levels. Specifically, we have the power function β(d, α) given in formula (4). Unsurprisingly, β(d, α) increases as |d| increases, and having more replicates reduces the variability of f0, leading to a larger β(d, α) for any given d.
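
The power computation can then be sketched as below, assuming (as the structure of formula (4) suggests) that under an alternative with standardized change d the distribution of the score is the null mixture shifted by d; t is the two-tailed cut-off obtained from formula (3).

```python
from math import erf, sqrt

def Phi(z, a, V):
    """Cumulative distribution function of N(a, V)."""
    return 0.5 * (1.0 + erf((z - a) / sqrt(2.0 * V)))

def power(d, t, pi, a, V):
    """beta(d, alpha): probability that |Z| exceeds the cut-off t when the
    null mixture sum_r pi_r N(a_r, V_r) is shifted by d."""
    return sum(p * (Phi(-t, m + d, v) + 1.0 - Phi(t, m + d, v))
               for p, m, v in zip(pi, a, V))
```

At d = 0 this reduces to the Type I error rate α, and it increases toward 1 as |d| grows, matching the monotonicity noted above.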

Calculation of replicate numbers

Now we describe how to calculate replicate numbers based on some pilot data taken from earlier studies. We use zm,i to denote explicitly the z i scores in (2) based on m replicates. From the pilot data we can estimate the density function f0,m(z; Ωg0) of the zm,i values as a normal mixture

From now on, we treat f0,m as known in Equation (5).

With the estimated f0,m, we want to estimate the density function f0,mk for zmk,i, the z i scores based on mk replicates (with k > 1). If we have an estimate of f0,mk, then we can obtain the corresponding power function β(d, α) for mk replicates in the same way as described earlier for m replicates. Of course, our pilot data are drawn from only m arrays under each of the two experimental conditions, and thus we do not observe any zmk,i based on mk arrays. However, we show next that it is possible to generate zmk,i values from zm,i values. Note that we can draw random realizations of zm,i from the estimated f0,m (see Pan et al. [10] or the example below). Suppose the zm,i(j) values (for j = 1, 2,..., k) are k independent realizations of zm,i; then it is easy to show that

have the distribution f0,mk. Thus, the density function for zmk,i values is

For example, if we triple the number of replicates, the resulting density function is

The number of components of f0,mk may be too large. For example, if the number of components is g0 = 3 for m = n = 2, the corresponding numbers of components for m = n = 4, m = n = 6 and m = n = 8 are, respectively, g0^2 = 9, g0^3 = 27 and g0^4 = 81. In fact, some of these components may be very similar or play a negligible role, hence the form of f0,mk may be simplified. In the extreme situation, as mk → ∞, by the Central Limit Theorem the mixture model will reduce to a single-component normal distribution. Hence, we propose a simulation-based method to select a more parsimonious model for f0,mk.

On the basis of the mixture model f0,m in Equation (5), we can generate a random sample of zm,i(j) values [10], from which we can calculate zmk,i values using Equation (6). Using the zmk,i values we can fit a normal mixture model for f0,mk. As we shall show later, such a fitted mixture model often contains a smaller number of components than the g0^k dictated by Equation (7), leading to a simplified form of f0,mk.
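
This simulation step can be sketched as follows, under the reading of Equations (6)-(7) that zmk,i is the average of k independent draws from f0,m (consistent with the variances of the fitted models in the example below shrinking roughly as 1/k); the mixture parameters would be those fitted to the pilot data.

```python
import numpy as np

def sample_mixture(pi, a, V, n, rng):
    """Draw n values from the normal mixture sum_r pi_r N(a_r, V_r)."""
    comp = rng.choice(len(pi), size=n, p=pi)
    return rng.normal(np.asarray(a)[comp], np.sqrt(np.asarray(V)[comp]))

def simulate_zmk(pi, a, V, n, k, rng):
    """Simulate n values of z_{mk,i} by averaging k independent draws of
    z_{m,i} from the fitted f0,m; refitting a normal mixture to these
    values gives a parsimonious estimate of f0,mk."""
    return sample_mixture(pi, a, V, n * k, rng).reshape(n, k).mean(axis=1)
```

For the example's fitted f0,2 (mixing proportions 0.76/0.24, variances 1.3117 and 2.6970) and k = 2, the simulated z4,i values have variance close to that of the paper's fitted single-component N(-0.0494, 0.8226).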

Summary of the proposed method

In summary, our proposed method of calculating the required replicate number works in the following steps.

Step 1. Suppose that we have pilot gene expression data X ji and Y li from m arrays under each condition. Use formula (2) to calculate the scores zm,i.

Step 2. Use the zm,i values and the normal mixture model (5) to estimate f0,m.

Step 3. For a specified Type I error rate α, determine the cut-off point C for the rejection region using formula (3), in which f0 is replaced with the estimated f0,m.

Step 4. For any specified d, calculate the power function β(d, α) using formula (4), in which f0 is replaced with the estimated f0,m.

Step 5. For any given k > 1, use formula (7) or (6) to estimate f0,mk.

Step 6. For a specified Type I error rate α, determine the cut-off point C for the rejection region using formula (3), in which f0 is replaced with the estimated f0,mk.

Step 7. For any specified d, calculate the power function β(d, α) using formula (4), in which f0 is replaced with the estimated f0,mk.

Step 8. Repeat Steps 5 to 7 until all k > 1 of interest have been tried.

After the power functions for many possible mk replicates have been obtained, we can determine an appropriate number of replicates by considering all the factors involved: the desired power and Type I error rate, the targeted expression changes, and other experimental constraints.

An example

To understand the pathogenesis of otitis media, a study was conducted to identify genes involved in response to pneumococcal middle-ear infection and to study their roles in otitis media. Radioactively labeled DNA microarrays were applied to the mRNA analysis of 1,176 genes in middle-ear mucosa of rats with and without subacute pneumococcal middle-ear infection [26]. The data are available for the control group and for the pneumococcal middle-ear infection group. A more detailed description of how the data were collected and their public availability was provided in Pan et al. [26]. For the purpose of sample size calculations and to mimic many practical situations with only a small number of replicates, we only use m = n = 2 arrays from each group. We first take a natural logarithm transformation for all the observed gene-expression levels (that is, radioactive intensities) so that the resulting distributions are less skewed (which will reduce the number of components of a fitted mixture model). Then, for each microarray, we standardize the transformed gene-expression levels by subtracting their median.

Because of the small m = 2, the sample SDs may not be stable. One remedy is to add a small constant to the sample SD, as suggested by Efron et al. [5]. Here we follow the idea of Lin et al. [27] and use a loess smoother [28] to nonparametrically model the sample SDs in terms of the mean expression levels (Figure 1). We then plug the smoothed SDs into the calculation of the z2,i values. Note that using an alternative SD estimate or modification in calculating the z2,i values would not change the basic idea or the following steps in the sample size calculations.
Figure 1

Sample standard deviations of expression levels and their loess smoothers as a function of the average expression levels for the two conditions respectively.
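
A loess fit is what the authors actually use; as a minimal stand-in for that smoothing step, the sketch below replaces each gene's sample SD by a nearest-neighbor running mean over mean expression. The function name and the window fraction frac (playing the role of the loess span) are illustrative, not from the paper.

```python
import numpy as np

def smooth_sd(means, sds, frac=0.3):
    """For each gene, replace its sample SD by the average SD of the genes
    whose mean expression is closest in rank, with a window covering
    roughly a fraction frac of all genes -- a crude substitute for loess."""
    means, sds = np.asarray(means, float), np.asarray(sds, float)
    order = np.argsort(means)
    half = max(1, int(frac * len(means)) // 2)
    out = np.empty_like(sds)
    for rank, idx in enumerate(order):
        lo, hi = max(0, rank - half), min(len(order), rank + half + 1)
        out[idx] = sds[order[lo:hi]].mean()
    return out
```

The smoothed SD for each gene then replaces the raw s(c),i in the score calculations, stabilizing the denominator when m is as small as 2.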

We fitted three mixture models for f0,2 with g0 ranging from 1 to 3. Table 1 summarizes the model-fitting results. g0 = 1 was selected as both AIC and BIC achieve their minima there. So the fitted f0 is a normal distribution, N(-0.0013, 0.1278). However, for the purposes of general illustration, we choose g0 = 2 as the fitted model:
Table 1

AIC and BIC for fitted mixture models with various numbers of components g0, for two, four, six and eight replicates
f0,2(z) = 0.76 φ(z; -0.0415, 1.3117) + 0.24 φ(z; 0.0700, 2.6970).

Figure 2a presents the histogram of z i values and the fitted f0 with g0 = 1 and 2. There is not much difference between the two fitted f0,2, both of which fit the data well. In particular, f0,2 does not look like a t-distribution with small degrees of freedom, as predicted from the t-test.
Figure 2

Histograms and estimated distribution density functions. (a-d) Two, four, six and eight replicates (z2 - z8), respectively. In (a), the solid and dotted lines are the fitted one- and two-component mixtures. In (b-d), the solid and dotted lines are the fitted and the theoretically derived mixtures.

A realization of z2,i can be simulated in the following two steps. First, we draw a random number p i from {1, 2} with probabilities 0.76 and 0.24 respectively. Second, if the drawn p i = 1, z i is randomly drawn from the normal distribution φ(z; -0.0415, 1.3117); otherwise, it is drawn from φ(z; 0.0700, 2.6970). From the generated z2,i values, following expression (6) we generated three simulated data sets: z2k,i values, i = 1,..., 1,176, for k = 2, 3 and 4. Then a normal mixture model was fitted to each data set. From Table 1, it can be seen that a single-component normal distribution was selected in each case. In Figure 2, each of the fitted normal distributions, N(-0.0494, 0.8226), N(-0.0644, 0.5383) and N(-0.0438, 0.4206), is compared with its theoretically derived mixture model in Equation (7); they are all very close. Here we see that using simulated data to fit a mixture model results in a much-simplified model. For example, for k = 4, it is a fitted single-component model versus a g0^k = 2^4 = 16-component model in Equation (7). Note that, as predicted, the means of the fitted models are all essentially 0, and their variances decrease as k increases.

If we want to have only one expected false positive from testing each of 1,176 non-differentially expressed genes, the gene-specific (or test-specific) Type I error rate is α = 1/1,176 ≈ 0.09%. Using formula (3) and the fitted mixture models f0,2k, the cut-off points C are determined. The power functions β(d, α) are then drawn in Figure 3, which may help in deciding on the required number of replicates. For instance, if we want to detect an expression change d = 3 with probability at least 80% and with α = 0.09%, then six replicates are needed. Also, with just two replicates, the power to detect a change as large as 4 is very low, smaller than 30%. Note that the choice of d may depend on prior knowledge. For instance, based on the pilot data, we can estimate the d values for some selected genes (with the sample means and sample SDs substituting for the true means and SDs in the formula for d), from which one can determine a range of d values of interest.
Figure 3

Power β(d, α) as a function of the magnitude of expression changes d and the number of replicates, with the gene-specific Type I error rate α = 0.09% for the middle-ear data.

Figures 4,5,6 give the results for testing N = 1,000, 5,000 and 10,000 genes, respectively, while controlling the genome-wide Type I error rate at the usual 5% level. It can be seen that as N increases, we also need a larger number of arrays to maintain the power of the statistical test when other parameters are fixed. For instance, for N = 10,000 (Figure 6), even eight replicates cannot detect a change as large as d = 3 with 80% power, but six replicates can detect a change d = 4 with 80% power.
Figure 4

Power β(d, α) as a function of the magnitude of expression changes d and the number of replicates, with the gene-specific Type I error rate α = 0.05/1,000 for the middle-ear data.

Figure 5

Power β(d, α) as a function of the magnitude of expression changes d and the number of replicates, with the gene-specific Type I error rate α = 0.05/5,000 for the middle-ear data.

Figure 6

Power β(d, α) as a function of the magnitude of expression changes d and the number of replicates, with the gene-specific Type I error rate α = 0.05/10,000 for the middle ear data.


We have described a method for calculating the number of replicates in microarray experiments. This method is designed for the situation where the mixture approach is going to be taken to analyze the data. Note that any method for sample size/power calculations has to depend on the specific statistical test to be used in the data analysis; this explains why there is a huge literature on the topic for clinical trials. However, because of the close relation between the mixture approach and the other two recently proposed nonparametric approaches - the empirical Bayes method [5] and the significance analysis of microarrays (SAM) method [9] - our proposed method can also be applied to provide some useful guidance for designing microarray experiments even when one of the latter two approaches (or other approaches) is planned for the data analysis at a later stage. For instance, even though the null distribution f0 is estimated using the null scores z i in our proposal, there may be alternative ways of estimating f0, such as using an alternative nonparametric method (for example, kernel or local likelihood estimation), rather than the finite normal mixture model, or using the test statistics Z i of a large number of housekeeping genes. Some modifications to the test statistic Z i and the null statistic z i are also possible, especially when we consider differential gene expression across more than two conditions. These are all interesting topics we are investigating now.

In most sample size/power calculations, some pilot data are needed to provide reasonable estimates of the parameters required for the subsequent calculations. An alternative is to obtain such estimates from similar studies in the literature. However, because of the rapid development of microarray technology, the latter is unlikely to be feasible, and we expect that a researcher will have to conduct his or her own pilot study. This is the situation we considered in the example. A particular challenge is how to obtain good estimates of the variances of gene-expression levels from a small number of replicates. In our example, we used a nonparametric method to smooth the sample variances. Some alternative smoothing methods have also appeared in the literature, but it is not clear which is the most desirable. This is a topic for future study.

The proposed method is straightforward to statisticians and can be implemented in many existing statistical packages. Our sample S-Plus program and data are available at [29].



This research was partially supported by NIH.

Authors’ Affiliations

Division of Biostatistics, School of Public Health, University of Minnesota
Department of Otolaryngology, School of Medicine, University of Minnesota


  1. Brown P, Botstein D: Exploring the new world of the genome with DNA microarrays. Nat Genet. 1999, 21 (Suppl): 33-37.
  2. Lander ES: Array of hope. Nat Genet. 1999, 21 (Suppl): 3-4.
  3. Chen Y, Dougherty ER, Bittner ML: Ratio-based decisions and the quantitative analysis of cDNA microarray images. J Biomed Optics. 1997, 2: 364-367.
  4. Newton MA, Kendziorski CM, Richmond CS, Blattner FR, Tsui KW: On differential variability of expression ratios: improving statistical inference about gene expression changes from microarray data. J Comput Biol. 2001, 8: 37-52.
  5. Efron B, Tibshirani R, Goss V, Chu G: Microarrays and their use in a comparative experiment. Technical Report, Department of Statistics, Stanford University. 2000.
  6. Ideker T, Thorsson V, Siegel AF, Hood LE: Testing for differentially-expressed genes by maximum likelihood analysis of microarray data. J Comput Biol. 2000, 7: 805-817.
  7. Li H, Hong F: Cluster-Rasch models for microarray gene expression data. Genome Biol. 2001, 2 (8): research0031.1-0031.13.
  8. Thomas JG, Olson JM, Tapscott SJ, Zhao LP: An efficient and robust statistical modeling approach to discover differentially expressed genes using genomic expression profiles. Genome Res. 2001, 11: 1227-1236.
  9. Tusher VG, Tibshirani R, Chu G: Significance analysis of microarrays applied to the ionizing radiation response. Proc Natl Acad Sci USA. 2001, 98: 5116-5121.
  10. Pan W, Lin J, Le C: A mixture model approach to detecting differentially expressed genes with microarray data. Technical Report 2001-011, Division of Biostatistics, University of Minnesota. 2001.
  11. Lee MLT, Kuo FC, Whitmore GA, Sklar J: Importance of replication in microarray gene expression studies: statistical methods and evidence from repetitive cDNA hybridizations. Proc Natl Acad Sci USA. 2000, 97: 9834-9839.
  12. Pan W: A comparative review of statistical methods for discovering differentially expressed genes in replicated microarray experiments. Bioinformatics.
  13. Black MA, Doerge RW: Calculation of the minimum number of replicate spots required for detection of significant gene expression fold change in microarray experiments. Technical Report, Department of Statistics, Purdue University. 2001.
  14. Diggle PJ, Liang KY, Zeger SL: Analysis of Longitudinal Data. Oxford: Oxford University Press. 1994.
  15. Dudoit S, Yang YH, Callow MJ, Speed TP: Statistical methods for identifying differentially expressed genes in replicated cDNA microarray experiments. Technical Report, Statistics Department, University of California at Berkeley. 2000.
  16. Li C, Wong WH: Model-based analysis of oligonucleotide arrays: expression index computation and outlier detection. Proc Natl Acad Sci USA. 2001, 98: 31-36.
  17. Kerr MK, Martin M, Churchill GA: Analysis of variance for gene expression microarray data. J Comput Biol. 2000, 7: 819-837.
  18. Yang YH, Buckley MJ, Dudoit S, Speed TP: Comparison of methods for image analysis on cDNA microarray data. Technical Report, Statistics Department, University of California at Berkeley. 2000.
  19. Titterington DM, Smith AFM, Makov UE: Statistical Analysis of Finite Mixture Distributions. New York: Wiley. 1985.
  20. Dempster AP, Laird NM, Rubin DB: Maximum likelihood from incomplete data via the EM algorithm. J Roy Stat Soc Ser B. 1977, 39: 1-38.
  21. McLachlan GJ, Basford KE: Mixture Models: Inference and Applications to Clustering. New York: Marcel Dekker. 1988.
  22. Akaike H: Information theory and an extension of the maximum likelihood principle. In 2nd International Symposium on Information Theory. Edited by Petrov BN, Csaki F. Budapest: Akademiai Kiado. 1973, 267-281.
  23. Schwarz G: Estimating the dimension of a model. Ann Statist. 1978, 6: 461-464.
  24. Fraley C, Raftery AE: How many clusters? Which clustering method? Answers via model-based cluster analysis. Computer J. 1998, 41: 578-588.
  25. Press WH, Teukolsky SA, Vetterling WT, Flannery BP: Numerical Recipes in C: The Art of Scientific Computing. 2nd edn. New York: Cambridge University Press. 1992.
  26. Pan W, Lin J, Le C: Model-based cluster analysis of microarray gene expression data. Genome Biol. 2002, 3 (2): research0009.1-0009.8.
  27. Lin Y, Nadler ST, Attie AD, Yandell BS: Mining for low-abundance transcripts in microarray data. Technical Report, Department of Statistics, University of Wisconsin-Madison. 2001.
  28. Cleveland WS, Devlin SJ: Locally weighted regression: an approach to regression analysis by local fitting. J Am Stat Assoc. 1988, 83: 596-610.
  29. Statistical analysis of microarray data. []


© Pan et al., licensee BioMed Central Ltd 2002