 Research
 Open Access
Normalization and analysis of DNA microarray data by self-consistency and local regression
Genome Biology volume 3, Article number: research0037.1 (2002)
Abstract
Background
With the advent of DNA hybridization microarrays comes the remarkable ability, in principle, to simultaneously monitor the expression levels of thousands of genes. The quantitative comparison of two or more microarrays can reveal, for example, the distinct patterns of gene expression that define different cellular phenotypes or the genes induced in the cellular response to insult or changing environmental conditions. Normalization of the measured intensities is a prerequisite of such comparisons, and indeed, of any statistical analysis, yet insufficient attention has been paid to its systematic study. The most straightforward normalization techniques in use rest on the implicit assumption of linear response between true expression level and output intensity. We find that these assumptions are not generally met, and that these simple methods can be improved.
Results
We have developed a robust semiparametric normalization technique based on the assumption that the large majority of genes will not have their relative expression levels changed from one treatment group to the next, and on the assumption that departures of the response from linearity are small and slowly varying. We use local regression to estimate the normalized expression levels as well as the expression level-dependent error variance.
Conclusions
We illustrate the use of this technique in a comparison of the expression profiles of cultured rat mesothelioma cells under control and under treatment with potassium bromate, validated using quantitative PCR on a selected set of genes. We tested the method using data simulated under various error models and find that it performs well.
Background
Among the most fascinating open questions in biology today are those associated with the global regulation of gene expression, itself the basis for the unfolding of the developmental program, the cellular response to insult and changes in the environment, and many other biological phenomena. The answers to some of these questions have been moved a few steps closer to realization with the advent of DNA hybridization microarrays [1,2,3,4,5,6]. These tools allow the simultaneous monitoring of the expression levels of hundreds to tens of thousands of genes: sufficient numbers to measure the expression of all of the genes in many organisms, as is now being done in the eukaryote Saccharomyces cerevisiae [7,8].
If we designate the intensity of a given spot in the microarray as I and the abundance of the corresponding mRNA in the target solution as A, we have, under ideal circumstances,
I = NA + error (1)
where N is a constant, unknown normalization factor. When comparing two different sets of intensities, these factors (or at least their relative sizes) must be determined in order to make a relative comparison of the abundances A.
The simple normalization techniques commonly used at this time assume Equation 1. Under these conditions, normalization amounts to the estimation of the single multiplicative constant N for each array. This task can be implemented by whole-array methods, using the median or mean of the spot intensities, or by the inclusion of control mRNA.
We have found in a variety of different hybridization systems that the response function is neither sufficiently linear, nor consistent among replicate assays; the relationship between the intensity and the abundance is more complicated than that found in Equation 1. There may, for example, be a constant term, interpretable as background:
I = N_{0} + N_{1}A + error, (2)
or the intensity may saturate at large abundance:

I = N_{1}A/(1 + A/A_{0}) + error. (3)

Both these situations render simple ratio normalizations inadequate. The problems are not obviated by the use of 'housekeeping' genes as controls. First, their quantitative stability is not a priori assured, nor has such stability been demonstrated empirically, and second, even if such genes were found, the nonlinearity of the response is not addressed by this technique. Neither can extrinsic controls (such as bacterial mRNA spiked into human targets) ensure adequate normalization, as the relative concentration of control to target mRNA cannot itself be known with sufficient accuracy. Even simultaneous two-color probes on the same microarray do not eliminate the problems of normalization because of variation in the relative activity and incorporation of the two fluorescent dyes.
One possible approach to the normalization problem would be to obtain detailed quantitative understanding of each step in the process in order to develop a mechanistic model for the response function. This approach is almost certainly important for the optimization of array design, but may not be necessary for data analysis. Alternatively, one may use the vast quantity of data generated and the assumption of selfconsistency to estimate the response function semiparametrically.
We have pursued the latter path. Our approach does not rely on the consistency of an extrinsic marker or the stability of expression for any given set of genes or on the correctness of an a priori model for the response, but rather upon the assumption that the majority of genes in any given comparison will be expressed at constant relative levels (Figure 1); only a minority of genes will have their expression levels affected appreciably. Thus, we normalize pairs or groups of arrays relative to each other by maximizing the consistency of relative expression levels among them.
The underlying idea is that the majority of genes will not have their expression levels changed appreciably from one treatment to the next (Figure 1). Clearly, there may be some treatment pairs for which this is not a reasonable assumption, but we argue that as long as the cell is alive, the basic mechanism of cell maintenance must continue; the relevant gene products must be kept at relatively stable levels. This approach can be viewed as a generalization of the method of using 'housekeeping' genes to normalize the array. But rather than choosing a particular set of genes beforehand, assuming that their expression levels are constant across treatments, we assume that there is a stable background pattern of activity, that there is a transcriptional 'core', and identify its constituent genes statistically for each experiment.
The essential contrast between our method based on selfconsistency and that based on control genes determined a priori is concisely captured in the following flow diagrams.
Normalization by controls identified a priori

1. Assume that some genes will not change under the treatment under investigation.

2. Identify these core genes in advance of the experiment (housekeeping genes, extrinsic controls).

3. Normalize all genes against these genes, assuming they do not change.

4. Done.
Normalization by self-consistency

1. Assume that some genes will not change under the treatment under investigation.

2. Initially designate all genes as core genes.

3. Normalize (provisionally) all genes against the core genes under the assumption that the true abundance of the core genes does not change.

4. Determine which genes appear to remain unchanged under this normalization; make this set the new core.

5. If the new core differs from the previous core, then go to step 3.

6. Else: done.
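The self-consistency loop above can be sketched in a few lines of code. This is an illustrative simplification, not the paper's implementation: the function name, the use of a single additive offset per array as the 'normalization', and the range-based stability measure are our choices.

```python
import numpy as np

def self_consistent_normalize(Y, core_frac=0.9, max_iter=50):
    """Schematic of the self-consistency loop (steps 1-6 above).

    Y: array of shape (n_arrays, n_genes) of log intensities.
    Hypothetical simplification: the 'normalization' here is a single
    additive offset per array, estimated as the mean over the current
    core.  Returns the per-array offsets and the final core mask.
    """
    n_arrays, n_genes = Y.shape
    core = np.ones(n_genes, dtype=bool)          # step 2: all genes start in the core
    for _ in range(max_iter):
        offsets = Y[:, core].mean(axis=1)        # step 3: normalize against the core
        X = Y - offsets[:, None]                 # provisionally normalized data
        change = np.ptp(X, axis=0)               # spread across arrays, per gene
        thresh = np.quantile(change, core_frac)  # keep the most stable fraction
        new_core = change <= thresh              # step 4: new core = stable genes
        if np.array_equal(new_core, core):       # step 6: core unchanged, done
            break
        core = new_core                          # step 5: iterate
    return offsets, core
```

With core_frac = 0.9, each pass excludes the 10% of genes that appear most changed under the provisional normalization, until the core stops changing.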
Modeling and estimation
We concentrate here on the experimental design with two treatment groups and two or more replicate arrays per group. Generalization to more than two groups is straightforward. Comparisons made without replicate arrays are also possible, and much of the methodology discussed here can be applied in that case as well, but the lack of true replicates introduces unique nontrivial problems that will not be considered here.
The basic model
Let Y_{ ijk } = logI_{ ijk } denote the logarithm of the measured intensity of the kth spot in the jth replicate assay of the ith treatment group. Thus, k ranges from 1 to G, the number of genes per array, j ranges from 1 to r_{ i }, the number of replicate arrays within the ith treatment group, and i takes values from 1 to the number of treatment groups. The examples in this paper use two treatment groups. The logarithmic transformation converts a multiplicative normalization constant to an additive normalization constant. We also find that this transformation renders the error variances more homogeneous than they are in the untransformed data. Then the error model corresponding to Equation 1 is:
Y_{ ijk } = υ_{ ij } + α_{ k } + δ_{ ik } + σ_{0}ε_{ ijk } (4)
where the υ_{ij} = log N_{ij} are now the normalization constants, α_{k} + δ_{ik} = log A_{ik} are the mean log relative abundance and the differential treatment effects, respectively, and σ_{0} is the error standard deviation. The treatment effects, δ_{ik}, are the quantities of most direct interest for comparing expression profiles. We assume that the residuals ε_{ijk} are independent and identically distributed and have zero mean and unit variance. For the significance tests below, we will further assume that the errors are normally distributed.
Estimation by selfconsistency
Estimation of the parameters in Equation 4 is carried out in an iteratively reweighted least-squares (IRLS) procedure. First, let c_{k} indicate the assignment of the kth gene to the core set: c_{k} = 0 if gene k is not in the core and c_{k} = 1/G_{C} if gene k is in the core, where G_{C} is the number of genes in the core. The vector c is thus normalized: Σ_{k}c_{k} = 1. These indicators play the role of weights in an IRLS. Although they do depend on other estimated parameters, in each iteration the weights are treated as constants, depending only on parameter estimates from the previous iteration.
The notion of selfconsistency arises in the combined processes of identifying the core and normalizing the data: the choice of genes belonging to the core depends on the normalization, and the optimal normalization depends on which genes are identified with the core.
We start by minimizing the core sum of squares (SS_{C}):

SS_{C} = Σ_{i,j,k}c_{k}(Y_{ijk} - υ_{ij} - α_{k})^{2} (5)

over υ and α. Note that one can add a constant to υ and subtract the same constant from α without changing SS_{C}. This invariance corresponds to our inability to estimate absolute abundances, but relative abundances only. We therefore enforce an 'identifiability' constraint: Σ_{k}c_{k}α_{k} = 0. The minimization gives:

a_{k} = Ȳ_{··k} - n̄_{··} (6)

n_{ij} = Σ_{k}c_{k}Y_{ijk} (7)

where a and n are the estimators for α and υ, respectively; overbars indicate averages over the dotted subscripts, for example, Ȳ_{··k} = (Σ_{i}r_{i})^{-1}Σ_{i,j}Y_{ijk}.
The normalized and scaled data are now given by

x_{ijk} = Y_{ijk} - n_{ij}. (8)

Note that if all of the genes are placed in the core, we have n_{ij} = Ȳ_{ij·}, as expected.
Now we estimate the differential treatment effects by minimizing the residual sum of squares,

SS_{R} = Σ_{i,j,k}(x_{ijk} - δ_{ik})^{2}, (9)

of the normalized data over δ, subject to the constraint Σ_{i}r_{i}δ_{ik} = 0, yielding

d_{ik} = x̄_{i·k} - x̄_{··k}. (10)

Note that the matrix, d, of differential treatment effects obeys Σ_{i}r_{i}d_{ik} = 0, as we would hope.
Self-consistency requires that the vector of core indicators c depend on the estimated differential treatment effects, d. We have tried several methods for implementing an appropriate dependence and find that one of the simplest schemes works very well. We simply fix the proportion of genes in the core, rank the genes by the square of the estimated differential treatment effect and remove from the core for the next iteration those genes above the corresponding quantile:

c_{k} = (1/G_{C})1{Σ_{i}r_{i}d_{ik}^{2} < T} (11)

where 1{·} is the indicator function and T is a threshold chosen to ensure that a fixed proportion of genes are designated core genes. Note that a possible improvement to the algorithm might be found by appropriate optimization of this proportion rather than simply fixing it in advance.
We carry out the estimation iteratively. We start with c_{k} = 1/G for all k (all genes in the core) and estimate δ_{ik} by Equation 10. We then update c according to Equation 11 and repeat the estimation of d with this new c. We stop when c does not change from one iteration to the next.
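As a compact numerical illustration of this iteration, the sketch below computes the estimators and the core update described above (names are ours; it assumes equal numbers of replicates in every group, as in the examples in this paper):

```python
import numpy as np

def irls_step(Y, c):
    """One pass of the self-consistent estimators (a sketch).

    Y: log intensities, shape (treatments, replicates, genes); this
    simplified version assumes the same number of replicates per group.
    c: core weights of length G, summing to 1 (0 for non-core genes).
    """
    n = np.tensordot(Y, c, axes=([2], [0]))   # n_ij = sum_k c_k Y_ijk
    x = Y - n[:, :, None]                     # normalized data x_ijk
    a = x.mean(axis=(0, 1))                   # a_k = Ybar_..k - nbar_..
    d = x.mean(axis=1) - a[None, :]           # d_ik = xbar_i.k - xbar_..k
    return n, a, d

def update_core(d, r, keep_frac=0.9):
    """Rank genes by sum_i r_i d_ik^2 and keep the most stable fraction."""
    score = np.tensordot(r, d ** 2, axes=([0], [0]))
    core = score <= np.quantile(score, keep_frac)
    return core / core.sum()                  # new weights c_k, summing to 1
```

Alternating irls_step and update_core until the core weights stop changing implements the fixed-point iteration described in the text.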
The local regression model
What we find in the analysis of experimental data, however, is that Equation 1 with N constant is not adequately realistic. A more flexible approach that covers the contingencies of Equations 1-3, and many others, is to generalize Equation 4 to
Y_{ ijk } = υ_{ ij } (α_{ k }) + α_{ k } + δ_{ ik } + σ(α_{ k })ε_{ ijk }. (12)
where υ_{ij}(α_{k}) is the normalization function, now assumed explicitly to depend on the mean log abundance α_{k}. The function σ(α_{k}), which scales the error variance, describes the heteroscedasticity, or nonconstancy of the variance, which we here assume depends only on the mean log intensity level. The two functions are constrained to vary slowly and thus can be estimated by local regression.
If Equation 4 is used to estimate the normalization, the departures from linearity manifest themselves as systematic bias in the residuals (Figure 2). In all the data we have examined, the resulting biases are small, slowly varying functions of the mean log intensity, and so can be estimated using local regression on a, the estimator for the mean log abundance. It should be noted that an additive component of the variability with nonzero expectation, in addition to the multiplicative noise (Equation 2), can, when the logarithmic transformation is applied, lead to such nonlinear response curves. Our approach here is to develop a method flexible enough to allow for all sources of nonlinearity, including additive noise. We demonstrate the validity of this method for these formally misspecified models in our simulation studies below.
Estimation of both the normalization function, υ_{ij}(α), and of the heteroscedasticity, σ(α), is carried out by local regression.
Local regression
Local regression is a generalization of the intuitive idea of smoothing by using a moving average. In local regression, one goes beyond computing the local average of a set of measured points by estimating, at each value of the predictor variable, all of the coefficients in a Pth-order regression in which the regression coefficients themselves are slowly varying functions of the predictor variable. Computation of a moving average is thus a zeroth-order local regression. The availability of inexpensive, powerful computing has sparked renewed interest in local regression techniques, and their theoretical underpinnings have been extensively elucidated [9,10,11].
Modeling a response function g as a function of a predictor u via local regression proceeds in two steps. First, we estimate a function of two variables u and u',

f(u'; β(u)) = β_{0}(u) + β_{1}(u)(u - u') + ... + β_{P}(u)(u - u')^{P}. (13)

For fixed u, f(u'; β(u)) is a polynomial in u' with coefficients β_{i}(u). These coefficients will be constrained to vary slowly with u, the quantitative rates of change specified by a bandwidth parameter introduced below. Second, we estimate g(u) as

ĝ(u) = f(u; b(u)) = b_{0}(u) (14)

where b is the vector of estimators for β. In other words, we estimate the coefficients and evaluate the function at u' = u. The terms of order greater than 0 vanish, but the estimates for the remaining zeroth-order terms depend nevertheless on the estimated higher-order coefficients, as follows. Given a dataset consisting of n pairs (u_{i}, v_{i}), i ∈ (1,...,n), we estimate the coefficients at a point u (not necessarily corresponding to any u_{i} in the dataset) by minimizing a weighted sum-of-squares over β:

Σ_{i}w_{i}(u)[v_{i} - f(u_{i}; β(u))]^{2}. (15)
The weighting functions w are given by

w_{i}(u) = W(|u - u_{i}|/h(u)) (16)

where W is a symmetric function having a simple maximum at the origin, strictly decreasing on [0,1] and vanishing for u ≥ 1. For our application in this paper, we use the efficiently computed tricube function

W(u) = (1 - |u|^{3})^{3} for |u| < 1; W(u) = 0 otherwise. (17)
The function h is known as the bandwidth, and controls just how slowly f varies with u. We choose the bandwidth so as to give equal span at all points u. The span is defined as the proportion of points u_{ i } contained in a ball of radius h(u). This choice of bandwidth function is used in Loess regression [11]. For all of the computations in this paper, we have used a span of 0.5.
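A minimal numeric sketch of this procedure, with tricube weights and the equal-span bandwidth just described (function and variable names are ours):

```python
import numpy as np

def tricube(t):
    """Tricube kernel W: (1 - |t|^3)^3 on [0, 1), zero outside."""
    t = np.abs(t)
    return np.where(t < 1.0, (1.0 - t**3)**3, 0.0)

def loess_point(u0, u, v, span=0.5, degree=1):
    """Local polynomial fit of v on u, evaluated at the point u0.

    The bandwidth h(u0) is the distance to the ceil(span * n)-th
    nearest neighbour, giving equal span at every point, as in the
    text.  Returns b_0(u0), the zeroth-order coefficient, which is
    the fitted value at u' = u0.
    """
    n = len(u)
    dist = np.abs(u - u0)
    h = np.sort(dist)[int(np.ceil(span * n)) - 1]   # neighbourhood radius
    w = tricube(dist / h)
    # design matrix in powers of (u - u0); weighted least squares
    X = np.vander(u - u0, degree + 1, increasing=True)
    W = np.diag(w)
    beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ v)
    return beta[0]
```

For degree = 1 and exactly linear data, the fitted value at any point reproduces the line, whatever the weights, which is a convenient sanity check.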
Minimization of Equation 15 over the coefficient vector β(u) results in linear equations of the form

b_{i}(u) = L_{i}(u)v (18)

where L_{i} is the linear operator appropriate to the ith coefficient and v is the vector with components v_{k}. Note that the L_{i} depend on the order P of the local regression. For any given value of P, the L_{i} can be written down explicitly, but quickly become algebraically complicated.
The local regression estimate of f(u; β(u)) is

f̂(u) = b_{0}(u) = L_{0}(u)v. (19)
Because of this linearity, the sampling distributions for these coefficients are known and we can compute their sampling variances in a straightforward manner [11].
To adapt this method to the problem of normalization, and simultaneously to implement self-consistency, we take for the weighting functions the product of a tricube and a core indicator:

w_{k}(u) = c_{k}W(|u - a_{k}|/h(u)) (20)

where c_{k} is the core indicator as given in Equation 11 and the a_{k} are given by Equation 6. In these terms, the local regression estimate n of υ is given by

n_{ij}(a) = L_{0}(a)Y_{ij} (21)

where Y_{ij} is the vector with components Y_{ijk}, with the normalized data given by

x_{ijk} = Y_{ijk} - n_{ij}(a_{k}) (22)

and the differential treatment effects by

d_{ik} = x̄_{i·k} - x̄_{··k}. (23)

Again, we have Σ_{i}r_{i}d_{ik} = 0. The core indicator vector c is then iterated to fixation as described in the previous section, but with Σ_{i}r_{i}d_{ik}^{2} compared against s^{2}(a_{k}), where s^{2}(·) is the estimated local variance, discussed in the next section.
Local variance estimation
In addition to local nonlinearities in the response curve, we also find that the data are heteroscedastic: the error variance shows a clear dependence on the estimated abundance. The logarithmic transformation removes a substantial part of this dependence, but does not flatten it out entirely. One might try an a priori accounting of the sources of error and thereby provide a parametric model for it, but the number of potential error sources is large, so we instead choose a flexible error model and estimate the local variance by again using local regression. The technique involves computing the local likelihood and the effective residual degrees of freedom and is described in detail in [11]. The ratio of the local likelihood to the effective degrees of freedom provides a smooth estimate of the local variance. The estimated residuals are not strictly linear functions of Y because of the implicit dependence of the indicator vector c on the data Y and because of our use of the estimator a, rather than a strictly independent variable, as the predictor for the local regression. We expect these corrections due to nonlinearities to be small and thus neglect them in our estimates of the local variance.
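The local-likelihood machinery of [11] is involved; as a simpler stand-in for illustration, the sketch below estimates s²(a) by a tricube-weighted local average of squared residuals (a kernel smoother, not the authors' estimator; names are ours):

```python
import numpy as np

def local_variance(a, resid_sq, span=0.5):
    """Smooth estimate s^2(a) of the local error variance.

    a: mean log abundances (one per gene); resid_sq: squared
    residuals after normalization.  At each point, squared residuals
    are averaged with tricube weights over the span-nearest
    neighbours, mirroring the bandwidth choice used for the
    normalization function.
    """
    n = len(a)
    k = max(2, int(np.ceil(span * n)))
    s2 = np.empty(n)
    for idx in range(n):
        dist = np.abs(a - a[idx])
        h = np.sort(dist)[k - 1]                     # local bandwidth
        t = dist / h
        w = np.where(t < 1.0, (1.0 - t**3)**3, 0.0)  # tricube weights
        s2[idx] = np.sum(w * resid_sq) / np.sum(w)
    return s2
```

When the residual variance really is constant, the smoother returns that constant at every abundance, so the heteroscedastic treatment reduces to the homoscedastic one.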
At this stage, we have computed a firstorder approximate solution for the estimation problem. We may now perform another iteration (in addition to the iterated solution for the core indicator c) to improve the approximation, reweighting the data by the inverse of the estimated local variance. Our experience, however, has been that the firstorder corrections are sufficient and the higherorder corrections are more difficult to compute and make little difference in the final analysis. For the applications and validation tests that follow, we use just the firstorder corrections.
Pairwise expressionlevel comparisons
We perform individual pairwise hypothesis tests for each spot in the array by computing the statistic

z_{k} = (d_{1k} - d_{2k})/[s(a_{k})(1/r_{1} + 1/r_{2})^{1/2}] (24)

where s(a_{k}) is the square-root of the local variance at the mean relative expression value a_{k}. We test z as a standard normal under the null hypothesis of no expression difference.
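In code, the test might look as follows. The standard-error form s(a_k)(1/r_1 + 1/r_2)^{1/2} for the difference of two group means is our assumption; the text does not spell out the denominator.

```python
import math

def z_test(d1k, d2k, s_ak, r1=2, r2=2):
    """Per-gene z statistic for the pairwise comparison (a sketch).

    Assumes d_1k - d_2k has standard error s(a_k)*sqrt(1/r1 + 1/r2)
    under the null (our assumption).  Returns (z, two_sided_p), with
    the p-value taken from the standard normal distribution.
    """
    z = (d1k - d2k) / (s_ak * math.sqrt(1.0 / r1 + 1.0 / r2))
    p = math.erfc(abs(z) / math.sqrt(2.0))   # two-sided P(|Z| >= |z|)
    return z, p
```

With two replicates per group, an effect difference of 1.0 against a local standard deviation of 0.5 gives z = 2 and a two-sided p-value near 0.046, just under the 0.05 level used in the results below.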
Validation
We illustrate the use of the computational methods by fixing the core proportion at 0.9 and applying them to data generated in an experiment carried out on cultured, spontaneously immortalized rat peritoneal mesothelial cells to determine the transcriptional effects of treatment with potassium bromate. The data consist of measured intensities of G = 596 genes from each of four arrays: two replicates (r_{1} = r_{2} = 2) in each of two treatment groups. A complete discussion of the biological results obtained in these experiments can be found in [12].
Results and discussion
The gene-expression pattern observed for rat mesothelial cells was indicative of oxidative stress, mitotic arrest and possibly increased apoptosis (all changes listed are significant at the 0.05 level). Oxidative-stress-responsive genes for heme oxygenase-1 (HO-1), quinone reductase/NMOR/DT diaphorase (QR), growth arrest and DNA damage 45 (GADD45) and heat-shock protein 70 (HSP70), among others, showed increased expression, as did transcriptional regulatory genes for c-Jun, c-Fos, Jun D, Jun B, c-Myc and the inhibitory κB subunit (IκB). Proteasome components involved in protein repair (Rδ, RC10-II, C3, RC7, HR6B ubiquitin-conjugating enzyme and ubiquitin) and genes for the DNA repair proteins proliferating cell nuclear antigen (PCNA), mismatch repair protein 2 homolog (Msh2) and O6-methylguanine DNA methyltransferase were upregulated. The lipid peroxide excision enzyme phospholipase A2 (PLA2) exhibited increased expression, as did apoptogenic genes for tumor necrosis factor α (TNFα), inducible nitric oxide synthase 1 (iNOS1) and Fas ligand (FasL). Other components involved in apoptosis, including the anti-apoptotic B-cell lymphoma 2 (Bcl-2), the pro-apoptotic Bcl-2-associated X protein α (bax α), Bcl-XL/Bcl-2-associated death promoter homolog (Bad) and Bcl-2-related ovarian killer protein (bok) (at 12 hours), and cell-cycle control elements known as cyclins (at 4 and 12 hours), were downregulated. Several genes that inhibit the cell from entering the cell cycle were increased significantly at both time points.
Confirmation by quantitative PCR
Quantitative PCR analysis confirmed nine gene changes. The tenth, PLA2, could not be confirmed: there was no signal in either treatment group, a failure most likely due to a problem in the PCR for that gene [12].
Morphologic analysis revealed complete mitotic arrest by 4 hours postexposure, with increased numbers of condensed cells with pyknotic nuclei, believed to be apoptotic. Strong HO1specific staining was observed in treated cells, whereas control cells showed weak nonspecific staining, or no staining at all.
Statistical characteristics of the data
A histogram of mean log spot intensities (Figure 3) shows that nearly a quarter of the 596 spots on the array show little or no signal. The remainder of the distribution shows a very gradual maximum followed by a long tail skewing the distribution to the right. The total range is about 9 (natural) logs, corresponding to an approximately 9,000-fold change from highest intensity to 'background'.
The estimated variance of the log intensities increases from the lowest log intensities for about one (natural) log to peak at a value of about 0.25, and then decreases to an asymptote at about 0.04 for intense spots. This suggests that the error is dominated by different sources in the two intensity regimes. Furthermore, the fact that the variance of the log intensity decreases for large intensities indicates that the variance of the intensity scales like the mean intensity raised to a power q, where q < 2; q = 2 corresponds to lognormal behavior with constant coefficient of variation and q = 1 corresponds to the Poisson-like behavior of independent counting processes.
The four arrays in this study also showed nonnegligible bias (Figure 4). The root-mean-square (RMS) bias over all four arrays was 17.5 × 10^{-2}. This should be compared to the estimated standard deviation of the residuals after bias removal, 19.2 × 10^{-2}; the two are clearly comparable. This bias is not likely to be an artifact of the fitting procedure: application of the fitting procedure to simulated data without bias (see below) results in a range of RMS bias that is much smaller than that seen in the real data (Tables 1, 2, 3).
In addition to the experiments reported here, we have examined data from several other microarray platforms and find that in terms of the heteroscedasticity and apparent bias, they are qualitatively similar (not shown).
Simulation studies
To determine the reliability of our methods, we generated simulated data under a number of models based on the statistical characteristics of the data obtained in our hybridization experiments. All of the simulated data was produced using FORTRAN programs calling IMSL subroutines for sorting, cubic spline interpolation and random number generation.
For each model and each set of conditions we ran 100 independent realizations. The data from each of these realizations were used as input to our normalization routines, which performed normalizations in two ways. First, we normalized according to Equations 4, 10 and 6, that is, without bias removal and without accounting for heteroscedasticity; this procedure is referred to as 'naive'. Then, we normalized according to Equations 12, 21 and 23, with the core proportion fixed at 0.9, that is, with bias removal and estimation of heteroscedasticity. The software that implements the latter method is referred to as NoSeCoLoR, for Normalization by Self-Consistency and Local Regression [13]. For judging the relative performance of the two methods, we recorded the number of true positives and the number of false positives for each simulated dataset.
Homoscedastic error model
In the first set of tests, the data were generated by simulations of the model

Y_{ijk} = υ_{ij}(α_{k}) + α_{k} + δ_{ik} + σ_{0}ε_{ijk} (25)

where the values for υ_{ij} were generated as normals with mean 0 and standard deviation 0.2, the α_{k} were taken to be the values a_{k} estimated from the experimental data, σ_{0} = 0.039 (this is the value estimated from the experimental data, treated as homoscedastic) and the ε_{ijk} were generated as standard normals. The treatment effects were generated by choosing at random a fixed number of genes (10% or 20% of the total number G) and, within this set, letting δ_{1k} = ξ_{k} log f and δ_{2k} = 0. Outside this set, δ_{ik} = 0. Here, the ξ_{k} are independently drawn from {-1, 1} with equal probability, and f is the 'fold change', or ratio of expression level between treated and control groups.
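A simulation in this spirit can be sketched as follows. The defaults σ_0 = 0.039, the per-array offsets with standard deviation 0.2 and the ±log f treatment effects follow the text; the function name is ours, and the spline-interpolated bias term is omitted for brevity.

```python
import numpy as np

def simulate_homoscedastic(a_k, frac_affected=0.1, fold=2.0,
                           sigma0=0.039, r=2, seed=0):
    """Simulated log intensities under the homoscedastic model above.

    a_k: mean log abundances (e.g. the estimates from real data).
    A random fraction of genes receives delta_1k = +/- log(fold) in
    group 1 and delta_2k = 0; all other treatment effects are zero.
    Returns the data Y (2, r, G) and the true effects delta (2, G).
    """
    rng = np.random.default_rng(seed)
    G = len(a_k)
    nu = rng.normal(0.0, 0.2, size=(2, r))              # per-array offsets
    delta = np.zeros((2, G))
    affected = rng.choice(G, size=int(frac_affected * G), replace=False)
    signs = rng.choice([-1.0, 1.0], size=affected.size)
    delta[0, affected] = signs * np.log(fold)           # group 1 shifted
    eps = rng.standard_normal((2, r, G))
    Y = (nu[:, :, None] + a_k[None, None, :]
         + delta[:, None, :] + sigma0 * eps)
    return Y, delta
```

Feeding such data through the normalization routines and recording true and false positives across repeated realizations reproduces the design of the tests reported in Table 1.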
The function υ_{ij}(α), representing nonlinearity and bias, was taken to be proportional to the corresponding function n_{ij} estimated in the above data analysis (Figure 4) and completed by cubic spline interpolation. The constant of proportionality, designated q in the tables, regulates the size of the bias.
What we find (Table 1) is that the power of the test for the naive analysis is diminished by the presence of bias. For the local-regression analysis (NoSeCoLoR), the power is unaffected by the presence of bias. Furthermore, when the proportion of affected genes among all genes is small (10%), the power of the two methods is about the same. When 20% of the genes are affected, the naive method has slightly better power when bias is absent.
Heteroscedastic error model
As discussed above, even the log-transformed data are not homoscedastic, but have variance that varies with the mean intensity level. The second set of simulations is similar to the first, but differs in that the constant σ_{0} in Equation 25 is replaced by the function σ(α) estimated from the Clontech array experiments (Figure 4). All other details are as for the previous simulation.
In this case (Table 2), we find as before that bias diminishes the power of the naive procedure, but not that of NoSeCoLoR. In addition, the rate of false positives is now notably high for the naive method. NoSeCoLoR yields consistently smaller falsepositive rates, although when large proportions of genes are affected and have large effect size, the rate of false positives with NoSeCoLoR is also larger than nominal.
Compound error model
The model given by Equation 12 is intended to be flexible and to be a reasonable approximation to a variety of models. One particularly common source of nonlinearity is additive error (on the untransformed data), or background with nonzero mean (Equation 2). We have therefore simulated data according to a model given by
I_{ ijk } = exp {α_{ k } + v_{ ij } + δ_{ ik } + ε_{ ijk }} + exp {ζ_{ ij } + η_{ ijk }} (26)
where the terms α, υ, δ and ε have the meanings assigned above and are computed as in the first simulation. In particular, ε has zero mean and constant variance with σ = 0.2. The second exponential represents an additive background, modeled as lognormal. The component ζ_{ij} common to all spots in an array is chosen as a normal random deviate with mean zero and standard deviation q. Differences in ζ from one array to the other can create apparent biases in the log-transformed data (Figure 5). The gene-specific term in the background, η_{ijk}, has mean zero and standard deviation 0.2.
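The compound model of Equation 26 can be sketched in the same way (names are ours; the treatment effects δ are set to zero for brevity):

```python
import numpy as np

def simulate_compound(a_k, q=0.1, r=2, seed=0):
    """Raw intensities under the additive-plus-multiplicative model
    (Equation 26): an exponential signal term plus a lognormal
    background term.

    The standard deviations 0.2 (for eps and eta) and the array-level
    background spread q follow the text.
    """
    rng = np.random.default_rng(seed)
    G = len(a_k)
    nu = rng.normal(0.0, 0.2, size=(2, r))           # per-array offsets
    eps = rng.normal(0.0, 0.2, size=(2, r, G))       # multiplicative noise
    zeta = rng.normal(0.0, q, size=(2, r))           # array-level background
    eta = rng.normal(0.0, 0.2, size=(2, r, G))       # spot-level background
    signal = np.exp(a_k[None, None, :] + nu[:, :, None] + eps)
    background = np.exp(zeta[:, :, None] + eta)
    return signal + background
```

Because the background enters before the logarithmic transformation, array-to-array differences in zeta induce exactly the kind of intensity-dependent bias in log Y that the local-regression normalization is designed to remove.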
It is in this simulation that the naive method fails most dramatically. For all datasets, the naive method gives false-positive rates significantly greater than nominal, some as much as tenfold higher than nominal. NoSeCoLoR has much better error rates, although, as seen before, performance starts to suffer when larger numbers of spots are affected. The power of comparisons using NoSeCoLoR is again much more resistant to changes in the effective bias level (c in Table 3) than is that of the naive method.
Conclusions
We have presented a method for normalizing microarray data that relies on the statistical consistency of relative expression levels among a core set of genes that is not identified in advance, but inferred from the data themselves. The normalization and variance estimation are both performed using local regression. We are then able to perform standard comparison tests. This technique reveals biologically plausible expression-level differences between control mesothelial cells and cells treated with a potent inducer of oxidative stress. The expression changes identified by our normalization methodology were confirmed by quantitative PCR in all cases but one, for which there was no detectable PCR amplification at all.
Our simulation studies show that our normalization technique performs well. The worst case occurs when the response curve is perfectly linear, the variance is constant and a large proportion of genes experiences sizable expression-level changes. Under these conditions, our method has a false-positive rate somewhat greater than nominal, and self-consistent normalization without local regression performs slightly better than that with local regression. On the other hand, our method is insensitive to bias and heteroscedasticity, both of which have a significant deleterious effect on the naive method. Furthermore, bias and heteroscedasticity are both measurably present in all data that we have examined, from microarrays from a number of different manufacturers and from several different laboratories. In these cases, local regression performs better than self-consistency alone. When the data are generated by an additive-plus-multiplicative error model, the naive method breaks down completely, whereas our method continues to perform well.
We have applied these methods to the analysis of microarray data in toxicogenomic studies [12,14], where the results made good biological sense and, where relevant, were confirmed by subsequent experimentation. All data-analytic techniques benefit from extensive use and assessment on several platforms and diverse biological systems. To facilitate this process for the methods described here, and to provide them to the interested research community, we have made the software used to implement them available for noncommercial use [13].
DNA hybridization microarrays promise unprecedented insight into many areas of cell biology, and statistical methods will be essential for making sense of the vast quantities of information contained in their data. Efficient and reliable normalization procedures are an indispensable component of any statistical method; further development and analysis of error models for microarray data will be a worthwhile investment.
Materials and methods
Clontech microarrays
This is a brief description of the experimental methods; complete details can be found in [12]. Immortalized rat peritoneal mesothelial cells (FredPe) developed in-house were grown in mesothelial cell culture media as previously described [12] for several months before the experiment, with weekly subculturing. Cells plated at 1 × 10^{7} cells/150 mm dish in 30 ml media were grown for 24 h and treated with the predetermined ED_{50} concentration of 6 mM KBrO_{3} for 4 or 12 h. Cells were detached using a cell lifter and centrifuged at 175g for 3 min. The supernatant (medium) was removed by aspiration and the cells were resuspended in 1 ml sterile PBS and frozen at -80°C until RNA extraction. The Atlas Pure Total RNA protocol for poly(A)^{+} mRNA extraction was used. Samples were hybridized in manufacturer-supplied hybridization solution (Clontech ExpressHyb) for 30 min at 68°C. After hybridization, the membranes were washed, removed, wrapped in plastic wrap, and placed against a rare-earth screen for 24 h, followed by phosphoimager detection and AtlasImage analysis before application of the software tools described in this paper.
Quantitative PCR
Confirmation by TaqMan (PerkinElmer) quantitative PCR was performed for ten selected genes as described in [12]. The genes selected for confirmation were those for cyclin D1, GADD45, GPX, HO-1, HSP70, Mdr1, QR, prostaglandin H synthase (PGHS), p21WAF1/CIP1 and PLA2. Two control and two treated samples from the 4-h time point, and two control and one treated sample from the 12-h time point, were analyzed. Each plate contained duplicate wells for each gene, and 16 no-template control (NTC) wells divided evenly among four quadrants.
Analysis
Software implementing the statistical estimation and testing procedures described above was written in FORTRAN and run on desktop PCs [13]. Additional statistical computations were performed using S-Plus 4.5 (MathSoft).
Additional data files
The additional data files, available with the online version of this article or from [13], consist of several files implementing the methods described here: NoSeCoLoR.exe is the executable file, compiled for Windows, for the program itself; NoSeCoLoRTheManual.pdf is the user's guide and contains information on input formatting and the interpretation of output files; README.txt contains instructions for installation and startup; there are also several sample input files and associated output files.
References
1. Fodor SP, Rava RP, Huang XC, Pease AC, Holmes CP, Adams CL: Multiplexed biochemical assays with biological chips. Nature. 1993, 364: 555-556. 10.1038/364555a0.
2. Schena M, Shalon D, Davis RW, Brown PO: Quantitative monitoring of gene expression patterns with a complementary DNA microarray. Science. 1995, 270: 467-470.
3. DeRisi J, Penland L, Brown PO, Bittner ML, Meltzer PS, Ray M, Chen Y, Su YA, Trent JM: Use of a cDNA microarray to analyze gene expression patterns in human cancer. Nat Genet. 1996, 14: 457-460.
4. Lockhart DJ, Dong H, Byrne MC, Follettie MT, Gallo MV, Chee MS, Mittmann M, Wang C, Kobayashi M, Horton H, Brown EL: Expression monitoring by hybridization to high-density oligonucleotide arrays. Nat Biotechnol. 1996, 14: 1675-1680.
5. DeRisi JL, Iyer VR, Brown PO: Exploring the metabolic and genetic control of gene expression on a genomic scale. Science. 1997, 278: 680-686. 10.1126/science.278.5338.680.
6. Iyer VR, Eisen MB, Ross DT, Schuler G, Moore T, Lee JCF, Trent JM, Staudt LM, Hudson J, Boguski MS, et al: The transcriptional program in the response of human fibroblasts to serum. Science. 1999, 283: 83-87.
7. Wodicka L, Dong H, Mittmann M, Ho MH, Lockhart DJ: Genome-wide expression monitoring in Saccharomyces cerevisiae. Nat Biotechnol. 1997, 15: 1359-1367.
8. Spellman PT, Sherlock G, Zhang MQ, Iyer VR, Anders K, Eisen MB, Brown PO, Botstein D, Futcher B: Comprehensive identification of cell cycle-regulated genes of the yeast Saccharomyces cerevisiae by microarray hybridization. Mol Biol Cell. 1998, 9: 3273-3297.
9. Cleveland WS, Devlin SJ: Locally weighted regression: an approach to regression analysis by local fitting. J Am Stat Assoc. 1988, 83: 596-610.
10. Loader CR: Local likelihood density estimation. Ann Statist. 1996, 24: 1602-1618. 10.1214/aos/1032298287.
11. Loader CR: Local Regression and Likelihood. New York: Springer-Verlag; 1999.
12. Crosby LM, Hyder KS, DeAngelo AB, Kepler TB, Gaskill B, Benavides GR, Yoon L, Morgan KT: Morphologic analysis correlates with gene expression changes in cultured F344 rat mesothelial cells. Toxicol Appl Pharmacol. 2000, 169: 205-221. 10.1006/taap.2000.9049.
13. NoSeCoLoR: normalization by self-consistency and local regression (software and documentation). [ftp://ftp.santafe.edu/pub/kepler/]
14. Morgan KT, Ni H, Brown HR, Yoon L, Qualls CW, Crosby LM, Reynolds R, Gaskill B, Anderson SP, Kepler TB, et al: Application of cDNA microarray technology to in vitro toxicology and the selection of genes for a real-time RT-PCR-based screen for oxidative stress in HepG2 cells. Toxicol Pathol. 2002.
Acknowledgements
This work was supported by grant number MCB 9357637 from the National Science Foundation (T.B.K.) and by a research grant from GlaxoWellcome, Inc. (T.B.K.).
Cite this article
Kepler, T.B., Crosby, L. & Morgan, K.T. Normalization and analysis of DNA microarray data by self-consistency and local regression. Genome Biol 3, research0037.1 (2002). https://doi.org/10.1186/gb-2002-3-7-research0037
Keywords
 Additional Data File
 Core Gene
 Local Regression
 KBrO3
 Naive Method