Supervised clustering of genes
- Marcel Dettling^{1} and
- Peter Bühlmann^{1}
DOI: 10.1186/gb-2002-3-12-research0069
© Dettling and Bühlmann, licensee BioMed Central Ltd 2002
Received: 6 June 2002
Accepted: 2 October 2002
Published: 25 November 2002
Abstract
Background
We focus on microarray data where experiments monitor gene expression in different tissues and where each experiment is equipped with an additional response variable such as a cancer type. Although the number of measured genes is in the thousands, it is assumed that only a few marker components of gene subsets determine the type of a tissue. Here we present a new method for finding such groups of genes by directly incorporating the response variables into the grouping process, yielding a supervised clustering algorithm for genes.
Results
An empirical study on eight publicly available microarray datasets shows that our algorithm identifies gene clusters with excellent predictive potential, often superior to classification with state-of-the-art methods based on single genes. Permutation tests and bootstrapping provide evidence that the output is reasonably stable and more than a noise artifact.
Conclusions
In contrast to other methods such as hierarchical clustering, our algorithm identifies several gene clusters whose expression levels clearly distinguish the different tissue types. The identification of such gene clusters is potentially useful for medical diagnostics and may at the same time reveal insights into functional genomics.
Background
Microarray technology allows the measurement of expression levels of thousands of genes simultaneously and is expected to contribute significantly to advances in fundamental questions of biology and medicine. We focus on the case where the experiments monitor the gene expression of different tissue samples, and where each experiment is equipped with an additional categorical outcome variable, describing, for example, a cancer type. An important problem in this setting is to study the relation between gene expression and tissue type. While microarrays monitor thousands of genes, it is assumed that only a few underlying marker components of gene subsets account for nearly all of the outcome variation - that is, determine the type of a tissue. The identification of these functional groups is crucial for tissue classification in medical diagnostics, as well as for understanding how the genome as a whole works.
As a first approach, unsupervised clustering techniques have been widely applied to find groups of co-regulated genes on microarray data. Hierarchical clustering [1,2] identifies sets of correlated genes with similar behavior across the experiments, but yields thousands of clusters in a tree-like structure. This makes the identification of functional groups very difficult. In contrast, self-organizing-maps [3] require a prespecified number and an initial spatial structure of clusters, but this may be hard to come up with in real problems. These drawbacks were improved by a novel graph theoretical clustering algorithm [4], but as with all other unsupervised techniques, it usually fails to reveal functional groups of genes that are of special interest in tissue classification. This is because genes are clustered by similarity only, without using any information about the experiment's response variables.
We focus here on supervised clustering, defined as grouping of variables (genes), controlled by information about the Y variables, that is, the tumor types of the tissues. Previous work in this field encompasses tree harvesting [5], a two-step method which consists first of generating numerous candidate groups by unsupervised hierarchical clustering. Then, the average expression profile of each cluster is considered as a potential input variable for a response model and the few gene groups that contain the most useful information for tissue discrimination are identified. Only this second step makes the clustering supervised, as the selection process relies on external information about the tissue types. An interesting supervised clustering approach that directly incorporates the response variables Y in the grouping process is the partial least squares (PLS) procedure [6,7], a tool often applied in the chemometrics literature, which in a supervised manner constructs weighted linear combinations of genes that have maximal covariance with the outcome. PLS has the drawback that the fitted components involve all (usually thousands of) genes, which makes them very difficult to interpret.
Here we present a promising new method for searching functional groups, each made up of only a few genes whose consensus expression profiles provide useful information for tissue discrimination. Like PLS, it is a one-step approach that directly incorporates the response variables Y into the grouping process, and is thus an algorithm for supervised clustering of genes. Because of the combinatorial complexity when clustering thousands of genes, we rely on a greedy strategy. It optimizes an empirical objective function that quickly and efficiently measures the cluster's ability for phenotype discrimination. Inspired by [8], we choose Wilcoxon's test statistic for two unpaired samples [9], refined by a novel second criterion, the margin function. Our supervised algorithm can be started with or without initial groups of genes, and then clusters genes in a stepwise forward and backward search, as long as their differential expression in terms of our objective function can be improved. This yields clusters typically made up of three to nine genes, whose coherent average expression levels allow perfect discrimination of tissue types. In an empirical study, the clusters show excellent out-of-sample predictive potential, and permutation and randomization techniques show that they are reasonably stable and clearly more than just a noise artifact. The output of our algorithm is thus potentially beneficial for cancer-type diagnosis. At the same time it is very accessible for interpretation, as the output consists of a very limited number of clusters, each summarizing the information about a few genes. Thus, it may also reveal insights into biological processes and give hints on explaining how the genome works.
We first describe our new algorithm for supervised clustering of gene-expression data and then apply the procedure to eight publicly available microarray datasets and test the results for their predictive potential, stability and relevance.
Results and discussion
Algorithm for supervised clustering of genes
This section presents an algorithm for supervised learning of similarities and interactions among predictor variables for classification in very high-dimensional spaces, and it is hence well suited to searching for functional groups of genes in microarray expression data.
The partitioning problem
Our basic stochastic model for microarray data equipped with categorical response is given by a random pair

(X, Y) ∈ ℝ^{p} × 𝒴,

where X ∈ ℝ^{p} denotes a log-transformed gene-expression profile of a tissue sample, standardized to mean zero and unit variance, and Y is the associated response variable, taking numeric values in 𝒴 = {0, 1,..., K-1}. A usual interpretation is that Y codes for one of K cancer types. For simplicity, and for a concise description of the algorithm, we first assume that K = 2, so that the response is binary. A generalization of the setting to multicategorical response (K > 2) is given below.
To account for the fact that not all p genes on the chip, but rather a few functional gene subsets, determine nearly all of the outcome variation and thus the type of a tissue, we model the conditional probability as

P[Y = 1 | X = x] = f(x_{C_1},..., x_{C_q}),     (1)

where f(·) is a nonlinear function mapping from ℝ^{q} to [0,1], and {C_1,..., C_q} with q << p are functional groups or clusters of genes which form a disjoint and usually incomplete partition of the index set: C_1 ∪ ... ∪ C_q ⊂ {1,..., p} and C_i ∩ C_j = Ø for i ≠ j. Finally, x_{C_i} ∈ ℝ denotes a 'representative' expression value of gene cluster C_i. There are many possibilities to determine such group values x_{C_i}, but as we would like to shape clusters that contain similar genes, a simple linear combination is a natural choice (see [5,10]):

x_{C_i} = (1/|C_i|) Σ_{g ∈ C_i} α_g x_g, with sign coefficients α_g ∈ {-1, +1}.     (2)
Because of the use of log-transformed, mean-centered and standardized expression data, we, as a novel extension, allow the contribution of a particular gene g to the group value also to be given by its 'sign-flipped' expression value -X_{ g }. This means that we treat under- and overexpression symmetrically, and it prevents the differential expression of genes with different polarity (that is, one with low expression for class 0 and the other with low expression for class 1) from canceling out when they are averaged. But even by using such simple cluster expression values as in Equation 2, finding a partition of the index set {1,..., p} into subsets or clusters {C_{1},..., C_{ q }} that virtually determine the probability structure is still highly non-trivial and the design of a procedure that reveals the exact partition according to Equation 1 is too ambitious. Thus, we have developed a computationally intensive procedure that approximately solves Equation 1 and empirically yields good results.
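As a small illustration of the group value in Equation 2, the following Python sketch averages the member genes of a cluster after flipping those with opposite polarity; `group_value` and the `signs` mapping are our own illustrative names, not part of the paper's software.

```python
import numpy as np

def group_value(X, cluster, signs):
    """Representative expression profile of a gene cluster (cf. Equation 2):
    the average over member genes, each optionally sign-flipped (alpha_g = -1)
    so that all members share the same polarity."""
    # X: p x n matrix (genes x tissues) of standardized log-expression values
    flipped = np.array([signs[g] * X[g] for g in cluster])
    return flipped.mean(axis=0)

# Two toy genes with opposite polarity across four tissues:
X = np.array([[-1.0, -1.0, 1.0, 1.0],     # low in class 0, high in class 1
              [ 1.0,  1.0, -1.0, -1.0]])  # high in class 0, low in class 1
signs = {0: 1, 1: -1}                     # sign-flip gene 1
print(group_value(X, [0, 1], signs))      # the two genes now reinforce each other
```

Without the sign-flip, the two genes would average to zero everywhere and their differential expression would cancel out, which is exactly the effect the α_g coefficients prevent.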
Clustering with scores and margins
A practical heuristic for gene clustering is the cluster affinity search technique (CAST) [4]. Our approach is algorithmically similar and also relies on growing the cluster incrementally by adding one gene after the other. Subsequent cleaning steps help us to remove spurious genes that were incorrectly added to the cluster at earlier stages. As in CAST, we repeat growth and removal until the cluster stabilizes, and then start a new cluster. The main, and very important, difference is that we do not augment (or shorten) the cluster by the gene that fits best (or worst) into the current cluster in terms of an unsupervised similarity measure; instead, our strategy for supervised clustering adds (or removes) the gene that most improves the differential expression of the current cluster, according to an empirical objective function evaluated on the representative group values from Equation 2. To be more explicit, we assume now that we are given n independent and identically distributed realizations
(x_1, y_1),..., (x_n, y_n), with x_j ∈ ℝ^{p} and y_j ∈ {0,1},     (3)
of the random vector (X, Y), whose expression profiles x_j are centered to mean zero and scaled to unit variance. The objective function needs to be a quantitative and efficiently computable measure of a cluster's ability to discriminate the tissues. As we aim for subsets of genes with accurate separation in binary problems, we rely on Wilcoxon's test statistic for two unpaired samples [9], which has also been applied as a nonparametric rank-based score function for genes in [8]. The score of a single gene i is computed from its n-dimensional vector of observed values ξ_i = (x_{i1},..., x_{in}),

s(ξ_i) = Σ_{j ∈ N_0} Σ_{j' ∈ N_1} 1_{[x_{ij'} < x_{ij}]},     (4)

where x_{ij} is the expression value of gene i for tissue j and N_k represents the set of the n_k tissues j ∈ {1,..., n} of type k ∈ {0,1}. The score uses information about the type of the tissues and is thus a criterion for supervised clustering. It can be interpreted as counting, for each experiment having response value 0, the number of tissues from class 1 that have smaller expression values, and summing up these quantities. The score of a gene cluster C_i is computed likewise via its observed representative values ξ_{C_i}. Viewed as Wilcoxon's test statistic, the score allows the ordering of genes and clusters according to their potential significance for tissue discrimination. If the expression values of a particular gene or cluster yield exact separation of the classes, the expression values for all tissue samples having response 0 are uniformly lower than those belonging to class 1, or vice versa. In the former case, the score function returns its minimal value s_min = 0; in the latter case, the maximum score s_max = n_0·n_1 is assigned.
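The counting interpretation of the score translates directly into code; the following Python sketch is illustrative, with our own function name.

```python
def score(xi, y):
    """Wilcoxon score of an expression vector xi under binary labels y:
    for every class-0 tissue, count the class-1 tissues with a strictly
    smaller expression value, and sum these counts."""
    class0 = [x for x, c in zip(xi, y) if c == 0]
    class1 = [x for x, c in zip(xi, y) if c == 1]
    return sum(1 for a in class0 for b in class1 if b < a)

y = [0, 0, 1, 1]
print(score([0.2, 0.5, 1.4, 2.0], y))  # perfect separation: minimal score 0
print(score([2.0, 1.4, 0.5, 0.2], y))  # reversed polarity: maximal score n0*n1 = 4
```

Intermediate values between 0 and n0·n1 indicate partial overlap of the two classes.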
The sign-flip is equivalent to setting α_g = -1 in Equation 2 for all genes that tend to have lower expression values for the tissues of type 1 than for the tissues of type 0. After the sign-flip, the scores of all individual genes i in the expression matrix are equal to

s(ξ̃_i) = min(s(ξ_i), s_max - s(ξ_i)),     (5)

where ξ̃_i denotes the (possibly sign-flipped) expression vector of gene i, and as all genes now have the same polarity, we can safely average them to compute group expression values. It is important to notice that the biological interpretation is not impeded by the sign-flips. Nevertheless, for interpretative purposes, the information about them should be recorded.
As a second criterion, we use the margin function

m(ξ_i) = min_{j ∈ N_1} x_{ij} - max_{j ∈ N_0} x_{ij},     (6)

where N_0, N_1 and x_{ij} are as in Equation 4. The margin function is positive if, and only if, the score is zero and the classes are thus perfectly separated; otherwise it is negative. It measures the size of the gap between the lowest expression value from tissues with response 1 and the highest gene expression corresponding to class 0. The larger this gap, and hence the value of the margin function, the easier and clearer the discrimination of the two classes. The margin of a cluster is again computed likewise via ξ_C. Whenever various gene or cluster expression profiles have equal scores, their quality is judged by the margin function. Our objective function thus has two components: the score function has the highest priority, whereas the margin function serves as the second criterion to achieve uniqueness.
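A minimal sketch of the margin computation, in illustrative Python with our own function name:

```python
def margin(xi, y):
    """Margin of an expression vector: the gap between the lowest class-1
    value and the highest class-0 value. Positive exactly when the classes
    are perfectly separated (score 0), negative otherwise."""
    lowest_1 = min(x for x, c in zip(xi, y) if c == 1)
    highest_0 = max(x for x, c in zip(xi, y) if c == 0)
    return lowest_1 - highest_0

y = [0, 0, 1, 1]
print(margin([0.2, 0.5, 1.4, 2.0], y))  # separated: gap of about 0.9
print(margin([0.2, 1.5, 1.4, 2.0], y))  # overlapping classes: negative margin
```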
The algorithm
- 1.
Start with the entire p × n expression matrix X. Its rows are genes, and its columns are observations of two different tissue types, having zero mean and unit variance.
- 2.
Determine the score of every gene i, that is, of every n-dimensional row of observed expression values ξ_i = (x_{i1},..., x_{in}) in X, as in Equation 4. Flip the sign of each gene-expression vector ξ_i that has score s(ξ_i) > s_max/2 by multiplying it by (-1), that is, ξ̃_i = -ξ_i. This operation changes the score to s(ξ̃_i) = min(s(ξ_i), s_max - s(ξ_i)), as in Equation 5.
- 3.
Composition of the starting values
- (a)
If no initial cluster C is given, identify the gene i* having the lowest score s(ξ̃_i). If more than one is found, the gene i* with the largest margin m(ξ̃_i) as in Equation 6 is chosen. Set the initial cluster mean ξ_C equal to the expression vector ξ̃_{i*} of the chosen gene.
- (b)
If an initial cluster C is given, average the expression of the genes therein,

ξ_C = (1/|C|) Σ_{g ∈ C} ξ̃_g.
- 4.
Forward search
Average the current cluster expression profile ξ_C with each individual gene i,

ξ_{C+i} = (1/(|C| + 1)) (|C| · ξ_C + ξ̃_i).

Identify the winning gene i* as arg min_i s(ξ_{C+i}), that is, the gene that leads to the lowest score. If it is not unique, identify the winning gene i* as the one that optimizes both score and margin; that is, i* = arg min_i s(ξ_{C+i}) as well as i* = arg max_i m(ξ_{C+i}).
- 5.
Repeat step 4 until the identified gene i* is no longer accepted to enter the cluster. This happens if the score of the updated cluster expression vector ξ_{C+i*} worsens, that is, s(ξ_{C+i*}) > s(ξ_C), or if the score remains unchanged and the margin deteriorates, that is, s(ξ_{C+i*}) = s(ξ_C) and m(ξ_{C+i*}) < m(ξ_C).
- 6.
Backward search
Exclude each gene i of the current cluster C separately, and average the expression vectors of the remaining genes,

ξ_{C-i} = (1/(|C| - 1)) Σ_{g ∈ C, g ≠ i} ξ̃_g.
Compute score and margin of each ξ_{C-i}. Identify (as in step 4) that gene i* whose exclusion optimizes the score, or if not unique, optimizes score and margin.
- 7.
Repeat step 6 until the exclusion of the identified gene i* is (according to the formulation in step 5) no longer accepted.
- 8.
Repeat steps 4-7 until the cluster converges and the objective function is optimal.
- 9.
If more than one cluster C is desired, discard the genes in the former clusters from X and restart the algorithm at step 3 with the reduced, sign-flipped expression matrix.
The algorithm begins with the sign-flip operation described in Equation 5 to bring all genes to the same polarity. The clustering process can be started with or without initial gene clusters. If none are given, we start the procedure with the single gene that optimizes the objective function. Otherwise, the representative value of the starting cluster is determined. We then proceed by constructing the cluster incrementally. Searching among all genes, we merge and average the current cluster with one single gene, such that the augmented cluster optimizes our objective function, that is, has the lowest score or (in case of ties) the largest margin. The merging process is repeated until the objective function can no longer be improved. To remove spurious elements from the current cluster, we then continue with a backward pruning stage, where genes are excluded step by step so that the objective function is optimized by every single removal. This cleaning stage aims to root out genes that were wrongly added to the cluster before. The forward and backward stages are repeated until the cluster converges, that is, when no further improvement of the objective function by adding or removing single genes is possible.
If one wishes to have more than q = 1 cluster for a binary class distinction, the genes forming the first cluster are discarded from the expression matrix, and the clustering process is restarted, again with or without an initial cluster. The algorithm's computations are feasible for dimensions p and sample sizes n which are clearly beyond today's common orders, and hence the algorithm is also applicable to future microarray experiments. The computing time for searching q = 5 clusters in the binary leukemia dataset with n = 72 observations and p = 3,571 genes on a Linux PC with an Intel Pentium IV 1.6 GHz processor is only about 5 seconds. Software for the supervised clustering algorithm is available free as an R-Package at [11].
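Steps 2-8 can be sketched compactly as follows. This is a simplified Python version, not the published R package: all names are ours, and we break ties by accepting only strict improvements of the lexicographic (score, margin) objective, which guarantees termination.

```python
import numpy as np

def wilcoxon_score(xi, y):
    """Equation 4: for each class-0 tissue, count smaller class-1 values."""
    return int(sum((xi[y == 1] < a).sum() for a in xi[y == 0]))

def margin(xi, y):
    """Equation 6: gap between lowest class-1 and highest class-0 value."""
    return xi[y == 1].min() - xi[y == 0].max()

def objective(xi, y):
    """Lexicographic objective: minimize score, then maximize margin."""
    return (wilcoxon_score(xi, y), -margin(xi, y))

def supervised_cluster(X, y):
    """Grow one gene cluster on a p x n expression matrix X with binary y."""
    p, _ = X.shape
    s_max = int((y == 0).sum() * (y == 1).sum())
    # Step 2: sign-flip genes so that all have the same polarity.
    Xf = np.array([-xi if wilcoxon_score(xi, y) > s_max / 2 else xi for xi in X])
    # Step 3: start with the single gene optimizing score, then margin.
    cluster = [min(range(p), key=lambda i: objective(Xf[i], y))]
    changed = True
    while changed:                       # step 8: iterate until convergence
        changed = False
        while True:                      # steps 4-5: forward search
            candidates = [i for i in range(p) if i not in cluster]
            if not candidates:
                break
            best = min(candidates,
                       key=lambda i: objective(Xf[cluster + [i]].mean(axis=0), y))
            if objective(Xf[cluster + [best]].mean(axis=0), y) < \
               objective(Xf[cluster].mean(axis=0), y):
                cluster.append(best)
                changed = True
            else:
                break
        while len(cluster) > 1:          # steps 6-7: backward search
            best = min(cluster,
                       key=lambda i: objective(
                           Xf[[g for g in cluster if g != i]].mean(axis=0), y))
            reduced = [g for g in cluster if g != best]
            if objective(Xf[reduced].mean(axis=0), y) < \
               objective(Xf[cluster].mean(axis=0), y):
                cluster = reduced
                changed = True
            else:
                break
    return sorted(cluster)
```

On data containing at least one perfectly separating gene, the returned cluster's averaged (sign-flipped) profile necessarily keeps score 0, since only strict improvements are ever accepted.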
In summary, our cluster algorithm is a combination of variable (gene) selection for cluster membership and formation of a new predictor by possible sign-flipping and averaging the gene expressions within a cluster as in Equation 2. The cluster membership is determined with a forward and backward searching technique that optimizes the predictive score and margin criteria in Equations 4 and 6, which both involve the supervised response variables from the data.
Generalization for multiclass problems
Here we explain the extension of the supervised clustering algorithm to multicategory (K > 2) problems, where the response comprises more than two tissue types. We recommend comparing each response class separately against all other classes. This one-against-all approach for reduction to K binary problems is very popular in the machine-learning community, as many algorithms are designed solely for binary response. It works by defining

Y^{(k)} = 1 if Y = k, and Y^{(k)} = 0 otherwise, for k = 0,..., K-1,

and running the supervised clustering algorithm K times, once on each (X, Y^{(k)}), as explained above. The interpretation is that we, as in Equation 1, model the conditional probability for discrimination of the kth class versus all the other response categories as depending on a few gene subsets only,

P[Y^{(k)} = 1 | X = x] = f_k(x_{C_1^{(k)}},..., x_{C_q^{(k)}}),

where the f_k(·) are nonlinear functions mapping from ℝ^{q} to [0,1], the C_1^{(k)},..., C_q^{(k)} are the q << p functional groups of genes, and the x_{C_i^{(k)}} are their representative group values, defined as in Equation 2. When the supervised clustering algorithm is applied to each of the K binary class distinctions, this results in a total of K·q clusters, which can then be used to model the conditional probability for the K-class response.
It is important to notice that instead of considering each class against all the other classes, many more ways to reduce a multi-class problem to multiple binary problems exist (see [12,13] for a thorough discussion). We assume that problem-dependent solutions that utilize deeper knowledge about the biological relation between the tissue types could be even more accurate for reducing multicategory problems to binary problems.
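The one-against-all reduction itself is a one-liner; the following illustrative Python sketch (names ours) builds the K binary response vectors:

```python
def one_against_all(y, K):
    """Reduce a K-class response to K binary responses: the k-th binary
    response is 1 for tissues of type k and 0 for all other tissues."""
    return [[1 if yj == k else 0 for yj in y] for k in range(K)]

y = [0, 2, 1, 0, 2]                      # K = 3 tissue types
for k, yk in enumerate(one_against_all(y, 3)):
    print(k, yk)                         # one binary problem per class
```

Each of the K binary responses would then be fed to the supervised clustering algorithm in turn.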
Numerical results
Data
Leukemia dataset. This dataset contains gene expression levels of n = 72 patients either suffering from acute lymphoblastic leukemia (ALL, 47 cases) or acute myeloid leukemia (AML, 25 cases) and was obtained from Affymetrix oligonucleotide microarrays. For more information see [14]; the data are available at [15]. Following exactly the protocol in [16], we preprocess the data by thresholding, filtering, a logarithmic transformation, and standardization, so that they finally comprise the expression values of p = 3,571 genes.
Breast cancer dataset. This dataset, described in [17], monitors p = 7,129 genes in 49 breast tumor samples. The data were obtained by applying the Affymetrix technology and are available at [18]. We thresholded the raw data with a floor of 100 and a ceiling of 16,000 before applying a base 10 logarithmic transformation. Finally, each experiment was standardized to zero mean and unit variance. The response variable describes the status of the estrogen receptor (ER). According to [17], two samples failed to hybridize correctly and were excluded from their analysis. In five cases, two different clinical tests for determination of the ER status yielded conflicting results. These five plus another four randomly chosen samples were also separated from the rest of the data, so that a dataset of n = 38 samples remained, of which 18 were ER-positive and 20 ER-negative.
Colon cancer dataset. In this dataset, expression levels of 40 tumor and 22 normal colon tissues for 6,500 human genes are measured using the Affymetrix technology. A selection of 2,000 genes with highest minimal intensity across the samples was made in [19]. The data are available at [20]. As for all other datasets, we process these data further by carrying out a base 10 logarithmic transformation and standardizing each tissue sample to zero mean and unit variance across the genes.
Prostate cancer dataset. The raw data are available at [15] and comprise the expression of 52 prostate tumors and 50 non-tumor prostate samples, obtained using the Affymetrix technology. We use normalized and thresholded data as described in [21]. We also excluded genes whose expression varied less than fivefold relatively, or less than 500 units absolutely, between the samples, leaving us with the expression of p = 6,033 genes. Finally, we applied a base 10 logarithmic transformation and standardized each experiment to zero mean and unit variance across the genes.
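The preprocessing shared by several of these datasets can be sketched as follows. This is an illustrative Python version; the floor and ceiling defaults follow the breast cancer protocol above, while other datasets use different thresholds and additional filtering steps.

```python
import numpy as np

def preprocess(raw, floor=100.0, ceiling=16000.0):
    """Threshold raw intensities, take base-10 logarithms, and standardize
    each tissue sample (column) to zero mean and unit variance across genes."""
    x = np.clip(raw, floor, ceiling)             # thresholding
    x = np.log10(x)                              # log transformation
    return (x - x.mean(axis=0)) / x.std(axis=0)  # per-sample standardization

# Toy 3-gene x 2-sample intensity matrix:
raw = np.array([[50.0, 300.0], [1000.0, 20000.0], [400.0, 800.0]])
z = preprocess(raw)                              # columns now have mean 0, sd 1
```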
SRBCT dataset. This was described in [22] and contains gene-expression profiles for classifying small round blue-cell tumors of childhood (SRBCT) into four classes (neuroblastoma, rhabdomyosarcoma, non-Hodgkin lymphoma, Ewing family of tumors) and was obtained from cDNA microarrays. A training set comprising 63 SRBCT tissues, as well as a test set consisting of 20 SRBCT and 5 non-SRBCT samples are available at [23]. Each tissue sample is associated with a thoroughly preprocessed expression profile of p = 2,308 genes, already standardized to zero mean and unit variance across genes.
Lymphoma dataset. This dataset is available at [24] and contains gene-expression levels of the K = 3 most prevalent adult lymphoid malignancies: 42 samples of diffuse large B-cell lymphoma (DLBCL, class 0), 9 observations of follicular lymphoma (FL, class 1), and 11 cases of chronic lymphocytic leukemia (CLL, class 2). The total sample size is n = 62, and the expression of p = 4,026 well-measured genes, preferentially expressed in lymphoid cells or with known immunological or oncological importance, is documented. More information on these data can be found in [25]. We imputed missing values and standardized the data as described in [16].
Brain tumor dataset. This dataset, presented in [26], contains n = 42 microarray gene-expression profiles from K = 5 different tumors of the central nervous system, that is, 10 medulloblastomas, 10 malignant gliomas, 10 atypical teratoid/rhabdoid tumors (AT/RTs), 8 primitive neuro-ectodermal tumors (PNETs) and 4 human cerebella. The raw data were generated using the Affymetrix technology and are publicly available at [15]. For data preprocessing, we followed the protocol in the supplementary information to [26]. After thresholding, filtering, a logarithmic transformation and standardization of each experiment to zero mean and unit variance, a dataset comprising p = 5,597 genes remained.
National Cancer Institute (NCI) dataset. This comprises gene-expression levels of p = 5,244 genes for n = 61 human tumor cell lines which can be divided into K = 8 classes: seven breast, five CNS, seven colon, six leukemia, eight melanoma, nine non-small-cell lung carcinoma, six ovarian and nine renal tumors. A more detailed description of the data can be found at [27] and in [28]. We work with preprocessed data as in [16].
Results from the supervised clustering algorithm
Permutation test
This section is concerned with assessing relevance and addresses the question of whether or not the promising output of the clustering procedure is a noise artifact. For this purpose, we explore quality measures of clusters generated from gene-expression data with randomly permuted responses and compare them to the results obtained with the original data. As the distributions of the score function s(·) and the margin function m(·) under noise are not known, we rely on simulations. Let (y_1,..., y_n) be the original set of responses. Then,

(y_1*^{(l)},..., y_n*^{(l)}), l = 1,..., L,

is a 'shuffled' set of responses, constructed from the original response set by a random permutation for each l = 1,..., L. We then allocate an element of the permuted response to each of the (fixed) gene-expression profiles x_j, giving us independent and identically distributed pairs

(x_j, y_j*^{(l)}), j = 1,..., n.

For each of the L shuffled response sets, the supervised clustering algorithm is rerun, and the score s*^{(l)} and the margin m*^{(l)} of the resulting first cluster are recorded; s^{(0)} and m^{(0)} denote the corresponding values obtained with the original responses.
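The shuffling scheme can be sketched as follows in illustrative Python; `first_cluster_margin` is a hypothetical stand-in for rerunning the full supervised clustering algorithm on a shuffled response set and returning the margin of its first cluster (the score is handled analogously).

```python
import random

def permutation_null(y, L, first_cluster_margin, seed=0):
    """Build the null distribution of the margin by rerunning the clustering
    on L randomly permuted copies of the response vector y."""
    rng = random.Random(seed)
    null_margins = []
    for _ in range(L):
        y_perm = list(y)
        rng.shuffle(y_perm)                  # a 'shuffled' set of responses
        null_margins.append(first_cluster_margin(y_perm))
    return null_margins

def p_value(observed, null_margins):
    """Fraction of permutations reaching at least the observed margin."""
    return sum(m >= observed for m in null_margins) / len(null_margins)
```

Comparing m^{(0)} from the original responses against this null distribution gives the permutation p-value reported in the tables below.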
Margin statistics
Margins | m^{(0)} | max_l(m*^{(l)}) | med_l(m*^{(l)}) | min_l(m*^{(l)})
---|---|---|---|---
Leukemia | 0.20 | 0.05 | -0.01 | -2.41
Breast cancer | 1.29 | 0.23 | 0.04 | -0.82
Prostate | 0.05 | 0.02 | -0.04 | -0.90
Colon | 0.08 | 0.05 | -0.12 | -1.39
SRBCT | 1.00 | 0.11 | -0.06 | -1.16
Lymphoma | 1.65 | 0.14 | 0.01 | -1.16
Brain | 1.03 | 0.32 | 0.09 | -0.29
NCI | 2.52 | 0.44 | 0.12 | -0.91
Scores
Scores | s^{(0)} | min_l(s*^{(l)}) | max_l(s*^{(l)}) | Number of (s*^{(l)} = 0)/L
---|---|---|---|---
Leukemia | 0 | 0 | 279 | 0.41
Breast cancer | 0 | 0 | 43 | 0.91
Prostate | 0 | 0 | 566 | 0.17
Colon | 0 | 0 | 164 | 0.11
SRBCT | 0 | 0 | 148 | 0.26
Lymphoma | 0 | 0 | 78 | 0.67
Brain | 0 | 0 | 11 | 0.98
NCI | 0 | 0 | 13 | 0.95
The fact that the score has highly non-significant p-values is at first sight surprising. The reason is that the cluster expression values x_{C,j} in Equation 2 are highly dependent among the samples j = 1,..., n via the responses y_j, which enter both the supervised estimate of the cluster C = C(y_1,..., y_n) and the sign coefficients α_g = α_g(y_1,..., y_n). This strong interdependence causes the unusual phenomenon that the null distribution, assuming no association between the expression values X and the response Y, has a substantial probability of scoring zero. The margin statistic in Equation 6 has much better power properties than the score.
Predictive potential
In this section, we evaluate the predictive potential of the supervised clustering algorithm's output to see whether it successfully reveals functional groups of genes. A predictor or classifier for K different tissue types is a function C(·) that assigns a class label ŷ ∈ {0,..., K-1} to each observed feature vector x. More precisely, the classification rule here is based on the average cluster expression values x = (x_{C_1}^{(0)},..., x_{C_q}^{(K-1)}) as K·q features.
In practice, the classifier is built from a learning set of tissues whose class labels are known. Subsequently, it can be used to predict the class labels of new tissues with unknown outcome. There are various methods to build classification rules based on past experience; we restrict ourselves here to two relatively simple methods that are well suited to our purpose.
Nearest-neighbor classification. An easy-to-implement and, compared to more sophisticated methods, impressively competitive classifier for microarray data is the k-nearest-neighbor rule [29]. It is based on a distance function d(·,·) for pairs x and x' of feature vectors. As we consider standardized gene-expression data here, the Euclidean distance function

d(x, x') = (Σ_i (x_i - x'_i)^2)^{1/2}

is a reasonable choice. Then, for each new feature vector, the k closest feature vectors from the tissues in the learning data are identified, and the predicted class is given by a majority vote of the associated responses of these k closest neighbors. We found the choice of k = 1 neighbor to be appropriate, but a more data-driven determination of k via cross-validation would be possible.
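A minimal sketch of this rule in illustrative Python (names ours); the feature vectors stand for the average cluster expression values described above.

```python
import math
from collections import Counter

def knn_predict(train_x, train_y, x, k=1):
    """k-nearest-neighbor rule: find the k training feature vectors closest
    to x in Euclidean distance and take a majority vote of their labels."""
    order = sorted(range(len(train_x)), key=lambda j: math.dist(train_x[j], x))
    votes = Counter(train_y[j] for j in order[:k])
    return votes.most_common(1)[0][0]

# Toy 2-dimensional feature vectors for two tissue classes:
train_x = [(-1.0, -1.2), (-0.8, -1.0), (1.1, 0.9), (0.9, 1.2)]
train_y = [0, 0, 1, 1]
print(knn_predict(train_x, train_y, (1.0, 1.0), k=1))  # -> 1
```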
Aggregated trees. Another approach that proved very fruitful in our setting is as follows. If the conditional probabilities p_k(x) = P[Y^{(k)} = 1 | X = x] were known, which specify how likely it is that a tissue with feature vector x belongs to the kth rather than one of the other classes, the classifier function would be

C(x) = arg max_{k ∈ {0,..., K-1}} p_k(x).     (7)

In practice, the p_k(x) are unknown; we estimate them from classification trees fitted to the cluster expression values and aggregate the results across clusters, yielding estimated conditional probabilities, which can be plugged into Equation 7 for maximum-likelihood classification.
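The final classification step of Equation 7 is a simple arg max over the estimated one-against-all probabilities, however they are obtained; an illustrative Python sketch:

```python
def classify(p_hat):
    """Equation 7: assign the class k whose estimated one-against-all
    probability p_k(x) is largest (maximum-likelihood classification)."""
    return max(range(len(p_hat)), key=lambda k: p_hat[k])

print(classify([0.1, 0.7, 0.2]))  # -> 1
```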
Misclassification rates based on leave-one-out cross-validation

Leukemia | q = 1 | q = 2 | q = 3 | q = 5 | q = 10 | q = 15 | q = 20
---|---|---|---|---|---|---|---
Nearest neighbor | 5.56% | 5.56% | 4.17% | 2.78% | 2.78% | 2.78% | 2.78%
Aggregated trees | 5.56% | 5.56% | 1.39% | 1.39% | 2.78% | 2.78% | 2.78%

Breast | q = 1 | q = 2 | q = 3 | q = 5 | q = 10 | q = 15 | q = 20
---|---|---|---|---|---|---|---
Nearest neighbor | 0.00% | 0.00% | 0.00% | 0.00% | 0.00% | 0.00% | 0.00%
Aggregated trees | 0.00% | 0.00% | 0.00% | 0.00% | 0.00% | 0.00% | 0.00%

Prostate | q = 1 | q = 2 | q = 3 | q = 5 | q = 10 | q = 15 | q = 20
---|---|---|---|---|---|---|---
Nearest neighbor | 13.73% | 7.84% | 4.90% | 6.86% | 4.90% | 4.90% | 5.88%
Aggregated trees | 13.73% | 13.73% | 6.86% | 8.82% | 6.86% | 5.88% | 5.88%

Colon | q = 1 | q = 2 | q = 3 | q = 5 | q = 10 | q = 15 | q = 20
---|---|---|---|---|---|---|---
Nearest neighbor | 27.42% | 22.58% | 22.58% | 19.35% | 16.13% | 17.74% | 19.35%
Aggregated trees | 27.42% | 29.03% | 19.35% | 19.35% | 16.13% | 17.74% | 17.74%

SRBCT | q = 1 | q = 2 | q = 3 | q = 5 | q = 10 | q = 15 | q = 20
---|---|---|---|---|---|---|---
Nearest neighbor | 0.00% | 0.00% | 0.00% | 0.00% | 0.00% | 0.00% | 1.59%
Aggregated trees | 3.17% | 0.00% | 0.00% | 0.00% | 1.59% | 1.59% | 1.59%

Lymphoma | q = 1 | q = 2 | q = 3 | q = 5 | q = 10 | q = 15 | q = 20
---|---|---|---|---|---|---|---
Nearest neighbor | 3.23% | 1.61% | 1.61% | 1.61% | 0.00% | 0.00% | 0.00%
Aggregated trees | 3.23% | 1.61% | 1.61% | 1.61% | 0.00% | 0.00% | 0.00%

Brain | q = 1 | q = 2 | q = 3 | q = 5 | q = 10 | q = 15 | q = 20
---|---|---|---|---|---|---|---
Nearest neighbor | 30.95% | 23.81% | 19.05% | 16.67% | 19.05% | 16.67% | 16.67%
Aggregated trees | 42.86% | 23.81% | 21.43% | 19.05% | 14.29% | 11.90% | 11.90%

NCI | q = 1 | q = 2 | q = 3 | q = 5 | q = 10 | q = 15 | q = 20
---|---|---|---|---|---|---|---
Nearest neighbor | 40.98% | 40.98% | 36.07% | 29.51% | 24.59% | 27.87% | 26.23%
Aggregated trees | 49.18% | 47.54% | 39.34% | 29.51% | 21.31% | 21.31% | 19.67%
Misclassification rates based on random splitting
Leukemia | q = 1 | q = 2 | q = 3 | q = 5 | q = 10 | q = 15 | q = 20
---|---|---|---|---|---|---|---
Nearest neighbor | 6.58% | 4.62% | 4.21% | 3.75% | 3.33% | 3.38% | 3.25%
Aggregated trees | 6.58% | 6.12% | 3.71% | 3.54% | 2.79% | 2.71% | 2.62%

Breast | q = 1 | q = 2 | q = 3 | q = 5 | q = 10 | q = 15 | q = 20
---|---|---|---|---|---|---|---
Nearest neighbor | 1.00% | 0.75% | 0.75% | 1.00% | 0.83% | 1.00% | 1.00%
Aggregated trees | 1.00% | 1.58% | 1.67% | 2.33% | 2.58% | 2.42% | 3.00%

Prostate | q = 1 | q = 2 | q = 3 | q = 5 | q = 10 | q = 15 | q = 20
---|---|---|---|---|---|---|---
Nearest neighbor | 14.47% | 11.68% | 9.62% | 7.97% | 7.26% | 6.94% | 6.91%
Aggregated trees | 14.47% | 16.47% | 10.32% | 8.79% | 8.12% | 8.00% | 7.79%

Colon | q = 1 | q = 2 | q = 3 | q = 5 | q = 10 | q = 15 | q = 20
---|---|---|---|---|---|---|---
Nearest neighbor | 23.35% | 20.35% | 19.10% | 16.95% | 16.45% | 16.05% | 15.95%
Aggregated trees | 23.35% | 21.80% | 19.70% | 18.10% | 16.95% | 16.20% | 16.45%

SRBCT | q = 1 | q = 2 | q = 3 | q = 5 | q = 10 | q = 15 | q = 20
---|---|---|---|---|---|---|---
Nearest neighbor | 1.33% | 0.48% | 0.43% | 0.48% | 0.76% | 0.95% | 1.05%
Aggregated trees | 5.76% | 0.95% | 0.71% | 1.10% | 1.76% | 1.90% | 2.14%

Lymphoma | q = 1 | q = 2 | q = 3 | q = 5 | q = 10 | q = 15 | q = 20
---|---|---|---|---|---|---|---
Nearest neighbor | 2.15% | 2.20% | 1.50% | 0.85% | 0.65% | 0.50% | 0.50%
Aggregated trees | 3.45% | 2.45% | 1.40% | 0.80% | 0.25% | 0.20% | 0.30%

Brain | q = 1 | q = 2 | q = 3 | q = 5 | q = 10 | q = 15 | q = 20
---|---|---|---|---|---|---|---
Nearest neighbor | 31.21% | 27.50% | 26.36% | 24.71% | 23.86% | 23.71% | 23.36%
Aggregated trees | 35.43% | 28.43% | 24.43% | 22.14% | 19.64% | 18.29% | 16.86%

NCI | q = 1 | q = 2 | q = 3 | q = 5 | q = 10 | q = 15 | q = 20
---|---|---|---|---|---|---|---
Nearest neighbor | 45.25% | 40.25% | 37.90% | 34.80% | 32.10% | 30.50% | 29.65%
Aggregated trees | 51.85% | 42.35% | 38.05% | 34.05% | 29.30% | 27.75% | 26.50%
We observe that the error estimates obtained from random splitting are slightly higher than those from leave-one-out cross-validation. We also see that introducing some redundancy into the discrimination process by using additional clusters, that is, increasing q, yields better performance; of course, too large a value of q would lead to overfitting.
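The random-splitting protocol behind the tables above can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the dataset is repeatedly split into training and test parts, a classifier (here a plain one-nearest-neighbor rule standing in for the classifiers of the tables) is fit on the training part, and the test misclassification rates are averaged. The split fraction and the number of splits are assumptions.

```python
import random

def nearest_neighbor_predict(train_x, train_y, x):
    """Classify x by the label of its nearest training point (1-NN, squared Euclidean)."""
    def dist(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    best = min(range(len(train_x)), key=lambda i: dist(train_x[i], x))
    return train_y[best]

def random_split_error(x, y, n_splits=50, test_frac=1 / 3, seed=0):
    """Average the test-set misclassification rate over repeated random splits."""
    rng = random.Random(seed)
    n = len(x)
    rates = []
    for _ in range(n_splits):
        idx = list(range(n))
        rng.shuffle(idx)                      # random partition of the samples
        n_test = max(1, round(test_frac * n))
        test, train = idx[:n_test], idx[n_test:]
        tx = [x[i] for i in train]
        ty = [y[i] for i in train]
        wrong = sum(nearest_neighbor_predict(tx, ty, x[j]) != y[j] for j in test)
        rates.append(wrong / n_test)
    return sum(rates) / n_splits
```

In the paper, the features fed to the classifier would be the q cluster-mean expression profiles rather than raw coordinates; the splitting and averaging logic is the same.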
Benchmark misclassification rates
Leukemia | q = 1 | q = 2 | q = 3 | q = 5 | q = 10 | q = 15 | q = 20 |
Nearest neighbor | 6.33% | 4.79% | 4.50% | 4.08% | 3.67% | 3.75% | 3.79% |
Aggregated trees | 8.50% | 6.04% | 4.54% | 3.92% | 4.83% | 6.79% | 8.46% |
Breast | q = 1 | q = 2 | q = 3 | q = 5 | q = 10 | q = 15 | q = 20 |
Nearest neighbor | 1.08% | 0.83% | 0.92% | 1.17% | 1.33% | 1.50% | 1.58% |
Aggregated trees | 5.42% | 2.50% | 1.83% | 2.42% | 4.17% | 5.42% | 8.33% |
Prostate | q = 1 | q = 2 | q = 3 | q = 5 | q = 10 | q = 15 | q = 20 |
Nearest neighbor | 13.24% | 10.68% | 9.15% | 8.44% | 7.76% | 8.18% | 7.85% |
Aggregated trees | 25.47% | 21.29% | 18.56% | 17.44% | 16.65% | 17.65% | 18.94% |
Colon | q = 1 | q = 2 | q = 3 | q = 5 | q = 10 | q = 15 | q = 20 |
Nearest neighbor | 23.40% | 21.95% | 20.15% | 18.90% | 16.65% | 16.25% | 15.70% |
Aggregated trees | 30.95% | 29.70% | 30.20% | 31.20% | 33.55% | 34.15% | 34.90% |
SRBCT | q = 1 | q = 2 | q = 3 | q = 5 | q = 10 | q = 15 | q = 20 |
Nearest neighbor | 1.76% | 0.86% | 0.81% | 1.05% | 1.19% | 1.43% | 1.48% |
Aggregated trees | 4.38% | 2.00% | 2.62% | 3.95% | 6.48% | 6.95% | 8.43% |
Lymphoma | q = 1 | q = 2 | q = 3 | q = 5 | q = 10 | q = 15 | q = 20 |
Nearest neighbor | 2.43% | 2.29% | 1.76% | 1.05% | 0.81% | 0.81% | 0.86% |
Aggregated trees | 4.38% | 2.81% | 2.10% | 1.00% | 0.81% | 1.05% | 1.24% |
Brain | q = 1 | q = 2 | q = 3 | q = 5 | q = 10 | q = 15 | q = 20 |
Nearest neighbor | 30.79% | 29.07% | 29.50% | 27.57% | 28.50% | 28.00% | 27.50% |
Aggregated trees | 40.14% | 35.29% | 34.64% | 33.50% | 34.36% | 34.79% | 35.29% |
NCI | q = 1 | q = 2 | q = 3 | q = 5 | q = 10 | q = 15 | q = 20 |
Nearest neighbor | 39.63% | 34.89% | 32.84% | 31.95% | 30.68% | 29.74% | 28.95% |
Aggregated trees | 56.58% | 49.53% | 44.84% | 42.42% | 39.21% | 39.05% | 37.79% |
Classification of the breast cancer validation sample
Tumor | 14 | 31 | 33 | 44 | 45 | 46 | 47 | 48 | 49 |
---|---|---|---|---|---|---|---|---|---|
Status | Neg? | Neg? | Neg? | Neg | Pos? | Pos? | Pos | Pos | Neg |
Prediction | Neg | Neg | Neg | Neg | Pos | Pos | Pos | Pos | Neg |
Not only are the results on the validation sample very convincing; the cross-validation on the n = 38 training tissues is also error-free. This compares favorably with the results in [17], obtained with preceding feature selection, singular value decomposition and Bayesian binary regression, where 7 of 9 tissues in the validation sample and 36 of 38 tissues in the training sample were accurately predicted. Moreover, our result confirms that the breast cancer expression matrix contains a strong signal for discriminating the ER status.
We next used our method to classify the original 34 test samples in the leukemia dataset. We applied the supervised clustering algorithm to the n = 38 training data, on which we also fit the best predictor from our random splitting study (aggregated trees with q = 20 clusters as input features) as the classifier for the independent samples. Our predictions turned out to be error-free, a result that compares directly with [14], where 29 of 34 observations were classified correctly by a weighted voting scheme. With support vector machines, between 30 and 32 correct classifications were reported [33]. Moreover, full leave-one-out cross-validation on the n = 38 training data (results not shown) yielded perfect classification for various values of q; the performance of cross-validation on the entire dataset with n = 72 observations is also competitive, compared, for example, with [34].
The SRBCT data contain an additional test set of 20 SRBCT and 5 non-SRBCT samples. We first classified the 20 SRBCT tissues with the best classifier from the random splitting study on the n = 63 training samples, the nearest-neighbor method with q = 3 clusters as input. The predictions turned out to be error-free, matching the perfect classification with artificial neural networks and principal components in [22], as well as the correct diagnosis obtained with multicategory support vector machines in [35]. As aggregated trees and the one-nearest-neighbor classifier with q = 3 clusters as input are not well suited for assessing prediction strength on the five non-SRBCT samples, we applied logistic discrimination and rejected every classification made with a probability lower than 0.95. None of the five non-SRBCT samples exceeded this threshold, so all were correctly rejected; three of the twenty SRBCT tissues also fell below the threshold and could not be classified confidently, although they were predicted correctly. This result, as well as our leave-one-out cross-validation error rate on the training data, which achieves the benchmark error rate of 0%, is consistent with [22,35]. This provides more evidence that our method can at least keep up with state-of-the-art classifiers such as neural networks or support vector machines.
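The reject-option rule used here can be sketched as follows. This is a stand-in, not the authors' implementation: a one-dimensional logistic regression is fit by plain gradient descent (the cluster feature is reduced to a single score x), and a class is returned only if its estimated probability reaches the 0.95 threshold.

```python
import math

def fit_logistic(x, y, lr=0.1, steps=2000):
    """Fit a one-dimensional logistic regression P[Y=1|x] = sigmoid(w*x + b)
    by batch gradient descent on the log-likelihood."""
    w, b = 0.0, 0.0
    n = len(x)
    for _ in range(steps):
        gw = gb = 0.0
        for xi, yi in zip(x, y):
            p = 1.0 / (1.0 + math.exp(-(w * xi + b)))
            gw += (p - yi) * xi
            gb += (p - yi)
        w -= lr * gw / n
        b -= lr * gb / n
    return w, b

def classify_with_reject(w, b, xi, threshold=0.95):
    """Return the predicted class, or None if neither class reaches the
    probability threshold (the rejection rule described in the text)."""
    p = 1.0 / (1.0 + math.exp(-(w * xi + b)))
    if p >= threshold:
        return 1
    if 1.0 - p >= threshold:
        return 0
    return None
```

A sample whose predicted probability lies strictly between 0.05 and 0.95 is left unclassified, which is how the non-SRBCT samples are rejected.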
Comparison against the literature
Leukemia | Breast | Prostate | Colon | SRBCT | Lymphoma | Brain | NCI* | |
---|---|---|---|---|---|---|---|---|
Supervised clustering | 1.39% | 0.00% | 4.90% | 16.13% | 0.00% | 0.00% | 11.90% | 26.50% |
Literature | 1.39% | 5.26% | 9.80% | 9.68% | 0.00% | ? | 16.67% | ≅ 35% |
In summary, our predictions from simple classifiers based on the supervised clustering's output can easily keep up with sophisticated methods based on single genes, and as Table 7 shows, our supervised clusters beat the best reported results from the literature on four of the eight datasets. On three further datasets, we achieve the benchmark of perfect classification. The success of our method may be due to the averaging of genes according to Equation 2, which has a variance-reducing effect and yields more stable and accurate features for classification. Besides its good predictive potential, the cluster structure provided by our method is readily accessible for biological interpretation and can be beneficial for functional genomics.
Stability
The stability of the gene clusters detected by our supervised clustering algorithm is a critical issue: the output is much more useful for functional genomics if it remains unchanged for 'similar' input data. We use the bootstrap as a tool for assigning statistical significance; see [36]. We assume n pairs of observations (x_{ i },y_{ i }) with binary response y_{ i } ∈ {0,1}, from which we form a resampled gene-expression dataset by drawing n pairs with replacement,
(x_{1},y_{1})*,..., (x_{ n },y_{ n })*
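The resampling step is the standard nonparametric bootstrap on (x_i, y_i) pairs, which can be sketched as:

```python
import random

def bootstrap_sample(x, y, rng):
    """Draw n (x_i, y_i) pairs with replacement, keeping each expression
    profile paired with its response label."""
    n = len(x)
    idx = [rng.randrange(n) for _ in range(n)]
    return [x[i] for i in idx], [y[i] for i in idx]
```

The supervised clustering algorithm is then rerun on each resampled dataset, and the variability of its output across replicates is examined below.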
Cluster size
Dataset | Mean | SD | Min | Max |
---|---|---|---|---|
Leukemia | 5.855 | 2.910 | 1 | 23 |
Breast cancer | 4.344 | 2.062 | 1 | 13 |
Prostate | 6.327 | 2.373 | 2 | 17 |
Colon | 6.642 | 2.733 | 2 | 20 |
SRBCT | 4.739 | 1.816 | 1 | 14 |
Lymphoma | 5.485 | 2.679 | 1 | 16 |
Brain | 6.094 | 2.751 | 1 | 19 |
NCI | 6.174 | 2.930 | 1 | 20 |
We first analyze the variability in cluster size. The results, summarized in Table 8, show surprising stability across the eight datasets: quite small clusters, typically comprising three to nine genes, were found, and the SD of the cluster size was fairly low in all eight datasets. As a next, and more difficult, step, we explore the compositional variability of the clusters. To give a rough overview of which proportion of genes is actively involved in the clustering process, we assign to each individual gene i a confidence level that measures how likely it is to be clustered,

π_{ i } = N_{ i }/L,     (Equation 8)

where N_{ i } is the number of clusters, found across the L bootstrapped gene-expression datasets, that contain gene i.
We continue by assigning confidence levels to pairs of genes, which gives a clue about pairwise interactions. We count the number N_{ ij } of clusters found with our bootstrapped gene-expression datasets that contain both gene i and gene j, and then divide by the number of replicates L,

π_{ ij } = N_{ ij }/L.     (Equation 9)
These confidence levels not only give an idea of how likely the pairs are, but also provide information for functional genomics, as we can now analyze whether or not pairs of genes preferentially enter clusters simultaneously. The number of hits N_{ i } for an individual gene i follows a binomial(L, π_{ i }) distribution (given the data), and for pairs (i,j), N_{ ij } is binomial(L, π_{ ij }) (we ignore here the fact that π_{ i } in Equation 8 and π_{ ij } in Equation 9 are computed with L = 1,000 replicates instead of the theoretical L = ∞). If there were no attraction or repulsion between genes, the joint probability π_{ ij } would be given by the product π_{ i }π_{ j } of the marginal probabilities. By calibrating the observed number of hits N_{ ij } against the binomial(L, N_{ i }N_{ j }/L²) distribution under independence, we can test the hypothesis
H_{0} : π_{ ij }= π_{ i }π_{ j },
Most frequently clustered genes in DLBC lymphoma discrimination
Numbers | ||||
---|---|---|---|---|
Gene 3786 | Gene 3804 | Gene 761 | Gene 780 | |
Gene 3763 | 184 (301) | 68 (220) | 144 (155) | 173 (133) |
Gene 3786 | 289 (187) | 153 (132) | 72 (113) | |
Gene 3804 | 136 (96) | 60 (83) | ||
Gene 761 | 40 (58) | |||
p-values | ||||
Gene 3786 | Gene 3804 | Gene 761 | Gene 780 | |
Gene 3763 | (-) 0.000 | (-) 0.000 | (-) 0.359 | (+) 0.001 |
Gene 3786 | (+) 0.000 | (+) 0.055 | (-) 0.000 | |
Gene 3804 | (+) 0.000 | (-) 0.007 | ||
Gene 761 | (-) 0.015 |
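The confidence levels and the binomial calibration behind the table above can be sketched as follows. This is an illustrative reconstruction: the input format (a list of gene clusters per bootstrap replicate) is an assumption, and under independence the product π_i π_j is estimated by the plug-in quantity N_i N_j / L². The returned sign matches the (+)/(-) annotation of the table (attraction versus repulsion), with a one-sided p-value.

```python
from itertools import combinations
from math import comb

def confidence_levels(bootstrap_clusters, L):
    """Given, for each of L bootstrap replicates, the list of clusters found
    (each cluster a set of gene indices), estimate pi_i = N_i/L for single
    genes and pi_ij = N_ij/L for gene pairs."""
    single, pair = {}, {}
    for clusters in bootstrap_clusters:
        for cluster in clusters:
            for g in cluster:
                single[g] = single.get(g, 0) + 1
            for i, j in combinations(sorted(cluster), 2):
                pair[(i, j)] = pair.get((i, j), 0) + 1
    return ({g: n / L for g, n in single.items()},
            {p: n / L for p, n in pair.items()})

def binom_pmf(k, n, p):
    return comb(n, k) * p ** k * (1 - p) ** (n - k)

def pair_attraction_test(N_i, N_j, N_ij, L):
    """Test H0: pi_ij = pi_i * pi_j by calibrating the observed pair count
    N_ij against Binomial(L, N_i*N_j/L^2). Returns ('+' or '-', p-value):
    '+' indicates attraction (more joint hits than expected), '-' repulsion."""
    p0 = (N_i * N_j) / (L * L)
    if N_ij >= L * p0:
        return '+', sum(binom_pmf(k, L, p0) for k in range(N_ij, L + 1))
    return '-', sum(binom_pmf(k, L, p0) for k in range(0, N_ij + 1))
```

With L = 1,000 replicates as in the text, a small p-value with sign '+' flags genes that preferentially enter clusters together.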
Functional description of the most frequently clustered genes in DLBC lymphoma discrimination
Sign | Gene | Clone | Function |
---|---|---|---|
- | 3763 | 769861 | CD63 antigen (melanoma 1 antigen) |
- | 3786 | 345538 | Cathepsin L |
- | 3804 | 343867 | Allograft-inflammatory factor-1 or interferon gamma induced macrophage protein or ionized calcium binding adaptor molecule 1 |
+ | 761 | 1341294 | Unknown |
+ | 780 | 1334411 | Unknown UG Hs.32553 ESTs |
It is now tempting to extend this kind of analysis from pairs to triples and higher-order tuples. However, estimating higher-order interactions becomes very unreliable because of the limited sample size n.
Additional modifications
Our supervised clustering procedure can be understood as a generic method and allows various details to be altered according to the user's choice and specific demands. We also tried to improve the supervised clustering procedure ourselves with additional modifications, the most important of which are described here. The averaging of the gene expression in Equation 2 is specified by the arithmetic mean plus sign-flips, a very simple linear combination of genes, as it is impracticable to repeatedly optimize a general linear combination such as the weighted sum Σ_{j} w_{j}x_{j}
during the clustering process. But theoretically, once the cluster algorithm has done its work, we could try to improve the discriminatory power of the actual cluster by numerically optimizing such a weighted linear combination with respect to score and margin. In practice, we found the numerical optimization to be very difficult: started from equal weights, the weights changed only slightly, and the objective function (that is, the margin) did not improve much. We therefore favor the simpler method.
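The mean-plus-sign-flips averaging can be sketched as follows. The sign convention (flipping genes whose expression is negatively associated with the response so that every gene contributes in the same direction) is an assumption made for illustration, since Equation 2 is not reproduced in this excerpt.

```python
def cluster_mean_profile(expression, signs):
    """Average gene-expression profiles across a cluster after sign-flipping.
    expression: list of gene profiles, each a list of per-sample values.
    signs: one entry in {+1, -1} per gene; -1 flips a gene so that all genes
    in the cluster point in the same direction before averaging."""
    n_samples = len(expression[0])
    return [
        sum(s * gene[t] for s, gene in zip(signs, expression)) / len(expression)
        for t in range(n_samples)
    ]
```

This simple rule, unlike a general weighted combination, needs no numerical optimization inside the clustering loop.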
Since the margin function in Equation 6 is not scale-invariant, we also considered clustering with an adjusted margin. This means that we optimized the quotient of margin and within-group variation for a gene-expression vector ξ_{ i }= (x_{i1},...,x_{ in }),
Here, n_{ k } is the size and s_{ k }² the sample variance of class k ∈ {0,1}. While theoretically the size of the gap between the two response classes is meaningful only in relation to the within-group variance, the adjustment of the margin proved not to be very important in practice, owing to the use of standardized gene-expression data: it did not improve the predictive performance of the clusters and slightly decreased their stability. As it is common practice to standardize expression data, we recommend working with the non-adjusted margin.
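Since neither Equation 6 nor the adjusted-margin formula is reproduced in this excerpt, the following sketch assumes plausible forms: the margin as the gap between the two classes along an (averaged) expression vector, and the adjustment as division by a pooled within-group variation term built from n_k and the class sample variances. Both forms are assumptions for illustration.

```python
def margin(x, y):
    """Gap between the two response classes along one expression vector:
    smallest class-1 value minus largest class-0 value (assumed form)."""
    return (min(v for v, c in zip(x, y) if c == 1)
            - max(v for v, c in zip(x, y) if c == 0))

def adjusted_margin(x, y):
    """Margin divided by a within-group variation term; the pooled-variance
    denominator below is an illustrative assumption."""
    def var(vals):
        m = sum(vals) / len(vals)
        return sum((v - m) ** 2 for v in vals) / (len(vals) - 1)
    x0 = [v for v, c in zip(x, y) if c == 0]
    x1 = [v for v, c in zip(x, y) if c == 1]
    denom = (var(x0) / len(x0) + var(x1) / len(x1)) ** 0.5
    return margin(x, y) / denom
```

On standardized expression data the denominator varies little across candidate clusters, which is consistent with the observation that the adjustment matters little in practice.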
Our algorithm, as described above, yields disjoint clusters of genes. To account for the fact that genes may function in multiple pathways, one could modify it as follows. First, run the clustering algorithm on the data, producing a first cluster; second, compute a probability estimate for P[Y = 1|X] for a two-class problem, for example, with probability-based classification methods or in a logistic model; third, reweight the data with weights as in the Real AdaBoost algorithm [37]; then return to the first step but now with reweighted data. Doing the loop q times produces q clusters, which are allowed to be non-disjoint.
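The reweighting loop just described can be sketched as follows. The Real AdaBoost weight update [37] is standard; the cluster finder and the probability estimator are user-supplied placeholders here, not functions from the paper.

```python
import math

def reweight(y, p, w):
    """One Real-AdaBoost-style reweighting step: samples that the current
    probability estimate p(x) of P[Y=1|X] explains poorly get larger weight.
    y: labels in {0,1}; p: probability estimates; w: current weights."""
    f = [0.5 * math.log(pi / (1.0 - pi)) for pi in p]   # half log-odds
    ystar = [2 * yi - 1 for yi in y]                    # map {0,1} -> {-1,+1}
    w_new = [wi * math.exp(-ysi * fi) for wi, ysi, fi in zip(w, ystar, f)]
    total = sum(w_new)
    return [wi / total for wi in w_new]                 # renormalize

def non_disjoint_clusters(x, y, q, find_cluster, estimate_prob):
    """Run a (user-supplied, hypothetical) cluster finder q times on
    reweighted data; the resulting clusters may overlap."""
    n = len(y)
    w = [1.0 / n] * n
    clusters = []
    for _ in range(q):
        cluster = find_cluster(x, y, w)
        clusters.append(cluster)
        w = reweight(y, estimate_prob(x, y, cluster), w)
    return clusters
```

Because later iterations concentrate on samples the earlier clusters explain poorly, the same gene may reappear in several clusters, relaxing the disjointness of the basic algorithm.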
We also explored biasing the supervised clustering algorithm towards larger clusters. Specifically, we did not stop the forward search when score and/or margin first worsened, but continued as long as the objective function remained within a fixed factor of the best value achieved so far, hoping that it could improve again and reach even better values. As soon as the objective function dropped below this tolerance, we stopped the forward search and continued the algorithm with the cluster that had yielded the best parameters. Although our first guess was that this biasing could result in larger clusters with clearer separation, it rarely had any effect in practice.
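The tolerance-based forward search just described can be sketched as follows; the objective function is a user-supplied placeholder (assumed positive, for example a score or margin on standardized data), not the paper's exact criterion.

```python
def forward_search_with_tolerance(candidates, objective, tol=0.9):
    """Greedy forward search biased towards larger clusters: keep adding the
    best candidate gene as long as the objective stays within a factor `tol`
    of the best value achieved so far; return the best cluster ever seen.
    `objective(cluster)` is a user-supplied positive scoring function."""
    cluster, best_cluster = [], []
    best_value = float('-inf')
    remaining = list(candidates)
    while remaining:
        # pick the gene whose addition maximizes the objective
        gene = max(remaining, key=lambda g: objective(cluster + [g]))
        value = objective(cluster + [gene])
        if cluster and value < tol * best_value:
            break                       # dropped below the tolerance: stop
        cluster.append(gene)
        remaining.remove(gene)
        if value > best_value:
            best_value, best_cluster = value, list(cluster)
    return best_cluster
```

With tol = 1 this reduces to the plain stopping rule (stop at the first worsening); tol < 1 lets the search ride out temporary dips in the objective.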
Conclusions
We have proposed an algorithm for supervised clustering of genes from microarray experiments. Our procedure is potentially useful in the context of medical diagnostics, as it identifies groups of interacting genes that have high explanatory power for given tissue types, and which in turn can be used to accurately predict the class labels of new samples. At the same time, such gene clusters may reveal insights into biological processes and may be valuable for functional genomics.
In summary, our algorithm tries to cluster genes such that the discrimination of different tissue types is as simple as possible. It builds the clusters incrementally and relies on a fast, stepwise strategy that allows exhaustive searches among thousands of genes. More specifically, the aim is to identify sparse linear combinations of genes whose average expression level is uniformly low for one response class and uniformly high for the other class(es).
In empirical studies, the average cluster-expression profiles showed superior classification potential compared with techniques based on unclustered genes. The clusters showed reasonable stability, and several observations point towards their biological significance. They contain not only the genes that are individually good, but groups of genes whose consensus expression profile is best with respect to the objective function. The predictive potential of the same genes, left unaveraged, cannot keep up with that of the corresponding cluster means. Finally, an application of our algorithm to randomly permuted data shows that the identified structure is more than just a noise artifact.
An important task that remains to be addressed in future research is the generalization of the supervised clustering algorithm to quantitative response variables and to censored survival data. The fundamental idea of supervised clustering can be pursued again, but needs alternative objective functions that rank individual genes and gene clusters on the basis of their explanatory power for non-categorical response variables.
Declarations
Acknowledgements
We thank Jane Fridlyand for providing the preprocessed NCI data. Software is available at [11].
Authors’ Affiliations
References
- Weinstein J, Myers T, O'Connor P, Friend H, Fornace A, Kohn K, Fojo T, Bates S, Rubinstein L, Anderson N, et al: An information-intensive approach to the molecular pharmacology of cancer. Science. 1997, 275: 343-349. 10.1126/science.275.5298.343.
- Eisen M, Spellman P, Brown P, Botstein D: Cluster analysis and display of genome-wide expression patterns. Proc Natl Acad Sci USA. 1998, 95: 14863-14868. 10.1073/pnas.95.25.14863.
- Tamayo P, Slonim D, Mesirov J, Zhu Q, Kitareewan S, Dmitrovsky E, Lander E, Golub T: Interpreting patterns of gene expression with self-organizing maps: methods and application to hematopoietic differentiation. Proc Natl Acad Sci USA. 1999, 96: 2907-2912. 10.1073/pnas.96.6.2907.
- Ben-Dor A, Shamir R, Yakhini Z: Clustering gene expression patterns. J Comput Biol. 1999, 6: 281-297. 10.1089/106652799318274.
- Hastie T, Tibshirani R, Botstein D, Brown P: Supervised harvesting of expression trees. Genome Biol. 2001, 2: research0003.1-0003.12. 10.1186/gb-2001-2-1-research0003.
- Nguyen D, Rocke D: Tumor classification by partial least squares using microarray gene expression data. Bioinformatics. 2002, 18: 39-50. 10.1093/bioinformatics/18.1.39.
- Geladi P, Kowalski B: Partial least squares regression: a tutorial. Analyt Chim Acta. 1986, 185: 1-17. 10.1016/0003-2670(86)80028-9.
- Park P, Pagano M, Bonetti M: A nonparametric scoring algorithm for identifying informative genes from microarray data. Pac Symp Biocomput. 2001, 52-63.
- Wilcoxon F: Individual comparisons by ranking methods. Biometrics. 1945, 1: 80-83.
- Hastie T, Tibshirani R, Eisen M, Alizadeh A, Levy R, Staudt L, Botstein D, Brown P: Gene shaving as a method of identifying distinct sets of genes with similar expression patterns. Genome Biol. 2000, 1: research0003.1-0003.21. 10.1186/gb-2000-1-2-research0003.
- Marcel Dettling's website. [http://stat.ethz.ch/~dettling/supercluster.html]
- Hastie T, Tibshirani R: Classification by pairwise coupling. Annls Statistics. 1998, 26: 451-471. 10.1214/aos/1028144844.
- Allwein E, Schapire R, Singer Y: Reducing multiclass to binary: a unifying approach for margin classifiers. J Machine Learn Res. 2000, 1: 113-141. 10.1162/15324430152733133.
- Golub T, Slonim D, Tamayo P, Huard C, Gaasenbeek M, Coller H, Loh M, Downing J, Caligiuri M, Bloomfield C, Lander E: Molecular classification of cancer: class discovery and class prediction by gene expression monitoring. Science. 1999, 286: 531-538. 10.1126/science.286.5439.531.
- Whitehead Institute Center for Genomic Research: cancer genomics. [http://www-genome.wi.mit.edu/cancer]
- Dudoit S, Fridlyand J, Speed T: Comparison of discrimination methods for the classification of tumors using gene expression data. J Am Stat Assoc. 2002, 97: 77-87. 10.1198/016214502753479248.
- West M, Blanchette C, Dressman H, Huang E, Ishida S, Spang R, Zuzan H, Olson J, Marks J, Nevins J: Predicting the clinical status of human breast cancer by using gene expression profiles. Proc Natl Acad Sci USA. 2001, 98: 11462-11467. 10.1073/pnas.201162998.
- Duke DNA Microarray Center. [http://mgm.duke.edu/genome/dna_micro/work/]
- Alon U, Barkai N, Notterman D, Gish K, Ybarra S, Mack D, Levine A: Broad patterns of gene expression revealed by clustering analysis of tumor and normal colon tissues probed by oligonucleotide arrays. Proc Natl Acad Sci USA. 1999, 96: 6745-6750. 10.1073/pnas.96.12.6745.
- Colorectal Cancer Microarray Research. [http://microarray.princeton.edu/oncology/]
- Singh D, Febbo P, Ross K, Jackson D, Manola J, Ladd C, Tamayo P, Renshaw A, D'Amico A, Richie J, et al: Gene expression correlates of clinical prostate cancer behavior. Cancer Cell. 2002, 1: 203-209. 10.1016/S1535-6108(02)00030-2.
- Khan J, Wei J, Ringner M, Saal L, Ladanyi M, Westermann F, Berthold F, Schwab M, Antonescu C, Peterson C, Meltzer P: Classification and diagnostic prediction of cancer using gene expression profiling and artificial neural networks. Nat Med. 2001, 6: 673-679. 10.1038/89044.
- National Human Genome Research Institute: microarray project. [http://www.nhgri.nih.gov/DIR/Microarray/Supplement]
- Lymphoma/Leukemia Molecular Profiling Project Gateway. [http://llmpp.nih.gov/lymphoma/data/figure1]
- Alizadeh A, Eisen M, Davis R, Ma C, Lossos I, Rosenwald A, Boldrick J, Sabet H, Tran T, Yu X, et al: Distinct types of diffuse large B-cell lymphoma identified by gene expression profiling. Nature. 2000, 403: 503-511. 10.1038/35000501.
- Pomeroy S, Tamayo P, Gaasenbeek M, Sturla L, Angelo M, McLaughlin M, Kim J, Goumnerova L, Black P, Lau C, et al: Prediction of central nervous system embryonal tumor outcome based on gene expression. Nature. 2002, 415: 436-442. 10.1038/415436a.
- Stanford NCI60 Cancer Microarray Project. [http://genome-www.stanford.edu/nci60/]
- Ross D, Scherf U, Eisen M, Perou C, Rees C, Spellman P, Iyer V, Jeffrey S, Van de Rijn M, Waltham M, et al: Systematic variation in gene expression patterns in human cancer cell lines. Nat Genet. 1999, 24: 227-235.
- Fix E, Hodges J: Discriminatory analysis - nonparametric discrimination: consistency properties. Report 1951, No. 4. US Air Force School of Aviation Medicine, Randolph Field, TX. In Machine Recognition of Patterns. Edited by: Agrawala A. 1977, New York: IEEE Press
- Breiman L, Friedman J, Olshen R, Stone C: Classification and Regression Trees. Belmont, CA: Wadsworth. 1984
- Ripley B: Pattern Recognition and Neural Networks. Cambridge: Cambridge University Press. 1996
- Data Collection for: West et al. (September 18, 2001) Proc Natl Acad Sci USA 10.1073/pnas.201162998. [http://www.pnas.org/cgi/content/full/201162998/DC1]
- Furey T, Cristianini N, Duffy N, Bednarski D, Schummer M, Haussler D: Support vector machine classification and validation of cancer tissue samples using microarray expression data. Bioinformatics. 2000, 16: 906-914. 10.1093/bioinformatics/16.10.906.
- Ben-Dor A, Bruhn L, Friedman N, Nachman I, Schummer M, Yakhini Z: Tissue classification with gene expression profiles. J Comput Biol. 2000, 7: 559-583. 10.1089/106652700750050943.
- Lee Y, Lee C: Classification of multiple cancer types by multi-category support vector machines using gene expression data. Technical Report 1051. Madison, WI: University of Wisconsin, Department of Statistics. 2002
- Efron B, Tibshirani R: The problem of regions. Annls Statistics. 1998, 26: 1687-1718. 10.1214/aos/1024691353.
- Friedman J, Hastie T, Tibshirani R: Additive logistic regression: a statistical view of boosting. Annls Statistics. 2000, 28: 337-407. 10.1214/aos/1016218222.