- Open Access
Making the most of high-throughput protein-interaction data
Genome Biology, volume 8, Article number: 112 (2007)
We review the estimation of coverage and error rate in high-throughput protein-protein interaction datasets and argue that reports of the low quality of such data are to a substantial extent based on misinterpretations. Probabilistic statistical models and methods can be used to estimate properties of interest and to make the best use of the available data.
Most protein functions involve their interaction with other molecules, often with other proteins in the assembly of operational complexes. A better understanding of protein interactions is fundamental to the study of biological systems. As many drugs act on proteins, it is also a prerequisite for understanding intended, and unintended, drug effects. Over the past few years a number of large-scale experiments have set out to map protein interactions systematically [1–15]. While there is interest in combining the resulting data, there appear to be substantial discrepancies between experiments, and evaluation studies have reported large error rates, lack of overlap and apparent contradictions between the different datasets [16–21].
The purpose of this article is to critically assess the methodology used to analyze protein-interaction datasets. When interpreting individual experiments or combining datasets from different experiments, we need to consider three questions: first, what do we want to know and which experiments provide data that can be used to answer our questions; second, which types of protein interactions were assayed and under what conditions; and third, what types of measurement errors may have occurred and what is their prevalence. In this article we will discuss how the formulation of appropriate statistical models can allow investigators to clearly identify and estimate quantities of interest.
We will consider two particular types of protein interactions: physical interactions, and interactions between members of a protein complex - which we shall call 'complex membership interactions'. A physical interaction is a direct and specific contact between a pair of proteins. We regard two proteins in a complex as having a physical interaction if they share an interaction surface. A complex membership interaction exists between proteins that are part of the same multiprotein complex and does not necessarily imply a physical interaction.
Sampling and coverage
The two most widely used experimental techniques for detecting protein-protein interactions are the yeast two-hybrid (Y2H) system  and affinity purification followed by mass spectrometry (AP-MS) . The Y2H system assays whether proteins can physically interact with each other. Large-scale experiments are carried out in a colony-array format, in which each yeast colony expresses a defined pair of 'bait' and 'prey' proteins that can be scored for reporter gene activity - indicating interaction - in an automated manner [1, 6, 25]. The type of information obtained from a Y2H experiment is shown in Figure 1. In an AP-MS experiment, a tagged protein is expressed in yeast and then 'pulled down' from a cell extract, along with any proteins associated with it, by co-immunoprecipitation or by tandem affinity purification. The set of pulled-down proteins is identified by MS. In a laborious and expensive process, this procedure has been systematically applied to large sets of yeast proteins [7–11]. The tagged protein in AP-MS is also sometimes called the bait and the proteins it pulls down the prey. The information on protein complexes given by Y2H and AP-MS experiments is compared in Figure 2.
An appreciation of the concepts of sampling and coverage is vital for interpreting the data from these types of experiments [26, 27]. The term 'sampling' is used for experimental designs where only a subset of the population is interrogated. Representative sampling techniques are used in many fields of science, but they are not common in the generation of protein-interaction datasets, where sampling has often been guided by biological priorities. The 'coverage' summarizes which part of the total set of possible interactions has actually been tested. Even when genome-wide screening was intended [1, 10, 11], coverage was in fact well below 100%, and the success for each bait seems to depend on nonrandom biological, technological and economic factors. For example, Gavin et al.  used all 6,466 open reading frames (ORFs) that were at that time annotated in the Saccharomyces cerevisiae genome and obtained tandem affinity purifications for 1,993 of those. The remaining 4,473 (69%) failed at various stages, because, for example, the tagged protein failed to express or protein bands were not well separated by gel electrophoresis. Thus, neither the set of tested baits nor the set of tested prey in current experiments are random subsets of all proteins in the organism and in general, it is not valid to make inferences about the 'population', that is, the set of all physical interactions that take place in a cell under the conditions being studied, by assuming the available experimental data from a Y2H or AP-MS experiment to be a representative sample. We are not arguing that random sampling be used, as it would not be appropriate in this setting, but rather that the data need to be interpreted more judiciously.
One problem in evaluating large-scale protein-interaction experiments is that the published data are often not sufficiently detailed to allow accurate description of the sets of baits and prey that were actually tested. As a proxy, we introduced the concept of 'viable baits' and 'viable prey'. The first is the set of baits that were reported to have interacted with at least one prey, and the latter are those proteins reported to be found by at least one bait. Numbers for these can be unambiguously obtained from the reported data and provide surrogate measures for the tested baits and tested prey. The set of all pairs between viable bait and viable prey are the interactions that we are confident were experimentally tested and could, in principle, have been detected. The failure to detect an interaction between a viable bait and a viable prey is informative, whereas the absence of an observed interaction between an untested bait and prey is not. We note that the set of viable prey is a subset of the tested prey, and viable baits are a subset of the tested baits. This approach might introduce bias, because negative data from baits that were tested but found no prey, as well as from prey that were present but did not interact with any bait, are not recorded. On the other hand, presuming that combinations were tested, when in fact they were not, can also result in bias. Gilchrist et al. used a randomization approach to estimate the size of the prey populations for two of these datasets. Their estimates are about double the corresponding numbers of viable prey.
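As a sketch of this bookkeeping, the viable sets and the set of pairs we can treat as tested follow directly from the reported bait-prey pairs (the ORF names below are arbitrary placeholders):

```python
def viable_sets(observed_pairs):
    """Return (viable_baits, viable_prey) from reported (bait, prey) pairs."""
    viable_baits = {bait for bait, _ in observed_pairs}
    viable_prey = {prey for _, prey in observed_pairs}
    return viable_baits, viable_prey

# Hypothetical reported interactions
pairs = [("YAL001C", "YBR123W"), ("YAL001C", "YCR077C"), ("YDL140C", "YBR123W")]
baits, prey = viable_sets(pairs)

# The pairs we are confident were experimentally tested: every viable
# bait against every viable prey (2 baits x 2 prey = 4 pairs here)
tested = {(b, p) for b in baits for p in prey}
```

Note that the unobserved pair ("YDL140C", "YCR077C") is in `tested`, so its absence from the data is an informative negative, whereas pairs outside `tested` carry no information.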
Representation as graphs
Graph theory offers a convenient and useful set of terms and concepts to represent relationships between entities. Graphs most commonly represent binary relationships and these can be either directed or undirected. A further type of graph is needed to represent the membership of proteins in complexes: this relationship is not binary and requires a type of graph called a bipartite graph. Box 1 gives precise definitions of these concepts and an overview of how they apply to protein-interaction data.
Undirected graphs are often used as a model for physical interactions. True relationships are symmetric: if protein A interacts with B, then B interacts with A. The observed experimental data, however, often display asymmetry, which is a consequence of the experimental asymmetry between bait and prey. Protein A may identify protein B as an interactor when A is used as a prey, but B as a prey may not find A. To represent asymmetric data, we suggest using a directed-graph model. This is a point on which we diverge from much of the current practice. We argue that although the quantity of interest is an unknown undirected graph, it must be estimated from the observed data, which should be represented as a directed graph.
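The distinction can be made concrete with a minimal sketch (protein names are hypothetical): the observed data form a directed graph of bait-to-prey detections, and symmetrizing it prematurely discards the information carried by reciprocation.

```python
# Observed bait -> prey detections, kept as a set of directed edges
observed = {("A", "B"), ("B", "A"), ("A", "C")}

# A reciprocated pair was detected in both directions; an unreciprocated
# edge only in one. Naive symmetrization erases the distinction.
reciprocated = {frozenset(e) for e in observed if (e[1], e[0]) in observed}
symmetrized = {frozenset(e) for e in observed}
```

Here the A-B interaction is supported in both directions while A-C is not, a difference invisible in the symmetrized graph.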
"All models are wrong, but some are useful." This maxim of George Box  reminds us that we should not expect these models to adequately represent all possible aspects of protein interactions in a satisfactory way. For the current types of data and questions, graph models are useful. As the data and the questions that we ask become more sophisticated, more complicated models are likely to be needed. Some limitations of the graph models described here are related to their lack of resolution in time and space, failure to distinguish between different protein isoforms or post-translational modifications, and to the fact that experiments do not record interactions between individual protein molecules but between populations. It is the lack of such information that makes it difficult to use Y2H data to make inference about the composition of protein complexes (see Figure 1) or to use AP-MS data to identify the physical interactions of the proteins within a complex and their stoichiometry (see Figure 2).
Whether two proteins physically interact in vivo is not always simple to determine: the range of binding affinities of biologically relevant protein interactions spans many orders of magnitude, and interactions can be dynamic, transient and highly regulated. Nevertheless, the simple measurement model used to interpret the results of protein-interaction experiments presumes that for each pair of proteins, the question of whether or not they interact can be answered as either yes or no. The aim of making a measurement is to record the true, typically unknown, value of a physical quantity, but in practice there will be deviations - measurement errors. In such circumstances, statistical methods can be used to infer the true value of a quantity, given the data and some assumptions about how the measurement tool works. In this sense, the Y2H system or an AP-MS screen is simply a measurement tool that provides imperfect data from which we make inferences about the true state of nature.
Standard definitions of various error statistics  are given in Box 2. We give them to enable a coherent dialog and to address some of the confusion in the literature. For example, a widely cited evaluation study by Edwards et al.  reported a "false positive rate" defined as FP/(TP + FP): where FP is the number of false positives and TP the number of true positives. However, the more common name for this quantity is the 'false-discovery rate' (see Box 2). The difference between the false-positive rate, as usually defined by FP/N, and the false-discovery rate can be substantial, as their denominators are very different, N being the true tested non-interactions, given by TN + FP (see Box 2). Incompatible terminology leads to confusion and makes comparison of error rates reported in different studies difficult.
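The distinction is easy to state in code. The sketch below, with made-up confusion-matrix counts, shows how far the two statistics can diverge when the number of true non-interactions N = TN + FP is large, as it is for protein-interaction screens:

```python
def false_discovery_rate(tp, fp):
    # Fraction of reported interactions that are wrong: FP / (TP + FP)
    return fp / (tp + fp)

def false_positive_rate(fp, tn):
    # Fraction of true non-interactions reported as interactions:
    # FP / N, with N = TN + FP
    return fp / (fp + tn)

# Hypothetical counts for a screen; true non-interactions vastly
# outnumber true interactions
tp, fp, tn = 400, 100, 99_900
fdr = false_discovery_rate(tp, fp)   # 0.2
fpr = false_positive_rate(fp, tn)    # 0.001
```

The same 100 false positives give a false-discovery rate of 20% but a false-positive rate of 0.1%, so quoting one under the other's name misstates data quality by orders of magnitude.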
Measurement errors can be decomposed into two components: stochastic and systematic errors. Stochastic errors are associated with random variability, whereas systematic errors are recurrent. Stochastic errors are simpler to address: they can be controlled by replication, can be eventually eliminated if the experiment is repeated many times, and they can often readily be described using probability models. Systematic errors give rise to bias: the quantity being measured is consistently different from the truth. Their identification is difficult, but if it can be done, they can be addressed either by improving the experimental procedures or by developing appropriate methods for post-experiment data processing.
Statistical models for the analysis of protein-interaction data
Statistical models can integrate the information from repeated or related measurements and quantify the (un)certainty that we have about the conclusions. Here we consider how statistical techniques have been applied to two distinct problems: estimating membership of a protein complex and the integration of data from different experiments (cross-experiment integration of data).
Estimating membership of a protein complex
Russell and colleagues have developed a heuristic that they term the 'socioaffinity index', Aij. It quantifies the confidence that proteins i and j share complex membership, given a set of protein purifications each with its bait and a number of prey. The score is the logarithm of the product of three odds-ratios. The first odds-ratio compares the frequency with which bait i pulled down prey j to the frequency that would be expected if prey came down randomly; the second is the corresponding value for bait j pulling down prey i; and the third is the ratio of the frequency of co-occurrence of i and j in a pull-down to what would be expected under random sampling. The authors then apply a customized clustering algorithm to the matrix Aij to estimate sets of protein complexes from AP-MS data.
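A simplified sketch of such a score is given below; the published index uses more careful expected-count corrections, and the pseudocount here is only a device to avoid log(0) on sparse toy data. The purification data are hypothetical.

```python
import math

def socioaffinity(purifications, i, j, eps=1.0):
    """Sum of log odds-ratios: bait i finding prey j, bait j finding
    prey i, and co-occurrence of i and j as prey (the 'matrix' term)."""
    n = len(purifications)
    freq = lambda p: sum(p in prey for _, prey in purifications) / n

    def spoke(bait, prey_protein):
        runs = [prey for b, prey in purifications if b == bait]
        observed = sum(prey_protein in prey for prey in runs)
        expected = freq(prey_protein) * len(runs)
        return math.log((observed + eps) / (expected + eps))

    observed_co = sum(i in prey and j in prey for _, prey in purifications)
    expected_co = freq(i) * freq(j) * n
    matrix = math.log((observed_co + eps) / (expected_co + eps))
    return spoke(i, j) + spoke(j, i) + matrix

# Hypothetical purifications: (bait, set of observed prey)
purifs = [("A", {"B", "C"}), ("B", {"A"}), ("A", {"B"}), ("D", {"E"})]
```

Proteins that repeatedly retrieve each other score higher than pairs that never co-occur; here `socioaffinity(purifs, "A", "B")` exceeds `socioaffinity(purifs, "A", "E")`.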
Scholtens and colleagues took a different route [33, 34]. They explicitly modeled the underlying bipartite graph of membership of proteins in protein complexes. They estimated the bipartite graph from the observed data using a penalized likelihood method. Their method explicitly differentiates between tested and untested edges in the data, and it deals with the possibility that some proteins can be members of multiple complexes and others may not be assignable to any.
Cross-experiment integration of data
Turning to the issue of the cross-experiment integration of data, Gilchrist and colleagues described a statistical model for identifying stochastic errors in protein-protein interaction datasets that is based on the Binomial distribution. They assumed that there is a true underlying graph of protein interactions in the biological system under study and that multiple experimental runs are performed, each resulting in a set of observed edges. A true edge is observed with probability 1 - pFN and missed with the false-negative probability pFN. Similarly, a true non-edge is observed as an edge with false-positive probability pFP and not observed with probability 1 - pFP. They assumed that all these stochastic events are independent of each other, and governed only by the two Binomial rates pFP and pFN. The statistical distribution of the number of observed edges S between two proteins, given nt trials, and conditional on whether or not they truly interact, is then simply given by Binomial distributions:
S | true edge ~ Bin(nt, 1 - pFN) (1)
S | true non-edge ~ Bin(nt, pFP) (2)
From this, the authors constructed a maximum likelihood estimator of pFP and pFN, and a likelihood-ratio test to decide, for any pair of proteins, whether the data suggest an interaction between them.
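With pFP and pFN treated as known (rather than estimated by maximum likelihood, as in the original work), the resulting decision rule reduces to a few lines; the binomial coefficient is common to both likelihoods and cancels. The counts below are hypothetical.

```python
import math

def log_likelihood_ratio(s, n, p_fp, p_fn):
    """Log-ratio of P(data | true edge) to P(data | true non-edge) for
    an edge observed s times in n trials; positive values favor a
    true edge. The C(n, s) coefficient cancels in the ratio."""
    def log_binom_kernel(p):
        return s * math.log(p) + (n - s) * math.log(1 - p)
    return log_binom_kernel(1 - p_fn) - log_binom_kernel(p_fp)

# An edge seen in 3 of 4 runs, with assumed rates pFP = 0.01, pFN = 0.3
llr = log_likelihood_ratio(3, 4, p_fp=0.01, p_fn=0.3)
```

Repeated detection quickly dominates: even a modest false-negative rate leaves the 'true edge' hypothesis far more likely than coincidental false positives.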
Krogan and colleagues [11, 35] took an approach that is similar in spirit to that of Gilchrist et al. Their formulation uses a Bayes factor that compares the probability of the observed data under the two possible alternatives, and a further component that represents the prior odds of an interaction. The use of a Bayes factor in this context is entirely appropriate, but given that the selection of baits is typically not a simple random sample from the population of potential baits, it is somewhat difficult to interpret the role of the prior, and some justification for it seems to be needed. The two approaches [29, 35] differ somewhat in how specific quantities, such as pFP and pFN, are estimated. An important difference is that Krogan and colleagues were specifically interested in combining AP-MS datasets to solve the problem of identifying protein complexes.
Internal error rate estimation using reciprocity
The direction of an observed bait-prey interaction is informative for the estimation of error rates and the identification of systematic errors. If two proteins A and B are each tested both as bait and prey, then ideally we expect reciprocity in their interaction data: if they truly interact, bait A should find prey B and bait B should find prey A. If they truly do not interact, there should be no observed interaction in either direction. In real data there will be many pairs of proteins for which reciprocity does not hold, and these cases imply that either a false-positive or a false-negative measurement was made. Comparing the prevalence of reciprocally measured interactions among the reciprocally tested edges can tell us something about error rates, both stochastic and systematic.
As the set of reciprocally tested edges is usually not explicitly recorded, we have used the concept of viable baits and viable prey to produce Table 1, which gives the numbers of viable bait and prey proteins, and based on this, the numbers of reciprocated and unreciprocated interaction measurements for several large-scale Y2H and AP-MS experiments. We can represent these data for each experiment as a directed subgraph GBP, with nodes being the intersection of viable baits and viable prey, and with directed edges each representing an observed interaction of a bait with a prey. There are several experiments in which GBP is sufficiently large for statistical analysis, and the reciprocity criterion can be used to measure the internal consistency of a dataset.
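The relevant bookkeeping on GBP can be sketched as follows, where the edge list is hypothetical and assumed to be restricted to proteins that are both viable bait and viable prey:

```python
def reciprocity_counts(directed_edges):
    """Return (reciprocated pairs, unreciprocated edges) for a set of
    observed bait -> prey edges."""
    edge_set = set(directed_edges)
    # Each reciprocated pair contributes two directed edges
    reciprocated_edges = sum((p, b) in edge_set for b, p in edge_set)
    return reciprocated_edges // 2, len(edge_set) - reciprocated_edges

edges = [("A", "B"), ("B", "A"), ("A", "C"), ("C", "D")]
n_pairs, n_unreciprocated = reciprocity_counts(edges)  # (1, 2)
```

The ratio of reciprocated pairs to unreciprocated edges over all of GBP is the raw material for the error-rate reasoning that follows.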
To identify proteins that are likely to be subject to systematic experimental error, we can compare their in-edges and out-edges (see Box 1) within the directed subgraph GBP. Ideally, these edges should all reciprocate each other; if a certain protein has very many unreciprocated edges, this indicates that it is likely to be affected by a systematic error. To quantify this, the number of unreciprocated edges, nunr, originating from or pointing to a particular protein can be compared with the number of reciprocated edges that it has and to the false-positive and false-negative rates pFP and pFN. Precise estimation of these rates is difficult, however, and a simple and effective criterion can instead be derived from considering symmetry.
For a given number of unreciprocated edges, nunr, if there are no systematic errors then the unreciprocated edges should be in-edges and out-edges in approximately equal numbers. If we denote their numbers by nin and nout, respectively, then nin + nout = nunr, and we expect that
nin ~ Bin(nunr, 0.5) (3)
If nin and nout are significantly different from each other, according to the Binomial distribution we would conclude that the protein behaved differently in the experiment when used as bait compared with prey, and would use this as an indication of systematic error affecting at least part of the data for that protein. An application of this criterion to the subgraph GBP of the data of Krogan et al.  is shown in Figure 3.
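Equation (3) yields an exact two-sided Binomial test that needs no external error-rate estimates; the sketch below uses hypothetical counts.

```python
from math import comb

def symmetry_pvalue(n_in, n_out):
    """Two-sided exact Binomial test of n_in ~ Bin(n_in + n_out, 0.5):
    sum the probabilities of all outcomes at most as likely as the
    observed one."""
    n = n_in + n_out
    pmf = [comb(n, k) * 0.5 ** n for k in range(n + 1)]
    return sum(p for p in pmf if p <= pmf[n_in] + 1e-12)

# 18 unreciprocated out-edges but only 2 in-edges: a strong asymmetry,
# suggesting a systematic error for that protein
p = symmetry_pvalue(2, 18)
```

A balanced protein (say 10 in, 10 out) gives p = 1, while the 2-versus-18 case gives p below 0.001.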
Estimation of the properties of the interaction graph in this setting
There are two basic approaches to estimation: one is to estimate the true underlying graph, given the data and some modeling assumptions, then to calculate properties of interest from the estimated graph. The other is to directly estimate the quantities of interest without making an attempt to estimate the true underlying graph. For protein-interaction data we suggest that the latter is often preferable, as it can deal better with the low coverage of the datasets. As new methods and models for integrating datasets are developed it will be important to reassess the situation.
We distinguish between two different types of quantities to be estimated. The first type are single numeric values, such as degree, clustering coefficient or diameter. The second are more general structures, such as modules or subgraphs. The tools for estimation are more developed for numeric quantities than for modules, and there is agreement on the definitions of the different quantities. For modules, or cohesive subgroups, there is little agreement on what is being sought or how to find it.
The integration of data from different independent experiments
No single experiment has provided complete information on all interactions in a system of interest and so data from different experiments need to be integrated. Integration promises to increase coverage and reduce the effects of stochastic errors. Table 1 summarizes experiments done on the yeast protein interactome that are candidates for integration. The overlap between experiments is examined in Tables 2 and 3.
An essential step before integration of data is to assess their quality in terms of specificity, sensitivity and coverage. Such an assessment should provide reliable estimates of the false-positive and false-negative error rates. There are three main computational approaches: comparison to a benchmark or 'gold standard' data, within-experiment or internal validation, and between-experiment validation.
When direct physical interactions are being measured (for example, by Y2H), crystal structures of the interacting proteins can be used as the gold standard for the validity of the interaction. This was one of the approaches used. Only a handful of crystal structures of interacting proteins are known, however, and such data are still difficult and expensive to obtain. Some physical interactions and protein complexes have also been characterized through detailed biochemical investigations, and are collected in databases such as MIPS and GO. Circularity needs to be avoided, however; for example, data from some of the large-scale experiments are now reported as known complexes in some of the public protein complex databases.
Within-experiment validation relies on internal properties of the data, such as redundancies or symmetries that are not used in the experiment, and that can therefore be used to validate the experimental results. One such property is reciprocity, as discussed above. Deviations from expectation can be used to estimate stochastic error rates, and they can also be used to identify individual proteins whose data appear to be subject to systematic artifacts (see Figure 3).
Reported replicate measurements can also be used to help validate experimental data and to estimate error rates. The basic idea is that if edges are tested multiple times under the same conditions, those that are found frequently can be termed true positives and can be used to estimate the false-negative rate from those cases when they were missed. Similarly, those that are seldom found can be deemed true negatives, and from the positive data points the false-positive rate can be estimated. This approach is complicated by possible dependencies between the replicate measurements and by systematic errors that, if present, will affect all replicates. These complications may render the statistical model intractable. Further caution is warranted. Was the choice of replicates measures made a priori or because of anomalous results obtained during the experiment? Do they provide equal coverage of all important conditions and of all types of proteins that were studied?
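Under the strong assumptions just listed (independent replicates, no systematic errors), the idea can be sketched with hypothetical detection counts; the thresholds for declaring provisional true positives and true negatives are arbitrary choices:

```python
def replicate_error_rates(detection_counts, n_rep, hi=0.8, lo=0.2):
    """detection_counts: times each tested edge was seen in n_rep replicates.
    Edges seen often are provisionally 'true'; their misses estimate pFN.
    Edges seen rarely are provisionally 'false'; their hits estimate pFP.
    Edges in the middle band are left unclassified."""
    tp = [c for c in detection_counts if c / n_rep >= hi]
    tn = [c for c in detection_counts if c / n_rep <= lo]
    p_fn = sum(n_rep - c for c in tp) / (n_rep * len(tp)) if tp else None
    p_fp = sum(tn) / (n_rep * len(tn)) if tn else None
    return p_fp, p_fn

# Four edges, each tested in 5 replicates
p_fp, p_fn = replicate_error_rates([5, 4, 1, 0], n_rep=5)  # (0.1, 0.1)
```

The caveats in the text apply directly: dependence between replicates or a shared systematic artifact would bias both estimates without any visible warning in the data.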
Between-experiment comparisons rely on the experimental conditions being sufficiently similar to ensure that the measurements are made on the same underlying set of true interactions. However, as we see in Tables 2 and 3, in many cases there is relatively little overlap in bait selection and in observed prey. For two recent experiments with at least some overlap, a comparison has been presented. These authors found a moderate overlap between the primary data, for example the proteins identified by each successful bait, but a low overlap between the protein complexes computed by each group.
When integrating data from different experiments our recommendation is that validation to a gold standard and within-experiment validation should first be done on each experiment separately. Once the data are sufficiently well understood and as many of the systematic errors as possible have been resolved, integration becomes worthwhile. If there is little agreement on the existence of interactions for edges tested in different experiments, then one must question the prudence of their integration: it may be that the biological conditions were too different to allow their integration into a single meaningful dataset.
There is room for much more research here. Evidence in favor of, or against, experimentally detected interactions can often be obtained from other sources, such as data from other organisms, dependencies of different types of interactions on each other (for example, coexpression, co-localization and physical interaction), evolutionary conservation , protein structure  and amino-acid binding motifs . The challenge is to ensure that the evidence is applicable and that it does bear relationship to the assay and system under study.
Our purpose in writing this article was to address the observation that the many different protein-interaction datasets available appear to have very little in common, and also to address reports that the data were inherently noisy and of low quality (for example [17, 41]). Our investigations suggest that the data themselves, while problematic in some cases, are not the real issue: rather, the data are often misinterpreted, the methods used to address noisiness are often inadequate, and the lack of substantive comparisons between methods has led to a situation where the data, rather than the methods, are treated with suspicion. As seen from Tables 2 and 3, low coverage, and not the false-positive rate, is responsible for the small amount of overlap between datasets.
The separation of errors into stochastic and systematic components is potentially of great benefit. Comparison of experimental data should be based on stochastic error rates. The identification of systematic errors can help to identify problems with the experimental techniques and hopefully suggest solutions to those problems. We believe that when more standard, and sound, statistical practices are adopted for preprocessing the data, it will be possible to estimate quantities of interest and to make substantial comparisons. An essential prerequisite is the adoption of standard methods for estimation of stochastic error rates and where possible the identification of systematic errors. Standardized preprocessing is also required in order to be able to synthesize different experimental datasets. Combining data requires attention to the differing error rates, and the discounting of information from more variable experiments. Given the numbers in Tables 2 and 3, there is much to be gained by combining the different experimental datasets. We believe that the data, while noisy, are in fact very useful, and with appropriate preprocessing and statistical modeling they can provide deep insight into the functioning of cellular machineries.
Ito T, Chiba T, Ozawa R, Yoshida M, Hattori M, Sakaki Y: A comprehensive two-hybrid analysis to explore the yeast protein interactome. Proc Natl Acad Sci USA. 2001, 98: 4569-4574. 10.1073/pnas.061034498.
Cagney G, Uetz P, Fields S: Two-hybrid analysis of the Saccharomyces cerevisiae 26S proteasome. Physiol Genomics. 2001, 7: 27-34.
Tong AH, Drees B, Nardelli G, Bader GD, Brannetti B, Castagnoli L, Evangelista M, Ferracuti S, Nelson B, Paoluzi S, et al: A combined experimental and computational strategy to define protein interaction networks for peptide recognition modules. Science. 2002, 295: 321-324. 10.1126/science.1064987.
Hazbun TR, Malmström L, Anderson S, Graczyk BJ, Fox B, Riffle M, Sundin BA, Aranda JD, McDonald WH, Chiu CH, et al: Assigning function to yeast proteins by integration of technologies. Mol Cell. 2003, 12: 1353-1365. 10.1016/S1097-2765(03)00476-3.
Zhao R, Davey M, Hsu YC, Kaplanek P, Tong A, Parsons AB, Krogan N, Cagney G, Mai D, Greenblatt J, et al: Navigating the chaperone network: an integrative map of physical and genetic interactions mediated by the hsp90 chaperone. Cell. 2005, 120: 715-727. 10.1016/j.cell.2004.12.024.
Uetz P, Giot L, Cagney G, Mansfield TA, Judson RS, Knight JR, Lockshon D, Narayan V, Srinivasan M, Pochart P, et al: A comprehensive analysis of protein-protein interactions in Saccharomyces cerevisiae. Nature. 2000, 403: 623-627. 10.1038/35001009.
Gavin AC, Bösche M, Krause R, Grandi P, Marzioch M, Bauer A, Schultz J, Rick JM, Michon AM, Cruciat CM, et al: Functional organization of the yeast proteome by systematic analysis of protein complexes. Nature. 2002, 415: 141-147. 10.1038/415141a.
Ho Y, Gruhler A, Heilbut A, Bader GD, Moore L, Adams SL, Millar A, Taylor P, Bennett K, Boutilier K, et al: Systematic identification of protein complexes in Saccharomyces cerevisiae by mass spectrometry. Nature. 2002, 415: 180-183. 10.1038/415180a.
Krogan NJ, Peng WT, Cagney G, Robinson MD, Haw R, Zhong G, Guo X, Zhang X, Canadien V, Richards DP, et al: High-definition macromolecular composition of yeast RNA-processing complexes. Mol Cell. 2004, 13: 225-239. 10.1016/S1097-2765(04)00003-6.
Gavin AC, Aloy P, Grandi P, Krause R, Boesche M, Marzioch M, Rau C, Jensen LJ, Bastuck S, Dümpelfeld B, et al: Proteome survey reveals modularity of the yeast cell machinery. Nature. 2006, 440: 631-636. 10.1038/nature04532.
Krogan NJ, Cagney G, Yu H, Zhong G, Guo X, Ignatchenko A, Li J, Pu S, Datta N, Tikuisis AP, et al: Global landscape of protein complexes in the yeast Saccharomyces cerevisiae. Nature. 2006, 440: 637-643. 10.1038/nature04670.
Giot L, Bader JS, Brouwer C, Chaudhuri A, Kuang B, Li Y, Hao YL, Ooi CE, Godwin B, Vitols E, et al: A protein interactionmap of Drosophila melanogaster. Science. 2003, 302: 1727-1736. 10.1126/science.1090289.
Li S, Armstrong CM, Bertin N, Ge H, Milstein S, Boxem M, Vidalain PO, Han JD, Chesneau A, Hao T, et al: A map of the interactome network of the metazoan C. elegans. Science. 2004, 303: 540-543. 10.1126/science.1091403.
Rual JF, Venkatesan K, Hao T, Hirozane-Kishikawa T, Dricot A, Li N, Berriz GF, Gibbons FD, Dreze M, Ayivi-Guedehoussou N, et al: Towards a proteome-scale map of the human protein-protein interaction network. Nature. 2005, 437: 1173-1178. 10.1038/nature04209.
Stelzl U, Worm U, Lalowski M, Haenig C, Brembeck FH, Goehler H, Stroedicke M, Zenkner M, Schoenherr A, Koeppen S, et al: A human protein-protein interaction network: a resource for annotating the proteome. Cell. 2005, 122: 957-968. 10.1016/j.cell.2005.08.029.
Mrowka R, Patzak A, Herzel H: Is there a bias in proteome research?. Genome Res. 2001, 11: 1971-1973. 10.1101/gr.206701.
Edwards AM, Kus B, Jansen R, Greenbaum D, Greenblatt J, Gerstein M: Bridging structural biology and genomics: assessing protein interaction data with known complexes. Trends Genet. 2002, 18: 529-536. 10.1016/S0168-9525(02)02763-4.
von Mering C, Krause R, Snel B, Cornell M, Oliver SG, Fields S, Bork P: Comparative assessment of large-scale data sets of protein-protein interactions. Nature. 2002, 417: 399-403. 10.1038/nature750.
Goll J, Uetz P: The elusive yeast interactome. Genome Biol. 2006, 7: 223.
Gagneur J, David L, Steinmetz LM: Capturing cellular machines by systematic screens of protein complexes. Trends Microbiol. 2006, 14: 336-339. 10.1016/j.tim.2006.06.002.
Hart GT, Ramani AK, Marcotte EM: How complete are current yeast and human protein-interaction networks? Genome Biol. 2006, 7: 120. 10.1186/gb-2006-7-11-120.
Jones S, Thornton JM: Principles of protein-protein interactions. Proc Natl Acad Sci USA. 1996, 93: 13-20. 10.1073/pnas.93.1.13.
Fields S, Song O: A novel genetic system to detect protein-protein interactions. Nature. 1989, 340: 245-246. 10.1038/340245a0.
Kumar A, Snyder M: Protein complexes take the bait. Nature. 2002, 415: 123-124. 10.1038/415123a.
Uetz P: Two-hybrid arrays. Curr Opin Chem Biol. 2002, 6: 57-62. 10.1016/S1367-5931(01)00288-5.
Han JD, Dupuy D, Bertin N, Cusick ME, Vidal M: Effect of sampling on topology predictions of protein-protein interaction networks. Nat Biotechnol. 2005, 23: 839-844. 10.1038/nbt1116.
Stumpf MPH, Wiuf C: Sampling properties of random graphs: the degree distribution. Phys Rev E Stat Nonlin Soft Matter Phys. 2005, 72: 036118.
Chiang T, Scholtens D, Sarkar D, Gentleman R, Huber W: Coverage and error models of protein-protein interaction data by directed graph analysis. Genome Biol. 2007, 8: R186. 10.1186/gb-2007-8-9-r186.
Gilchrist MA, Salter LA, Wagner A: A statistical framework for combining and interpreting proteomic datasets. Bioinformatics. 2004, 20: 689-700. 10.1093/bioinformatics/btg469.
Box GEP, Draper NR: Empirical Model-Building and Response Surfaces. 1987, New York: Wiley
Aloy P, Russell RB: Structural systems biology: modelling protein interactions. Nat Rev Mol Cell Biol. 2006, 7: 188-197. 10.1038/nrm1859.
Kelsey JL, Whittemore AS, Evans AS, Thompson WD: Methods in observational epidemiology. Monographs in Epidemiology and Biostatistics. 1996, New York: Oxford University Press
Scholtens D, Gentleman R: Making sense of high-throughput protein-protein interaction data. Stat Appl Genet Mol Biol. 2004, 3: Article 39.
Scholtens D, Vidal M, Gentleman R: Local modeling of global interactome networks. Bioinformatics. 2005, 21: 3548-3557. 10.1093/bioinformatics/bti567.
Collins SR, Kemmeren P, Zhao XC, Greenblatt JF, Spencer F, Holstege FC, Weissman JS, Krogan NJ: Toward a comprehensive atlas of the physical interactome of Saccharomyces cerevisiae. Mol Cell Proteomics. 2007, 6: 439-450. 10.1074/mcp.M600381-MCP200.
Mewes HW, Frishman D, Mayer KF, Münsterkötter M, Noubibou O, Pagel P, Rattei T, Oesterheld M, Ruepp A, Stümpflen V: MIPS: analysis and annotation of proteins from whole genomes in 2005. Nucleic Acids Res. 2006, 34 (Database issue): D169-D172. 10.1093/nar/gkj148.
Harris MA, Clark J, Ireland A, Lomax J, Ashburner M, Foulger R, Eilbeck K, Lewis S, Marshall B, Mungall C, et al: The Gene Ontology (GO) database and informatics resource. Nucleic Acids Res. 2004, 32 (Database issue): D258-D261.
Poyatos JF, Hurst LD: How biologically relevant are interaction-based modules in protein networks? Genome Biol. 2004, 5: R93. 10.1186/gb-2004-5-11-r93.
Aloy P, Böttcher B, Ceulemans H, Leutwein C, Mellwig C, Fischer S, Gavin AC, Bork P, Superti-Furga G, Serrano L, Russell RB: Structure-based assembly of protein complexes in yeast. Science. 2004, 303: 2026-2029. 10.1126/science.1092645.
Neduva V, Russell RB: Peptides mediating interaction networks: new leads at last. Curr Opin Biotechnol. 2006, 17: 465-471. 10.1016/j.copbio.2006.08.002.
Chen J, Hsu W, Lee ML, Ng SK: Increasing confidence of protein interactomes using network topological metrics. Bioinformatics. 2006, 22: 1998-2004. 10.1093/bioinformatics/btl335.
Storey J: A direct approach to false discovery rates. J R Stat Soc Ser B. 2002, 64: 479-498. 10.1111/1467-9868.00346.
We thank Richard Bourgon, Michael Boutros, Tony Chiang, Denise Scholtens and Lars Steinmetz for helpful comments on the manuscript. This work was supported by HFSP research grant RGP0022/2005 to W.H. and R.G.