Fig. 1 | Genome Biology

From: A statistical and biological response to an informatics appraisal of healthy aging gene signatures

A guide to the graphing layouts produced by Jacob and Speed, and benchmarking against our top-ranked AD signature. a Using the same code and sampling procedure as Jacob and Speed, we compare the top-ranked gene set from Fig. 5 of Sood et al. [1] with the performance of their random sampling process. Critically, Jacob's code creates two random sampling objects per R session, rand.sig.1 for GSE63060 and rand.sig.2 for GSE63061, reflecting the different gene content of the two arrays. Jacob and Speed implement neither cross-cohort validation nor any correction for multiple testing. Notably, our top-ranked blood signature exceeds random sampling by a substantial margin and, critically, is 'active' in both blood datasets. Note that their 'random' gene lists are drawn from the entire gene chip, which contains thousands of published age- and AD-correlated genes (as listed in the supplementary information of our 2015 article [1]). b Using the same code and sampling procedure as Jacob and Speed, we present the performance in blood of just the 150 tissue-age genes from Sood et al. (blue dot). We then assess one example of the choices made by Jacob and Speed, namely the use of a 50% versus 75% cohort split during within-cohort cross-validation. Selecting a 50% split impairs the performance of our 150-gene age list relative to 'random'. Since the Sood 150-gene age signature was never claimed to be the only possible age classifier, we made a genuine attempt at random sampling by removing known age (and AD) genes from the sampling pool (as listed in the supplementary information of our 2015 article [1]). As Jacob's code implements neither cross-cohort validation nor any correction for multiple testing, our age signature remains the only signature validated in two (blood) cohorts (and across other independent data).
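The minimal R sketch below (not Jacob and Speed's actual code) illustrates the two procedures described in the legend: drawing one random gene signature per cohort from a sampling pool that excludes known age/AD-associated genes, and splitting a cohort 50% versus 75% for within-cohort cross-validation. The toy expression matrices and the names expr.60, expr.61, known.age.ad.genes and split.cohort are hypothetical placeholders; only rand.sig.1, rand.sig.2 and the GSE accessions come from the legend.

set.seed(1)

## Toy data standing in for the two blood cohorts (hypothetical, for illustration only)
genes.60 <- paste0("gene", 1:10000)      # genes on the GSE63060 array
genes.61 <- paste0("gene", 2001:13000)   # genes on the GSE63061 array (different content)
expr.60  <- matrix(rnorm(10000 * 100), nrow = 10000,
                   dimnames = list(genes.60, paste0("s60_", 1:100)))
expr.61  <- matrix(rnorm(11000 * 120), nrow = 11000,
                   dimnames = list(genes.61, paste0("s61_", 1:120)))
known.age.ad.genes <- paste0("gene", sample(1:13000, 500))  # hypothetical exclusion list

## (i) One random sampling pool per cohort, excluding known age/AD genes,
##     analogous in spirit to rand.sig.1 / rand.sig.2 in the legend
pool.60 <- setdiff(rownames(expr.60), known.age.ad.genes)
pool.61 <- setdiff(rownames(expr.61), known.age.ad.genes)
rand.sig.1 <- sample(pool.60, 150)  # random 150-gene signature for GSE63060
rand.sig.2 <- sample(pool.61, 150)  # random 150-gene signature for GSE63061

## (ii) Within-cohort cross-validation split: 50% vs 75% training fraction
split.cohort <- function(sample.ids, train.frac = 0.75) {
  train <- sample(sample.ids, size = floor(train.frac * length(sample.ids)))
  list(train = train, test = setdiff(sample.ids, train))
}
split.50 <- split.cohort(colnames(expr.60), train.frac = 0.50)
split.75 <- split.cohort(colnames(expr.60), train.frac = 0.75)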
