
Table 1 A comparison among the performances of AnnoPRO and eight available methods/tools

From: AnnoPRO: a strategy for protein function annotation based on multi-scale protein representation and a hybrid deep learning of dual-path encoding

| Method/Tool | Date of Publication | BP Fmax | BP AUPRC | CC Fmax | CC AUPRC | MF Fmax | MF AUPRC |
|---|---|---|---|---|---|---|---|
| DiamondBLAST | Nov 2014 | 0.549 | 0.183 | 0.550 | 0.186 | <u>0.729</u> | 0.112 |
| DeepGO | Feb 2018 | 0.362 | 0.213 | 0.501 | 0.434 | 0.384 | 0.325 |
| DeepGOCNN | Jan 2020 | 0.369 | 0.294 | 0.516 | 0.460 | 0.382 | 0.362 |
| DeepGOPlus | Jan 2020 | <u>0.593</u> | <u>0.561</u> | 0.588 | 0.502 | 0.628 | 0.627 |
| TALE | Mar 2021 | 0.391 | 0.307 | 0.562 | 0.587 | 0.472 | 0.458 |
| NetGO2\* | Jul 2021 | 0.497 | 0.434 | 0.574 | 0.508 | 0.667 | 0.674 |
| PFmulDL | Mar 2022 | 0.324 | 0.257 | <u>0.590</u> | <u>0.608</u> | 0.412 | 0.371 |
| NetGO3\* | Dec 2022 | 0.540 | 0.500 | 0.579 | 0.535 | 0.687 | <u>0.726</u> |
| **AnnoPRO** | This Study | **0.609** | **0.574** | **0.746** | **0.749** | **0.763** | **0.755** |

  1. Values indicating the best performance among all methods/tools are highlighted in **bold**; AnnoPRO performed consistently the best across all Gene Ontology (GO) classes (BP, CC, MF) under both evaluation criteria (Fmax and AUPRC). All methods/tools are ordered by publication date. BP: biological process; CC: cellular component; MF: molecular function; Fmax: protein-centric maximum F-measure; AUPRC: area under the precision-recall curve. Tools marked with an asterisk (\*) did not fully provide their source code for model construction, which made it impossible to retrain their models on experimental functional annotations appearing before Oct 22, 2019; their performances (evaluated by Fmax and AUPRC) were therefore assessed by directly uploading the experimental functional annotations asserted between Oct 22, 2019 and May 31, 2022 to each tool's online server. Among the eight existing methods/tools, the best-performing one in each category is underlined.
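For readers unfamiliar with the protein-centric Fmax reported above, the sketch below shows how it is typically computed in CAFA-style evaluations: sweep a score threshold, average precision over proteins with at least one prediction above the threshold and recall over all proteins, and keep the best F-measure. The data and threshold grid here are synthetic illustrations, not the paper's actual evaluation pipeline.

```python
# Hedged sketch of the protein-centric maximum F-measure (Fmax).
# Assumptions: truth is a list of GO-term sets (one per protein), and
# preds is a list of {GO term: confidence score} dicts in [0, 1].
import numpy as np

def fmax(truth, preds, thresholds=None):
    """Return the maximum protein-centric F-measure over a threshold sweep."""
    if thresholds is None:
        thresholds = np.arange(0.01, 1.0, 0.01)
    best = 0.0
    for t in thresholds:
        precisions, recalls = [], []
        for true_terms, scores in zip(truth, preds):
            predicted = {go for go, s in scores.items() if s >= t}
            tp = len(predicted & true_terms)
            if predicted:  # precision averaged only over proteins with >=1 prediction
                precisions.append(tp / len(predicted))
            recalls.append(tp / len(true_terms) if true_terms else 0.0)
        if not precisions:
            continue  # no protein predicted anything at this threshold
        pr, rc = np.mean(precisions), np.mean(recalls)
        if pr + rc > 0:
            best = max(best, 2 * pr * rc / (pr + rc))
    return best

# Toy example (GO identifiers are illustrative only):
truth = [{"GO:0008150", "GO:0005575"}, {"GO:0003674"}]
preds = [{"GO:0008150": 0.9, "GO:0005575": 0.4, "GO:0000001": 0.1},
         {"GO:0003674": 0.8}]
# At threshold 0.3 both proteins are predicted perfectly, so fmax is 1.0.
```

The key design point is that Fmax is protein-centric: precision and recall are averaged per protein before the F-measure is formed, so a tool is not rewarded for annotating a few easy proteins with many terms.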