Fig. 3 | Genome Biology

From: Knowledge-primed neural networks enable biologically interpretable deep learning on single-cell sequencing data

Optimized learning methodology for KPNNs. a Node weights reflect predictiveness in a simple network. (Top) Simulated network with one hidden node (node A) that is connected to several predictive input nodes, each representing one gene. (Bottom) Learned node weights identify node A as predictive. b High variability of node weights for two redundant nodes under generic deep learning. (Top) Network with two hidden nodes (A and B) connected to predictive input nodes. (Bottom) Node weights distinguish predictive from non-predictive nodes, but the node weights of the two redundant nodes are negatively correlated (inset). c Dropout reduces variability and increases robustness of node weights. (Top) The same network as in panel b, trained with dropout on hidden nodes. Dropout nodes are randomly selected at each training iteration. (Bottom) Learning with dropout results in robust and highly correlated weights (inset) for the two redundant nodes. d Node weights of weakly and strongly predictive nodes under generic deep learning. (Top) Network with three strongly predictive hidden nodes (A–C, connected to multiple predictive input nodes) and three weakly predictive hidden nodes (D–F, connected to one predictive input node). (Bottom) Node weights do not separate highly predictive from weakly predictive nodes when using generic deep learning. e Learning with input node dropout distinguishes between highly predictive and weakly predictive hidden nodes. (Top) The same network as in panel d, trained with dropout on input nodes. (Bottom) Node weights separate highly predictive from weakly predictive nodes when training with input node dropout. f Control inputs quantify the uneven connectivity of biological networks. (Top) Network with two layers of hidden nodes (A and B; 1 to 10) and input nodes that are all equally predictive of the output. (Bottom) Node weights trained on control inputs reflect the uneven connectivity of the simulated network. g Node weights obtained by training on actual data reflect both the data and the uneven connectivity. (Top) The same network as in panel f, but with only a subset of input nodes being predictive. (Bottom) Node weights for the network trained on actual data. h Comparison of node weights for actual data and for control inputs enables normalization for uneven network connectivity. (Top) The same network as in panel g, with annotation of the effect of input data and network structure on the importance of nodes A and B. (Bottom) Differential node weights for actual data versus control inputs.
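The dropout-based training strategy described in panels b–e can be sketched as a small toy experiment. The code below is a minimal illustration, not the authors' implementation: it trains a one-hidden-layer network (two hidden nodes, as in panels b and c) on simulated data in which the first 10 of 20 input genes are predictive, optionally dropping input nodes at each training iteration (panel e), and reads out the absolute output-layer weights as a simple proxy for hidden-node weights. All sizes, hyperparameters, and function names are assumptions chosen for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated data (an assumption, not the paper's dataset):
# 20 input genes; only the first 10 are predictive of a binary output.
n_samples, n_inputs, n_hidden = 500, 20, 2
X = rng.normal(size=(n_samples, n_inputs))
y = (X[:, :10].sum(axis=1) > 0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train(dropout_rate=0.0, epochs=3000, lr=2.0, seed=1):
    """Train a tiny one-hidden-layer network by full-batch gradient
    descent on binary cross-entropy; optionally drop input nodes at
    each iteration (a simplified version of input-node dropout)."""
    rs = np.random.default_rng(seed)
    W1 = rs.normal(scale=0.5, size=(n_inputs, n_hidden))
    W2 = rs.normal(scale=0.5, size=(n_hidden,))
    for _ in range(epochs):
        # Randomly silence input nodes for this iteration.
        mask = (rs.random(n_inputs) > dropout_rate).astype(float)
        H = sigmoid((X * mask) @ W1)          # hidden activations
        p = sigmoid(H @ W2)                   # predicted probability
        d_out = (p - y) / n_samples           # BCE gradient at output
        gW2 = H.T @ d_out
        dH = np.outer(d_out, W2) * H * (1.0 - H)
        gW1 = (X * mask).T @ dH
        W1 -= lr * gW1
        W2 -= lr * gW2
    return W1, W2

W1, W2 = train(dropout_rate=0.5)  # input-node dropout, as in panel e
print("hidden-node weight proxies |W2|:", np.abs(W2))
```

In this simplified sketch, redundancy between the two hidden nodes means either node can carry the predictive signal; dropping nodes at random during training forces both to contribute, which is the intuition behind the robust, correlated node weights shown in panel c.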
