Error-driven Input Modulation: Solving the Credit Assignment Problem without a Backward Pass

Title: Error-driven Input Modulation: Solving the Credit Assignment Problem without a Backward Pass
Publication Type: Conference Proceedings
Year of Publication: 2022
Authors: Dellaferrera, G, Kreiman, G
Conference Name: Proceedings of the 39th International Conference on Machine Learning, PMLR
Volume: 162
Pagination: 4937-4955
Date Published: 07/2022
Abstract

Supervised learning in artificial neural networks typically relies on backpropagation, in which weight updates are computed from error-function gradients that are propagated sequentially from the output layer to the input layer. Although this approach has proven effective across a wide range of applications, it lacks biological plausibility in several respects, including the weight symmetry problem, the dependence of learning on non-local signals, the freezing of neural activity during error propagation, and the update locking problem. Alternative training schemes have been introduced, including sign symmetry, feedback alignment, and direct feedback alignment, but they invariably rely on a backward pass that prevents solving all of these issues simultaneously. Here, we propose to replace the backward pass with a second forward pass in which the input signal is modulated based on the error of the network. We show that this novel learning rule comprehensively addresses all of the above-mentioned issues and can be applied to both fully connected and convolutional models. We test this learning rule on MNIST, CIFAR-10, and CIFAR-100. These results help incorporate biological principles into machine learning.
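
The sketch below illustrates the kind of two-forward-pass update the abstract describes, for a small fully connected network in NumPy. It is a minimal illustration based only on the abstract, not the paper's reference implementation: the layer sizes, activation functions, initialization scales, learning rate, and the fixed random projection F that maps the output error back onto the input are all assumed for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative layer sizes for a small fully connected network (assumed values).
n_in, n_hid, n_out = 784, 256, 10
lr = 0.01

# Forward weights, plus a fixed random projection F that maps the output error
# onto the input; no transport of the forward weights is needed.
W1 = rng.normal(0.0, np.sqrt(2.0 / n_in), (n_hid, n_in))
W2 = rng.normal(0.0, np.sqrt(2.0 / n_hid), (n_out, n_hid))
F = rng.uniform(-0.05, 0.05, (n_in, n_out))

def relu(z):
    return np.maximum(z, 0.0)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def train_step(x, target_onehot):
    # --- First (standard) forward pass ---
    h1 = relu(W1 @ x)
    y = softmax(W2 @ h1)
    e = y - target_onehot          # output error of the network

    # --- Second forward pass on the error-modulated input ---
    x_mod = x + F @ e
    h1_mod = relu(W1 @ x_mod)

    # --- Local weight updates, no backward pass ---
    # Hidden layer: difference between clean and modulated activations,
    # paired with the modulated presynaptic input.
    W1[:] -= lr * np.outer(h1 - h1_mod, x_mod)
    # Output layer: output error paired with the modulated hidden activity.
    W2[:] -= lr * np.outer(e, h1_mod)
    return y
```

In this sketch the update for each layer uses only locally available activity from the two forward passes and the output error delivered through the fixed projection F, which is how the abstract's claims about weight symmetry, non-local signals, and update locking would be addressed.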

URL: https://proceedings.mlr.press/v162/dellaferrera22a.html
Download: dellaferrera22a.pdf

CBMM Relationship: CBMM Funded