Error-driven Input Modulation: Solving the Credit Assignment Problem without a Backward Pass

Title: Error-driven Input Modulation: Solving the Credit Assignment Problem without a Backward Pass
Publication Type: Journal Article
Year of Publication: 2022
Authors: Dellaferrera, G, Kreiman, G
Journal: arXiv
Date Published: 01/2022
Abstract

Supervised learning in artificial neural networks typically relies on backpropagation, where the weights are updated based on the error-function gradients and sequentially propagated from the output layer to the input layer. Although this approach has proven effective across a wide range of applications, it lacks biological plausibility in many regards, including the weight symmetry problem, the dependence of learning on non-local signals, the freezing of neural activity during error propagation, and the update locking problem. Alternative training schemes, such as sign symmetry, feedback alignment, and direct feedback alignment, have been introduced, but they invariably rely on a backward pass that prevents solving all of these issues simultaneously. Here, we propose to replace the backward pass with a second forward pass in which the input signal is modulated based on the error of the network. We show that this novel learning rule comprehensively addresses all of the above-mentioned issues and can be applied to both fully connected and convolutional models. We test this learning rule on MNIST, CIFAR-10, and CIFAR-100. These results help incorporate biological principles into machine learning.
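
The sketch below is a minimal illustration of the idea described in the abstract: a two-layer network trained with two forward passes and no backward pass. The output error is projected back onto the input through a fixed random matrix F, and weight updates compare activations between the clean and error-modulated passes. The layer sizes, learning rate, scale of F, and exact form of the updates are illustrative assumptions, not the paper's reported hyperparameters or final rule.

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical layer sizes for an MNIST-like task
n_in, n_hid, n_out = 784, 256, 10

# Trainable weights, plus a fixed random projection F from the error back to the input
W1 = rng.normal(0.0, np.sqrt(2.0 / n_in), size=(n_hid, n_in))
W2 = rng.normal(0.0, np.sqrt(2.0 / n_hid), size=(n_out, n_hid))
F = rng.normal(0.0, 0.05, size=(n_in, n_out))  # never trained

def relu(z):
    return np.maximum(z, 0.0)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def train_step(x, y_onehot, lr=0.01):
    """One weight update using two forward passes instead of a backward pass."""
    global W1, W2

    # First (clean) forward pass
    h1 = relu(W1 @ x)
    y_hat = softmax(W2 @ h1)
    err = y_hat - y_onehot          # output error

    # Second forward pass on the error-modulated input
    x_mod = x + F @ err
    h1_mod = relu(W1 @ x_mod)

    # Local updates comparing activations across the two passes
    dW1 = np.outer(h1 - h1_mod, x_mod)
    dW2 = np.outer(err, h1_mod)     # output layer uses the error directly

    W1 -= lr * dW1
    W2 -= lr * dW2
    return err

Note that no gradients are propagated backward and F is independent of W2, so the update avoids weight transport, and each layer's update depends only on its own pre- and post-modulation activity.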

URL: https://arxiv.org/abs/2201.11665
DOI: 10.48550/arXiv.2201.11665
Download: 2201.11665.pdf

CBMM Relationship: 

  • CBMM Funded