Dyadic Learning in Recurrent and Feedforward Models
Paper in proceedings, 2024

From electrical to biological circuits, feedback plays a critical role in amplifying, dampening, and stabilizing signals. In local, activity-difference-based alternatives to backpropagation, feedback connections are used to propagate learning signals in deep neural networks. We propose a saddle-point-based framework using dyadic (two-state) neurons for training a family of parameterized models, which includes the symmetric Hopfield model, pure feedforward networks, and a less explored skew-symmetric Hopfield variant. The resulting learning method reduces to equilibrium propagation (EP) for symmetric Hopfield models and to dual propagation (DP) for feedforward networks, while the skew-symmetric Hopfield setting yields a new method with desirable robustness properties. Experimentally, we demonstrate that the new skew-symmetric Hopfield model performs on par with EP and DP in terms of predictive performance, while exhibiting enhanced robustness to input changes and strong feedback, and being less prone to neural saturation. We identify the fundamentally different types of feedback signals propagated in each model as the main cause of the differences in robustness and saturation.
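To make the activity-difference principle behind dyadic neurons concrete, the sketch below trains a toy two-layer network in which each hidden unit carries a positively nudged state and a negatively nudged state, and the weight updates are built from the difference of the two states rather than a backpropagated gradient. This is a minimal NumPy illustration of the general dual-propagation-style idea only, not the paper's exact equations; the nudging rule, the mean-squared-error readout, and all values (beta, lr, iters) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def dyadic_step(W1, W2, x, target, beta=0.1, lr=0.05, iters=20):
    # Each hidden unit keeps two states: a positively nudged one (h_pos)
    # and a negatively nudged one (h_neg). Their difference carries the
    # learning signal in place of a backpropagated gradient.
    h_pos = h_neg = np.tanh(x @ W1)
    for _ in range(iters):
        h_mean = 0.5 * (h_pos + h_neg)       # shared forward activity
        err = target - h_mean @ W2           # output error (MSE readout)
        fb = err @ W2.T                      # feedback via transpose weights
        h_pos = np.tanh(x @ W1 + beta * fb)  # state nudged toward the target
        h_neg = np.tanh(x @ W1 - beta * fb)  # state nudged away from it
    h_mean = 0.5 * (h_pos + h_neg)
    err = target - h_mean @ W2
    # Purely local updates built from activities and their differences.
    W2 = W2 + lr * np.outer(h_mean, err)
    W1 = W1 + lr * np.outer(x, (h_pos - h_neg) / (2.0 * beta))
    return W1, W2

# Tiny usage example: fit a single (input, target) pair.
x = rng.normal(size=8)
t = np.array([1.0, -1.0])
W1 = rng.normal(scale=0.3, size=(8, 4))
W2 = rng.normal(scale=0.3, size=(4, 2))
for _ in range(200):
    W1, W2 = dyadic_step(W1, W2, x, t)
print(np.round(np.tanh(x @ W1) @ W2, 3))  # output approaches the target t
```

For small beta, the scaled difference (h_pos - h_neg) / (2 * beta) approximates the gradient of the readout error with respect to the hidden pre-activations, which is why this local rule behaves like gradient descent while only ever exchanging neural activities.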

Authors

Rasmus Kjær Høier

Microsoft Research

Chalmers, Electrical Engineering, Signal Processing and Biomedical Engineering

Kirill Kalinin

Microsoft Research

Maxence Ernoult

RAIN AI

Christopher Zach

Chalmers, Electrical Engineering, Signal Processing and Biomedical Engineering

NeurIPS 2024 Workshop Machine Learning with new Compute Paradigms

Vancouver, Canada

Subject categories (SSIF 2025)

Computational Mathematics

Control Engineering

More information

Created

2025-02-07