Improving Performance in Neural Networks by Dendrite-Activated Connection
Paper in proceedings, 2025

We introduce a novel computational unit for neural networks featuring multiple biases, challenging the conventional perceptron structure. The unit is designed to preserve uncorrupted information as it transfers from one unit to the next, applying activation functions later in the process with a specialized bias for each connection. We posit this unit as an improved design for neural networks and support this with (1) empirical evidence across diverse datasets; (2) a class of functions where this unit utilizes parameters more efficiently; and (3) biological analogies suggesting closer mimicry of natural neural processing. Source code is available at https://github.com/CuriosAI/dac-dev.
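As a minimal sketch of the idea described above: a standard perceptron aggregates its inputs and then applies a single bias and activation, while a multi-bias ("dendrite-activated") unit carries a separate bias on each incoming connection and applies the nonlinearity per connection, after the linear signal has been passed on unaltered. The exact formulation is defined in the paper; the function names and the per-connection bias matrix `B` below are illustrative assumptions, not the authors' code.

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def perceptron(x, w, b):
    # Conventional unit: one bias per output unit, activation applied
    # to the aggregated preactivation.
    return relu(w @ x + b)

def multi_bias_unit(x, w, B):
    # Multi-bias unit (one plausible reading of the abstract): each
    # connection (i, j) has its own bias B[i, j]; the activation is
    # applied per connection, then the results are aggregated.
    return (w * relu(x[None, :] + B)).sum(axis=1)

rng = np.random.default_rng(0)
x = rng.normal(size=4)           # incoming signals
w = rng.normal(size=(3, 4))      # weights: 3 output units, 4 inputs
b = rng.normal(size=3)           # one bias per unit (perceptron)
B = rng.normal(size=(3, 4))      # one bias per connection (multi-bias)

print(perceptron(x, w, b).shape)       # -> (3,)
print(multi_bias_unit(x, w, B).shape)  # -> (3,)
```

Note that the multi-bias variant trades a bias vector for a bias matrix, so the parameter-efficiency claim in point (2) of the abstract concerns what this richer per-connection parameterization can represent, not a smaller parameter count per unit.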

Authors

Carlo Metta

Istituto di Scienza e Tecnologie dell'Informazione A. Faedo

Marco Fantozzi

University of Parma

Andrea Papini

Chalmers, Mathematical Sciences, Applied Mathematics and Statistics

University of Gothenburg

Gianluca Amato

G. d'Annunzio University of Chieti-Pescara

Matteo Bergamaschi

University of Padua

Andrea Fois

University of Parma

Silvia Giulia Galfré

University of Pisa

Alessandro Marchetti

G. d'Annunzio University of Chieti-Pescara

Michelangelo Vegliò

G. d'Annunzio University of Chieti-Pescara

Maurizio Parton

G. d'Annunzio University of Chieti-Pescara

Francesco Morandin

University of Parma

Studies in Classification, Data Analysis, and Knowledge Organization

1431-8814 (ISSN), 2198-3321 (eISSN)

133-141
978-3-031-84701-1 (ISBN)

14th Scientific Meeting of the Classification and Data Analysis Group of the Italian Statistical Society, CLADAG 2023
Salerno, Italy

Subject Categories (SSIF 2025)

Formal Methods

Computer Engineering

Computational Mathematics

DOI

10.1007/978-3-031-84702-8_15

Latest update

10/27/2025