Lifelong Learning Starting from Zero
Paper in proceedings, 2019

We present a deep neural-network model for lifelong learning inspired by several forms of neuroplasticity. The neural network develops continuously in response to signals from the environment. In the beginning, the network is a blank slate with no nodes at all. It develops according to four rules: (i) expansion, which adds new nodes to memorize new input combinations; (ii) generalization, which adds new nodes that generalize from existing ones; (iii) forgetting, which removes nodes that are of relatively little use; and (iv) backpropagation, which fine-tunes the network parameters. We analyze the model in terms of accuracy, energy efficiency, and versatility, and compare it to other network models, finding better performance in several cases.
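To make the four rules concrete, the following is a minimal, purely illustrative Python sketch of such a developmental loop. It is not the authors' implementation: the class Network, the methods expand, generalize, forget, backprop_step, and observe, and the utility-decay pruning mechanism are all assumptions chosen for exposition.

# Illustrative sketch of a lifelong-learning loop built from the four rules
# described in the abstract. NOT the authors' code; all names and the
# utility-decay mechanism are hypothetical, chosen only for exposition.

class Network:
    def __init__(self):
        self.nodes = []  # blank slate: the network starts with no nodes

    def expand(self, x):
        # Rule (i): memorize a new input combination as a fresh node.
        self.nodes.append({"pattern": x, "utility": 1.0})

    def generalize(self):
        # Rule (ii): add a node that generalizes from existing ones.
        # Placeholder: a real model would merge or abstract similar patterns.
        pass

    def forget(self, decay=0.95, threshold=0.1):
        # Rule (iii): decay utilities and remove nodes of relatively little use.
        for n in self.nodes:
            n["utility"] *= decay
        self.nodes = [n for n in self.nodes if n["utility"] >= threshold]

    def backprop_step(self, x, y):
        # Rule (iv): fine-tune the network parameters by gradient descent.
        # Placeholder: this toy version has no trainable weights.
        pass

    def observe(self, x):
        # Refresh the utility of any node whose pattern matches input x.
        hit = False
        for n in self.nodes:
            if n["pattern"] == x:
                n["utility"] = 1.0
                hit = True
        return hit


def lifelong_learning(stream):
    net = Network()
    for x, y in stream:          # signals arriving from the environment
        if not net.observe(x):   # unseen input combination?
            net.expand(x)        # (i) expansion
        net.generalize()         # (ii) generalization
        net.forget()             # (iii) forgetting
        net.backprop_step(x, y)  # (iv) backpropagation
    return net


if __name__ == "__main__":
    stream = [((0, 1), "a"), ((1, 1), "b"), ((0, 1), "a")]
    net = lifelong_learning(stream)
    print(len(net.nodes), "nodes after the stream")

In this toy run, the repeated input (0, 1) refreshes its node's utility; over a longer stream, nodes whose patterns stop recurring would decay below the threshold and be forgotten.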

Keywords

Dynamic architectures

Lifelong learning

Deep learning

Authors

Claes Strannegård

Chalmers, Computer Science and Engineering

FS Dynamics Sweden AB

Herman Carlström

Chalmers, Computer Science and Engineering

Niklas Engsner

FS Dynamics Sweden AB

Fredrik Mäkeläinen

FS Dynamics Sweden AB

Filip Slottner Seholm

Chalmers, Computer Science and Engineering

Morteza Haghir Chehreghani

Chalmers, Computer Science and Engineering

Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)

0302-9743 (ISSN), 1611-3349 (eISSN)

Vol. 11654 LNAI, pp. 188-197
978-3-030-27005-6 (ISBN)

12th International Conference on Artificial General Intelligence, AGI 2019
Shenzhen, China

Subject Categories

Telecommunications

Communication Systems

Bioinformatics (Computational Biology)

DOI

10.1007/978-3-030-27005-6_19

More information

Latest update

11/20/2023