Learning and decision-making in artificial animals
Journal article, 2018
A model of artificial animals (animats) interacting with their ecosystems is presented. All animats use the same mechanisms for learning and decision-making.
Each animat has its own set of needs and its own memory structure that undergoes
continuous development and constitutes the basis for decision-making. The decision-making mechanism aims to keep the animat's needs as satisfied as possible for as long as possible. Reward and punishment are defined in terms of changes to the level of need satisfaction. The learning mechanisms are driven by prediction error relating to reward and punishment and are of two kinds: multi-objective local Q-learning and structural learning that alters the architecture of the memory structures by adding and removing nodes.
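To make these mechanisms concrete, the following is a minimal sketch of how a homeostatic reward signal and per-need (multi-objective) Q-learning could be realized. The class, the need and action sets, and the parameter values are illustrative assumptions, not the paper's implementation.

```python
from collections import defaultdict

ALPHA, GAMMA = 0.1, 0.9  # assumed learning rate and discount factor


class Animat:
    def __init__(self, needs=("energy", "water")):
        # Need satisfaction levels in [0, 1]; one local Q-table per need.
        self.needs = {n: 1.0 for n in needs}
        self.q = {n: defaultdict(float) for n in needs}

    @staticmethod
    def reward(old_level, new_level):
        # Reward (or punishment, if negative) is the change in need satisfaction.
        return new_level - old_level

    def update(self, need, state, action, r, next_state, actions):
        # Standard Q-learning update applied locally, per need.
        best_next = max(self.q[need][(next_state, a)] for a in actions)
        key = (state, action)
        self.q[need][key] += ALPHA * (r + GAMMA * best_next - self.q[need][key])

    def choose_action(self, state, actions):
        # Homeostatic decision-making (one possible reading): act in favour of
        # the currently least satisfied, i.e. most urgent, need.
        urgent = min(self.needs, key=self.needs.get)
        return max(actions, key=lambda a: self.q[urgent][(state, a)])


# Example step: drinking raises the water level, yielding a positive reward.
animat = Animat()
r = Animat.reward(0.4, 0.6)  # 0.2
animat.update("water", "near_water", "drink", r, "near_water", ["drink", "move"])
```

Keeping a separate Q-table per need is one way to reflect the multi-objective aspect; the structural learning component (adding and removing memory nodes) is not sketched here.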
The animat model has the following key properties: (1) autonomy: it operates in a fully
automatic fashion, without any need for interaction with human engineers. In particular, it
does not depend on human engineers to provide goals, tasks, or seed knowledge, though it can operate either with or without human interaction; (2) generality: it uses the same learning and decision-making mechanisms in all environments, e.g., desert and forest environments, and for all animats, e.g., frog and bee animats; and (3) adequacy: it is able to learn basic animal skills such as eating, drinking, locomotion, and navigation.
Eight experiments are presented. The results obtained indicate that (i) dynamic
memory structures are strictly more powerful than static ones; (ii) it is possible to use a
fixed generic design to model basic cognitive processes of a wide range of animals and
environments; and (iii) the animat framework enables a uniform and gradual approach to
AGI, by successively taking on more challenging problems in the form of broader and more complex classes of environments.
Keywords
local Q-learning
structural learning
homeostatic decision-making
animats
Authors
Claes Strannegård
Chalmers, Computer Science and Engineering (Chalmers), Data Science
Nils Lars Svangård
University of Gothenburg
David Lindström
University of Gothenburg
Joscha Bach
Harvard University
Bas Steunebrink
NNAISENSE
Journal of Artificial General Intelligence
Vol. 9, No. 1, pp. 55-82
Subject Categories
Computer and Information Science
DOI
10.2478/jagi-2018-0002