Exploration strategies for homeostatic agents
Paper in proceedings, 2019

This paper evaluates two new exploration strategies for artificial animals called animats. Animats are homeostatic agents whose objective is to keep their internal variables as close to optimal as possible: steps toward the optimum are rewarded and steps away from it are punished. Using reinforcement learning for exploration and decision making, the animats can weigh predetermined optimal/acceptable levels against their current levels, giving them greater flexibility for exploration and better chances of survival. The paper evaluates the resulting strategies in a range of environments and shows that they outperform standard reinforcement learning, in which internal variables are not taken into account.
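To make the idea of homeostatic reward concrete, here is a minimal sketch, assuming a drive-reduction formulation: reward is the decrease in distance between the internal variables and their optimal set points. The names (`set_points`, `drive`, `homeostatic_reward`) and the Euclidean drive function are illustrative assumptions, not the paper's exact formulation.

```python
import math

def drive(internal, set_points):
    """Euclidean distance of the internal variables from their optimal set points."""
    return math.sqrt(sum((internal[k] - set_points[k]) ** 2 for k in set_points))

def homeostatic_reward(before, after, set_points):
    """Positive if a step moved the internal state toward the optimum, negative otherwise."""
    return drive(before, set_points) - drive(after, set_points)

set_points = {"energy": 1.0, "water": 1.0}
before = {"energy": 0.4, "water": 0.8}
after = {"energy": 0.7, "water": 0.8}  # e.g. the animat ate, raising energy toward optimal

print(homeostatic_reward(before, after, set_points) > 0)  # True: step toward optimum is rewarded
```

A standard reinforcement learner would receive a fixed external reward here; the homeostatic formulation instead derives reward from the change in internal state, which is what gives the animat its extra flexibility.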

Artificial general intelligence

Animat

Exploration strategies

Homeostatic regulation

Multi-objective reinforcement learning

Author

Patrick Andersson

Chalmers, Computer Science and Engineering (Chalmers)

Anton Strandman

Chalmers, Computer Science and Engineering (Chalmers)

Claes Strannegård

Chalmers, Computer Science and Engineering (Chalmers)

Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)

0302-9743 (ISSN) 1611-3349 (eISSN)

Vol. 11654 LNAI, pp. 178–187
978-3-030-27005-6 (ISBN)

12th International Conference on Artificial General Intelligence, AGI 2019
Shenzhen, China

Subject Categories

Other Computer and Information Science

Learning

Computer Science

DOI

10.1007/978-3-030-27005-6_18


Latest update

11/18/2019