Exploration strategies for homeostatic agents
Paper in proceedings, 2019

This paper evaluates two new exploration strategies for artificial animals known as animats. Animats are homeostatic agents whose objective is to keep their internal variables as close to optimal as possible: steps toward the optimum are rewarded and steps away from it are punished. By using reinforcement learning for exploration and decision making, the animats can weigh predetermined optimal and acceptable levels against their current levels, giving them greater flexibility in exploration and better chances of survival. The resulting strategies are evaluated in a range of environments and shown to outperform standard reinforcement learning, in which internal variables are not taken into account.
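As a minimal sketch of the homeostatic reward idea described in the abstract (not the authors' implementation), the snippet below assumes two illustrative internal variables with fixed set-points; the reward is simply the reduction in the agent's total deviation from its optimal levels between two consecutive steps, so moving toward the optimum yields a positive reward and moving away a negative one.

```python
import numpy as np

# Illustrative assumptions (not from the paper): two internal variables,
# e.g. [energy, water], each with an optimal level of 1.0.
SETPOINTS = np.array([1.0, 1.0])

def drive(internal_state: np.ndarray) -> float:
    """Total deviation of the internal variables from their optimal levels."""
    return float(np.sum(np.abs(SETPOINTS - internal_state)))

def homeostatic_reward(prev_state: np.ndarray, next_state: np.ndarray) -> float:
    """Reward = drive reduction: positive when the step moves the internal
    variables toward the set-points, negative when it moves them away."""
    return drive(prev_state) - drive(next_state)

# Example: eating raises energy from 0.4 to 0.7 while water drops slightly.
before = np.array([0.4, 0.90])
after = np.array([0.7, 0.85])
print(homeostatic_reward(before, after))  # positive -> step toward the optimum
```

Such a signal could be plugged into an ordinary reinforcement-learning update in place of an external reward, which is the sense in which the internal variables guide exploration and decision making.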

Artificial general intelligence

Animat

Exploration strategies

Homeostatic regulation

Multi-objective reinforcement learning

Authors

Patrick Andersson

Chalmers, Computer Science and Engineering

Anton Strandman

Chalmers, Computer Science and Engineering

Claes Strannegård

Chalmers, Computer Science and Engineering

Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)

0302-9743 (ISSN) 1611-3349 (eISSN)

Vol. 11654 LNAI, pp. 178-187
978-3-030-27005-6 (ISBN)

12th International Conference on Artificial General Intelligence, AGI 2019
Shenzhen, China

Subject categories

Other computer and information science

Learning

Computer science

DOI

10.1007/978-3-030-27005-6_18

More information

Last updated

2019-11-18