Efficient concept formation in large state spaces
Paper in proceedings, 2018

General autonomous agents must be able to operate in previously unseen worlds with large state spaces. To operate successfully in such worlds, the agents must maintain their own models of the environment, based on concept sets that are several orders of magnitude smaller. For adaptive agents, those concept sets cannot be fixed, but must adapt continuously to new situations. This, in turn, requires mechanisms for forming and preserving those concepts that are critical to successful decision-making, while removing others. In this paper we compare four general algorithms for learning and decision-making: (i) standard Q-learning, (ii) deep Q-learning, (iii) single-agent local Q-learning, and (iv) single-agent local Q-learning with improved concept formation rules. In an experiment with a state space larger than 2^32, it was found that a single-agent local Q-learning agent with improved concept formation rules performed substantially better than a similar agent with less sophisticated concept formation rules and slightly better than a deep Q-learning agent.
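The baseline algorithm (i), standard tabular Q-learning, can be sketched in a few lines. This is a generic illustration, not the paper's implementation; the state and action names, learning rate `alpha`, and discount `gamma` are placeholder assumptions. It also shows why the tabular approach struggles in the paper's setting: the table needs one entry per visited state-action pair, which is infeasible when the state space exceeds 2^32.

```python
from collections import defaultdict

def q_update(Q, s, a, r, s_next, actions, alpha=0.1, gamma=0.99):
    """One tabular Q-learning update:
    Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a)).
    Q is a dict mapping (state, action) pairs to values (default 0.0)."""
    best_next = max(Q[(s_next, a2)] for a2 in actions)
    Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
    return Q[(s, a)]

# Hypothetical usage with placeholder states/actions:
Q = defaultdict(float)
q_update(Q, "s0", "a0", 1.0, "s1", ["a0", "a1"])
```

Deep Q-learning (ii) replaces the table with a neural network approximating Q(s, a), while the local Q-learning variants (iii) and (iv) instead keep the state representation small through concept formation.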

Efficient concept formation

Artificial animals

Adaptive architectures

Autonomous agents

Local Q-learning

Authors

Fredrik Mäkeläinen

Chalmers, Data- och informationsteknik

Hampus Torén

Chalmers, Data- och informationsteknik

Claes Strannegård

Chalmers, Data- och informationsteknik, Data Science

Lecture Notes in Computer Science

0302-9743 (ISSN)

Vol. 10999, pp. 140-150

11th International Conference on Artificial General Intelligence, AGI 2018, Prague, Czech Republic

Subject categories

Philosophy

Other mathematics

Computer science

DOI

10.1007/978-3-319-97676-1_14

More information

Last updated

2018-08-29