Basic language learning in artificial animals
Paper in proceedings, 2018

We explore a general architecture for artificial animals, or animats, that develops over time. The architecture combines reinforcement
learning, dynamic concept formation, and homeostatic decision-making aimed at need satisfaction. We show that this
architecture, which contains no ad hoc features for language processing, is capable of basic language learning of three kinds: (i)
learning to reproduce phonemes that are perceived in the environment via motor babbling; (ii) learning to reproduce sequences of
phonemes corresponding to spoken words perceived in the environment; and (iii) learning to ground the semantics of spoken words
in sensory experience by associating spoken words (e.g. the word “cold”) to sensory experience (e.g. the activity of a sensor for
cold temperature) and vice versa.

Keywords: poverty of the stimulus, sequence learning, grounded semantics, babbling, generic animat, language learning

Authors

Louise Johannesson

Chalmers University of Technology, Computer Science and Engineering, Data Science

Martin Nilsson

Chalmers University of Technology, Computer Science and Engineering, Data Science

Claes Strannegård

Chalmers University of Technology, Computer Science and Engineering, Data Science

Proceedings of the 2018 Annual International Conference on Biologically Inspired Cognitive Architectures (BICA)

pp. 155–161

Biologically Inspired Cognitive Architectures (BICA-18)
Prague, Czech Republic

Subject Categories

Computer and Information Science

Latest update: 1/15/2019