Basic language learning in artificial animals
Paper in proceedings, 2018
We explore a general architecture for artificial animals, or animats, that develops over time. The architecture combines reinforcement
learning, dynamic concept formation, and homeostatic decision-making aimed at need satisfaction. We show that this
architecture, which contains no ad hoc features for language processing, is capable of three kinds of basic language learning: (i)
learning to reproduce phonemes perceived in the environment via motor babbling; (ii) learning to reproduce sequences of
phonemes corresponding to spoken words perceived in the environment; and (iii) learning to ground the semantics of spoken words
in sensory experience by associating spoken words (e.g. the word “cold”) with sensory experience (e.g. the activity of a sensor for
cold temperature), and vice versa.
Keywords: poverty of the stimulus, sequence learning, grounded semantics, babbling, generic animat, language learning
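Mechanism (iii), grounding word semantics by associating heard words with concurrent sensor activity, can be illustrated with a minimal sketch. This is a hypothetical toy model, not the paper's implementation; all class, method, and sensor names below are invented for illustration. It assumes only that word-sensor co-occurrences are counted and that the strongest association in either direction defines the grounding:

```python
from collections import defaultdict

class WordGrounder:
    """Toy co-occurrence model of grounded word semantics (illustrative only)."""

    def __init__(self):
        # counts[word][sensor] = number of times the word was heard
        # while that sensor was active
        self.counts = defaultdict(lambda: defaultdict(int))

    def observe(self, word, active_sensors):
        """Record that `word` was heard while `active_sensors` were firing."""
        for sensor in active_sensors:
            self.counts[word][sensor] += 1

    def meaning(self, word):
        """Word -> sensor: the sensor most strongly associated with `word`."""
        sensors = self.counts[word]
        return max(sensors, key=sensors.get) if sensors else None

    def name_for(self, sensor):
        """Sensor -> word: the word most strongly associated with `sensor`."""
        best, best_count = None, 0
        for word, sensors in self.counts.items():
            if sensors.get(sensor, 0) > best_count:
                best, best_count = word, sensors[sensor]
        return best

g = WordGrounder()
for _ in range(5):
    g.observe("cold", {"temp_low"})   # "cold" usually co-occurs with low temperature
g.observe("cold", {"wind"})           # occasional spurious co-occurrence
g.observe("warm", {"temp_high"})

print(g.meaning("cold"))      # -> temp_low
print(g.name_for("temp_high"))  # -> warm
```

The bidirectional lookup (`meaning` and `name_for`) mirrors the "and vice versa" in the abstract: the same association table lets the animat both interpret a heard word and name a sensory state.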