What's in the Container? Classifying Object Contents from Vision and Touch
Paper in proceedings, 2014

Robots operating in household environments need to interact with food containers of different types. Whether a container is filled with milk, juice, yogurt, or coffee may affect the way a robot grasps and manipulates it. In this paper, we concentrate on the problem of identifying the content of a container based on tactile and/or visual feedback obtained during grasping. In particular, we investigate the benefits of using unimodal (visual or tactile) versus bimodal (visual-tactile) sensory data for this purpose. We direct our study toward cardboard containers that are empty or hold liquid or solid content. The motivation for using grasping rather than shaking is that we want to identify the content before applying manipulation actions to the container. Our results show that we achieve comparable classification rates with unimodal data and that the visual and tactile data are complementary.
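As a rough illustration of the unimodal-versus-bimodal comparison described in the abstract, the sketch below trains one classifier on visual features, one on tactile features, and one on their concatenation (simple feature-level fusion). This is not the authors' implementation: the feature dimensions, the synthetic data, the three content classes, and the choice of an SVM classifier are all assumptions made for illustration.

```python
# Minimal sketch of unimodal vs. bimodal content classification.
# All data here is synthetic; the paper's actual features and
# classifier may differ.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_grasps = 120                          # hypothetical number of recorded grasps
classes = ["empty", "liquid", "solid"]  # content classes from the abstract
y = rng.integers(0, len(classes), size=n_grasps)

# Stand-ins for real sensor data, e.g. visual deformation descriptors and
# tactile pressure readings collected while grasping the container.
X_visual = rng.normal(size=(n_grasps, 32)) + y[:, None] * 0.5
X_tactile = rng.normal(size=(n_grasps, 16)) + y[:, None] * 0.5
X_bimodal = np.hstack([X_visual, X_tactile])  # feature-level fusion

for name, X in [("visual", X_visual),
                ("tactile", X_tactile),
                ("visual+tactile", X_bimodal)]:
    acc = cross_val_score(SVC(kernel="rbf"), X, y, cv=5).mean()
    print(f"{name:15s} cross-validated accuracy: {acc:.2f}")
```

Comparing the three cross-validated scores mirrors the paper's question of whether the two modalities carry complementary information: if fusion outperforms each single modality, the features are complementary rather than redundant.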

Robot sensing systems

Visualization

Grasping

Containers

Authors

Puren Guler

Kungliga Tekniska Högskolan (KTH)

Universidad de Granada

Yasemin Bekiroglu

Universidad de Granada

Kungliga Tekniska Högskolan (KTH)

Xavi Gratal

Universidad de Granada

Kungliga Tekniska Högskolan (KTH)

Karl Pauwels

Universidad de Granada

Kungliga Tekniska Högskolan (KTH)

Danica Kragic

Kungliga Tekniska Högskolan (KTH)

Universidad de Granada

IEEE/RSJ International Conference on Intelligent Robots and Systems

2153-0858 (ISSN) 2153-0866 (eISSN)

3961-3968

IEEE/RSJ International Conference on Intelligent Robots and Systems
Chicago, USA

Subject categories

Robotics and automation

Computer vision and robotics (autonomous systems)

DOI

10.1109/IROS.2014.6943119

More information

Last updated

2022-03-07