Learning Task- and Touch-based Grasping
Poster (conference), 2012

In order to equip robots with goal-directed grasping abilities, high-level task information must be integrated with low-level sensory data. For example, if a robot is given a task such as "pour me a cup of coffee", it needs to 1) decide which object to use, 2) determine how the hand should be placed around the object, and 3) determine how much gripping force should be applied so that the subsequent manipulation, here pouring, is feasible and stable. Several sensory streams (visual, proprioceptive, and haptic) are relevant to these three steps. The problem domain, and hence the state space, becomes high-dimensional, involving both continuous and discrete variables with complex relations. We study how these can be encoded in a suitable manner using probabilistic generative models so that robots can achieve stable and robust goal-directed grasps by exploiting feedback loops over multisensory data.

grasp planning, grasp stability, tactile sensing, Bayesian Networks
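To make the idea concrete, the sketch below builds a toy Bayesian network of the general kind named in the keywords: a discrete task variable biases the grasp choice, the grasp determines stability, and a tactile reading is a noisy observation of that stability. This is an illustrative assumption, not the model from the poster; all node names, cardinalities, and probabilities are invented, and pgmpy is used only as a convenient library for discrete Bayesian networks.

```python
# Minimal illustrative sketch (not the authors' model): a toy Bayesian
# network linking task, grasp, stability, and a tactile observation.
from pgmpy.models import BayesianNetwork
from pgmpy.factors.discrete import TabularCPD
from pgmpy.inference import VariableElimination

# Structure: task -> grasp -> stability -> tactile.
model = BayesianNetwork([("task", "grasp"),
                         ("grasp", "stability"),
                         ("stability", "tactile")])

cpds = [
    # P(task): e.g. 0 = "pour", 1 = "hand over" (hypothetical states).
    TabularCPD("task", 2, [[0.5], [0.5]]),
    # P(grasp | task): e.g. 0 = "side grasp", 1 = "top grasp".
    TabularCPD("grasp", 2, [[0.8, 0.3], [0.2, 0.7]],
               evidence=["task"], evidence_card=[2]),
    # P(stability | grasp): 0 = "stable", 1 = "unstable".
    TabularCPD("stability", 2, [[0.9, 0.4], [0.1, 0.6]],
               evidence=["grasp"], evidence_card=[2]),
    # P(tactile | stability): noisy tactile snapshot, 0 = "firm", 1 = "slipping".
    TabularCPD("tactile", 2, [[0.85, 0.2], [0.15, 0.8]],
               evidence=["stability"], evidence_card=[2]),
]
model.add_cpds(*cpds)
assert model.check_model()

# Fuse task information with tactile feedback: posterior over grasp
# stability given the task and an observed tactile reading.
posterior = VariableElimination(model).query(
    variables=["stability"], evidence={"task": 0, "tactile": 0})
print(posterior)
```

The same query pattern extends to the mixed continuous/discrete case discussed in the abstract, where continuous sensory variables would typically be handled with conditional Gaussian or discretized nodes.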

Authors

Yasemin Bekiroglu

Kungliga Tekniska Högskolan (KTH)

Dan Song

Kungliga Tekniska Högskolan (KTH)

Lu Wang

Kungliga Tekniska Högskolan (KTH)

Danica Kragic

Kungliga Tekniska Högskolan (KTH)

IEEE IROS 2012 Workshop: Beyond Robot Grasping - Modern Approaches for Learning Dynamic Manipulation
Vilamoura, Portugal

Subject categories

Robotics and automation

Control engineering

Computer vision and robotics (autonomous systems)

More information

Last updated

2022-09-01