Learning Task- and Touch-based Grasping
Conference poster, 2012

In order to equip robots with goal-directed grasping abilities, high-level task information needs to be integrated with low-level sensory data. For example, if a robot is given a task such as "pour me a cup of coffee", it needs to decide 1) which object to use, 2) how the hand should be placed around the object, and 3) how much gripping force should be applied, so that the subsequent pouring manipulation is feasible and stable. Several sensory streams (visual, proprioceptive, and haptic) are relevant to these three steps. The problem domain, and hence the state space, is high-dimensional, involving both continuous and discrete variables with complex relations. We study how these can be encoded in a suitable manner using probabilistic generative models, so that robots can achieve stable and robust goal-directed grasps by exploiting feedback loops from multisensory data.
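As a concrete, illustrative reading of this pipeline, the sketch below models the first two decisions (object choice and hand placement) as a tiny discrete Bayesian network and infers a grasp distribution for a given task by summing out the object variable. All variable names, states, and probability tables are hypothetical assumptions for illustration, not the model learned in the poster; the real grip-force and stability variables would be continuous and conditioned on tactile feedback.

```python
import numpy as np

# Toy discrete Bayesian network for task-constrained grasp selection.
# Everything below (variables, states, probabilities) is an assumed,
# illustrative example, not the learned model from the poster.

tasks   = ["pour", "hand-over"]   # T: high-level task
objects = ["mug", "bottle"]       # O: object choice
grasps  = ["top", "side"]         # G: hand placement

# P(O | T): which object suits the task (rows: tasks, cols: objects).
p_obj_given_task = np.array([
    [0.7, 0.3],   # T=pour
    [0.4, 0.6],   # T=hand-over
])

# P(G | T, O): which placement keeps the later manipulation feasible,
# e.g. a side grasp leaves the mug opening free for pouring.
p_grasp_given_task_obj = np.array([
    [[0.2, 0.8],    # T=pour,      O=mug
     [0.3, 0.7]],   # T=pour,      O=bottle
    [[0.6, 0.4],    # T=hand-over, O=mug
     [0.5, 0.5]],   # T=hand-over, O=bottle
])

def grasp_posterior(t: int) -> np.ndarray:
    """P(G | T=t), obtained by marginalizing out the object:
    sum_o P(o | t) * P(g | t, o)."""
    return p_obj_given_task[t] @ p_grasp_given_task_obj[t]

for t, name in enumerate(tasks):
    dist = dict(zip(grasps, grasp_posterior(t).round(3)))
    print(f"task={name}: P(grasp) = {dist}")
```

In the same spirit, tactile readings observed after contact could enter the network as additional evidence nodes that update the grasp-stability estimate online, which is one way to realize the multisensory feedback loop the abstract refers to.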

grasp planning, grasp stability, tactile sensing, Bayesian Networks

Authors

Yasemin Bekiroglu

Royal Institute of Technology (KTH)

Dan Song

Royal Institute of Technology (KTH)

Lu Wang

Royal Institute of Technology (KTH)

Danica Kragic

Royal Institute of Technology (KTH)

IEEE IROS 2012 Workshop: Beyond Robot Grasping - Modern Approaches for Learning Dynamic Manipulation
Vilamoura, Portugal

Subject Categories

Robotics

Control Engineering

Computer Vision and Robotics (Autonomous Systems)

More information

Latest update

9/1/2022