Learning grasp stability based on haptic data
Poster (conference), 2010
Grasping is an essential skill for a general-purpose service robot working in an industrial or home-like environment. Classical work in robotic grasping assumes that object parameters such as pose, shape, weight, and material properties are known. If precise knowledge of these is available, grasp planning using analytical approaches, such as form or force closure, may suffice for successful grasp execution. In unstructured environments, however, this information is usually uncertain, which presents a great challenge for the current state of the art in this area. Sensors can be used to alleviate the problem of uncertainty. Vision is commonly used to determine the shape and pose of an object, but its accuracy is limited, and small errors in object pose are frequent even for known objects. It is not uncommon for even these small errors to cause grasp failures, and such failures are difficult to prevent at the grasp planning stage. The problem is magnified when the object models themselves are acquired on-line using vision or other similar sensors. While tactile and finger force sensors can reduce this problem, a grasp may fail even when all fingers have adequate contact forces and the hand pose is not dramatically different from the planned one. The main contribution of this paper is to show that it is possible to infer grasp stability from tactile sensor data while grasping an object, before it is further manipulated. This is very useful because, if failures can be detected, objects can be regrasped before an attempt is made to lift them. However, the relationship between tactile measurements and grasp stability is embodiment-specific and very complex. For this reason, we propose to use machine learning techniques for the inference.
grasp planning, grasp stability, tactile sensing
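The abstract frames stability inference as learning a mapping from tactile measurements to a stable/unstable label. As a minimal illustration of that framing (not the authors' method), the sketch below trains a nearest-centroid classifier on synthetic tactile readings; the number of tactile cells, the force distributions, and all names are hypothetical:

```python
# Hypothetical sketch: grasp-stability inference as binary classification
# over tactile sensor readings. All data here is synthetic.
import random
import math

random.seed(0)
NUM_TAXELS = 6  # assumed number of tactile cells sampled per grasp


def synthetic_grasp(stable):
    """Fake tactile reading: stable grasps get firm, even contact
    forces; unstable ones get weak, uneven forces."""
    if stable:
        return [random.gauss(1.0, 0.1) for _ in range(NUM_TAXELS)]
    return [random.gauss(0.4, 0.3) for _ in range(NUM_TAXELS)]


# Labeled training grasps: (tactile reading, 1 = stable / 0 = unstable)
train = [(synthetic_grasp(True), 1) for _ in range(50)] + \
        [(synthetic_grasp(False), 0) for _ in range(50)]


def centroid(samples):
    n = len(samples)
    return [sum(s[i] for s in samples) / n for i in range(NUM_TAXELS)]


stable_proto = centroid([x for x, y in train if y == 1])
unstable_proto = centroid([x for x, y in train if y == 0])


def dist(a, b):
    return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))


def predict(reading):
    """Nearest-centroid rule: label by the closer class prototype."""
    return 1 if dist(reading, stable_proto) < dist(reading, unstable_proto) else 0


# Evaluate on fresh synthetic grasps.
test = [(synthetic_grasp(True), 1) for _ in range(20)] + \
       [(synthetic_grasp(False), 0) for _ in range(20)]
accuracy = sum(predict(x) == y for x, y in test) / len(test)
print(f"held-out accuracy: {accuracy:.2f}")
```

In practice, the learned mapping would be trained on real tactile data from a specific hand, since the abstract stresses that the tactile-to-stability relationship is embodiment-specific.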