Offline Goal-Conditioned Reinforcement Learning for Shape Control of Deformable Linear Objects
Preprint, 2024

Deformable objects present several challenges to the field of robotic manipulation. One of the tasks that best encapsulates the difficulties arising from non-rigid behavior is shape control, which requires driving an object to a desired shape. While shape-servoing methods have proven successful in contexts with approximately linear behavior, they can fail in tasks with more complex dynamics. We investigate an alternative approach, using offline Reinforcement Learning (RL) to solve a planar shape control problem for a Deformable Linear Object (DLO). To evaluate the effect of material properties, two DLOs are tested, namely a soft rope and an elastic cord. We frame this task as a goal-conditioned offline RL problem and aim to learn a policy that generalizes to unseen goal shapes. Data collection and augmentation procedures are proposed to limit the amount of experimental data that needs to be collected with the real robot. We evaluate how much augmentation is needed to achieve the best results, and test the effect of behavior-cloning regularization in the TD3+BC algorithm. Finally, we show that the proposed approach outperforms a shape-servoing baseline in a curvature inversion experiment.
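To make the learning setup concrete, the sketch below illustrates a goal-conditioned TD3+BC actor update in PyTorch. It is not the authors' implementation: the network sizes, the behavior-cloning weight alpha, and the way the goal shape is concatenated to the state are illustrative assumptions. Only the TD3+BC objective itself, maximizing lambda * Q(s, g, pi(s, g)) minus a behavior-cloning penalty with lambda = alpha / mean(|Q|), follows the standard formulation; critic training with TD3's twin critics and target-policy smoothing is omitted for brevity.

# Minimal sketch of a goal-conditioned TD3+BC actor step (illustrative, not the paper's code).
import torch
import torch.nn as nn

state_dim, goal_dim, action_dim, alpha = 32, 32, 4, 2.5  # assumed dimensions and BC weight

actor = nn.Sequential(nn.Linear(state_dim + goal_dim, 256), nn.ReLU(),
                      nn.Linear(256, action_dim), nn.Tanh())
critic = nn.Sequential(nn.Linear(state_dim + goal_dim + action_dim, 256), nn.ReLU(),
                       nn.Linear(256, 1))
actor_opt = torch.optim.Adam(actor.parameters(), lr=3e-4)

def actor_update(state, goal, action):
    """One TD3+BC actor step on a batch from the offline dataset."""
    sg = torch.cat([state, goal], dim=-1)        # goal-conditioned input
    pi = actor(sg)                               # policy action for this state-goal pair
    q = critic(torch.cat([sg, pi], dim=-1))      # critic value of the policy action
    lam = alpha / q.abs().mean().detach()        # adaptive weight between RL and BC terms
    bc = ((pi - action) ** 2).mean()             # behavior-cloning penalty toward dataset action
    loss = -lam * q.mean() + bc
    actor_opt.zero_grad()
    loss.backward()
    actor_opt.step()
    return loss.item()

# Example call with a random batch standing in for (augmented) robot data.
batch = 64
print(actor_update(torch.randn(batch, state_dim),
                   torch.randn(batch, goal_dim),
                   torch.rand(batch, action_dim) * 2 - 1))

Setting alpha larger pushes the update toward pure behavior cloning, while a smaller alpha lets the critic dominate, which is the regularization trade-off the abstract refers to testing.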

Authors

Rita Laezza

Chalmers, Electrical Engineering, Systems and Control

Mohammadreza Shetab-Bushehri

Clermont Auvergne University

Gabriel Arslan Waltersson

Chalmers, Electrical Engineering, Systems and Control

Erol Özgür

Clermont Auvergne University

Youcef Mezouar

Clermont Auvergne University

Yiannis Karayiannidis

Chalmers, Electrical Engineering, Systems and Control

Infrastructure

C3SE (Chalmers Centre for Computational Science and Engineering)

Subject Categories

Robotics

Computer Science

Computer Vision and Robotics (Autonomous Systems)
