GraspAda: Deep Grasp Adaptation through Domain Transfer
Paper in proceedings, 2023

Learning-based methods for robotic grasping have been shown to yield high performance. However, they rely on well-labeled datasets that are expensive to acquire. In addition, generalizing the learned grasping ability across different scenarios remains an open problem. In this paper, we present a novel grasp adaptation strategy that transfers the learned grasping ability to new domains based on visual data, using a new grasp feature representation. We propose a conditional generative model for visual data transformation. Leveraging the deep feature representational capacity of the well-trained grasp synthesis model, our approach applies feature-level contrastive representation learning and adversarial learning on the output space. In this way, we bridge the domain gap between the new domain and the training domain while maintaining consistency during adaptation. Given input grasp data transformed by the generator, the trained model generalizes to new domains without any fine-tuning. The proposed method is evaluated on benchmark datasets and in real-robot experiments. The results show that our approach achieves high performance in new scenarios.
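The abstract combines two ideas: aligning features of a pre-trained grasp synthesis model across domains with a contrastive loss, and making the model's outputs on adapted inputs indistinguishable from source-domain outputs via an adversarial loss. The following is a minimal, hypothetical PyTorch sketch of such a combination; it is not the authors' implementation, and the module interfaces (e.g., a `grasp_net` returning a feature map and a grasp-quality map), tensor shapes, and loss weighting are illustrative assumptions.

```python
# Hypothetical sketch: feature-level contrastive alignment + output-space
# adversarial learning, loosely following the abstract. Not the paper's code.
import torch
import torch.nn as nn
import torch.nn.functional as F

def info_nce(feat_a, feat_b, temperature=0.07):
    """Contrastive loss pulling paired features together, pushing others apart."""
    feat_a = F.normalize(feat_a, dim=1)
    feat_b = F.normalize(feat_b, dim=1)
    logits = feat_a @ feat_b.t() / temperature           # (N, N) similarity matrix
    targets = torch.arange(feat_a.size(0), device=feat_a.device)
    return F.cross_entropy(logits, targets)

class OutputDiscriminator(nn.Module):
    """Small convolutional discriminator on predicted grasp-quality maps."""
    def __init__(self, in_ch=1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(128, 1, 4, stride=2, padding=1),    # patch-level real/fake scores
        )

    def forward(self, x):
        return self.net(x)

def adaptation_step(generator, grasp_net, disc, src_img, tgt_img):
    """One illustrative generator update.

    Assumptions: `generator` maps new-domain images toward the training domain;
    `grasp_net` is the frozen, pre-trained grasp synthesis model and returns
    (feature map, grasp-quality map); `disc` is the output-space discriminator.
    """
    bce = nn.BCEWithLogitsLoss()
    adapted = generator(tgt_img)                          # transformed new-domain input
    with torch.no_grad():
        src_feat, src_out = grasp_net(src_img)            # source-domain features/outputs
    tgt_feat, tgt_out = grasp_net(adapted)

    # Feature-level contrastive alignment between the two domains.
    loss_con = info_nce(tgt_feat.flatten(1), src_feat.flatten(1))

    # Output-space adversarial term: adapted grasp maps should look "source-like".
    pred = disc(tgt_out)
    loss_adv = bce(pred, torch.ones_like(pred))
    return loss_con + 0.01 * loss_adv                     # weighting is an arbitrary choice
```

In practice the discriminator would be updated in alternation with the generator (real = source-domain grasp maps, fake = maps produced from adapted inputs), while the grasp network itself stays frozen, which matches the abstract's claim that no fine-tuning of the grasp model is needed.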

Deep Learning

Robotic Grasping

Authors

Yiting Chen

Student at Chalmers

Junnan Jiang

Wuhan University

Ruiqi Lei

Tsinghua University

Yasemin Bekiroglu

Chalmers, Electrical Engineering, Systems and control

Fei Chen

The Chinese University of Hong Kong, Shenzhen

Miao Li

Wuhan University

Proceedings - IEEE International Conference on Robotics and Automation

1050-4729 (ISSN)

9798350323658 (ISBN)

IEEE International Conference on Robotics and Automation
London, United Kingdom

Subject Categories

Language Technology (Computational Linguistics)

Robotics

Computer Science

Computer Vision and Robotics (Autonomous Systems)

DOI

10.1109/ICRA48891.2023.10160213

ISBN

9798350323658

More information

Latest update

9/7/2023