GraspAda: Deep Grasp Adaptation through Domain Transfer
Paper in proceedings, 2023

Learning-based methods for robotic grasping have been shown to yield high performance. However, they rely on datasets that are expensive to acquire and must be well labeled. In addition, how to generalize the learned grasping ability across different scenarios remains unsolved. In this paper, we present a novel grasp adaptation strategy to transfer the learned grasping ability to new domains based on visual data, using a new grasp feature representation. We present a conditional generative model for visual data transformation. By leveraging the deep feature representational capacity of a well-trained grasp synthesis model, our approach applies contrastive representation learning at the feature level and adversarial learning on the output space. In this way, we bridge the domain gap between the new domain and the training domain while maintaining consistency during adaptation. Given input grasp data transformed by the generator, the trained model generalizes to new domains without any fine-tuning. The proposed method is evaluated on benchmark datasets and in real robot experiments. The results show that our approach achieves high performance in new scenarios.
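The abstract describes a conditional generator trained against a frozen grasp synthesis model, combining a feature-level contrastive loss with an adversarial loss on the output space. Below is a minimal PyTorch-style sketch of how such a two-part objective can be wired together; the toy networks, the InfoNCE formulation, the loss weight, and all names are illustrative assumptions, not the authors' implementation.

```python
# Hedged sketch: conditional generator G adapts new-domain images toward the
# training domain, supervised by (a) an adversarial critic D on the generator's
# output and (b) a contrastive consistency loss on features from a frozen
# grasp backbone. All modules and weights here are placeholders.
import torch
import torch.nn as nn
import torch.nn.functional as F

def info_nce(feat_a, feat_b, temperature=0.07):
    """Feature-level contrastive (InfoNCE) loss; matched pairs are positives."""
    a = F.normalize(feat_a.flatten(1), dim=1)
    b = F.normalize(feat_b.flatten(1), dim=1)
    logits = a @ b.t() / temperature           # (N, N) similarity matrix
    targets = torch.arange(a.size(0))          # positives lie on the diagonal
    return F.cross_entropy(logits, targets)

# Toy stand-ins for the real networks (assumed architecture, for illustration).
G = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                  nn.Conv2d(16, 3, 3, padding=1))          # image-to-image generator
feat_net = nn.Sequential(nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU())
D = nn.Sequential(nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
                  nn.Conv2d(16, 1, 3, padding=1))           # output-space critic
for p in feat_net.parameters():
    p.requires_grad_(False)                    # pre-trained grasp backbone stays fixed

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)

source = torch.rand(4, 3, 64, 64)              # images from the training domain
target = torch.rand(4, 3, 64, 64)              # images from the new domain

# Discriminator step: source-domain images are "real", translated ones "fake".
fake = G(target)
d_real, d_fake = D(source), D(fake.detach())
loss_d = F.binary_cross_entropy_with_logits(d_real, torch.ones_like(d_real)) + \
         F.binary_cross_entropy_with_logits(d_fake, torch.zeros_like(d_fake))
opt_d.zero_grad(); loss_d.backward(); opt_d.step()

# Generator step: fool the critic (adversarial, output space) while keeping the
# frozen backbone's features consistent before and after translation (contrastive).
fake = G(target)
d_out = D(fake)
loss_adv = F.binary_cross_entropy_with_logits(d_out, torch.ones_like(d_out))
loss_con = info_nce(feat_net(fake), feat_net(target))
loss_g = loss_adv + 0.1 * loss_con             # weighting factor is an assumption
opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```

In this reading, only the generator (and critic) are updated; the grasp model itself is untouched, which is consistent with the claim that no fine-tuning is needed at deployment.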

Deep Learning

robotic grasping

Authors

Yiting Chen

Student at Chalmers

Junnan Jiang

Wuhan University

Ruiqi Lei

Tsinghua University

Yasemin Bekiroglu

Chalmers, Electrical Engineering, Systems and Control

Fei Chen

The Chinese University of Hong Kong, Shenzhen

Miao Li

Wuhan University

Proceedings - IEEE International Conference on Robotics and Automation

1050-4729 (ISSN)

Vol. 2023-May
979-8-3503-2365-8 (ISBN)

IEEE International Conference on Robotics and Automation
London, United Kingdom

Subject categories

Language Technology (Computational Linguistics)

Robotics and Automation

Computer Science

Computer Vision and Robotics (Autonomous Systems)

DOI

10.1109/ICRA48891.2023.10160213

ISBN

979-8-3503-2365-8

More information

Last updated

2024-07-17