Learning Pixel-Wise Suction Grasp Representation for Cluttered Environments
Paper in proceedings, 2025

Robotic vacuum grippers provide distinct advantages for handling objects with complex geometries or compliant materials. Existing data annotation methods for suction grasp planning typically sample object-level grasp labels and transfer them to scene-level annotations on sensor data (e.g., depth images or point clouds). However, this process suffers from sparse annotations due to random sampling and label domain shifts, degrading downstream model performance. To overcome these limitations, we propose a suction grasp evaluation framework that directly assesses grasp feasibility at every sensor pixel. Our approach introduces an orthographic ray projection module to sample pixel-aligned grasp poses, followed by a physics-informed metric to evaluate suction grasp quality. This pixel-aligned annotation pipeline ensures a bijective mapping between sensor pixels and grasp labels. Additionally, we present a modular robotic system that uses depth data for object-agnostic seal map prediction and RGB data for grasp pose refinement. Real-world robot experiments demonstrate that our pixel-wise annotations align well with practical scenarios, and the learned grasp planning model outperforms existing baselines.
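The per-pixel candidate sampling described in the abstract can be pictured with a small sketch. The following Python snippet is a minimal illustration, not the authors' method: it replaces their orthographic ray projection module with plain pinhole back-projection and depth-gradient surface normals, and every function name, parameter, and constant (backproject, estimate_normals, the 600-pixel focal length, the 0.6 m dummy depth) is hypothetical. It only shows how a depth image can yield one suction grasp candidate per sensor pixel, giving the bijective pixel-to-label mapping the abstract refers to.

```python
# Minimal sketch (not the paper's code): one suction grasp candidate per pixel
# of a depth image, i.e., a bijective mapping from sensor pixels to grasp labels.
# All names, parameters, and constants below are illustrative assumptions.
import numpy as np

def backproject(depth, fx, fy, cx, cy):
    """Back-project an HxW depth image (meters) into an HxWx3 point map."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.stack([x, y, depth], axis=-1)

def estimate_normals(points):
    """Approximate surface normals by finite differences of the point map."""
    dx = np.gradient(points, axis=1)   # change along image columns
    dy = np.gradient(points, axis=0)   # change along image rows
    n = np.cross(dx, dy)
    n /= np.linalg.norm(n, axis=-1, keepdims=True) + 1e-9
    # Orient normals toward the camera (negative z in the camera frame).
    flip = n[..., 2] > 0
    n[flip] *= -1.0
    return n

def pixelwise_suction_candidates(depth, fx, fy, cx, cy):
    """Return one candidate per valid pixel: a contact point on the surface
    and an approach axis along the inward surface normal."""
    points = backproject(depth, fx, fy, cx, cy)
    normals = estimate_normals(points)
    valid = depth > 0
    return {"contact": points, "approach": -normals, "valid": valid}

if __name__ == "__main__":
    depth = np.full((480, 640), 0.6, dtype=np.float32)  # dummy flat scene at 0.6 m
    cands = pixelwise_suction_candidates(depth, fx=600.0, fy=600.0, cx=320.0, cy=240.0)
    print(cands["contact"].shape, cands["approach"].shape)  # (480, 640, 3) twice
```

In the paper, each such candidate would then be scored by the physics-informed seal metric; here the sketch stops at pose sampling.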

Robot sensing systems

Seals

Measurement

Grippers

Computer aided software engineering

Annotations

Point cloud compression

Pipelines

Planning

Geometry

Authors

Yiting Chen

Rice University

Ahmet Ercan Tekden

Chalmers, Electrical Engineering, Systems and Control

Miao Li

Wuhan University

Dimitrios Kanoulas

University College London (UCL)

Yasemin Bekiroglu

University College London (UCL)

Chalmers, Electrical Engineering, Systems and Control

IEEE International Conference on Automation Science and Engineering

2161-8070 (ISSN), 2161-8089 (eISSN)

3488-3493
9798331522469 (ISBN)

21st IEEE International Conference on Automation Science and Engineering, CASE 2025
Los Angeles, USA

Subject categories (SSIF 2025)

Robotics and automation

Computer graphics and computer vision

Computer science

DOI

10.1109/CASE58245.2025.11163903

More information

Last updated

2025-10-17