| Field | Value |
|---|---|
| title | SA6D: Self-Adaptive Few-Shot 6D Pose Estimator for Novel and Occluded Objects |
| section | Poster |
| openreview | gdkKi_F55h |
| abstract | Accurate 6D pose estimation is critical for meaningful robotic manipulation of objects in the real world. Most existing approaches struggle to extend their predictions to scenarios where novel object instances are continuously introduced, especially under heavy occlusion. In this work, we propose a few-shot pose estimation (FSPE) approach called SA6D, which uses a self-adaptive segmentation module to identify the novel target object and constructs a point cloud model of the target using only a small number of cluttered reference images. Unlike existing methods, SA6D does not require object-centric reference images or any additional object information, making it a more generalizable and scalable solution across categories. We evaluate SA6D on real-world tabletop object datasets and demonstrate that it outperforms existing FSPE methods, particularly in cluttered scenes with occlusions, while requiring fewer reference images. |
| layout | inproceedings |
| series | Proceedings of Machine Learning Research |
| publisher | PMLR |
| issn | 2640-3498 |
| id | gao23a |
| month | 0 |
| tex_title | SA6D: Self-Adaptive Few-Shot 6D Pose Estimator for Novel and Occluded Objects |
| firstpage | 1572 |
| lastpage | 1595 |
| page | 1572-1595 |
| order | 1572 |
| cycles | false |
| bibtex_author | Gao, Ning and Ngo, Vien Anh and Ziesche, Hanna and Neumann, Gerhard |
| author | |
| date | 2023-12-02 |
| address | |
| container-title | Proceedings of The 7th Conference on Robot Learning |
| volume | 229 |
| genre | inproceedings |
| issued | |
| extras | |