2023-12-02-chang23a.md
---
title: "A Data-Efficient Visual-Audio Representation with Intuitive Fine-tuning for Voice-Controlled Robots"
section: Poster
openreview: dxOaNO8bge
abstract: "A command-following robot that serves people in everyday life must continually improve itself in deployment domains with minimal help from its end users, instead of engineers. Previous methods are either difficult to continuously improve after the deployment or require a large number of new labels during fine-tuning. Motivated by (self-)supervised contrastive learning, we propose a novel representation that generates an intrinsic reward function for command-following robot tasks by associating images with sound commands. After the robot is deployed in a new domain, the representation can be updated intuitively and data-efficiently by non-experts without any hand-crafted reward functions. We demonstrate our approach on various sound types and robotic tasks, including navigation and manipulation with raw sensor inputs. In simulated and real-world experiments, we show that our system can continually self-improve in previously unseen scenarios given fewer new labeled data, while still achieving better performance over previous methods."
layout: inproceedings
series: Proceedings of Machine Learning Research
publisher: PMLR
issn: 2640-3498
id: chang23a
month: 0
tex_title: "A Data-Efficient Visual-Audio Representation with Intuitive Fine-tuning for Voice-Controlled Robots"
firstpage: 1797
lastpage: 1819
page: 1797-1819
order: 1797
cycles: false
bibtex_author: Chang, Peixin and Liu, Shuijing and Ji, Tianchen and Chakraborty, Neeloy and Hong, Kaiwen and Driggs-Campbell, Katherine Rose
author:
- given: Peixin
  family: Chang
- given: Shuijing
  family: Liu
- given: Tianchen
  family: Ji
- given: Neeloy
  family: Chakraborty
- given: Kaiwen
  family: Hong
- given: Katherine Rose
  family: Driggs-Campbell
date: 2023-12-02
address:
container-title: Proceedings of The 7th Conference on Robot Learning
volume: '229'
genre: inproceedings
issued:
  date-parts:
  - 2023
  - 12
  - 2
pdf:
extras:
---
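The abstract describes associating images with sound commands via contrastive learning, then using the learned similarity as an intrinsic reward. The sketch below illustrates that general idea only; the encoder names, feature dimensions, and linear projections are illustrative assumptions, not the authors' actual architecture or training code.

```python
# Illustrative sketch (NOT the paper's implementation): embed image and
# sound-command features into a shared space, train with an InfoNCE-style
# contrastive loss, and use embedding similarity as an intrinsic reward.
import numpy as np

rng = np.random.default_rng(0)

def l2_normalize(x, axis=-1):
    return x / (np.linalg.norm(x, axis=axis, keepdims=True) + 1e-8)

class VisualAudioEncoder:
    """Toy linear encoders standing in for learned visual/audio networks."""
    def __init__(self, img_dim=64, aud_dim=32, emb_dim=16):
        self.W_img = rng.standard_normal((img_dim, emb_dim)) * 0.1
        self.W_aud = rng.standard_normal((aud_dim, emb_dim)) * 0.1

    def embed_image(self, img):
        return l2_normalize(img @ self.W_img)

    def embed_audio(self, aud):
        return l2_normalize(aud @ self.W_aud)

    def intrinsic_reward(self, img, aud):
        # Cosine similarity of the two unit embeddings is the reward signal.
        return float(self.embed_image(img) @ self.embed_audio(aud))

def info_nce_loss(img_emb, aud_emb, temperature=0.1):
    """Contrastive loss: matched image/audio pairs (the diagonal) are
    positives; every other pairing in the batch is a negative."""
    logits = (img_emb @ aud_emb.T) / temperature      # (B, B) similarities
    logits -= logits.max(axis=1, keepdims=True)       # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return float(-np.mean(np.diag(log_probs)))

enc = VisualAudioEncoder()
images = rng.standard_normal((8, 64))   # batch of image features
audio = rng.standard_normal((8, 32))    # matching sound-command features
loss = info_nce_loss(enc.embed_image(images), enc.embed_audio(audio))
reward = enc.intrinsic_reward(images[0], audio[0])
```

In this framing, fine-tuning in a new deployment domain would mean collecting a few matched image/command pairs from the end user and continuing to minimize the contrastive loss, which is what lets the reward be updated without a hand-crafted reward function.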