2023-12-02-liu23c.md
---
title: "CLUE: Calibrated Latent Guidance for Offline Reinforcement Learning"
section: Poster
openreview: xJ7XL5Wt8iN
abstract: "Offline reinforcement learning (RL) aims to learn an optimal policy from
  pre-collected and labeled datasets, which eliminates the time-consuming data collection
  in online RL. However, offline RL still bears a large burden of specifying/handcrafting
  extrinsic rewards for each transition in the offline data. As a remedy for the labor-intensive
  labeling, we propose to endow offline RL tasks with a few expert data and utilize
  the limited expert data to drive intrinsic rewards, thus eliminating the need for
  extrinsic rewards. To achieve that, we introduce Calibrated Latent gUidancE (CLUE),
  which utilizes a conditional variational auto-encoder to learn a latent space such
  that intrinsic rewards can be directly qualified over the latent space. CLUE’s key
  idea is to align the intrinsic rewards consistent with the expert intention via
  enforcing the embeddings of expert data to a calibrated contextual representation.
  We instantiate the expert-driven intrinsic rewards in sparse-reward offline RL tasks,
  offline imitation learning (IL) tasks, and unsupervised offline RL tasks. Empirically,
  we find that CLUE can effectively improve the sparse-reward offline RL performance,
  outperform the state-of-the-art offline IL baselines, and discover diverse skills
  from static reward-free offline data."
layout: inproceedings
series: Proceedings of Machine Learning Research
publisher: PMLR
issn: 2640-3498
id: liu23c
month: 0
tex_title: "CLUE: Calibrated Latent Guidance for Offline Reinforcement Learning"
firstpage: 906
lastpage: 927
page: 906-927
order: 906
cycles: false
bibtex_author: Liu, Jinxin and Zu, Lipeng and He, Li and Wang, Donglin
author:
- given: Jinxin
  family: Liu
- given: Lipeng
  family: Zu
- given: Li
  family: He
- given: Donglin
  family: Wang
date: 2023-12-02
address:
container-title: Proceedings of The 7th Conference on Robot Learning
volume: '229'
genre: inproceedings
issued:
  date-parts:
  - 2023
  - 12
  - 2
pdf:
extras:
---