2023-12-02-mazoure23a.md

---
title: "Contrastive Value Learning: Implicit Models for Simple Offline RL"
section: Poster
openreview: oqOfLP6bJy
abstract: "Model-based reinforcement learning (RL) methods are appealing in the offline setting because they allow an agent to reason about the consequences of actions without interacting with the environment. While conventional model-based methods learn a 1-step model, predicting the immediate next state, these methods must be plugged into larger planning or RL systems to yield a policy. Can we model the environment dynamics in a different way, such that the learned model directly indicates the value of each action? In this paper, we propose Contrastive Value Learning (CVL), which learns an implicit, multi-step dynamics model. This model can be learned without access to reward functions, but nonetheless can be used to directly estimate the value of each action, without requiring any TD learning. Because this model represents the multi-step transitions implicitly, it avoids having to predict high-dimensional observations and thus scales to high-dimensional tasks. Our experiments demonstrate that CVL outperforms prior offline RL methods on complex robotics benchmarks."
layout: inproceedings
series: Proceedings of Machine Learning Research
publisher: PMLR
issn: 2640-3498
id: mazoure23a
month: 0
tex_title: "Contrastive Value Learning: Implicit Models for Simple Offline RL"
firstpage: 1257
lastpage: 1267
page: 1257-1267
order: 1257
cycles: false
bibtex_author: Mazoure, Bogdan and Eysenbach, Benjamin and Nachum, Ofir and Tompson, Jonathan
author:
- given: Bogdan
  family: Mazoure
- given: Benjamin
  family: Eysenbach
- given: Ofir
  family: Nachum
- given: Jonathan
  family: Tompson
date: 2023-12-02
address:
container-title: Proceedings of The 7th Conference on Robot Learning
volume: 229
genre: inproceedings
issued:
  date-parts:
  - 2023
  - 12
  - 2
pdf:
extras:
---
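
The front matter above is metadata only, but the abstract outlines the core idea: a contrastive critic over (state, action) pairs and candidate future states stands in for an explicit, multi-step dynamics model. The sketch below is a minimal illustration of that idea, not the authors' implementation: it assumes an InfoNCE-style objective in which, within a batch, the i-th future state is the positive example for the i-th (state, action) pair; the module name `ContrastiveCritic`, the network sizes, and the toy data are all hypothetical.

```python
# Illustrative sketch only -- NOT the CVL implementation from the paper.
# Assumes an InfoNCE-style batch-contrastive objective over (state, action)
# pairs and future states; all names and dimensions here are placeholders.
import torch
import torch.nn as nn


class ContrastiveCritic(nn.Module):
    """Scores (state, action) pairs against candidate future states."""

    def __init__(self, state_dim, action_dim, embed_dim=64):
        super().__init__()
        self.sa_encoder = nn.Sequential(
            nn.Linear(state_dim + action_dim, 128), nn.ReLU(),
            nn.Linear(128, embed_dim))
        self.future_encoder = nn.Sequential(
            nn.Linear(state_dim, 128), nn.ReLU(),
            nn.Linear(128, embed_dim))

    def forward(self, state, action, future_state):
        # Inner-product critic: a high score means future_state is a likely
        # outcome of taking `action` in `state`.
        sa = self.sa_encoder(torch.cat([state, action], dim=-1))  # (B, D)
        fut = self.future_encoder(future_state)                   # (B, D)
        return sa @ fut.t()                                       # (B, B) logits


def info_nce_loss(critic, state, action, future_state):
    """Batch-contrastive loss: the i-th future is the positive for the i-th (s, a)."""
    logits = critic(state, action, future_state)
    labels = torch.arange(logits.shape[0])
    return nn.functional.cross_entropy(logits, labels)


if __name__ == "__main__":
    # Toy usage on random data (batch size, state dim, action dim are arbitrary).
    B, S, A = 32, 17, 6
    critic = ContrastiveCritic(S, A)
    opt = torch.optim.Adam(critic.parameters(), lr=3e-4)

    s, a, s_fut = torch.randn(B, S), torch.randn(B, A), torch.randn(B, S)
    opt.zero_grad()
    loss = info_nce_loss(critic, s, a, s_fut)
    loss.backward()
    opt.step()
    print(f"contrastive loss: {loss.item():.3f}")
```

In this reading of the abstract, the learned critic plays the role of the implicit model: it is trained without rewards and never reconstructs observations, and value estimates for each action would then be derived from the critic's scores rather than from TD backups.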