
Recurrent VLN-BERT

Code of the CVPR 2021 Oral paper:
A Recurrent Vision-and-Language BERT for Navigation
Yicong Hong, Qi Wu, Yuankai Qi, Cristian Rodriguez-Opazo, Stephen Gould

[Paper & Appendices] [GitHub]

Prerequisites

Installation

Install the Matterport3D Simulator. Note that this code uses the old version (v0.1) of the simulator; you can easily switch to the latest version, which supports batches of agents and is much more efficient.
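A minimal build sketch, assuming the simulator's standard CMake workflow; the checkout target for the old (v0.1) API is an assumption, so consult the Matterport3DSimulator repository for the exact tag or commit:

  # Sketch only: follows the simulator's usual CMake build steps.
  git clone --recursive https://github.com/peteanderson80/Matterport3DSimulator.git
  cd Matterport3DSimulator
  git checkout v0.1        # hypothetical tag name for the old API
  mkdir build && cd build
  cmake ..
  make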

Please find the versions of packages in our environment here.
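For example, assuming the linked file is a pip-style requirements list (an assumption; it may instead be a conda environment export):

  # Filename is a placeholder for the linked package list.
  pip install -r requirements.txt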

Install Pytorch-Transformers. In particular, we use this version (same as OSCAR) in our experiments.
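A hedged install sketch: the PyPI package name is pytorch-transformers, but the version pin shown is an assumption; use the exact release linked above (the same one OSCAR uses):

  # Version pin is an assumption -- match the release linked above.
  pip install pytorch-transformers==1.2.0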

Data Preparation

Please follow the instructions below to prepare the data directories:

Initial OSCAR and PREVALENT weights

Please refer to vlnbert_init.py to set up the directories; a hypothetical layout is sketched after the list below.

  • Pre-trained OSCAR weights
    • Download the base-no-labels following this guide.
  • Pre-trained PREVALENT weights
    • Download the pytorch_model.bin from here.
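One possible layout under which vlnbert_init.py could locate the weights; every directory name here is a placeholder, so align them with the paths actually set in vlnbert_init.py:

  # All paths below are placeholders -- match them to vlnbert_init.py.
  mkdir -p Oscar/pretrained_models/base-no-labels
  mkdir -p Prevalent/pretrained_model
  # Place the downloaded OSCAR checkpoint under Oscar/pretrained_models/base-no-labels/
  # Place pytorch_model.bin under Prevalent/pretrained_model/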

Trained Network Weights

R2R Navigation

Please read Peter Anderson's VLN paper for the R2R Navigation task.

Reproduce Testing Results

To replicate the performance reported in our paper, load the trained network weights and run validation:

bash run/test_agent.bash

You can switch between the OSCAR-based and the PREVALENT-based VLN models by changing the arguments vlnbert (oscar or prevalent) and load (path to the trained model).
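For instance, the invocation inside run/test_agent.bash might look like the sketch below; only the vlnbert and load flags come from the description above, while the script path and checkpoint location are assumptions:

  # Sketch; script and checkpoint paths are assumptions.
  python r2r_src/train.py \
      --vlnbert prevalent \
      --load snap/VLNBERT-PREVALENT-final/state_dict/best_val_unseen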

Training

Navigator

To train the network from scratch, simply run:

bash run/train_agent.bash

The trained Navigator will be saved under snap/.
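Once training finishes, you can validate the new checkpoint by pointing the load argument of the test script at it; the run name below is hypothetical:

  # Hypothetical checkpoint path under snap/ -- substitute your actual run name.
  ls snap/VLNBERT-train/state_dict/
  # Edit run/test_agent.bash so that load points at the checkpoint, then:
  bash run/test_agent.bash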

Citation

If you use or discuss our Recurrent VLN-BERT, please cite our paper:

@article{hong2020recurrent,
  title={A Recurrent Vision-and-Language BERT for Navigation},
  author={Hong, Yicong and Wu, Qi and Qi, Yuankai and Rodriguez-Opazo, Cristian and Gould, Stephen},
  journal={arXiv preprint arXiv:2011.13922},
  year={2020}
}
