Zirui Wang, Wenjing Bian, Victor Adrian Prisacariu.
Active Vision Lab (AVL), University of Oxford.
We provide an environment.yaml file to set up a conda environment:
git clone https://github.com/ActiveVisionLab/CrossScore.git
cd CrossScore
conda env create -f environment.yaml
conda activate CrossScore
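After activating the environment, a quick sanity check can confirm that key packages resolve. This is an optional sketch; the package names below are illustrative guesses, so adjust the list to match what environment.yaml actually installs.

```python
import importlib.util

def check_packages(names):
    """Return a mapping of package name -> whether it resolves in this environment."""
    return {n: importlib.util.find_spec(n) is not None for n in names}

if __name__ == "__main__":
    # Hypothetical package list; consult environment.yaml for the real one.
    for name, ok in check_packages(["torch", "numpy"]).items():
        print(f"{name}: {'found' if ok else 'missing'}")
```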
TL;DR: download this file (~3GB) and put it in datadir:
mkdir datadir
cd datadir
wget https://www.robots.ox.ac.uk/~ryan/CrossScore/MFR_subset_demo.tar.gz
tar -xzvf MFR_subset_demo.tar.gz
rm MFR_subset_demo.tar.gz
cd ..
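To verify the archive extracted as expected, a short stdlib-only snippet can print the directory layout. This is a generic sketch (the exact structure inside datadir is not documented here), useful for comparing against the expected dataloading structure described below.

```python
from pathlib import Path

def list_tree(root, limit=20):
    """Return up to `limit` relative paths under `root`, sorted, to inspect the layout."""
    root = Path(root)
    paths = sorted(p.relative_to(root).as_posix() for p in root.rglob("*"))
    return paths[:limit]

if __name__ == "__main__":
    # Run from the CrossScore repo root after extracting the demo subset.
    for p in list_tree("datadir"):
        print(p)
```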
To demonstrate a minimal working example for the training and inference steps shown below, we provide a small pre-processed subset. This is a subset of Map-Free Relocalisation (MFR), pre-processed using 3D Gaussian Splatting (3DGS). The demo dataset is available at this link (~3GB); it is the same file referenced in the TL;DR above. We use this demo subset only to present the expected dataloading structure.
For our actual training, the model is trained on MFR pre-processed by three NVS methods: 3DGS, TensoRF, and NeRFacto. Because of the pre-processed file size (~2TB), it is challenging to share this data directly. One workaround is to release a data pre-processing script for MFR, which we are still tidying up. We aim to release the pre-processing script in Dec 2024.
We train our model with two NVIDIA A5000 (24GB) GPUs for about two days. However, the model should perform reasonably well after 12 hours of training. It is also possible to train with a single GPU.
python task/train.py trainer.devices='[0,1]' # 2 GPUs
# python task/train.py trainer.devices='[0]' # 1 GPU
We provide an example command to predict CrossScore for NVS-rendered images by referencing real captured images.
git lfs install && git lfs pull # get our ckpt using git LFS
bash predict.sh
After running the script, CrossScore score maps should be written to the predict directory.
The output should be similar to the demo video on our project page.
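To inspect the results programmatically, the snippet below summarises the predicted score maps, assuming they are written as PNG images under the predict directory (check predict.sh for the actual output format; this is a hypothetical sketch, not the repo's API). It reads image dimensions straight from the PNG header, so no imaging library is needed.

```python
import struct
from pathlib import Path

def png_dimensions(path):
    """Read width/height from a PNG file's IHDR chunk without an imaging library."""
    with open(path, "rb") as f:
        header = f.read(24)
    if header[:8] != b"\x89PNG\r\n\x1a\n" or header[12:16] != b"IHDR":
        raise ValueError(f"{path} is not a PNG file")
    width, height = struct.unpack(">II", header[16:24])
    return width, height

def summarise_predictions(out_dir="predict"):
    """Print the dimensions of every PNG score map found under out_dir."""
    for p in sorted(Path(out_dir).rglob("*.png")):
        w, h = png_dimensions(p)
        print(f"{p}: {w}x{h}")

if __name__ == "__main__":
    summarise_predictions()
```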
- Create a HuggingFace demo page.
- Release ECCV quantitative results related scripts.
- Release data processing scripts.
- Release PyPI and Conda package.
This research is supported by an ARIA research gift grant from Meta Reality Lab. We gratefully thank Shangzhe Wu, Tengda Han, and Zihang Lai for insightful discussions, and Michael Hobley for proofreading.
@inproceedings{wang2024crossscore,
title={CrossScore: Towards Multi-View Image Evaluation and Scoring},
author={Zirui Wang and Wenjing Bian and Victor Adrian Prisacariu},
booktitle={ECCV},
year={2024}
}