(CVPR 2023) NeFII: Inverse Rendering for Reflectance Decomposition with Near-Field Indirect Illumination
Paper | Project Page | Arxiv | Data
Create conda environment

First, choose your anaconda path in requirements.sh (e.g. `source /root/anaconda3/etc/profile.d/conda.sh`), then run:

```
bash requirements.sh
conda activate nefii
```
Use NeuS to train geometry.

First, clone NeuS: `git clone https://github.com/Jangmin-Lee/NeuS.git`.

Take the thin_cube subdataset of NeuS as an example.

- images w/o masks:
  - modify `code_path` and the anaconda path in run_s1_womask.sh.
  - modify `general.base_exp_dir` and `dataset.data_dir` in NeuS/confs/womask.conf.
  - run:
    ```
    bash code/training/training_scripts/run_s1_womask.sh
    ```
- images w/ masks:
  - follow the same process as 'images w/o masks', but use run_s1_wmask.sh and NeuS/confs/wmask.conf.
- modify `in_data_path`, `code_path`, `out_data_path`, and the anaconda path in neus2nefii.sh to transform the datasets, then run:
  ```
  bash code/training/training_scripts/neus2nefii.sh
  ```
- images w/o masks:
  - modify `data_path`, `code_path`, `save_path`, and the anaconda path in run_s2_womask.sh.
  - modify `geometry_neus` in run_s2_womask.sh.
  - run:
    ```
    bash code/training/training_scripts/run_s2_womask.sh
    ```
- images w/ masks:
  - follow the same process as 'images w/o masks', but use run_s2_wmask.sh.
As mentioned in Sec. 4.2, when comparing the performance of different approaches on synthetic data, we directly learn geometry from the mesh for each approach. This enables us to better evaluate the material estimation ability without interference from the quality of the geometry reconstruction. Below are the corresponding scripts.
Take the robot subdataset as an example.
- modify your anaconda path in run_s1.sh and run_s2.sh (e.g. `source /root/anaconda3/etc/profile.d/conda.sh`).
- modify `data_path`, `code_path`, and `save_path` in run_s1.sh and run_s2.sh, then run:
  ```
  bash code/training/training_scripts/robot/run_s1.sh
  ```
- modify `Geometry` in run_s2.sh first, then run:
  ```
  bash code/training/training_scripts/robot/run_s2.sh
  ```
- modify your anaconda path in render.sh.
- modify `data_path`, `code_path`, and `save_path` in render.sh.
- modify `old_expdir`, `Expname`, and `Timestamp` for rendering in render.sh, then run:
  ```
  bash code/training/training_scripts/robot/render.sh
  ```
Tips:

- when training w/ or w/o masks, `--conf` should be modified to the corresponding `*/conf_neus.conf`.
- for viewing EXR images, you can use the tev HDR viewer.
- modify your anaconda path in eval.sh.
- modify `code_path`, `data_path`, and `render_folder` in eval.sh, then run:
  ```
  bash code/training/training_scripts/robot/eval.sh
  ```
- for the network:
  - sg_render.py: core of the appearance modeling; evaluates the rendering equation using spherical Gaussians.
  - sg_envmap_convention.png: coordinate-system convention of mitsuba for the envmap.
  - blender_envmap_convention.png: coordinate-system convention of blender for the envmap.
  - sg_envmap_material.py: optimizable parameters for the material part.
  - implicit_differentiable_renderer.py: optimizable parameters for the geometry part; it also contains our forward rendering code.
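As background for the spherical-Gaussian appearance model (a minimal pure-Python sketch, not the repo's API; `eval_sg` and its argument names are hypothetical), a single lobe G(v; ξ, λ, μ) = μ · exp(λ(v·ξ − 1)) can be evaluated like this:

```python
import math

def eval_sg(v, lobe_axis, sharpness, amplitude):
    """Evaluate one spherical Gaussian lobe: mu * exp(lambda * (dot(v, xi) - 1)).

    v and lobe_axis are unit 3-vectors (tuples); sharpness (lambda) controls
    the lobe width and amplitude (mu) its peak value. Hypothetical helper for
    illustration only, not the functions defined in sg_render.py.
    """
    cos_angle = sum(a * b for a, b in zip(v, lobe_axis))
    return amplitude * math.exp(sharpness * (cos_angle - 1.0))

# At the lobe axis the exponent vanishes, so the lobe peaks at its amplitude;
# away from the axis it falls off exponentially with the sharpness.
peak = eval_sg((0.0, 0.0, 1.0), (0.0, 0.0, 1.0), sharpness=20.0, amplitude=1.5)
```

Because products and integrals of such lobes have closed forms, the rendering equation can be approximated analytically, which is what makes the SG representation attractive for inverse rendering.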
- for the training phase:
  - geometry_train.py: optimization of unknown geometry with a mesh.
  - sdf_dataset.py: dataloader for meshes.
  - conf: configuration files for the network and training strategy. Note that Step 2 of the training phase is memory-consuming; you can decrease `num_pixels` and `num_rays` to train on your device.
  - conf_neus: for models trained w/ or w/o masks.
  - exp_runner.py: training script for step 2. DDP is also feasible.
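If Step 2 exhausts your GPU memory, the two keys named above can be lowered in your conf file. A hypothetical fragment (key names from the tip above; the enclosing block and the values are illustrative, so check your own conf for the actual structure):

```
train {
    num_pixels = 512   # pixels sampled per batch; lower this to reduce memory
    num_rays = 64      # rays traced per sample; lower this to reduce memory
}
```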
- for the evaluation phase:
  - render.py: novel-view rendering with materials. It is memory-consuming; decreasing `Memory_capacity_level` in render.sh will help. Note that the rendering phase is time-consuming, and DDP will be faster.
  - evaluate.py: evaluation script. The results can be found in `code/results/*.txt`.
  - fit_envmap_with_sg.py: represents an envmap with a mixture of spherical Gaussians. We provide three envmaps represented by spherical Gaussians optimized via this script in the `code/envmaps` folder.
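Once an envmap is expressed as a mixture of spherical Gaussians, radiance toward any direction is just the sum of all lobes. A minimal pure-Python sketch of that idea (hypothetical data layout, not the file format produced by fit_envmap_with_sg.py):

```python
import math

def eval_sg_mixture(v, lobes):
    """Radiance toward unit direction v from a mixture of spherical Gaussian
    lobes, each given as (axis, sharpness, (r, g, b) amplitude).

    Illustrative representation only; the repo's envmap files may be laid out
    differently.
    """
    out = [0.0, 0.0, 0.0]
    for axis, sharpness, amplitude in lobes:
        cos_angle = sum(a * b for a, b in zip(v, axis))
        weight = math.exp(sharpness * (cos_angle - 1.0))
        for c in range(3):
            out[c] += amplitude[c] * weight
    return tuple(out)

# Toy envmap: a sharp, bright "sun" lobe along +z and a broad, dim sky lobe.
envmap = [
    ((0.0, 0.0, 1.0), 100.0, (10.0, 9.0, 8.0)),
    ((0.0, 1.0, 0.0), 2.0, (0.2, 0.3, 0.5)),
]
radiance_up = eval_sg_mixture((0.0, 0.0, 1.0), envmap)
```

Fitting then amounts to optimizing the lobe axes, sharpnesses, and amplitudes so that this mixture matches the pixels of the source HDR envmap.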
```
@InProceedings{Wu_2023_CVPR,
    author    = {Wu, Haoqian and Hu, Zhipeng and Li, Lincheng and Zhang, Yongqiang and Fan, Changjie and Yu, Xin},
    title     = {NeFII: Inverse Rendering for Reflectance Decomposition With Near-Field Indirect Illumination},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    month     = {June},
    year      = {2023},
    pages     = {4295-4304}
}
```
We have used codes from the following repositories, and we thank the authors for sharing their codes.
- InvRender: https://github.com/zju3dv/InvRender
Tips:

- Thanks to DIVE128 for organizing the code for the release version.