Train the model with RGBD dataset #21

Open
Qiulin-W opened this issue Sep 23, 2020 · 0 comments

Thanks for your great work, and congratulations! Since the unsup3d model is trained on RGB input, if I have an RGBD human-face dataset captured by commodity RGBD cameras, how should I add supervision for the depth part to make full use of the depth information?

I have also tried weakly-supervised methods such as "Accurate 3D Face Reconstruction with Weakly-Supervised Learning: From Single Image to Image Set", which estimates the BFM parameters directly and uses a differentiable renderer (my choice is PyTorch3D) for end-to-end weakly-supervised training. I tried an L1/L2 loss between the rendered zbuffer and the real depth map, but in this setup the depth loss may conflict with the other loss components (RGB loss, landmark loss).
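
For concreteness, the depth term I mean looks roughly like the sketch below. This is a minimal, simplified version: `real_depth`, `valid_mask`, and the weights `w_lmk`/`w_depth` are placeholders for my actual setup, and the meshes/cameras come from the BFM fitting.

```python
import torch
from pytorch3d.renderer import MeshRasterizer, RasterizationSettings

def depth_loss(meshes, cameras, real_depth, valid_mask):
    """L1 loss between the rasterized zbuffer and the sensor depth map.

    meshes:     a pytorch3d.structures.Meshes batch (built from BFM parameters)
    cameras:    PyTorch3D cameras for the batch
    real_depth: (N, H, W) depth from the RGBD sensor
    valid_mask: (N, H, W) bool, True where the sensor depth is valid
    """
    raster_settings = RasterizationSettings(image_size=tuple(real_depth.shape[-2:]))
    rasterizer = MeshRasterizer(cameras=cameras, raster_settings=raster_settings)
    fragments = rasterizer(meshes)
    zbuf = fragments.zbuf[..., 0]              # nearest face per pixel
    hit = fragments.pix_to_face[..., 0] >= 0   # pixels actually covered by the mesh
    # NOTE: zbuf is in NDC space; it (or real_depth) has to be converted to a
    # common frame/units before comparing, otherwise the depth term fights
    # the RGB and landmark terms.
    mask = valid_mask & hit                    # supervise only mutually valid pixels
    return torch.abs(zbuf[mask] - real_depth[mask]).mean()

# weighted total; w_depth needs to be tuned against the other terms
# loss = rgb_loss + w_lmk * landmark_loss + w_depth * depth_loss(meshes, cameras, real_depth, valid_mask)
```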

Any suggestions on this?

Thanks
