Real-time, end-to-end 2D-to-3D video conversion based on deep learning.
Inspired by piiswrong/deep3d, we rebuilt the network in PyTorch and optimized it in the time domain and for faster inference. Try it and enjoy your own 3D movies.
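For background, Deep3D-style models do not regress the right view directly: they predict a per-pixel probability distribution over candidate disparities and synthesize the right view as a probability-weighted sum of horizontally shifted copies of the left view (the "selection layer"). A minimal NumPy sketch of that idea (function and argument names are illustrative, not this repo's API):

```python
import numpy as np

def selection_layer(left, probs, disparities):
    """Synthesize a right view from a left view and disparity probabilities.

    left:        (H, W) image.
    probs:       (D, H, W) per-pixel softmax over D disparity candidates.
    disparities: list of D integer pixel shifts.
    """
    right = np.zeros_like(left, dtype=float)
    for p, d in zip(probs, disparities):
        shifted = np.roll(left, -d, axis=1)  # left view shifted by disparity d
        right += p * shifted                 # probability-weighted blend
    return right
```

In the real network the probabilities come from a CNN and the shift-and-blend is differentiable, so the whole pipeline trains end-to-end from stereo pairs.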
The left side is the input video; the right side is the output video with parallax.
More examples:
Hardware | 360p (FPS) | 720p (FPS) | 1080p (FPS) | 4K (FPS)
---|---|---|---|---
GPU (RTX 2080 Ti) | 84 | 87 | 77 | 26
CPU (Xeon Platinum 8260) | 27.7 | 14.1 | 7.2 | 2.0
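Throughput figures like these are typically measured by timing the per-frame processing loop after a short warm-up (so CUDA initialization and caching do not skew the result). A small sketch of such a timer (`measure_fps` is a hypothetical helper, not part of this repo):

```python
import time

def measure_fps(process_frame, frames, warmup=5):
    """Return frames/second for process_frame over a list of frames."""
    for f in frames[:warmup]:
        process_frame(f)  # warm-up pass: JIT compilation, CUDA init, caches
    start = time.perf_counter()
    for f in frames:
        process_frame(f)
    return len(frames) / (time.perf_counter() - start)
```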
- Linux, Mac OS, Windows
- Python 3.7+
- ffmpeg 3.4.6+
- PyTorch 1.7.1
- CPU or NVIDIA GPU
This code depends on opencv-python, available via pip:
```bash
pip install opencv-python
```
```bash
git clone https://github.com/HypoX64/Deep3D
cd Deep3D
```
You can download pre-trained models from:
[Google Drive] [Baidu Netdisk, extraction code: xxo0]
Note:
- The 360p model gives the best results.
- The published models are not inference optimized.
- Models are still being trained; 1080p and 4K models will be uploaded in the future.
```bash
python inference.py --model ./export/deep3d_v1.0_640x360_cuda.pt --video ./medias/wood.mp4 --out ./result/wood.mp4 --inv
# some videos need the left and right views reversed (--inv)
```
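To convert many videos at once, the command line above can be driven from a small script. A sketch under the assumption that `inference.py` accepts exactly the flags shown (the `build_cmd` and `convert_all` helpers are hypothetical, not part of this repo):

```python
import subprocess
import sys
from pathlib import Path

def build_cmd(model, video, out, inv=False):
    """Assemble the inference.py command line from the flags above."""
    cmd = [sys.executable, "inference.py",
           "--model", model, "--video", video, "--out", out]
    if inv:
        cmd.append("--inv")  # swap left/right views when needed
    return cmd

def convert_all(model, src_dir, dst_dir, inv=False):
    """Run inference.py on every .mp4 in src_dir, writing to dst_dir."""
    Path(dst_dir).mkdir(parents=True, exist_ok=True)
    for video in sorted(Path(src_dir).glob("*.mp4")):
        out = Path(dst_dir) / video.name
        subprocess.run(build_cmd(model, str(video), str(out), inv),
                       check=True)
```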
This code borrows heavily from [deep3d] and [DeepMosaics].