A growing, curated list of Text-to-3D and Diffusion-to-3D works, heavily inspired by awesome-NeRF.
05.08.2023
- Provided citations in BibTeX

06.07.2023
- Created initial list
- Zero-Shot Text-Guided Object Generation with Dream Fields, Ajay Jain et al., CVPR 2022 | citation
- CLIP-Forge: Towards Zero-Shot Text-to-Shape Generation, Aditya Sanghi et al., Arxiv 2021 | citation
- CLIP-NeRF: Text-and-Image Driven Manipulation of Neural Radiance Fields, Can Wang et al., Arxiv 2021 | citation
- CG-NeRF: Conditional Generative Neural Radiance Fields, Kyungmin Jo et al., Arxiv 2021 | citation
- PureCLIPNERF: Understanding Pure CLIP Guidance for Voxel Grid NeRF Models, Han-Hung Lee et al., Arxiv 2022 | citation
- TANGO: Text-driven Photorealistic and Robust 3D Stylization via Lighting Decomposition, Yongwei Chen et al., NeurIPS 2022 | citation
- SDFusion: Multimodal 3D Shape Completion, Reconstruction, and Generation, Yen-Chi Cheng et al., CVPR 2023 | citation
- 3DDesigner: Towards Photorealistic 3D Object Generation and Editing with Text-guided Diffusion Models, Gang Li et al., Arxiv 2022 | citation
- DreamFusion: Text-to-3D using 2D Diffusion, Ben Poole et al., ICLR 2023 | citation
- Dream3D: Zero-Shot Text-to-3D Synthesis Using 3D Shape Prior and Text-to-Image Diffusion Models, Jiale Xu et al., Arxiv 2022 | citation
- NeRF-Art: Text-Driven Neural Radiance Fields Stylization, Can Wang et al., Arxiv 2022 | citation
- Novel View Synthesis with Diffusion Models, Daniel Watson et al., ICLR 2023 | citation
- NeuralLift-360: Lifting An In-the-wild 2D Photo to A 3D Object with 360° Views, Dejia Xu et al., Arxiv 2022 | citation
- Point-E: A System for Generating 3D Point Clouds from Complex Prompts, Alex Nichol et al., Arxiv 2022 | citation
- Latent-NeRF for Shape-Guided Generation of 3D Shapes and Textures, Gal Metzer et al., Arxiv 2023 | citation
- Magic3D: High-Resolution Text-to-3D Content Creation, Chen-Hsuan Lin et al., CVPR 2023 | citation
- RealFusion: 360° Reconstruction of Any Object from a Single Image, Luke Melas-Kyriazi et al., CVPR 2023 | citation
- Monocular Depth Estimation using Diffusion Models, Saurabh Saxena et al., Arxiv 2023 | citation
- SparseFusion: Distilling View-conditioned Diffusion for 3D Reconstruction, Zhizhuo Zhou et al., CVPR 2023 | citation
- NerfDiff: Single-image View Synthesis with NeRF-guided Distillation from 3D-aware Diffusion, Jiatao Gu et al., ICML 2023 | citation
- Score Jacobian Chaining: Lifting Pretrained 2D Diffusion Models for 3D Generation, Haochen Wang et al., CVPR 2023 | citation
- Describe3D: High-Fidelity 3D Face Generation from Natural Language Descriptions, Menghua Wu et al., CVPR 2023 | citation
- TEXTure: Text-Guided Texturing of 3D Shapes, Elad Richardson et al., SIGGRAPH 2023 | citation
- NeRDi: Single-View NeRF Synthesis with Language-Guided Diffusion as General Image Priors, Congyue Deng et al., CVPR 2023 | citation
- DiffusioNeRF: Regularizing Neural Radiance Fields with Denoising Diffusion Models, Jamie Wynn et al., CVPR 2023 | citation
- 3DQD: Generalized Deep 3D Shape Prior via Part-Discretized Diffusion Process, Yuhan Li et al., CVPR 2023 | citation
- DATID-3D: Diversity-Preserved Domain Adaptation Using Text-to-Image Diffusion for 3D Generative Model, Gwanghyun Kim et al., CVPR 2023 | citation
- ProlificDreamer: High-Fidelity and Diverse Text-to-3D Generation with Variational Score Distillation, Zhengyi Wang et al., Arxiv 2023 | citation
- Rodin: A Generative Model for Sculpting 3D Digital Avatars Using Diffusion, Tengfei Wang et al., Arxiv 2022 | citation
- 3D-aware Image Generation using 2D Diffusion Models, Jianfeng Xiang et al., Arxiv 2023 | citation
- Make-It-3D: High-Fidelity 3D Creation from A Single Image with Diffusion Prior, Junshu Tang et al., ICCV 2023 | citation
- Re-imagine the Negative Prompt Algorithm: Transform 2D Diffusion into 3D, alleviate Janus problem and Beyond, Mohammadreza Armandpour et al., Arxiv 2023 | citation
- Text-To-4D Dynamic Scene Generation, Uriel Singer et al., Arxiv 2023 | citation
- Generative Novel View Synthesis with 3D-Aware Diffusion Models, Eric R. Chan et al., Arxiv 2023 | citation
- Text2NeRF: Text-Driven 3D Scene Generation with Neural Radiance Fields, Jingbo Zhang et al., Arxiv 2023 | citation
- Magic123: One Image to High-Quality 3D Object Generation Using Both 2D and 3D Diffusion Priors, Guocheng Qian et al., Arxiv 2023 | citation
- DreamBooth3D: Subject-Driven Text-to-3D Generation, Amit Raj et al., ICCV 2023 | citation
- Zero-1-to-3: Zero-shot One Image to 3D Object, Ruoshi Liu et al., Arxiv 2023 | citation
- ZeroAvatar: Zero-shot 3D Avatar Generation from a Single Image, Zhenzhen Weng et al., Arxiv 2023 | citation
- AvatarCraft: Transforming Text into Neural Human Avatars with Parameterized Shape and Pose Control, Ruixiang Jiang et al., ICCV 2023 | citation
- TextDeformer: Geometry Manipulation using Text Guidance, William Gao et al., Arxiv 2023 | citation
- ATT3D: Amortized Text-to-3D Object Synthesis, Jonathan Lorraine et al., ICCV 2023 | citation
- Conditional 3D Shape Generation based on Shape-Image-Text Aligned Latent Representation, Zibo Zhao et al., Arxiv 2023 | citation
- Diffusion-SDF: Conditional Generative Modeling of Signed Distance Functions, Gene Chou et al., Arxiv 2023 | citation
- HiFA: High-fidelity Text-to-3D with Advanced Diffusion Guidance, Junzhe Zhu et al., Arxiv 2023 | citation
- LERF: Language Embedded Radiance Fields, Justin Kerr et al., Arxiv 2023 | citation
- Instruct-NeRF2NeRF: Editing 3D Scenes with Instructions, Ayaan Haque et al., Arxiv 2023 | citation
- 3DFuse: Let 2D Diffusion Model Know 3D-Consistency for Robust Text-to-3D Generation, Junyoung Seo et al., Arxiv 2023 | citation
- MVDiffusion: Enabling Holistic Multi-view Image Generation with Correspondence-Aware Diffusion, Shitao Tang et al., Arxiv 2023 | citation
- One-2-3-45: Any Single Image to 3D Mesh in 45 Seconds without Per-Shape Optimization, Minghua Liu et al., Arxiv 2023 | citation
- TextMesh: Generation of Realistic 3D Meshes From Text Prompts, Christina Tsalicoglou et al., Arxiv 2023 | citation
- Prompt-Free Diffusion: Taking "Text" out of Text-to-Image Diffusion Models, Xingqian Xu et al., Arxiv 2023 | citation
- SceneScape: Text-Driven Consistent Scene Generation, Rafail Fridman et al., Arxiv 2023 | citation
- Local 3D Editing via 3D Distillation of CLIP Knowledge, Junha Hyung et al., Arxiv 2023 | citation
- CLIP-Mesh: Generating textured meshes from text using pretrained image-text models, Nasir Khalid et al., Arxiv 2023 | citation
- Text2Room: Extracting Textured 3D Meshes from 2D Text-to-Image Models, Lukas Höllein et al., Arxiv 2023 | citation
- Single-Stage Diffusion NeRF: A Unified Approach to 3D Generation and Reconstruction, Hansheng Chen et al., Arxiv 2023 | citation
- PODIA-3D: Domain Adaptation of 3D Generative Model Across Large Domain Gap Using Pose-Preserved Text-to-Image Diffusion, Gwanghyun Kim et al., ICCV 2023 | citation
- Chupa: Carving 3D Clothed Humans from Skinned Shape Priors using 2D Diffusion Probabilistic Models, Byungjun Kim et al., ICCV 2023 | citation
- Shap-E: Generating Conditional 3D Implicit Functions, Heewoo Jun et al., Arxiv 2023 | citation
- Sketch-A-Shape: Zero-Shot Sketch-to-3D Shape Generation, Aditya Sanghi et al., Arxiv 2023 | citation
- RePaint-NeRF: NeRF Editing via Semantic Masks and Diffusion Models, Xingchen Zhou et al., Arxiv 2023 | citation
- Text2Tex: Text-driven Texture Synthesis via Diffusion Models, Dave Zhenyu Chen et al., Arxiv 2023 | citation
- 3D VADER - AutoDecoding Latent 3D Diffusion Models, Evangelos Ntavelis et al., Arxiv 2023 | citation
- Control4D: Dynamic Portrait Editing by Learning 4D GAN from 2D Diffusion-based Editor, Ruizhi Shao et al., Arxiv 2023 | citation
- DreamSparse: Escaping from Plato's Cave with 2D Frozen Diffusion Model Given Sparse Views, Paul Yoo et al., Arxiv 2023 | citation
- Fantasia3D: Disentangling Geometry and Appearance for High-quality Text-to-3D Content Creation, Rui Chen et al., Arxiv 2023 | citation
- DreamFace: Progressive Generation of Animatable 3D Faces under Text Guidance, Longwen Zhang et al., Arxiv 2023 | citation
- Set-the-Scene: Global-Local Training for Generating Controllable NeRF Scenes, Dana Cohen-Bar et al., Arxiv 2023 | citation
- HeadSculpt: Crafting 3D Head Avatars with Text, Xiao Han et al., Arxiv 2023 | citation
- Cap3D: Scalable 3D Captioning with Pretrained Models, Tiange Luo et al., Arxiv 2023 | citation
- InstructP2P: Learning to Edit 3D Point Clouds with Text Instructions, Jiale Xu et al., Arxiv 2023 | citation
- DreamHuman: Animatable 3D Avatars from Text, Nikos Kolotouros et al., Arxiv 2023 | citation
- FaceCLIPNeRF: Text-driven 3D Face Manipulation using Deformable Neural Radiance Fields, Sungwon Hwang et al., Arxiv 2023 | citation
- 3D-LLM: Injecting the 3D World into Large Language Models, Yining Hong et al., Arxiv 2023 | citation
- Points-to-3D: Bridging the Gap between Sparse Points and Shape-Controllable Text-to-3D Generation, Chaohui Yu et al., Arxiv 2023 | citation
- RGB-D-Fusion: Image Conditioned Depth Diffusion of Humanoid Subjects, Sascha Kirch et al., Arxiv 2023 | citation
- AvatarVerse: High-quality & Stable 3D Avatar Creation from Text and Pose, Huichao Zhang et al., Arxiv 2023 | citation
- TeCH: Text-guided Reconstruction of Lifelike Clothed Humans, Yangyi Huang et al., Arxiv 2023 | citation
- HumanLiff: Layer-wise 3D Human Generation with Diffusion Model, Shoukang Hu et al., Arxiv 2023 | citation
- TADA! Text to Animatable Digital Avatars, Tingting Liao et al., Arxiv 2023 | citation
- MATLABER: Material-Aware Text-to-3D via LAtent BRDF auto-EncodeR, Xudong Xu et al., Arxiv 2023 | citation
- IT3D: Improved Text-to-3D Generation with Explicit View Synthesis, Yiwen Chen et al., Arxiv 2023 | citation
- SATR: Zero-Shot Semantic Segmentation of 3D Shapes, Ahmed Abdelreheem et al., ICCV 2023 | citation
- One-shot Implicit Animatable Avatars with Model-based Priors, Yangyi Huang et al., ICCV 2023 | citation
- MVDream: Multi-view Diffusion for 3D Generation, Yichun Shi et al., Arxiv 2023 | citation
- PointLLM: Empowering Large Language Models to Understand Point Clouds, Runsen Xu et al., Arxiv 2023 | citation
- Objaverse: A Universe of Annotated 3D Objects, Matt Deitke et al., Arxiv 2022 | citation
- Objaverse-XL: A Universe of 10M+ 3D Objects, Matt Deitke et al., Preprint 2023 | citation
- threestudio: A unified framework for 3D content generation, Yuan-Chen Guo et al., Github 2023
- Nerfstudio: A Modular Framework for Neural Radiance Field Development, Matthew Tancik et al., SIGGRAPH 2023
- Mirage3D: Open-Source Implementations of 3D Diffusion Models Optimized for GLB Output, Mirageml et al., Github 2023
- Initial list of the SOTA
- Provide citations in BibTeX
- Sub-categorize based on input conditioning