ShineChen1024's Collections
- StableDreamer: Taming Noisy Score Distillation Sampling for Text-to-3D (arXiv:2312.02189)
- ImageDream: Image-Prompt Multi-view Diffusion for 3D Generation (arXiv:2312.02201)
- HyperDreamer: Hyper-Realistic 3D Content Generation and Editing from a Single Image (arXiv:2312.04543)
- Text-to-3D Generation with Bidirectional Diffusion using both 2D and 3D priors (arXiv:2312.04963)
- Sherpa3D: Boosting High-Fidelity Text-to-3D Generation via Coarse 3D Prior (arXiv:2312.06655)
- SEEAvatar: Photorealistic Text-to-3D Avatar Generation with Constrained Geometry and Appearance (arXiv:2312.08889)
- UniDream: Unifying Diffusion Priors for Relightable Text-to-3D Generation (arXiv:2312.08754)
- Stable Score Distillation for High-Quality 3D Generation (arXiv:2312.09305)
- GAvatar: Animatable 3D Gaussian Avatars with Implicit Mesh Learning (arXiv:2312.11461)
- Customize-It-3D: High-Quality 3D Creation from A Single Image Using Subject-Specific Knowledge Prior (arXiv:2312.11535)
- HAAR: Text-Conditioned Generative Model of 3D Strand-based Human Hairstyles (arXiv:2312.11666)
- Repaint123: Fast and High-quality One Image to 3D Generation with Progressive Controllable 2D Repainting (arXiv:2312.13271)
- Carve3D: Improving Multi-view Reconstruction Consistency for Diffusion Models with RL Finetuning (arXiv:2312.13980)
- Paint3D: Paint Anything 3D with Lighting-Less Texture Diffusion Models (arXiv:2312.13913)
- HarmonyView: Harmonizing Consistency and Diversity in One-Image-to-3D (arXiv:2312.15980)
- Make-A-Character: High Quality Text-to-3D Character Generation within Minutes (arXiv:2312.15430)
- DreamGaussian4D: Generative 4D Gaussian Splatting (arXiv:2312.17142)
- DiffusionGAN3D: Boosting Text-guided 3D Generation and Domain Adaption by Combining 3D GANs and Diffusion Priors (arXiv:2312.16837)
- SteinDreamer: Variance Reduction for Text-to-3D Score Distillation via Stein Identity (arXiv:2401.00604)
- En3D: An Enhanced Generative Model for Sculpting 3D Humans from 2D Synthetic Data (arXiv:2401.01173)
- Taming Mode Collapse in Score Distillation for Text-to-3D Generation (arXiv:2401.00909)
- SIGNeRF: Scene Integrated Generation for Neural Radiance Fields (arXiv:2401.01647)
- What You See is What You GAN: Rendering Every Pixel for High-Fidelity Geometry in 3D GANs (arXiv:2401.02411)
- InseRF: Text-Driven Generative Object Insertion in Neural 3D Scenes (arXiv:2401.05335)
- HexaGen3D: StableDiffusion is just one step away from Fast and Diverse Text-to-3D Generation (arXiv:2401.07727)
- TextureDreamer: Image-guided Texture Synthesis through Geometry-aware Diffusion (arXiv:2401.09416)
- Single-View 3D Human Digitalization with Large Reconstruction Models (arXiv:2401.12175)
- UltrAvatar: A Realistic Animatable 3D Avatar Diffusion Model with Authenticity Guided Textures (arXiv:2401.11078)
- Make-A-Shape: a Ten-Million-scale 3D Shape Model (arXiv:2401.11067)
- GALA: Generating Animatable Layered Assets from a Single Scan (arXiv:2401.12979)
- TIP-Editor: An Accurate 3D Editor Following Both Text-Prompts And Image-Prompts (arXiv:2401.14828)
- Repositioning the Subject within Image (arXiv:2401.16861)
- ReplaceAnything3D: Text-Guided 3D Scene Editing with Compositional Neural Radiance Fields (arXiv:2401.17895)
- AToM: Amortized Text-to-Mesh using 2D Diffusion (arXiv:2402.00867)
- LGM: Large Multi-View Gaussian Model for High-Resolution 3D Content Creation (arXiv:2402.05054)
- HeadStudio: Text to Animatable Head Avatars with 3D Gaussian Splatting (arXiv:2402.06149)
- GALA3D: Towards Text-to-3D Complex Scene Generation via Layout-guided Generative Gaussian Splatting (arXiv:2402.07207)
- IM-3D: Iterative Multiview Diffusion and Reconstruction for High-Quality 3D Generation (arXiv:2402.08682)
- GaussianObject: Just Taking Four Images to Get A High-Quality 3D Object with Gaussian Splatting (arXiv:2402.10259)
- Binary Opacity Grids: Capturing Fine Geometric Detail for Mesh-Based View Synthesis (arXiv:2402.12377)
- MVDiffusion++: A Dense High-resolution Multi-view Diffusion Model for Single or Sparse-view 3D Object Reconstruction (arXiv:2402.12712)
- Improving Robustness for Joint Optimization of Camera Poses and Decomposed Low-Rank Tensorial Radiance Fields (arXiv:2402.13252)
- MVD²: Efficient Multiview 3D Reconstruction for Multiview Diffusion (arXiv:2402.14253)
- Consolidating Attention Features for Multi-view Image Editing (arXiv:2402.14792)
- Disentangled 3D Scene Generation with Layout Learning (arXiv:2402.16936)
- ViewFusion: Towards Multi-View Consistency via Interpolated Denoising (arXiv:2402.18842)
- TripoSR: Fast 3D Object Reconstruction from a Single Image (arXiv:2403.02151)
- ViewDiff: 3D-Consistent Image Generation with Text-to-Image Models (arXiv:2403.01807)
- MagicClay: Sculpting Meshes With Generative Neural Fields (arXiv:2403.02460)
- CRM: Single Image to 3D Textured Mesh with Convolutional Reconstruction Model (arXiv:2403.05034)
- V3D: Video Diffusion Models are Effective 3D Generators (arXiv:2403.06738)
- Isotropic3D: Image-to-3D Generation Based on a Single CLIP Embedding (arXiv:2403.10395)
- Controllable Text-to-3D Generation via Surface-Aligned Gaussian Splatting (arXiv:2403.09981)
- FDGaussian: Fast Gaussian Splatting from Single Image via Geometric-aware Diffusion Model (arXiv:2403.10242)
- SV3D: Novel Multi-view Synthesis and 3D Generation from a Single Image using Latent Video Diffusion (arXiv:2403.12008)
- VFusion3D: Learning Scalable 3D Generative Models from Video Diffusion Models (arXiv:2403.12034)
- LN3Diff: Scalable Latent Neural Fields Diffusion for Speedy 3D Generation (arXiv:2403.12019)
- Generic 3D Diffusion Adapter Using Controlled Multi-View Editing (arXiv:2403.12032)
- GaussianFlow: Splatting Gaussian Dynamics for 4D Content Creation (arXiv:2403.12365)
- TexDreamer: Towards Zero-Shot High-Fidelity 3D Human Texture Generation (arXiv:2403.12906)
- GVGEN: Text-to-3D Generation with Volumetric Representation (arXiv:2403.12957)
- ComboVerse: Compositional 3D Assets Creation Using Spatially-Aware Diffusion Guidance (arXiv:2403.12409)
- Compress3D: a Compressed Latent Space for 3D Generation from a Single Image (arXiv:2403.13524)
- DreamReward: Text-to-3D Generation with Human Preference (arXiv:2403.14613)
- Gaussian Frosting: Editable Complex Radiance Fields with Real-Time Rendering (arXiv:2403.14554)
- ThemeStation: Generating Theme-Aware 3D Assets from Few Exemplars (arXiv:2403.15383)
- LATTE3D: Large-scale Amortized Text-To-Enhanced3D Synthesis (arXiv:2403.15385)
- FlexiDreamer: Single Image-to-3D Generation with FlexiCubes (arXiv:2404.00987)
- Freditor: High-Fidelity and Transferable NeRF Editing by Frequency Decomposition (arXiv:2404.02514)
- Hash3D: Training-free Acceleration for 3D Generation (arXiv:2404.06091)
- Magic-Boost: Boost 3D Generation with Multi-View Conditioned Diffusion (arXiv:2404.06429)
- DreamScene360: Unconstrained Text-to-3D Scene Generation with Panoramic Gaussian Splatting (arXiv:2404.06903)
- RealmDreamer: Text-Driven 3D Scene Generation with Inpainting and Depth Diffusion (arXiv:2404.07199)
- Taming Latent Diffusion Model for Neural Radiance Field Inpainting (arXiv:2404.09995)
- SyncDreamer: Generating Multiview-consistent Images from a Single-view Image (arXiv:2309.03453)
- MVDream: Multi-view Diffusion for 3D Generation (arXiv:2308.16512)
- LucidDreamer: Domain-free Generation of 3D Gaussian Splatting Scenes (arXiv:2311.13384)
- HiFi-123: Towards High-fidelity One Image to 3D Content Generation (arXiv:2310.06744)
- LucidDreamer: Towards High-Fidelity Text-to-3D Generation via Interval Score Matching (arXiv:2311.11284)
- Interactive3D: Create What You Want by Interactive 3D Generation (arXiv:2404.16510)
- MicroDreamer: Zero-shot 3D Generation in ~20 Seconds by Score-based Iterative Reconstruction (arXiv:2404.19525)
- Lightplane: Highly-Scalable Components for Neural 3D Fields (arXiv:2404.19760)
- SAGS: Structure-Aware 3D Gaussian Splatting (arXiv:2404.19149)
- Invisible Stitch: Generating Smooth 3D Scenes with Depth Inpainting (arXiv:2404.19758)