Commit f0a9913 by ashawkey
Parent: 0cfd813

update readme

Files changed (1): readme.md (+8 -4)
readme.md CHANGED
@@ -4,7 +4,7 @@ A pytorch implementation of the text-to-3D model **Dreamfusion**, powered by the
 
 The original paper's project page: [_DreamFusion: Text-to-3D using 2D Diffusion_](https://dreamfusion3d.github.io/).
 
-Examples generated from text prompts only:
+Examples generated from text prompt 'a DSLR photo of a pineapple' viewed with the GUI in real time:
 
 Exported meshes viewed with MeshLab:
 
@@ -12,7 +12,7 @@ Exported meshes viewed with MeshLab:
 
 
 # Important Notice
-This project is a **work-in-progress**, and contains lots of differences from the paper. Also, many features are still not implemented now. The current generation quality cannot match the results from the original paper, and still fail badly for many prompts.
+This project is a **work-in-progress**, and contains lots of differences from the paper. Also, many features are still not implemented now. **The current generation quality cannot match the results from the original paper, and still fail badly for many prompts.**
 
 
 ## Notable differences from the paper
@@ -23,7 +23,7 @@ This project is a **work-in-progress**, and contains lots of differences from the paper.
 
 ## TODOs
 * The normal evaluation & shading part.
-* Improve the surface quality.
+* Better mesh (improve the surface quality).
 
 # Install
 
@@ -86,7 +86,11 @@ python main_nerf.py --text "a hamburger" --workspace trial_clip -O --test --gui
 
 # Code organization
 
-* The key SDS loss is located at `./nerf/sd.py > StableDiffusion > train_step`:
+This is a simple description of the most important implementation details.
+If you are interested in improving this repo, this might be a starting point.
+Any contribution would be greatly appreciated!
+
+* The SDS loss is located at `./nerf/sd.py > StableDiffusion > train_step`:
 ```python
 # 1. we need to interpolate the NeRF rendering to 512x512, to feed it to SD's VAE.
 pred_rgb_512 = F.interpolate(pred_rgb, (512, 512), mode='bilinear', align_corners=False)
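For context, the score distillation sampling (SDS) step that the snippet above opens can be sketched roughly as follows. This is a minimal sketch, not the repo's actual `train_step`: `encode`, `unet`, and `alphas` are hypothetical stand-ins for the Stable Diffusion VAE encoder, noise-prediction UNet, and noise schedule.

```python
import torch
import torch.nn.functional as F

def sds_grad(pred_rgb, encode, unet, text_emb, alphas, t):
    """Sketch of an SDS-style gradient (hypothetical callables, not the repo's API)."""
    # 1. interpolate the NeRF rendering to 512x512 to feed it to SD's VAE
    pred_rgb_512 = F.interpolate(pred_rgb, (512, 512), mode='bilinear', align_corners=False)
    latents = encode(pred_rgb_512)

    # 2. add noise to the latents at timestep t (DDPM-style forward process)
    noise = torch.randn_like(latents)
    a = alphas[t]
    latents_noisy = a.sqrt() * latents + (1 - a).sqrt() * noise

    # 3. predict the noise with the frozen UNet; no backprop through the diffusion model
    with torch.no_grad():
        noise_pred = unet(latents_noisy, t, text_emb)

    # 4. the SDS "gradient" is the weighted residual between predicted and true noise
    w = 1 - a
    return w * (noise_pred - noise)
```

In the paper's formulation this residual is injected directly as the gradient on the rendered latents (e.g. via `latents.backward(gradient=grad)`), skipping the UNet's Jacobian entirely, which is what makes the update cheap.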