codestella committed on
Commit
3c9f729
1 Parent(s): e1b3138

README edit

README.md CHANGED
@@ -2,13 +2,15 @@
<p align="center"><img width="450" alt="스크린샷 2021-07-04 오후 4 11 51" src="https://user-images.githubusercontent.com/77657524/126361638-4aad58e8-4efb-4fc5-bf78-f53d03799e1e.png"></p>

- the Pytorch, JAX/Flax based code implementation of this paper [Putting NeRF on a Diet : Ajay Jain, Matthew Tancik, Pieter Abbeel, Arxiv : https://arxiv.org/abs/2104.00677]
- The model generates the novel view synthesis redering (NeRF: Neural Radiances Field) base on Fewshot learning.
- The semantic loss using pre-trained CLIP Vision Transformer embedding is used for 2D supervision for 3D. It outperforms the Original NeRF in 3D reconstruction for

## 🤗 Hugging Face Hub Repo URL:
- We will also upload our project on the Hugging Face Hub Repository.
[https://huggingface.co/flax-community/putting-nerf-on-a-diet/](https://huggingface.co/flax-community/putting-nerf-on-a-diet/)

Our JAX/Flax implementation currently supports:
@@ -46,6 +48,12 @@ Our JAX/Flax implementation currently supports:
</tbody>
</table>

## 💻 Installation

```bash
@@ -65,15 +73,21 @@ pip install --upgrade jax jaxlib==0.1.57+cuda101 -f https://storage.googleapis.c
pip install flax transformers[flax]
```

- ## ⚽ Dataset & Methods
Download the datasets from the [NeRF official Google Drive](https://drive.google.com/drive/folders/128yBriW1IG_3NJ5Rp7APSTZsJqdJdfc1).
Please download the `nerf_synthetic.zip` and unzip them
in the place you like. Let's assume they are placed under `/tmp/jaxnerf/data/`.

<p align="center"><img width="400" alt="스크린샷 2021-07-04 오후 4 11 51" src="https://user-images.githubusercontent.com/77657524/124376591-b312b780-dce2-11eb-80ad-9129d6f5eedb.png"></p>

Based on the principle
- that “a bulldozer is a bulldozer from any perspective”, our proposed DietNeRF supervises the radiance field from arbitrary poses
(DietNeRF cameras). This is possible because we compute a semantic consistency loss in a feature space capturing high-level
scene attributes, not in pixel space. We extract semantic representations of renderings using the CLIP Vision Transformer, then
maximize similarity with representations of ground-truth views. In
@@ -96,45 +110,51 @@ You can toggle the semantic loss by “use_semantic_loss” in configuration fil
## 💎 Experiment Results

-
### ❗ Rendered images by 8-shot learned Diet-NeRF
- ### CHAIR / HOTDOG / DRUM

<p align="center">
<table>
<tr>
- <td><img alt="" src="https://user-images.githubusercontent.com/26036843/126624964-52c81c00-73c3-45ee-9807-02f9de514370.png" width="300"/></td>
- <td><img alt="" src="https://user-images.githubusercontent.com/26036843/126625086-f3479803-7ca7-4011-9242-2f9c1fbecc25.png" width="300"/></td>
- <td><img alt="" src="https://user-images.githubusercontent.com/26036843/126625142-f9b4b1f5-683b-48e1-b2a9-18b9852d9da5.png" width="300"/></td>
<tr>
</table></p>

- ### ❗ Rendering GIF images by 4-shot learned Diet-NeRF and Diet-NeRF

- DietNeRF has a strong capacity to generalise on novel and challenging views with EXTREMELY SMALL TRAINING SAMPLES!
- The animations below shows the performance difference between DietNeRF (left) v.s. NeRF (right) with only 4 training images:

- ### ❗ Rendered GIF by occluded 14-shot learned NeRF and Diet-NeRF
- We made aritificial occulusion on the right side of image.
- The reconstruction quality can be compared with this experiment.
- Diet NeRF shows better quailty than Original NeRF when It is occulused.

#### SHIP
<p align="center">
<table>
<tr>
- <td><img alt="" src="https://user-images.githubusercontent.com/26036843/126626302-7df48853-54c2-42a3-9876-dc618b35219a.gif" width="300"/></td>
- <td><img alt="" src="https://user-images.githubusercontent.com/26036843/126626308-1c63a1e1-5c87-42af-9d7d-6468bd345c6a.gif" width="300"/></td>
<tr>
</table></p>

- ## 🤩 Demo
-
- You can check our Streamlit Space Demo on following site !
- [https://huggingface.co/spaces/flax-community/DietNerf-Demo](https://huggingface.co/spaces/flax-community/DietNerf-Demo)
-
## 👨‍👧‍👦 Our Teams

@@ -161,7 +181,7 @@ This project is based on “JAX-NeRF”.
}
```

- This project is based on “JAX-NeRF”.
```
@misc{jain2021putting,
title={Putting NeRF on a Diet: Semantically Consistent Few-Shot View Synthesis},
@@ -182,8 +202,14 @@ This project is based on “JAX-NeRF”.
Our project started during the HuggingFace X GoogleAI (JAX) Community Week event.
https://discuss.huggingface.co/t/open-to-the-community-community-week-using-jax-flax-for-nlp-cv/7104

- Thank you for Our Mentor Suraj and Organizers in JAX/Flax Community Week!
Our team grew through this community learning experience. It was a wonderful time!

<p align="center"><img width="250" alt="스크린샷 2021-07-04 오후 4 11 51" src="https://user-images.githubusercontent.com/77657524/126369170-5664076c-ac99-4157-bc53-b91dfb7ed7e1.jpeg"></p>

<p align="center"><img width="450" alt="스크린샷 2021-07-04 오후 4 11 51" src="https://user-images.githubusercontent.com/77657524/126361638-4aad58e8-4efb-4fc5-bf78-f53d03799e1e.png"></p>

+ Welcome to the Putting NeRF on a Diet project!
+ This project is a PyTorch and JAX/Flax implementation of the paper [Putting NeRF on a Diet : Ajay Jain, Matthew Tancik, Pieter Abbeel, arXiv : https://arxiv.org/abs/2104.00677].
+ The model generates novel view synthesis renderings (NeRF: Neural Radiance Fields) with a few-shot learning scheme.
+ The semantic loss uses pre-trained CLIP Vision Transformer embeddings, which provide 2D supervision for the 3D representation.
+ DietNeRF outperforms the original NeRF in 3D reconstruction and neural rendering when only a few images are available.

## 🤗 Hugging Face Hub Repo URL:
+ We will also upload our project to the Hugging Face Hub repository.
[https://huggingface.co/flax-community/putting-nerf-on-a-diet/](https://huggingface.co/flax-community/putting-nerf-on-a-diet/)

Our JAX/Flax implementation currently supports:
</tbody>
</table>

+ ## 🤩 Demo
+
+ You can try our Streamlit Space demo at the following site!
+ [https://huggingface.co/spaces/flax-community/DietNerf-Demo](https://huggingface.co/spaces/flax-community/DietNerf-Demo)
+

## 💻 Installation

```bash

pip install flax transformers[flax]
```

+ ## ⚽ Dataset
Download the datasets from the [NeRF official Google Drive](https://drive.google.com/drive/folders/128yBriW1IG_3NJ5Rp7APSTZsJqdJdfc1).
Please download the `nerf_synthetic.zip` and unzip them
in the place you like. Let's assume they are placed under `/tmp/jaxnerf/data/`.
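For convenience, here is a minimal sketch (ours, not part of the repository) of unpacking the manually downloaded archive into the `/tmp/jaxnerf/data/` location assumed above. The `~/Downloads/nerf_synthetic.zip` path is an assumption; adjust it to wherever you saved the file.

```python
# Minimal sketch: unzip the manually downloaded nerf_synthetic.zip into the
# directory assumed by the training scripts (/tmp/jaxnerf/data/).
import zipfile
from pathlib import Path

archive = Path("~/Downloads/nerf_synthetic.zip").expanduser()  # assumed download location; adjust as needed
data_dir = Path("/tmp/jaxnerf/data")
data_dir.mkdir(parents=True, exist_ok=True)

with zipfile.ZipFile(archive) as zf:
    zf.extractall(data_dir)

# The synthetic scenes (chair, drums, hotdog, lego, ship, ...) should now sit under
# /tmp/jaxnerf/data/nerf_synthetic/ (the zip normally contains a top-level nerf_synthetic/ folder).
```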
+
+ ## 💖 Methods
+
+ You can find a more detailed explanation of DietNeRF in the following **Notion Report**:
+ * 👉👉 VERY detailed DietNeRF explanation docs: (https://bit.ly/3x4FwcT) 👈👈
+
<p align="center"><img width="400" alt="스크린샷 2021-07-04 오후 4 11 51" src="https://user-images.githubusercontent.com/77657524/124376591-b312b780-dce2-11eb-80ad-9129d6f5eedb.png"></p>

Based on the principle
+ that “a bulldozer is a bulldozer from any perspective”, our proposed DietNeRF supervises the radiance field from arbitrary poses
(DietNeRF cameras). This is possible because we compute a semantic consistency loss in a feature space capturing high-level
scene attributes, not in pixel space. We extract semantic representations of renderings using the CLIP Vision Transformer, then
maximize similarity with representations of ground-truth views. In
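As a rough illustration of the semantic consistency idea described above, here is a minimal JAX sketch under our own assumptions (it is not the repository's actual code): a rendering from an arbitrary pose and a ground-truth view are each embedded with a CLIP image encoder, and the loss penalizes low cosine similarity between the two embeddings. The names `clip_image_encoder`, `mse_loss`, and `lambda_sc` are hypothetical stand-ins.

```python
import jax.numpy as jnp

def semantic_consistency_loss(rendered_emb: jnp.ndarray, target_emb: jnp.ndarray) -> jnp.ndarray:
    """Sketch of the CLIP-based semantic loss: 1 - cosine similarity between the
    CLIP ViT embedding of a rendering from an arbitrary pose and the embedding
    of a ground-truth training view."""
    r = rendered_emb / (jnp.linalg.norm(rendered_emb) + 1e-8)
    t = target_emb / (jnp.linalg.norm(target_emb) + 1e-8)
    return 1.0 - jnp.dot(r, t)

# Hypothetical usage, assuming `clip_image_encoder` wraps a CLIP Vision Transformer
# (e.g. pooled features from transformers' FlaxCLIPVisionModel) and `mse_loss` is the
# usual NeRF photometric loss:
# total_loss = mse_loss + lambda_sc * semantic_consistency_loss(
#     clip_image_encoder(rendered_image), clip_image_encoder(ground_truth_view))
```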
 
## 💎 Experiment Results

### ❗ Rendered images by 8-shot learned Diet-NeRF
+
+ DietNeRF has a strong capacity to generalise to novel and challenging views with EXTREMELY SMALL TRAINING SAMPLES!
+
+ ### CHAIR / HOTDOG / DRUM / LEGO / MATERIALS

<p align="center">
<table>
<tr>
+ <td><img alt="" src="./assets/chair.png" width="400"/></td><td><img alt="" src="./assets/hotdog.png" width="400"/></td><td><img alt="" src="./assets/drum.png" width="400"/></td>
+ <tr>
+ </table></p>
+ <p align="center">
+ <table>
+ <tr>
+ <td><img alt="" src="./assets/lego-8-diet.gif" width="400"/></td><td><img alt="" src="./assets/mic-8-diet.gif" width="400"/></td>
<tr>
</table></p>

+ ### ❗ Rendered GIFs by occluded 14-shot learned NeRF and Diet-NeRF

+ We made an artificial occlusion on the right side of the images (only left-side training poses were picked).
+ The reconstruction quality can be compared in this experiment.
+ Diet-NeRF shows better quality than the original NeRF when the training views are occluded.

+ #### Training poses
+ <img width="1400" src="https://user-images.githubusercontent.com/26036843/126111980-4f332c87-a7f0-42e0-a355-8e77621bbca4.png">

+ #### LEGO
+ <p align="center">
+ <table>
+ <tr>
+ <td><img alt="" src="assets/lego-14-occ-diet_.gif" width="300"/></td><td><img alt="" src="assets/lego-14-occ-nerf_.gif" width="300"/></td>
+ <tr>
+ </table></p>

#### SHIP
<p align="center">
<table>
<tr>
+ <td><img alt="" src="./assets/ship-dietnerf.gif" width="300"/></td><td><img alt="" src="./assets/ship-nerf.gif" width="300"/></td>
<tr>
</table></p>

## 👨‍👧‍👦 Our Teams

}
```

+ This project is based on “Putting NeRF on a Diet”.
```
@misc{jain2021putting,
title={Putting NeRF on a Diet: Semantically Consistent Few-Shot View Synthesis},

Our project started during the HuggingFace X GoogleAI (JAX) Community Week event.
https://discuss.huggingface.co/t/open-to-the-community-community-week-using-jax-flax-for-nlp-cv/7104

+ Thank you to our mentor Suraj and the organizers of the JAX/Flax Community Week!
Our team grew through this community learning experience. It was a wonderful time!

<p align="center"><img width="250" alt="스크린샷 2021-07-04 오후 4 11 51" src="https://user-images.githubusercontent.com/77657524/126369170-5664076c-ac99-4157-bc53-b91dfb7ed7e1.jpeg"></p>

+ Common Computer AI (https://comcom.ai/ko/) sponsored multiple V100 GPUs for our project!
+ Thank you so much for your support!
+ <p align="center"><img width="250" alt="스크린샷" src="./assets/comcom.jpeg"></p>
assets/chair.png CHANGED
assets/comcom.jpeg ADDED
assets/drawing.png CHANGED
assets/drum.png CHANGED
assets/hotdog.png CHANGED
assets/lego-14-occ-diet_.gif ADDED
assets/lego-14-occ-nerf_.gif ADDED
assets/lego-8-diet-image.png ADDED
assets/lego-8-diet.gif ADDED
assets/lego-8-nerf.gif ADDED
assets/materials-8-diet-image.png ADDED
assets/mic-8-diet.gif ADDED
assets/ship-8-diet-coarse.gif ADDED
assets/ship-8-nerf.gif ADDED