Paper: Leaving Reality to Imagination: Robust Classification via Generated Datasets (https://arxiv.org/abs/2302.02503)

Colab Notebook for Data Generation: https://colab.research.google.com/drive/1I2IO8tD_l9JdCRJHOqlAP6ojMPq_BsoR?usp=sharing

All the generated images from the finetuned Stable Diffusion and the pretrained (base) Stable Diffusion are available here: https://drive.google.com/drive/folders/14DJyU_xx018Ir6Cw-mETKw9a0yLtc2NJ?usp=sharing
Finetuning Recipe:
1. We finetune the Stable Diffusion v1.5 model for 1 epoch on the complete ImageNet-1K training dataset, which contains ~1.3M images. The model was finetuned on a single 24GB A5000 GPU, and finetuning took ~1 day.
2. The finetuning code was adopted directly from the Hugging Face Diffusers library: https://github.com/huggingface/diffusers/tree/main/examples/text_to_image.
3. Link to our GitHub code: https://github.com/Hritikbansal/generative-robustness/tree/main/sd_finetune
4. The complete set of finetuning arguments is available here: https://docs.google.com/document/d/17ggIdEuhAS0rhX7gIFp2q6H0JjkpERYFkCLTO_MtdgY/edit?usp=sharing
 
 
Post-finetuning, we repeatedly sample from the generative model to generate 1.3M training and 50K validation images.
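The sampling step above can be sketched in code. This is a minimal sketch, not the paper's implementation: it assumes prompts follow a generic "a photo of a {class}" template and that the 1.3M train / 50K val images are split evenly across the 1,000 ImageNet classes (the actual prompts and per-class counts may differ; the diffusers pipeline call is shown only as a comment since it requires downloading the model):

```python
# Sketch of the post-finetuning sampling plan. Assumptions (not from the
# paper): prompts use a generic "a photo of a {class}" template, and the
# 1.3M train / 50K val images are split evenly across 1,000 classes.

def build_prompts(class_names, template="a photo of a {}"):
    """One text prompt per ImageNet class name."""
    return [template.format(name) for name in class_names]

def per_class_counts(total_train=1_300_000, total_val=50_000, num_classes=1_000):
    """Even per-class quotas: ~1,300 train and 50 val images per class."""
    return total_train // num_classes, total_val // num_classes

def sampling_plan(class_names):
    """Yield (prompt, split, n_images) work items for the generation loop."""
    n_train, n_val = per_class_counts()
    for prompt in build_prompts(class_names):
        yield prompt, "train", n_train
        yield prompt, "val", n_val

if __name__ == "__main__":
    # The actual generation would run each work item through the diffusers
    # pipeline loaded from the finetuned checkpoint (not executed here), e.g.:
    #   from diffusers import StableDiffusionPipeline
    #   pipe = StableDiffusionPipeline.from_pretrained(<finetuned checkpoint>)
    #   images = pipe(prompt, num_images_per_prompt=8).images  # repeat to reach n
    for item in sampling_plan(["tench", "goldfish"]):
        print(item)
```

Repeating the pipeline call per prompt until each class quota is met is what "repeatedly sample" refers to; batching several images per call amortizes the per-call overhead.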
 
 
 
GitHub repo for the paper: https://github.com/Hritikbansal/generative-robustness

Authors: Hritik Bansal (https://sites.google.com/view/hbansal), Aditya Grover (https://aditya-grover.github.io/)