JadenFK committed
Commit f440bc0
1 Parent(s): a54a5ff

Update README.md

Files changed (1): README.md (+32, -0)
README.md CHANGED
@@ -9,3 +9,35 @@ app_file: app.py
  pinned: false
  license: mit
  ---
+
+
+
+ # Erasing Concepts from Diffusion Models
+
+ Project Website [https://erasing.baulab.info](https://erasing.baulab.info) <br>
+ Arxiv Preprint [https://arxiv.org/pdf/2303.07345.pdf](https://arxiv.org/pdf/2303.07345.pdf) <br>
+ Fine-tuned Weights [https://erasing.baulab.info/weights/esd_models/](https://erasing.baulab.info/weights/esd_models/) <br>
+ <div align='center'>
+ <img src='images/applications.png'>
+ </div>
+
+ Motivated by recent advancements in text-to-image diffusion, we study the erasure of specific concepts from a model's weights. While Stable Diffusion has shown promise in producing explicit or realistic artwork, this capability has raised concerns regarding its potential for misuse. We propose a fine-tuning method that can erase a visual concept from a pre-trained diffusion model, given only the name of the style and using negative guidance as a teacher. We benchmark our method against previous approaches that remove sexually explicit content and demonstrate its effectiveness, performing on par with Safe Latent Diffusion and censored training.
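+
+ Concretely, the frozen original model serves as the teacher: its concept-conditioned noise prediction is guided negatively, away from the concept, and the edited model is trained to match that target. Below is a minimal sketch of this objective, not our released training code; `student_unet`, `frozen_unet`, and the embedding arguments are hypothetical stand-ins for a Stable Diffusion noise predictor and text-encoder outputs.
+
+ ```python
+ import torch
+ import torch.nn.functional as F
+
+ def erasure_loss(student_unet, frozen_unet, x_t, t, concept_emb, null_emb, eta=1.0):
+     # Teacher targets come from the frozen original model (no gradients).
+     with torch.no_grad():
+         eps_uncond = frozen_unet(x_t, t, null_emb)      # unconditional prediction
+         eps_concept = frozen_unet(x_t, t, concept_emb)  # concept-conditioned prediction
+         # Negative guidance: steer the target away from the concept direction.
+         target = eps_uncond - eta * (eps_concept - eps_uncond)
+     # The edited model's concept-conditioned prediction is pushed toward the target.
+     eps_student = student_unet(x_t, t, concept_emb)
+     return F.mse_loss(eps_student, target)
+ ```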
+
+ To evaluate artistic style removal, we conduct experiments erasing five modern artists from the network and run a user study to assess human perception of the removed styles. Unlike previous methods, our approach removes concepts from a diffusion model permanently rather than modifying the output at inference time, so it cannot be circumvented even if a user has access to the model weights.
+
+ Given only a short text description of an undesired visual concept and no additional data, our method fine-tunes model weights to erase the targeted concept. It can avoid NSFW content, stop imitation of a specific artist's style, or even erase an entire object class from the model's output, while preserving the model's behavior and capabilities on other topics.
+
+ ## Fine-tuned Weights
+
+ The fine-tuned weights for both the NSFW and art-style erasures are available on our [project page](https://erasing.baulab.info).
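+
+ As a quick usage sketch (not an official loader): the snippet below assumes the downloaded checkpoint is a UNet state dict compatible with Stable Diffusion v1.4, and the filename is a hypothetical placeholder for one fetched from the project page.
+
+ ```python
+ import torch
+ from diffusers import StableDiffusionPipeline
+
+ # Load the base model, then swap in the erased UNet weights.
+ pipe = StableDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4")
+ esd_state = torch.load("esd_vangogh.pt", map_location="cpu")  # hypothetical filename
+ pipe.unet.load_state_dict(esd_state)
+
+ # Prompts for the erased concept should no longer reproduce it.
+ image = pipe("a painting in the style of Van Gogh").images[0]
+ image.save("erased_output.png")
+ ```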
+
+ ## Citing our work
+ The preprint can be cited as follows:
+ ```
+ @article{gandikota2023erasing,
+   title={Erasing Concepts from Diffusion Models},
+   author={Rohit Gandikota and Joanna Materzy\'nska and Jaden Fiotto-Kaufman and David Bau},
+   journal={arXiv preprint arXiv:2303.07345},
+   year={2023}
+ }
+ ```