---
title: Ablating Concepts in Text-to-Image Diffusion Models
emoji: 💡
colorFrom: indigo
colorTo: gray
sdk: gradio
sdk_version: 3.21.0
app_file: app.py
pinned: false
license: mit
---

# Ablating Concepts in Text-to-Image Diffusion Models

**Project website**: https://www.cs.cmu.edu/~concept-ablation/
**arXiv preprint**: https://arxiv.org/abs/2303.13516

Large-scale text-to-image diffusion models can generate high-fidelity images with powerful compositional ability. However, these models are typically trained on an enormous amount of Internet data, often containing copyrighted material, licensed images, and personal photos. Furthermore, they have been found to replicate the style of various living artists or memorize exact training samples. How can we remove such copyrighted concepts or images without retraining the model from scratch?

We propose an efficient method of ablating concepts in the pretrained model, i.e., preventing the generation of a target concept. Our algorithm learns to match the image distribution for a given target style, instance, or text prompt we wish to ablate to the distribution corresponding to an anchor concept, e.g., Grumpy Cat to Cats.
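The matching objective above can be sketched with a toy stand-in: fine-tune a model so that its prediction for the *target* concept matches a frozen copy's prediction for the *anchor* concept. Everything below (the linear "denoiser", dimensions, learning rate, and embeddings) is illustrative, not the paper's actual architecture or training setup.

```python
import numpy as np

# Toy sketch of the concept-ablation objective. A linear map stands in
# for the diffusion model's noise predictor; the fine-tuned weights W are
# trained so that the prediction on the target concept embedding matches
# the FROZEN model's prediction on the anchor concept embedding.

rng = np.random.default_rng(0)
dim = 8

W_frozen = rng.normal(size=(dim, dim))  # pretrained, frozen weights
W = W_frozen.copy()                     # weights being fine-tuned

c_target = rng.normal(size=dim)         # e.g. embedding of "Grumpy Cat"
c_anchor = rng.normal(size=dim)         # e.g. embedding of "cat"

def loss(W):
    # Squared error between the fine-tuned prediction on the target
    # prompt and the frozen prediction on the anchor prompt.
    diff = W @ c_target - W_frozen @ c_anchor
    return float(diff @ diff)

lr = 0.01
for _ in range(500):
    # Gradient of the squared error with respect to W is an outer product.
    diff = W @ c_target - W_frozen @ c_anchor
    W -= lr * np.outer(diff, c_target)

final = loss(W)  # after training, the target maps to the anchor's output
```

After this loop, querying the fine-tuned model with the target concept yields (approximately) what the frozen model produces for the anchor concept, which is the distribution-matching idea behind ablation in the full diffusion setting.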

## Demo vs. GitHub

This demo uses different hyperparameters than the GitHub version to speed up training.

## Running locally

1. Create an environment with the packages listed in `requirements.txt`.
2. Run `python app.py`.
3. Open the application in a browser at http://127.0.0.1:7860/.
4. Train, evaluate, and save models.

## Citing our work

The preprint can be cited as follows:

```bibtex
@inproceedings{kumari2023conceptablation,
  author    = {Kumari, Nupur and Zhang, Bingliang and Wang, Sheng-Yu and Shechtman, Eli and Zhang, Richard and Zhu, Jun-Yan},
  title     = {Ablating Concepts in Text-to-Image Diffusion Models},
  booktitle = {ICCV},
  year      = {2023},
}
```