Diffusion Course documentation

DreamBooth Hackathon 🏆


📣 The hackathon is now over and the winners have been announced on Discord. You are still welcome to train models and submit them to the leaderboard, but we won’t be offering prizes or certificates at this point in time.

Welcome to the DreamBooth Hackathon! This is a community event where you’ll personalise a Stable Diffusion model by fine-tuning it on a handful of your own images. To do so, you’ll use a powerful technique called DreamBooth, which allows one to implant a subject (e.g. your pet or favourite dish) into the output domain of the model such that it can be synthesized with a unique identifier in the prompt.
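To make the "unique identifier" idea concrete: DreamBooth prompts typically pair a rare token (a common choice in DreamBooth examples is `sks`) with the subject's class noun, so the fine-tuned model can render your subject in new contexts. The helper below is a hypothetical sketch for illustration only, not part of any library:

```python
# Hypothetical helper illustrating DreamBooth-style prompting: a rare
# identifier token (e.g. "sks") is paired with the subject's class noun,
# and the surrounding prompt supplies the new context.
def dreambooth_prompt(identifier: str, class_noun: str, context: str) -> str:
    return f"a photo of {identifier} {class_noun} {context}"

print(dreambooth_prompt("sks", "dog", "swimming in the Acropolis"))
# → a photo of sks dog swimming in the Acropolis
```

At inference time you would pass a prompt like this to your fine-tuned Stable Diffusion pipeline, swapping the context to place your subject wherever you like.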

The competition consists of 5 themes, each of which collects models belonging to one of the following categories:

  • Animal 🐨: Use this theme to generate images of your pet or favourite animal hanging out in the Acropolis, swimming, or flying in space.
  • Science 🔬: Use this theme to generate cool synthetic images of galaxies, proteins, or any domain of the natural and medical sciences.
  • Food 🍔: Use this theme to tune Stable Diffusion on your favourite dish or cuisine.
  • Landscape 🏔: Use this theme to generate beautiful landscapes of your favourite mountain, lake, or garden.
  • Wildcard 🔥: Use this theme to go wild and create Stable Diffusion models for any category of your choosing!

We’ll be giving out prizes to the top 3 most liked models per theme, and you’re encouraged to submit as many models as you want!

Getting started

Follow the steps below to take part in this event:

  1. Join the Hugging Face Discord server and check out the #dreambooth-hackathon channel to stay up to date with the event.
  2. Launch and run the DreamBooth notebook to train your models by clicking on one of the links below. Make sure you select the GPU runtime in each platform to ensure your models train fast!
| Notebook | Colab | Kaggle | Gradient | Studio Lab |
| --- | --- | --- | --- | --- |
| DreamBooth Training | Open In Colab | Kaggle | Gradient | Open In SageMaker Studio Lab |

Note 👋: The DreamBooth notebook uses the CompVis/stable-diffusion-v1-4 checkpoint as the Stable Diffusion model to fine-tune. However, you are totally free to use any Stable Diffusion checkpoint that you want - you’ll just have to adjust the code to load the appropriate components and the safety checker (if it exists). Some interesting models to fine-tune include:

Evaluation & Leaderboard

To be in the running for the prizes, push one or more DreamBooth models to the Hub with the dreambooth-hackathon tag in the model card (example). This tag is added automatically by the DreamBooth notebook, but you'll need to add it yourself if you're running your own scripts.
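If you're training with your own scripts rather than the notebook, the tag lives in the YAML metadata block at the top of the model card's `README.md`. A minimal sketch is shown below; only the `dreambooth-hackathon` tag is required by the event, and the license and theme tag shown are assumptions you should adjust for your own submission:

```yaml
---
# Model card metadata (README.md front matter); only the
# dreambooth-hackathon tag is required for the leaderboard.
license: creativeml-openrail-m  # assumption: matches the SD v1-4 base licence
tags:
- dreambooth-hackathon
- animal  # assumption: your chosen theme
---
```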

Models are ranked by the number of likes they receive, and you can track your model's position on the hackathon's leaderboard.

Timeline

  • December 21, 2022 - Start date
  • December 31, 2022 - Colab Pro registration deadline
  • January 22, 2023 - Final submissions deadline (closing of the leaderboard)
  • January 23-27, 2023 - Announce winners of each theme

All deadlines are at 11:59 PM UTC on the corresponding day unless otherwise noted.

Prizes

We will be awarding 3 prizes per theme, where winners are determined by the models with the most likes on the leaderboard:

  • 1st place winner
  • 2nd place winner
  • 3rd place winner

We will also provide a certificate of completion to all participants who submit at least 1 DreamBooth model to the hackathon 🔥.

Compute

Google Colab will be sponsoring this event by providing free Colab Pro credits to 100 participants (selected randomly). We'll be giving out the credits in January 2023, and you have until December 31 to register. To register for these credits, please fill out this form.

FAQ

What data is allowed for fine-tuning?

You can use any images that you own or that are covered by a permissive license. If you'd like to submit a model trained on faces (e.g. as a Wildcard submission), we recommend using your own likeness. Ideally, use your own data where you can - we'd love to see your pets or favourite local landscape features, and we suspect the likes and prizes will tend to go to those who do something nice and personal 😁.

Are other fine-tuning techniques like Textual Inversion allowed?

Absolutely! Although this hackathon is focused on DreamBooth, you’re welcome (and encouraged) to experiment with other fine-tuning techniques. This also means you can use whatever frameworks, code, or services that help you make delightful models for the community to enjoy 🥰.