## Problems and Proposed Solutions

### 1. Language Drift

Fine-tuning a diffusion model on a small set of subject images causes it to lose the ability to generate generic images of the same class; the class-specific prior is forgotten.

**Solution 1 (DreamBooth).** Supervise the model with its own generated samples by adding a prior-preservation loss at a relative weight. However, this weight (the ratio of prior preservation) is not easy to determine.

**Solution 2 (ours).** A method that costs considerable GPU time: during regular training, we mix in images auto-generated by the current model from single-word prompts, with words drawn at random from a pre-estimated word-frequency list at a certain ratio (we chose our word list from Danbooru Tags). To avoid overfitting, each auto-generated image is used only once. A sketch of both schemes follows.
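Below is a minimal sketch contrasting the two approaches, assuming a PyTorch training loop. `prior_weight`, `generate_fn`, `reg_ratio`, and `OneShotRegularizer` are illustrative names invented for this sketch, not identifiers from DreamBooth or from our code.

```python
import random

import torch


def dreambooth_loss(instance_loss: torch.Tensor,
                    prior_loss: torch.Tensor,
                    prior_weight: float) -> torch.Tensor:
    """Solution 1: add the prior-preservation term at a relative weight.
    Choosing `prior_weight` well is the difficulty noted above."""
    return instance_loss + prior_weight * prior_loss


class OneShotRegularizer:
    """Solution 2: draw single-word prompts according to a pre-estimated
    word-frequency list and generate regularization images with the
    *current* model; each generated image is consumed exactly once."""

    def __init__(self, word_freqs: dict[str, float], generate_fn,
                 batch_size: int = 8):
        # `word_freqs` maps a word (e.g. a Danbooru tag) to its frequency;
        # `generate_fn(prompt)` stands in for sampling the current model.
        self.words = list(word_freqs)
        self.weights = list(word_freqs.values())
        self.generate_fn = generate_fn
        self.batch_size = batch_size
        self.buffer: list[tuple[str, object]] = []

    def next_sample(self):
        if not self.buffer:
            # Refilling the buffer is the GPU-expensive part of this scheme.
            for _ in range(self.batch_size):
                word = random.choices(self.words,
                                      weights=self.weights, k=1)[0]
                self.buffer.append((word, self.generate_fn(word)))
        # pop() guarantees an auto-generated image is never reused,
        # which is how overfitting to the regularization set is avoided.
        return self.buffer.pop()
```

In the training loop, regularization samples would then be interleaved with real ones at the chosen ratio, e.g. `if random.random() < reg_ratio: word, image = regularizer.next_sample()`.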