{ "cells": [ { "attachments": {}, "cell_type": "markdown", "id": "d874e23f-9631-48e0-b635-84e7280bf07b", "metadata": {}, "source": [ "# Stable Diffusion Training / Inference Tutorial\n", "\n", "### Note:\n", "Currently, this notebook must be run in a NeMo container (> 24.01). An example command to launch the container:\n", "\n", "```\n", "docker run --gpus all -it --rm -v :/opt/NeMo --shm-size=8g \\\n", " -p 8888:8888 --ulimit memlock=-1 --ulimit \\\n", " stack=67108864 \n", "```\n", "\n", "## Introduction\n", "\n", "This notebook illustrates how to train and perform inference using Stable Diffusion with the NeMo Toolkit. For the sake of brevity, we've chosen to use Stable Diffusion as an example to demonstrate the foundational process of training and inferencing with Text2Img models. However, you can apply the same approach to other foundational Text2Img models, such as Imagen.\n", "\n", "The implementation of Stable Diffusion is based on [High-Resolution Image Synthesis with Latent Diffusion Models](https://arxiv.org/abs/2112.10752).\n", "\n", "This tutorial will guide you through the following topics:\n", "\n", "1. Training a Stable Diffusion model.\n", "2. Performing inference with the trained model.\n", "\n", "## Datasets\n", "\n", "Please refer to [Dataset Tutorial](https://github.com/NVIDIA/NeMo/blob/main/tutorials/multimodal/Multimodal%20Data%20Preparation.ipynb) for how to prepare a training dataset for Stable diffusion.\n", "\n", "For a pre-cached Stable Diffusion dataset, each webdataset tar file should, at a minimum, include the pickle files that store the pre-cached image and text features:\n", "\n", "```\n", "t0_r0_0.tar\n", "|---- 0000.pickle\n", "|---- 0001.pickle\n", "...\n", "```\n", "\n", "For non-precached Stable Diffusion dataset, each webdataset tar file should contain the raw texts and corresponding images:\n", "\n", "```\n", "t0_r0_0.tar\n", "|---- 0000.jpg\n", "|---- 0000.txt\n", "|---- 0001.jpg\n", "|---- 0001.txt\n", "...\n", "```\n", "\n", "## Encoders Preparation\n", "\n", "Depending on whether you precache the dataset, you might also need to first download the image and/or text encoders.\n", "\n", "### Option 1: Training on Non-Precached Dataset (Use Encoders During Training)\n", "\n", "#### A. Prepare VAE\n", "To download the default VAE for Stable Diffusion:\n" ] }, { "cell_type": "code", "execution_count": null, "id": "730cd137-0fce-4bab-8ac7-219e5c55faf2", "metadata": { "scrolled": true }, "outputs": [], "source": [ "! wget https://huggingface.co/CompVis/stable-diffusion-v1-4/resolve/main/vae/diffusion_pytorch_model.bin\n", "! mkdir -p /ckpts\n", "! mv diffusion_pytorch_model.bin /ckpts/vae.bin" ] }, { "attachments": {}, "cell_type": "markdown", "id": "fef8b245-7cee-4048-a9ec-3ada90432a89", "metadata": {}, "source": [ "The above command will download the default VAE weights from HuggingFace and save it to `/ckpts/vae.bin`.\n", "\n", "**Note**: if you want to customize the saved location, make sure it is also reflected in your training config.\n", "#### B. Prepare Text Encoder\n", "For the text encoder used in Stable Diffusion, you can either use [HuggingFace CLIP ViT-L/14 model](https://huggingface.co/openai/clip-vit-large-patch14) or use NeMo's CLIP-ViT. 
 { "attachments": {}, "cell_type": "markdown", "id": "e52057c4-83ee-4f21-a11c-11a5c367a0b8", "metadata": {}, "source": [ "Alternatively, you can use the CLIP model in `.nemo` format. This can be achieved by using the provided NeMo script to download and convert the CLIP model via the following command:" ] },
 { "cell_type": "code", "execution_count": null, "id": "ada77920-06f5-43f3-bb26-82d9daabde8f", "metadata": {}, "outputs": [], "source": [ "! python /opt/NeMo/examples/multimodal/vision_language_foundation/clip/convert_external_clip_to_nemo.py \\\n", "    --arch ViT-L-14 \\\n", "    --version openai \\\n", "    --hparams_file /opt/NeMo/examples/multimodal/vision_language_foundation/clip/conf/megatron_clip_VIT-L-14.yaml \\\n", "    --nemo_file /ckpts/openai.nemo" ] },
 { "attachments": {}, "cell_type": "markdown", "id": "34a385ff-f8ff-4e64-bd6f-b814be388598", "metadata": {}, "source": [ "When using the `.nemo` CLIP ViT model, you can use the default `cond_stage_config`:\n", "\n", "```\n", "cond_stage_config:\n", "  _target_: nemo.collections.multimodal.modules.stable_diffusion.encoders.modules.FrozenMegatronCLIPEmbedder\n", "  restore_from_path: /ckpts/openai.nemo\n", "  device: cuda\n", "  freeze: True\n", "  layer: \"last\"\n", "```" ] },
 { "attachments": {}, "cell_type": "markdown", "id": "8854eb7a-e822-43f6-a1d5-12357049485a", "metadata": {}, "source": [ "\n", "### Option 2: Training on Precached Dataset (Training UNet Only)\n", "\n", "When using a precached dataset (please refer to the [Dataset Tutorial](https://github.com/NVIDIA/NeMo/blob/main/tutorials/multimodal/Multimodal%20Data%20Preparation.ipynb) for details), the text and image features are stored as key-value pairs in each `.pickle` file:\n", "\n", "```\n", "{\n", "  image_key: torch.Tensor(),\n", "  text_key: torch.Tensor(),\n", "}\n", "```\n", "\n", "Make sure that in the training config, `cond_stage_key` is associated with `text_key` and `first_stage_key` is associated with `image_key`.\n" ] },
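 { "attachments": {}, "cell_type": "markdown", "id": "f3a2b1c0-7d5e-4e0c-9f30-4b5c6d7e8f90", "metadata": {}, "source": [ "Before editing the training config, it can be helpful to peek inside one pre-cached shard to confirm which keys the pickle files actually use. The sketch below relies only on the Python standard library; the shard path follows the example layout shown earlier and should be replaced with one of your own tar files." ] },
 { "cell_type": "code", "execution_count": null, "id": "a4b3c2d1-8e6f-4f1d-8041-5c6d7e8f9012", "metadata": {}, "outputs": [], "source": [ "# Optional: inspect one pre-cached webdataset shard to confirm the key names that\n", "# `model.first_stage_key` / `model.cond_stage_key` must match in the training config.\n", "# The shard path below is only an example, point it at one of your own tar files.\n", "import pickle\n", "import tarfile\n", "\n", "shard_path = \"/datasets/precached/t0_r0_0.tar\"  # example path (see layout above)\n", "\n", "with tarfile.open(shard_path) as tar:\n", "    member = next(m for m in tar.getmembers() if m.name.endswith(\".pickle\"))\n", "    sample = pickle.load(tar.extractfile(member))  # dict of {key: torch.Tensor}\n", "\n", "# Print each key with the shape of the cached feature tensor stored under it.\n", "for key, value in sample.items():\n", "    print(key, getattr(value, \"shape\", type(value)))" ] },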
 { "attachments": {}, "cell_type": "markdown", "id": "5762427b-f60c-4dfd-8318-e55771b25354", "metadata": {}, "source": [ "## Model Config Setup\n", "\n", "Now we will begin setting up the config file needed for Stable Diffusion training. We will use [sd_train.yaml]() as the template.\n", "\n", "1. Modify `model.data.train.dataset_path` so that it lists all the webdataset info files you want to train on.\n", "2. Modify `model.data.webdataset.local_root_path` to point to your dataset path.\n", "3. Make sure the VAE path `model.first_stage_config.from_pretrained` is adjusted if you are using a non-precached dataset.\n", "4. Make sure the text encoder config is correct (detailed above).\n", "5. Configure `exp_manager.exp_dir` for the experiment save directory.\n", "6. Configure `exp_manager.wandb_logger_kwargs` and/or `exp_manager.create_tensorboard_logger` if needed." ] },
 { "attachments": {}, "cell_type": "markdown", "id": "70f858b3-f7d5-4678-b380-80582337bc23", "metadata": {}, "source": [ "**Note**: Please refer to the NeMo Toolkit Developer Guide's Stable Diffusion page for more details on in-depth customizations, including all available optimizations.\n", "\n", "## Training\n", "\n", "Once everything is set up, training Stable Diffusion is as simple as running:\n" ] },
 { "cell_type": "code", "execution_count": null, "id": "589e3a14-c881-4a56-b2bd-370653059dfc", "metadata": {}, "outputs": [], "source": [ "! torchrun /opt/NeMo/examples/multimodal/text_to_image/stable_diffusion/sd_train.py trainer.max_steps=100 model.data.synthetic_data=True" ] },
 { "attachments": {}, "cell_type": "markdown", "id": "892d72dd-c4d7-4ca4-a948-168e187af65c", "metadata": {}, "source": [ "Intermediate checkpoints (saved during training) and the final checkpoint will be saved to the `exp_manager.exp_dir` folder. Note that we use synthetic data here for demonstration purposes." ] },
 { "attachments": {}, "cell_type": "markdown", "id": "087c8b9a-92c3-43d3-86a3-bf7e848dfbd2", "metadata": {}, "source": [ "## Inference\n", "\n", "Stable Diffusion inference needs a trained NeMo Stable Diffusion checkpoint, along with both the image encoder (VAE) and the text encoder (CLIP). The checkpoint can be either a fully trained `.nemo` checkpoint or an intermediate checkpoint from training (typically in `.ckpt` format); both formats can be used for inference. For information on downloading the encoders, please refer to the previous section.\n", "\n", "### Inference Config Setup\n", "\n", "Now we will begin setting up the config file needed for Stable Diffusion inference. We will use [sd_infer.yaml]() as the template.\n", "\n", "We generally use [Classifier Free Guidance](https://arxiv.org/abs/2207.12598) for better visual quality, which can be set via `infer.unconditional_guidance_scale`.\n", "\n", "NeMo Stable Diffusion supports multiple samplers, which can be selected via `infer.sampler_type`; please refer to the developer guide for more details.\n", "\n", "Inference supports a batch of text prompts, which can be set at `infer.prompts`. You can also generate a configurable number of images per prompt by setting `infer.num_images_per_prompt`. Generated images will be saved to `infer.out_path`.\n", "\n", "You will also need to set the model checkpoint path at `model.restore_from_path`.\n", "\n", "### Running the Inference\n", "\n", "Once everything is set up, Stable Diffusion inference is as simple as running:" ] },
 { "cell_type": "code", "execution_count": null, "id": "9e676c5d-d711-489e-8ab7-3ee20046d88d", "metadata": {}, "outputs": [], "source": [ "! torchrun /opt/NeMo/examples/multimodal/text_to_image/stable_diffusion/sd_infer.py model.restore_from_path='/opt/NeMo/tutorials/multimodal/nemo_experiments/stable-diffusion-train/checkpoints/stable-diffusion-train.nemo'" ] }
 ], "metadata": { "kernelspec": { "display_name": "Python 3 (ipykernel)", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.8.10" } }, "nbformat": 4, "nbformat_minor": 5 }