sayakpaul committed
Commit edd85f6
1 Parent(s): 7f1b698

Update README.md

Files changed (1)
  1. README.md +2 -2
README.md CHANGED
@@ -88,11 +88,11 @@ For more details, check out the official documentation of [`StableDiffusionXLCont
 ### Training
 
 Our training script was built on top of the official training script that we provide [here](https://github.com/huggingface/diffusers/blob/main/examples/controlnet/README_sdxl.md).
-You can refer to [this script](https://github.com/patil-suraj/muse-experiments/blob/f71e7e79af24509ddb4e1b295a1d0ef8d8758dc9/ctrlnet/train_controlnet_webdataset.py) for full disclosure.
+You can refer to [this script](https://github.com/huggingface/diffusers/blob/7b93c2a882d8e12209fbaeffa51ee2b599ab5349/examples/research_projects/controlnet/train_controlnet_webdataset.py) for full disclosure.
 
 * This checkpoint does not perform distillation. We just use a smaller ControlNet initialized from the SDXL UNet. We
 encourage the community to try and conduct distillation too. This resource might be of help in [this regard](https://huggingface.co/blog/sd_distillation).
-* To learn more about how the ControlNet was initialized, refer to [this code block](https://github.com/patil-suraj/muse-experiments/blob/f71e7e79af24509ddb4e1b295a1d0ef8d8758dc9/ctrlnet/train_controlnet_webdataset.py#L1020C1-L1042C36).
+* To learn more about how the ControlNet was initialized, refer to [this code block](https://github.com/huggingface/diffusers/blob/7b93c2a882d8e12209fbaeffa51ee2b599ab5349/examples/research_projects/controlnet/train_controlnet_webdataset.py#L981C1-L999C36).
 * It does not have any attention blocks.
 * The model works pretty well on most conditioning images. But for more complex conditionings, the bigger checkpoints might be better. We are still working on improving the quality of this checkpoint and looking for feedback from the community.
 * We recommend playing around with the `controlnet_conditioning_scale` and `guidance_scale` arguments for potentially better
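The bullets above describe initializing a smaller ControlNet from the SDXL UNet and removing its attention blocks. As a reading aid, the snippet below is a minimal sketch of the generic starting point using the public `diffusers` API; the additional slimming applied for this checkpoint lives only in the linked code block and is not reproduced here.

```python
# Minimal sketch: initialize a ControlNet from the SDXL UNet with diffusers.
# The released "small" checkpoint additionally shrinks the network and drops
# the attention blocks, which this snippet does not reproduce.
import torch
from diffusers import ControlNetModel, UNet2DConditionModel

unet = UNet2DConditionModel.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", subfolder="unet", torch_dtype=torch.float16
)

# `from_unet` copies the UNet's configuration and the weights of the matching
# blocks into a freshly created ControlNet.
controlnet = ControlNetModel.from_unet(unet)
print(f"{sum(p.numel() for p in controlnet.parameters()) / 1e6:.1f}M ControlNet parameters")
```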
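The last bullet recommends experimenting with `controlnet_conditioning_scale` and `guidance_scale`. Below is a hedged usage sketch with `StableDiffusionXLControlNetPipeline`; the checkpoint repo id and the conditioning image path are placeholders, not values taken from this commit.

```python
# Usage sketch: tune `controlnet_conditioning_scale` and `guidance_scale`.
import torch
from diffusers import ControlNetModel, StableDiffusionXLControlNetPipeline
from diffusers.utils import load_image

controlnet = ControlNetModel.from_pretrained(
    "diffusers/controlnet-canny-sdxl-1.0-small",  # placeholder: substitute this repo's id
    torch_dtype=torch.float16,
)
pipe = StableDiffusionXLControlNetPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

conditioning_image = load_image("path/to/conditioning_image.png")  # placeholder

image = pipe(
    "a photorealistic photo of a bird, high quality",
    image=conditioning_image,
    controlnet_conditioning_scale=0.5,  # how strongly the conditioning image steers generation
    guidance_scale=7.5,                 # classifier-free guidance strength
).images[0]
image.save("output.png")
```

Lower `controlnet_conditioning_scale` lets the prompt dominate over the conditioning image, while a higher `guidance_scale` increases prompt adherence at the cost of diversity.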