Not able to download it #38
opened by iffishells
AFAIK GGUF quantization is not supported in diffusers yet. It most likely wouldn't work with the pipeline either, since these model files only contain the unet without the VAE or text encoder, so you'd have to initialize the pipeline from its individual components, or initialize it from the original base model and pass in the already initialized unet.
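In case it helps, here's a rough sketch of that second option (building the pipeline from the original base model and swapping in a separately loaded unet). The repo ids and pipeline class are placeholders since the exact model isn't named here, and this assumes non-quantized unet weights in diffusers format; it won't load the GGUF files from this repo.

```python
import torch
from diffusers import StableDiffusionXLPipeline, UNet2DConditionModel

# Load just the unet (placeholder repo id; must be diffusers-format weights, not GGUF)
unet = UNet2DConditionModel.from_pretrained(
    "some-user/some-unet-only-checkpoint",
    torch_dtype=torch.float16,
)

# Build the rest of the pipeline (VAE, text encoders, scheduler) from the
# original base model and pass in the already initialized unet
pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    unet=unet,
    torch_dtype=torch.float16,
).to("cuda")

image = pipe("an astronaut riding a horse").images[0]
image.save("out.png")
```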
This seems to be the tracking issue for diffusers GGUF support, in case you want to follow it: https://github.com/huggingface/diffusers/issues/9487