# sd-webui-model-converter
Model conversion extension for [AUTOMATIC1111's stable diffusion webui](https://github.com/AUTOMATIC1111/stable-diffusion-webui)

## Features
- convert to precisions: fp32, fp16, bf16
- prune model: no-ema, ema-only
- convert checkpoint format: ckpt, safetensors
- convert/copy/delete any part of the model: unet, text encoder (CLIP), vae
- Fix CLIP
- Force CLIP position_id to int64 before convert
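The precision and pruning features above can be sketched as a single pass over the checkpoint's state dict. This is a minimal illustration, not the extension's actual code; the `model_ema.` key prefix follows the Stable Diffusion 1.x checkpoint layout and is an assumption here:

```python
import torch

def prune_and_convert(state_dict):
    """Sketch: drop EMA weights (keys under "model_ema.", an SD 1.x
    convention) and cast floating-point tensors to fp16; non-float
    tensors (e.g. int64 position_ids) are passed through unchanged."""
    out = {}
    for key, tensor in state_dict.items():
        if key.startswith("model_ema."):
            continue  # no-ema pruning: skip EMA copies of the weights
        if isinstance(tensor, torch.Tensor) and tensor.is_floating_point():
            tensor = tensor.to(torch.float16)  # precision conversion
        out[key] = tensor
    return out
```

Saving the result as `.ckpt` (via `torch.save`) or `.safetensors` (via `safetensors.torch.save_file`) covers the format-conversion feature.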
### Fix CLIP
Sometimes the CLIP position_id becomes incorrect due to model merging; Anything-v3 is one example.
This option resets the CLIP position_ids to `torch.Tensor([list(range(77))]).to(torch.int64)`.
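A minimal sketch of that reset, assuming the SD 1.x key name for the position_id tensor (the key here is an assumption, not taken from the extension's source):

```python
import torch

# Assumed SD 1.x location of the CLIP position_id tensor.
POSITION_ID_KEY = "cond_stage_model.transformer.text_model.embeddings.position_ids"

def fix_clip(state_dict):
    """Sketch: overwrite the (possibly corrupted) position_id tensor
    with the canonical int64 range 0..76."""
    if POSITION_ID_KEY in state_dict:
        state_dict[POSITION_ID_KEY] = torch.tensor(
            [list(range(77))], dtype=torch.int64
        )
    return state_dict
```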
### Force CLIP position_id to int64 before convert
If you use this extension to convert a model with an incorrect CLIP to fp16, the CLIP position_id may lose precision during the conversion, which can coincidentally correct the offset.

If you do not want the offset corrected as a side effect (the correction alters the model, and even an accurate fix is not what everyone prefers, right? :P),
you can use this option. It forces the CLIP position_id to int64 before conversion, retaining the incorrect CLIP.
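The mechanism can be sketched as follows: casting the position_id tensor to int64 first truncates any fractional drift in place, so the (possibly wrong) values are frozen exactly before the fp16 pass can round them. The key name is the same SD 1.x assumption as above:

```python
import torch

# Assumed SD 1.x location of the CLIP position_id tensor.
POSITION_ID_KEY = "cond_stage_model.transformer.text_model.embeddings.position_ids"

def convert_keep_clip(state_dict):
    """Sketch: force position_ids to int64 before the fp16 cast so the
    incorrect values survive the conversion unchanged."""
    out = {}
    for key, tensor in state_dict.items():
        if key == POSITION_ID_KEY:
            # int64 cast truncates toward zero, preserving any offset
            # instead of letting fp16 rounding "repair" it.
            out[key] = tensor.to(torch.int64)
        elif tensor.is_floating_point():
            out[key] = tensor.to(torch.float16)
        else:
            out[key] = tensor
    return out
```

By contrast, without this option a drifted value such as `40.9999` stored in float32 could round to `41.0` in fp16, silently "fixing" the tensor.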