# Summary of Stable Diffusion embedding format
This file is a quick reference for Stable Diffusion embedding file formats.
Note: several files in this repo have "embedding" in their names, but they cannot be used as Stable Diffusion embeddings.
I do include some tools, such as *generate-embedding.py* and *generate-embeddingXL.py*, that are intended
to explore the embedding file formats actually used by inference tools. Therefore, I'm taking some time to document
the little I know about the format of those files.
## Stable Diffusion v1.5
Note that SD 1.5 has a different embedding format than SDXL, and within SD 1.5 there are two different formats.
### SD 1.5 pickletensor embed format
I have observed that .pt embeddings have a dict-of-dicts type format. It looks something like this:

```
{
    "string_to_token": {"doesntmatter": 265},  # I don't know why 265, but it usually is
    "string_to_param": {"doesntmatter": tensor([][768])},
    "name": *string*,
    "step": *string*,
    "sd_checkpoint": *string*,
    "sd_checkpoint_name": *string*
}
```
(Note that *string* can be None)
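As a minimal sketch, the structure above can be written and read back with plain `torch.save`/`torch.load`. The token name here is hypothetical, and the round-trip goes through an in-memory buffer rather than a real .pt file:

```python
import io
import torch

# Hypothetical trigger word; 768 is SD 1.5's CLIP text-encoder width.
token = "myconcept"

embed = {
    "string_to_token": {token: 265},
    "string_to_param": {token: torch.zeros(1, 768)},
    "name": token,
    "step": None,
    "sd_checkpoint": None,
    "sd_checkpoint_name": None,
}

# Round-trip through an in-memory buffer instead of a file on disk.
buf = io.BytesIO()
torch.save(embed, buf)
buf.seek(0)
loaded = torch.load(buf, weights_only=False)
print(loaded["string_to_param"][token].shape)  # torch.Size([1, 768])
```

Note that `torch.load` on a .pt embedding unpickles arbitrary Python objects, so only load files you trust (newer torch versions default to `weights_only=True` for exactly this reason).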
### SD 1.5 safetensor embed format
The ones I have seen have a much simpler format; it is trivial compared to the .pt format:

```
{ "emb_params": Tensor([][768]) }
```
According to https://github.com/Stability-AI/ModelSpec?tab=readme-ov-file
there is supposed to be metadata embedded in the safetensors format, but I haven't found a clean way to read it yet.
Expected standard slots for metadata info are:

```
"modelspec.title": "(name for this embedding)",
"modelspec.architecture": "stable-diffusion-v1/textual-inversion",
"modelspec.thumbnail": "(data:image/jpeg;base64,/9jxxxxxxxxx)"
```
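One stdlib-only way to get at that metadata, assuming a well-formed file: the safetensors container starts with an 8-byte little-endian header length followed by a JSON header, and any metadata sits under the `__metadata__` key. A sketch:

```python
import json
import struct


def read_safetensors_metadata(path):
    """Return the __metadata__ dict from a .safetensors file, or {}.

    Format: first 8 bytes = little-endian u64 giving the JSON header
    length, then that many bytes of JSON.
    """
    with open(path, "rb") as f:
        (header_len,) = struct.unpack("<Q", f.read(8))
        header = json.loads(f.read(header_len))
    return header.get("__metadata__", {})
```

If the `safetensors` package is installed, `safe_open(path, framework="pt").metadata()` should return the same dict without hand-parsing the header.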
## SDXL embed format (safetensor)
This has an actual spec at:
https://huggingface.co/docs/diffusers/using-diffusers/textual_inversion_inference
But it's pretty simple.
Summary:

```
{
    "clip_l": Tensor([][768]),
    "clip_g": Tensor([][1280])
}
```