---
pipeline_tag: zero-shot-image-classification
---
Adds the missing layers from [openai/clip-vit-large-patch14-336](https://huggingface.co/openai/clip-vit-large-patch14-336/blob/32aef857e655710d7c25515da6decdb4b4026f44/model.safetensors) to [microsoft/LLM2CLIP-Openai-L-14-336](https://huggingface.co/microsoft/LLM2CLIP-Openai-L-14-336/blob/94ebb8b48c9514af2d32c83b168fa863d1227323/model.safetensors) so that it can be loaded in ComfyUI.
```py
import torch
from safetensors.torch import load_file, save_file

def newclip(original_path, new_path, combined_path):
    # Load both checkpoints as state dicts
    original = load_file(original_path)
    new = load_file(new_path)
    combined = {}
    # Prefer the LLM2CLIP weights; fall back to the original CLIP
    # weights for any layers the LLM2CLIP checkpoint is missing
    for key in original.keys():
        if key in new.keys():
            combined[key] = new[key]
        else:
            combined[key] = original[key]
    save_file(combined, combined_path)

newclip(
    "./original-clip-l.safetensors",
    "./new-llm2clip-clip-l.safetensors",
    "./combined.safetensors"
)
```
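
As a quick sanity check (a minimal sketch, assuming the three files above exist at those paths), you can confirm that the combined checkpoint contains every key from the original CLIP model and that the overlapping tensors were taken from the LLM2CLIP checkpoint:

```py
import torch
from safetensors.torch import load_file

original = load_file("./original-clip-l.safetensors")
new = load_file("./new-llm2clip-clip-l.safetensors")
combined = load_file("./combined.safetensors")

# The combined file should carry exactly the original CLIP's key set
assert set(combined.keys()) == set(original.keys())

# Shared keys should hold LLM2CLIP weights, the rest the original weights
for key in combined:
    source = new if key in new else original
    assert torch.equal(combined[key], source[key])

print(f"{len(combined)} tensors, {sum(1 for k in combined if k in new)} taken from LLM2CLIP")
```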