ppbrown committed on
Commit
d1c7d8d
1 Parent(s): 40cd7fc

standalone "Make an embedding file for SD", but non-conventional


Note: this currently does NOT PERFORM AS EXPECTED.
Making an embedding from the word "cat" does not give you consistent images of cats.
It's more like a "suggestion of cats".

Files changed (1)
  1. generate-embedding.py +73 -0
generate-embedding.py ADDED
@@ -0,0 +1,73 @@
+ #!/usr/bin/env python
+
+ """ Work in progress
+ NB: This is COMPLETELY DIFFERENT from "generate-embeddings.py"!!!
+
+
+ Plan:
+     Take input for a single word or phrase.
+     Generate an embedding for it.
+     Save it out to "generated.safetensors".
+
+ Note that you can generate an embedding from two words, or even more.
+
+ Note also that apparently there are multiple file formats for embeddings.
+ I only use the simplest of them, in the simplest way.
+ """
+
+
+ import sys
+ import json
+ import torch
+ from safetensors.torch import save_file
+ from transformers import CLIPProcessor, CLIPModel
+
+ import logging
+ # Turn off stupid messages from CLIPModel.from_pretrained()
+ logging.disable(logging.WARNING)
+
+ clipsrc = "openai/clip-vit-large-patch14"
+ processor = None
+ model = None
+
+ device = torch.device("cuda")
+
+
+ def init():
+     global processor
+     global model
+     # Load the processor and model
+     print("loading processor from " + clipsrc, file=sys.stderr)
+     processor = CLIPProcessor.from_pretrained(clipsrc)
+     print("done", file=sys.stderr)
+     print("loading model from " + clipsrc, file=sys.stderr)
+     model = CLIPModel.from_pretrained(clipsrc)
+     print("done", file=sys.stderr)
+
+     model = model.to(device)
+
+
+ def standard_embed_calc(text):
+     # Tokenize the text and run it through the CLIP text encoder
+     inputs = processor(text=text, return_tensors="pt")
+     inputs = inputs.to(device)
+     with torch.no_grad():
+         text_features = model.get_text_features(**inputs)
+     # Keep just the single embedding vector for this input
+     embedding = text_features[0]
+     return embedding
+
+
+ init()
+
+
+ word = input("type a phrase to generate an embedding for: ")
+
+ emb = standard_embed_calc(word)
+ embs = emb.unsqueeze(0)  # add a batch dimension so the saved tensor is [1, 768]
+
+ print("Shape of result = ", embs.shape)
+ output = "generated.safetensors"
+ print(f"Saving to {output}...")
+ save_file({"emb_params": embs}, output)
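For anyone checking the result, here is a minimal sketch (not part of the commit) of how the saved file could be inspected. It assumes the script above has already written "generated.safetensors" to the current directory, and it only uses safetensors.torch.load_file:

#!/usr/bin/env python
# Minimal sketch: inspect the file written by generate-embedding.py.
# Assumes "generated.safetensors" exists in the current directory and
# holds a single tensor stored under the key "emb_params".
from safetensors.torch import load_file

tensors = load_file("generated.safetensors")
for key, tensor in tensors.items():
    # For openai/clip-vit-large-patch14, emb_params should come out as shape (1, 768)
    print(key, tuple(tensor.shape), tensor.dtype)

The unsqueeze before saving appears to be there so the stored tensor is 2-D ([1, 768]) rather than a bare vector, which is the shape the simplest embedding-file consumers expect.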