ppbrown committed
Commit e9623ba
1 Parent(s): eb222a6

Upload 2 files

Files changed (2)
  1. README.md +21 -11
  2. generate-distances.py +23 -27
README.md CHANGED
@@ -3,38 +3,48 @@
  This directory contains utilities for the purpose of browsing the
  "token space" of CLIP ViT-L/14

- Long term goal is to be able to literally browse from word to
- "nearby" word


  ## generate-embeddings.py

- Generates the "embeddings.safetensor" file!

  Basically goes through the fullword.json file, and
- generates a standalone embedding object for each word.
  Shape of the embeddings tensor, is
  [number-of-words][768]

- Note that it is possible to directly pull a tensor from the CLIP model,
- key of text_model.embeddings.token_embedding.weight

  This will NOT GIVE YOU THE RIGHT DISTANCES!
  Hence why we are calculating and then storing the embedding weights actually
- used by the CLIP process


- ## generate-distances.py

- Loads the prior generated embeddings, and then tries to calculate a full matrix
- of distances between all tokens


  ## fullword.json

  This file contains a collection of "one word, one CLIP token id" pairings.
- The file was taken from vocab.sdturbo.json
  First all the non-(/w) entries were stripped out.
  Then all the garbage punctuation and foreign characters were stripped out.
  Finally, the actual (/w) was stripped out, for ease of use.
 
 
  This directory contains utilities for the purpose of browsing the
  "token space" of CLIP ViT-L/14

+ The primary tool is "generate-distances.py",
+ which allows command-line browsing of words and their neighbours
+
+
+ ## generate-distances.py
+
+ Loads the previously generated embeddings, calculates a full matrix
+ of distances between all tokens, and then reads in words to show neighbours for.
+
+ To run this requires the files "embeddings.safetensors" and "fullword.json"
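As a rough illustration of what generate-distances.py does, here is a minimal sketch of the same cdist/topk neighbour lookup, run on a tiny made-up embeddings tensor instead of the real [number-of-words][768] data loaded from embeddings.safetensors (the words and vectors below are invented for the demo):

```python
# Sketch of the neighbour lookup in generate-distances.py, on toy data.
# Real script loads embeddings.safetensors + fullword.json instead.
import torch

wordlist = ["cat", "kitten", "dog", "car"]
# Toy 4x3 embeddings; the real tensor is [number-of-words][768]
embs = torch.tensor([
    [1.0, 0.0, 0.0],   # cat
    [0.9, 0.1, 0.0],   # kitten: deliberately close to cat
    [0.0, 1.0, 0.0],   # dog
    [0.0, 0.0, 1.0],   # car
])

# Full pairwise Euclidean distance matrix, as in the script
distances = torch.cdist(embs, embs, p=2)

# Nearest 2 tokens to "cat" (the word itself comes back at distance 0)
targetindex = wordlist.index("cat")
vals, idxs = torch.topk(distances[targetindex], 2, largest=False)
for d, i in zip(vals.tolist(), idxs.tolist()):
    print(wordlist[i], "(", round(d, 3), ")")
```

Since the word itself is always its own nearest neighbour at distance 0, the script asks topk for a few extra entries when listing "real" neighbours.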
 

  ## generate-embeddings.py

+ Generates the "embeddings.safetensors" file. Takes a few minutes to run.

  Basically goes through the fullword.json file, and
+ generates a standalone embedding for each word.
  Shape of the embeddings tensor, is
  [number-of-words][768]

+ Note that yes, it is possible to directly pull a tensor from the CLIP model,
+ using the keyname text_model.embeddings.token_embedding.weight

  This will NOT GIVE YOU THE RIGHT DISTANCES!
  Hence why we are calculating and then storing the embedding weights actually
+ generated by the CLIP process

+ ## embeddings.safetensors

+ Data file generated by generate-embeddings.py

 
  ## fullword.json

  This file contains a collection of "one word, one CLIP token id" pairings.
+ The file was taken from vocab.json, which is part of multiple SD models on huggingface.co
+
+ The file was optimized for what people are actually going to type as words.
  First all the non-(/w) entries were stripped out.
  Then all the garbage punctuation and foreign characters were stripped out.
  Finally, the actual (/w) was stripped out, for ease of use.
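The three filtering steps above can be sketched on a toy vocab dict. Note the sample entries below are invented, and the real CLIP vocab marks whole words with a "</w>" suffix; the heuristics for "garbage punctuation and foreign characters" are an assumption here:

```python
# Toy sketch of the fullword.json filtering steps described above.
# Sample entries are invented; real input is an SD model's vocab.json.
toy_vocab = {
    "cat</w>": 2368,      # whole word: keep
    "ki": 1023,           # sub-word fragment (no </w>): drop
    "!!!</w>": 995,       # punctuation: drop
    "héllo</w>": 30001,   # non-ascii ("foreign"): drop
    "dog</w>": 1929,      # whole word: keep
}

fullword = {}
for tok, tok_id in toy_vocab.items():
    if not tok.endswith("</w>"):                 # 1. strip non-(/w) entries
        continue
    word = tok[: -len("</w>")]                   # 3. strip the (/w) marker itself
    if not (word.isascii() and word.isalpha()):  # 2. strip punctuation/foreign chars
        continue
    fullword[word] = tok_id

print(fullword)   # {'cat': 2368, 'dog': 1929}
```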
generate-distances.py CHANGED
@@ -23,44 +23,40 @@ with open("fullword.json","r") as f:
  tokendict = json.load(f)
  wordlist = list(tokendict.keys())

- print("read in embeddingsnow",file=sys.stderr)
-

  model = safe_open(embed_file,framework="pt",device="cuda")
  embs=model.get_tensor("embeddings")
  embs.to(device)
-
-
  print("Shape of loaded embeds =",embs.shape)
- print("calculate distances now")

  distances = torch.cdist(embs, embs, p=2)
  print("distances shape is",distances.shape)

- targetword="cat"
- targetindex=wordlist.index(targetword)
- print("index of cat is",targetindex)
- targetdistances=distances[targetindex]
-
- smallest_distances, smallest_indices = torch.topk(targetdistances, 5, largest=False)

- smallest_distances=smallest_distances.tolist()
- smallest_indices=smallest_indices.tolist()

- print("The smallest distance values are",smallest_distances)
- print("The smallest index values are",smallest_indices)

- for t in smallest_indices:
-     print(wordlist[t])


-
- """
- import torch.nn.functional as F
- pos=0
- for word in tokendict.keys():
-     print("Calculating distances from",word)
-     home=embs[pos]
-     #distances = torch.cdist(embs, home.unsqueeze(0), p=2)
-     #distance = F.pairwise_distance(home, embs[,p=2).item()
- """
 
  tokendict = json.load(f)
  wordlist = list(tokendict.keys())

+ print("read in embeddings now",file=sys.stderr)

  model = safe_open(embed_file,framework="pt",device="cuda")
  embs=model.get_tensor("embeddings")
  embs.to(device)

  print("Shape of loaded embeds =",embs.shape)

+ # ("calculate distances now")
  distances = torch.cdist(embs, embs, p=2)
  print("distances shape is",distances.shape)

+ # Find 10 closest tokens to targetword.
+ # Will include the word itself
+ def find_closest(targetword):
+     try:
+         targetindex=wordlist.index(targetword)
+     except ValueError:
+         print(targetword,"not found")
+         return

+     #print("index of",targetword,"is",targetindex)
+     targetdistances=distances[targetindex]

+     smallest_distances, smallest_indices = torch.topk(targetdistances, 10, largest=False)

+     smallest_distances=smallest_distances.tolist()
+     smallest_indices=smallest_indices.tolist()
+     for d,i in zip(smallest_distances,smallest_indices):
+         print(wordlist[i],"(",d,")")
+     #print("The smallest distance values are",smallest_distances)
+     #print("The smallest index values are",smallest_indices)


+ print("Input a word now:")
+ for line in sys.stdin:
+     input_text = line.rstrip()
+     find_closest(input_text)