|
# tokenspace directory |
|
|
|
This directory contains utilities for browsing the "token space" of CLIP ViT-L/14.
|
|
|
Primary tools are: |
|
|
|
* "calculate-distances.py": allows command-line browsing of words and their neighbours |
|
* "graph-embeddings.py": plots graph of full values of two embeddings |
|
|
|
|
|
## calculate-distances.py |
|
|
|
Loads the generated embeddings, reads in a word, calculates "distance" to every embedding, and then shows the closest "neighbours".
|
|
|
To run this, you need the files "embeddings.safetensors" and "dictionary".
|
|
|
You will need to rename or copy the appropriate files for this, as described below.
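
Conceptually, the lookup works like the sketch below. The tensor key "embeddings" and the one-word-per-line dictionary format are illustrative assumptions, not guarantees about the actual script.

```python
import torch
from safetensors.torch import load_file

# Load the precomputed embeddings and their matching word list
embeddings = load_file("embeddings.safetensors")["embeddings"]  # key name assumed
with open("dictionary") as f:
    words = f.read().splitlines()

def neighbours(word: str, k: int = 10):
    idx = words.index(word)
    # Euclidean distance from the chosen word's embedding to every embedding
    dists = torch.norm(embeddings - embeddings[idx], dim=1)
    best = torch.argsort(dists)[: k + 1].tolist()  # position 0 is the word itself
    return [(words[i], dists[i].item()) for i in best]

for w, d in neighbours("cat"):
    print(f"{w:20s} {d:.4f}")
```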
|
|
|
## graph-embeddings.py |
|
|
|
Run the script. It will ask you for two text strings. |
|
Once you enter both, it will plot the graph and display it for you.
|
|
|
Note that this tool does not require any of the other files; it just needs the requisite Python modules installed (pip install -r requirements.txt).
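
A minimal sketch of that flow, assuming the script plots the pooled output of the Hugging Face CLIPTextModel for ViT-L/14 (the real script may load or pool the model differently):

```python
import torch
import matplotlib.pyplot as plt
from transformers import CLIPTokenizer, CLIPTextModel

model_id = "openai/clip-vit-large-patch14"
tokenizer = CLIPTokenizer.from_pretrained(model_id)
model = CLIPTextModel.from_pretrained(model_id)

def embed(text: str) -> torch.Tensor:
    # Pooled 768-dim output of the CLIP text encoder for this string
    tokens = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        return model(**tokens).pooler_output[0]

first = input("First string: ")
second = input("Second string: ")
plt.plot(embed(first).numpy(), label=first)
plt.plot(embed(second).numpy(), label=second)
plt.xlabel("embedding dimension (0-767)")
plt.legend()
plt.show()
```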
|
|
|
## embeddings.safetensors
|
|
|
You can either copy one of the provided files, or generate your own. |
|
See generate-embeddings.py for that. |
|
|
|
Note that you must always use the "dictionary" file that matches your embeddings file.
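
One quick way to sanity-check that a pair matches (a sketch, again assuming the "embeddings" tensor key and a one-word-per-line dictionary):

```python
from safetensors.torch import load_file

embeddings = load_file("embeddings.safetensors")["embeddings"]  # key name assumed
with open("dictionary") as f:
    words = f.read().splitlines()

# Each dictionary line should correspond to exactly one embedding row
assert embeddings.shape[0] == len(words), (
    f"mismatch: {embeddings.shape[0]} embeddings vs {len(words)} words"
)
```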
|
|
|
## embeddings.allids.safetensors
|
|
|
DO NOT USE THIS ONE for programs that expect a matching dictionary. This one is indexed purely by numeric token id. It is intended more for research datamining, but it does have a matching graph front end, graph-byid.py.
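
For illustration only, a by-token-id lookup might look like this (the tensor key is an assumption):

```python
from safetensors.torch import load_file

embeddings = load_file("embeddings.allids.safetensors")["embeddings"]  # key assumed
token_id = 320  # a hypothetical CLIP token id
print(embeddings[token_id])  # its 768-dim embedding
```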
|
|
|
|
|
## dictionary
|
|
|
Make sure to always use the dictionary file that matches your embeddings file. |
|
|
|
The "dictionary.fullword" file is pulled from fullword.json, which is distilled from "full words" |
|
present in the ViT-L/14 CLIP model's provided token dictionary, called "vocab.json". |
|
Thus there are only around 30,000 words in it |
|
|
|
If you want to use the provided "embeddings.safetensors.huge" file, you will want the matching "dictionary.huge" file, which has over 300,000 words.
|
|
|
This huge file comes from the Linux "wamerican-huge" package, which installs it as /usr/share/dict/american-english-huge.
|
|
|
There is also a "wamerican-insane" package.
|
|
|
|
|
## generate-embeddings.py |
|
|
|
Generates the "embeddings.safetensors" file, based on the "dictionary" file present. It takes a few minutes to run, depending on the size of the dictionary.
|
|
|
The shape of the embeddings tensor is [number-of-words][768].
|
|
|
Note that, yes, it is possible to pull a tensor directly from the CLIP model, using the key name text_model.embeddings.token_embedding.weight.
|
|
|
That will NOT GIVE YOU THE RIGHT DISTANCES! Hence we calculate, and then store, the embedding weights actually generated by the full CLIP text-encoding process.
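
A condensed sketch of that generation step, assuming a one-word-per-line dictionary and the pooled output of the Hugging Face CLIPTextModel (the real script's batching, pooling, and key names may differ):

```python
import torch
from safetensors.torch import save_file
from transformers import CLIPTokenizer, CLIPTextModel

model_id = "openai/clip-vit-large-patch14"
tokenizer = CLIPTokenizer.from_pretrained(model_id)
model = CLIPTextModel.from_pretrained(model_id)

with open("dictionary") as f:
    words = f.read().splitlines()

rows = []
with torch.no_grad():
    for word in words:
        tokens = tokenizer(word, return_tensors="pt")
        # Run the word through the full text encoder, not just the raw
        # token_embedding lookup table
        rows.append(model(**tokens).pooler_output[0])

# Resulting shape: [number-of-words][768]
save_file({"embeddings": torch.stack(rows)}, "embeddings.safetensors")
```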
|
|
|
|
|
## fullword.json |
|
|
|
This file contains a collection of "one word, one CLIP token id" pairings. |
|
The file was derived from vocab.json, which ships with multiple SD models on huggingface.co.
|
|
|
The file is optimized for what people are actually going to type as words. First, all entries without a "</w>" suffix were stripped out. Then all the garbage punctuation and foreign characters were stripped out. Finally, the "</w>" suffix itself was stripped off, for ease of use.
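
A sketch of that filtering, where a simple lowercase-ASCII test stands in for the "garbage" removal (the actual criteria may differ):

```python
import json
import re

with open("vocab.json") as f:
    vocab = json.load(f)  # maps token string -> token id

fullwords = {
    token[:-4]: token_id          # strip the trailing "</w>" (4 chars)
    for token, token_id in vocab.items()
    if token.endswith("</w>") and re.fullmatch(r"[a-z]+", token[:-4])
}

with open("fullword.json", "w") as f:
    json.dump(fullwords, f, indent=1)
```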
|
|
|
|