dclure committed
Commit e6201a1 · 1 Parent(s): a0b51bd
Files changed (4)
  1. README.md +138 -0
  2. nn10.png +3 -0
  3. nn30.png +3 -0
  4. nn60.jpg +3 -0
README.md ADDED
@@ -0,0 +1,138 @@
# LAION-Aesthetics :: CLIP → UMAP

This dataset is a CLIP (text) → UMAP embedding of the extremely cool [LAION-Aesthetics dataset](https://laion.ai/blog/laion-aesthetics/) - specifically the [`improved_aesthetics_6plus` version](https://huggingface.co/datasets/ChristophSchuhmann/improved_aesthetics_6plus), which filters the full dataset to images with scores of > 6 under the "aesthetic" filtering model.

The dataset here includes coordinates for 3x separate UMAP fits using different values for the `n_neighbors` parameter - `10`, `30`, and `60` - which are broken out as separate columns with different suffixes (see the loading sketch after this list):

- `n_neighbors=10` → (`x_nn10`, `y_nn10`)
- `n_neighbors=30` → (`x_nn30`, `y_nn30`)
- `n_neighbors=60` → (`x_nn60`, `y_nn60`)

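For orientation, here's a minimal sketch of pulling these columns into pandas and plotting one of the fits - not part of the pipeline described below, and the parquet filename is a hypothetical local path:

```python
# Minimal loading sketch - the parquet filename is a hypothetical local path,
# not a file shipped with this repo.
import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_parquet('laion-aesthetics-6plus-umap.parquet')

# Each fit is a separate (x, y) column pair; this uses the n_neighbors=30 fit.
plt.figure(figsize=(10, 10))
plt.scatter(df['x_nn30'], df['y_nn30'], s=0.05, alpha=0.1, linewidths=0)
plt.axis('off')
plt.savefig('nn30-preview.png', dpi=200)
```
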
### `nn10`

![](nn10.png)

### `nn30`

![](nn30.png)

### `nn60`

(The version from [Twitter](https://twitter.com/clured/status/1565399157606580224).)

![](nn60.jpg)
## Pipeline

The script for producing this can be found here:

https://github.com/davidmcclure/loam-viz/blob/laion/laion.py

It's very simple - it just uses the `openai/clip-vit-base-patch32` model out of the box to encode the text captions:

```python
@app.command()
def clip(
    src: str,
    dst: str,
    text_col: str = 'TEXT',
    limit: Optional[int] = typer.Option(None),
    batch_size: int = typer.Option(512),
):
    """Embed with CLIP."""
    # (Excerpt from laion.py - relies on the script's module-level imports
    # and `device` setup.)
    df = pd.read_parquet(src)

    if limit:
        df = df.head(limit)

    tokenizer = CLIPTokenizerFast.from_pretrained('openai/clip-vit-base-patch32')
    model = CLIPTextModel.from_pretrained('openai/clip-vit-base-patch32')

    model = model.to(device)

    texts = df[text_col].tolist()

    # Encode the captions in batches; embeddings accumulate on the CPU.
    embeds = []
    for batch in chunked_iter(tqdm(texts), batch_size):

        enc = tokenizer(
            batch,
            return_tensors='pt',
            padding=True,
            truncation=True,
        )

        enc = enc.to(device)

        with torch.no_grad():
            res = model(**enc)

        embeds.append(res.pooler_output.to('cpu'))

    embeds = torch.cat(embeds).numpy()

    np.save(dst, embeds)

    print(embeds.shape)
```

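As a quick standalone sanity check (not in the repo script), the same text encoder can be run on a couple of captions - with `openai/clip-vit-base-patch32`, `pooler_output` is one 512-dimensional vector per caption, which is what gets stacked into the embedding matrix above:

```python
# Standalone sketch: encode two example captions and confirm the embedding width.
import torch
from transformers import CLIPTokenizerFast, CLIPTextModel

tokenizer = CLIPTokenizerFast.from_pretrained('openai/clip-vit-base-patch32')
model = CLIPTextModel.from_pretrained('openai/clip-vit-base-patch32')

enc = tokenizer(
    ['an oil painting of a lighthouse', 'a photo of a mountain lake'],
    return_tensors='pt',
    padding=True,
    truncation=True,
)

with torch.no_grad():
    out = model(**enc)

print(out.pooler_output.shape)  # torch.Size([2, 512])
```
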
Then it uses `cuml.GaussianRandomProjection` to do an initial squeeze to 64d (which gets the embedding tensor small enough to fit onto a single GPU for the UMAP) -

```python
@app.command()
def random_projection(src: str, dst: str, dim: int = 64):
    """Random projection on an embedding matrix."""
    import rmm
    import cuml

    # Managed (unified) memory lets allocations spill past GPU RAM if needed.
    rmm.reinitialize(managed_memory=True)

    embeds = np.load(src)

    rp = cuml.GaussianRandomProjection(n_components=dim)
    embeds = rp.fit_transform(embeds)

    np.save(dst, embeds)
    print(embeds.shape)
```

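Without a RAPIDS GPU stack, roughly the same squeeze can be done on CPU with scikit-learn's `GaussianRandomProjection` - a sketch with hypothetical file paths, not the pipeline's actual code:

```python
# CPU-only sketch of the same 512d -> 64d squeeze, using scikit-learn.
# The .npy paths are hypothetical placeholders.
import numpy as np
from sklearn.random_projection import GaussianRandomProjection

embeds = np.load('clip-embeds.npy')

rp = GaussianRandomProjection(n_components=64, random_state=1)
embeds_64 = rp.fit_transform(embeds)

np.save('clip-embeds-64d.npy', embeds_64)
print(embeds_64.shape)
```
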
And then `cuml.UMAP` to get from 64d -> 2d -

```python
@app.command()
def umap(
    df_src: str,
    embeds_src: str,
    dst: str,
    n_neighbors: int = typer.Option(30),
    n_epochs: int = typer.Option(1000),
    negative_sample_rate: int = typer.Option(20),
):
    """UMAP to 2d."""
    rmm.reinitialize(managed_memory=True)

    df = pd.read_parquet(df_src)

    embeds = np.load(embeds_src)

    # Cast to float16 to halve the memory footprint before fitting.
    embeds = embeds.astype('float16')

    print(embeds.shape)
    print(embeds.dtype)

    reducer = cuml.UMAP(
        n_neighbors=n_neighbors,
        n_epochs=n_epochs,
        negative_sample_rate=negative_sample_rate,
        verbose=True,
    )

    x = reducer.fit_transform(embeds)

    # Attach the 2d coordinates back onto the caption dataframe.
    df['x'] = x[:, 0]
    df['y'] = x[:, 1]

    df.to_parquet(dst)
    print(df)
```
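The `umap` command writes one parquet per fit with plain `x`/`y` columns, so the suffixed columns in this dataset (`x_nn10`/`y_nn10` through `x_nn60`/`y_nn60`) imply a final merge across the three runs - roughly along these lines (a sketch with hypothetical file names, not taken from the repo):

```python
# Sketch: combine the three per-fit outputs into the suffixed column layout.
# The per-fit parquet paths are hypothetical placeholders.
import pandas as pd

merged = None
for nn in (10, 30, 60):
    fit = pd.read_parquet(f'umap-nn{nn}.parquet')
    fit = fit.rename(columns={'x': f'x_nn{nn}', 'y': f'y_nn{nn}'})
    # The runs share the same row order, so an index join lines them up.
    merged = fit if merged is None else merged.join(fit[[f'x_nn{nn}', f'y_nn{nn}']])

merged.to_parquet('laion-aesthetics-6plus-umap.parquet')
```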
nn10.png ADDED

Git LFS Details

  • SHA256: 05cbe020d3b5883e4d5423506c8b3e20a03ca5139c253bc63f2bc4aab847d5e6
  • Pointer size: 133 Bytes
  • Size of remote file: 19.1 MB
nn30.png ADDED

Git LFS Details

  • SHA256: 7026114c55cc2dc5d04be083625acee4e39f1da331f04c8b42f88070c91447da
  • Pointer size: 133 Bytes
  • Size of remote file: 14.6 MB
nn60.jpg ADDED

Git LFS Details

  • SHA256: a9bebf48330cfb853c41537d3d82272734fe718b17bb73ca99b3d9c0ae2cafb9
  • Pointer size: 133 Bytes
  • Size of remote file: 13.4 MB