cloneofsimo committed — Commit 1461df2, 1 Parent(s): 62785a4

Update README.md

Files changed (1): README.md (+10 −7)
README.md CHANGED
```diff
@@ -1,5 +1,15 @@
+---
+size_categories:
+- 1M<n<10M
+---
 # Imagenet.int8: Entire Imagenet dataset in 5GB.
 
+<p align="center">
+<img src="contents/vae.png" alt="small" width="800">
+</p>
+
+| original, reconstructed from float16, reconstructed from uint8*
+
 Find 138 GB of imagenet dataset too bulky? Did you know the entire ImageNet dataset actually fits inside the RAM of an Apple Watch?
 
@@ -20,13 +30,6 @@ These include:
 
 ... but so little material online on the actual preprocessed dataset. I'm here to fix that. One thing I noticed was that the latents don't have to be full precision! Indeed, they can be as small as int8, and it won't hurt! Here are some examples:
 
-<p align="center">
-<img src="contents/vae.png" alt="small" width="800">
-</p>
-
-| original, reconstructed from float16, reconstructed from uint8*
-
-
 So clearly, it doesn't make sense to download the entire ImageNet and run the VAE every time. Just download this, `to('cuda')` the entire dataset just to flex, and call it a day. 😌
 
 (BTW, if you think you'll need higher precision, you can always further fine-tune your model at higher precision. But I doubt that.)
```
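The uint8 trick the README describes can be sketched as simple affine quantization of the VAE latents: clip to a fixed range, map to [0, 255], and invert before decoding. A minimal sketch, assuming a symmetric clipping range and an SD-style latent shape — the dataset's actual scale/offset parameters may differ:

```python
import numpy as np

def quantize_latents(latents, lo=-5.0, hi=5.0):
    # Map float latents clipped to [lo, hi] onto uint8 [0, 255].
    # The clipping range here is an assumption for illustration.
    clipped = np.clip(latents, lo, hi)
    scaled = (clipped - lo) / (hi - lo) * 255.0
    return np.round(scaled).astype(np.uint8)

def dequantize_latents(q, lo=-5.0, hi=5.0):
    # Invert the affine map; cast down to float16 for the decoder.
    return (q.astype(np.float32) / 255.0 * (hi - lo) + lo).astype(np.float16)

# Round-trip a batch of fake latents (shape is an assumption: a 4x32x32
# latent is what an SD-style VAE produces for a 256x256 image).
latents = np.random.randn(16, 4, 32, 32).astype(np.float32)
q = quantize_latents(latents)
recon = dequantize_latents(q)

# Worst-case round-trip error is about half a quantization step
# ((hi - lo) / 255 / 2 ~= 0.02) plus float16 rounding.
err = np.abs(np.clip(latents, -5.0, 5.0) - recon.astype(np.float32)).max()
```

The round-trip error stays far below the noise levels diffusion training injects anyway, which is why the reconstructions above look indistinguishable.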
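The "entire ImageNet in ~5 GB" headline also checks out on the back of an envelope, assuming one uint8 latent of shape 4×32×32 per image (again an assumption about the storage layout, not a confirmed detail of this dataset):

```python
# Sanity-check the ~5 GB figure against the ~138 GB of raw JPEGs.
n_images = 1_281_167             # ImageNet-1k training set size
bytes_per_latent = 4 * 32 * 32   # uint8 -> 1 byte per latent element
total_gb = n_images * bytes_per_latent / 1e9  # ~5.2 GB
```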