---
license: mit
---
### Pre-computed CLIP embeddings
Embeddings are stored as HDF5 datasets with the following structure:
```python
# <DATASET_NAME>_<MODEL_NAME>_<OP>.hdf5
#
# DATASET_NAME: name of the dataset, e.g., "imagenette".
# MODEL_NAME:   name of the model, e.g., "open_clip:ViT-B-32".
# OP:           split of the dataset (either "train" or "val").
#
# dataset["embedding"] contains the embeddings.
# dataset["label"] contains the labels.
```
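Assuming `h5py` is installed, a split can be read back as a sketch like the following (the dummy write step is only there to make the snippet self-contained; in practice the file is produced by `make_dataset.py`, and the embedding dimension depends on the model):

```python
import h5py
import numpy as np

# Path following the <DATASET_NAME>_<MODEL_NAME>_<OP>.hdf5 convention.
path = "imagenette_open_clip:ViT-B-32_val.hdf5"

# Write a tiny dummy file so the snippet runs stand-alone.
with h5py.File(path, "w") as f:
    f.create_dataset("embedding", data=np.random.randn(10, 512).astype("float32"))
    f.create_dataset("label", data=np.arange(10))

# Read the embeddings and labels back.
with h5py.File(path, "r") as f:
    embeddings = f["embedding"][:]
    labels = f["label"][:]

print(embeddings.shape)  # (10, 512)
print(labels.shape)      # (10,)
```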
To generate a dataset, run:
```bash
$ python make_dataset.py -h
usage: make_dataset.py [-h] [--dataset DATASET [DATASET ...]] [--model MODEL [MODEL ...]]
options:
  --dataset DATASET [DATASET ...]   List of datasets to encode.
  --model MODEL [MODEL ...]         List of models to use.
```
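For example, the naming convention above can be captured by a small helper like this (`embedding_filename` is hypothetical and not part of `make_dataset.py`; it is shown only to make the scheme concrete):

```python
def embedding_filename(dataset_name: str, model_name: str, op: str) -> str:
    """Build the <DATASET_NAME>_<MODEL_NAME>_<OP>.hdf5 filename described above."""
    assert op in ("train", "val"), "OP must be 'train' or 'val'"
    return f"{dataset_name}_{model_name}_{op}.hdf5"


print(embedding_filename("imagenette", "open_clip:ViT-B-32", "train"))
# imagenette_open_clip:ViT-B-32_train.hdf5
```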
Supported dataset names (see [supported_datasets.txt](supported_datasets.txt)):
* `imagenette` [[dataset](https://github.com/fastai/imagenette)]
Supported model names (see [supported_models.txt](supported_models.txt)):
* `open_clip:ViT-B-32` [[model](https://huggingface.co/laion/CLIP-ViT-B-32-laion2B-s34B-b79K)]
* `open_clip:ViT-L-14` [[model](https://huggingface.co/laion/CLIP-ViT-L-14-laion2B-s32B-b82K)]
* `clip:ViT-B/32` [[model](https://github.com/openai/CLIP)]
* `clip:ViT-L/14` [[model](https://github.com/openai/CLIP)]
### References
```
@misc{teneggi2024ibetdidmean,
title={I Bet You Did Not Mean That: Testing Semantic Importance via Betting},
author={Jacopo Teneggi and Jeremias Sulam},
year={2024},
eprint={2405.19146},
archivePrefix={arXiv},
primaryClass={stat.ML},
}
```