RobbeSneyders committed
Commit a7b9860
1 Parent(s): 2a84a43

Add dataset creation details to dataset card

Files changed (1)
  1. README.md +47 -4
README.md CHANGED
@@ -69,15 +69,58 @@ We provide an [example use case](https://github.com/ml6team/fondant-usecase-cont

  <!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->

- The data repository is
- structured as follows:
- - [data/](https://huggingface.co/datasets/fondant-ai/datacomp-small-clip/viewer): The dataset
+ The data repository is structured as follows:
+ - [data/](https://huggingface.co/datasets/fondant-ai/datacomp-small-clip/viewer/embeddings): The dataset
  containing ids, urls, and CLIP embeddings
  - [faiss](https://huggingface.co/datasets/fondant-ai/datacomp-small-clip/blob/main/faiss):
  The faiss index
- - [id_mapping/](https://huggingface.co/datasets/fondant-ai/datacomp-small-clip/tree/main/id_mapping):
+ - [id_mapping/](https://huggingface.co/datasets/fondant-ai/datacomp-small-clip/viewer/id_mapping):
  The mapping of the faiss ids to the original urls

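For illustration, a minimal sketch of querying the index together with the id mapping. The local file names (`faiss`, `id_mapping/`) follow the repository layout above, but the parquet format of the mapping and the retrieval code are assumptions, not a documented API:

```python
import faiss
import pandas as pd
from transformers import CLIPModel, CLIPProcessor

# Load the index and the faiss-id -> url mapping from a local clone of this
# repository. The parquet layout of the mapping is an assumption; adjust as needed.
index = faiss.read_index("faiss")
id_mapping = pd.read_parquet("id_mapping/")

# Embed a text query with the same CLIP model that produced the image embeddings
# (see the "Embed images" section below).
model_id = "laion/CLIP-ViT-B-32-laion2B-s34B-b79K"
model = CLIPModel.from_pretrained(model_id)
processor = CLIPProcessor.from_pretrained(model_id)
inputs = processor(text=["a photo of a dog"], return_tensors="pt", padding=True)
query = model.get_text_features(**inputs).detach().numpy()
faiss.normalize_L2(query)  # CLIP indices are usually searched with normalized vectors

# Retrieve the 10 nearest neighbours and look up their original urls.
_, ids = index.search(query, 10)
print(id_mapping.iloc[ids[0]])
```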
+ ## Dataset Creation
+
+ We leveraged Fondant to generate the CLIP index and published the pipeline as a
+ [git repository](https://github.com/ml6team/fondant-clip-index). The pipeline consists of 4 steps:
+
+ - A [`load_from_hf_hub`](https://fondant.ai/en/stable/components/hub/#load_from_hf_hub#description)
+ operation that loads the
+ [datacomp_small](https://huggingface.co/datasets/mlfoundations/datacomp_small) dataset from
+ the Hugging Face Hub into the Fondant workspace and format.
+ - A [`download_images`](https://fondant.ai/en/stable/components/hub/#download_images#description)
+ operation which downloads the actual images from the urls in the dataset.
+ - An [`embed_images`](https://fondant.ai/en/stable/components/hub/#embed_images#description)
+ operation which embeds the downloaded images using a CLIP model.
+ - A [`write_to_file`](https://fondant.ai/en/stable/components/hub/#write_to_file#description)
+ operation which writes the original urls and generated embeddings to the chosen destination.
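As a rough sketch of how these four reusable components compose with Fondant's pipeline interface; the pipeline name, paths, and component arguments below are placeholders, and the authoritative definition lives in the linked repository:

```python
# Hypothetical sketch only; see https://github.com/ml6team/fondant-clip-index
# for the real pipeline definition, including the produced/consumed fields
# and component arguments, which are not reproduced here.
from fondant.pipeline import Pipeline

pipeline = Pipeline(name="datacomp-clip-index", base_path="./artifacts")

# 1. Load the datacomp_small ids and urls from the Hugging Face Hub.
dataset = pipeline.read(
    "load_from_hf_hub",
    arguments={"dataset_name": "mlfoundations/datacomp_small"},  # assumed argument name
)

# 2. Download the images behind the urls.
images = dataset.apply("download_images")

# 3. Embed the downloaded images with a CLIP model.
embeddings = images.apply(
    "embed_images",
    arguments={"model_id": "laion/CLIP-ViT-B-32-laion2B-s34B-b79K"},  # assumed argument name
)

# 4. Write the urls and embeddings to the chosen destination.
embeddings.write(
    "write_to_file",
    arguments={"path": "./output"},  # assumed argument name
)
```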
+
+ After running the pipeline, we used [`autofaiss`](https://github.com/criteo/autofaiss) to build the
+ CLIP index.
+
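A minimal example of such an autofaiss invocation; the paths and the metric choice below are illustrative assumptions rather than the exact settings used for this index:

```python
from autofaiss import build_index

# Build a faiss index from the exported embedding files; autofaiss picks an
# appropriate index type based on the data size and resource constraints.
build_index(
    embeddings="output/embeddings/",     # directory of .npy embedding files (placeholder path)
    index_path="faiss",                  # where to write the index
    index_infos_path="index_infos.json", # metadata about the chosen index
    metric_type="ip",                    # inner product, the usual metric for CLIP embeddings
)
```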
+ ### Execution details
+
+ #### Download images
+
+ We downloaded the images with 32 cores in parallel, each opening up to 25 concurrent connections,
+ and achieved a success rate of 72%, resulting in 9,251,172 images.
+
+ The downloading was executed on a VM on GCP using the Fondant Docker runner. We originally
+ planned to run this on Vertex AI, but moved to a VM after noticing lower network bandwidth on Vertex.
+
+ The success rate can probably be further improved by setting up a faster DNS resolver.
+
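The reported count is consistent with the roughly 12.8M candidate urls in the datacomp_small pool (0.72 × 12.8M ≈ 9.2M). As a sketch of the parallelism pattern described above, not of the actual `download_images` implementation, each worker could bound its open connections with a semaphore:

```python
# Illustrative pattern only: one asyncio worker with at most 25 open
# connections; 32 such workers would run in parallel over url shards.
import asyncio

import aiohttp


async def download_shard(urls: list[str], max_connections: int = 25) -> int:
    """Return the number of urls in the shard that downloaded successfully."""
    semaphore = asyncio.Semaphore(max_connections)

    async def fetch(session: aiohttp.ClientSession, url: str) -> bool:
        async with semaphore:
            try:
                timeout = aiohttp.ClientTimeout(total=10)
                async with session.get(url, timeout=timeout) as response:
                    return response.status == 200 and bool(await response.read())
            except Exception:
                return False  # dead link, DNS failure, timeout, ...

    async with aiohttp.ClientSession() as session:
        results = await asyncio.gather(*(fetch(session, url) for url in urls))
    return sum(results)
```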
+ #### Embed images
+
+ We leveraged the
+ [`laion/CLIP-ViT-B-32-laion2B-s34B-b79K`](https://huggingface.co/laion/CLIP-ViT-B-32-laion2B-s34B-b79K)
+ CLIP model. We chose this model for a couple of reasons: it is popular, which makes it
+ easy to use with existing embeddings; it is small, which makes it cheap to run; and it is an open
+ model trained on open data.
+
+ We appreciate any feedback on our choice of model, so we can take this into account if we
+ generate indices for larger datasets in the future.
+
+ The embedding was executed on 4 T4 GPUs on Google Cloud using our Vertex AI runner, with a batch
+ size of 32. The execution took 8 hours and 15 minutes.
+
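For reference, a minimal sketch of embedding a batch of images with this model through the `transformers` library; the actual run wraps this logic in the Fondant `embed_images` component, so the helper below is purely illustrative:

```python
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model_id = "laion/CLIP-ViT-B-32-laion2B-s34B-b79K"
device = "cuda" if torch.cuda.is_available() else "cpu"
model = CLIPModel.from_pretrained(model_id).to(device).eval()
processor = CLIPProcessor.from_pretrained(model_id)


def embed_batch(images: list[Image.Image]) -> torch.Tensor:
    """Embed a batch of PIL images into 512-dimensional CLIP vectors."""
    inputs = processor(images=images, return_tensors="pt").to(device)
    with torch.no_grad():
        return model.get_image_features(**inputs)
```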
  ## Terms and Conditions
  Under no circumstances can Fondant be held liable by a third party for (i) the accuracy or correctness of the content, (ii) an alleged infringement of intellectual property rights or (iii) any other alleged claim, action, injunction or suit resulting from the publication or use of the dataset.