Languages: English
Multilinguality: multilingual
Size Categories: 10M<n<100M
Language Creators: crowdsourced
Annotations Creators: no-annotation
Source Datasets: original
albertvillanova (HF staff) committed
Commit
45f0ff0
1 Parent(s): ae47398

Add explanation about the different subsets


Add explanation about the different configurations of the dataset:
- nq, multiset
- exact, compressed, no_index
- no_embeddings

Files changed (1): README.md (+11 −0)
README.md CHANGED
@@ -163,6 +163,17 @@ The wikipedia articles were split into multiple, disjoint text blocks of 100 wor
 
 The wikipedia dump is the one from Dec. 20, 2018.
 
+There are two types of DPR embeddings based on two different models:
+- `nq`: the model is trained on the Natural Questions dataset
+- `multiset`: the model is trained on multiple datasets
+
+Additionally, a FAISS index can be created from the embeddings:
+- `exact`: with an exact FAISS index (high RAM usage)
+- `compressed`: with a compressed FAISS index (approximate, but lower RAM usage)
+- `no_index`: without FAISS index
+
+Finally, there is the possibility of generating the dataset without the embeddings:
+- `no_embeddings`
 
 ### Supported Tasks and Leaderboards
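The options described in the diff combine into a single configuration string when loading the dataset. As a minimal sketch, assuming the `psgs_w100.<model>.<index>` naming scheme used on the public `wiki_dpr` dataset card (this scheme and the helper function below are assumptions, not stated in this commit):

```python
def dpr_config_name(model: str = "nq", index: str = "exact", embeddings: bool = True) -> str:
    """Build a wiki_dpr-style configuration string from the documented options.

    model:      "nq" or "multiset" (which model produced the embeddings)
    index:      "exact", "compressed", or "no_index" (FAISS index type)
    embeddings: if False, append the "no_embeddings" variant

    NOTE: the "psgs_w100.<model>.<index>" naming scheme is an assumption
    based on the public wiki_dpr dataset card.
    """
    name = f"psgs_w100.{model}.{index}"
    if not embeddings:
        name += ".no_embeddings"
    return name

# A config built this way would be passed as the second argument to
# datasets.load_dataset, e.g.:
#   load_dataset("wiki_dpr", dpr_config_name("multiset", "compressed"))
print(dpr_config_name())                          # psgs_w100.nq.exact
print(dpr_config_name("multiset", "compressed"))  # psgs_w100.multiset.compressed
```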