rulins committed
Commit 7ef6d22
Parent: 234ba05

Update README.md

Files changed (1)
  1. README.md +26 -5
README.md CHANGED
We release the raw passages, embeddings, and index of MassiveDS.

Website: https://retrievalscaling.github.io

## Versions
We release two versions of MassiveDS:
1. [MassiveDS-1.4T](https://huggingface.co/datasets/rulins/MassiveDS-1.4T), which contains 1.4T tokens in the datastore.
2. [MassiveDS-140B](https://huggingface.co/datasets/rulins/MassiveDS-140B), which is a subsampled version containing 140B tokens in the datastore.

**Note**:

* Due to the large data volume, **we are still uploading the data for MassiveDS-1.4T** (ETA: July 31).
* MassiveDS-140B is ready to go. Please try our 10% subsampled version first!
* Code support for running with MassiveDS is available at https://github.com/RulinShao/retrieval-scaling (see the clone sketch below).
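
To use the code release, you can clone it next to the datastore. A minimal sketch, assuming you want the code in your current working directory:

```bash
# Clone the accompanying code release (repository URL from the note above).
git clone https://github.com/RulinShao/retrieval-scaling.git
cd retrieval-scaling/
```

See that repository's README for setup and usage instructions.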

## File structure
* `raw_data`: plain data in JSONL files.
* `passages`: chunked raw passages with passage IDs; each passage is chunked to contain no more than 256 words (see the inspection sketch after this list).
* `embeddings`: embeddings of the passages, encoded with Contriever-MSMARCO.
* `index`: a flat index built from the embeddings.
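
Once the files are downloaded (see the Download section below), you can sanity-check the layout by peeking at one chunked passage. A minimal sketch: the shard name `passages/0.jsonl` is an assumption, so point it at whichever file your checkout actually contains.

```bash
# Print the first chunked passage of one shard as pretty JSON.
# The shard path is hypothetical; each line of a JSONL file is one JSON object.
head -n 1 passages/0.jsonl | python -m json.tool
```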

## Download
We recommend using Git LFS to download the large files. An example workflow is shown below.

First, clone only the Git metadata (`--filter=blob:none` defers downloading file contents until they are needed).
```bash
git clone --filter=blob:none https://huggingface.co/datasets/rulins/MassiveDS-1.4T
cd MassiveDS-1.4T/
```
(Optional) Restrict the checkout to only the directories you want, e.g., just `embeddings`. Skip this step if you want to download everything.
```bash
git sparse-checkout init --cone
git sparse-checkout set embeddings
```
Finally, pull the data.
```bash
git lfs install
git lfs pull
```
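
If you skipped the sparse checkout but still want only part of the data, `git lfs pull` also accepts include patterns, so you can fetch a subset of the LFS objects directly. A minimal sketch; the `embeddings/**` pattern is an assumption based on the file structure above.

```bash
# Fetch only the LFS objects under embeddings/; the pattern is an assumption,
# so adjust it to the directories you actually need.
git lfs pull --include="embeddings/**"
```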