v1_7 update (#28)
- v1_7 update (fa8ae125c7a1a746a1b5b1f9202c9af2e7741cd5)
- add more sources (f606534e4f82123943f89813eb4f159129a88d60)
- Update dolma.py (3c125d3ac182adc68c7c9ec9564ad0795af7c679)
- Delete urls/.DS_Store (b1f38ffc3f43f9fc53ea67e3251abfa0e97aee67)
- Upload v1_7.txt (4424757a5e782ede5ec1004432ed5c3cceeda31b)
- Update .gitignore (fcc0c611396d3623549aa63c12ff91bf12dfdc35)
- Update README.md (099c384b6a457ce79553382178a618e8b78406f0)
Co-authored-by: Kyle Lo <kylel@users.noreply.huggingface.co>

Files changed:
- .gitignore +1 -1
- README.md +32 -5
- dolma.py +5 -2
- urls/.DS_Store +0 -0
- urls/v1_7.txt +0 -0
.gitignore
CHANGED
@@ -59,7 +59,7 @@ target/
 *.so
 
 # macOS metadata
-
+*.DS_Store
 
 # ignoring test output
 /tests/work/
README.md
CHANGED
@@ -1,6 +1,6 @@
 ---
 license: odc-by
-viewer:
+viewer: false
 task_categories:
 - text-generation
 language:
@@ -28,26 +28,51 @@ More information:
 
 To learn more about the toolkit used to create Dolma, including how to replicate this dataset, head over our [GitHub project page](https://github.com/allenai/dolma/tree/main/docs)!
 
+**2024-04-17: Dolma v1.7 Release.** We have released an updated version of Dolma that we used to train our latest [OLMo 7B-v1.7](https://huggingface.co/allenai/OLMo-7b-v1.7) model.
+
 **2024-04-15: License Change.** We have updated the license of Dolma to [ODC-BY](https://opendatacommons.org/licenses/by/1-0/). Please see this [blog post](https://blog.allenai.org/making-a-switch-dolma-moves-to-odc-by-8f0e73852f44) for more information.
 
 
 ## Versions
 
-At the moment, there are five versions of Dolma available:
+At the moment, there are six versions of Dolma available:
 
 | **Version** | **Default?** | **Release Date** | **Size** (gzip) | **Description** |
 |--|:--:|--|--|--|
-| `
+| `v1_7` | ✅ | 2024-04-15 | 4.5 TB | Used to train [OLMo-7B-v1.7](https://huggingface.co/allenai/OLMo-7b-v1.7). |
+| `v1_6` | | 2024-01-31 | 5.4 TB | An update to v1.5 with some bug-fixes. |
 | `v1_6-sample` | | 2024-01-31 | 16.4 GB | A smaller sample of Dolma, with roughly 10 billion tokens. Useful for data exploration. |
 | `v1_5` | | 2023-10-31 | 6.4 TB | The version of Dolma used to train [OLMo-1B](https://huggingface.co/allenai/OLMo-1B). Roughly 3 trillion tokens. |
 | `v1_5-sample` | | 2023-10-31 | 2.9 TB | A sample of roughly 1.9 trillion tokens used to train [OLMo-7B](https://huggingface.co/allenai/OLMo-7B) |
 | `v1` | | 2023-08-18 | 6.0 TB | The first version of Dolma. |
 
-(Size difference between `v1_6` and previous version is due to different set of metadata included in files: we removed redundant metadata in `v1_6`.)
 
-## Summary Statistics (v1.
+## Summary Statistics (v1.7)
+
+| **Source** | **Provenance** | **New?** | **Documents** (millions) | **OLMo tokens** (billions) | **Sample Proportion** | **Cutoff Date** | **Processing** |
+|--|--|--|--|--|--|--|--|
+| Dolma's CC | [Common Crawl](https://commoncrawl.org/) via Dolma v1.6 | Updated | 875.2 | 1,195.5 | 50% | Mar 2023 | Extracted using the Dolma pipeline; new quality filtering and deduplication steps. |
+| Refined Web | [Refined Web](https://huggingface.co/datasets/tiiuae/falcon-refinedweb) | Yes | 664.0 | 456.4 | 100% | Feb 2023 | Filtered using the Dolma pipeline; new quality filtering and deduplication steps. |
+| StarCoder | [StarCoder](https://huggingface.co/blog/starcoder) | Yes | 206.6 | 263.8 | 100% | May 2023 | No further processing. |
+| C4 | [C4](https://huggingface.co/datasets/c4) via Dolma v1.6 | Updated | 249.9 | 138.4 | 50% | Apr 2019 | Filtered using the Dolma pipeline; new quality filtering and deduplication steps. |
+| Reddit | [PushShift API](https://github.com/pushshift/api) | Updated | 377.4 | 79.9 | 100% | Mar 2023 | Extracted using the Dolma pipeline; new quality filtering and deduplication steps. |
+| Semantic Scholar ([S2ORC](https://aclanthology.org/2020.acl-main.447/) & [S2AG](https://www.semanticscholar.org/product/api)) | [peS2o](https://huggingface.co/datasets/allenai/peS2o) via Dolma v1.6 | No | 38.8 | 57.2 | 100% | Mar 2023 | Same as Dolma v1.6 |
+| arXiv | [RedPajama v1](https://huggingface.co/datasets/togethercomputer/RedPajama-Data-1T) | Yes | 1.5 | 28.0 | 100% | Mar 2023 | No further processing. |
+| StackExchange | [RedPajama v1](https://huggingface.co/datasets/togethercomputer/RedPajama-Data-1T) | Yes | 29.3 | 19.6 | 100% | Mar 2023 | No further processing. |
+| Flan | [Flan](https://arxiv.org/abs/2301.13688) via [Tulu](https://huggingface.co/datasets/allenai/tulu-v2-sft-mixture) | Yes | 52.1 | 16.5 | 100% | Mar 2023 | |
+| CC News | [Common Crawl](https://commoncrawl.org/blog/news-dataset-available) | Yes | 22.0 | 14.3 | 100% | Mar 2023 | Extracted using the Dolma pipeline; new quality filtering and deduplication steps. |
+| OpenWebMath | [OpenWebMath](https://huggingface.co/datasets/open-web-math/open-web-math) via [Proof Pile II](https://huggingface.co/datasets/EleutherAI/proof-pile-2) | Yes | 2.9 | 12.6 | 100% | Oct 2023 | Training subset; no further processing. |
+| Algebraic Stack | [Proof Pile II](https://huggingface.co/datasets/EleutherAI/proof-pile-2) | Yes | 2.8 | 12.6 | 100% | Oct 2023 | Training subset; no further processing. |
+| Project Gutenberg | [Project Gutenberg](https://www.gutenberg.org) via Dolma v1.6 | No | 0.0556 | 5.3 | 100% | Mar 2023 | Same as Dolma v1.6 |
+| MegaWika | [MegaWika](https://huggingface.co/datasets/hltcoe/megawika) | Yes | 3.2 | 4.6 | 100% | Jul 2023 | English web pages cited from Wikipedia; curated using the full Dolma pipeline. |
+| Wikipedia & Wikibooks | [Wikimedia](https://dumps.wikimedia.org) via Dolma v1.6 | No | 6.2 | 3.7 | 200% | Mar 2023 | Same as Dolma v1.6 |
+| **Total** | | | | **2,308.5** | **1,715.1** | | |
+
+(Only a subset of the total data was used to train OLMo 7B-v1.7. The token counts above cover the full dataset; applying the sample proportions gives the number of tokens actually used for training, roughly 1.715 trillion.)
 
 
+## Summary Statistics (v1.6)
+
 | **Source** | **Doc Type** | **UTF-8 bytes** (GB) | **Documents** (millions) | **Unicode words** (billions) | **Llama tokens** (billions) |
 |--|--|--|--|--|--|
 | Common Crawl | web pages | 9,022 | 3,370 | 1,775 | 2,281 |
@@ -60,6 +85,8 @@ At the moment, there are five versions of Dolma available:
 | **Total** | | **11,519** | **4,367** | **2,318** | **3,059** |
 
 
+(The size difference between `v1_6` and `v1_5` is due to a different set of metadata included in the files: we removed redundant metadata in `v1_6`.)
+
 
 ## Download
 
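The new v1.7 table introduces a "Sample Proportion" column: each source's raw token count is scaled by its proportion (200% means the source is upsampled twice) to arrive at the roughly 1.715 trillion tokens actually seen in training. Below is a minimal sketch of that arithmetic using a handful of figures copied from the table above; it is illustrative only, since the published per-source counts are rounded and summing the full table will not land exactly on the reported total.

```python
# Illustrative sketch (not part of the dataset repo): how "Sample Proportion"
# scales raw OLMo token counts (in billions) into effective training tokens.
# Figures below are copied from the v1.7 summary table.
sources = [
    ("Dolma's CC", 1195.5, 0.50),          # 50% of the source is sampled
    ("Refined Web", 456.4, 1.00),          # used in full
    ("C4", 138.4, 0.50),
    ("Wikipedia & Wikibooks", 3.7, 2.00),  # 200% = upsampled twice
]

effective = sum(tokens * proportion for _, tokens, proportion in sources)
print(f"Effective training tokens from these sources: {effective:.2f}B")
```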
dolma.py
CHANGED
@@ -1,4 +1,4 @@
-# Copyright
+# Copyright 2024 Allen Institute for AI
 #
 # Licensed under the Apache License, Version 2.0 (the "License");
 # you may not use this file except in compliance with the License.
@@ -35,6 +35,7 @@ _URL_LISTS = {
     "v1_5-sample": "urls/v1_5-sample.txt",
     "v1_6": "urls/v1_6.txt",
     "v1_6-sample": "urls/v1_6-sample.txt",
+    "v1_7": "urls/v1_7.txt",
 }
 _VERSIONS = {
     "v1": "1.0.0",
@@ -42,6 +43,7 @@ _VERSIONS = {
     "v1_5-sample": "1.5.0",
     "v1_6": "1.6.0",
     "v1_6-sample": "1.6.0",
+    "v1_7": "1.7.0",
 }
 _DATES = {
     "v1": "(Aug 2023)",
@@ -49,6 +51,7 @@ _DATES = {
     "v1_5-sample": "(Oct 2023)",
     "v1_6": "(Jan 2024)",
     "v1_6-sample": "(Jan 2024)",
+    "v1_7": "(Apr 2024)",
 }
 _BASE_URL = "https://olmo-data.org"
 
@@ -84,7 +87,7 @@ class Dolma(datasets.GeneratorBasedBuilder):
         for name in _URL_LISTS.keys()
     ]
 
-    DEFAULT_CONFIG_NAME = "
+    DEFAULT_CONFIG_NAME = "v1_7"
 
     def _info(self):
         return datasets.DatasetInfo(
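Net effect of the dolma.py changes: `v1_7` is registered in `_URL_LISTS`, `_VERSIONS`, and `_DATES`, and becomes the script's default configuration via `DEFAULT_CONFIG_NAME`. A hedged usage sketch follows; the `train` split name, streaming behavior, and whether `trust_remote_code` is needed depend on the loading script and on your `datasets` version.

```python
from datasets import load_dataset

# With DEFAULT_CONFIG_NAME = "v1_7", omitting the config name should load v1.7.
dolma = load_dataset("allenai/dolma", split="train", streaming=True, trust_remote_code=True)

# Earlier versions remain available by passing their config name explicitly.
dolma_v1_6 = load_dataset("allenai/dolma", name="v1_6", split="train", streaming=True, trust_remote_code=True)

print(next(iter(dolma)))  # peek at one document
```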
urls/.DS_Store
DELETED
Binary file (6.15 kB)

urls/v1_7.txt
ADDED
The diff for this file is too large to render.
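`urls/v1_7.txt` is the shard list that the loading script pairs with `_BASE_URL = "https://olmo-data.org"` for the new config. For anyone who prefers to fetch shards directly instead of going through `datasets`, here is a rough sketch; it assumes each line of the file is either a full URL or a path to be joined with the base URL, which is an assumption about the file's format rather than something shown in this diff.

```python
import os
import requests

BASE_URL = "https://olmo-data.org"  # _BASE_URL from dolma.py

# Assumption: urls/v1_7.txt lists one shard per line, either as a full URL
# or as a path relative to BASE_URL.
with open("urls/v1_7.txt") as f:
    shards = [line.strip() for line in f if line.strip()]

os.makedirs("dolma_v1_7", exist_ok=True)
for shard in shards[:2]:  # demo: fetch only the first two shards
    url = shard if shard.startswith("http") else f"{BASE_URL}/{shard}"
    out_path = os.path.join("dolma_v1_7", os.path.basename(url))
    with requests.get(url, stream=True, timeout=60) as resp:
        resp.raise_for_status()
        with open(out_path, "wb") as out:
            for chunk in resp.iter_content(chunk_size=1 << 20):
                out.write(chunk)
    print(f"downloaded {out_path}")
```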