Languages: English
Size Categories: n>1T
soldni committed on
Commit 7da0b42
1 Parent(s): 1eb0631
Files changed (7)
  1. .DS_Store +0 -0
  2. README.md +48 -71
  3. dolma.py +50 -30
  4. urls/v1.txt +0 -0
  5. urls/v1_5r1-sample.txt +0 -0
  6. urls/v1_5r1.txt +0 -0
  7. urls/v1_5r2.txt +0 -0
.DS_Store DELETED
Binary file (6.15 kB)
 
README.md CHANGED
@@ -35,8 +35,8 @@ Dolma is a dataset of 3 trillion tokens from a diverse mix of web content, acade
 
 More information:
 
- - Read Dolma **announcement blogpost** [on Medium](https://soldni.medium.com/dolma-3-trillion-tokens-open-llm-corpus-9a0ff4b8da64);
- - Learn more about Dolma on its [**Data Sheet**](https://drive.google.com/file/d/12gOf5I5RytsD159nSP7iim_5zN31FCXq/view?usp=drive_link);
 - Review Dolma's [**ImpACT license** for medium risk artifacts](https://allenai.org/licenses/impact-mr);
 - Explore the [**open source tools**](https://github.com/allenai/dolma) we created to curate Dolma.
 - Want to request removal of personal data? Use [this form](https://forms.gle/q4BNUUxUxKwKkfdT6) to notify us of documents containing PII about a specific user.
@@ -44,103 +44,80 @@ More information:
 
 To learn more about the toolkit used to create Dolma, including how to replicate this dataset, head over to our [GitHub project page](https://github.com/allenai/dolma/tree/main/docs)!
 
- ## Summary Statistics
 
- |**Source**|**Type**|**Gzip files (GB)**|**Documents (millions)**|**[GPT-NeoX](https://huggingface.co/EleutherAI/gpt-neox-20b) Tokens (billions)**|
- |:---|:---:|:---:|:---:|:----:|
- |[CommonCrawl](https://commoncrawl.org/)|web|4,197|4,600|2,415|
- |[C4](https://huggingface.co/datasets/allenai/c4)|web|302|364|175|
- |[peS2o](https://huggingface.co/datasets/allenai/peS2o)|academic|150|38.8|57|
- |[The Stack](https://huggingface.co/datasets/bigcode/the-stack)|code|319|236|430|
- |[Project Gutenberg](https://www.gutenberg.org/)|books|6.6|0.052|4.8|
- |[Wikipedia](https://dumps.wikimedia.org/)|encyclopedic|5.8|6.1|3.6|
- ||**Total**|**4,980.4**|**5,245**|**3,084**|
 
- ## Download
 
- The fastest way to download Dolma is to directly download the individual files across multiple threads.
- This can be achieved using wget or the [aria2](https://github.com/aria2/aria2) package for Linux/Mac/Windows (`sudo apt-get install aria2` on Ubuntu).
 
- For downloading individual files, simply use `wget` as follows:
 
- `wget --header 'Authorization: Bearer YOUR_HF_HUB_ACCESS_TOKEN' https://huggingface.co/datasets/allenai/dolma/resolve/main/data/peS2o/s2_v3-0000.json.gz`
 
- For downloading many files across multiple threads, first prepare a `.txt` file with the URLs you would like, for example via the script below:
 
- ```python
- OUT_DIRECTORY = "/scratch/dolma/data"
- 
- # URLs for cc_en_head
- cc_en_head_base_url = "https://huggingface.co/datasets/allenai/dolma/resolve/main/data/common-crawl/cc_en_head/cc_en_head-"
- cc_en_head_url_list = [f"{cc_en_head_base_url}{str(i).zfill(4)}.json.gz\n dir={OUT_DIRECTORY}/cc_en_head\n out=cc_en_head-{str(i).zfill(4)}.json.gz" for i in range(612)]
- 
- # URLs for cc_en_middle
- cc_en_middle_base_url = "https://huggingface.co/datasets/allenai/dolma/resolve/main/data/common-crawl/cc_en_middle/cc_en_middle-"
- cc_en_middle_url_list = [f"{cc_en_middle_base_url}{str(i).zfill(4)}.json.gz\n dir={OUT_DIRECTORY}/cc_en_middle\n out=cc_en_middle-{str(i).zfill(4)}.json.gz" for i in range(777)]
- 
- # URLs for cc_en_tail
- cc_en_tail_base_url = "https://huggingface.co/datasets/allenai/dolma/resolve/main/data/common-crawl/cc_en_tail/cc_en_tail-"
- cc_en_tail_url_list = [f"{cc_en_tail_base_url}{str(i).zfill(4)}.json.gz\n dir={OUT_DIRECTORY}/cc_en_tail\n out=cc_en_tail-{str(i).zfill(4)}.json.gz" for i in range(1493)]
- 
- # URLs for s2_v3
- s2_v3_base_url = "https://huggingface.co/datasets/allenai/dolma/resolve/main/data/peS2o/s2_v3-"
- s2_v3_url_list = [f"{s2_v3_base_url}{str(i).zfill(4)}.json.gz\n dir={OUT_DIRECTORY}/peS2o\n out=s2_v3-{str(i).zfill(4)}.json.gz" for i in range(42)]
- 
- # URLs for The Stack
- LANG_TO_FILES = {'lasso': 1, 'nsis': 1, 'literate-agda': 1, 'metal': 1, 'xojo': 1, 'max': 8, 'jupyter-notebook': 101, 'asp': 7, 'elixir': 14, 'html+erb': 19, 'julia': 22, 'dart': 63, 'ragel-in-ruby-host': 1, 'api-blueprint': 1, 'gams': 1, 'tex': 71, 'xml': 101, 'smalltalk': 17, 'cmake': 11, 'piglatin': 1, "cap'n-proto": 1, 'common-lisp': 21, 'stylus': 3, 'typescript': 101, 'jflex': 1, 'factor': 1, 'arc': 1, 'parrot-internal-representation': 1, 'aspectj': 1, 'go': 101, 'urweb': 1, 'dns-zone': 1, 'purebasic': 1, 'toml': 15, 'erlang': 11, 'hy': 1, 'component-pascal': 2, 'oz': 1, 'opa': 1, 'handlebars': 10, 'gas': 15, 'less': 17, 'gnuplot': 15, 'harbour': 1, 'vhdl': 16, 'octave': 1, 'powershell': 21, 'clips': 1, 'fish': 1, 'prolog': 1, 'sparql': 1, 'objective-j': 1, 'scaml': 1, 'twig': 20, 'gettext-catalog': 101, 'purescript': 2, 'vala': 1, 'gosu': 1, 'apacheconf': 1, 'xc': 1, 'lean': 3, 'mako': 1, 'r': 4, 'unrealscript': 1, 'solidity': 21, 'pike': 1, 'cartocss': 1, 'maple': 1, 'graphql': 3, 'unity3d-asset': 101, 'swift': 101, 'dockerfile': 13, 'digital-command-language': 1, 'scala': 83, 'sqf': 2, 'logtalk': 1, 'coq': 1, 'shellsession': 1, 'befunge': 1, 'nu': 1, 'ecere-projects': 1, 'zimpl': 1, 'shen': 1, 'golo': 1, 'web-ontology-language': 12, 'sas': 2, 'uno': 1, 'livescript': 1, 'literate-haskell': 1, 'clojure': 8, 'perl6': 1, 'zig': 3, 'liquid': 2, 'ec': 1, 'blitzbasic': 1, 'sql': 101, 'http': 2, 'xproc': 1, 'kit': 1, 'textile': 1, 'netlinx': 1, 'propeller-spin': 1, 'cython': 5, 'realbasic': 1, 'dogescript': 1, 'llvm': 9, 'pawn': 1, 'groff': 40, 'html+django': 3, 'csound': 1, 'd': 1, 'agda': 2, 'css': 101, 'yacc': 7, 'robotframework': 1, 'kotlin': 101, 'grace': 1, 'abap': 2, 'blitzmax': 1, 'webassembly': 3, 'ampl': 1, 'postscript': 16, 'nit': 1, 'gentoo-eclass': 1, 'xpages': 1, 'linker-script': 2, 'yang': 3, 'jade': 4, 'standard-ml': 6, 'javascript': 101, 'moonscript': 1, 'mtml': 1, 'saltstack': 1, 'freemarker': 5, 'ston': 1, 'html+eex': 1, 'xs': 1, 'c++': 101, 'matlab': 1, 'm4': 2, 'xbase': 1, 'perl': 37, 'emacs-lisp': 7, 'bison': 1, 'slim': 2, 'grammatical-framework': 1, 'rdoc': 1, 'nix': 10, 'clean': 1, 'module-management-system': 1, 'nimrod': 6, 'raml': 1, 'forth': 1, 'squirrel': 1, 'alloy': 1, 'opencl': 3, 'c': 101, 'sass': 4, 'eiffel': 2, 'papyrus': 1, 'html': 109, 'java': 101, 'hcl': 14, 'isabelle': 2, 'markdown': 101, 'gentoo-ebuild': 2, 'objdump': 1, 'emberscript': 1, 'text': 101, 'bro': 1, 'opal': 1, 'haskell': 35, 'mupad': 1, 'desktop': 1, 'modelica': 2, 'coldfusion-cfc': 2, 'fantom': 1, 'glsl': 10, 'ocaml': 16, 'nesc': 2, 'scheme': 7, 'crystal': 5, 'tcsh': 1, 'c2hs-haskell': 1, 'idris': 1, 'logos': 4, 'coffeescript': 13, 'g-code': 10, 'sage': 1, 'haml': 4, 'tcl': 7, 'smt': 5, 'ox': 1, 'chuck': 1, 'xquery': 1, 'batchfile': 7, 'pod': 2, 'xtend': 1, 'restructuredtext': 61, 'rmarkdown': 1, 'turtle': 33, 'jsx': 45, 'protocol-buffer': 8, "ren'py": 2, 'diff': 32, 'slash': 1, 'darcs-patch': 1, 'numpy': 1, 'augeas': 1, 'wisp': 1, 'edn': 15, 'ooc': 1, 'bitbake': 2, 'labview': 1, 'inform-7': 1, 'rust': 101, 'creole': 1, 'apl': 1, 'arduino': 11, 'openscad': 2, 'cuda': 9, 'thrift': 1, 'yaml': 101, 'fancy': 1, 'coldfusion': 1, 'python': 101, 'clarion': 1, 'glyph': 1, 'parrot': 1, 'lookml': 1, 'java-server-pages': 19, 'oxygene': 1, 'flux': 1, 'scilab': 1, 'groovy-server-pages': 2, 'rhtml': 1, 'eagle': 52, 'parrot-assembly': 1, 'igor-pro': 1, 'webidl': 1, 'bluespec': 1, 'unified-parallel-c': 1, 'smali': 38, 'haxe': 9, 'ada': 7, 'lua': 48, 'pascal': 21, 'html+php': 6, 'irc-log': 1, 'x10': 1, 
'netlogo': 1, 'ioke': 1, 'dm': 1, 'self': 1, 'elm': 5, 'ats': 1, 'brainfuck': 1, 'mask': 1, 'rouge': 1, 'turing': 1, 'lex': 2, 'gap': 1, 'pogoscript': 1, 'kicad': 30, 'io': 1, 'objective-c++': 8, 'qml': 4, 'redcode': 1, 'autoit': 2, 'processing': 4, 'systemverilog': 6, 'gdscript': 5, 'f-sharp': 12, 'fortran': 23, 'monkey': 1, 'c-sharp': 101, 'xslt': 9, 'viml': 6, 'renderscript': 1, 'scss': 84, 'cucumber': 4, 'verilog': 1, 'genshi': 1, 'racket': 1, 'krl': 1, 'actionscript': 10, 'pan': 1, 'cirru': 1, 'chapel': 1, 'pure-data': 2, 'm': 1, 'applescript': 1, 'inno-setup': 1, 'volt': 1, 'myghty': 1, 'groovy': 17, 'ags-script': 1, 'mirah': 1, 'lsl': 1, 'brightscript': 1, 'python-traceback': 1, 'sourcepawn': 2, 'maxscript': 1, 'zephir': 1, 'supercollider': 1, 'mathematica': 20, 'awk': 1, 'autohotkey': 2, 'lfe': 1, 'ruby': 101, 'visual-basic': 20, 'ini': 59, 'red': 1, 'omgrofl': 1, 'idl': 1, 'rebol': 1, 'vue': 101, 'ninja': 2, 'ecl': 1, 'lolcode': 1, 'tea': 1, 'txl': 1, 'smarty': 9, 'vcl': 1, 'php': 101, 'literate-coffeescript': 1, 'click': 1, 'pony': 1, 'mediawiki': 5, 'stata': 5, 'stan': 1, 'nginx': 1, 'asciidoc': 16, 'antlr': 1, 'cobol': 1, 'org': 5, 'latte': 1, 'makefile': 32, 'ceylon': 1, 'graphviz-(dot)': 13, 'lilypond': 1, 'dylan': 1, 'qmake': 1, 'muf': 1, 'j': 1, 'pov-ray-sdl': 1, 'jasmin': 1, 'shell': 73, 'cycript': 1, 'boo': 1, 'hlsl': 2}
- stack_base_url = "https://huggingface.co/datasets/allenai/dolma/resolve/main/data/stack-code/"
- stack_url_list = []
- for lang, num_files in sorted(LANG_TO_FILES.items()):
-     for i in range(num_files):
-         stack_url_list.append(f"{stack_base_url}{lang}/v3-{str(i).zfill(4)}.json.gz\n dir={OUT_DIRECTORY}/stack-code/{lang}\n out=v3-{str(i).zfill(4)}.json.gz")
- 
- # Combine all URL lists
- all_url_list = cc_en_head_url_list + cc_en_middle_url_list + cc_en_tail_url_list + s2_v3_url_list + stack_url_list
- 
- # Write the combined list of URLs in aria2 input-file format
- with open("files.txt", "w") as out:
-     for url in all_url_list:
-         out.write(url + "\n")
- ```
 
- Then you can download them all in parallel using:
 
- `aria2c --input-file files.txt --header 'Authorization: Bearer YOUR_HF_HUB_ACCESS_TOKEN'`
 
- You can also add `-s` to increase the number of connections used per download, e.g. `-s 10` (the default is 5).
 
- To get the exact per-language file counts used for The Stack in the above script (`LANG_TO_FILES`), proceed as follows.
 
- Fetch the file listing without downloading the actual files (fast): `GIT_LFS_SKIP_SMUDGE=1 git clone git@hf.co:datasets/allenai/dolma.git`
- Then run:
  ```python
 import os
 
- directory = "dolma/data/stack-code"
- folder_dict = {}
- 
- for folder in os.listdir(directory):
-     folder_path = os.path.join(directory, folder)
-     if os.path.isdir(folder_path):
-         file_count = len([f for f in os.listdir(folder_path) if os.path.isfile(os.path.join(folder_path, f))])
-         folder_dict[folder] = file_count
- 
- print(folder_dict)
  ```
 
 ## Bibtex
 
 If you use our dataset or tooling, please cite us at:
 
- ```
 @article{dolma,
   title = {{Dolma: An Open Corpus of Three Trillion Tokens for Language Model Pretraining Research}},
- author = {Luca Soldaini and Rodney Kinney and Akshita Bhagia and Dustin Schwenk and David Atkinson and Russell Authur and Ben Bogin and Khyathi Chandu and Jennifer Dumas and Yanai Elazar and Valentin Hofmann and Ananya Harsh Jha and Sachin Kumar and Li Lucy and Xinxi Lyu and Ian Magnusson and Jacob Morrison and Niklas Muennighoff and Aakanksha Naik and Crystal Nam and Matthew E. Peters and Abhilasha Ravichander and Kyle Richardson and Zejiang Shen and Emma Strubell and Nishant Subramani and Oyvind Tafjord and Evan Pete Walsh and Hannaneh Hajishirzi and Noah A. Smith and Luke Zettlemoyer and Iz Beltagy and Dirk Groeneveld and Jesse Dodge and Kyle Lo},
   year = {2024},
   journal = {arXiv preprint},
  }
 
 
 More information:
 
+ <https://github.com/allenai/dolma/blob/main/docs/assets/dolma-datasheet-v0.1.pdf>
+ - Read the Dolma **manuscript** and its **Data Sheet** [on ArXiv](https://github.com/allenai/dolma/blob/soldni/paper/docs/assets/dolma-v1_6-20240131.pdf);
 - Review Dolma's [**ImpACT license** for medium risk artifacts](https://allenai.org/licenses/impact-mr);
 - Explore the [**open source tools**](https://github.com/allenai/dolma) we created to curate Dolma.
 - Want to request removal of personal data? Use [this form](https://forms.gle/q4BNUUxUxKwKkfdT6) to notify us of documents containing PII about a specific user.
 
 To learn more about the toolkit used to create Dolma, including how to replicate this dataset, head over to our [GitHub project page](https://github.com/allenai/dolma/tree/main/docs)!
 
+ ## Versions
 
+ At the moment, there are five versions of Dolma available:
 
+ | **Version** | **Default?** | **Release Date** | **Size** (gzip) | **Description** |
+ |--|:--:|--|--|--|
+ | `v1_6` | ✅ | 2024-01-31 | 5.4 TB | The latest version of Dolma, with 3 trillion tokens from a diverse mix of web content, academic publications, code, books, and encyclopedic materials. |
+ | `v1_6-sample` | | 2024-01-31 | 16.4 GB | A smaller sample of Dolma, with roughly 10 billion tokens. Useful for data exploration. |
+ | `v1_5` | | 2023-10-31 | 6.4 TB | The version of Dolma used to train [OLMo-1B](https://huggingface.co/allenai/OLMo-1B). Roughly 3 trillion tokens. |
+ | `v1_5-sample` | | 2023-10-31 | 2.9 TB | A sample of roughly 1.9 trillion tokens used to train [OLMo-7B](https://huggingface.co/allenai/OLMo-7B). |
+ | `v1` | | 2023-08-18 | 6.0 TB | The first version of Dolma. |
 
+ (The size difference between `v1_6` and previous versions is due to the set of metadata included in the files: redundant metadata was removed in `v1_6`.)
 
+ ## Summary Statistics (v1.6)
 
+ | **Source** | **Doc Type** | **UTF-8 bytes** (GB) | **Documents** (millions) | **Unicode words** (billions) | **Llama tokens** (billions) |
+ |--|--|--|--|--|--|
+ | Common Crawl | web pages | 9,022 | 3,370 | 1,775 | 2,281 |
+ | The Stack | code | 1,043 | 210 | 260 | 411 |
+ | C4 | web pages | 790 | 364 | 153 | 198 |
+ | Reddit | social media | 339 | 377 | 72 | 89 |
+ | PeS2o | STEM papers | 268 | 38.8 | 50 | 70 |
+ | Project Gutenberg | books | 20.4 | 0.056 | 4.0 | 6.0 |
+ | Wikipedia, Wikibooks | encyclopedic | 16.2 | 6.2 | 3.7 | 4.3 |
+ | **Total** | | **11,519** | **4,367** | **2,318** | **3,059** |
 
+ ## Download
 
+ The fastest way to download Dolma is to clone this repository and use the files in the `urls` directory.
+ We recommend using wget in parallel mode to download the files. For example:
 
+ ```bash
+ DATA_DIR="<path_to_your_data_directory>"
+ PARALLEL_DOWNLOADS="<number_of_parallel_downloads>"
+ DOLMA_VERSION="<version_of_dolma_to_download>"
+ 
+ git clone https://huggingface.co/datasets/allenai/dolma
+ mkdir -p "${DATA_DIR}"
+ 
+ cat "dolma/urls/${DOLMA_VERSION}.txt" | xargs -n 1 -P "${PARALLEL_DOWNLOADS}" wget -q -P "$DATA_DIR"
+ ```
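
For reference, an earlier revision of this README recommended [aria2](https://github.com/aria2/aria2) for parallel downloads; a roughly equivalent sketch using the same variables as above and standard `aria2c` options (`--input-file`, `--dir`, `--max-concurrent-downloads`):

```bash
# Sketch: download the same URL list with aria2 instead of wget.
# Assumes DATA_DIR, PARALLEL_DOWNLOADS, and DOLMA_VERSION are set as above
# and that the repository has already been cloned.
aria2c \
  --input-file "dolma/urls/${DOLMA_VERSION}.txt" \
  --dir "$DATA_DIR" \
  --max-concurrent-downloads "$PARALLEL_DOWNLOADS"
```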
 
+ Then, to load this data using HuggingFace's `datasets` library, you can use the following code:
 
  ```python
 import os
+ from datasets import load_dataset
 
+ os.environ["DATA_DIR"] = "<path_to_your_data_directory>"
+ dataset = load_dataset("allenai/dolma", split="train")
  ```
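
If you skip the manual download, the loading script (`dolma.py`, below) falls back to fetching each shard from `olmo-data.org` via the `datasets` download manager whenever the `DATA_DIR` environment variable is unset. A minimal sketch, assuming the URL list for the chosen configuration ships under `urls/`; the `v1_6-sample` configuration name is used purely as an illustration of the `name` argument:

```python
from datasets import load_dataset

# No DATA_DIR in the environment: shards are fetched from olmo-data.org
# by the download manager instead of being read from a local directory.
# The smaller sample configuration keeps the download manageable.
dataset = load_dataset("allenai/dolma", name="v1_6-sample", split="train")

print(dataset[0]["id"], dataset[0]["source"])
```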
 
 ## Bibtex
 
 If you use our dataset or tooling, please cite us at:
 
+ ```bibtex
 @article{dolma,
   title = {{Dolma: An Open Corpus of Three Trillion Tokens for Language Model Pretraining Research}},
+   author = {
+     Luca Soldaini and Rodney Kinney and Akshita Bhagia and Dustin Schwenk and David Atkinson and
+     Russell Authur and Ben Bogin and Khyathi Chandu and Jennifer Dumas and Yanai Elazar and
+     Valentin Hofmann and Ananya Harsh Jha and Sachin Kumar and Li Lucy and Xinxi Lyu and Ian Magnusson and
+     Jacob Morrison and Niklas Muennighoff and Aakanksha Naik and Crystal Nam and Matthew E. Peters and
+     Abhilasha Ravichander and Kyle Richardson and Zejiang Shen and Emma Strubell and Nishant Subramani and
+     Oyvind Tafjord and Evan Pete Walsh and Hannaneh Hajishirzi and Noah A. Smith and Luke Zettlemoyer and
+     Iz Beltagy and Dirk Groeneveld and Jesse Dodge and Kyle Lo
+   },
   year = {2024},
   journal = {arXiv preprint},
 }
dolma.py CHANGED
@@ -15,11 +15,12 @@
 # Lint as: python3
 """Dolma: an Open Corpus of Three Trillion Tokens for Language Model Pretraining Research"""
 
-
- from pathlib import Path
 
 import datasets
- import os
 
 logger = datasets.logging.get_logger(__name__)
 
@@ -30,21 +31,24 @@ Dolma: an Open Corpus of Three Trillion Tokens for Language Model Pretraining Re
 
 _URL_LISTS = {
     "v1": "urls/v1.txt",
-     "v1_5r1": "urls/v1_5r1.txt",
-     "v1_5r1-sample": "urls/v1_5r1-sample.txt",
-     "v1_5r2": "urls/v1_5r2.txt",
 }
 _VERSIONS = {
     "v1": "1.0.0",
-     "v1_5r1": "1.5.0",
-     "v1_5r1-sample": "1.5.0",
-     "v1_5r2": "1.5.0",
 }
 _DATES = {
     "v1": "(Aug 2023)",
-     "v1_5r1": "(Oct 2023)",
-     "v1_5r1-sample": "(Oct 2023)",
-     "v1_5r2": "Dolma v1.5r2 (Dec 2023)",
 }
 _BASE_URL = "https://olmo-data.org"
 
@@ -54,14 +58,14 @@ _CITATION = """\
 @article{dolma,
     title = {{Dolma: An Open Corpus of Three Trillion Tokens for Language Model Pretraining Research}},
     author = {
-         Luca Soldaini and Rodney Kinney and Akshita Bhagia and Dustin Schwenk and David Atkinson and
-         Russell Authur and Ben Bogin and Khyathi Chandu and Jennifer Dumas and Yanai Elazar and
-         Valentin Hofmann and Ananya Harsh Jha and Sachin Kumar and Li Lucy and Xinxi Lyu and Ian Magnusson and
-         Jacob Morrison and Niklas Muennighoff and Aakanksha Naik and Crystal Nam and Matthew E. Peters and
-         Abhilasha Ravichander and Kyle Richardson and Zejiang Shen and Emma Strubell and Nishant Subramani and
-         Oyvind Tafjord and Evan Pete Walsh and Hannaneh Hajishirzi and Noah A. Smith and Luke Zettlemoyer and
-         Iz Beltagy and Dirk Groeneveld and Jesse Dodge and Kyle Lo
-     },
     year = {2024},
     journal={arXiv preprint},
 }
@@ -80,7 +84,7 @@ class Dolma(datasets.GeneratorBasedBuilder):
         for name in _URL_LISTS.keys()
     ]
 
-     DEFAULT_CONFIG_NAME = "v1_5r2"
 
     def _info(self):
         return datasets.DatasetInfo(
@@ -89,21 +93,25 @@ class Dolma(datasets.GeneratorBasedBuilder):
             {
                 "id": datasets.Value("string"),
                 "text": datasets.Value("string"),
-                 "metadata": datasets.Value("string"),
                 "added": datasets.Value("string"),
-                 # "metadata": datasets.Value("")
             }
         ),
         supervised_keys=None,
     )
 
-     def _split_generators(self, dl_manager):
-         with open(_URL_LISTS[self.config.name], mode="rt", encoding="utf-8") as f:
-             subset_urls = f.read().splitlines()
 
-         breakpoint()
 
-         subset_files = dl_manager.download(subset_urls)
 
         return [
             datasets.SplitGenerator(
@@ -112,6 +120,18 @@ class Dolma(datasets.GeneratorBasedBuilder):
             )
         ]
 
-     def _generate_examples(self, files):
         """This function returns the examples in the raw (text) form."""
-         raise NotImplementedError("Dolma is a streaming dataset")
 
 # Lint as: python3
 """Dolma: an Open Corpus of Three Trillion Tokens for Language Model Pretraining Research"""
 
+ import gzip
+ import json
+ import os
+ from typing import List
 
 import datasets
 
 logger = datasets.logging.get_logger(__name__)
 
  _URL_LISTS = {
     "v1": "urls/v1.txt",
+     "v1_5": "urls/v1_5.txt",
+     "v1_5-sample": "urls/v1_5-sample.txt",
+     "v1_6": "urls/v1_6.txt",
+     "v1_6-sample": "urls/v1_6-sample.txt",
 }
 _VERSIONS = {
     "v1": "1.0.0",
+     "v1_5": "1.5.0",
+     "v1_5-sample": "1.5.0",
+     "v1_6": "1.6.0",
+     "v1_6-sample": "1.6.0",
 }
 _DATES = {
     "v1": "(Aug 2023)",
+     "v1_5": "(Oct 2023)",
+     "v1_5-sample": "(Oct 2023)",
+     "v1_6": "(Jan 2024)",
+     "v1_6-sample": "(Jan 2024)",
 }
 _BASE_URL = "https://olmo-data.org"
 
 @article{dolma,
     title = {{Dolma: An Open Corpus of Three Trillion Tokens for Language Model Pretraining Research}},
     author = {
+         Luca Soldaini and Rodney Kinney and Akshita Bhagia and Dustin Schwenk and David Atkinson and
+         Russell Authur and Ben Bogin and Khyathi Chandu and Jennifer Dumas and Yanai Elazar and
+         Valentin Hofmann and Ananya Harsh Jha and Sachin Kumar and Li Lucy and Xinxi Lyu and Ian Magnusson and
+         Jacob Morrison and Niklas Muennighoff and Aakanksha Naik and Crystal Nam and Matthew E. Peters and
+         Abhilasha Ravichander and Kyle Richardson and Zejiang Shen and Emma Strubell and Nishant Subramani and
+         Oyvind Tafjord and Evan Pete Walsh and Hannaneh Hajishirzi and Noah A. Smith and Luke Zettlemoyer and
+         Iz Beltagy and Dirk Groeneveld and Jesse Dodge and Kyle Lo
+     },
     year = {2024},
     journal={arXiv preprint},
 }
 
         for name in _URL_LISTS.keys()
     ]
 
+     DEFAULT_CONFIG_NAME = "v1_6"
 
     def _info(self):
         return datasets.DatasetInfo(
 
             {
                 "id": datasets.Value("string"),
                 "text": datasets.Value("string"),
+                 # "metadata": datasets.Value("string"),
                 "added": datasets.Value("string"),
+                 "created": datasets.Value("string"),
+                 "source": datasets.Value("string"),
             }
         ),
         supervised_keys=None,
     )
 
+     def _split_generators(self, dl_manager: datasets.DownloadManager) -> List[datasets.SplitGenerator]:
+         path = dl_manager.download(_URL_LISTS[self.config.name])
 
+         with open(path, mode="rt", encoding="utf-8") as f:  # type: ignore[no-untyped-call]
+             subset_urls = f.read().splitlines()
 
+         if _DATA_DIR is not None:
+             subset_files = [os.path.join(_DATA_DIR, url.replace(_BASE_URL, "").lstrip("/")) for url in subset_urls]
+         else:
+             subset_files = dl_manager.download(subset_urls)
 
         return [
             datasets.SplitGenerator(
 
             )
         ]
 
+     def _generate_examples(self, files: List[str]):
         """This function returns the examples in the raw (text) form."""
+         for fn in files:
+             logger.info("generating examples from = %s", fn)
+ 
+             with gzip.open(fn, mode="rt", encoding="utf-8") as f:
+                 for line in f:
+                     row = json.loads(line)
+                     yield row["id"], {
+                         "id": row["id"],
+                         "text": row["text"],
+                         "added": row.get("added", ""),
+                         "created": row.get("created", ""),
+                         "source": row.get("source", ""),
+                     }
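
Note that the new `_split_generators` references `_DATA_DIR`, whose definition is not shown in this diff; presumably it is read from the `DATA_DIR` environment variable that the README snippet above sets. A minimal sketch of the implied URL-to-local-path mapping, with `to_local_path` as a hypothetical helper name:

```python
import os

# Assumption: the loading script reads DATA_DIR from the environment;
# its definition is not part of the diff shown here.
_BASE_URL = "https://olmo-data.org"
_DATA_DIR = os.environ.get("DATA_DIR", None)


def to_local_path(url: str) -> str:
    """Hypothetical helper mirroring the list comprehension in _split_generators:
    map a remote shard URL onto its copy under DATA_DIR."""
    assert _DATA_DIR is not None, "set DATA_DIR to the directory used for the wget download"
    return os.path.join(_DATA_DIR, url.replace(_BASE_URL, "").lstrip("/"))


# e.g. to_local_path(f"{_BASE_URL}/some/shard.json.gz")
#      == os.path.join(_DATA_DIR, "some/shard.json.gz")
```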
urls/v1.txt CHANGED
The diff for this file is too large to render. See raw diff
 
urls/v1_5r1-sample.txt DELETED
The diff for this file is too large to render. See raw diff
 
urls/v1_5r1.txt DELETED
The diff for this file is too large to render. See raw diff
 
urls/v1_5r2.txt DELETED
The diff for this file is too large to render. See raw diff