---
license: other
task_categories:
- text-generation
language:
- en
tags:
- language-modeling
- causal-lm
- llm
pretty_name: Dolma
size_categories:
- n>1T
extra_gated_prompt: "Access to this dataset is automatically granted upon accepting the [ImpACT license for medium risk artifacts](https://allenai.org/licenses/impact-mr) and completing all fields below."
extra_gated_fields:
  Your full name: text
  Organization or entity you are affiliated with: text
  State or country you are located in: text
  Contact email: text
  Please describe your intended use of the medium risk artifact(s): text
  I AGREE to the terms and conditions of the MR Agreement above: checkbox
  I AGREE to AI2’s use of my information for legal notices and administrative matters: checkbox
  I CERTIFY that the information I have provided is true and accurate: checkbox
---
# Dolma
<img alt="Dolma's official logo: the word 'Dolma' in yellow, rounded lowercase letters on a blue background." src="logo.png" width="100%">
Dolma is a dataset of 3 trillion tokens from a diverse mix of web content, academic publications, code, books, and encyclopedic materials. It is openly released under AI2’s ImpACT license as a medium risk artifact.
More information:
- Read the Dolma **announcement blog post** [on Medium](https://soldni.medium.com/dolma-3-trillion-tokens-open-llm-corpus-9a0ff4b8da64);
- Learn more about Dolma on its [**Data Sheet**](https://drive.google.com/file/d/12gOf5I5RytsD159nSP7iim_5zN31FCXq/view?usp=drive_link);
- Review Dolma's [**ImpACT license** for medium risk artifacts](https://allenai.org/licenses/impact-mr);
- Explore the [**open source tools**](https://github.com/allenai/dolma) we created to curate Dolma.
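Once access has been granted, the dataset can be read with the Hugging Face `datasets` library. The snippet below is a minimal sketch, assuming the repository id `allenai/dolma`, an authenticated session (e.g. via `huggingface-cli login`), and a `"text"` field per document; streaming is used because the full corpus is multiple terabytes.

```python
# Minimal sketch of streaming Dolma with the `datasets` library.
# Assumes: the ImpACT license has been accepted on the dataset page,
# you are logged in (`huggingface-cli login`), and documents carry a
# "text" field (an assumption, not confirmed by this card).
from datasets import load_dataset

dolma = load_dataset("allenai/dolma", split="train", streaming=True)

for document in dolma:
    print(document["text"][:200])
    break  # stop after one document; the full corpus is ~3T tokens
```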
## Summary Statistics
|**Source**|**Type**|**Gzip files (GB)**|**Documents (millions)**|**[GPT-NeoX](https://huggingface.co/EleutherAI/gpt-neox-20b) Tokens (billions)**|
|:---|:---:|:---:|:---:|:----:|
|[CommonCrawl](https://commoncrawl.org/)|web|4,197|4,600|2,415|
|[C4](https://huggingface.co/datasets/allenai/c4)|web|302|364|175|
|[peS2o](https://huggingface.co/datasets/allenai/peS2o)|academic|150|38.8|57|
|[The Stack](https://huggingface.co/datasets/bigcode/the-stack)|code|675|236|430|
|[Project Gutenberg](https://www.gutenberg.org/)|books|6.6|0.052|4.8|
|[Wikipedia](https://dumps.wikimedia.org/)|encyclopedic|5.8|6.1|3.6|
|**Total**||**5,334**|**5,245**|**3,084**|
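
The token counts above are computed with the GPT-NeoX-20B tokenizer. As a rough sketch of how such counts can be reproduced for a single document (the exact settings used for Dolma's statistics are not specified here and may differ):

```python
# Minimal sketch of counting GPT-NeoX tokens for a document, using the
# `transformers` library; the precise tokenization configuration behind
# the table's statistics is an assumption.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neox-20b")

def count_tokens(text: str) -> int:
    """Return the number of GPT-NeoX tokens in `text`."""
    return len(tokenizer(text)["input_ids"])

print(count_tokens("Dolma is a dataset of 3 trillion tokens."))
```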