---
language:
- en
- fr
- de
- it
- pt
- nl
- es
pretty_name: Common Corpus
size_categories:
- n>1T
task_categories:
- text-generation
tags:
- legal
- finance
- literature
- science
- code
---

# Common Corpus

Common Corpus is the largest open and permissively licensed text dataset, comprising over 2 trillion tokens (2,003,039,184,047 tokens). It is a diverse dataset, consisting of books, newspapers, scientific articles, government and legal documents, code, and more.

Common Corpus differs from existing open datasets in that it is:
*  **Truly Open**: contains only data that is permissively licensed
*  **Multilingual**: primarily English and French, but also containing data in XX languages
*  **Diverse**: consisting of scientific articles, government and legal documents, code, and cultural heritage data, including books and newspapers
*  **Extensively Curated**: spelling and formatting errors from digitized texts have been corrected, harmful and toxic content has been removed, and content with low educational value has been filtered out.

# About Common Corpus

Common Corpus is made of five carefully curated collections:
*  **OpenCulture**: our largest collection at 926,541,096,243 tokens, featuring public domain books, newspapers, and Wikisource content. We've developed innovative tools like OCRonos-Vintage to correct historical digitization errors, while implementing advanced toxicity filtering to ensure content meets modern ethical standards.
*  **OpenGovernment**: 387,965,738,992 tokens of financial and legal documents, including Finance Commons (from sources like SEC and WTO) and Legal Commons (including Europarl and Caselaw Access Project), providing enterprise-grade training data from regulatory bodies and administrative sources.
*  **OpenSource**: 334,658,896,533 tokens of high-quality open source code from GitHub, filtered using ArmoRM so that only the top 80% of submissions by quality rating are included.
*  **OpenScience**: 221,798,136,564 tokens of academic content from OpenAlex and other open science repositories, processed using vision-language models to preserve crucial document structure and formatting.
*  **OpenWeb**: 132,075,315,715 tokens from Wikipedia (official releases from the [Wikimedia Foundation](https://huggingface.co/datasets/wikimedia/wikipedia) on Hugging Face), YouTube Commons, and other sources available under permissive licenses, such as Stack Exchange.

| Collection     | Domain                   | Sources                                                                                   |
|----------------|--------------------------|-------------------------------------------------------------------------------------------|
| OpenGovernment | legal and administrative | [Finance Commons](https://huggingface.co/collections/PleIAs/finance-commons-66925e1095c7fa6e6828e26c) (e.g. SEC, WTO) and Legal Commons (e.g. Europarl, Caselaw Access Project) |
| OpenCulture    | cultural heritage        | public domain books and newspapers, Wikisource                                                        |
| OpenScience    | academic                 | OpenAlex, French theses                                                                  |
| OpenWeb        | web text                 | [YouTube Commons](https://huggingface.co/datasets/PleIAs/YouTube-Commons), Stack Exchange                                                           |
| OpenSource     | code                     | GitHub                                                                                    |

A comprehensive technical report detailing our methodologies and data sources will accompany this release, ensuring full transparency and reproducibility. We will release the individual sub-corpora in the coming weeks to provide more fine-grained auditability and to expand possible uses.

## Dataset Structure

<details>
  <summary>Data Fields</summary>
  
  *  identifier: unique text identifier
  *  text: post-processed text
  *  char_count: number of UTF-8 characters in text
  *  file_name: original file path, organized by collection
  *  set_id: set id (1-10)
  *  subset_id: subset id (1-100)

</details>
<br />
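
For a quick look at these fields without downloading the full corpus, you can stream a few records. This is a minimal sketch that assumes the dataset exposes a default `train` split and can be streamed through the standard `datasets` API:

```python
from itertools import islice

from datasets import load_dataset

# Stream the corpus and inspect the documented fields of a few records
# (identifier, text, char_count, file_name, set_id, subset_id).
stream = load_dataset('PleIAs/common_corpus', split='train', streaming=True)

for record in islice(stream, 3):
    print(record['identifier'], record['set_id'], record['subset_id'])
    print(record['file_name'], record['char_count'])
    print(record['text'][:200], '...')
```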

# How to Use

## Considerations for Using the Data

All data in Common Corpus are permissibly licensed and may be used for both commercial and non-commercial purposes. 

The dataset is multilingual. The language of each text is included in the metadata, so the data can be filtered by language. Additionally, some of the texts are historical. The year each text was written is also included in the metadata, so it is possible to construct a dataset with a custom date cutoff if desired.
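
As an illustrative sketch, such filtering can be done on a streamed dataset. Note that the column names `language` and `date` below are placeholders (they are not listed in the Data Fields above); check the dataset schema for the actual metadata field names:

```python
from datasets import load_dataset

stream = load_dataset('PleIAs/common_corpus', split='train', streaming=True)

# Hypothetical example: keep only French texts written in 1900 or later.
# 'language' and 'date' are assumed metadata column names for illustration.
filtered = stream.filter(
    lambda row: row.get('language') == 'fr' and int(row.get('date') or 0) >= 1900
)
```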

### Discussion of Bias

Some of the dataset sources contain biased and toxic content, such as stereotypes about certain minoritized groups. We have removed texts which had high toxicity scores according to our toxicity classifier, [Celadon](https://huggingface.co/PleIAs/celadon), or which contain offensive terms and slurs. See our [preprint](https://arxiv.org/pdf/2410.22587) for more details.

### Personal and Sensitive Information

We have attempted to remove personally identifiable information (PII). We primarily use [Microsoft Presidio](https://microsoft.github.io/presidio/), but make additional modifications to account for language- and country-specific considerations, such as European phone number formats.
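
As an illustration only, the snippet below shows Presidio's off-the-shelf analyzer and anonymizer; it does not reproduce the additional language- and country-specific recognizers used for Common Corpus:

```python
# Illustrative Presidio usage (requires a spaCy model, e.g. en_core_web_lg).
from presidio_analyzer import AnalyzerEngine
from presidio_anonymizer import AnonymizerEngine

analyzer = AnalyzerEngine()
anonymizer = AnonymizerEngine()

text = "Contact Jane Doe at +33 6 12 34 56 78 or jane.doe@example.org."

# Detect PII entities (names, phone numbers, email addresses, ...).
results = analyzer.analyze(text=text, language="en")

# Replace detected spans with entity-type placeholders such as <PHONE_NUMBER>.
redacted = anonymizer.anonymize(text=text, analyzer_results=results)
print(redacted.text)
```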


## Use Common Corpus

```python
from datasets import load_dataset

data = load_dataset('PleIAs/common_corpus')
```
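
Given the size of the corpus (over 2 trillion tokens), streaming is usually preferable to a full download. This is standard `datasets` usage rather than anything specific to Common Corpus:

```python
from datasets import load_dataset

# Stream records on the fly instead of downloading the whole corpus first.
data = load_dataset('PleIAs/common_corpus', streaming=True)
```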


# Acknowledgements

The corpus was stored and processed with the generous support of the AI Alliance, Jean Zay (Eviden, Idris), the Nvidia Inception program, Nebius AI, Tracto AI, and Mozilla. It was built with the support and concerted efforts of the state start-up LANGU:IA (start-up d'État), supported by the French Ministry of Culture and DINUM, as part of the prefiguration of the service offering of the Alliance for Language Technologies EDIC (ALT-EDIC). The Wikipedia portion of this dataset was made in partnership with Wikimedia Enterprise. The collection of the corpus was also greatly facilitated by the insights, cooperation, and support of the open science LLM community (Eleuther AI, Allen AI, Hugging Face…).

<div style="text-align: center;">
  <img src="https://huggingface.co/datasets/PleIAs/common_corpus/resolve/main/logo/ai_alliance.png" style="width: 33%; margin: 0 auto; display: inline-block;"/>
  <img src="https://huggingface.co/datasets/PleIAs/common_corpus/resolve/main/logo/logo-genci-header.svg" style="width: 33%; margin: 0 auto; display: inline-block;"/>
  <img src="https://huggingface.co/datasets/PleIAs/common_corpus/resolve/main/logo/Nvidia_(logo).svg.png" style="width: 33%; margin: 0 auto; display: inline-block;"/>   
  <img src="https://huggingface.co/datasets/PleIAs/common_corpus/resolve/main/logo/tractoAI.png" style="width: 33%; margin: 0 auto; display: inline-block;"/>
  <img src="https://huggingface.co/datasets/PleIAs/common_corpus/resolve/main/logo/mozilla.png" style="width: 33%; margin: 0 auto; display: inline-block;"/>
  <img src="https://raw.githubusercontent.com/Pleias/logos/f117dee70b317bc664eac14ee70d7c0563101ed1/ministere_logo.png?token=GHSAT0AAAAAACZUTJMICO3MSWUJ43EQWG5QZZL3RFQ" style="width: 33%; margin: 0 auto; display: inline-block;"/>   
  <img src="https://raw.githubusercontent.com/Pleias/logos/f117dee70b317bc664eac14ee70d7c0563101ed1/wikimedia_logo.png?token=GHSAT0AAAAAACZUTJMIIPAP4J7MKP6RSSWCZZL3TFA" style="width: 33%; margin: 0 auto; display: inline-block;"/>
</div>