nicholasKluge committed
Commit dc2eeb9
1 Parent(s): 2b11134

Update README.md

Files changed (1): README.md (+165, -0)

README.md CHANGED
  data_files:
  - split: train
    path: data/train-*
license: other
task_categories:
- text-generation
language:
- pt
tags:
- portuguese
- language-modeling
pretty_name: Pt-Corpus
size_categories:
- 1M<n<10M
---
# Pt-Corpus

Pt-Corpus is a concatenation of portions of several Brazilian Portuguese datasets found on the [Hugging Face Hub](https://huggingface.co/datasets?task_categories=task_categories:text-generation&language=language:pt&sort=trending).
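
As a rough illustration of how such a concatenation can be assembled with the `datasets` library, here is a hypothetical sketch; the dataset names, subset arguments, and column handling below are illustrative assumptions, not the exact recipe used to build Pt-Corpus:

```python
# A hypothetical sketch of concatenating Portuguese corpora with `datasets`.
# Subset names and column handling are illustrative, not the exact recipe.
from datasets import load_dataset, concatenate_datasets

wiki = load_dataset("graelo/wikipedia", "20230601.pt", split="train")
culturax = load_dataset("uonlp/CulturaX", "pt", split="train")

# Keep only the text column so both schemas match before concatenating.
wiki = wiki.select_columns(["text"])
culturax = culturax.select_columns(["text"])

corpus = concatenate_datasets([wiki, culturax])
print(corpus)
```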

In its tokenized form, the uncompressed dataset weighs 80 GB and contains approximately 6.2B tokens. This version does not include instructional content.
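
For reference, a token count like the one above can be approximated by streaming the corpus through a tokenizer. A minimal sketch follows, assuming a placeholder tokenizer and a `text` column; the exact figure depends on the tokenizer actually used:

```python
# A minimal sketch for estimating the token count by streaming the corpus.
# The GPT-2 tokenizer is a placeholder assumption, not the one used to
# produce the 6.2B figure.
from datasets import load_dataset
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
stream = load_dataset("nicholasKluge/Pt-Corpus", split="train", streaming=True)

total = 0
for i, sample in enumerate(stream):
    total += len(tokenizer(sample["text"])["input_ids"])
    if i + 1 == 10_000:  # count the first 10k documents, then extrapolate
        break

print(f"Tokens in the first 10,000 documents: {total:,}")
```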

The following datasets (_only their training splits are part of the corpus_) and their respective licenses make up Pt-Corpus:

- [Wikipedia](https://huggingface.co/datasets/graelo/wikipedia) (License: [CC BY-SA 3.0](https://creativecommons.org/licenses/by-sa/3.0/))

**Citation Information**

```latex
@ONLINE{wikidump,
    author = "Wikimedia Foundation",
    title = "Wikimedia Downloads",
    url = "https://dumps.wikimedia.org"
}
```

- [CulturaX](https://huggingface.co/datasets/uonlp/CulturaX) (License: [ODC-By](https://opendatacommons.org/licenses/by/1-0/), [cc0-1.0](https://huggingface.co/datasets/oscar-corpus/OSCAR-2301#licensing-information))

**Citation Information**

```latex
@misc{nguyen2023culturax,
    title={CulturaX: A Cleaned, Enormous, and Multilingual Dataset for Large Language Models in 167 Languages},
    author={Thuat Nguyen and Chien Van Nguyen and Viet Dac Lai and Hieu Man and Nghia Trung Ngo and Franck Dernoncourt and Ryan A. Rossi and Thien Huu Nguyen},
    year={2023},
    eprint={2309.09400},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}
```
65
+
66
+ - [OSCAR](https://huggingface.co/datasets/eduagarcia/OSCAR-2301-pt_dedup) (License: [cc0-1.0](https://huggingface.co/datasets/oscar-corpus/OSCAR-2301#licensing-information))
67
+
68
+ **Citation Information**
69
+
70
+ ```latex
71
+ @inproceedings{ortiz-suarez-etal-2020-monolingual,
72
+ title = "A Monolingual Approach to Contextualized Word Embeddings for Mid-Resource Languages",
73
+ author = "Ortiz Su{'a}rez, Pedro Javier and
74
+ Romary, Laurent and
75
+ Sagot, Benoit",
76
+ booktitle = "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
77
+ month = jul,
78
+ year = "2020",
79
+ address = "Online",
80
+ publisher = "Association for Computational Linguistics",
81
+ url = "https://www.aclweb.org/anthology/2020.acl-main.156",
82
+ pages = "1703--1714",
83
+ abstract = "We use the multilingual OSCAR corpus, extracted from Common Crawl via language classification, filtering and cleaning, to train monolingual contextualized word embeddings (ELMo) for five mid-resource languages. We then compare the performance of OSCAR-based and Wikipedia-based ELMo embeddings for these languages on the part-of-speech tagging and parsing tasks. We show that, despite the noise in the Common-Crawl-based OSCAR data, embeddings trained on OSCAR perform much better than monolingual embeddings trained on Wikipedia. They actually equal or improve the current state of the art in tagging and parsing for all five languages. In particular, they also improve over multilingual Wikipedia-based contextual embeddings (multilingual BERT), which almost always constitutes the previous state of the art, thereby showing that the benefit of a larger, more diverse corpus surpasses the cross-lingual benefit of multilingual embedding architectures.",
84
+ }
85
+
86
+ @inproceedings{OrtizSuarezSagotRomary2019,
87
+ author = {Pedro Javier {Ortiz Su{'a}rez} and Benoit Sagot and Laurent Romary},
88
+ title = {Asynchronous pipelines for processing huge corpora on medium to low resource infrastructures},
89
+ series = {Proceedings of the Workshop on Challenges in the Management of Large Corpora (CMLC-7) 2019. Cardiff, 22nd July 2019},
90
+ editor = {Piotr Bański and Adrien Barbaresi and Hanno Biber and Evelyn Breiteneder and Simon Clematide and Marc Kupietz and Harald L{"u}ngen and Caroline Iliadi},
91
+ publisher = {Leibniz-Institut f{"u}r Deutsche Sprache},
92
+ address = {Mannheim},
93
+ doi = {10.14618/ids-pub-9021},
94
+ url = {http://nbn-resolving.de/urn:nbn:de:bsz:mh39-90215},
95
+ pages = {9 -- 16},
96
+ year = {2019},
97
+ language = {en}
98
+ }
99
+ ```

- [CC-100](https://huggingface.co/datasets/eduagarcia/cc100-pt) (License: [Common Crawl terms of use](https://commoncrawl.org/terms-of-use/))

**Citation Information**

```latex
@inproceedings{conneau-etal-2020-unsupervised,
    title = "Unsupervised Cross-lingual Representation Learning at Scale",
    author = "Conneau, Alexis and
      Khandelwal, Kartikay and
      Goyal, Naman and
      Chaudhary, Vishrav and
      Wenzek, Guillaume and
      Guzm{\'a}n, Francisco and
      Grave, Edouard and
      Ott, Myle and
      Zettlemoyer, Luke and
      Stoyanov, Veselin",
    booktitle = "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
    month = jul,
    year = "2020",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "https://www.aclweb.org/anthology/2020.acl-main.747",
    doi = "10.18653/v1/2020.acl-main.747",
    pages = "8440--8451",
}

@inproceedings{wenzek-etal-2020-ccnet,
    title = "{CCN}et: Extracting High Quality Monolingual Datasets from Web Crawl Data",
    author = "Wenzek, Guillaume and
      Lachaux, Marie-Anne and
      Conneau, Alexis and
      Chaudhary, Vishrav and
      Guzm{\'a}n, Francisco and
      Joulin, Armand and
      Grave, Edouard",
    booktitle = "Proceedings of the 12th Language Resources and Evaluation Conference",
    month = may,
    year = "2020",
    address = "Marseille, France",
    publisher = "European Language Resources Association",
    url = "https://www.aclweb.org/anthology/2020.lrec-1.494",
    pages = "4003--4012",
    language = "English",
    ISBN = "979-10-95546-34-4",
}
```
148
+
149
+ - [Roots Wikiquote](https://huggingface.co/datasets/bigscience-data/roots_pt_wikiquote): 0.06% (License: [CC BY-SA 3.0](https://creativecommons.org/licenses/by-sa/3.0/))
150
+
151
+ - [Roots Ted Talks](https://huggingface.co/datasets/bigscience-data/roots_pt_ted_talks_iwslt): 0.04% (License: [CC BY-NC-ND 4.0](https://creativecommons.org/licenses/by-nc-nd/4.0/deed.en))
152
+
153
+ **Citation Information**
154
+
155
+ ```latex
156
+ @article{laurenccon2022bigscience,
157
+ title={The bigscience roots corpus: A 1.6 tb composite multilingual dataset},
158
+ author={Lauren{\c{c}}on, Hugo and Saulnier, Lucile and Wang, Thomas and Akiki, Christopher and Villanova del Moral, Albert and Le Scao, Teven and Von Werra, Leandro and Mou, Chenghao and Gonz{\'a}lez Ponferrada, Eduardo and Nguyen, Huu and others},
159
+ journal={Advances in Neural Information Processing Systems},
160
+ volume={35},
161
+ pages={31809--31826},
162
+ year={2022}
163
+ }
164
+ ```

## How to use

You can load this dataset with the following code snippet:

```python
from datasets import load_dataset

# Download the full dataset
dataset = load_dataset("nicholasKluge/Pt-Corpus", split="train")

# If you don't want to download the entire dataset, set `streaming=True`
dataset = load_dataset("nicholasKluge/Pt-Corpus", split="train", streaming=True)
```
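
In streaming mode, samples are fetched lazily as you iterate, so you can inspect the data without downloading all 80 GB. A minimal sketch (the `text` column name is an assumption based on common Hub conventions):

```python
from datasets import load_dataset

stream = load_dataset("nicholasKluge/Pt-Corpus", split="train", streaming=True)

# Print the beginning of the first three documents.
# `text` is assumed to be the name of the text column.
for sample in stream.take(3):
    print(sample["text"][:200])
```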

## Disclaimer

The dataset may contain offensive content, as parts of it are a subset of public Common Crawl data. This means the corpus contains sentences that, if viewed directly, can be insulting, threatening, or may otherwise cause anxiety.