---
license: cc-by-nc-nd-4.0
languages:
- es
licenses:
- cc-by-nc-nd-4.0
multilinguality:
- monolingual
pretty_name: esCorpius
size_categories:
- 100M<n<1B
source_datasets:
- original
task_categories:
- language-modelling
- text-generation
- sequence-modelling
---


# esCorpius: A Massive Spanish Crawling Corpus

## Introduction
In recent years, transformer-based models have led to significant advances in language modelling for natural language processing. However, they require a vast amount of data to be (pre-)trained, and there is a lack of corpora in languages other than English. Several recent initiatives have released multilingual datasets obtained by automatic web crawling, but their Spanish portions present important shortcomings: they are either too small in comparison with other languages, or of low quality due to sub-optimal cleaning and deduplication. In this work, we introduce esCorpius, a Spanish crawling corpus obtained from nearly 1 PB of Common Crawl data. It is the most extensive Spanish corpus with this level of quality in the extraction, purification and deduplication of web textual content. Our data curation process involves a novel, highly parallel cleaning pipeline and a series of deduplication mechanisms that together preserve the integrity of both document and paragraph boundaries. Additionally, we keep both the source web page URL and the originating WARC shard URL in order to comply with EU regulations. esCorpius has been released under the CC BY-NC-ND 4.0 license.
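As a quick orientation for working with the corpus, the sketch below streams a few records with the Hugging Face `datasets` library. The repository id (`LHF/escorpius`) and the record field names are assumptions; check the dataset viewer on this page for the actual identifier and schema.

```python
from datasets import load_dataset

# Stream the corpus instead of downloading the full ~322 GB locally.
# NOTE: the repository id and the record schema are assumptions; verify them
# against the dataset viewer before use.
ds = load_dataset("LHF/escorpius", split="train", streaming=True)

for i, example in enumerate(ds):
    # Each record is expected to carry the text plus provenance metadata
    # (source page URL and WARC shard URL), as described above.
    print(example)
    if i == 2:
        break
```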

## Statistics
| **Corpus**              | OSCAR<br>22.01 | mC4          | CC-100          | ParaCrawl<br>v9 | esCorpius<br>(ours) |
|-------------------------|----------------|--------------|-----------------|-----------------|-------------------------|
| **Size (ES)**           | 381.9 GB       | 1,600.0 GB   | 53.3 GB         | 24.0 GB         | 322.5 GB                |
| **Docs (ES)**           | 51M            | 416M         | -               | -               | 104M                    |
| **Words (ES)**          | 42,829M        | 433,000M     | 9,374M          | 4,374M          | 50,773M                 |
| **Lang.<br>identifier** | fastText       | CLD3         | fastText        | CLD2            | CLD2 + fastText         |
| **Elements**            | Document       | Document     | Document        | Sentence        | Document and paragraph  |
| **Parsing quality**     | Medium         | Low          | Medium          | High            | High                    |
| **Cleaning quality**    | Low            | No cleaning  | Low             | High            | High                    |
| **Deduplication**       | No             | No           | No              | Bicleaner       | dLHF                    |
| **Language**            | Multilingual   | Multilingual | Multilingual    | Multilingual    | Spanish                 |
| **License**             | CC-BY-4.0      | ODC-By-v1.0  | Common<br>Crawl | CC0             | CC-BY-NC-ND             |
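To give an intuition of what paragraph-level deduplication means for a corpus like this, here is a minimal, generic sketch based on exact hashing of normalised paragraphs. It only illustrates the idea; it is not the dLHF procedure used to build esCorpius.

```python
import hashlib

def dedup_paragraphs(documents):
    """Drop paragraphs already seen elsewhere in the collection, while keeping
    document and paragraph boundaries intact (generic illustration, not dLHF)."""
    seen = set()
    deduped = []
    for doc in documents:
        kept = []
        for paragraph in doc.split("\n"):
            normalised = " ".join(paragraph.lower().split())
            key = hashlib.sha1(normalised.encode("utf-8")).hexdigest()
            if normalised and key not in seen:
                seen.add(key)
                kept.append(paragraph)
        deduped.append("\n".join(kept))
    return deduped

print(dedup_paragraphs(["Hola mundo.\nSegundo párrafo.", "Otro texto.\nHola mundo."]))
# -> ['Hola mundo.\nSegundo párrafo.', 'Otro texto.']
```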


## Citation
Link to the paper: https://arxiv.org/abs/2206.15147

Cite this work:
```
@misc{https://doi.org/10.48550/arxiv.2206.15147,
  doi = {10.48550/ARXIV.2206.15147},
  url = {https://arxiv.org/abs/2206.15147},
  author = {Gutiérrez-Fandiño, Asier and Pérez-Fernández, David and Armengol-Estapé, Jordi and Griol, David and Callejas, Zoraida},
  keywords = {Computation and Language (cs.CL), Artificial Intelligence (cs.AI), FOS: Computer and information sciences, FOS: Computer and information sciences},
  title = {esCorpius: A Massive Spanish Crawling Corpus},
  publisher = {arXiv},
  year = {2022},
  copyright = {Creative Commons Attribution 4.0 International}
}
```
## Disclaimer
We did not perform any kind of filtering or censorship on the corpus; we expect users to apply their own methods where needed. We are not liable for any misuse of the corpus.
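As a starting point for such filtering, the sketch below streams the corpus and keeps only documents above a minimal length that do not match a user-supplied blocklist. The repository id, the `text` field name, the length threshold and the blocklist terms are all assumptions to be adapted to your own use case.

```python
import re
from datasets import load_dataset

# Hypothetical blocklist; replace with the terms relevant to your application.
BLOCKLIST = re.compile(r"\b(termino1|termino2)\b", re.IGNORECASE)

def keep(example):
    text = example.get("text", "")  # the "text" field name is an assumption
    return len(text.split()) >= 20 and not BLOCKLIST.search(text)

# Streaming keeps memory and disk usage low; the repository id is an assumption.
ds = load_dataset("LHF/escorpius", split="train", streaming=True)
filtered = ds.filter(keep)
```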