Datasets: LHF
Languages: Spanish
Multilinguality: monolingual
Size Categories: 100M<n<1B
Source Datasets: original

Fair comparison with ParaCrawl

#1
by ZJaume - opened

Hi,

Nice to see a curated monolingual corpus for pre-training language models in Spanish! However, I think the comparison with ParaCrawl does not really apply to this approach. The Spanish part of ParaCrawl that you are referring to is a parallel corpus meant to be used in Machine Translation, a different task from pre-training language models. Even so, if you want to compare your corpus with the Spanish monolingual part of ParaCrawl, here you can find all the data used prior to document alignment. But as I said before, it serves a different task, and the processing of this monolingual data was only meant to prepare it for document alignment, not for training language models. Sentence splitting was performed with the Moses sentence splitter and deduplication with this tool.
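For anyone curious what that preprocessing step looks like in practice, here is a minimal sketch, not the actual ParaCrawl pipeline: it uses the `sentence-splitter` Python package (a port of the Moses sentence splitter) for Spanish and a naive exact-match deduplication. The function name and the dedup strategy are illustrative assumptions; ParaCrawl used its own tooling for deduplication.

```python
# Minimal sketch (not the ParaCrawl pipeline): Moses-style sentence splitting
# plus naive exact-match deduplication. `sentence_splitter` is a Python port
# of the Moses sentence splitter; the dedup step here is only illustrative.
from sentence_splitter import SentenceSplitter


def split_and_dedup(paragraphs, lang="es"):
    """Split paragraphs into sentences and drop exact duplicates."""
    splitter = SentenceSplitter(language=lang)
    seen = set()
    unique_sentences = []
    for paragraph in paragraphs:
        for sentence in splitter.split(paragraph):
            if sentence not in seen:
                seen.add(sentence)
                unique_sentences.append(sentence)
    return unique_sentences


if __name__ == "__main__":
    docs = [
        "Hola. Esto es un párrafo de ejemplo. Hola.",
        "Esto es un párrafo de ejemplo. Otra frase distinta.",
    ]
    for sentence in split_and_dedup(docs):
        print(sentence)
```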

Thank you for your work!

Best,
Jaume

EDIT: the wide00016 collection from the Internet Archive alone is 600 GB of compressed text.

Hi Jaume,

We will take your comment into account for the next version of the preprint and update the table accordingly.

Thanks,
Asier

asier-gutierrez changed discussion status to closed
