Reprocessing for a new language

#12 by pere - opened

I am considering applying this approach to another language. If I understand correctly, the first two steps involve parsing the CC files and using FastText for language detection. These steps are quite substantial, especially for less common languages, as they require processing all the data, not just a subset.

Is there any way to access your files at this stage, so we don't have to repeat this step?

HuggingFaceFW org

We plan to work on multilinguality later on, but for the moment the data we have after language filtering is not public.

Thanks @guipenedo. We tried at some stage to parse mC4 (before you released the split) but gave up. It is basically the same job (parsing CC and running fastText language detection), but datatrove seems to make it a bit smoother. I know it is a significant job, and a bit wasteful if you are only after a minor language.

Do you have any estimate of the compute needed to perform these first steps? Are there other ways of obtaining the CC data split by language?

HuggingFaceFW org

Trafilatura is by far the most compute-heavy step. If you don't mind slightly worse text extraction, you can run the fastText language filtering on the WET version of Common Crawl; mC4 (and many other datasets) also starts directly from the WET files.
datatrove's WarcReader can also parse WET files; you just have to change the glob pattern in the example.
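For the Norwegian case that might look roughly like this (a sketch adapted from the FineWeb example in datatrove; argument names can differ between datatrove versions, and the dump name, language codes, and output path are just placeholders):

```python
# Sketch only: fastText language filtering straight from the Common Crawl WET
# files (plain text already extracted, so no Trafilatura step). Based on the
# datatrove FineWeb example; argument names may vary between versions.
from datatrove.executor import LocalPipelineExecutor
from datatrove.pipeline.readers import WarcReader
from datatrove.pipeline.filters import LanguageFilter
from datatrove.pipeline.writers.jsonl import JsonlWriter

DUMP = "CC-MAIN-2023-50"  # example crawl, pick whichever snapshot you need

pipeline = [
    # Same reader as the WARC example, only the glob pattern points at wet/
    WarcReader(
        f"s3://commoncrawl/crawl-data/{DUMP}/segments/",
        glob_pattern="*/wet/*",  # instead of "*/warc/*"
        default_metadata={"dump": DUMP},
    ),
    # Keep documents whose fastText LID prediction is Norwegian (Bokmål/Nynorsk)
    LanguageFilter(languages=["no", "nn"]),
    JsonlWriter("output/norwegian_wet/"),
]

LocalPipelineExecutor(pipeline=pipeline, tasks=8).run()
```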

Thanks, that is a very good point. I am only interested in the Norwegian part here (far less than 1%). Only running Trafilatura on this small portion makes the project a lot more feasible.

If you have any estimate of when multilingual versions (at any level of completeness) will be available, please let me know. There is no point in spending resources on tasks that others are already doing. I am also happy to give feedback on language-specific adaptations of the heuristics here.

HuggingFaceFW org

We might have an open call for help from native speakers when we start working more on our multilingual efforts.

The project sounds very cool @pere! I feel like it would be very, very useful for the community to have datasets specifically for Norwegian, so I wouldn't wait for us if I were you. The more we work on datasets, the better!

Sorry to jump in. I'm actually doing something similar but for multilingual purposes, and our pipelines currently pass through three snapshots. I'm using our own fastText-based language identification model (GlotLID) for this work: https://huggingface.co/spaces/cis-lmu/glotlid-space.
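For anyone who wants to try it, GlotLID loads like a regular fastText model. A minimal sketch (repo id and filename as on the GlotLID model card, so double-check against the release you use):

```python
# Minimal sketch: language identification with GlotLID through the fasttext
# package; repo id and filename taken from the GlotLID model card.
import fasttext
from huggingface_hub import hf_hub_download

model_path = hf_hub_download(repo_id="cis-lmu/glotlid", filename="model.bin")
lid = fasttext.load_model(model_path)

labels, scores = lid.predict("Jeg heter Per og kommer fra Norge.")
print(labels[0], scores[0])  # e.g. __label__nob_Latn with a confidence score
```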

@guipenedo, regarding the point you made about running the LID directly on the WET files from Common Crawl: does FineWeb not do this? Does it use the WARC files instead, and is that why it needs Trafilatura to extract text? Could you explain a bit about the decisions made in FineWeb, or point me to a README? Thanks.

@pere @kargaranamir I am looking to run these scripts for a low-to-medium-resource language (Turkish). Is there any update? Have you been able to make any progress running them for your languages of choice?

Hi @meliksahturker ,

You can see our effort here: https://huggingface.co/datasets/cis-lmu/GlotCC-V1. The whole dataset is 2.2TB.

We mostly use the latest snapshot available. Specifically for Turkish, the viewer is here: https://huggingface.co/datasets/cis-lmu/GlotCC-V1/viewer/tur-Latn, and the data is here: https://huggingface.co/datasets/cis-lmu/GlotCC-V1/tree/main/v1.0/tur-Latn, which is 33GB of tur-Latn data.
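If you only want the Turkish split, something like this should work (a sketch; the config and split names are assumed to match the dataset viewer and may change between releases):

```python
# Sketch: streaming the tur-Latn subset of GlotCC-V1. The config/split names
# are assumed from the dataset viewer and may change in future releases.
from datasets import load_dataset

ds = load_dataset("cis-lmu/GlotCC-V1", "tur-Latn", split="train", streaming=True)
for doc in ds.take(3):
    print(doc)
```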

For much lower-resource languages, we are going to have another round of cleaning and auditing. I will update the code here:

https://github.com/cisnlp/GlotCC/tree/main

This project is built on top of GlotLID and the ungoliant fork.


@kargaranamir If you're still planning another round of cleaning and auditing, I would love to help with the Amazigh languages (codes zgh, tzm, kab, shi, ...), because I've noticed that the amount of data is only a tiny fraction of what the latest snapshot should contain.

@ayymen Sure, definitely. We may be over-cleaning for some languages. I would like to share our uncleaned data for these languages with you, to see what we can do for them. Later, we can add language-specific filters for these sets.

I created a discussion here for the Amazigh languages: https://huggingface.co/datasets/cis-lmu/GlotCC-V1/discussions/6

@kargaranamir Thanks for the effort and the answer! I have a question though: how come the Turkish subset is so small compared to others like OSCAR, mC4, CulturaX, and CulturaY? Despite aggressive cleaning, https://huggingface.co/datasets/vngrs-ai/vngrs-web-corpus, which was obtained by cleaning the OSCAR and mC4 corpora, is 100+ GB on disk. I was expecting this Turkish subset to be much larger.

@meliksahturker Good question. 33GB is not bad for (primarily) a single snapshot: OSCAR 23.01 has 73.7GB without any major filtering. Our filters are a bit aggressive because we want clean data for all languages (and we still have a way to go), so about half of the data is filtered out. The second reason is contamination from other languages in the Turkish data; we have a more granular separation of closely related languages.
