
MC^2: A Multilingual Corpus of Minority Languages in China

We present MC^2, a Multilingual Corpus of Minority Languages in China, the largest open-source corpus of these languages to date. It covers four languages: Tibetan, Uyghur, Kazakh (written in the Kazakh Arabic script), and Mongolian (written in the traditional Mongolian script).

Please read our paper for more information: MC^2: Towards Transparent and Culturally-Aware NLP for Minority Languages in China (ACL 2024).

The processing scripts are released in our GitHub repo.

Languages and Sizes

The dataset covers four minority languages; we report the size of each subset below:

Language                  MC^2 (crawl)   MC^2 (full)
Tibetan                   1.7G           2.2G
Uyghur                    520M           736M
Kazakh (Arabic)           397M           937M
Mongolian (Traditional)   970M           970M

MC^2 (crawl) denotes the subset of our newly-collected web crawls. MC^2 (full) is the complete set of MC^2, which additionally contains texts collected from existing resources.

Update (Jun 3, 2024): The Mongolian subset has been updated with a larger size (from 874M in mn-crawl-only-release-20231112.jsonl to 970M in mn-crawl-only-release-20231127.jsonl).

Dataset Structure

The dataset is stored in JSON Lines (JSONL) format, with each line containing one entry with three keys: title, text, and url.

This is an example:

{
  "title":"پارتيانىڭ مەملەكەتتىك 19 - قۇرىلتايىنىڭ ورىنباسار باس حاتشىلارى",
  "text":"ليۋ چيباۋ، مىڭ جيانجۋ، جاۋ لىجي، لي جانشۋ\n\n\n(شينحۋا اگەنتتىگىنىڭ 17 - قازاندا بەيجيڭنەن بەرگەن حابارى)",
  "url":"kazakh.altxw.com\/system\/2017\/10\/24\/030007713.shtml"
}
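
Because each line is a standalone JSON object, the files can be streamed line by line without loading everything into memory. Below is a minimal Python sketch; the file name is the Mongolian release mentioned in the update note and serves only as an example.

import json

# Stream one JSON entry per line from an MC^2 JSONL file.
with open("mn-crawl-only-release-20231127.jsonl", encoding="utf-8") as f:
    for line in f:
        entry = json.loads(line)
        print(entry["title"], entry["url"])
        break  # inspect only the first entry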

How to Obtain the Data

Our data comes from three sources: our newly collected web crawls, CulturaX, and Wikipedia.

You can download our web-crawled data from Hugging Face (see the sketch below).

For data from CulturaX and Wikipedia, you can download the originals and then process them using the scripts in our GitHub repo.
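
As a minimal sketch, a single crawled file can be fetched with the huggingface_hub client. The repository id pkupie/mc2_corpus and the file name below (the Mongolian release from the update note above) are taken from this card; check the repository file list for the other subsets.

from huggingface_hub import hf_hub_download

# Download one crawled subset from the dataset repository
# and return its local cache path.
path = hf_hub_download(
    repo_id="pkupie/mc2_corpus",
    repo_type="dataset",
    filename="mn-crawl-only-release-20231127.jsonl",
)
print(path)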

Pre-trained Models

We provide two models pre-trained on MC^2. Please read the paper for detailed information on model training.

  • MC^2XLMR-large: XLM-RoBERTa-large continually pretrained on MC^2
  • MC^2Llama-13B: Llama2-13b continually pretrained on Chinese corpora and MC^2
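
As a hedged sketch, the XLM-RoBERTa-based checkpoint can be loaded with the transformers library. The repo id below is an assumption for illustration only; refer to the model cards for the exact names.

from transformers import AutoModelForMaskedLM, AutoTokenizer

# Hypothetical repo id for MC^2XLMR-large; the actual name may differ.
repo = "pkupie/mc2-xlmr-large"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForMaskedLM.from_pretrained(repo)  # XLM-R is a masked LM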

License

We release the data under the Creative Commons CC0 license.

These data are released under the following licensing scheme:
* We do not own any of the text from which these data have been extracted.
* We license the data under the Creative Commons CC0 license ("no rights reserved"): http://creativecommons.org/publicdomain/zero/1.0/
* To the extent possible under law, Peking University has waived all copyright and related or neighboring rights to MC^2.

Should you consider that our data contains material that is owned by you and should therefore not be reproduced here, please:
* Clearly identify yourself, with detailed contact data such as an address, telephone number, or email address at which you can be contacted.
* Clearly identify the copyrighted work claimed to be infringed.
* Clearly identify the material that is claimed to be infringing and information reasonably sufficient to allow us to locate the material.

We will comply with legitimate requests by removing the affected sources from the next release of the corpus.

Citation

@article{zhang2024mc,
  title={MC$^2$: Towards Transparent and Culturally-Aware NLP for Minority Languages in China},
  author={Zhang, Chen and Tao, Mingxu and Huang, Quzhe and Lin, Jiuheng and Chen, Zhibin and Feng, Yansong},
  journal={arXiv preprint arXiv:2311.08348},
  year={2024}
}

Contributors

We thank Chen Zhang*, Mingxu Tao*, Quzhe Huang*, Jiuheng Lin*, Zhibin Chen, and Yansong Feng for their contributions.
