---
license: apache-2.0
---
# MAP-CC
[**🌐 Homepage**](https://chinese-tiny-llm.github.io) | [**πŸ€— MAP-CC**](https://huggingface.co/datasets/m-a-p/MAP-CC) | [**πŸ€— CHC-Bench**](https://huggingface.co/datasets/m-a-p/CHC-Bench) | [**πŸ€— CT-LLM**](https://huggingface.co/collections/m-a-p/chinese-tiny-llm-660d0133dff6856f94ce0fc6) | [**πŸ“– arXiv**](https://arxiv.org/abs/2404.04167) | [**GitHub**](https://github.com/Chinese-Tiny-LLM/Chinese-Tiny-LLM)
MAP-CC is an open-source Chinese pretraining dataset of 800 billion tokens, providing the NLP community with high-quality Chinese pretraining data.
## Usage Instructions
After downloading the parts of the dataset, concatenate them into a single file for each split using the following command in a UNIX-like terminal:
```bash
cat [split].gz.part* > [split].gz
```
Replace `[split]` with the name of the dataset component you wish to merge (`zh-cc`, `zh-baike`, `zh-papers`, `zh-books`, or `zh-others`). After merging, decompress the resulting `.gz` file to access the dataset's content.
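As a minimal sketch, the full sequence for the `zh-cc` split (assuming all of its downloaded parts, `zh-cc.gz.part*`, are in the current directory) might look like this:
```bash
# Merge the downloaded parts of the zh-cc split into one archive
cat zh-cc.gz.part* > zh-cc.gz

# Decompress the merged archive to obtain the raw data file
gunzip zh-cc.gz
```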
## Dataset Composition
The dataset consists of several components, each originating from a different source and serving a different role in language modeling. Below is a brief overview of each component:
<p>
<img src="https://cdn-uploads.huggingface.co/production/uploads/654907a4a1faff97850c4eff/Hd01lXv08_GBCe3SEVx_p.png" alt="Dataset Image" style="float: right; margin-left: 20px; width: 400px;" />
<strong>zh-cc (Chinese Common Crawl)</strong><br>
Extracts from the Common Crawl project specifically filtered for Chinese content. This component is rich in diverse internet text, including websites, blogs, news articles, and more.<br><br>
<strong>zh-baike (Chinese Encyclopedias)</strong><br>
A collection of articles from various Chinese encyclopedias, similar to Wikipedia but including other encyclopedic sources as well.<br><br>
<strong>zh-papers (Chinese Academic Papers)</strong><br>
This component consists of academic and research papers published in Chinese. It covers a wide range of disciplines and offers technical, domain-specific language.<br><br>
<strong>zh-books (Chinese Books)</strong><br>
Comprises texts extracted from books published in Chinese. This includes literature, non-fiction, textbooks, and more.<br><br>
<strong>zh-others</strong><br>
This category is a collection of miscellaneous texts, notably including a substantial amount of QA (Question and Answer) data, alongside a variety of other texts.
</p>
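As a rough illustration, once a component has been merged as described above, its contents can be inspected directly from the shell; the sketch below assumes the decompressed file stores one record per line:
```bash
# Peek at the first records of the zh-baike component without decompressing to disk
zcat zh-baike.gz | head -n 3

# Count the number of records (lines) in the component
zcat zh-baike.gz | wc -l
```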
## License
This dataset is released under the Apache License 2.0, as declared in the metadata above.
## Citation
```
```