Create README.md #2
by dododododo - opened

README.md ADDED
@@ -0,0 +1,48 @@
---
license: apache-2.0
---

# MAP-CC
An open-source Chinese pretraining dataset with a scale of 800 billion tokens, offering the NLP community high-quality Chinese pretraining data.

## Usage Instructions

After downloading the parts of the dataset, you can concatenate them into a single file for each split using the following command in a UNIX-like terminal:

```bash
cat [split].gz.part* > [split].gz
```

Replace `[split]` with the name of the dataset component you wish to merge (zh-cc, zh-baike, zh-papers, zh-books, or zh-others). After merging, decompress the `.gz` file to access the dataset's content.

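For convenience, the sketch below applies the same commands to all five splits at once: it concatenates the parts of each split and then decompresses the result with `gunzip`. It is a minimal example, assuming the downloaded part files sit in the current working directory and follow the `[split].gz.part*` naming shown above.

```bash
#!/usr/bin/env bash
# Merge and decompress every MAP-CC split.
# Assumes the downloaded parts are in the current directory and are named
# like zh-cc.gz.part*, zh-baike.gz.part*, etc., as described above.
set -euo pipefail

for split in zh-cc zh-baike zh-papers zh-books zh-others; do
    echo "Merging ${split}..."
    cat "${split}".gz.part* > "${split}.gz"   # concatenate the parts (glob order)
    echo "Decompressing ${split}.gz..."
    gunzip "${split}.gz"                      # produces the file "${split}"
done
```
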
<table>
<tr>
<td>

## Dataset Composition

The dataset consists of several components, each originating from different sources and serving various purposes in language modeling and processing. Below is a brief overview of each component:

<p>
<img src="https://cdn-uploads.huggingface.co/production/uploads/654907a4a1faff97850c4eff/Hd01lXv08_GBCe3SEVx_p.png" alt="Dataset Image" style="float: right; margin-left: 20px; width: 400px;" />
<strong>zh-cc (Chinese Common Crawl)</strong><br>
Extracts from the Common Crawl project specifically filtered for Chinese content. This component is rich in diverse internet text, spanning websites, blogs, news articles, and more.<br><br>
<strong>zh-baike (Chinese Encyclopedias)</strong><br>
A collection of articles from various Chinese encyclopedias, similar to Wikipedia but including other encyclopedic sources as well.<br><br>
<strong>zh-papers (Chinese Academic Papers)</strong><br>
This component consists of academic and research papers published in Chinese. It covers a wide range of disciplines and offers technical, domain-specific language.<br><br>
<strong>zh-books (Chinese Books)</strong><br>
Comprises texts extracted from books published in Chinese, including literature, non-fiction, textbooks, and more.<br><br>
<strong>zh-others</strong><br>
This category is a collection of miscellaneous texts, notably including a substantial amount of QA (Question and Answer) data, alongside a variety of other texts.
</p>
</td>
</tr>
</table>

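If you only want to spot-check the contents of one of the components described above, you can stream the merged archive instead of decompressing it to disk first. The snippet below is a small sketch; `zh-baike.gz` is only an example file name, and since this README does not describe the record format inside each split, it simply prints the first few raw lines.

```bash
# Stream the first few lines of a merged component without writing the
# decompressed file to disk. Replace zh-baike.gz with any merged split.
gzip -cd zh-baike.gz | head -n 5
```
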
## License

This dataset is released under the Apache-2.0 license, as declared in the metadata above.

## Citation
```
```