dododododo committed 99e7de0 • 1 parent: f4a08f8

Update README.md

Files changed (1):
  README.md +3 -0
README.md CHANGED

@@ -3,6 +3,9 @@ license: apache-2.0
 ---
 
 # MAP-CC
+
+[**🌐 Homepage**](https://chinese-tiny-llm.github.io) | [**🤗 MAP-CC**](https://huggingface.co/datasets/m-a-p/MAP-CC) | [**🤗 CHC-Bench**](https://huggingface.co/datasets/m-a-p/CHC-Bench) | [**🤗 CT-LLM**](https://huggingface.co/collections/m-a-p/chinese-tiny-llm-660d0133dff6856f94ce0fc6) | [**📖 arXiv**](https://arxiv.org/abs/2404.04167) | [**GitHub**](https://github.com/Chinese-Tiny-LLM/Chinese-Tiny-LLM)
+
 An open-source Chinese pretraining dataset with a scale of 800 billion tokens, offering the NLP community high-quality Chinese pretraining data.
 
 ## Usage Instructions
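The README's Usage Instructions section is truncated in this diff. As a hypothetical sketch (not taken from the README itself), the corpus could be read with the Hugging Face `datasets` library, which the dataset page lists as supported; the repo id `m-a-p/MAP-CC` comes from the dataset URL above, and the `split="train"` name is an assumption. Streaming avoids downloading all 800 billion tokens up front.

```python
# Hypothetical usage sketch for the MAP-CC dataset via the `datasets` library.
# Repo id is taken from the dataset URL; the split name is an assumption.
DATASET_ID = "m-a-p/MAP-CC"


def stream_samples(n: int = 3):
    """Yield the first `n` records without downloading the full corpus."""
    # Imported lazily so the constant above is usable without the dependency;
    # requires `pip install datasets`.
    from datasets import load_dataset

    ds = load_dataset(DATASET_ID, split="train", streaming=True)
    for i, record in enumerate(ds):
        if i >= n:
            break
        yield record


if __name__ == "__main__":
    for rec in stream_samples():
        print(rec)
```

With `streaming=True`, `load_dataset` returns an iterable that fetches shards on demand, which is the usual choice for web-scale pretraining corpora of this size.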