---
language: zh
datasets: CLUECorpusSmall
widget:
- text: "米饭是一种用稻米与水煮成的食物"
---

# Chinese GPT2-distil Model

## Model description

The model is used to generate Chinese text. You can download it either from the [GPT2-Chinese Github page](https://github.com/Morizeyao/GPT2-Chinese) or via Hugging Face from the link [gpt2-distil-chinese-cluecorpussmall](https://huggingface.co/uer/gpt2-distil-chinese-cluecorpussmall). The model is called GPT2-distil because its configuration follows [distilgpt2](https://huggingface.co/distilgpt2), with 6 layers, a hidden size of 768, and 12 attention heads. Note, however, that unlike distilgpt2, its pre-training does not involve supervision from a larger teacher model.
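
The reported dimensions can be verified from the published configuration. A minimal sketch, assuming the repository's `config.json` uses the standard GPT-2 field names in Transformers:

```python
>>> # Sketch: inspect the model configuration (standard GPT-2 field names assumed).
>>> from transformers import GPT2Config
>>> config = GPT2Config.from_pretrained("uer/gpt2-distil-chinese-cluecorpussmall")
>>> config.n_layer, config.n_embd, config.n_head
(6, 768, 12)
```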

## How to use

You can use the model directly with a pipeline for text generation:

```python
>>> from transformers import BertTokenizer, GPT2LMHeadModel, TextGenerationPipeline
>>> tokenizer = BertTokenizer.from_pretrained("uer/gpt2-distil-chinese-cluecorpussmall")
>>> model = GPT2LMHeadModel.from_pretrained("uer/gpt2-distil-chinese-cluecorpussmall")
>>> text_generator = TextGenerationPipeline(model, tokenizer)
>>> text_generator("这是很久之前的事情了", max_length=100, do_sample=True)
[{'generated_text': '这是很久之前的事情了 。 我 现 在 想 起 来 就 让 自 己 很 伤 心 , 很 失 望 。 我 现 在 想 到 , 我 觉 得 大 多 数 人 的 生 活 比 我 的 生 命 还 要 重 要 , 对 一 些 事 情 的 看 法 , 对 一 些 人 的 看 法 , 都 是 在 发 泄 。 但 是 , 我 们 的 生 活 是 需 要 一 个 信 用 体 系 的 。 我 不 知'}]
```
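
Because `do_sample=True` is set, the generated text differs between runs. If you prefer to call the model directly rather than through the pipeline, here is a minimal sketch; the decoding arguments (e.g. `top_k=50`) are illustrative choices, not settings prescribed by this card:

```python
# Minimal sketch of generation without the pipeline wrapper.
# Decoding arguments are illustrative, not this card's prescribed settings.
from transformers import BertTokenizer, GPT2LMHeadModel

tokenizer = BertTokenizer.from_pretrained("uer/gpt2-distil-chinese-cluecorpussmall")
model = GPT2LMHeadModel.from_pretrained("uer/gpt2-distil-chinese-cluecorpussmall")

# Prompt: "这是很久之前的事情了" ("this happened a long time ago").
inputs = tokenizer("这是很久之前的事情了", return_tensors="pt")
output_ids = model.generate(inputs.input_ids, max_length=100, do_sample=True, top_k=50)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```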

## Training data

[CLUECorpusSmall](https://github.com/CLUEbenchmark/CLUECorpus2020/) is used as training data.

## Training procedure

The model is pre-trained by [UER-py](https://github.com/dbiir/UER-py/) on [Tencent Cloud](https://cloud.tencent.com/). We pre-train for 1,000,000 steps with a sequence length of 128, then for an additional 250,000 steps with a sequence length of 1024.

Stage 1:

```
python3 preprocess.py --corpus_path corpora/cluecorpussmall.txt \
                      --vocab_path models/google_zh_vocab.txt \
                      --dataset_path cluecorpussmall_lm_seq128_dataset.pt \
                      --seq_length 128 --processes_num 32 --data_processor lm
```

```
python3 pretrain.py --dataset_path cluecorpussmall_lm_seq128_dataset.pt \
                    --vocab_path models/google_zh_vocab.txt \
                    --config_path models/gpt2/distil_config.json \
                    --output_model_path models/cluecorpussmall_gpt2_distil_seq128_model.bin \
                    --world_size 8 --gpu_ranks 0 1 2 3 4 5 6 7 \
                    --total_steps 1000000 --save_checkpoint_steps 100000 --report_steps 50000 \
                    --learning_rate 1e-4 --batch_size 64
```

Stage 2:

```
python3 preprocess.py --corpus_path corpora/cluecorpussmall.txt \
                      --vocab_path models/google_zh_vocab.txt \
                      --dataset_path cluecorpussmall_lm_seq1024_dataset.pt \
                      --seq_length 1024 --processes_num 32 --data_processor lm
```

```
python3 pretrain.py --dataset_path cluecorpussmall_lm_seq1024_dataset.pt \
                    --vocab_path models/google_zh_vocab.txt \
                    --pretrained_model_path models/cluecorpussmall_gpt2_distil_seq128_model.bin-1000000 \
                    --config_path models/gpt2/distil_config.json \
                    --output_model_path models/cluecorpussmall_gpt2_distil_seq1024_model.bin \
                    --world_size 8 --gpu_ranks 0 1 2 3 4 5 6 7 \
                    --total_steps 250000 --save_checkpoint_steps 50000 --report_steps 10000 \
                    --learning_rate 5e-5 --batch_size 16
```

Finally, we convert the pre-trained model into Hugging Face's format:

```
python3 scripts/convert_gpt2_from_uer_to_huggingface.py --input_model_path cluecorpussmall_gpt2_distil_seq1024_model.bin-250000 \
                                                        --output_model_path pytorch_model.bin \
                                                        --layers_num 6
```
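
After conversion, the checkpoint can be loaded back with Transformers as a sanity check. A minimal sketch, assuming `pytorch_model.bin` is placed in a local directory (the path below is illustrative) together with a matching `config.json` and `vocab.txt`:

```python
# Sanity check on the converted checkpoint. The directory name is
# illustrative; it must contain pytorch_model.bin, config.json and vocab.txt.
from transformers import BertTokenizer, GPT2LMHeadModel

tokenizer = BertTokenizer.from_pretrained("./gpt2-distil-chinese-cluecorpussmall")
model = GPT2LMHeadModel.from_pretrained("./gpt2-distil-chinese-cluecorpussmall")
print(model.config.n_layer)  # expect 6, matching --layers_num 6
```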

### BibTeX entry and citation info

```
@article{radford2019language,
  title={Language Models are Unsupervised Multitask Learners},
  author={Radford, Alec and Wu, Jeff and Child, Rewon and Luan, David and Amodei, Dario and Sutskever, Ilya},
  year={2019}
}

@article{zhao2019uer,
  title={UER: An Open-Source Toolkit for Pre-training Models},
  author={Zhao, Zhe and Chen, Hui and Zhang, Jinbin and Zhao, Xin and Liu, Tao and Lu, Wei and Chen, Xi and Deng, Haotang and Ju, Qi and Du, Xiaoyong},
  journal={EMNLP-IJCNLP 2019},
  pages={241},
  year={2019}
}
```