Update README.md

The model is used to generate Chinese couplets. You can download the model either from the [GPT2-Chinese Github page](https://github.com/Morizeyao/GPT2-Chinese), or via HuggingFace from the link [gpt2-chinese-couplet][couplet].

Since the parameter skip_special_tokens is used in pipelines.py, special tokens such as [SEP] and [UNK] will be deleted, so the output of the Hosted inference API (on the right) may not be displayed properly.

## How to use

When the parameter skip_special_tokens is True:

```
>>> from transformers import BertTokenizer, GPT2LMHeadModel, TextGenerationPipeline
>>> tokenizer = BertTokenizer.from_pretrained("uer/gpt2-chinese-couplet")
>>> model = GPT2LMHeadModel.from_pretrained("uer/gpt2-chinese-couplet")
>>> text_generator = TextGenerationPipeline(model, tokenizer)
>>> text_generator("[CLS]丹 枫 江 冷 人 初 去 -", max_length=25, do_sample=True)
[{'generated_text': '[CLS]丹 枫 江 冷 人 初 去 - 黄 叶 声 从 天 外 来 阅 旗'}]
```
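
The prompt above is the first line of the couplet written as space-separated characters, prefixed with [CLS] and ended with "-". A small helper (not part of the original card) can build that string from a plain first line:

```
def make_couplet_prompt(first_line):
    """Hypothetical helper: format a first line as the space-separated prompt used above."""
    return "[CLS]" + " ".join(first_line) + " -"

print(make_couplet_prompt("丹枫江冷人初去"))
# [CLS]丹 枫 江 冷 人 初 去 -
```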

When the parameter skip_special_tokens is False:

```
>>> from transformers import BertTokenizer, GPT2LMHeadModel, TextGenerationPipeline
>>> tokenizer = BertTokenizer.from_pretrained("uer/gpt2-chinese-couplet")
>>> model = GPT2LMHeadModel.from_pretrained("uer/gpt2-chinese-couplet")
>>> text_generator = TextGenerationPipeline(model, tokenizer)
>>> text_generator("[CLS]丹 枫 江 冷 人 初 去 -", max_length=25, do_sample=True)
[{'generated_text': '[CLS]丹 枫 江 冷 人 初 去 - 黄 叶 声 我 酒 不 辞 [SEP] [SEP] [SEP] [SEP] [SEP] [SEP] [SEP] [SEP] [SEP]'}]
```
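
The flag itself acts at decoding time. A minimal sketch (not from the original card, bypassing the pipeline) that generates once and decodes the same output both ways:

```
import torch
from transformers import BertTokenizer, GPT2LMHeadModel

tokenizer = BertTokenizer.from_pretrained("uer/gpt2-chinese-couplet")
model = GPT2LMHeadModel.from_pretrained("uer/gpt2-chinese-couplet")

# [CLS] is already part of the prompt, so do not add special tokens again.
input_ids = tokenizer("[CLS]丹 枫 江 冷 人 初 去 -",
                      return_tensors="pt", add_special_tokens=False).input_ids
with torch.no_grad():
    output_ids = model.generate(input_ids, max_length=25, do_sample=True)

# skip_special_tokens=True drops [SEP]/[UNK] etc., as the hosted pipeline does;
# with False the special tokens stay in the decoded string.
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
print(tokenizer.decode(output_ids[0], skip_special_tokens=False))
```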

## Training data

Training data contains 700,000 Chinese couplets which are collected by [couplet-clean-dataset](https://github.com/v-zich/couplet-clean-dataset).

## Training procedure

The model is pre-trained by [UER-py](https://github.com/dbiir/UER-py/) on [Tencent Cloud](https://cloud.tencent.com/). We pre-train 25,000 steps with a sequence length of 64.

```
python3 preprocess.py --corpus_path corpora/couplet.txt \
                      --vocab_path models/google_zh_vocab.txt \
                      --dataset_path couplet_dataset.pt --processes_num 16 \
                      --seq_length 64 --target lm
```

```
python3 pretrain.py --dataset_path couplet_dataset.pt \
                    --vocab_path models/google_zh_vocab.txt \
                    --config_path models/gpt2/config.json \
                    --output_model_path models/couplet_gpt2_model.bin \
                    --world_size 8 --gpu_ranks 0 1 2 3 4 5 6 7 \
                    --total_steps 25000 --save_checkpoint_steps 5000 --report_steps 1000 \
                    --learning_rate 5e-4 --batch_size 64 \
                    --embedding word_pos --remove_embedding_layernorm \
                    --encoder transformer --mask causal --layernorm_positioning pre \
                    --target lm --tie_weight
```

Finally, we convert the pre-trained model into Huggingface's format:

```
python3 scripts/convert_gpt2_from_uer_to_huggingface.py --input_model_path couplet_gpt2_model.bin-25000 \
                                                         --output_model_path pytorch_model.bin \
                                                         --layers_num 12
```
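
As a quick check (not part of the original card), the converted checkpoint can be loaded with Transformers, assuming pytorch_model.bin is placed in a local directory such as ./couplet_gpt2/ together with a GPT-2 config.json and the vocab file:

```
from transformers import BertTokenizer, GPT2LMHeadModel

# ./couplet_gpt2/ is a hypothetical local directory holding the converted files.
tokenizer = BertTokenizer.from_pretrained("./couplet_gpt2")
model = GPT2LMHeadModel.from_pretrained("./couplet_gpt2")
print(model.config.n_layer)  # expected to be 12, matching --layers_num 12
```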

### BibTeX entry and citation info

```
@article{radford2019language,
  title={Language Models are Unsupervised Multitask Learners},
  author={Radford, Alec and Wu, Jeff and Child, Rewon and Luan, David and Amodei, Dario and Sutskever, Ilya},
  year={2019}
}

@article{zhao2019uer,
  title={UER: An Open-Source Toolkit for Pre-training Models},
  author={Zhao, Zhe and Chen, Hui and Zhang, Jinbin and Zhao, Xin and Liu, Tao and Lu, Wei and Chen, Xi and Deng, Haotang and Ju, Qi and Du, Xiaoyong},