---
language: zh 
widget:
- text: "[CLS] 万 叠 春 山 积 雨 晴 ,"
- text: "[CLS] 青 山 削 芙 蓉 ,"

---

# Chinese Poem GPT2 Model

## Model description

The model is used to generate Chinese ancient poems. You can download the model either from the [GPT2-Chinese Github page](https://github.com/Morizeyao/GPT2-Chinese) or via HuggingFace from the link [gpt2-chinese-poem][poem].

Since the parameter skip_special_tokens is used in pipelines.py, special tokens such as [SEP] and [UNK] are deleted during decoding, and the output may therefore not be neatly formatted.

## How to use

You can use the model directly with a pipeline for text generation:

When the parameter skip_special_tokens is True:

```python
>>> from transformers import BertTokenizer, GPT2LMHeadModel, TextGenerationPipeline
>>> tokenizer = BertTokenizer.from_pretrained("uer/gpt2-chinese-poem")
>>> model = GPT2LMHeadModel.from_pretrained("uer/gpt2-chinese-poem")
>>> text_generator = TextGenerationPipeline(model, tokenizer)   
>>> text_generator("[CLS]梅 山 如 积 翠 ,", max_length=50, do_sample=True)
	[{'generated_text': '[CLS]梅 山 如 积 翠 , 的 手 堪 捧 。 遥 遥 仙 人 尉 , 盘 盘 故 时 陇 。 丹 泉 清 可 鉴 , 石 乳 甘 于 。 行 将 解 尘 缨 , 于 焉 蹈 高 踵 。 我'}]
```

When the parameter skip_special_tokens is False:

```python
>>> from transformers import BertTokenizer, GPT2LMHeadModel, TextGenerationPipeline
>>> tokenizer = BertTokenizer.from_pretrained("uer/gpt2-chinese-poem")
>>> model = GPT2LMHeadModel.from_pretrained("uer/gpt2-chinese-poem")
>>> text_generator = TextGenerationPipeline(model, tokenizer)   
>>> text_generator("[CLS]梅 山 如 积 翠 ,", max_length=50, do_sample=True)
	[{'generated_text': '[CLS]梅 山 如 积 翠 , 的 [UNK] 手 堪 捧 。 遥 遥 仙 人 尉 , 盘 盘 故 时 陇 。 丹 泉 清 可 鉴 , 石 乳 甘 可 捧 。 银 汉 迟 不 来 , 槎 头 欲 谁 揽 。 何'}]
```
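
The skip_special_tokens switch above lives in the pipeline's decoding step. If you prefer not to modify pipelines.py, the same contrast can be reproduced by calling `model.generate` and `tokenizer.decode` directly; the sketch below is illustrative and its sampling settings will not reproduce the exact outputs shown above:

```python
>>> from transformers import BertTokenizer, GPT2LMHeadModel
>>> tokenizer = BertTokenizer.from_pretrained("uer/gpt2-chinese-poem")
>>> model = GPT2LMHeadModel.from_pretrained("uer/gpt2-chinese-poem")
>>> # add_special_tokens=False because the prompt already carries an explicit [CLS]
>>> input_ids = tokenizer.encode("[CLS]梅 山 如 积 翠 ,", add_special_tokens=False, return_tensors="pt")
>>> output_ids = model.generate(input_ids, max_length=50, do_sample=True)
>>> # Keep special tokens such as [UNK] in the decoded text ...
>>> print(tokenizer.decode(output_ids[0], skip_special_tokens=False))
>>> # ... or drop them, matching the pipeline behaviour described above
>>> print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```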

## Training data

The training data contains 800,000 Chinese ancient poems collected by the [chinese-poetry](https://github.com/chinese-poetry/chinese-poetry) and [Poetry](https://github.com/Werneror/Poetry) projects.
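
Both projects store poems as JSON. The exact preprocessing script is not published; the sketch below only illustrates how such files could be flattened into the `corpora/poem.txt` file used in the commands that follow, assuming one poem per line. The directory layout and the `paragraphs` field follow the chinese-poetry repository and may need adjusting for the Poetry project:

```python
# Illustrative only: flatten chinese-poetry style JSON files into one poem per line.
# The path pattern and the "paragraphs" field are assumptions about the source repos,
# not the authors' actual preprocessing.
import glob
import json

with open("corpora/poem.txt", "w", encoding="utf-8") as out:
    for path in glob.glob("chinese-poetry/json/poet.*.json"):
        with open(path, encoding="utf-8") as f:
            poems = json.load(f)
        for poem in poems:
            line = "".join(poem.get("paragraphs", []))
            if line:
                out.write(line + "\n")
```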

## Training procedure

The model is pre-trained by [UER-py](https://github.com/dbiir/UER-py/) on [Tencent Cloud TI-ONE](https://cloud.tencent.com/product/tione/). We pre-train for 200,000 steps with a sequence length of 128. An extended vocabulary is used to handle out-of-vocabulary words: every Chinese character that occurs at least 100 times in the poem corpus is added to the vocabulary.
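
The released `models/poem_zh_vocab.txt` already contains the extended character set, so rebuilding it is only necessary if you retrain from scratch. A minimal sketch of the frequency rule described above, assuming a one-poem-per-line `corpora/poem.txt` and UER-py's `models/google_zh_vocab.txt` as the base vocabulary (both file names are assumptions here):

```python
# Sketch of the vocabulary-extension rule: append every character that appears
# at least 100 times in the poem corpus and is not already in the base vocabulary.
# This illustrates the rule stated above, not the authors' exact script
# (e.g. punctuation filtering is omitted).
from collections import Counter

counts = Counter()
with open("corpora/poem.txt", encoding="utf-8") as f:
    for line in f:
        counts.update(ch for ch in line.strip() if not ch.isspace())

with open("models/google_zh_vocab.txt", encoding="utf-8") as f:  # assumed base vocab
    vocab = [token.rstrip("\n") for token in f]

existing = set(vocab)
extra = [ch for ch, n in counts.items() if n >= 100 and ch not in existing]

with open("models/poem_zh_vocab.txt", "w", encoding="utf-8") as f:
    f.write("\n".join(vocab + extra) + "\n")
```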

```
python3 preprocess.py --corpus_path corpora/poem.txt \
                      --vocab_path models/poem_zh_vocab.txt \
                      --dataset_path poem_dataset.pt --processes_num 16 \
                      --seq_length 128 --target lm
```

```
python3 pretrain.py --dataset_path poem_dataset.pt \
                    --vocab_path models/poem_zh_vocab.txt \
                    --output_model_path models/poem_gpt2_model.bin \
                    --config_path models/gpt2/config.json \
                    --world_size 8 --gpu_ranks 0 1 2 3 4 5 6 7 \
                    --total_steps 200000 --save_checkpoint_steps 50000 --report_steps 1000 \
                    --learning_rate 5e-4 --batch_size 64 \
                    --embedding word_pos --remove_embedding_layernorm \
                    --encoder transformer --mask causal --layernorm_positioning pre \
                    --target lm --tie_weight
```

Finally, we convert the pre-trained model into Hugging Face's format:
```
python3 scripts/convert_gpt2_from_uer_to_huggingface.py --input_model_path poem_gpt2_base_model.bin-200000 \
                                                        --output_model_path pytorch_model.bin \
                                                        --layers_num 12
```

### BibTeX entry and citation info

```
@article{zhao2019uer,
  title={UER: An Open-Source Toolkit for Pre-training Models},
  author={Zhao, Zhe and Chen, Hui and Zhang, Jinbin and Zhao, Xin and Liu, Tao and Lu, Wei and Chen, Xi and Deng, Haotang and Ju, Qi and Du, Xiaoyong},
  journal={EMNLP-IJCNLP 2019},
  pages={241},
  year={2019}
}
```

[poem]: https://huggingface.co/uer/gpt2-chinese-poem