---
language: zh
widget:
- text: "[CLS]当是时"
---


# Chinese Ancient GPT2 Model

## Model description

The model is pre-trained by [UER-py](https://github.com/dbiir/UER-py/), which is introduced in [this paper](https://arxiv.org/abs/1909.05658). The model can also be pre-trained with [TencentPretrain](https://github.com/Tencent/TencentPretrain), introduced in [this paper](https://arxiv.org/abs/2212.06385), which inherits UER-py to support models with more than one billion parameters and extends it to a multimodal pre-training framework.

The model is used to generate ancient Chinese text. You can download it from the [UER-py Modelzoo page](https://github.com/dbiir/UER-py/wiki/Modelzoo), from the [GPT2-Chinese Github page](https://github.com/Morizeyao/GPT2-Chinese), or via Hugging Face from the link [gpt2-chinese-ancient](https://huggingface.co/uer/gpt2-chinese-ancient).

## How to use

You can use the model directly with a pipeline for text generation:

```python
>>> from transformers import BertTokenizer, GPT2LMHeadModel, TextGenerationPipeline
>>> tokenizer = BertTokenizer.from_pretrained("uer/gpt2-chinese-ancient")
>>> model = GPT2LMHeadModel.from_pretrained("uer/gpt2-chinese-ancient")
>>> text_generator = TextGenerationPipeline(model, tokenizer)
>>> text_generator("当是时", max_length=100, do_sample=True)
    [{'generated_text': '[CLS]当是时 所 议 者 不 为 无 据 , 况 亦 在 之 列 乎 ? 然 则 今 日 之 事 , 所 当 思 者 在 何 ? 欲 求 国 是 于 天 下 , 莫 在 于 得 人 。 臣 以 为 求 人 之 法 , 不 在 多 用 官 一 途 。 诚 使 得 才 者 众 , 人 才 者 优 , 则 治 所 当 得 , 而 不 事 于 官 者 , 人 才 乃 其 常 也 。 所 当 讲 者'}]
```
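
For finer control over decoding, you can call `generate` on the model directly instead of going through the pipeline. The sketch below is a minimal example, not part of the original card; the sampling settings (`top_k`, `top_p`) are illustrative choices rather than values recommended by the model authors:

```python
>>> from transformers import BertTokenizer, GPT2LMHeadModel

>>> tokenizer = BertTokenizer.from_pretrained("uer/gpt2-chinese-ancient")
>>> model = GPT2LMHeadModel.from_pretrained("uer/gpt2-chinese-ancient")

>>> # Encode the prompt with an explicit [CLS] marker, matching the
>>> # pipeline output above, and without a trailing [SEP] token.
>>> input_ids = tokenizer.encode("[CLS]当是时", add_special_tokens=False,
...                              return_tensors="pt")
>>> output_ids = model.generate(input_ids, max_length=100, do_sample=True,
...                             top_k=50, top_p=0.95)
>>> print(tokenizer.decode(output_ids[0]))
```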

## Training data

Training data contains 3,000,000 ancient Chinese texts collected by [daizhigev20](https://github.com/garychowcmu/daizhigev20). Since part of the ancient corpus has no punctuation, we used the [ancient Chinese punctuation system](https://seg.shenshen.wiki) developed by the [BNU ICIP lab](http://icip.bnu.edu.cn/) to add punctuation.


## Training procedure

The model is pre-trained by [UER-py](https://github.com/dbiir/UER-py/) on [Tencent Cloud](https://cloud.tencent.com/). We pre-train for 500,000 steps with a sequence length of 320. We use an extended vocabulary to handle out-of-vocabulary words: every Chinese character that occurs 100 or more times in the ancient Chinese corpus is added to the vocabulary.
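
The vocabulary-extension script is not included in the card, but the rule above is straightforward to reproduce. Below is a minimal sketch, assuming the corpus is a plain-text file and the base vocabulary is UER-py's `models/google_zh_vocab.txt` (one token per line); the file names mirror the commands below but are otherwise illustrative:

```python
from collections import Counter

# Count character frequencies over the ancient Chinese corpus.
counts = Counter()
with open("corpora/ancient_chinese.txt", encoding="utf-8") as f:
    for line in f:
        counts.update(line.strip())

# Load the base vocabulary (one token per line, UER-py format).
with open("models/google_zh_vocab.txt", encoding="utf-8") as f:
    vocab = [line.rstrip("\n") for line in f]
known = set(vocab)

# Append every character that occurs at least 100 times and is
# not already in the base vocabulary.
for char, freq in counts.items():
    if freq >= 100 and char not in known:
        vocab.append(char)
        known.add(char)

with open("models/google_zh_ancient_vocab.txt", "w", encoding="utf-8") as f:
    f.write("\n".join(vocab) + "\n")
```

With the extended vocabulary in place, the corpus is preprocessed: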

```
python3 preprocess.py --corpus_path corpora/ancient_chinese.txt \
                      --vocab_path models/google_zh_ancient_vocab.txt \
                      --dataset_path ancient_chinese_dataset.pt --processes_num 16 \
                      --seq_length 320 --data_processor lm
```

```
python3 pretrain.py --dataset_path ancient_chinese_dataset.pt \
                    --vocab_path models/google_zh_ancient_vocab.txt \
                    --config_path models/bert_base_config.json \
                    --output_model_path models/ancient_chinese_gpt2_model.bin \
                    --world_size 8 --gpu_ranks 0 1 2 3 4 5 6 7 \
                    --total_steps 500000 --save_checkpoint_steps 100000 --report_steps 10000 \
                    --learning_rate 5e-4 --batch_size 32
```

Finally, we convert the pre-trained model into Huggingface's format:

```
python3 scripts/convert_gpt2_from_uer_to_huggingface.py --input_model_path models/ancient_chinese_gpt2_model.bin-500000 \
                                                        --output_model_path pytorch_model.bin \
                                                        --layers_num 12
```
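
As a quick sanity check, the converted weights can be loaded back with `transformers`. This assumes `pytorch_model.bin` is placed in a directory (here the placeholder `./converted_model`) together with a matching `config.json` and `vocab.txt`:

```python
>>> from transformers import BertTokenizer, GPT2LMHeadModel

>>> # "./converted_model" is a placeholder for the directory holding
>>> # pytorch_model.bin, config.json, and vocab.txt.
>>> tokenizer = BertTokenizer.from_pretrained("./converted_model")
>>> model = GPT2LMHeadModel.from_pretrained("./converted_model")
>>> model.num_parameters()  # rough check that all weights loaded
```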

### BibTeX entry and citation info

```
@article{radford2019language,
  title={Language Models are Unsupervised Multitask Learners},
  author={Radford, Alec and Wu, Jeff and Child, Rewon and Luan, David and Amodei, Dario and Sutskever, Ilya},
  journal={OpenAI blog},
  year={2019}
}

@article{zhao2019uer,
  title={UER: An Open-Source Toolkit for Pre-training Models},
  author={Zhao, Zhe and Chen, Hui and Zhang, Jinbin and Zhao, Xin and Liu, Tao and Lu, Wei and Chen, Xi and Deng, Haotang and Ju, Qi and Du, Xiaoyong},
  journal={EMNLP-IJCNLP 2019},
  pages={241},
  year={2019}
}

@article{zhao2023tencentpretrain,
  title={TencentPretrain: A Scalable and Flexible Toolkit for Pre-training Models of Different Modalities},
  author={Zhao, Zhe and Li, Yudong and Hou, Cheng and Zhao, Jing and others},
  journal={ACL 2023},
  pages={217},
  year={2023}
}
```