---
language: zh
widget:
- text: "[CLS]国 色 天 香 , 姹 紫 嫣 红 , 碧 水 青 云 欣 共 赏 -"
---
# Chinese Couplet GPT2 Model
## Model description
The model is pre-trained by [UER-py](https://github.com/dbiir/UER-py/), which is introduced in [this paper](https://arxiv.org/abs/1909.05658). Alternatively, the model can be pre-trained by [TencentPretrain](https://github.com/Tencent/TencentPretrain), introduced in [this paper](https://arxiv.org/abs/2212.06385), which inherits UER-py to support models with more than one billion parameters and extends it to a multimodal pre-training framework.
The model is used to generate Chinese couplets. You can download the model from the [UER-py Modelzoo page](https://github.com/dbiir/UER-py/wiki/Modelzoo), from the [GPT2-Chinese Github page](https://github.com/Morizeyao/GPT2-Chinese), or from HuggingFace via the link [gpt2-chinese-couplet](https://huggingface.co/uer/gpt2-chinese-couplet).
Because the skip_special_tokens parameter is set in pipelines.py, special tokens such as [SEP] and [UNK] are removed, so the output of the Hosted inference API (on the right) may not be displayed properly.
## How to use
You can use the model directly with a pipeline for text generation:
When the parameter skip_special_tokens is True:
```python
>>> from transformers import BertTokenizer, GPT2LMHeadModel, TextGenerationPipeline
>>> tokenizer = BertTokenizer.from_pretrained("uer/gpt2-chinese-couplet")
>>> model = GPT2LMHeadModel.from_pretrained("uer/gpt2-chinese-couplet")
>>> text_generator = TextGenerationPipeline(model, tokenizer)
>>> text_generator("[CLS]丹 枫 江 冷 人 初 去 -", max_length=25, do_sample=True)
[{'generated_text': '[CLS]丹 枫 江 冷 人 初 去 - 黄 叶 声 从 天 外 来 阅 旗'}]
```
When the parameter skip_special_tokens is False:
```python
>>> from transformers import BertTokenizer, GPT2LMHeadModel, TextGenerationPipeline
>>> tokenizer = BertTokenizer.from_pretrained("uer/gpt2-chinese-couplet")
>>> model = GPT2LMHeadModel.from_pretrained("uer/gpt2-chinese-couplet")
>>> text_generator = TextGenerationPipeline(model, tokenizer)
>>> text_generator("[CLS]丹 枫 江 冷 人 初 去 -", max_length=25, do_sample=True)
[{'generated_text': '[CLS]丹 枫 江 冷 人 初 去 - 黄 叶 声 我 酒 不 辞 [SEP] [SEP] [SEP] [SEP] [SEP] [SEP] [SEP] [SEP] [SEP]'}]
```
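If you want to control decoding yourself instead of relying on the pipeline, the following is a minimal sketch using `model.generate` together with `tokenizer.decode`; the `skip_special_tokens` flag shown here is the standard tokenizer decoding option, and the sampling settings are illustrative assumptions rather than values from this card:
```python
import torch
from transformers import BertTokenizer, GPT2LMHeadModel

tokenizer = BertTokenizer.from_pretrained("uer/gpt2-chinese-couplet")
model = GPT2LMHeadModel.from_pretrained("uer/gpt2-chinese-couplet")

# The prompt already contains [CLS], so we disable automatic special tokens.
prompt = "[CLS]丹 枫 江 冷 人 初 去 -"
input_ids = tokenizer.encode(prompt, return_tensors="pt", add_special_tokens=False)

# Sampling settings below are illustrative assumptions, not values from the card.
with torch.no_grad():
    output_ids = model.generate(
        input_ids,
        max_length=25,
        do_sample=True,
        top_k=50,
        pad_token_id=tokenizer.pad_token_id,
    )

# skip_special_tokens=True drops tokens such as [SEP] and [PAD] from the decoded string;
# set it to False to see the raw output including special tokens.
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```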
## Training data
The training data contains 700,000 Chinese couplets collected by [couplet-clean-dataset](https://github.com/v-zich/couplet-clean-dataset).
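The exact layout of `corpora/couplet.txt` used below is not described in this card. As a rough sketch, assuming the couplet-clean-dataset keeps the usual pairing of upper and lower lines in `in.txt`/`out.txt` (each line already space-separated characters), the corpus file could be assembled as follows; treat the file names and the `-` separator as assumptions inferred from the generation examples above:
```python
# Hypothetical preparation of corpora/couplet.txt; the file names and the "-"
# separator are assumptions inferred from the generation examples, not
# documented by the original card.
with open("train/in.txt", encoding="utf-8") as f_in, \
     open("train/out.txt", encoding="utf-8") as f_out, \
     open("corpora/couplet.txt", "w", encoding="utf-8") as f_corpus:
    for upper, lower in zip(f_in, f_out):
        # One couplet per line: upper line, "-", lower line, characters space-separated.
        f_corpus.write(f"{upper.strip()} - {lower.strip()}\n")
```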
## Training procedure
The model is pre-trained by [UER-py](https://github.com/dbiir/UER-py/) on [Tencent Cloud](https://cloud.tencent.com/). We pre-train for 25,000 steps with a sequence length of 64.
```
python3 preprocess.py --corpus_path corpora/couplet.txt \
--vocab_path models/google_zh_vocab.txt \
--dataset_path couplet_dataset.pt --processes_num 16 \
--seq_length 64 --data_processor lm
```
```
python3 pretrain.py --dataset_path couplet_dataset.pt \
--vocab_path models/google_zh_vocab.txt \
--config_path models/gpt2/config.json \
--output_model_path models/couplet_gpt2_model.bin \
--world_size 8 --gpu_ranks 0 1 2 3 4 5 6 7 \
--total_steps 25000 --save_checkpoint_steps 5000 --report_steps 1000 \
--learning_rate 5e-4 --batch_size 64
```
Finally, we convert the pre-trained model into Huggingface's format:
```
python3 scripts/convert_gpt2_from_uer_to_huggingface.py --input_model_path couplet_gpt2_model.bin-25000 \
--output_model_path pytorch_model.bin \
--layers_num 12
```
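After conversion, a quick way to sanity-check the exported weights is to load them back with Transformers. This is a minimal sketch assuming `pytorch_model.bin` sits in a local directory together with a GPT2 `config.json` and the BERT vocabulary file; the directory name is an assumption, not part of the original card:
```python
from transformers import BertTokenizer, GPT2LMHeadModel, TextGenerationPipeline

# Assumed layout: ./converted/ contains pytorch_model.bin, config.json and vocab.txt.
tokenizer = BertTokenizer.from_pretrained("./converted")
model = GPT2LMHeadModel.from_pretrained("./converted")

# Generate a lower line for an upper line to confirm the conversion worked.
text_generator = TextGenerationPipeline(model, tokenizer)
print(text_generator("[CLS]丹 枫 江 冷 人 初 去 -", max_length=25, do_sample=True))
```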
### BibTeX entry and citation info
```
@article{radford2019language,
title={Language Models are Unsupervised Multitask Learners},
author={Radford, Alec and Wu, Jeff and Child, Rewon and Luan, David and Amodei, Dario and Sutskever, Ilya},
year={2019}
}
@article{zhao2019uer,
title={UER: An Open-Source Toolkit for Pre-training Models},
author={Zhao, Zhe and Chen, Hui and Zhang, Jinbin and Zhao, Xin and Liu, Tao and Lu, Wei and Chen, Xi and Deng, Haotang and Ju, Qi and Du, Xiaoyong},
journal={EMNLP-IJCNLP 2019},
pages={241},
year={2019}
}
@article{zhao2023tencentpretrain,
title={TencentPretrain: A Scalable and Flexible Toolkit for Pre-training Models of Different Modalities},
author={Zhao, Zhe and Li, Yudong and Hou, Cheng and Zhao, Jing and others},
journal={ACL 2023},
pages={217},
year={2023}
}
``` |