widget:
- text: "[CLS]国 色 天 香 , 姹 紫 嫣 红 , 碧 水 青 云 欣 共 赏 -"

---

# Chinese GPT2 Language Models

## Model description

This is a set of two Chinese GPT2 language models pre-trained by [UER-py](https://www.aclweb.org/anthology/D19-3041.pdf).

You can download the two Chinese GPT2 language models from Hugging Face via the links below:

| Model | [gpt2-chinese-poem][poem] | [gpt2-chinese-couplet][couplet] |
| :-----------: | :------------------------------------------: | :-------------------------------------: |
| Training data | Contains about 800,000 Chinese ancient poems | Contains about 700,000 Chinese couplets |

## How to use

Because the parameter ***skip_special_tokens*** is used in ***pipelines.py***, special tokens such as [SEP] and [UNK] are removed from the output, so the generated text may not be neatly formatted.

You can use this model directly with a pipeline for text generation.

When the parameter ***skip_special_tokens*** is True:

```python
>>> from transformers import BertTokenizer, GPT2LMHeadModel, TextGenerationPipeline
>>> tokenizer = BertTokenizer.from_pretrained("uer/gpt2-chinese-couplet")
>>> model = GPT2LMHeadModel.from_pretrained("uer/gpt2-chinese-couplet")
>>> text_generator = TextGenerationPipeline(model, tokenizer)
>>> text_generator("[CLS]丹 枫 江 冷 人 初 去 -", max_length=25, do_sample=True)
[{'generated_text': '[CLS]丹 枫 江 冷 人 初 去 - 黄 叶 声 从 天 外 来 阅 旗'}]
```

When the parameter ***skip_special_tokens*** is False:

```python
>>> from transformers import BertTokenizer, GPT2LMHeadModel, TextGenerationPipeline
>>> tokenizer = BertTokenizer.from_pretrained("uer/gpt2-chinese-poem")
>>> model = GPT2LMHeadModel.from_pretrained("uer/gpt2-chinese-poem")
>>> text_generator = TextGenerationPipeline(model, tokenizer)
>>> text_generator("[CLS]丹 枫 江 冷 人 初 去 -", max_length=25, do_sample=True)
[{'generated_text': '[CLS]丹 枫 江 冷 人 初 去 - 黄 叶 声 我 酒 不 辞 [SEP] [SEP] [SEP] [SEP] [SEP] [SEP] [SEP] [SEP] [SEP]'}]
```
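
If you want to control special-token handling yourself rather than relying on the pipeline, a minimal sketch (not part of the original card) is to call ***model.generate*** directly and choose ***skip_special_tokens*** when decoding:

```python
>>> from transformers import BertTokenizer, GPT2LMHeadModel
>>> tokenizer = BertTokenizer.from_pretrained("uer/gpt2-chinese-couplet")
>>> model = GPT2LMHeadModel.from_pretrained("uer/gpt2-chinese-couplet")
>>> # Encode the prompt as-is; add_special_tokens=False keeps the tokenizer
>>> # from prepending its own [CLS] on top of the one already in the prompt.
>>> inputs = tokenizer("[CLS]丹 枫 江 冷 人 初 去 -", return_tensors="pt", add_special_tokens=False)
>>> output_ids = model.generate(inputs["input_ids"], max_length=25, do_sample=True)
>>> # Flip skip_special_tokens to decide whether [SEP]/[UNK] survive decoding.
>>> tokenizer.decode(output_ids[0], skip_special_tokens=True)
```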

## Training data

The poem model is trained on about 800,000 Chinese ancient poems and the couplet model on about 700,000 Chinese couplets.

## Training procedure

The models are pre-trained by [UER-py](https://github.com/dbiir/UER-py/) on [Tencent Cloud TI-ONE](https://cloud.tencent.com/product/tione/) for 25,000 steps with a sequence length of 64. The commands below take the couplet model as an example.

```
python3 preprocess.py --corpus_path corpora/couplet.txt \
                      --vocab_path models/google_zh_vocab.txt \
                      --dataset_path couplet.pt --processes_num 16 \
                      --seq_length 64 --target lm
```

```
python3 pretrain.py --dataset_path couplet.pt \
                    --vocab_path models/google_zh_vocab.txt \
                    --output_model_path models/couplet_gpt_base_model.bin \
                    --config_path models/bert_base_config.json --learning_rate 5e-4 \
                    --tie_weight --world_size 8 --gpu_ranks 0 1 2 3 4 5 6 7 \
                    --batch_size 64 --report_steps 1000 \
                    --save_checkpoint_steps 5000 --total_steps 25000 \
                    --embedding gpt --encoder gpt2 --target lm
```
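
To publish the result on Hugging Face, the UER-py checkpoint still has to be converted to the Hugging Face format. A sketch of that step is below; the script name, checkpoint filename, and flags are taken from the UER-py repository and may differ between versions, so treat them as assumptions:

```
python3 scripts/convert_gpt2_from_uer_to_huggingface.py --input_model_path models/couplet_gpt_base_model.bin-25000 \
                                                        --output_model_path pytorch_model.bin \
                                                        --layers_num 12
```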

### BibTeX entry and citation info

```
@article{zhao2019uer,
  title={UER: An Open-Source Toolkit for Pre-training Models},
  author={Zhao, Zhe and Chen, Hui and Zhang, Jinbin and Zhao, Xin and Liu, Tao and Lu, Wei and Chen, Xi and Deng, Haotang and Ju, Qi and Du, Xiaoyong},
  journal={EMNLP-IJCNLP 2019},
  pages={241},
  year={2019}
}
```

[poem]: https://huggingface.co/uer/gpt2-chinese-poem
[couplet]: https://huggingface.co/uer/gpt2-chinese-couplet