hhou435 committed on
Commit
1309ba1
1 Parent(s): 2fb5b37
Files changed (3)
  1. README.md +75 -0
  2. pytorch_model.bin +1 -1
  3. tf_model.h5 +1 -1
README.md ADDED
@@ -0,0 +1,75 @@
+ ---
+ language: zh
+ widget:
+ - text: "当是时"
+ ---
+
+ # Chinese GPT2 Ancient Model
+
+ ## Model description
+
+ This model is used to generate ancient Chinese text. You can download it either from the [GPT2-Chinese Github page](https://github.com/Morizeyao/GPT2-Chinese) or via HuggingFace from the link [gpt2-chinese-ancient](https://huggingface.co/uer/gpt2-chinese-ancient).
+
+ ## How to use
+
+ You can use the model directly with a pipeline for text generation:
+
+ ```python
+ >>> from transformers import BertTokenizer, GPT2LMHeadModel, TextGenerationPipeline
+ >>> tokenizer = BertTokenizer.from_pretrained("uer/gpt2-chinese-ancient")
+ >>> model = GPT2LMHeadModel.from_pretrained("uer/gpt2-chinese-ancient")
+ >>> text_generator = TextGenerationPipeline(model, tokenizer)
+ >>> text_generator("当是时", max_length=100, do_sample=True)
+ [{'generated_text': '当是时 所 议 者 不 为 无 据 , 况 亦 在 之 列 乎 ? 然 则 今 日 之 事 , 所 当 思 者 在 何 ? 欲 求 国 是 于 天 下 , 莫 在 于 得 人 。 臣 以 为 求 人 之 法 , 不 在 多 用 官 一 途 。 诚 使 得 才 者 众 , 人 才 者 优 , 则 治 所 当 得 , 而 不 事 于 官 者 , 人 才 乃 其 常 也 。 所 当 讲 者'}]
+ ```
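+
+ If you want finer control over decoding, you can also call `model.generate` directly. Below is a minimal sketch; the `top_k` and `top_p` values are illustrative assumptions, not settings used to produce the example above.
+
+ ```python
+ # Sketch of direct generation; the sampling parameters are illustrative
+ # assumptions, not values recommended by the model authors.
+ from transformers import BertTokenizer, GPT2LMHeadModel
+
+ tokenizer = BertTokenizer.from_pretrained("uer/gpt2-chinese-ancient")
+ model = GPT2LMHeadModel.from_pretrained("uer/gpt2-chinese-ancient")
+
+ input_ids = tokenizer.encode("当是时", return_tensors="pt")
+ output = model.generate(input_ids, max_length=100, do_sample=True,
+                         top_k=50, top_p=0.95)
+ print(tokenizer.decode(output[0], skip_special_tokens=True))
+ ```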
+
+ ## Training data
+
+ The training data contains 3,000,000 ancient Chinese sentences, collected from [daizhigev20](https://github.com/garychowcmu/daizhigev20).
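+
+ As a hedged illustration of how such a corpus might be flattened into the single text file that `preprocess.py` reads below (the local clone path and the one-example-per-line layout are assumptions):
+
+ ```python
+ # Hypothetical sketch: flatten a local clone of daizhigev20 into the
+ # corpus file consumed by UER-py's preprocess.py below.
+ from pathlib import Path
+
+ corpus_root = Path("daizhigev20")               # assumed local clone of the corpus
+ out_path = Path("corpora/ancient_chinese.txt")  # path used by preprocess.py
+ out_path.parent.mkdir(parents=True, exist_ok=True)
+
+ with out_path.open("w", encoding="utf-8") as out:
+     for txt in sorted(corpus_root.rglob("*.txt")):
+         for line in txt.read_text(encoding="utf-8", errors="ignore").splitlines():
+             line = line.strip()
+             if line:
+                 out.write(line + "\n")
+ ```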
+
+ ## Training procedure
+
+ The model is pre-trained by [UER-py](https://github.com/dbiir/UER-py/) on [Tencent Cloud TI-ONE](https://cloud.tencent.com/product/tione/). We pre-train for 500,000 steps with a sequence length of 320.
+
+ ```
+ python3 preprocess.py --corpus_path corpora/ancient_chinese.txt \
+                       --vocab_path models/google_zh_vocab.txt \
+                       --dataset_path ancient_chinese_dataset.pt --processes_num 16 \
+                       --seq_length 320 --target lm
+ ```
+
+ ```
+ python3 pretrain.py --dataset_path ancient_chinese_dataset.pt \
+                     --vocab_path models/google_zh_vocab.txt \
+                     --output_model_path models/ancient_chinese_base_model.bin \
+                     --config_path models/bert_base_config.json \
+                     --world_size 8 --gpu_ranks 0 1 2 3 4 5 6 7 \
+                     --total_steps 500000 --save_checkpoint_steps 100000 --report_steps 10000 \
+                     --learning_rate 5e-4 --batch_size 32 \
+                     --embedding word_pos --remove_embedding_layernorm \
+                     --encoder transformer --mask causal --layernorm_positioning pre \
+                     --target lm --tie_weight
+ ```
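+
+ The flags above select a GPT-2 style decoder: `--mask causal` for autoregressive attention, `--layernorm_positioning pre` for pre-layernorm blocks, and `--tie_weight` to tie input and output embeddings, on top of the 12-layer, 768-hidden sizing implied by `bert_base_config.json`. A rough Hugging Face equivalent of that shape (the dimensions and vocabulary size are inferred, so treat them as assumptions):
+
+ ```python
+ # Assumed architecture sketch; dimensions inferred from the BERT-base
+ # config name and the vocab file, not read from the released config.
+ from transformers import GPT2Config, GPT2LMHeadModel
+
+ config = GPT2Config(vocab_size=21128, n_positions=320,
+                     n_embd=768, n_layer=12, n_head=12)
+ model = GPT2LMHeadModel(config)  # randomly initialized, same shape family
+ ```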
+
+ Finally, we convert the pre-trained model into Hugging Face's format:
+
+ ```
+ python3 scripts/convert_gpt2_from_uer_to_huggingface.py --input_model_path ancient_chinese_base_model.bin-500000 \
+                                                         --output_model_path pytorch_model.bin \
+                                                         --layers_num 12
+ ```
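+
+ After conversion, a quick sanity check is to load the converted weights from a local directory; a minimal sketch, assuming `config.json` and `vocab.txt` sit next to `pytorch_model.bin` (the directory name is hypothetical):
+
+ ```python
+ # Hypothetical check: load the converted checkpoint from a local directory
+ # that also contains config.json and vocab.txt.
+ from transformers import BertTokenizer, GPT2LMHeadModel
+
+ model = GPT2LMHeadModel.from_pretrained("./gpt2-chinese-ancient")
+ tokenizer = BertTokenizer.from_pretrained("./gpt2-chinese-ancient")
+ print(model.config.n_layer)  # expect 12, matching --layers_num above
+ ```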
+
+ ### BibTeX entry and citation info
+
+ ```
+ @article{zhao2019uer,
+   title={UER: An Open-Source Toolkit for Pre-training Models},
+   author={Zhao, Zhe and Chen, Hui and Zhang, Jinbin and Zhao, Xin and Liu, Tao and Lu, Wei and Chen, Xi and Deng, Haotang and Ju, Qi and Du, Xiaoyong},
+   journal={EMNLP-IJCNLP 2019},
+   pages={241},
+   year={2019}
+ }
+ ```
pytorch_model.bin CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:a335f4ea173e63bcacf1342ed63c7b588dfed5ed229f58432ba8014c91be548c
+ oid sha256:4ba1d96901351a4eb148bac920daf315a51d3df343eb03d0595f70a3a9976bfc
  size 419348431
tf_model.h5 CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:3c6097d4193a1566b845bebad27bfcedb8fe0477a18eedbb153f96052f7d2a4d
+ oid sha256:9ef1b96776204a762e6fdfadcf5437b2f4147b8eb65bddaee9414fd2772e0da4
  size 406876496