hhou435 committed
Commit
90b5d2e
1 Parent(s): f2b1b30

Update README

Files changed (4)
  1. README.md +77 -0
  2. config.json +1 -1
  3. pytorch_model.bin +1 -1
  4. tf_model.h5 +1 -1
README.md ADDED
@@ -0,0 +1,77 @@
---
language: Chinese
widget:
- text: "最美的不是下雨天,是曾与你躲过雨的屋檐"
---

# Chinese GPT2 Model

## Model description

The model is used to generate Chinese lyrics. You can download it from the link [gpt2-chinese-lyric](https://huggingface.co/uer/gpt2-chinese-lyric).

## How to use

You can use the model directly with a pipeline for text generation:

```python
>>> from transformers import BertTokenizer, GPT2LMHeadModel, TextGenerationPipeline
>>> tokenizer = BertTokenizer.from_pretrained("uer/gpt2-chinese-lyric")
>>> model = GPT2LMHeadModel.from_pretrained("uer/gpt2-chinese-lyric")
>>> text_generator = TextGenerationPipeline(model, tokenizer)
>>> text_generator("最美的不是下雨天,是曾与你躲过雨的屋檐", max_length=100, do_sample=True)
[{'generated_text': '最美的不是下雨天,是曾与你躲过雨的屋檐 , 下 课 铃 声 响 起 的 瞬 间 , 我 们 的 笑 脸 , 有 太 多 回 忆 在 浮 现 , 是 你 总 在 我 身 边 , 不 知 道 会 不 会 再 见 , 从 现 在 开 始 到 永 远 , 想 说 的 语 言 凝 结 成 一 句 , 不 管 我 们 是 否 能 够 兑 现 , 想 说 的 语 言 凝 结'}]
```
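
Because `do_sample=True` makes generation stochastic, each call returns different lyrics. A minimal sketch of making runs repeatable and overriding the generation length per call (the seed value and sampling arguments here are illustrative, not part of the original card):

```python
from transformers import (BertTokenizer, GPT2LMHeadModel,
                          TextGenerationPipeline, set_seed)

tokenizer = BertTokenizer.from_pretrained("uer/gpt2-chinese-lyric")
model = GPT2LMHeadModel.from_pretrained("uer/gpt2-chinese-lyric")
text_generator = TextGenerationPipeline(model, tokenizer)

set_seed(42)  # fix the RNG state so sampled output is reproducible
result = text_generator(
    "最美的不是下雨天,是曾与你躲过雨的屋檐",
    max_length=200,  # explicit value overrides the default from config.json
    do_sample=True,
    top_k=50,        # illustrative: sample only from the 50 most likely tokens
)
print(result[0]["generated_text"])
```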

## Training data

The training data contains 150,000 Chinese lyrics, collected by the [Chinese-Lyric-Corpus](https://github.com/gaussic/Chinese-Lyric-Corpus) and [MusicLyricChatbot](https://github.com/liuhuanyong/MusicLyricChatbot) projects.
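
The `preprocess.py` command below reads the corpus from the plain-text file `corpora/lyric.txt`. As a rough sketch only (the `lyrics.json` source file, its schema, and the one-lyric-per-line layout are assumptions for illustration, not part of the original recipe), the collected lyrics might be flattened into that file like this:

```python
import json

# Hypothetical input: the collected lyrics merged into one JSON list,
# e.g. [{"title": "...", "lyric": "line1\nline2\n..."}, ...].
with open("lyrics.json", encoding="utf-8") as f:
    songs = json.load(f)

with open("corpora/lyric.txt", "w", encoding="utf-8") as out:
    for song in songs:
        # Write one song per line (assumed document layout for preprocessing).
        out.write(song["lyric"].replace("\n", "") + "\n")
```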

## Training procedure

The model is pre-trained by [UER-py](https://github.com/dbiir/UER-py/) on [Tencent Cloud TI-ONE](https://cloud.tencent.com/product/tione/). We pre-train for 100,000 steps with a sequence length of 512 on the basis of the model [gpt2-base-chinese-cluecorpussmall](https://huggingface.co/uer/gpt2-base-chinese-cluecorpussmall).

```
python3 preprocess.py --corpus_path corpora/lyric.txt \
                      --vocab_path models/google_zh_vocab.txt \
                      --dataset_path lyric_lm_seq512_dataset.pt \
                      --seq_length 512 --processes_num 32 --target lm
```

```
python3 pretrain.py --dataset_path lyric_lm_seq512_dataset.pt \
                    --pretrained_model_path gpt2-base-chinese-cluecorpussmall/pytorch_model.bin \
                    --vocab_path models/google_zh_vocab.txt \
                    --output_model_path models/lyric_gpt2_seq512_model.bin \
                    --config_path models/bert_base_config.json --learning_rate 5e-5 \
                    --world_size 8 --gpu_ranks 0 1 2 3 4 5 6 7 --tie_weight \
                    --embedding word_pos --remove_embedding_layernorm \
                    --encoder transformer --mask causal --layernorm_positioning pre \
                    --target lm --batch_size 64 --total_steps 100000 \
                    --save_checkpoint_steps 10000 --report_steps 5000
```

Finally, we convert the pre-trained model into Hugging Face's format:

```
python3 scripts/convert_gpt2_from_uer_to_huggingface.py --input_model_path lyric_gpt2_seq512_model.bin-100000 \
                                                        --output_model_path pytorch_model.bin \
                                                        --layers_num 12
```
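
After conversion, the exported weights can be sanity-checked by loading them with `transformers`. A quick sketch, assuming `pytorch_model.bin` sits alongside the model's `config.json` and `vocab.txt` in the current directory:

```python
from transformers import BertTokenizer, GPT2LMHeadModel

# Load the converted checkpoint from the local directory.
tokenizer = BertTokenizer.from_pretrained(".")
model = GPT2LMHeadModel.from_pretrained(".")

# Generate a short continuation to confirm the weights behave sensibly.
input_ids = tokenizer("最美的不是下雨天", return_tensors="pt").input_ids
output = model.generate(input_ids, max_length=50, do_sample=True)
print(tokenizer.decode(output[0]))
```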

### BibTeX entry and citation info

```
@article{zhao2019uer,
  title={UER: An Open-Source Toolkit for Pre-training Models},
  author={Zhao, Zhe and Chen, Hui and Zhang, Jinbin and Zhao, Xin and Liu, Tao and Lu, Wei and Chen, Xi and Deng, Haotang and Ju, Qi and Du, Xiaoyong},
  journal={EMNLP-IJCNLP 2019},
  pages={241},
  year={2019}
}
```
config.json CHANGED
@@ -20,7 +20,7 @@
   "task_specific_params": {
     "text-generation": {
       "do_sample": true,
-      "max_length": 512
+      "max_length": 128
     }
   },
   "tokenizer_class": "BertTokenizer",
pytorch_model.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:ef616ae6d77ded35cea7e4bdd3dbc78e2b52d31223eab9f435893a4ba0216b61
+oid sha256:98d71975ce622315019ba39f0815b00a7b2b00773e43194cd3fa923abf48ffa1
 size 419348431
tf_model.h5 CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:cf24c2c912cc60f43082d2211c4bd0ea84d7d6c5cbdaa7501d15d00defba0520
+oid sha256:7db5c6e8ca0b5846f12c5bd30822c17de26781fa0f117ff0ea7673b6b321e875
 size 406876496