uer committed
Commit 7474d77
Parent: 6574967

Update README.md

Files changed (1):
README.md (+5 -5)
README.md CHANGED
@@ -43,7 +43,7 @@ When the parameter skip_special_tokens is False:
 
 ## Training data
 
-Contains 700,000 Chinese couplets collected by [couplet-clean-dataset](https://github.com/v-zich/couplet-clean-dataset).
+Training data contains 700,000 Chinese couplets which are collected by [couplet-clean-dataset](https://github.com/v-zich/couplet-clean-dataset).
 
 ## Training procedure
 
@@ -52,14 +52,14 @@ Models are pre-trained by [UER-py](https://github.com/dbiir/UER-py/) on [Tencent
 ```
 python3 preprocess.py --corpus_path corpora/couplet.txt \
                       --vocab_path models/google_zh_vocab.txt \
-                      --dataset_path couplet.pt --processes_num 16 \
+                      --dataset_path couplet_dataset.pt --processes_num 16 \
                       --seq_length 64 --target lm
 ```
 
 ```
-python3 pretrain.py --dataset_path couplet.pt \
+python3 pretrain.py --dataset_path couplet_dataset.pt \
                     --vocab_path models/google_zh_vocab.txt \
-                    --output_model_path models/couplet_gpt_base_model.bin \
+                    --output_model_path models/couplet_gpt2_base_model.bin \
                     --config_path models/bert_base_config.json --learning_rate 5e-4 \
                     --tie_weight --world_size 8 --gpu_ranks 0 1 2 3 4 5 6 7 \
                     --embedding word_pos --remove_embedding_layernorm \
@@ -71,7 +71,7 @@ python3 pretrain.py --dataset_path couplet.pt \
 
 Finally, we convert the pre-trained model into Huggingface's format:
 ```
-python3 scripts/convert_gpt2_from_uer_to_huggingface.py --input_model_path couplet_gpt_base_model.bin \
+python3 scripts/convert_gpt2_from_uer_to_huggingface.py --input_model_path couplet_gpt2_base_model.bin-25000 \
 --output_model_path pytorch_model.bin \
 --layers_num 12
 ```
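For context, here is a minimal sketch of how the converted checkpoint can be used for inference with Hugging Face Transformers. The `-25000` suffix on the input model path presumably refers to the checkpoint saved after 25,000 training steps. The repository id `uer/gpt2-chinese-couplet` and the sample input line are assumptions for illustration, not part of this commit; any directory containing the converted pytorch_model.bin plus a matching config and vocab works the same way.

```python
# Minimal usage sketch (assumption: the converted model is published as
# "uer/gpt2-chinese-couplet"; a local directory with pytorch_model.bin,
# config.json, and vocab.txt can be used in its place).
from transformers import BertTokenizer, GPT2LMHeadModel, TextGenerationPipeline

# UER-py's google_zh_vocab.txt is a BERT-style vocabulary, so the model is
# paired with BertTokenizer rather than GPT2Tokenizer.
tokenizer = BertTokenizer.from_pretrained("uer/gpt2-chinese-couplet")
model = GPT2LMHeadModel.from_pretrained("uer/gpt2-chinese-couplet")

text_generator = TextGenerationPipeline(model, tokenizer)

# Feed the first line of a couplet (characters separated by spaces, matching
# the training corpus format) and let the model generate the second line.
print(text_generator("[CLS]丹 枫 江 冷 人 初 去 -", max_length=25, do_sample=True))
```

Here `do_sample=True` makes generation stochastic, so each call can produce a different second line; omitting it falls back to greedy decoding.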