uer committed
Commit
d3552f7
1 Parent(s): 50d4acd

Update README.md

Files changed (1)
  1. README.md +5 -5
README.md CHANGED
@@ -44,7 +44,7 @@ When the parameter skip_special_tokens is False:
 
 ## Training data
 
-Contains 800,000 Chinese ancient poems collected by [chinese-poetry](https://github.com/chinese-poetry/chinese-poetry) and [Poetry](https://github.com/Werneror/Poetry) projects.
+Training data contains 800,000 Chinese ancient poems which are collected by [chinese-poetry](https://github.com/chinese-poetry/chinese-poetry) and [Poetry](https://github.com/Werneror/Poetry) projects.
 
 ## Training procedure
 
@@ -53,14 +53,14 @@ The model is pre-trained by [UER-py](https://github.com/dbiir/UER-py/) on [Tence
 ```
 python3 preprocess.py --corpus_path corpora/poem.txt \
                       --vocab_path models/google_zh_vocab.txt \
-                      --dataset_path poem.pt --processes_num 16 \
+                      --dataset_path poem_dataset.pt --processes_num 16 \
                       --seq_length 128 --target lm
 ```
 
 ```
-python3 pretrain.py --dataset_path poem.pt \
+python3 pretrain.py --dataset_path poem_dataset.pt \
                     --vocab_path models/google_zh_vocab.txt \
-                    --output_model_path models/poem_gpt_base_model.bin \
+                    --output_model_path models/poem_gpt2_base_model.bin \
                     --config_path models/bert_base_config.json --learning_rate 5e-4 \
                     --tie_weight --world_size 8 --gpu_ranks 0 1 2 3 4 5 6 7 \
                     --embedding word_pos --remove_embedding_layernorm \
@@ -72,7 +72,7 @@ python3 pretrain.py --dataset_path poem.pt \
 
 Finally, we convert the pre-trained model into Huggingface's format:
 ```
-python3 scripts/convert_gpt2_from_uer_to_huggingface.py --input_model_path poem_gpt_base_model.bin \
+python3 scripts/convert_gpt2_from_uer_to_huggingface.py --input_model_path poem_gpt2_base_model.bin-200000 \
                                                         --output_model_path pytorch_model.bin \
                                                         --layers_num 12
 ```
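
After conversion, the checkpoint loads like any other GPT-2 model in transformers. The sketch below is illustrative and not part of the commit: it assumes the converted pytorch_model.bin sits in a model directory (or a published repo such as uer/gpt2-chinese-poem) alongside a matching config.json and vocab.txt, and it pairs GPT2LMHeadModel with BertTokenizer, the combination UER's Chinese GPT-2 models use.

```
from transformers import BertTokenizer, GPT2LMHeadModel, TextGenerationPipeline

# Directory or hub repo id holding the converted pytorch_model.bin together
# with config.json and vocab.txt; "uer/gpt2-chinese-poem" is an assumption.
model_name = "uer/gpt2-chinese-poem"

tokenizer = BertTokenizer.from_pretrained(model_name)
model = GPT2LMHeadModel.from_pretrained(model_name)

# UER's preprocessing marks the start of a poem with [CLS], so the prompt
# begins with it; characters are space-separated to match the vocab.
text_generator = TextGenerationPipeline(model, tokenizer)
print(text_generator("[CLS] 梅 山 如 积 翠 ，", max_length=50, do_sample=True))
```

With skip_special_tokens left at its default, tokens such as [CLS] and [SEP] appear in the generated string, which is what the note at line 44 of the README refers to.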