Update README.md
README.md
@@ -46,12 +46,13 @@ python3 pretrain.py --dataset_path lyric_dataset.pt \
                     --pretrained_model_path gpt2-base-chinese-cluecorpussmall/pytorch_model.bin \
                     --vocab_path models/google_zh_vocab.txt \
                     --output_model_path models/lyric_gpt2_seq512_model.bin \
-                    --config_path models/bert_base_config.json
-                    --world_size 8 --gpu_ranks 0 1 2 3 4 5 6 7
+                    --config_path models/bert_base_config.json \
+                    --world_size 8 --gpu_ranks 0 1 2 3 4 5 6 7 \
+                    --total_steps 100000 --save_checkpoint_steps 10000 --report_steps 5000 \
+                    --learning_rate 5e-5 --batch_size 64 \
                     --embedding word_pos --remove_embedding_layernorm \
                     --encoder transformer --mask causal --layernorm_positioning pre \
-                    --target lm \
-                    --save_checkpoint_steps 10000 --report_steps 5000
+                    --target lm --tie_weight
 ```
 
 Finally, we convert the pre-trained model into Huggingface's format:
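The conversion command itself falls outside this hunk. A minimal sketch of that step, assuming UER-py's `scripts/convert_gpt2_from_uer_to_huggingface.py` converter with its `--input_model_path`, `--output_model_path`, and `--layers_num` options, and the 12-layer base model implied by `bert_base_config.json`:

```
# Hypothetical conversion step (not part of this diff): map the UER checkpoint
# produced above to a Huggingface-style pytorch_model.bin.
python3 scripts/convert_gpt2_from_uer_to_huggingface.py --input_model_path models/lyric_gpt2_seq512_model.bin \
                                                        --output_model_path pytorch_model.bin \
                                                        --layers_num 12
```

The `--layers_num 12` value assumes the 12-layer base configuration; adjust it if the config file specifies a different depth.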