uer committed on
Commit
0fa9a51
1 Parent(s): 20bc9d6

Update README.md

Files changed (1)
  1. README.md +2 -16
README.md CHANGED
@@ -36,25 +36,11 @@ The model is pre-trained by [UER-py](https://github.com/dbiir/UER-py/) on [Tence
 
 
  ```
- python3 preprocess.py --corpus_path corpora/cluecorpussmall.txt \
- --vocab_path models/google_zh_vocab.txt \
- --dataset_path cluecorpussmall_bart_seq512_dataset.pt \
- --processes_num 32 --seq_length 512 \
- --dynamic_masking --target bart
+ python3 preprocess.py --corpus_path corpora/cluecorpussmall.txt \
+ --vocab_path models/google_zh_vocab.txt \
+ --dataset_path cluecorpussmall_bart_seq512_dataset.pt \
+ --processes_num 32 --seq_length 512 \
+ --target bart
  ```
 
  ```
- python3 pretrain.py --dataset_path cluecorpussmall_bart_seq512_dataset.pt \
- --vocab_path models/google_zh_vocab.txt \
- --config_path models/bart/base_config.json \
- --output_model_path models/cluecorpussmall_bart_base_seq512_model.bin \
- --world_size 8 --gpu_ranks 0 1 2 3 4 5 6 7 \
- --total_steps 1000000 --save_checkpoint_steps 100000 --report_steps 50000 \
- --learning_rate 1e-4 --batch_size 16 \
- --span_masking --span_max_length 3 \
- --embedding word_pos --tgt_embedding word_pos \
- --encoder transformer --mask fully_visible --decoder transformer \
- --target bart --tie_weights --has_lmtarget_bias
+ python3 pretrain.py --dataset_path cluecorpussmall_bart_seq512_dataset.pt \
+ --vocab_path models/google_zh_vocab.txt \
+ --config_path models/bart/base_config.json \
+ --output_model_path models/cluecorpussmall_bart_base_seq512_model.bin \
+ --world_size 8 --gpu_ranks 0 1 2 3 4 5 6 7 \
+ --total_steps 1000000 --save_checkpoint_steps 100000 --report_steps 50000 \
+ --learning_rate 1e-4 --batch_size 16 \
+ --span_masking --span_max_length 3 \
+ --embedding word_pos --tgt_embedding word_pos \
+ --encoder transformer --mask fully_visible --decoder transformer \
+ --target bart --tie_weights --has_lmtarget_bias
  ```
 
  Finally, we convert the pre-trained model into Huggingface's format:
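The conversion command itself falls outside this hunk. A minimal sketch of that step, assuming UER-py's `scripts/convert_bart_from_uer_to_huggingface.py` script with its usual `--input_model_path`, `--output_model_path`, and `--layers_num` flags, and a checkpoint name derived from the `--output_model_path` and `--total_steps` values above (the exact file name and flags are assumptions, not taken from this commit):

```
# Convert the UER-py checkpoint saved at step 1000000 into Huggingface's format.
# 6 encoder layers is the usual value for a BART base configuration.
python3 scripts/convert_bart_from_uer_to_huggingface.py --input_model_path models/cluecorpussmall_bart_base_seq512_model.bin-1000000 \
--output_model_path pytorch_model.bin \
--layers_num 6
```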