Update README.md
README.md CHANGED
@@ -40,7 +40,7 @@ python3 preprocess.py --corpus_path corpora/cluecorpussmall.txt \
                       --vocab_path models/google_zh_vocab.txt \
                       --dataset_path cluecorpussmall_bart_seq512_dataset.pt \
                       --processes_num 32 --seq_length 512 \
-                      --
+                      --data_processor bart
 ```
 
 ```
@@ -51,10 +51,7 @@ python3 pretrain.py --dataset_path cluecorpussmall_bart_seq512_dataset.pt \
                     --world_size 8 --gpu_ranks 0 1 2 3 4 5 6 7 \
                     --total_steps 1000000 --save_checkpoint_steps 100000 --report_steps 50000 \
                     --learning_rate 1e-4 --batch_size 16 \
-                    --span_masking --span_max_length 3 \
-                    --embedding word_pos --tgt_embedding word_pos \
-                    --encoder transformer --mask fully_visible --decoder transformer \
-                    --target bart --tie_weights --has_lmtarget_bias
+                    --span_masking --span_max_length 3
 ```
 
 Finally, we convert the pre-trained model into Huggingface's format:
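The conversion command itself is outside this hunk's context. As a minimal sketch, assuming the checkpoint is saved as models/cluecorpussmall_bart_seq512_model.bin-1000000 and that the BART converter follows the --input_model_path / --output_model_path / --layers_num pattern of UER-py's other convert_*_from_uer_to_huggingface.py helpers (the script name, checkpoint path, and layer count below are assumptions, not part of this diff):

```
# Hypothetical invocation; verify the script name and flags in the repository's scripts/ directory.
python3 scripts/convert_bart_from_uer_to_huggingface.py --input_model_path models/cluecorpussmall_bart_seq512_model.bin-1000000 \
                                                        --output_model_path pytorch_model.bin \
                                                        --layers_num 6
```

The converted weights can then be loaded through Huggingface Transformers alongside the vocabulary file and a matching model config.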