stefan-it committed on
Commit
3082f3c
1 Parent(s): fda8864

readme: training command fixes

Files changed (1)
  1. README.md +1 -1
README.md CHANGED
@@ -91,7 +91,7 @@ model as back-bone model. Thus, the tokenizer and vocab is the same as used in t
 The model was trained on a v3-8 TPU, with the following parameters:
 
 ```bash
-python ./run_clm_flax.py --output_dir=/mnt/datasets/german-gpt2-larger/ \\n--name_or_path dbmdz/german-gpt2 --do_train --do_eval --block_size=512 \\n--per_device_train_batch_size=16 --per_device_eval_batch_size=16 \\n--learning_rate=5e-3 --warmup_steps=1000 --adam_beta1=0.9 --adam_beta2=0.98 \\n--weight_decay=0.01 --overwrite_output_dir --num_train_epochs=20 \\n--logging_steps=500 --save_steps=2500 --eval_steps=2500 \\n--train_file /mnt/datasets/gc4/train.txt \\n--validation_file /mnt/datasets/gc4/validation.txt \\n--preprocessing_num_workers 16
+python ./run_clm_flax.py --output_dir=/mnt/datasets/german-gpt2-larger/ --name_or_path dbmdz/german-gpt2 --do_train --do_eval --block_size=512 --per_device_train_batch_size=16 --per_device_eval_batch_size=16 --learning_rate=5e-3 --warmup_steps=1000 --adam_beta1=0.9 --adam_beta2=0.98 --weight_decay=0.01 --overwrite_output_dir --num_train_epochs=20 --logging_steps=500 --save_steps=2500 --eval_steps=2500 --train_file /mnt/datasets/gc4/train.txt --validation_file /mnt/datasets/gc4/validation.txt --preprocessing_num_workers 16
 ```
 
 Training took around 17 days for 20 epochs.
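For readability, the corrected command can also be written with shell line continuations; this is functionally identical to the single-line version in the diff, with the same flags and the paths exactly as given there:

```bash
# Same training command as in the fixed README, wrapped with line
# continuations for readability (functionally identical to the one-line form).
python ./run_clm_flax.py \
  --output_dir=/mnt/datasets/german-gpt2-larger/ \
  --name_or_path dbmdz/german-gpt2 \
  --do_train --do_eval \
  --block_size=512 \
  --per_device_train_batch_size=16 \
  --per_device_eval_batch_size=16 \
  --learning_rate=5e-3 \
  --warmup_steps=1000 \
  --adam_beta1=0.9 --adam_beta2=0.98 \
  --weight_decay=0.01 \
  --overwrite_output_dir \
  --num_train_epochs=20 \
  --logging_steps=500 --save_steps=2500 --eval_steps=2500 \
  --train_file /mnt/datasets/gc4/train.txt \
  --validation_file /mnt/datasets/gc4/validation.txt \
  --preprocessing_num_workers 16
```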