regisss (HF staff) and astachowicz committed
Commit
8c9958f
1 Parent(s): 61f4c93

Update README.md (#2)


- Update README.md (177d8fd1827f7b799c2ee5d6f1e19ee533dc0cb5)
- Update README.md (6a8bc56c3b3fa3140058e5f18b9baa30d3dd06a6)


Co-authored-by: Adam Stachowicz <astachowicz@users.noreply.huggingface.co>

Files changed (1):
  1. README.md +5 -3
README.md CHANGED

@@ -25,9 +25,9 @@ It is strongly recommended to train this model doing bf16 mixed-precision training
 
 [Here](https://github.com/huggingface/optimum-habana/blob/main/examples/question-answering/run_qa.py) is a question-answering example script to fine-tune a model on SQuAD. You can run it with BERT Large with the following command:
 ```bash
-python run_qa.py \
+PT_HPU_LAZY_MODE=0 python run_qa.py \
   --model_name_or_path bert-large-uncased-whole-word-masking \
-  --gaudi_config_name gaudi_config_name_or_path \
+  --gaudi_config_name Habana/bert-large-uncased-whole-word-masking \
   --dataset_name squad \
   --do_train \
   --do_eval \
@@ -39,7 +39,9 @@ python run_qa.py \
   --doc_stride 128 \
   --output_dir /tmp/squad/ \
   --use_habana \
-  --use_lazy_mode \
+  --torch_compile_backend hpu_backend \
+  --torch_compile \
+  --use_lazy_mode false \
   --throughput_warmup_steps 3 \
   --bf16
 ```
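Since the diff markers make the updated command awkward to copy, here is a sketch of it assembled from the new side of the two hunks. Note that it is not the complete README command: the flags on README lines 34–38 fall between the hunks and are not shown in this diff, so they are omitted here rather than guessed at.

```shell
PT_HPU_LAZY_MODE=0 python run_qa.py \
  --model_name_or_path bert-large-uncased-whole-word-masking \
  --gaudi_config_name Habana/bert-large-uncased-whole-word-masking \
  --dataset_name squad \
  --do_train \
  --do_eval \
  --doc_stride 128 \
  --output_dir /tmp/squad/ \
  --use_habana \
  --torch_compile_backend hpu_backend \
  --torch_compile \
  --use_lazy_mode false \
  --throughput_warmup_steps 3 \
  --bf16
```

The net effect of the change is to switch the example from Gaudi lazy mode (`--use_lazy_mode`, now set to `false`, with `PT_HPU_LAZY_MODE=0` in the environment) to `torch.compile` with the `hpu_backend` backend, and to replace the `gaudi_config_name_or_path` placeholder with the concrete `Habana/bert-large-uncased-whole-word-masking` Gaudi config.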