jwieczorekhabana committed
Commit 64740b8
1 Parent(s): 474b4fa

Update README.md

Files changed (1)
  1. README.md +4 -6
README.md CHANGED
````diff
@@ -13,18 +13,16 @@ This model only contains the `GaudiConfig` file for running the [distilbert-base
  **This model contains no model weights, only a GaudiConfig.**
  
  This enables you to specify:
- - `use_habana_mixed_precision`: whether to use Habana Mixed Precision (HMP)
- - `hmp_opt_level`: optimization level for HMP, see [here](https://docs.habana.ai/en/latest/PyTorch/PyTorch_Mixed_Precision/PT_Mixed_Precision.html#configuration-options) for a detailed explanation
- - `hmp_bf16_ops`: list of operators that should run in bf16
- - `hmp_fp32_ops`: list of operators that should run in fp32
- - `hmp_is_verbose`: verbosity
  - `use_fused_adam`: whether to use Habana's custom AdamW implementation
  - `use_fused_clip_norm`: whether to use Habana's fused gradient norm clipping operator
+ - `disable_autocast`: whether to disable autocast; this parameter takes precedence over the `--bf16` flag and is temporary, as some scripts produce NaN values.
+ In those cases, this parameter is already set in the Hugging Face model's Habana `gaudi_config.json`.
  
  ## Usage
  
  The model is instantiated the same way as in the Transformers library.
- The only difference is that there are a few new training arguments specific to HPUs.
+ The only difference is that there are a few new training arguments specific to HPUs.\
+ This model is supported only in mixed-precision training with the bf16 type.
  
  [Here](https://github.com/huggingface/optimum-habana/blob/main/examples/question-answering/run_qa.py) is a question-answering example script to fine-tune a model on SQuAD. You can run it with DistilBERT with the following command:
  ```bash
````
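
For context on how the `GaudiConfig` flags discussed above are consumed, here is a minimal sketch using `optimum-habana`'s `GaudiConfig.from_pretrained`; the repo id `Habana/distilbert-base-uncased` is an assumption, since the diff truncates the model name:

```python
from optimum.habana import GaudiConfig

# Minimal sketch: load the GaudiConfig from the Hub and inspect the flags
# described in the README. The repo id below is assumed, not taken from the diff.
gaudi_config = GaudiConfig.from_pretrained("Habana/distilbert-base-uncased")

print(gaudi_config.use_fused_adam)       # Habana's custom AdamW implementation
print(gaudi_config.use_fused_clip_norm)  # Habana's fused gradient norm clipping
```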
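
Likewise, a hedged sketch of the usage note above (instantiate the model exactly as with Transformers, plus a few HPU-specific training arguments); argument names follow `optimum-habana`'s `GaudiTrainingArguments`, dataset wiring is omitted, and `bf16=True` reflects the bf16-only support stated in the diff:

```python
from optimum.habana import GaudiTrainer, GaudiTrainingArguments
from transformers import AutoModelForQuestionAnswering, AutoTokenizer

# The model and tokenizer are created exactly as with plain Transformers.
model = AutoModelForQuestionAnswering.from_pretrained("distilbert-base-uncased")
tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")

# HPU-specific training arguments. The gaudi_config_name repo id is assumed,
# as above.
training_args = GaudiTrainingArguments(
    output_dir="./results",
    use_habana=True,     # run training on Gaudi HPUs
    use_lazy_mode=True,  # HPU lazy-execution mode
    bf16=True,           # bf16 mixed precision (the only supported mode here)
    gaudi_config_name="Habana/distilbert-base-uncased",
)

trainer = GaudiTrainer(
    model=model,
    args=training_args,
    tokenizer=tokenizer,
    # train_dataset=..., eval_dataset=... would be passed here for real training.
)
```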