---
license: apache-2.0
---

[Optimum Habana](https://github.com/huggingface/optimum-habana) is the interface between the Transformers library and Habana's Gaudi processor (HPU). It provides a set of tools enabling easy and fast model loading and fine-tuning on single- and multi-HPU settings for different downstream tasks.

Learn more about how to take advantage of the power of Habana HPUs to train Transformers models at [hf.co/Habana](https://huggingface.co/Habana).

# RoBERTa Base model HPU configuration

This model contains just the `GaudiConfig` file for running the [roberta-base](https://huggingface.co/roberta-base) model on Habana's Gaudi processors (HPU).

**This model contains no model weights, only a GaudiConfig.**
This file enables you to specify the following parameters (a short loading example follows the list):
- `use_habana_mixed_precision`: whether to use Habana Mixed Precision (HMP)
- `hmp_opt_level`: optimization level for HMP, see [here](https://docs.habana.ai/en/latest/PyTorch/PyTorch_User_Guide/PT_Mixed_Precision.html#configuration-options) for a detailed explanation
- `hmp_bf16_ops`: list of operators that should run in bf16
- `hmp_fp32_ops`: list of operators that should run in fp32
- `hmp_is_verbose`: whether to enable verbose logging for HMP
- `use_fused_adam`: whether to use Habana's custom AdamW implementation
- `use_fused_clip_norm`: whether to use Habana's fused gradient norm clipping operator
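For instance, this configuration can be downloaded from the Hub and inspected programmatically. This is a minimal sketch: attribute access mirrors the fields listed above and may differ across Optimum Habana versions.

```python
from optimum.habana import GaudiConfig

# Download the GaudiConfig stored in this repository
gaudi_config = GaudiConfig.from_pretrained("Habana/roberta-base")

# Inspect some of the fields described above
print(gaudi_config.use_habana_mixed_precision)
print(gaudi_config.use_fused_adam)
print(gaudi_config.hmp_bf16_ops)
```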
## Usage

The model is instantiated the same way as in the Transformers library. The only difference is that the Gaudi configuration associated with this model has to be loaded and provided to the trainer.
```python
from transformers import RobertaTokenizer, RobertaModel
from optimum.habana import GaudiTrainer, GaudiTrainingArguments, GaudiConfig

# Load the tokenizer and model exactly as with the Transformers library
tokenizer = RobertaTokenizer.from_pretrained("roberta-base")
model = RobertaModel.from_pretrained("roberta-base")

# Load the Gaudi configuration hosted in this repository
gaudi_config = GaudiConfig.from_pretrained("Habana/roberta-base")

# use_habana=True targets HPUs; use_lazy_mode=True enables Habana's lazy execution mode
args = GaudiTrainingArguments(output_dir="path_to_my_output_dir", use_habana=True, use_lazy_mode=True)

# Pass the Gaudi configuration alongside the usual Trainer arguments;
# a real run would also provide train_dataset, data_collator, etc.
trainer = GaudiTrainer(model=model, gaudi_config=gaudi_config, args=args, tokenizer=tokenizer)
trainer.train()
```
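For a fuller picture, the sketch below extends the example above to an end-to-end training run. The masked-language-modeling objective, the WikiText-2 dataset, and the preprocessing are illustrative assumptions, not part of this model card; `GaudiTrainer` is assumed to accept the same dataset and collator arguments as the Transformers `Trainer`, which it subclasses.

```python
# Illustrative end-to-end fine-tuning sketch (dataset and objective are
# assumptions for demonstration, not prescribed by this model card)
from datasets import load_dataset
from transformers import DataCollatorForLanguageModeling, RobertaForMaskedLM, RobertaTokenizer
from optimum.habana import GaudiConfig, GaudiTrainer, GaudiTrainingArguments

tokenizer = RobertaTokenizer.from_pretrained("roberta-base")
model = RobertaForMaskedLM.from_pretrained("roberta-base")
gaudi_config = GaudiConfig.from_pretrained("Habana/roberta-base")

# Tokenize an illustrative text corpus
dataset = load_dataset("wikitext", "wikitext-2-raw-v1", split="train")
dataset = dataset.map(
    lambda examples: tokenizer(examples["text"], truncation=True, max_length=128),
    batched=True,
    remove_columns=["text"],
)

args = GaudiTrainingArguments(output_dir="path_to_my_output_dir", use_habana=True, use_lazy_mode=True)
trainer = GaudiTrainer(
    model=model,
    gaudi_config=gaudi_config,
    args=args,
    train_dataset=dataset,
    data_collator=DataCollatorForLanguageModeling(tokenizer=tokenizer),
    tokenizer=tokenizer,
)
trainer.train()
```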