Update README.md
README.md
CHANGED
@@ -26,7 +26,7 @@ The model is instantiated the same way as in the Transformers library.
 The only difference is that there are a few new training arguments specific to HPUs:

 ```
-from optimum.habana import
+from optimum.habana import GaudiTrainer, GaudiTrainingArguments
 from transformers import BertTokenizer, BertModel

 tokenizer = BertTokenizer.from_pretrained("bert-large-uncased-whole-word-masking")
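For context, a minimal sketch of how the two newly imported classes are typically wired together (assuming `optimum-habana` is installed and an HPU is available; the `output_dir` and `gaudi_config_name` values, as well as `model` and `train_dataset`, are illustrative placeholders, not part of the diff):

```python
# Sketch only: requires optimum-habana and a Gaudi (HPU) device to actually run.
from optimum.habana import GaudiTrainer, GaudiTrainingArguments

# The HPU-specific training arguments the README refers to.
training_args = GaudiTrainingArguments(
    output_dir="./out",     # illustrative path
    use_habana=True,        # run training on HPU
    use_lazy_mode=True,     # enable HPU lazy-execution mode
    gaudi_config_name="Habana/bert-large-uncased-whole-word-masking",  # illustrative
)

# GaudiTrainer is used like transformers.Trainer; model and train_dataset
# are placeholders (e.g. the BertModel loaded in the snippet above).
trainer = GaudiTrainer(
    model=model,
    args=training_args,
    train_dataset=train_dataset,
)
trainer.train()
```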