Update README.md
README.md (changed)

````diff
@@ -17,9 +17,9 @@ large batch size of 0.5M tokens. A smaller 109 million parameter model can also
 #### How to use
 
 ```python
-from transformers import MegatronBertModel,
+from transformers import MegatronBertModel,PreTrainedTokenizerFast
 model = MegatronBertModel.from_pretrained("mmukh/SOBertLarge")
-tokenizer =
+tokenizer = PreTrainedTokenizerFast.from_pretrained("mmukh/SOBertLarge")
 
 ```
````
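The updated snippet only loads the encoder and its fast tokenizer. A common next step with a BERT-style encoder like this is to turn `last_hidden_state` into one sentence embedding by attention-mask-weighted mean pooling. The sketch below illustrates that pattern; the `mean_pool` helper and the example query are illustrative additions, not part of the model card, and the pooling choice is an assumption rather than something the README prescribes.

```python
import torch


def mean_pool(last_hidden_state, attention_mask):
    # Average the token embeddings, ignoring padding positions.
    mask = attention_mask.unsqueeze(-1).float()     # (batch, seq, 1)
    summed = (last_hidden_state * mask).sum(dim=1)  # (batch, hidden)
    counts = mask.sum(dim=1).clamp(min=1e-9)        # (batch, 1), avoid div by 0
    return summed / counts


if __name__ == "__main__":
    # Loading the checkpoint requires network access and the
    # `transformers` library; this mirrors the README snippet.
    from transformers import MegatronBertModel, PreTrainedTokenizerFast

    model = MegatronBertModel.from_pretrained("mmukh/SOBertLarge")
    tokenizer = PreTrainedTokenizerFast.from_pretrained("mmukh/SOBertLarge")

    batch = tokenizer(
        ["How do I sort a dict by value in Python?"],  # hypothetical query
        return_tensors="pt",
    )
    with torch.no_grad():
        out = model(**batch)
    embedding = mean_pool(out.last_hidden_state, batch["attention_mask"])
    print(embedding.shape)  # (1, hidden_size)
```

Mean pooling over non-padding tokens is only one option; taking the `[CLS]` vector works too, depending on the downstream task.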