pdelobelle committed
Commit bc2035f
1 Parent(s): 279de95

Update README.md

Files changed (1)
  1. README.md +3 -3
README.md CHANGED
@@ -35,9 +35,9 @@ By default, RobBERT-2022 has the masked language model head used in training. Th
 
 
 ```python
-from transformers import AutoTokenizer, AutoForSequenceClassification
-tokenizer = RobertaTokenizer.from_pretrained("DTAI-KULeuven/robbert-2022-dutch-base")
-model = RobertaForSequenceClassification.from_pretrained("DTAI-KULeuven/robbert-2022-dutch-base")
+from transformers import AutoTokenizer, AutoModelForSequenceClassification
+tokenizer = AutoTokenizer.from_pretrained("DTAI-KULeuven/robbert-2022-dutch-base")
+model = AutoModelForSequenceClassification.from_pretrained("DTAI-KULeuven/robbert-2022-dutch-base")
 ```
 
 You can then use most of [HuggingFace's BERT-based notebooks](https://huggingface.co/transformers/v4.1.1/notebooks.html) for finetuning RobBERT-2022 on your type of Dutch language dataset.
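The substance of this commit is that the old snippet could not run: `AutoForSequenceClassification` is not a class exported by `transformers`, and `RobertaTokenizer`/`RobertaForSequenceClassification` were called without ever being imported. A minimal sketch verifying the corrected names, assuming only that the `transformers` package is installed (no model download required):

```python
# Check that the class names used in the corrected snippet exist in
# transformers, and that the misspelled name from the old snippet does not.
# Assumes the transformers package is installed.
import transformers

assert hasattr(transformers, "AutoTokenizer")
assert hasattr(transformers, "AutoModelForSequenceClassification")
assert not hasattr(transformers, "AutoForSequenceClassification")  # old snippet's typo
print("corrected import names are valid")
```

The `Auto*` classes also sidestep the need to know the exact architecture: `AutoTokenizer.from_pretrained` and `AutoModelForSequenceClassification.from_pretrained` resolve to the RoBERTa-specific classes from the checkpoint's config, so the corrected snippet works even if the underlying model class changes.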