pdelobelle committed
Commit 271b8bf
Parent: 2931540

Update README.md

Files changed (1)
  1. README.md +5 -4
README.md CHANGED
@@ -1,11 +1,12 @@
 ---
-language: "nl"
-thumbnail: "https://github.com/iPieter/RobBERT/raw/master/res/robbert_logo.png"
+language: nl
+thumbnail: https://github.com/iPieter/RobBERT/raw/master/res/robbert_logo.png
 tags:
 - Dutch
 - Flemish
 - RoBERTa
 - RobBERT
+- BERT
 license: mit
 datasets:
 - oscar
@@ -14,7 +15,7 @@ datasets:
 - europarl-mono
 - conll2002
 widget:
-- text: "Hallo, ik ben RobBERT, een <mask> taalmodel van de KU Leuven."
+- text: Hallo, ik ben RobBERT, een <mask> taalmodel van de KU Leuven.
 ---
 
 <p align="center">
@@ -53,7 +54,7 @@ RobBERT uses the [RoBERTa](https://arxiv.org/abs/1907.11692) architecture and pr
 
 By default, RobBERT has the masked language model head used in training. This can be used as a zero-shot way to fill masks in sentences. It can be tested out for free on [RobBERT's Hosted inference API of Huggingface](https://huggingface.co/pdelobelle/robbert-v2-dutch-base?text=De+hoofdstad+van+Belgi%C3%AB+is+%3Cmask%3E.). You can also create a new prediction head for your own task by using any of HuggingFace's [RoBERTa-runners](https://huggingface.co/transformers/v2.7.0/examples.html#language-model-training), [their fine-tuning notebooks](https://huggingface.co/transformers/v4.1.1/notebooks.html) by changing the model name to `pdelobelle/robbert-v2-dutch-base`, or use the original fairseq [RoBERTa](https://github.com/pytorch/fairseq/tree/master/examples/roberta) training regimes.
 
-Use the following code to download the base model and finetune it yourself, or use one of our finetuned models (documented on [our project site](https://people.cs.kuleuven.be/~pieter.delobelle/robbert/)).
+Use the following code to download the base model and finetune it yourself, or use one of our finetuned models (documented on [our project site](https://pieter.ai/robbert/)).
 
 ```python
 from transformers import RobertaTokenizer, RobertaForSequenceClassification
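
The context paragraph above describes zero-shot mask filling with the model's default MLM head. A minimal sketch of that use, assuming only the standard `transformers` pipeline API, with the widget sentence from the front matter as a quick test input:

```python
from transformers import pipeline

# Fill-mask pipeline with the model id named in this diff; the widget
# sentence from the YAML front matter doubles as a smoke test.
unmasker = pipeline("fill-mask", model="pdelobelle/robbert-v2-dutch-base")
for candidate in unmasker("Hallo, ik ben RobBERT, een <mask> taalmodel van de KU Leuven."):
    print(candidate["token_str"], candidate["score"])
```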
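
The last hunk cuts the README's fine-tuning snippet off at its first import. A sketch of how that snippet plausibly continues, assuming the usual `from_pretrained` loading path rather than the README's verbatim code:

```python
from transformers import RobertaTokenizer, RobertaForSequenceClassification

# Download the base model and attach a fresh sequence-classification head,
# ready to fine-tune on a downstream Dutch task.
tokenizer = RobertaTokenizer.from_pretrained("pdelobelle/robbert-v2-dutch-base")
model = RobertaForSequenceClassification.from_pretrained("pdelobelle/robbert-v2-dutch-base")
```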