julien-c HF staff committed on
Commit f4cc78d
1 Parent(s): cb1b224

Migrate model card from transformers-repo


Read announcement at https://discuss.huggingface.co/t/announcement-all-model-cards-will-be-migrated-to-hf-co-model-repos/2755
Original file history: https://github.com/huggingface/transformers/commits/master/model_cards/pdelobelle/robbert-v2-dutch-base/README.md

Files changed (1)
  1. README.md +164 -0
README.md ADDED
@@ -0,0 +1,164 @@
---
language: "nl"
thumbnail: "https://github.com/iPieter/RobBERT/raw/master/res/robbert_logo.png"
tags:
- Dutch
- RoBERTa
- RobBERT
license: mit
datasets:
- oscar
- Shuffled Dutch section of the OSCAR corpus (https://oscar-corpus.com/)
---

# RobBERT

## Model description

[RobBERT v2](https://github.com/iPieter/RobBERT) is a Dutch state-of-the-art [RoBERTa](https://arxiv.org/abs/1907.11692)-based language model.

More detailed information can be found in the [RobBERT paper](https://arxiv.org/abs/2001.06286).

## How to use

```python
from transformers import RobertaTokenizer, RobertaForSequenceClassification

# Load the RobBERT tokenizer and the pre-trained encoder with a sequence-classification head
tokenizer = RobertaTokenizer.from_pretrained("pdelobelle/robbert-v2-dutch-base")
model = RobertaForSequenceClassification.from_pretrained("pdelobelle/robbert-v2-dutch-base")
```
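
Since RobBERT was pre-trained with the masked language modelling objective, the base checkpoint can also be queried directly for masked-word predictions (the sequence-classification head loaded above is randomly initialised until it is fine-tuned). A minimal sketch, with an example sentence of our own:

```python
from transformers import pipeline

# Zero-shot masked-word prediction with the pre-trained MLM head
# (the example sentence is our own illustration, not from the model card).
fill_mask = pipeline("fill-mask", model="pdelobelle/robbert-v2-dutch-base")
for prediction in fill_mask(f"Er staat een {fill_mask.tokenizer.mask_token} in de tuin."):
    print(prediction["token_str"], round(prediction["score"], 3))
```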

## Performance Evaluation Results

All experiments are described in more detail in our [paper](https://arxiv.org/abs/2001.06286).

### Sentiment analysis

Predicting whether a review is positive or negative using the [Dutch Book Reviews Dataset](https://github.com/benjaminvdb/110kDBRD).

| Model      | Accuracy [%] |
|------------|--------------|
| ULMFiT     | 93.8         |
| BERTje     | 93.0         |
| RobBERT v2 | **95.1**     |
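
As a sketch of how such a classifier could be set up (the toy reviews and label convention below are illustrative, not taken from the paper):

```python
import torch
from transformers import RobertaTokenizer, RobertaForSequenceClassification

# Binary sentiment setup; the classification head is freshly initialised and
# would still need fine-tuning on the full DBRD training set.
tokenizer = RobertaTokenizer.from_pretrained("pdelobelle/robbert-v2-dutch-base")
model = RobertaForSequenceClassification.from_pretrained("pdelobelle/robbert-v2-dutch-base", num_labels=2)

reviews = ["Wat een prachtig boek, ik heb ervan genoten.", "Slecht geschreven en erg saai."]
labels = torch.tensor([1, 0])  # 1 = positive, 0 = negative (our own convention)

batch = tokenizer(reviews, padding=True, truncation=True, return_tensors="pt")
loss = model(**batch, labels=labels).loss  # starting point for a fine-tuning loop
```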

### Die/Dat (coreference resolution)

We measured how well the models are able to do coreference resolution by predicting whether "die" or "dat" should be filled into a sentence.
For this, we used the [EuroParl corpus](https://www.statmt.org/europarl/).

#### Finetuning on whole dataset

| Model                                               | Accuracy [%] | F1 [%]     |
|-----------------------------------------------------|--------------|------------|
| [Baseline](https://arxiv.org/abs/2001.02943) (LSTM) |              | 75.03      |
| mBERT                                               | 98.285       | 98.033     |
| BERTje                                              | 98.268       | 98.014     |
| RobBERT v2                                          | **99.232**   | **99.121** |

#### Finetuning on 10K examples

We also measured the performance using only 10K training examples.
This experiment clearly illustrates that RobBERT outperforms other models when there is little data available.

| Model      | Accuracy [%] | F1 [%]     |
|------------|--------------|------------|
| mBERT      | 92.157       | 90.898     |
| BERTje     | 93.096       | 91.279     |
| RobBERT v2 | **97.816**   | **97.514** |

#### Using zero-shot word masking task

Since BERT models are pre-trained using the word masking task, we can use this to predict whether "die" or "dat" is more likely.
This experiment shows that RobBERT has internalised more information about Dutch than other models.

| Model      | Accuracy [%] |
|------------|--------------|
| ZeroR      | 66.70        |
| mBERT      | 90.21        |
| BERTje     | 94.94        |
| RobBERT v2 | **98.75**    |
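
Such a zero-shot comparison boils down to scoring the two candidate words at the masked position. A minimal sketch (the example sentence is our own, not from EuroParl):

```python
import torch
from transformers import RobertaTokenizer, RobertaForMaskedLM

tokenizer = RobertaTokenizer.from_pretrained("pdelobelle/robbert-v2-dutch-base")
model = RobertaForMaskedLM.from_pretrained("pdelobelle/robbert-v2-dutch-base")

# Mask the die/dat position and compare the probabilities of both candidates.
sentence = f"Ik las het boek {tokenizer.mask_token} jij me had aangeraden."
inputs = tokenizer(sentence, return_tensors="pt")
mask_index = (inputs["input_ids"][0] == tokenizer.mask_token_id).nonzero().item()

with torch.no_grad():
    probs = model(**inputs).logits[0, mask_index].softmax(dim=-1)

for word in [" die", " dat"]:  # leading space because of the byte-level BPE vocabulary
    token_id = tokenizer.encode(word, add_special_tokens=False)[0]
    print(word.strip(), float(probs[token_id]))
```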

### Part-of-Speech Tagging

Using the [Lassy UD dataset](https://universaldependencies.org/treebanks/nl_lassysmall/index.html).

| Model      | Accuracy [%] |
|------------|--------------|
| Frog       | 91.7         |
| mBERT      | **96.5**     |
| BERTje     | 96.3         |
| RobBERT v2 | 96.4         |

Interestingly, we found that when dealing with **small data sets**, RobBERT v2 **significantly outperforms** other models.

<p align="center">
<img src="https://github.com/iPieter/RobBERT/blob/master/res/robbert_pos_accuracy.png" alt="RobBERT's performance on smaller datasets">
</p>

### Named Entity Recognition

Using the [CoNLL 2002 evaluation script](https://www.clips.uantwerpen.be/conll2002/ner/).

| Model      | Accuracy [%] |
|------------|--------------|
| Frog       | 57.31        |
| mBERT      | **90.94**    |
| BERT-NL    | 89.7         |
| BERTje     | 88.3         |
| RobBERT v2 | 89.08        |
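
Both POS tagging and NER are token-level tasks, so they can be approached by putting a token-classification head on top of RobBERT. A hypothetical setup sketch (the label count and the example sentence are our own assumptions, and the head still needs fine-tuning):

```python
from transformers import RobertaTokenizer, RobertaForTokenClassification

tokenizer = RobertaTokenizer.from_pretrained("pdelobelle/robbert-v2-dutch-base")
model = RobertaForTokenClassification.from_pretrained(
    "pdelobelle/robbert-v2-dutch-base",
    num_labels=9,  # assumed: BIO tags for the four CoNLL-2002 entity types plus "O"
)

inputs = tokenizer("Pieter Delobelle werkt aan de KU Leuven.", return_tensors="pt")
print(model(**inputs).logits.shape)  # (1, sequence_length, num_labels)
```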

## Training procedure

We pre-trained RobBERT using the RoBERTa training regime.
We pre-trained our model on the Dutch section of the [OSCAR corpus](https://oscar-corpus.com/), a large multilingual corpus which was obtained by language classification of the Common Crawl corpus.
This Dutch corpus is 39 GB in size, with 6.6 billion words spread over 126 million lines of text, where each line can contain multiple sentences; it thus uses more data than concurrently developed Dutch BERT models.

RobBERT shares its architecture with [RoBERTa's base model](https://github.com/pytorch/fairseq/tree/master/examples/roberta), which itself is a replication and improvement over BERT.
Like BERT, its architecture consists of 12 self-attention layers with 12 heads each, totalling 117M trainable parameters.
One difference with the original BERT model comes from the pre-training tasks specified by RoBERTa: it uses only the MLM task and not the NSP task.
During pre-training, the model thus only predicts which words are masked at certain positions of given sentences.
The training process uses the Adam optimizer with polynomial decay of the learning rate (l_r = 10^-6) and a ramp-up period of 1000 iterations, with hyperparameters beta_1 = 0.9 and RoBERTa's default beta_2 = 0.98.
Additionally, a weight decay of 0.1 and a small dropout of 0.1 help prevent the model from overfitting.
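
For reference, a rough PyTorch/Transformers equivalent of these hyperparameters could look as follows; the original pre-training used Fairseq, so exact optimizer and scheduler defaults (such as the final learning rate of the decay) may differ:

```python
import torch
from transformers import RobertaForMaskedLM, get_polynomial_decay_schedule_with_warmup

# Approximate optimizer and schedule mirroring the hyperparameters above (our own sketch).
model = RobertaForMaskedLM.from_pretrained("pdelobelle/robbert-v2-dutch-base")
optimizer = torch.optim.AdamW(
    model.parameters(), lr=1e-6, betas=(0.9, 0.98), weight_decay=0.1
)
scheduler = get_polynomial_decay_schedule_with_warmup(
    optimizer, num_warmup_steps=1000, num_training_steps=16_000  # roughly 16k batches in total
)
```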

RobBERT was trained on a computing cluster with 4 Nvidia P100 GPUs per node, where the number of nodes was dynamically adjusted while keeping a fixed batch size of 8192 sentences.
At most 20 nodes were used (i.e. 80 GPUs), and the median was 5 nodes.
By using gradient accumulation, the batch size could be set independently of the number of GPUs available, in order to maximally utilize the cluster (see the sketch below).
Using the [Fairseq library](https://github.com/pytorch/fairseq/tree/master/examples/roberta), the model was trained for two epochs, over 16k batches in total, which took about three days on the computing cluster.
In between training jobs on the computing cluster, two Nvidia 1080 Ti GPUs also covered some parameter updates for RobBERT v2.
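
Gradient accumulation sums gradients over several small forward/backward passes before applying a single optimizer step, so the effective batch size is decoupled from what fits on the available GPUs. A self-contained toy illustration (our own sketch, not the original Fairseq code):

```python
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

# Toy model and data; in the real setup the effective batch was 8192 sentences.
model = nn.Linear(10, 2)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-6, betas=(0.9, 0.98), weight_decay=0.1)
criterion = nn.CrossEntropyLoss()
loader = DataLoader(TensorDataset(torch.randn(64, 10), torch.randint(0, 2, (64,))), batch_size=4)
accumulation_steps = 8  # effective batch = 4 * 8 = 32 in this toy example

optimizer.zero_grad()
for step, (x, y) in enumerate(loader):
    loss = criterion(model(x), y) / accumulation_steps  # scale so accumulated gradients average
    loss.backward()
    if (step + 1) % accumulation_steps == 0:
        optimizer.step()  # one update per accumulated "large" batch
        optimizer.zero_grad()
```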

## Limitations and bias

In the [RobBERT paper](https://arxiv.org/abs/2001.06286), we also investigated potential sources of bias in RobBERT.

We found that the zero-shot model estimates the probability of *hij* (he) to be higher than *zij* (she) for most occupations in bleached template sentences, regardless of the actual gender ratio of these occupations in reality.

<p align="center">
<img src="https://github.com/iPieter/RobBERT/blob/master/res/gender_diff.png" alt="Zero-shot probability of hij versus zij for various occupations">
</p>

By augmenting the DBRD Dutch Book sentiment analysis dataset with the stated gender of the author of the review, we found that highly positive reviews written by women were generally more accurately detected by RobBERT as being positive than those written by men.

<p align="center">
<img src="https://github.com/iPieter/RobBERT/blob/master/res/dbrd.png" alt="RobBERT's sentiment analysis performance on DBRD, split by reviewer gender">
</p>

## BibTeX entry and citation info

```bibtex
@misc{delobelle2020robbert,
    title={RobBERT: a Dutch RoBERTa-based Language Model},
    author={Pieter Delobelle and Thomas Winters and Bettina Berendt},
    year={2020},
    eprint={2001.06286},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}
```