Pablogps committed on
Commit
1bcdbad
1 Parent(s): d6a0dc0

Update README.md

Files changed (1)
  1. README.md +59 -14
README.md CHANGED
@@ -161,28 +161,73 @@ We are currently in the process of applying our language models to downstream ta
  **SQUAD-es**
  Using sequence length 128 we have achieved exact match 50.96 and F1 68.74.
 
- **POS**
 
 
  <figure>
 
- | Model | Metric |
- |----------------------------------------------------|----------|
- | bert-base-multilingual-cased | 0.9629 |
- | dccuchile/bert-base-spanish-wwm-cased | 0.9642 |
- | BSC-TeMU/roberta-base-bne | 0.9659 |
- | flax-community/bertin-roberta-large-spanish | 0.9646 |
- | bertin-project/bertin-roberta-base-spanish | 0.9638 |
- | bertin-project/bertin-base-random | 0.9656 |
- | bertin-project/bertin-base-stepwise | 0.9656 |
- | bertin-project/bertin-base-gaussian | **0.9662** |
- | bertin-project/bertin-base-random-exp-512seqlen | 0.9660 |
- | bertin-project/bertin-base-gaussian-exp-512seqlen | **0.9662** |
 
 
  <caption>Table 2. Results for POS.</caption>
  </figure>
 
- **Improve table 2 with details like number of epochs etc**
 
  # Conclusions
 
  **SQUAD-es**
  Using sequence length 128 we have achieved exact match 50.96 and F1 68.74.
 
+ **POS**
+ All models were trained with max length 512 and batch size 8, using the CoNLL 2002 dataset.
 
  <figure>
 
+ | Model | F1 | Accuracy |
+ |----------------------------------------------------|----------|----------|
+ | bert-base-multilingual-cased | 0.9629 | 0.9687 |
+ | dccuchile/bert-base-spanish-wwm-cased | 0.9642 | 0.9700 |
+ | BSC-TeMU/roberta-base-bne | 0.9659 | 0.9707 |
+ | flax-community/bertin-roberta-large-spanish | 0.9646 | 0.9697 |
+ | bertin-project/bertin-roberta-base-spanish | 0.9638 | 0.9690 |
+ | bertin-project/bertin-base-random | 0.9656 | 0.9704 |
+ | bertin-project/bertin-base-stepwise | 0.9656 | 0.9707 |
+ | bertin-project/bertin-base-gaussian | **0.9662** | 0.9709 |
+ | bertin-project/bertin-base-random-exp-512seqlen | 0.9660 | 0.9707 |
+ | bertin-project/bertin-base-gaussian-exp-512seqlen | **0.9662** | **0.9714** |
 
 
  <caption>Table 2. Results for POS.</caption>
  </figure>
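
As a point of reference, the sketch below shows one way such a POS run could be set up with the Hugging Face `Trainer`: max length 512 and batch size 8 follow the setup stated above, while the chosen checkpoint, number of epochs and learning rate are illustrative assumptions rather than the project's recorded configuration.

```python
# A minimal sketch (not the project's evaluation code) of a CoNLL 2002 POS run
# with the Hugging Face Trainer. Max length 512 and batch size 8 follow the
# README; the checkpoint, epochs and learning rate below are assumptions.
from datasets import load_dataset
from transformers import (
    AutoModelForTokenClassification,
    AutoTokenizer,
    DataCollatorForTokenClassification,
    Trainer,
    TrainingArguments,
)

MODEL = "bertin-project/bertin-base-gaussian"  # any checkpoint from Table 2
LABEL_COLUMN = "pos_tags"                      # CoNLL 2002 POS annotations

dataset = load_dataset("conll2002", "es")
label_names = dataset["train"].features[LABEL_COLUMN].feature.names

# add_prefix_space is needed to feed pre-tokenized words to a RoBERTa BPE tokenizer.
tokenizer = AutoTokenizer.from_pretrained(MODEL, add_prefix_space=True)
model = AutoModelForTokenClassification.from_pretrained(MODEL, num_labels=len(label_names))


def tokenize_and_align(batch):
    # Tokenize pre-split words and copy each word's tag to its first sub-token;
    # the remaining sub-tokens get -100 so the loss ignores them.
    encoded = tokenizer(
        batch["tokens"], truncation=True, max_length=512, is_split_into_words=True
    )
    labels = []
    for i, tags in enumerate(batch[LABEL_COLUMN]):
        word_ids = encoded.word_ids(batch_index=i)
        previous, ids = None, []
        for word_id in word_ids:
            if word_id is None or word_id == previous:
                ids.append(-100)
            else:
                ids.append(tags[word_id])
            previous = word_id
        labels.append(ids)
    encoded["labels"] = labels
    return encoded


tokenized = dataset.map(tokenize_and_align, batched=True)

args = TrainingArguments(
    output_dir="bertin-pos-conll2002",
    per_device_train_batch_size=8,   # batch size 8, as stated above
    per_device_eval_batch_size=8,
    num_train_epochs=3,              # assumption: not stated in the README
    learning_rate=5e-5,              # assumption
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=tokenized["train"],
    eval_dataset=tokenized["validation"],
    tokenizer=tokenizer,
    data_collator=DataCollatorForTokenClassification(tokenizer),
)
trainer.train()
```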
 
+
+ **NER**
+ All models were trained with max length 512 and batch size 8, using the CoNLL 2002 dataset.
+
+ <figure>
+
+ | Model | F1 | Accuracy |
+ |----------------------------------------------------|----------|----------|
+ | bert-base-multilingual-cased | 0.8539 | 0.9779 |
+ | dccuchile/bert-base-spanish-wwm-cased | 0.8579 | 0.9783 |
+ | BSC-TeMU/roberta-base-bne | 0.8700 | 0.9807 |
+ | flax-community/bertin-roberta-large-spanish | 0.8735 | 0.9806 |
+ | bertin-project/bertin-roberta-base-spanish | 0.8725 | 0.9812 |
+ | bertin-project/bertin-base-random | 0.8704 | 0.9807 |
+ | bertin-project/bertin-base-stepwise | 0.8705 | 0.9809 |
+ | bertin-project/bertin-base-gaussian | **0.8792** | **0.9816** |
+ | bertin-project/bertin-base-random-exp-512seqlen | 0.8616 | 0.9803 |
+ | bertin-project/bertin-base-gaussian-exp-512seqlen | **0.8764** | **0.9819** |
+
+
+ <caption>Table 3. Results for NER.</caption>
+ </figure>
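
The NER numbers in Table 3 come from the same kind of token-classification run; under the assumptions of the POS sketch above, only the label column and the reported metric would change:

```python
# Variation on the POS sketch above for the NER annotations of CoNLL 2002.
import evaluate

LABEL_COLUMN = "ner_tags"            # instead of "pos_tags"
seqeval = evaluate.load("seqeval")   # span-level precision/recall/F1 for NER
```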
+
+
+ **PAWS-X**
+ All models were trained with max length 512 and batch size 8. The spread in accuracy is surprising (some models stay below 0.60 while others reach close to 0.90), so each experiment was run 3 times with very similar results; the metrics reported are from the last run.
+
+ <figure>
+
+ | Model | Accuracy |
+ |----------------------------------------------------|----------|
+ | bert-base-multilingual-cased | 0.5765 |
+ | dccuchile/bert-base-spanish-wwm-cased | 0.5765 |
+ | BSC-TeMU/roberta-base-bne | 0.5765 |
+ | flax-community/bertin-roberta-large-spanish | 0.5765 |
+ | bertin-project/bertin-roberta-base-spanish | 0.6550 |
+ | bertin-project/bertin-base-random | 0.8665 |
+ | bertin-project/bertin-base-stepwise | 0.8610 |
+ | bertin-project/bertin-base-gaussian | **0.8800** |
+ | bertin-project/bertin-base-random-exp-512seqlen | 0.5765 |
+ | bertin-project/bertin-base-gaussian-exp-512seqlen | **0.8750** |
+
+
+ <caption>Table 4. Results for PAWS-X.</caption>
+ </figure>
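
For orientation only, a minimal sketch of a PAWS-X (Spanish) paraphrase-classification run with the stated max length 512 and batch size 8; the checkpoint, number of epochs and learning rate are assumptions.

```python
# A minimal sketch (assumptions: checkpoint, epochs, learning rate) of a
# PAWS-X (es) paraphrase-classification run matching the README's
# max length 512 / batch size 8 setup.
import numpy as np
from datasets import load_dataset
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

MODEL = "bertin-project/bertin-base-gaussian"  # any checkpoint from Table 4

dataset = load_dataset("paws-x", "es")  # columns: sentence1, sentence2, label
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForSequenceClassification.from_pretrained(MODEL, num_labels=2)


def tokenize(batch):
    # Encode each sentence pair jointly, truncating to 512 tokens.
    return tokenizer(
        batch["sentence1"], batch["sentence2"], truncation=True, max_length=512
    )


def accuracy(eval_pred):
    logits, labels = eval_pred
    return {"accuracy": (np.argmax(logits, axis=-1) == labels).mean()}


tokenized = dataset.map(tokenize, batched=True)

args = TrainingArguments(
    output_dir="bertin-pawsx-es",
    per_device_train_batch_size=8,  # batch size 8, as stated above
    per_device_eval_batch_size=8,
    num_train_epochs=3,             # assumption: not stated in the README
    learning_rate=5e-5,             # assumption
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=tokenized["train"],
    eval_dataset=tokenized["validation"],
    tokenizer=tokenizer,
    compute_metrics=accuracy,
)
trainer.train()
print(trainer.evaluate(tokenized["test"]))
```

No explicit data collator is needed here: when a tokenizer is passed to `Trainer`, dynamic padding via `DataCollatorWithPadding` is used by default.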
 
  # Conclusions