asier-gutierrez committed on
Commit 93a9a2f
1 Parent(s): 44e2be3

Update README.md

Files changed (1)
  1. README.md +10 -2
README.md CHANGED
@@ -87,6 +87,16 @@ It contains the following tasks and their related datasets:

Here are the train/dev/test splits of the datasets:

+ | Task (Dataset) | Total | Train | Dev | Test |
+ |:--|:--|:--|:--|:--|
+ | NER (Ancora) | 13,581 | 10,628 | 1,427 | 1,526 |
+ | POS (Ancora) | 16,678 | 13,123 | 1,709 | 1,846 |
+ | STS | 3,073 | 2,073 | 500 | 500 |
+ | TC (TeCla) | 137,775 | 110,203 | 13,786 | 13,786 |
+ | QA (ViquiQuAD) | 14,239 | 11,255 | 1,492 | 1,429 |
+
+ ### Results
+
| Task | NER (F1) | POS (F1) | STS (Pearson) | TC (accuracy) | QA (ViquiQuAD) (F1/EM) | QA (XQuAD) (F1/EM) |
| ------------ |:---:|:---:|:---:|:---:|:---:|:---:|
| RoBERTa-base-ca-v2 | **89.80** | **99.10** | **80.00** | **83.40** | **88.00** | **71.50** |
@@ -95,8 +105,6 @@ Here are the train/dev/test splits of the datasets:
| XLM-RoBERTa | 87.66 | 98.89 | 75.40 | 71.68 | 85.50/70.47 | 67.10/46.42 |
| WikiBERT-ca | 77.66 | 97.60 | 77.18 | 73.22 | 85.45/70.75 | 65.21/36.60 |

- ### Results
-
## Intended uses & limitations
The model is ready to use only for masked language modelling, i.e. the Fill Mask task (try the inference API or read the next section).
However, it is intended to be fine-tuned on non-generative downstream tasks such as Question Answering, Text Classification or Named Entity Recognition.
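As the intended-uses paragraph above says, the raw checkpoint only supports the Fill Mask task out of the box, while the downstream tasks in the tables require fine-tuning. A minimal sketch of both uses with the Hugging Face `transformers` library; the Hub identifier `projecte-aina/roberta-base-ca-v2` is an assumption made for illustration, so substitute this repository's actual model path:

```python
from transformers import pipeline, AutoTokenizer, AutoModelForSequenceClassification

# Assumed Hub identifier for this checkpoint; replace with the actual repo path.
MODEL_ID = "projecte-aina/roberta-base-ca-v2"

# Out of the box the model only does masked language modelling (Fill Mask).
fill_mask = pipeline("fill-mask", model=MODEL_ID)
for pred in fill_mask("La capital de Catalunya és <mask>."):
    print(pred["token_str"], round(pred["score"], 3))

# For downstream tasks such as Text Classification, load the encoder with a
# task-specific head (randomly initialised) and fine-tune it on labelled data.
tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_ID, num_labels=2)
```

The same pattern applies to the other evaluated tasks, e.g. `AutoModelForTokenClassification` for NER/POS and `AutoModelForQuestionAnswering` for ViquiQuAD/XQuAD-style QA.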