David committed on
Commit
0e56b3f
1 Parent(s): f5efd65

Update README.md

Files changed (1): README.md (+14 -12)
README.md CHANGED
@@ -51,24 +51,26 @@ We provide models fine-tuned on the [XNLI dataset](https://huggingface.co/datase
 
  ## Metrics
 
- We fine-tune our models on 4 different down-stream tasks:
+ We fine-tune our models on 3 different down-stream tasks:
 
  - [XNLI](https://huggingface.co/datasets/xnli)
  - [PAWS-X](https://huggingface.co/datasets/paws-x)
- - [CoNLL2002 - POS](https://huggingface.co/datasets/conll2002)
  - [CoNLL2002 - NER](https://huggingface.co/datasets/conll2002)
 
  For each task, we conduct 5 trials and state the mean and standard deviation of the metrics in the table below.
- To compare our results to other Spanish language models, we provide the same metrics taken from [Table 4](https://huggingface.co/bertin-project/bertin-roberta-base-spanish#results) of the Bertin-project model card.
-
- | Model | [CoNLL2002](https://huggingface.co/datasets/conll2002) - POS (acc) | [CoNLL2002](https://huggingface.co/datasets/conll2002) - NER (f1) | [PAWS-X](https://huggingface.co/datasets/paws-x) (acc) | [XNLI](https://huggingface.co/datasets/xnli) (acc) | Params |
- | --- | --- | --- | --- | --- | --- |
- | SELECTRA small | 0.9653 +- 0.0007 | 0.863 +- 0.004 | 0.896 +- 0.002 | 0.784 +- 0.002 | **22M** |
- | SELECTRA medium | 0.9677 +- 0.0004 | 0.870 +- 0.003 | 0.896 +- 0.002 | **0.804 +- 0.002** | 41M |
- | [mBERT](https://huggingface.co/bert-base-multilingual-cased) | 0.9689 | 0.8616 | 0.8895 | 0.7606 | 178M |
- | [BETO](https://huggingface.co/dccuchile/bert-base-spanish-wwm-cased) | 0.9693 | 0.8596 | 0.8720 | 0.8012 | 110M |
- | [BSC-BNE](https://huggingface.co/BSC-TeMU/roberta-base-bne) | **0.9706** | **0.8764** | 0.8815 | 0.7771 | 125M |
- | [Bertin](https://huggingface.co/bertin-project/bertin-roberta-base-spanish/tree/v1-512) | 0.9697 | 0.8707 | **0.8965** | 0.7843 | 125M |
+ To compare our results to other Spanish language models, we provide the same metrics taken from the [evaluation table](https://github.com/PlanTL-SANIDAD/lm-spanish#evaluation-) of the [Spanish Language Model](https://github.com/PlanTL-SANIDAD/lm-spanish) repo.
+
+ | Model | CoNLL2002 - NER (f1) | PAWS-X (acc) | XNLI (acc) | Params |
+ | --- | --- | --- | --- | --- |
+ | SELECTRA small | 0.865 +- 0.004 | 0.896 +- 0.002 | 0.784 +- 0.002 | 22M |
+ | SELECTRA medium | 0.873 +- 0.003 | 0.896 +- 0.002 | 0.804 +- 0.002 | 41M |
+ | | | | | |
+ | [mBERT](https://huggingface.co/bert-base-multilingual-cased) | 0.8691 | 0.8955 | 0.7876 | 178M |
+ | [BETO](https://huggingface.co/dccuchile/bert-base-spanish-wwm-cased) | 0.8759 | 0.9000 | 0.8130 | 110M |
+ | [RoBERTa-b](https://huggingface.co/BSC-TeMU/roberta-base-bne) | 0.8851 | 0.9000 | 0.8016 | 125M |
+ | [RoBERTa-l](https://huggingface.co/BSC-TeMU/roberta-large-bne) | 0.8772 | 0.9060 | 0.7958 | 355M |
+ | [Bertin](https://huggingface.co/bertin-project/bertin-roberta-base-spanish/tree/v1-512) | 0.8835 | 0.8990 | 0.7890 | 125M |
+ | [ELECTRICIDAD](https://huggingface.co/mrm8488/electricidad-base-discriminator) | 0.7954 | 0.9025 | 0.7878 | 109M |
 
  Some details of our fine-tuning runs:
  - epochs: 5
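
The README reports each score as mean +- standard deviation over 5 fine-tuning trials. A minimal sketch of how such a table entry can be produced, using made-up trial values for illustration (not the actual trial results):

```python
import statistics

# Hypothetical XNLI accuracies from 5 fine-tuning trials (illustrative only)
trials = [0.806, 0.802, 0.803, 0.805, 0.804]

mean = statistics.mean(trials)
std = statistics.stdev(trials)  # sample standard deviation across the trials

# Format the entry the way the README's table states it: "mean +- std"
entry = f"{mean:.3f} +- {std:.3f}"
print(entry)  # -> 0.804 +- 0.002
```

Note that `statistics.stdev` computes the sample (n-1) standard deviation, which is the usual choice when summarizing a small number of trials.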