David committed
Commit • 20f9940
1 Parent(s): ce08dcb
Create README.md

README.md ADDED
@@ -0,0 +1,56 @@
---
language:
- es
thumbnail: "url to a thumbnail used in social sharing"
tags:
- tag1
- tag2
license: apache-2.0
datasets:
- Oscar
metrics:
- metric1
- metric2
---

# SELECTRA: A Spanish ELECTRA

SELECTRA is a Spanish pre-trained language model based on [ELECTRA](https://github.com/google-research/electra).
We release `small` and `medium` versions with the following configurations:

| Model | Layers | Embedding/Hidden Size | Params | Vocab Size | Max Sequence Length | Cased |
| --- | --- | --- | --- | --- | --- | --- |
| SELECTRA small | 12 | 256 | 22M | 50k | 512 | True |
| SELECTRA medium | 12 | 384 | 41M | 50k | 512 | True |

## Usage

```python
from transformers import ElectraForPreTraining, ElectraTokenizerFast

# Load the discriminator together with its matching tokenizer
# (use "models/medium/pytorch_model" for the medium version)
discriminator = ElectraForPreTraining.from_pretrained("models/small/pytorch_model")
tokenizer = ElectraTokenizerFast.from_pretrained("models/small/pytorch_model")
```
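The discriminator can be used directly for replaced-token detection, the task ELECTRA models are pre-trained on. A minimal sketch of this standard `transformers` usage (the example sentence and the 0-logit decision threshold are our own illustration):

```python
import torch

sentence = "Los árboles de la ciudad son muy altos"
inputs = tokenizer(sentence, return_tensors="pt")

with torch.no_grad():
    # One logit per token; positive values mean the token is predicted to be a replacement
    logits = discriminator(**inputs).logits[0]

tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
for token, logit in zip(tokens, logits):
    print(f"{token}: {'replaced' if logit > 0 else 'original'}")
```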
- Links to our zero-shot classifiers

## Metrics

We fine-tune our models on 4 different downstream tasks (see the fine-tuning sketch below):

- [XNLI](https://huggingface.co/datasets/xnli)
- [PAWS-X](https://huggingface.co/datasets/paws-x)
- [CoNLL2002 - POS](https://huggingface.co/datasets/conll2002)
- [CoNLL2002 - NER](https://huggingface.co/datasets/conll2002)

We provide the mean and standard deviation of 5 fine-tuning runs.

| Model |
|
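As a rough sketch of how one of these fine-tuning runs can be set up with the `transformers` `Trainer` (the local model path, sequence length, and hyperparameters below are illustrative assumptions, not the settings behind the reported numbers), here is an example for XNLI:

```python
from datasets import load_dataset
from transformers import (
    ElectraForSequenceClassification,
    ElectraTokenizerFast,
    Trainer,
    TrainingArguments,
)

# XNLI (Spanish split) is a premise/hypothesis pair classification task with 3 labels
xnli = load_dataset("xnli", "es")
tokenizer = ElectraTokenizerFast.from_pretrained("models/small/pytorch_model")

def tokenize(batch):
    return tokenizer(
        batch["premise"], batch["hypothesis"],
        truncation=True, padding="max_length", max_length=128,
    )

xnli = xnli.map(tokenize, batched=True)

model = ElectraForSequenceClassification.from_pretrained(
    "models/small/pytorch_model", num_labels=3
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="selectra-small-xnli",
        num_train_epochs=3,
        per_device_train_batch_size=32,
    ),
    train_dataset=xnli["train"],
    eval_dataset=xnli["validation"],
)
trainer.train()
```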

## Training

- Link to our repo

## Motivation

Despite the abundance of excellent Spanish language models (BETO, BERTIN, etc.), we felt there was still a lack of distilled or compact models with metrics comparable to their bigger siblings.