gonzalez-agirre committed
Commit c363907
1 Parent(s): d6b15b5

First commit
README.md CHANGED
@@ -1,3 +1,187 @@
---
language:
- es
license: apache-2.0
tags:
- "national library of spain"
- "spanish"
- "bne"
- "xnli"
- "textual entailment"
datasets:
- "xnli"
metrics:
- "accuracy"
model-index:
- name: roberta-large-bne-te
  results:
  - task:
      type: text-classification
    dataset:
      type: xnli
      name: XNLI
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.8263
widget:
- "Me gustas. Te quiero."
- "Mi cumpleaños es el 27 de mayo. Cumpliré años a finales de mayo."
---

# Spanish RoBERTa-large trained on BNE, fine-tuned for the Spanish Cross-lingual Natural Language Inference (XNLI) dataset

## Table of contents
<details>
<summary>Click to expand</summary>

- [Model description](#model-description)
- [Intended uses and limitations](#intended-uses-and-limitations)
- [How to use](#how-to-use)
- [Limitations and bias](#limitations-and-bias)
- [Training](#training)
  - [Training data](#training-data)
  - [Training procedure](#training-procedure)
- [Evaluation](#evaluation)
  - [Variables and metrics](#variables-and-metrics)
  - [Evaluation results](#evaluation-results)
- [Additional information](#additional-information)
  - [Author](#author)
  - [Contact information](#contact-information)
  - [Copyright](#copyright)
  - [Licensing information](#licensing-information)
  - [Funding](#funding)
  - [Citing information](#citing-information)
  - [Disclaimer](#disclaimer)

</details>

## Model description
**roberta-large-bne-te** is a Textual Entailment (TE) model for Spanish, fine-tuned from the [roberta-large-bne](https://huggingface.co/PlanTL-GOB-ES/roberta-large-bne) model. The base model is a [RoBERTa](https://arxiv.org/abs/1907.11692) large model pre-trained on the largest Spanish corpus known to date: 570GB of clean and deduplicated text, processed for this work and compiled from the web crawlings performed by the [National Library of Spain (Biblioteca Nacional de España)](http://www.bne.es/en/Inicio/index.html) from 2009 to 2019.

## Intended uses and limitations
The **roberta-large-bne-te** model can be used to recognize Textual Entailment (TE). The model is limited by its training dataset and may not generalize well for all use cases.

## How to use
Here is how to use this model:

```python
from transformers import pipeline
from pprint import pprint

# Load the fine-tuned model through the text-classification pipeline.
nlp = pipeline("text-classification", model="PlanTL-GOB-ES/roberta-large-bne-te")

# Premise and hypothesis are passed together as a single string.
example = "Mi cumpleaños es el 27 de mayo. Cumpliré años a finales de mayo."

# The pipeline returns a list of dicts with the predicted label and its score.
te_results = nlp(example)
pprint(te_results)
```
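
The pipeline hides tokenization and post-processing. For finer control, the following is a minimal sketch using the generic `AutoModelForSequenceClassification` API; it is not taken from the official scripts, and the variable names are illustrative:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "PlanTL-GOB-ES/roberta-large-bne-te"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

# As in the widget examples, premise and hypothesis form one input string.
text = "Mi cumpleaños es el 27 de mayo. Cumpliré años a finales de mayo."
inputs = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

# Turn logits into probabilities and report them per label from the config.
probs = logits.softmax(dim=-1)[0]
for label_id, p in enumerate(probs.tolist()):
    print(model.config.id2label[label_id], f"{p:.4f}")
```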

## Limitations and bias
At the time of submission, no measures have been taken to estimate the bias embedded in the model. However, we are well aware that our models may be biased since the corpora have been collected using crawling techniques on multiple web sources. We intend to conduct research in these areas in the future, and if completed, this model card will be updated.

## Training

### Training data
We used the Spanish portion of the [XNLI dataset](https://huggingface.co/datasets/xnli), a Textual Entailment (TE) benchmark, for training and evaluation.

### Training procedure
The model was trained with a batch size of 16 and a learning rate of 1e-5 for 5 epochs. We then selected the best checkpoint using the downstream task metric on the corresponding development set, and finally evaluated that checkpoint on the test set. A sketch of this setup is shown below.
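
The exact fine-tuning script lives in the official repository linked under the evaluation results; what follows is only a hedged sketch of the reported setup with the `transformers` Trainer, where the output path and the tokenization details are assumptions:

```python
import numpy as np
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

# Spanish split of XNLI, as described in the training data section.
dataset = load_dataset("xnli", "es")

tokenizer = AutoTokenizer.from_pretrained("PlanTL-GOB-ES/roberta-large-bne")

def tokenize(batch):
    # Encode premise/hypothesis pairs; padding is applied by the collator.
    return tokenizer(batch["premise"], batch["hypothesis"], truncation=True)

dataset = dataset.map(tokenize, batched=True)

model = AutoModelForSequenceClassification.from_pretrained(
    "PlanTL-GOB-ES/roberta-large-bne", num_labels=3)

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    return {"accuracy": (np.argmax(logits, axis=-1) == labels).mean()}

args = TrainingArguments(
    output_dir="roberta-large-bne-te",   # hypothetical output path
    per_device_train_batch_size=16,      # batch size reported above
    learning_rate=1e-5,                  # learning rate reported above
    num_train_epochs=5,                  # epochs reported above
    evaluation_strategy="epoch",
    save_strategy="epoch",
    load_best_model_at_end=True,         # keep the best dev checkpoint
    metric_for_best_model="accuracy",
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=dataset["train"],
    eval_dataset=dataset["validation"],
    tokenizer=tokenizer,
    compute_metrics=compute_metrics,
)
trainer.train()
```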

## Evaluation

### Variables and metrics
This model was fine-tuned maximizing accuracy, i.e. the fraction of pairs whose predicted label matches the gold label.
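
As a usage note, assuming the hypothetical `trainer` and `dataset` objects from the training sketch above, the test-set accuracy can be read out directly:

```python
# Evaluate the selected checkpoint on the XNLI test split; the Trainer
# prefixes metric names with "eval_".
metrics = trainer.evaluate(eval_dataset=dataset["test"])
print(metrics["eval_accuracy"])
```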

### Evaluation results
We evaluated the *roberta-large-bne-te* on the XNLI test set against standard multilingual and monolingual baselines:

| Model | XNLI (Accuracy) |
| ------------ | :----: |
| roberta-large-bne | **82.63** |
| roberta-base-bne | 80.16 |
| BETO | 81.30 |
| mBERT | 78.76 |
| BERTIN | 78.90 |
| ELECTRA | 78.78 |

For more details, check the fine-tuning and evaluation scripts in the official [GitHub repository](https://github.com/PlanTL-GOB-ES/lm-spanish).
130
+
131
+
132
+ ## Additional information
133
+
134
+ ### Author
135
+ Text Mining Unit (TeMU) at the Barcelona Supercomputing Center (bsc-temu@bsc.es)
136
+
137
+ ### Contact information
138
+ For further information, send an email to <plantl-gob-es@bsc.es>
139
+
140
+ ### Copyright
141
+ Copyright by the Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA) (2022)
142
+
143
+ ### Licensing information
144
+ [Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0)
145
+
146
+ ### Funding
147
+ This work was funded by the Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA) within the framework of the Plan-TL.
148
+
149
+ ## Citing information
150
+
151
+ If you use this model, please cite our [paper](http://journal.sepln.org/sepln/ojs/ojs/index.php/pln/article/view/6405):
152
+ ```
153
+ @article{,
154
+ abstract = {We want to thank the National Library of Spain for such a large effort on the data gathering and the Future of Computing Center, a
155
+ Barcelona Supercomputing Center and IBM initiative (2020). This work was funded by the Spanish State Secretariat for Digitalization and Artificial
156
+ Intelligence (SEDIA) within the framework of the Plan-TL.},
157
+ author = {Asier Gutiérrez Fandiño and Jordi Armengol Estapé and Marc Pàmies and Joan Llop Palao and Joaquin Silveira Ocampo and Casimiro Pio Carrino and Carme Armentano Oller and Carlos Rodriguez Penagos and Aitor Gonzalez Agirre and Marta Villegas},
158
+ doi = {10.26342/2022-68-3},
159
+ issn = {1135-5948},
160
+ journal = {Procesamiento del Lenguaje Natural},
161
+ keywords = {Artificial intelligence,Benchmarking,Data processing.,MarIA,Natural language processing,Spanish language modelling,Spanish language resources,Tractament del llenguatge natural (Informàtica),Àrees temàtiques de la UPC::Informàtica::Intel·ligència artificial::Llenguatge natural},
162
+ publisher = {Sociedad Española para el Procesamiento del Lenguaje Natural},
163
+ title = {MarIA: Spanish Language Models},
164
+ volume = {68},
165
+ url = {https://upcommons.upc.edu/handle/2117/367156#.YyMTB4X9A-0.mendeley},
166
+ year = {2022},
167
+ }
168
+
169
+ ```

### Disclaimer

<details>
<summary>Click to expand</summary>

The models published in this repository are intended for a generalist purpose and are available to third parties. These models may have bias and/or any other undesirable distortions.

When third parties deploy or provide systems and/or services to other parties using any of these models (or using systems based on these models), or become users of the models, they should note that it is their responsibility to mitigate the risks arising from their use and, in any event, to comply with applicable regulations, including regulations regarding the use of artificial intelligence.

In no event shall the owner of the models (SEDIA – State Secretariat for Digitalization and Artificial Intelligence) nor the creator (BSC – Barcelona Supercomputing Center) be liable for any results arising from the use made by third parties of these models.

</details>
config.json ADDED
@@ -0,0 +1,40 @@
{
  "_name_or_path": "../models/roberta-bne-large-te",
  "architectures": [
    "RobertaForSequenceClassification"
  ],
  "attention_probs_dropout_prob": 0.0,
  "bos_token_id": 0,
  "classifier_dropout": null,
  "eos_token_id": 2,
  "finetuning_task": "mnli",
  "gradient_checkpointing": false,
  "hidden_act": "gelu",
  "hidden_dropout_prob": 0.0,
  "hidden_size": 1024,
  "id2label": {
    "0": "entailment",
    "1": "not_entailment",
    "2": "contradiction"
  },
  "initializer_range": 0.02,
  "intermediate_size": 4096,
  "label2id": {
    "entailment": 0,
    "not_entailment": 1,
    "contradiction": 2
  },
  "layer_norm_eps": 1e-05,
  "max_position_embeddings": 514,
  "model_type": "roberta",
  "num_attention_heads": 16,
  "num_hidden_layers": 24,
  "pad_token_id": 1,
  "position_embedding_type": "absolute",
  "problem_type": "single_label_classification",
  "torch_dtype": "float32",
  "transformers_version": "4.6.1",
  "type_vocab_size": 1,
  "use_cache": true,
  "vocab_size": 50262
}
merges.txt ADDED
The diff for this file is too large to render. See raw diff
pytorch_model.bin ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:cbb071ce1ee29b936a23057afa2343bc3bfc34b911e9e2d6051d098bd81bdfeb
size 1421603117
special_tokens_map.json ADDED
@@ -0,0 +1 @@
{"bos_token": {"content": "<s>", "single_word": false, "lstrip": false, "rstrip": false, "normalized": true}, "eos_token": {"content": "</s>", "single_word": false, "lstrip": false, "rstrip": false, "normalized": true}, "unk_token": {"content": "<unk>", "single_word": false, "lstrip": false, "rstrip": false, "normalized": true}, "sep_token": {"content": "</s>", "single_word": false, "lstrip": false, "rstrip": false, "normalized": true}, "pad_token": {"content": "<pad>", "single_word": false, "lstrip": false, "rstrip": false, "normalized": true}, "cls_token": {"content": "<s>", "single_word": false, "lstrip": false, "rstrip": false, "normalized": true}, "mask_token": {"content": "<mask>", "single_word": false, "lstrip": true, "rstrip": false, "normalized": true}}
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
tokenizer_config.json ADDED
@@ -0,0 +1 @@
{"unk_token": {"content": "<unk>", "single_word": false, "lstrip": false, "rstrip": false, "normalized": true, "__type": "AddedToken"}, "bos_token": {"content": "<s>", "single_word": false, "lstrip": false, "rstrip": false, "normalized": true, "__type": "AddedToken"}, "eos_token": {"content": "</s>", "single_word": false, "lstrip": false, "rstrip": false, "normalized": true, "__type": "AddedToken"}, "add_prefix_space": true, "errors": "replace", "sep_token": {"content": "</s>", "single_word": false, "lstrip": false, "rstrip": false, "normalized": true, "__type": "AddedToken"}, "cls_token": {"content": "<s>", "single_word": false, "lstrip": false, "rstrip": false, "normalized": true, "__type": "AddedToken"}, "pad_token": {"content": "<pad>", "single_word": false, "lstrip": false, "rstrip": false, "normalized": true, "__type": "AddedToken"}, "mask_token": {"content": "<mask>", "single_word": false, "lstrip": true, "rstrip": false, "normalized": true, "__type": "AddedToken"}, "max_len": 512, "special_tokens_map_file": null, "name_or_path": "../models/bne-large-new"}
vocab.json ADDED
The diff for this file is too large to render. See raw diff