giotvr committed on
Commit
b9d026e
1 Parent(s): 9cf0393

Updates model card


Signed-off-by: Giovani <giovanitavares@outlook.com>

Files changed (1)
  1. README.md +238 -1
README.md CHANGED
@@ -1,3 +1,240 @@
  ---
- license: mit
  ---
  ---
+ datasets:
+ - assin2
+ language:
+ - pt
+ metrics:
+ - accuracy
+ pipeline_tag: text-classification
+ tags:
+ - nli
  ---
+
+ # Model Card for xlm_roberta_base_assin2_fine_tuned
+
+ <!-- Provide a quick summary of what the model is/does. -->
+
+ This is an **[XLM-RoBERTa-base](https://huggingface.co/xlm-roberta-base) model fine-tuned** on 5K (premise, hypothesis) sentence pairs from
+ the **ASSIN2 (Avaliação de Similaridade Semântica e Inferência Textual)** corpus. The original reference papers are
+ [Unsupervised Cross-Lingual Representation Learning at Scale](https://arxiv.org/pdf/1911.02116) and [ASSIN2: Avaliação de Similaridade Semântica e Inferência Textual](https://huggingface.co/datasets/assin2), respectively. This model is suitable for Portuguese (from Brazil or Portugal).
+
+ ## Model Details
+
+ ### Model Description
+
+ <!-- Provide a longer summary of what this model is. -->
+
+ - **Developed by:** Giovani Tavares and Felipe Ribas Serras
+ - **Advised by:** Felipe Ribas Serras, Renata Wassermann and Marcelo Finger
+ - **Model type:** Transformer-based text classifier
+ - **Language(s) (NLP):** Portuguese
+ - **License:** MIT
+ - **Fine-tuned from model:** [XLM-RoBERTa-base](https://huggingface.co/xlm-roberta-base)
+
+ ### Model Sources
+
+ <!-- Provide the basic links for the model. -->
+
+ - **Repository:** [Natural-Portuguese-Language-Inference](https://github.com/giogvn/Natural-Portuguese-Language-Inference)
+ - **Paper:** This is ongoing research; we are currently writing a paper that fully describes our experiments.
+
+ ## Uses
+
+ <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
+
+ ### Direct Use
+
+ This fine-tuned version of [XLM-RoBERTa-base](https://huggingface.co/xlm-roberta-base) performs Natural
+ Language Inference (NLI), which is a text classification task.
+
+ <!-- <div id="assin_function">
+
+ **Definition 1.** Given a pair of sentences $(premise, hypothesis)$, let $\hat{f}^{(xlmr\_base)}$ be the fine-tuned model's inference function:
+
+ $$
+ \hat{f}^{(xlmr\_base)} =
+ \begin{cases}
+ ENTAILMENT, & \text{if $premise$ entails $hypothesis$}\\
+ PARAPHRASE, & \text{if $premise$ entails $hypothesis$ and $hypothesis$ entails $premise$}\\
+ NONE, & \text{otherwise}
+ \end{cases}
+ $$
+ </div> -->
+
+ The *(premise, hypothesis)* entailment definition used is the same as the one found in Salvatore's paper [1].
+
+ Therefore, this fine-tuned version of [XLM-RoBERTa-base](https://huggingface.co/xlm-roberta-base) classifies pairs of sentences in the form *(premise, hypothesis)* into the classes *ENTAILMENT* or *NONE*.
+
+ <!-- ## Bias, Risks, and Limitations
+
+ This section is meant to convey both technical and sociotechnical limitations. -->
+
+ ## Demo
+
+ ```python
+ from transformers import AutoModelForSequenceClassification, AutoTokenizer
+ import torch
+
+ model_path = "giotvr/xlm_roberta_base_assin2_fine_tuned"
+ premise = "As mudanças climáticas são uma ameaça séria para a biodiversidade do planeta."
+ hypothesis = "A biodiversidade do planeta é seriamente ameaçada pelas mudanças climáticas."
+
+ # Load the fine-tuned tokenizer and model from the Hugging Face Hub
+ tokenizer = AutoTokenizer.from_pretrained(model_path)
+ model = AutoModelForSequenceClassification.from_pretrained(model_path)
+ input_pair = tokenizer(premise, hypothesis, return_tensors="pt", padding=True, truncation=True)
+
+ with torch.no_grad():
+     logits = model(**input_pair).logits
+     probs = torch.nn.functional.softmax(logits, dim=-1)
+     probs, sorted_indices = torch.sort(probs, descending=True)
+     for i, score in enumerate(probs[0]):
+         print(f"Class {sorted_indices[0][i]}: {score.item():.4f}")
+ ```
+ ### Recommendations
+
+ <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
+
+ This model is intended for scientific purposes only; it has not been tested for production environments.
+
+ <!-- ## How to Get Started with the Model
+
+ Use the code below to get started with the model.
+
+ [More Information Needed] -->
+
+ ## Fine-Tuning Details
+
+ ### Fine-Tuning Data
+
+ <!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
+ ---
+
+ - **Train Dataset:** [ASSIN2](https://huggingface.co/datasets/assin2)
+
+ - **Evaluation Dataset used for Hyperparameter Tuning:** [ASSIN2](https://huggingface.co/datasets/assin2)'s validation split
+
+ - **Test Datasets:**
+   - [ASSIN](https://huggingface.co/datasets/assin)'s test split
+   - [ASSIN2](https://huggingface.co/datasets/assin2)'s test split
+   - [PLUE/MNLI](https://huggingface.co/datasets/dlb/plue/viewer/mnli_matched)'s validation matched split
+
+ ---
+ This is a fine-tuned version of [XLM-RoBERTa-base](https://huggingface.co/xlm-roberta-base) trained on the [ASSIN2 (Avaliação de Similaridade Semântica e Inferência Textual)](https://huggingface.co/datasets/assin2) dataset. [ASSIN2](https://huggingface.co/datasets/assin2) is a corpus of Portuguese hypothesis/premise sentence pairs annotated for detecting an entailment or neutral relationship between the members of each pair. The corpus is balanced and contains 7k *ptbr* (Brazilian Portuguese) sentence pairs.
+
+ ### Fine-Tuning Procedure
+
+ <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
+ The model's fine-tuning procedure can be summarized in three major subsequent tasks:
+ <ol type="i">
+ <li><b>Data Processing:</b> <a href="https://huggingface.co/datasets/assin2">ASSIN2</a>'s <i>train</i> and <i>validation</i> splits were loaded from the <b>Hugging Face Hub</b> and processed afterwards;</li>
+ <li><b>Hyperparameter Tuning:</b> <a href="https://huggingface.co/xlm-roberta-base">XLM-RoBERTa-base</a>'s hyperparameters were chosen with the help of the <a href="https://docs.wandb.ai/ref/python/public-api/api">Weights &amp; Biases</a> API, which was used to track the results and upload the fine-tuned models;</li>
+ <li><b>Final Model Loading and Testing:</b> using the <i>cross-tests</i> approach described in the <a href="#evaluation">Evaluation</a> section, the model's performance was measured on different datasets and metrics.</li>
+ </ol>
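One data-processing detail worth highlighting: `transformers`' `Trainer` (via its default data collator) expects the collated dataset's class-label column to be named `label`, while the ASSIN datasets call it `entailment_judgement`, so the column is renamed during preprocessing. A minimal stand-in sketch of that step (the toy rows below are hypothetical; the actual pipeline would use the `datasets` library's `rename_column`):

```python
# Stand-in for datasets.Dataset.rename_column: the Trainer's default data
# collator looks for a column named "label".
def rename_column(rows, old_name, new_name):
    """Return a copy of `rows` with the column `old_name` renamed to `new_name`."""
    return [
        {(new_name if key == old_name else key): value for key, value in row.items()}
        for row in rows
    ]

# Toy rows in the ASSIN2 shape (values are hypothetical, for illustration only).
rows = [
    {"premise": "A manhã está fria.", "hypothesis": "Está frio de manhã.", "entailment_judgement": 1},
    {"premise": "O gato dorme.", "hypothesis": "O cão corre.", "entailment_judgement": 0},
]
rows = rename_column(rows, "entailment_judgement", "label")
```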
+
+
+ <!-- ##### Column Renaming
+ The **Hugging Face** ```transformers``` module's ```DataCollator``` used by its ```Trainer``` requires the collated dataset's ```class label``` column to be called ```label```. [ASSIN](https://huggingface.co/datasets/assin)'s class label column for each hypothesis/premise pair is called ```entailment_judgement```. Therefore, as the first step of the data preprocessing pipeline, the column ```entailment_judgement``` was renamed to ```label``` so that the **Hugging Face** ```transformers``` module's ```Trainer``` could be used. -->
+
+ #### Hyperparameter Tuning
+
+ <!-- The model's training hyperparameters were chosen according to the following definition:
+
+ <div id="hyperparameter_tuning">
+
+ **Definition 2.** Let $Hyperparms = \{i: i \text{ is a hyperparameter of } \hat{f}^{(xlmr\_base)}\}$ and $\hat{f}^{(xlmr\_base)}$ be the model's inference function defined in [Definition 1](#assin_function):
+
+ $$
+ Hyperparms = \argmax_{hyp}(eval\_acc(\hat{f}^{(xlmr\_base)}_{hyp}, assin\_validation))
+ $$
+ </div> -->
+
+ The following hyperparameters were tested in order to maximize the evaluation accuracy:
+
+ - **Number of Training Epochs:** $(1, 2, 3)$
+ - **Per Device Train Batch Size:** $(8, 16, 32)$
+ - **Learning Rate:** $(1 \times 10^{-5}, 2 \times 10^{-5}, 5 \times 10^{-5})$
+
+ The hyperparameter tuning experiments were run and tracked using the [Weights & Biases API](https://docs.wandb.ai/ref/python/public-api/api) and can be found at this [link](https://wandb.ai/gio_projs/assin_xlm_roberta_v5?workspace=user-giogvn).
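The grid above amounts to 3 × 3 × 3 = 27 candidate configurations. A minimal sketch of how such a grid can be enumerated (variable names are illustrative; the actual sweep was orchestrated through Weights & Biases):

```python
from itertools import product

# Hyperparameter grid listed above.
epoch_options = (1, 2, 3)
batch_size_options = (8, 16, 32)
learning_rate_options = (1e-5, 2e-5, 5e-5)

# One entry per (num_train_epochs, per_device_train_batch_size, learning_rate)
# combination; each is fine-tuned and scored on ASSIN2's validation split.
grid = list(product(epoch_options, batch_size_options, learning_rate_options))
print(len(grid))  # prints 27
```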
+
+
+ #### Training Hyperparameters
+
+ The [hyperparameter tuning](#hyperparameter-tuning) performed yielded the following values:
+
+ - **Number of Training Epochs:** $3$
+ - **Per Device Train Batch Size:** $8$
+ - **Learning Rate:** $5 \times 10^{-5}$
+
+ ## Evaluation
+
+ ### ASSIN
+
+ Testing this model on ASSIN's test split was straightforward: because the model was fine-tuned on ASSIN2's training set, which contains the same labels as ASSIN, it can predict the same labels as the ones found in ASSIN's test set.
+
+ ### ASSIN2
+
+ Testing this model on ASSIN2's test split is likewise straightforward: the model was fine-tuned on ASSIN2's training set and can therefore predict the same labels as the ones found in its test set.
+
+ ### PLUE/MNLI
+
+ Testing this model on PLUE/MNLI was only possible by mapping PLUE/MNLI's *contradiction* and *neutral* labels to *NONE*, and treating PLUE/MNLI's *entailment* label as equivalent to the *ENTAILMENT* label predicted by the model.
+
+ More information on how such mapping is performed can be found in [Modelos para Inferência em Linguagem Natural que entendem a Língua Portuguesa](https://linux.ime.usp.br/~giovani/).
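The label collapse described above can be sketched as follows (the string labels are illustrative; the evaluation code operates on the datasets' integer class ids):

```python
# Map PLUE/MNLI's three-way labels onto the model's two-way label scheme:
# "entailment" corresponds to ENTAILMENT, while "neutral" and
# "contradiction" are both collapsed into NONE.
MNLI_TO_ASSIN2 = {
    "entailment": "ENTAILMENT",
    "neutral": "NONE",
    "contradiction": "NONE",
}

def map_mnli_label(mnli_label: str) -> str:
    """Collapse a PLUE/MNLI label into the ENTAILMENT/NONE scheme."""
    return MNLI_TO_ASSIN2[mnli_label]
```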
+
+ ### Metrics
+
+ <!-- These are the evaluation metrics being used, ideally with a description of why. -->
+ The model's performance metrics are presented separately for each test dataset. Accuracy, F1 score, precision, and recall were used in every evaluation performed, and are reported below. More information on these metrics will be available in our ongoing research paper.
+
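For reference, the two-class versions of these metrics can be computed as sketched below. The toy label vectors are hypothetical; the numbers reported in the next section come from the actual test sets:

```python
def binary_metrics(y_true, y_pred, positive=1):
    """Accuracy, precision, recall and F1 for a binary label set."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == p == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    accuracy = sum(1 for t, p in zip(y_true, y_pred) if t == p) / len(y_true)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return accuracy, precision, recall, f1

# Toy example: 1 = ENTAILMENT, 0 = NONE (hypothetical predictions).
acc, prec, rec, f1 = binary_metrics([1, 0, 1, 1], [1, 0, 0, 1])
```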
+ ### Results
+
+ | test set  | accuracy | f1 score | precision | recall |
+ |-----------|----------|----------|-----------|--------|
+ | assin     | 0.78     | 0.78     | 0.80      | 0.80   |
+ | assin2    | 0.86     | 0.86     | 0.87      | 0.86   |
+ | plue/mnli | 0.71     | 0.67     | 0.71      | 0.71   |
+
+ ## Model Examination
+
+ <!-- Relevant interpretability work for the model goes here -->
+ Interpretability work is being done in order to understand the model's behavior; such details will be available in the previously mentioned paper.
+
+ <!-- ## Environmental Impact
+
+ Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly.
+
+ Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
+
+ - **Hardware Type:** [More Information Needed]
+ - **Hours used:** [More Information Needed]
+ - **Cloud Provider:** [More Information Needed]
+ - **Compute Region:** [More Information Needed]
+ - **Carbon Emitted:** [More Information Needed] -->
+
+ <!-- ## Citation
+
+ If there is a paper or blog post introducing the model, the APA and BibTeX information for that should go in this section.
+
+ **BibTeX:**
+
+ ```bibtex
+ @article{tcc_paper,
+   author = {Giovani Tavares and Felipe Ribas Serras and Renata Wassermann and Marcelo Finger},
+   title = {Modelos Transformer para Inferência de Linguagem Natural em Português},
+   pages = {x--y},
+   year = {2023}
+ }
+ ``` -->
+
+ ## References
+
+ [1] [Salvatore, F. S. (2020). Analyzing Natural Language Inference from a Rigorous Point of View (pp. 1-2).](https://www.teses.usp.br/teses/disponiveis/45/45134/tde-05012021-151600/publico/tese_de_doutorado_felipe_salvatore.pdf)
+
+ <!-- [2] [Andrade, G. T. (2023). Modelos para Inferência em Linguagem Natural que entendem a Língua Portuguesa (train_assin_xlmr_base_results PAGES GO HERE)](https://linux.ime.usp.br/~giovani/)
+
+ [3] [Andrade, G. T. (2023). Modelos para Inferência em Linguagem Natural que entendem a Língua Portuguesa (train_assin_xlmr_base_conclusions PAGES GO HERE)](https://linux.ime.usp.br/~giovani/) -->