---
datasets:
- assin
language:
- pt
metrics:
- accuracy
pipeline_tag: text-classification
tags:
- nli
---

# Model Card for giotvr/portuguese-nli-3-labels

<!-- Provide a quick summary of what the model is/does. -->

This is **[XLM-RoBERTa-base](https://huggingface.co/xlm-roberta-base)** fine-tuned on 5K (premise, hypothesis) sentence pairs from
the **ASSIN (Avaliação de Similaridade Semântica e Inferência Textual)** corpus. The original reference papers are
[Unsupervised Cross-Lingual Representation Learning at Scale](https://arxiv.org/pdf/1911.02116) and [ASSIN: Avaliação de Similaridade Semântica e Inferência Textual](https://huggingface.co/datasets/assin), respectively. This model is suitable for Portuguese (from Brazil or Portugal).

## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->

- **Developed by:** Giovani Tavares and Felipe Ribas Serras
- **Advised by:** Renata Wassermann, Felipe Ribas Serras and Marcelo Finger
- **Model type:** Transformer-based text classifier
- **Language(s) (NLP):** Portuguese
- **License:** MIT
- **Fine-tuned from model:** [XLM-RoBERTa-base](https://huggingface.co/xlm-roberta-base)

### Model Sources

<!-- Provide the basic links for the model. -->

- **Repository:** [Natural-Portuguese-Language-Inference](https://github.com/giogvn/Natural-Portuguese-Language-Inference)
- **Paper:** This is ongoing research; we are currently writing a paper that fully describes our experiments.

## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

### Direct Use

This fine-tuned version of [XLM-RoBERTa-base](https://huggingface.co/xlm-roberta-base) performs Natural
Language Inference (NLI), a text classification task: it classifies pairs of sentences of the form (premise, hypothesis) into one of the classes ENTAILMENT, PARAPHRASE, or NONE. Salvatore's definition [1] of ENTAILMENT is assumed to be the same as the one behind [ASSIN](https://huggingface.co/datasets/assin)'s labels, on which this model was trained.

PARAPHRASE and NONE are not defined in [1]. Therefore, it is assumed that in this model's training set, given a pair of sentences (premise, hypothesis), hypothesis is a PARAPHRASE of premise if premise is an ENTAILMENT of hypothesis *and* vice versa. If (premise, hypothesis) has neither an ENTAILMENT nor a PARAPHRASE relationship, it is classified as NONE.
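
For a quick check of this behavior, here is a minimal sketch using the `transformers` pipeline API (the model id is the one from the demo below; the printed label is illustrative):

```python
from transformers import pipeline

# Minimal sketch: score a (premise, hypothesis) pair with the
# text-classification pipeline. The pair is passed with explicit
# "text"/"text_pair" keys so both sentences are encoded together.
nli = pipeline("text-classification", model="giotvr/portuguese-nli-3-labels")

result = nli({
    "text": "As mudanças climáticas são uma ameaça séria para a biodiversidade do planeta.",
    "text_pair": "A biodiversidade do planeta é seriamente ameaçada pelas mudanças climáticas.",
})
print(result)  # e.g. [{'label': 'PARAPHRASE', 'score': ...}] (illustrative output)
```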


<!-- <div id="assin_function">

**Definition 1.** Given a pair of sentences $(premise, hypothesis)$, let $\hat{f}^{(xlmr\_base)}$ be the fine-tuned models' inference function:

$$
\hat{f}^{(xlmr\_base)} = 
\begin{cases} 
ENTAILMENT, & \text{if $premise$ entails $hypothesis$}\\
PARAPHRASE, & \text{if $premise$ entails $hypothesis$ and $hypothesis$ entails $premise$}\\
NONE & \text{otherwise}
\end{cases}
$$
</div> 

The $(premise, hypothesis)$ entailment definition used is the same as the one found in Salvatore's paper [1].-->



 
<!-- ## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->


## Demo

```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
import torch

model_path = "giotvr/portuguese-nli-3-labels"
premise = "As mudanças climáticas são uma ameaça séria para a biodiversidade do planeta."
hypothesis = "A biodiversidade do planeta é seriamente ameaçada pelas mudanças climáticas."

tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForSequenceClassification.from_pretrained(model_path)

# Encode the (premise, hypothesis) pair as a single input
input_pair = tokenizer(premise, hypothesis, return_tensors="pt", padding=True, truncation=True)

with torch.no_grad():
    logits = model(**input_pair).logits

# Rank the classes by predicted probability
probs = torch.nn.functional.softmax(logits, dim=-1)
probs, sorted_indices = torch.sort(probs, descending=True)
for i, score in enumerate(probs[0]):
    # id2label maps class indices to their names
    label = model.config.id2label[sorted_indices[0][i].item()]
    print(f"{label}: {score.item():.4f}")
```
### Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

This model should be used for scientific purposes only. It was not tested in production environments.

<!-- ## How to Get Started with the Model

Use the code below to get started with the model.

[More Information Needed] -->

## Fine-Tuning Details

### Fine-Tuning Data

<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
---

- **Train Dataset:** [ASSIN](https://huggingface.co/datasets/assin)

- **Evaluation Dataset used for Hyperparameter Tuning:** [ASSIN](https://huggingface.co/datasets/assin)'s validation split

- **Test Datasets:**
    - [ASSIN](https://huggingface.co/datasets/assin)'s test splits
    - [ASSIN2](https://huggingface.co/datasets/assin2)'s test splits


---
This is a fine-tuned version of [XLM-RoBERTa-base](https://huggingface.co/xlm-roberta-base) trained on the [ASSIN (Avaliação de Similaridade Semântica e Inferência Textual)](https://huggingface.co/datasets/assin) dataset. [ASSIN](https://huggingface.co/datasets/assin) is a corpus of Portuguese premise/hypothesis sentence pairs annotated for detecting an entailment, paraphrase, or neutral
relationship between the members of each pair. The corpus has three subsets: *ptbr* (Brazilian Portuguese), *ptpt* (European Portuguese), and *full* (the union of the two). The *full* subset has
10k sentence pairs equally distributed between the *ptbr* and *ptpt* subsets.

### Fine-Tuning Procedure 

<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
The model's fine-tuning procedure can be summarized in three major subsequent tasks:

1. **Data Processing:** [ASSIN](https://huggingface.co/datasets/assin)'s *validation* and *train* splits were loaded from the **Hugging Face Hub** and processed (a minimal sketch is shown below);
2. **Hyperparameter Tuning:** [XLM-RoBERTa-base](https://huggingface.co/xlm-roberta-base)'s hyperparameters were chosen with the help of the [Weights & Biases](https://docs.wandb.ai/ref/python/public-api/api) API, used to track the results and upload the fine-tuned models;
3. **Final Model Loading and Testing:** the model's performance was evaluated on different datasets and metrics that will be described in detail in the upcoming paper.


<!--  ##### Column Renaming
The **Hugging Face**'s ```transformers``` module's ```DataCollator``` used by its ```Trainer``` requires that the ```class label``` column of the collated dataset to be called ```label```.  [ASSIN](https://huggingface.co/datasets/assin)'s class label column for each hypothesis/premise pair is called ```entailment_judgement```. Therefore, as the first step of the data preprocessing pipeline the column  ```entailment_judgement``` was renamed to ```label``` so that the **Hugging Face**'s ```transformers``` module's ```Trainer``` could be used. -->
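
For illustration, here is a minimal sketch of the data-processing step (i), assuming the `datasets` library and ASSIN's published schema: the class label column `entailment_judgement` is renamed to `label`, as expected by the `Trainer`'s default `DataCollator`.

```python
from datasets import load_dataset
from transformers import AutoTokenizer

# Load ASSIN's *full* subset (train/validation/test splits) from the Hub
assin = load_dataset("assin", "full")

# The Trainer's default DataCollator expects the class column to be
# named "label", so ASSIN's class label column is renamed accordingly.
assin = assin.rename_column("entailment_judgement", "label")

tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")

def tokenize_pair(examples):
    # Encode each (premise, hypothesis) pair as a single input
    return tokenizer(examples["premise"], examples["hypothesis"], truncation=True)

tokenized_assin = assin.map(tokenize_pair, batched=True)
```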

#### Hyperparameter Tuning

<!-- The model's training hyperparameters were chosen according to the following definition:

<div id="hyperparameter_tuning">

**Definition 2.** Let $Hyperparms= \{i: i \text{ is an hyperparameter of } \hat{f}^{(xlmr\_base)}\}$ and $\hat{f}^{(xlmr\_base)}$ be the model's inference function defined in [Definition 1](#assin_function) :

$$
Hyperparms = \argmax_{hyp}(eval\_acc(\hat{f}^{(xlmr\_base)}_{hyp}, assin\_validation))
$$
</div> -->

The following hyperparameters were tested in order to maximize the evaluation accuracy:

- **Number of Training Epochs:** (1, 2, 3)
- **Per Device Train Batch Size:** (16, 32)
- **Learning Rate:** (1e-6, 2e-6, 3e-6)


The hyperparameter tuning experiments were run and tracked using the [Weights & Biases API](https://docs.wandb.ai/ref/python/public-api/api) and can be found at this [link](https://wandb.ai/gio_projs/assin_xlm_roberta_v5?workspace=user-giogvn).
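
For reference, a hypothetical grid-search sweep configuration covering the values above (the actual configuration lives in the linked W&B project):

```python
import wandb

# Hypothetical grid search over the hyperparameter values listed above,
# maximizing evaluation accuracy; 3 x 2 x 3 = 18 runs in total.
sweep_config = {
    "method": "grid",
    "metric": {"name": "eval_accuracy", "goal": "maximize"},
    "parameters": {
        "num_train_epochs": {"values": [1, 2, 3]},
        "per_device_train_batch_size": {"values": [16, 32]},
        "learning_rate": {"values": [1e-6, 2e-6, 3e-6]},
    },
}

sweep_id = wandb.sweep(sweep_config, project="assin_xlm_roberta_v5")
```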


#### Training Hyperparameters

The [hyperparameter tuning](#hyperparameter-tuning) performed yielded the following values:

- **Number of Training Epochs:** 3
- **Per Device Train Batch Size:** 16
- **Learning Rate:** 3e-6
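
In `transformers` terms, these values correspond to a configuration along the lines of the following sketch (only the three values above come from the tuning; `output_dir` and the evaluation strategy are illustrative):

```python
from transformers import TrainingArguments

# Sketch of the final fine-tuning configuration; the epoch count,
# batch size and learning rate are the tuned values reported above.
training_args = TrainingArguments(
    output_dir="xlmr-base-assin",  # hypothetical path
    num_train_epochs=3,
    per_device_train_batch_size=16,
    learning_rate=3e-6,
    evaluation_strategy="epoch",
)
```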

## Evaluation

### ASSIN

Evaluating this model on [ASSIN](https://huggingface.co/datasets/assin)'s test split is straightforward: because the model was fine-tuned on [ASSIN](https://huggingface.co/datasets/assin)'s training set, it predicts exactly the labels found in the test set.

### ASSIN2
<!-- Given a pair of sentences $(premise, hypothesis)$, $\hat{f}^{(xlmr\_base)}(premise, hypothesis)$ can be equal to $PARAPHRASE, ENTAILMENT$ or $NONE$ as defined in [Definition 1](#assin_function). -->

[ASSIN2](https://huggingface.co/datasets/assin2)'s test split has only two possible class labels: *ENTAILMENT* and *NONE*. Therefore, some label mapping must be performed so that this model can be evaluated on [ASSIN2](https://huggingface.co/datasets/assin2)'s test split. More information on how this mapping is performed will be available in the [referred paper](#model-sources).
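
Although the exact mapping is left to the paper, the definitions above suggest a natural illustrative choice: since a PARAPHRASE entails in both directions, it can be collapsed into ASSIN2's ENTAILMENT label. A minimal sketch (a hypothetical helper, not the paper's procedure):

```python
# Illustrative label mapping for ASSIN2 evaluation; the actual
# procedure will be described in the referred paper.
def to_assin2_label(predicted_label: str) -> str:
    # A paraphrase entails in both directions, hence ENTAILMENT
    return "ENTAILMENT" if predicted_label in ("ENTAILMENT", "PARAPHRASE") else "NONE"
```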

### Metrics

<!-- These are the evaluation metrics being used, ideally with a description of why. -->
The model's performance metrics for each test dataset are presented separately. Accuracy, F1 score, precision, and recall were used in every evaluation performed; the results are reported below. More information on these metrics will be available in our ongoing research paper.
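
A minimal sketch of how these metrics can be computed, assuming scikit-learn (the weighted averaging shown is illustrative):

```python
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

def compute_metrics(y_true, y_pred):
    # y_true / y_pred: gold and predicted label ids for one test split
    precision, recall, f1, _ = precision_recall_fscore_support(
        y_true, y_pred, average="weighted"
    )
    return {
        "accuracy": accuracy_score(y_true, y_pred),
        "f1": f1,
        "precision": precision,
        "recall": recall,
    }
```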

### Results

| test set | accuracy | f1 score | precision | recall |
|----------|----------|----------|-----------|--------|
| assin    |0.89      |0.89      |0.89       |0.89    |
| assin2   |0.70      |0.69      |0.73       |0.70    |

## Model Examination

<!-- Relevant interpretability work for the model goes here -->
Some interpretability work is being done in order to understand the model's behavior. Details will be available in the previously referred paper.

<!--## Environmental Impact

<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed] -->

<!-- ## Citation

<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section.

**BibTeX:**

```bibtex
    @article{tcc_paper,
    author    = {Giovani Tavares and Felipe Ribas Serras and Renata Wassermann and Marcelo Finger},
    title     = {Modelos Transformer para Inferência de Linguagem Natural em Português},
    pages     = {x--y},
    year      = {2023}
    }
``` -->

## References

[1] [Salvatore, F. S. (2020). Analyzing Natural Language Inference from a Rigorous Point of View (pp. 1-2).](https://www.teses.usp.br/teses/disponiveis/45/45134/tde-05012021-151600/publico/tese_de_doutorado_felipe_salvatore.pdf)

<!--[2][Andrade, G. T. (2023)  Modelos para Inferência em Linguagem Natural que entendem a Língua Portuguesa  (train_assin_xlmr_base_results PAGES GO HERE)](https://linux.ime.usp.br/~giovani/)

[3][Andrade, G. T. (2023)  Modelos para Inferência em Linguagem Natural que entendem a Língua Portuguesa (train_assin_xlmr_base_conclusions PAGES GO HERE)](https://linux.ime.usp.br/~giovani/) -->