
Model Card for xlm_roberta_base_plue_mnli_fine_tuned

This is an XLM-RoBERTa-base model fine-tuned on 393K (premise, hypothesis) sentence pairs from the PLUE/MNLI corpus (the Portuguese translation of the MNLI task from the GLUE benchmark). The original references are Unsupervised Cross-Lingual Representation Learning at Scale and PLUE, respectively. This model is suitable for Portuguese.

Model Details

Model Description

  • Developed by: Giovani Tavares and Felipe Ribas Serras
  • Advised by: Felipe Ribas Serras, Renata Wassermann and Marcelo Finger
  • Model type: Transformer-based text classifier
  • Language(s) (NLP): Portuguese
  • License: mit
  • Fine-tuned from model: XLM-RoBERTa-base

Model Sources

Uses

Direct Use

This fine-tuned version of XLM-RoBERTa-base performs Natural Language Inference (NLI), which is a text classification task.

The (premise, hypothesis) entailment definition used is the same as the one found in Salvatore's paper [1].

Therefore, this fine-tuned version of XLM-RoBERTa-base classifies pairs of sentences in the form (premise, hypothesis) into the classes entailment, neutral and contradiction.

Demo

from transformers import AutoModelForSequenceClassification, AutoTokenizer
import torch

model_path = "giotvr/xlm_roberta_base_plue_mnli_fine_tuned"
premise = "As mudanças climáticas são uma ameaça séria para a biodiversidade do planeta."
hypothesis = "A biodiversidade do planeta é seriamente ameaçada pelas mudanças climáticas."

tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForSequenceClassification.from_pretrained(model_path)

# Tokenize the (premise, hypothesis) pair as a single input
input_pair = tokenizer(premise, hypothesis, return_tensors="pt", padding=True, truncation=True)

with torch.no_grad():
    logits = model(**input_pair).logits

# Convert logits to probabilities and print the classes from most to least likely
probs = torch.nn.functional.softmax(logits, dim=-1)
probs, sorted_indices = torch.sort(probs, descending=True)
for i, score in enumerate(probs[0]):
    print(f"Class {sorted_indices[0][i]}: {score.item():.4f}")

Recommendations

This model is intended for scientific purposes only; it has not been tested for production environments.

Fine-Tuning Details

Fine-Tuning Data


  • Train Dataset: PLUE/MNLI

  • Evaluation Dataset used for Hyperparameter Tuning: PLUE/MNLI's validation split

  • Test Datasets: ASSIN's, ASSIN2's and PLUE/MNLI's test splits


This is a fine-tuned version of XLM-RoBERTa-base trained on the PLUE/MNLI dataset. PLUE/MNLI is a corpus of Portuguese (premise, hypothesis) sentence pairs annotated for detecting entailment, neutral or contradiction relationships between the members of each pair. The corpus is balanced among the three classes.

Fine-Tuning Procedure

The model's fine-tuning procedure can be summarized in three major subsequent tasks:

  1. Data Processing: PLUE/MNLI's train and validation splits were loaded from the Hugging Face Hub and processed;
  2. Hyperparameter Tuning: XLM-RoBERTa-base's hyperparameters were chosen with the help of the Weights & Biases API, which was used to track the results and upload the fine-tuned models;
  3. Final Model Loading and Testing: using the cross-tests approach described in the Evaluation section, the model's performance was measured on different datasets and metrics.

Hyperparameter Tuning

The following hyperparameters were tested in order to maximize the evaluation accuracy.

  • Number of Training Epochs: $(1,2,3)$
  • Per Device Train Batch Size: $(8,16,32)$
  • Learning Rate: $(1 \times 10^{-5}, 2 \times 10^{-5}, 3 \times 10^{-5})$

The hyperparameter tuning experiments were run and tracked using the Weights & Biases API and can be found at this link.
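The grid above amounts to 27 configurations (3 epoch counts × 3 batch sizes × 3 learning rates). A minimal sketch of how such a sweep can be enumerated; the dictionary keys and the enumeration helper below are illustrative, not taken from the original training code:

```python
from itertools import product

# Hypothetical search space mirroring the grid reported above
search_space = {
    "num_train_epochs": [1, 2, 3],
    "per_device_train_batch_size": [8, 16, 32],
    "learning_rate": [1e-5, 2e-5, 3e-5],
}

def grid(space):
    """Yield every hyperparameter combination as a dict."""
    keys = list(space)
    for values in product(*(space[k] for k in keys)):
        yield dict(zip(keys, values))

configs = list(grid(search_space))
print(len(configs))  # 27 combinations to evaluate
```

Each of these configurations would be trained and scored on the validation split, keeping the one with the highest evaluation accuracy.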

Training Hyperparameters

The hyperparameter tuning yielded the following values:

  • Number of Training Epochs: $3$
  • Per Device Train Batch Size: $16$
  • Learning Rate: $2 \times 10^{-5}$
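These values can be expressed as a training configuration. The sketch below assumes the standard Hugging Face Trainer was used, which the card does not state; the output directory is hypothetical:

```python
from transformers import TrainingArguments

# Sketch only: assumes the Hugging Face Trainer stack; argument names
# follow the transformers TrainingArguments API.
training_args = TrainingArguments(
    output_dir="./xlm_roberta_base_plue_mnli",  # hypothetical path
    num_train_epochs=3,
    per_device_train_batch_size=16,
    learning_rate=2e-5,
    report_to="wandb",  # experiments were tracked with Weights & Biases
)
```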

Evaluation

ASSIN

Testing this model on ASSIN's test split required a mapping of its NONE and PARAPHRASE classes, which are not present in PLUE/MNLI. The NONE class was considered contradiction or neutral, and PARAPHRASE was considered entailment in both directions: from premise to hypothesis and from hypothesis to premise. More details on this mapping can be found in Modelos para Inferência em Linguagem Natural que entendem a Língua Portuguesa.
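The mapping can be sketched as a correctness check between a 3-class prediction and an ASSIN gold label. The label strings below are illustrative, and for PARAPHRASE the full procedure would also check entailment in the reverse direction (hypothesis to premise), which is omitted here for brevity:

```python
def matches_assin_gold(prediction, gold):
    """Check whether a 3-class NLI prediction is correct under ASSIN's labels.

    prediction: "entailment", "neutral" or "contradiction" for the forward
        direction (premise -> hypothesis).
    gold: "ENTAILMENT", "PARAPHRASE" or "NONE" (illustrative label names).
    """
    if gold == "NONE":
        # NONE was considered contradiction or neutral
        return prediction in ("neutral", "contradiction")
    # ENTAILMENT and PARAPHRASE both require entailment
    # (PARAPHRASE additionally requires reverse entailment, not shown here)
    return prediction == "entailment"
```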

ASSIN2

Testing this model on ASSIN2's test split required a mapping of its NONE class, which is not present in PLUE/MNLI. The NONE class was considered contradiction or neutral. More details on this mapping can be found in Modelos para Inferência em Linguagem Natural que entendem a Língua Portuguesa.

PLUE/MNLI

Testing this model on PLUE/MNLI's test set was straightforward, as the model was fine-tuned on its training split.

More information on how such mapping is performed can be found in Modelos para Inferência em Linguagem Natural que entendem a Língua Portuguesa.

Metrics

The model's performance metrics for each test dataset are presented separately. Accuracy, F1 score, precision and recall were used in every evaluation performed and are reported below. More information on these metrics will be available in our ongoing research paper.
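For reference, these metrics can be computed without external dependencies. The sketch below uses macro averaging for the per-class metrics; this averaging choice is an assumption, since the card does not specify the scheme used:

```python
def macro_scores(y_true, y_pred):
    """Compute accuracy and macro-averaged precision, recall and F1.

    Macro averaging (unweighted mean over classes) is assumed here for
    illustration; the card does not state the averaging scheme.
    """
    labels = sorted(set(y_true) | set(y_pred))
    accuracy = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
    precisions, recalls, f1s = [], [], []
    for c in labels:
        tp = sum(t == c and p == c for t, p in zip(y_true, y_pred))
        fp = sum(t != c and p == c for t, p in zip(y_true, y_pred))
        fn = sum(t == c and p != c for t, p in zip(y_true, y_pred))
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
        precisions.append(prec)
        recalls.append(rec)
        f1s.append(f1)
    n = len(labels)
    return accuracy, sum(precisions) / n, sum(recalls) / n, sum(f1s) / n
```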

Results

test set    accuracy   f1 score   precision   recall
assin       0.77       0.68       0.61        0.77
assin2      0.86       0.86       0.86        0.86
plue/mnli   0.82       0.82       0.83        0.82

Model Examination

Some interpretability work is being done to understand the model's behavior. Those details will be available in the previously mentioned ongoing research paper.

References

[1] Salvatore, F. S. (2020). Analyzing Natural Language Inference from a Rigorous Point of View (pp. 1-2).
