---
datasets:
  - assin
language:
  - pt
metrics:
  - accuracy
pipeline_tag: text-classification
tags:
  - nli
---

Model Card: XLM-RoBERTa-base Fine-Tuned on ASSIN

This is an XLM-RoBERTa-base model fine-tuned on 5K (premise, hypothesis) sentence pairs from the ASSIN (Avaliação de Similaridade Semântica e Inferência Textual) corpus. The original reference papers are Unsupervised Cross-Lingual Representation Learning at Scale and ASSIN: Avaliação de Similaridade Semântica e Inferência Textual, respectively. This model is suitable for Portuguese (from Brazil or Portugal).

Model Details

Model Description

  • Developed by: Giovani Tavares and Felipe Ribas Serras
  • Advised by: Felipe Ribas Serras, Renata Wassermann and Marcelo Finger
  • Shared by [optional]: [More Information Needed]
  • Model type: Transformer-based text classifier
  • Language(s) (NLP): Portuguese
  • License: mit
  • Finetuned from model [optional]: XLM-RoBERTa-base

Model Sources [optional]

  • Repository: Natural-Portuguese-Language-Inference
  • Paper [optional]: This is an ongoing research. We are currently writing a paper where we describe our experiments.
  • Demo [optional]: [More Information Needed]

Uses

Direct Use

This fine-tuned version of XLM-RoBERTa-base performs Natural Language Inference (NLI), which is a text classification task.

Definition 1. Given a pair of sentences $(premise, hypothesis)$, let $\hat{f}^{(xlmr\_base)}$ be the fine-tuned model's inference function:

$$\hat{f}^{(xlmr\_base)}(premise, hypothesis) = \begin{cases} ENTAILMENT, & \text{if } premise \text{ entails } hypothesis\\ PARAPHRASE, & \text{if } premise \text{ entails } hypothesis \text{ and } hypothesis \text{ entails } premise\\ NONE, & \text{otherwise} \end{cases}$$

The $(premise, hypothesis)$ entailment definition used is the same as the one found in Salvatore's paper [1].

Therefore, this fine-tuned version of XLM-RoBERTa-base classifies pairs of sentences into one of the classes $ENTAILMENT$, $PARAPHRASE$, or $NONE$, according to Definition 1.
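The case analysis of Definition 1 can be sketched in Python. The two boolean arguments stand for the directed entailment judgments between the sentences; they are an illustration device, not part of the model's API:

```python
def nli_label(premise_entails_hypothesis: bool, hypothesis_entails_premise: bool) -> str:
    """Map the two directed entailment judgments of Definition 1 to a class label."""
    if premise_entails_hypothesis and hypothesis_entails_premise:
        return "PARAPHRASE"
    if premise_entails_hypothesis:
        return "ENTAILMENT"
    return "NONE"
```

Note that entailment in the hypothesis-to-premise direction alone yields $NONE$: the definition is asymmetric except in the paraphrase case.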

Recommendations

This model should be used for scientific purposes only. It was not tested for production environments.

Fine-Tuning Details

Fine-Tuning Data


  • Train Dataset: ASSIN

  • Evaluation Dataset used for Hyperparameter Tuning: ASSIN's validation split

  • Test Datasets: ASSIN's test split and ASSIN2's test split


This is a fine-tuned version of XLM-RoBERTa-base trained on the ASSIN (Avaliação de Similaridade Semântica e Inferência Textual) dataset. ASSIN is a corpus of Portuguese hypothesis/premise sentence pairs annotated for detecting entailment, paraphrase, or a neutral relationship between the members of each pair. The corpus has three subsets: ptbr (Brazilian Portuguese), ptpt (European Portuguese), and full (the union of the two). The full subset has 10k sentence pairs equally distributed between the ptbr and ptpt subsets.

Fine-Tuning Procedure

The model's fine-tuning procedure can be summarized in three major subsequent tasks:

  1. Data Processing: ASSIN's train and validation splits were loaded from the Hugging Face Hub and processed;
  2. Hyperparameter Tuning: XLM-RoBERTa-base's hyperparameters were chosen with the help of the Weights & Biases API, which tracked the results and stored the fine-tuned models;
  3. Final Model Loading and Testing: using the cross-tests approach described in the Evaluation section, the model's performance was measured on different datasets and metrics.

More information on the fine-tuning procedure can be found in [@tcc_paper].

Hyperparameter Tuning

The model's training hyperparameters were chosen according to the following definition:

Definition 2. Let $Hyperparms = \{i : i \text{ is a hyperparameter of } \hat{f}^{(xlmr\_base)}\}$, where $\hat{f}^{(xlmr\_base)}$ is the model's inference function defined in Definition 1:

$$Hyperparms = \operatorname*{arg\,max}_{hyp}\left(eval\_acc\left(\hat{f}^{(xlmr\_base)}_{hyp},\ assin\_validation\right)\right)$$

The following hyperparameters were tested in order to maximize the evaluation accuracy.

  • Number of Training Epochs: $(1,2,3)$
  • Per Device Train Batch Size: $(16,32)$
  • Learning Rate: $(1e-6, 2e-6, 3e-6)$
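The search implied by Definition 2 is a plain grid search over the values above. A minimal sketch, where `eval_acc` is a toy stand-in for actually fine-tuning $\hat{f}^{(xlmr\_base)}_{hyp}$ and measuring its accuracy on ASSIN's validation split:

```python
from itertools import product

# Hyperparameter grid from the list above
grid = {
    "num_train_epochs": (1, 2, 3),
    "per_device_train_batch_size": (16, 32),
    "learning_rate": (1e-6, 2e-6, 3e-6),
}

def eval_acc(hyp: dict) -> float:
    """Toy scoring function, for illustration only: a real run would fine-tune
    the model with these hyperparameters and return its accuracy on
    assin_validation."""
    return (hyp["num_train_epochs"]
            + 1e6 * hyp["learning_rate"]
            - 0.01 * hyp["per_device_train_batch_size"])

# Every combination in the grid, as a list of hyperparameter dicts
candidates = [dict(zip(grid, values)) for values in product(*grid.values())]
best = max(candidates, key=eval_acc)
```

With 3 x 2 x 3 = 18 combinations, exhaustive search is cheap; the expensive part in practice is the fine-tuning run hidden inside `eval_acc`.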

The hyperparameter tuning experiments were run and tracked using the Weights & Biases API and can be found at this link.

Training Hyperparameters

The hyperparameter tuning performed yielded the following values:

  • Number of Training Epochs: $3$
  • Per Device Train Batch Size: $16$
  • Learning Rate: $3e-6$

Evaluation

ASSIN

Testing this model in ASSIN's test split is straightforward. The following code snippet shows how to do it:
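A sketch of such a snippet, assuming the `assin` dataset on the Hugging Face Hub (with `premise`, `hypothesis`, and `entailment_judgment` columns) and using a placeholder for this model's Hub identifier:

```python
from datasets import load_dataset
from transformers import pipeline

MODEL_ID = "<this-model-id>"  # placeholder: replace with this model's Hugging Face Hub id

# Load ASSIN's test split (the "full" configuration covers ptbr and ptpt)
assin_test = load_dataset("assin", "full", split="test")
classifier = pipeline("text-classification", model=MODEL_ID)

# Classify each (premise, hypothesis) pair
predictions = [
    classifier({"text": ex["premise"], "text_pair": ex["hypothesis"]})["label"]
    for ex in assin_test
]

# Convert gold label ids to class names and compute accuracy
label_names = assin_test.features["entailment_judgment"].names
gold = [label_names[i] for i in assin_test["entailment_judgment"]]
accuracy = sum(p == g for p, g in zip(predictions, gold)) / len(gold)
print(f"accuracy: {accuracy:.2f}")
```

The column and configuration names above follow the Hub version of ASSIN; adjust them if your copy of the dataset differs.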

ASSIN2

Given a pair of sentences $(premise, hypothesis)$, $\hat{f}^{(xlmr_base)}(premise, hypothesis)$ can be equal to $PARAPHRASE, ENTAILMENT$ or $NONE$ as defined in Definition 1.

The class label column of ASSIN2's test split has only two possible values: $ENTAILMENT$ and $NONE$. Therefore, to test this model on ASSIN2's test split, some mapping must be done to make the ASSIN2 class labels compatible with the model's inference function.

More information on how such mapping is performed can be found in Modelos para Inferência em Linguagem Natural que entendem a Língua Portuguesa.
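One straightforward way to reconcile the two label sets, consistent with Definition 1 (a paraphrase entails in both directions), is to collapse the model's $PARAPHRASE$ predictions into $ENTAILMENT$ before comparing against ASSIN2's gold labels. Whether this matches the exact mapping used in the cited work is an assumption here:

```python
def to_assin2_label(model_label: str) -> str:
    """Collapse the 3-way output of Definition 1 onto ASSIN2's 2-way label set."""
    return "ENTAILMENT" if model_label in ("ENTAILMENT", "PARAPHRASE") else "NONE"
```

Applying `to_assin2_label` to each prediction leaves $ENTAILMENT$ and $NONE$ unchanged and only rewrites $PARAPHRASE$.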

Metrics

The model's performance metrics are presented separately for each test dataset. Accuracy, F1 score, precision, and recall were used in every evaluation. These metrics are reported below; more information on them can be found in [2].
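For reference, all four metrics can be computed from predictions with a few lines of standard-library Python. The macro averaging over the three classes below is an illustrative assumption; the exact averaging scheme used in the report is described in [2]:

```python
def macro_metrics(y_true: list, y_pred: list, labels: list) -> dict:
    """Accuracy plus macro-averaged precision, recall, and F1 over the given labels."""
    accuracy = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
    precisions, recalls, f1s = [], [], []
    for c in labels:
        tp = sum(1 for t, p in zip(y_true, y_pred) if p == c and t == c)
        fp = sum(1 for t, p in zip(y_true, y_pred) if p == c and t != c)
        fn = sum(1 for t, p in zip(y_true, y_pred) if p != c and t == c)
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
        precisions.append(prec)
        recalls.append(rec)
        f1s.append(f1)
    n = len(labels)
    return {
        "accuracy": accuracy,
        "precision": sum(precisions) / n,
        "recall": sum(recalls) / n,
        "f1": sum(f1s) / n,
    }
```

Equivalent results can be obtained with `sklearn.metrics` using `average="macro"`.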

Results

| test set | accuracy | f1 score | precision | recall |
|----------|----------|----------|-----------|--------|
| assin    | 0.89     | 0.89     | 0.89      | 0.89   |
| assin2   | 0.70     | 0.69     | 0.73      | 0.70   |

Model Examination

Some interpretability work was done in order to understand the model's behavior. Such work can be found in the paper describing the procedure used to create this fine-tuned model, [@tcc_paper].

Environmental Impact

Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).

  • Hardware Type: [More Information Needed]
  • Hours used: [More Information Needed]
  • Cloud Provider: [More Information Needed]
  • Compute Region: [More Information Needed]
  • Carbon Emitted: [More Information Needed]

Citation

BibTeX:

    @article{tcc_paper,
      author = {Giovani Tavares and Felipe Ribas Serras and Renata Wassermann and Marcelo Finger},
      title  = {Modelos Transformer para Inferência de Linguagem Natural em Português},
      pages  = {x--y},
      year   = {2023}
    }

References

[1] Salvatore, F. S. (2020). Analyzing Natural Language Inference from a Rigorous Point of View (pp. 1-2).

[2] Andrade, G. T. (2023). Modelos para Inferência em Linguagem Natural que entendem a Língua Portuguesa (train_assin_xlmr_base_results PAGES GO HERE)

[3] Andrade, G. T. (2023). Modelos para Inferência em Linguagem Natural que entendem a Língua Portuguesa (train_assin_xlmr_base_conclusions PAGES GO HERE)