---
datasets:
- assin2
language:
- pt
metrics:
- accuracy
pipeline_tag: text-classification
tags:
- nli
---

# Model Card for Model ID

This is a **[BERTimbau-base](https://huggingface.co/neuralmind/bert-large-portuguese-cased) model fine-tuned** on 5K (premise, hypothesis) sentence pairs from the **ASSIN (Avaliação de Similaridade Semântica e Inferência Textual)** corpus. The original reference papers are [BERTimbau: Pretrained BERT Models for Brazilian Portuguese](https://www.researchgate.net/publication/345395208_BERTimbau_Pretrained_BERT_Models_for_Brazilian_Portuguese) and [ASSIN: Avaliação de Similaridade Semântica e Inferência Textual](https://huggingface.co/datasets/assin), respectively. This model is suitable for Portuguese (from Brazil or Portugal).

## Model Details

### Model Description

- **Developed by:** Giovani Tavares and Felipe Ribas Serras
- **Oriented by:** Felipe Ribas Serras, Renata Wassermann and Marcelo Finger
- **Model type:** Transformer-based text classifier
- **Language(s) (NLP):** Portuguese
- **License:** mit
- **Fine-tuned from model:** [BERTimbau-base](https://huggingface.co/neuralmind/bert-large-portuguese-cased)

### Model Sources

- **Repository:** [Natural-Portuguese-Language-Inference](https://github.com/giogvn/Natural-Portuguese-Language-Inference)
- **Paper:** This is ongoing research; a paper fully describing our experiments is in preparation.

## Uses

### Direct Use

This fine-tuned version of [BERTimbau-base](https://huggingface.co/neuralmind/bert-large-portuguese-cased) performs Natural Language Inference (NLI), a text classification task. The *(premise, hypothesis)* entailment definition used is the same as the one found in Salvatore's work [1]. Therefore, this fine-tuned version of [BERTimbau-base](https://huggingface.co/neuralmind/bert-large-portuguese-cased) classifies pairs of sentences of the form *(premise, hypothesis)* into the classes *ENTAILMENT*, *NONE*, and *PARAPHRASE*.
## Demo

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_path = "giotvr/bertimbau_large_assin_fine_tuned"
premise = "As mudanças climáticas são uma ameaça séria para a biodiversidade do planeta."
hypothesis = "A biodiversidade do planeta é seriamente ameaçada pelas mudanças climáticas."

tokenizer = AutoTokenizer.from_pretrained(model_path, use_auth_token=True)
model = AutoModelForSequenceClassification.from_pretrained(model_path, use_auth_token=True)

input_pair = tokenizer(premise, hypothesis, return_tensors="pt", padding=True, truncation=True)
with torch.no_grad():
    logits = model(**input_pair).logits
    probs = torch.nn.functional.softmax(logits, dim=-1)

probs, sorted_indices = torch.sort(probs, descending=True)
for i, score in enumerate(probs[0]):
    print(f"Class {sorted_indices[0][i]}: {score.item():.4f}")
```

### Recommendations

This model should be used for scientific purposes only. It was not tested for production environments.

## Fine-Tuning Details

### Fine-Tuning Data

- **Train Dataset:** [ASSIN](https://huggingface.co/datasets/assin)
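The demo above prints raw class indices rather than label names. To turn indices into labels, the model's index-to-label mapping is needed; the sketch below is illustrative only, as the `id2label` dictionary shown is an assumption and the authoritative mapping should be read from `model.config.id2label` after loading the model. The softmax is reimplemented in plain Python so the mapping logic can be shown without downloading the model.

```python
import math

# Hypothetical index-to-label mapping: read the real one from
# model.config.id2label after loading the fine-tuned model.
id2label = {0: "ENTAILMENT", 1: "NONE", 2: "PARAPHRASE"}

def softmax(logits):
    """Plain-Python softmax, shifted by the max logit for numerical stability."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Made-up logits for one (premise, hypothesis) pair, for illustration only.
logits = [4.2, -1.0, 0.3]
probs = softmax(logits)

# Rank classes by probability, highest first, and print label names.
ranked = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)
for i in ranked:
    print(f"{id2label[i]}: {probs[i]:.4f}")
```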
- **Evaluation Dataset used for Hyperparameter Tuning:** [PLUE/MNLI](https://huggingface.co/datasets/dlb/plue)'s validation split
- **Test Datasets:**
  - [ASSIN](https://huggingface.co/datasets/assin)'s test split
  - [ASSIN2](https://huggingface.co/datasets/assin2)'s test split
  - [PLUE/MNLI](https://huggingface.co/datasets/dlb/plue/viewer/mnli_matched)'s validation matched split

This is a fine-tuned version of [BERTimbau-base](https://huggingface.co/neuralmind/bert-large-portuguese-cased) trained on the [ASSIN](https://huggingface.co/datasets/assin) dataset. [ASSIN](https://huggingface.co/datasets/assin) is a corpus of Portuguese hypothesis/premise sentence pairs annotated for detecting *entailment*, *none*, or *paraphrase* relationships between the members of each pair. The corpus is balanced among the three classes.

### Fine-Tuning Procedure

The model's fine-tuning procedure can be summarized in three major subsequent tasks:
1. **Data Processing:** [ASSIN](https://huggingface.co/datasets/assin)'s *train* and *validation* splits were loaded from the **Hugging Face Hub** and processed afterwards;
2. **Hyperparameter Tuning:** [BERTimbau-base](https://huggingface.co/neuralmind/bert-large-portuguese-cased)'s hyperparameters were chosen with the help of the [Weights & Biases](https://docs.wandb.ai/ref/python/public-api/api) API, which was used to track the results and upload the fine-tuned models;
3. **Final Model Loading and Testing:** using the *cross-tests* approach described in the [Evaluation](#evaluation) section, the models' performance was measured using different datasets and metrics.
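The hyperparameter tuning step amounts to a grid search over the candidate values listed in the next subsection. The sketch below is a minimal illustration of that search loop: `fine_tune_and_evaluate` is a hypothetical stand-in for a full fine-tuning run tracked with Weights & Biases, not the authors' actual code.

```python
from itertools import product

# Candidate values from the Hyperparameter Tuning subsection of this card.
epochs_grid = (2, 3, 4)
batch_size_grid = (8, 16, 32)
learning_rate_grid = (3e-5, 2e-5, 3e-5)

def fine_tune_and_evaluate(epochs, batch_size, lr):
    """Hypothetical stand-in: fine-tune BERTimbau with these hyperparameters
    and return the evaluation accuracy (a real run would use a Trainer
    tracked via the Weights & Biases API)."""
    raise NotImplementedError

# Enumerate every configuration in the grid.
grid = list(product(epochs_grid, batch_size_grid, learning_rate_grid))
print(len(grid))  # 27 configurations (duplicated learning-rate entry included)
```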
#### Hyperparameter Tuning

The following hyperparameters were tested in order to maximize the evaluation accuracy:

- **Number of Training Epochs:** $(2, 3, 4)$
- **Per Device Train Batch Size:** $(8, 16, 32)$
- **Learning Rate:** $(3e-5, 2e-5, 3e-5)$

The hyperparameter tuning experiments were run and tracked using the [Weights & Biases API](https://docs.wandb.ai/ref/python/public-api/api) and can be found at this [link](https://wandb.ai/gio_projs/assin_xlm_roberta_v5?workspace=user-giogvn).

#### Training Hyperparameters

The [hyperparameter tuning](#hyperparameter-tuning) performed yielded the following values:

- **Number of Training Epochs:** $2$
- **Per Device Train Batch Size:** $16$
- **Learning Rate:** $5e-5$

## Evaluation

### ASSIN

Testing this model on ASSIN's test split was straightforward, as the model was fine-tuned on ASSIN's training split.

### ASSIN2

Testing this model on ASSIN2's test split was also straightforward, as ASSIN2 contains the same classes as ASSIN.

### PLUE/MNLI

Testing this model on PLUE/MNLI's validation split required translating the *neutral* and *contradiction* classes found in it, because those classes are not present in ASSIN: both were considered equivalent to ASSIN's *NONE* class. More details on this translation can be found in [Modelos para Inferência em Linguagem Natural que entendem a Língua Portuguesa](https://linux.ime.usp.br/~giovani/).

### Metrics

The model's performance metrics are reported separately for each test dataset. Accuracy, F1 score, precision, and recall were the metrics used in every evaluation performed; they are reported below. More information on these metrics will be available in our ongoing research paper.
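The class translation and the four reported metrics can be sketched as follows. This is an illustration under stated assumptions: the `MNLI_TO_ASSIN` mapping follows the text above (both *neutral* and *contradiction* collapse to *NONE*), but the label strings, the toy data, and the choice of `weighted` averaging for the multi-class metrics are assumptions, not the authors' exact setup.

```python
from sklearn.metrics import accuracy_score, f1_score, precision_score, recall_score

# Collapse MNLI's three-way labels into ASSIN's label space:
# neutral and contradiction both map to NONE, as described above.
MNLI_TO_ASSIN = {"entailment": "ENTAILMENT", "neutral": "NONE", "contradiction": "NONE"}

def evaluate(y_true, y_pred):
    """Compute the four reported metrics ('weighted' averaging is an assumption)."""
    return {
        "accuracy": accuracy_score(y_true, y_pred),
        "f1": f1_score(y_true, y_pred, average="weighted"),
        "precision": precision_score(y_true, y_pred, average="weighted"),
        "recall": recall_score(y_true, y_pred, average="weighted"),
    }

# Toy example: MNLI-style gold labels are translated before scoring.
gold_mnli = ["entailment", "neutral", "contradiction", "entailment"]
y_true = [MNLI_TO_ASSIN[label] for label in gold_mnli]
y_pred = ["ENTAILMENT", "NONE", "ENTAILMENT", "ENTAILMENT"]
metrics = evaluate(y_true, y_pred)
print(metrics["accuracy"])  # 0.75
```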
### Results

| test set  | accuracy | f1 score | precision | recall |
|-----------|----------|----------|-----------|--------|
| assin     | 0.92     | 0.92     | 0.92      | 0.92   |
| assin2    | 0.73     | 0.72     | 0.77      | 0.73   |
| plue/mnli | 0.49     | 0.40     | 0.35      | 0.49   |

## Model Examination

Some interpretability work is being done in order to understand the model's behavior; details will be available in the previously mentioned paper.

## References

[1] [Salvatore, F. S. (2020). Analyzing Natural Language Inference from a Rigorous Point of View (pp. 1-2).](https://www.teses.usp.br/teses/disponiveis/45/45134/tde-05012021-151600/publico/tese_de_doutorado_felipe_salvatore.pdf)