---
language:
- ca
license: apache-2.0
tags:
- catalan
- textual entailment
- teca
- CaText
- Catalan Textual Corpus
datasets:
- projecte-aina/teca
metrics:
- accuracy
model-index:
- name: roberta-base-ca-v2-cased-te
results:
- task:
type: text-classification
dataset:
type: projecte-aina/teca
name: TECA
metrics:
- name: Accuracy
type: accuracy
value: 0.8314
widget:
- text: M'agrades. T'estimo.
- text: M'agrada el sol i la calor. A la Garrotxa plou molt.
- text: El llibre va caure per la finestra. El llibre va sortir volant.
- text: El meu aniversari és el 23 de maig. Faré anys a finals de maig.
---

# Catalan BERTa-v2 (roberta-base-ca-v2) finetuned for Textual Entailment
## Table of Contents
- Model Description
- Intended Uses and Limitations
- How to Use
- Training
- Evaluation
- Licensing Information
- Citation Information
- Funding
- Contributions
## Model description

The roberta-base-ca-v2-cased-te is a Textual Entailment (TE) model for the Catalan language, fine-tuned from the roberta-base-ca-v2 model, a RoBERTa base model pre-trained on a medium-sized corpus collected from publicly available corpora and crawlers (see the roberta-base-ca-v2 model card for more details).
## Intended Uses and Limitations

The roberta-base-ca-v2-cased-te model can be used to recognize Textual Entailment (TE), i.e. to determine whether a premise text entails a given hypothesis. The model is limited by its training dataset and may not generalize well for all use cases.
## How to Use

Here is how to use this model:
```python
from pprint import pprint

from transformers import pipeline

# Load the fine-tuned Catalan textual entailment model.
nlp = pipeline("text-classification", model="projecte-aina/roberta-base-ca-v2-cased-te")

# The premise and the hypothesis are joined with the RoBERTa pair separator "</s></s>".
example = "M'agrada el sol i la calor. </s></s> A la Garrotxa plou molt."

te_results = nlp(example)
pprint(te_results)
```
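As a hedged variation (assuming the model uses a standard RoBERTa tokenizer), the pair separator can be derived from the tokenizer's `sep_token` instead of being hardcoded, which also makes it easy to score several premise–hypothesis pairs at once:

```python
from transformers import AutoTokenizer, pipeline

model_id = "projecte-aina/roberta-base-ca-v2-cased-te"
tokenizer = AutoTokenizer.from_pretrained(model_id)
nlp = pipeline("text-classification", model=model_id, tokenizer=tokenizer)

# Two sep tokens ("</s></s>" for RoBERTa-style tokenizers) separate the pair.
sep = tokenizer.sep_token * 2

pairs = [
    ("M'agrades.", "T'estimo."),
    ("El meu aniversari és el 23 de maig.", "Faré anys a finals de maig."),
]
print(nlp([f"{premise} {sep} {hypothesis}" for premise, hypothesis in pairs]))
```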
## Training

### Training data

We used the Catalan Textual Entailment dataset TECA for training and evaluation.

### Training Procedure

The model was trained with a batch size of 16 and a learning rate of 5e-5 for 5 epochs. We then selected the best checkpoint using the downstream task metric (accuracy) on the corresponding development set, and finally evaluated it on the test set.
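For illustration, below is a minimal sketch of this training setup with the Hugging Face Trainer; the TECA column names and label count are assumptions here, and the official fine-tuning scripts remain the authoritative reference:

```python
import numpy as np
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

dataset = load_dataset("projecte-aina/teca")
tokenizer = AutoTokenizer.from_pretrained("projecte-aina/roberta-base-ca-v2")

def tokenize(batch):
    # TECA examples are premise/hypothesis pairs (column names assumed).
    return tokenizer(batch["premise"], batch["hypothesis"], truncation=True)

tokenized = dataset.map(tokenize, batched=True)
model = AutoModelForSequenceClassification.from_pretrained(
    "projecte-aina/roberta-base-ca-v2", num_labels=3)  # 3 TE classes assumed

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    return {"accuracy": (np.argmax(logits, axis=-1) == labels).mean()}

args = TrainingArguments(
    output_dir="roberta-base-ca-v2-cased-te",
    per_device_train_batch_size=16,  # batch size reported above
    learning_rate=5e-5,              # learning rate reported above
    num_train_epochs=5,              # epochs reported above
    eval_strategy="epoch",           # "evaluation_strategy" in older transformers
    save_strategy="epoch",
    load_best_model_at_end=True,     # keep the best dev checkpoint
    metric_for_best_model="accuracy",
)
trainer = Trainer(model=model, args=args,
                  train_dataset=tokenized["train"],
                  eval_dataset=tokenized["validation"],
                  tokenizer=tokenizer,
                  compute_metrics=compute_metrics)
trainer.train()
```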
## Evaluation

### Variable and Metrics

The model was fine-tuned maximizing accuracy.

### Evaluation results

We evaluated the roberta-base-ca-v2-cased-te model on the TECA test set against standard multilingual and monolingual baselines:
| Model                       | TECA (Accuracy) |
| --------------------------- | --------------- |
| roberta-base-ca-v2-cased-te | **83.14**       |
| BERTa                       | 79.26           |
| mBERT                       | 74.63           |
| XLM-RoBERTa                 | 33.30           |
For more details, check the fine-tuning and evaluation scripts in the official GitHub repository.
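The headline accuracy can be approximately reproduced with a short script like the sketch below; the test split name and the assumption that the model config's label mapping matches the dataset's integer labels are unverified here:

```python
import evaluate
from datasets import load_dataset
from transformers import pipeline

nlp = pipeline("text-classification",
               model="projecte-aina/roberta-base-ca-v2-cased-te")
test = load_dataset("projecte-aina/teca", split="test")  # split name assumed

# Build "premise </s></s> hypothesis" inputs, as in the usage example above.
inputs = [f"{ex['premise']} </s></s> {ex['hypothesis']}" for ex in test]
preds = nlp(inputs, batch_size=32)

# Map predicted label strings back to integer ids via the model config,
# assuming it matches the dataset's label encoding.
label2id = nlp.model.config.label2id
accuracy = evaluate.load("accuracy")
print(accuracy.compute(predictions=[label2id[p["label"]] for p in preds],
                       references=test["label"]))
```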
## Licensing Information

[Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0)
## Citation Information
If you use any of these resources (datasets or models) in your work, please cite our latest paper:
```bibtex
@inproceedings{armengol-estape-etal-2021-multilingual,
    title = "Are Multilingual Models the Best Choice for Moderately Under-resourced Languages? {A} Comprehensive Assessment for {C}atalan",
    author = "Armengol-Estap{\'e}, Jordi  and
      Carrino, Casimiro Pio  and
      Rodriguez-Penagos, Carlos  and
      de Gibert Bonet, Ona  and
      Armentano-Oller, Carme  and
      Gonzalez-Agirre, Aitor  and
      Melero, Maite  and
      Villegas, Marta",
    booktitle = "Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021",
    month = aug,
    year = "2021",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2021.findings-acl.437",
    doi = "10.18653/v1/2021.findings-acl.437",
    pages = "4933--4946",
}
```
## Funding
This work was funded by the Departament de la Vicepresidència i de Polítiques Digitals i Territori de la Generalitat de Catalunya within the framework of Projecte AINA.