---
language: ca
license: apache-2.0
tags:
- catalan
- masked-lm
- distilroberta
widget:
- text: El Català és una llengua molt <mask>.
- text: Salvador Dalí va viure a <mask>.
- text: La Costa Brava té les millors <mask> d'Espanya.
- text: El cacaolat és un batut de <mask>.
- text: <mask> és la capital de la Garrotxa.
- text: Vaig al <mask> a buscar bolets.
- text: Antoni Gaudí va ser un <mask> molt important per la ciutat.
- text: Catalunya és una referència en <mask> a nivell europeu.
pipeline_tag: fill-mask
---

# DistilRoBERTa-base-ca-v2

## Table of Contents
- [Model description](#model-description)
- [Intended uses and limitations](#intended-uses-and-limitations)
- [How to use](#how-to-use)
- [Limitations and bias](#limitations-and-bias)
- [Training](#training)
  - [Training data](#training-data)
  - [Training procedure](#training-procedure)
- [Evaluation](#evaluation)
  - [CLUB benchmark](#club-benchmark)
  - [Evaluation results](#evaluation-results)
- [Additional information](#additional-information)
  - [Authors](#authors)
  - [Contact information](#contact-information)
  - [Copyright](#copyright)
  - [Licensing information](#licensing-information)
  - [Funding](#funding)
  - [Citation information](#citation-information)
  - [Disclaimer](#disclaimer)

## Model description

This model is a distilled version of [projecte-aina/roberta-base-ca-v2](https://huggingface.co/projecte-aina/roberta-base-ca-v2). It follows the same training procedure as [DistilBERT](https://arxiv.org/abs/1910.01108), using the Knowledge Distillation implementation from the paper's [official repository](https://github.com/huggingface/transformers/tree/main/examples/research_projects/distillation).

The resulting architecture consists of 6 layers, 768-dimensional embeddings and 12 attention heads, for a total of 82M parameters, considerably fewer than the 125M of standard RoBERTa-base models. This makes the model lighter and faster than the original, at the cost of slightly lower performance.

We encourage users of this model to check out the [projecte-aina/roberta-base-ca-v2](https://huggingface.co/projecte-aina/roberta-base-ca-v2) model card to learn more about the teacher model.
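The architecture figures above can be checked directly against the published checkpoint. A minimal sketch, assuming a standard Hugging Face Transformers installation:

```python
from transformers import AutoConfig, AutoModelForMaskedLM

# Read the architecture hyperparameters from the model configuration.
config = AutoConfig.from_pretrained("projecte-aina/distilroberta-base-ca-v2")
print(config.num_hidden_layers)    # expected: 6
print(config.hidden_size)          # expected: 768
print(config.num_attention_heads)  # expected: 12

# Count the parameters of the full masked-LM checkpoint (roughly 82M).
model = AutoModelForMaskedLM.from_pretrained("projecte-aina/distilroberta-base-ca-v2")
print(f"{sum(p.numel() for p in model.parameters()) / 1e6:.0f}M parameters")
```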
## Intended uses and limitations

This model is ready to use out of the box only for masked language modeling (the Fill-Mask task). It is, however, intended to be fine-tuned on non-generative downstream tasks such as Question Answering, Text Classification or Named Entity Recognition.

## How to use

Usage example in which the model is passed to a fill-mask pipeline to predict the masked word (`<mask>`) in a given text:

```python
from pprint import pprint
from transformers import pipeline

pipe = pipeline("fill-mask", model="projecte-aina/distilroberta-base-ca-v2")
text = "El <mask> és el meu dia preferit de la setmana."

pprint(pipe(text))
```

```
[{'score': 0.2531125545501709,
  'sequence': ' El dilluns és el meu dia preferit de la setmana.',
  'token': 2885,
  'token_str': ' dilluns'},
 {'score': 0.13626143336296082,
  'sequence': ' El divendres és el meu dia preferit de la setmana.',
  'token': 2539,
  'token_str': ' divendres'},
 {'score': 0.11026635020971298,
  'sequence': ' El dijous és el meu dia preferit de la setmana.',
  'token': 2868,
  'token_str': ' dijous'},
 {'score': 0.10040736198425293,
  'sequence': ' El dissabte és el meu dia preferit de la setmana.',
  'token': 2480,
  'token_str': ' dissabte'},
 {'score': 0.09762872755527496,
  'sequence': ' El diumenge és el meu dia preferit de la setmana.',
  'token': 2587,
  'token_str': ' diumenge'}]
```

## Limitations and bias

At the time of submission, no measures have been taken to estimate the bias embedded in the model. However, we are well aware that our models may be biased, since the corpora have been collected using crawling techniques on multiple web sources. We intend to conduct research in these areas in the future, and if completed, this model card will be updated.

## Training

### Training data

The training corpus consists of several corpora gathered from web crawling and public sources, as shown in the table below:

| Corpus                   | Size (GB) |
|--------------------------|-----------|
| Catalan Crawling         | 13.00     |
| RacoCatalá               | 8.10      |
| Catalan Oscar            | 4.00      |
| CaWaC                    | 3.60      |
| Cat. General Crawling    | 2.50      |
| Wikipedia                | 1.10      |
| DOGC                     | 0.78      |
| Padicat                  | 0.63      |
| ACN                      | 0.42      |
| Nació Digital            | 0.42     |
| Cat. Government Crawling | 0.24      |
| Vilaweb                  | 0.06      |
| Catalan Open Subtitles   | 0.02      |
| Tweets                   | 0.02      |

### Training procedure

This model was trained using Knowledge Distillation, a technique for shrinking networks to a reasonable size while minimizing the loss in performance. It consists of distilling a large language model (the teacher) into a more lightweight, energy-efficient, and production-friendly model (the student): in a teacher-student setup, a relatively small student model is trained to mimic the behavior of the larger teacher model. As a result, the student achieves lower inference latency and can run on commodity hardware.
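To make the setup concrete, below is a minimal sketch of the soft-target part of the distillation loss described in the DistilBERT paper. The temperature value is illustrative, and the full objective also combines the standard MLM loss and a cosine embedding loss on hidden states; refer to the official distillation repository linked above for the actual training script.

```python
import torch
import torch.nn.functional as F
from transformers import AutoModelForMaskedLM, AutoTokenizer

# The published checkpoints are reused here only to illustrate the loss
# computation; during actual training the student is initialized from
# alternating teacher layers and updated by backpropagation, while the
# teacher stays frozen.
teacher = AutoModelForMaskedLM.from_pretrained("projecte-aina/roberta-base-ca-v2").eval()
student = AutoModelForMaskedLM.from_pretrained("projecte-aina/distilroberta-base-ca-v2")
tokenizer = AutoTokenizer.from_pretrained("projecte-aina/roberta-base-ca-v2")

inputs = tokenizer("El cel és blau.", return_tensors="pt")
temperature = 2.0  # assumed value; softens both output distributions

with torch.no_grad():
    teacher_logits = teacher(**inputs).logits
student_logits = student(**inputs).logits

# KL divergence between temperature-softened teacher and student predictions,
# rescaled by T^2 so gradients keep a comparable magnitude.
distill_loss = F.kl_div(
    F.log_softmax(student_logits / temperature, dim=-1),
    F.softmax(teacher_logits / temperature, dim=-1),
    reduction="batchmean",
) * temperature**2
print(distill_loss.item())
```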
## Evaluation

### CLUB benchmark

This model has been fine-tuned on the downstream tasks of the [Catalan Language Understanding Evaluation benchmark (CLUB)](https://club.aina.bsc.es/), which includes the following datasets:

| Dataset   | Task | Total   | Train   | Dev    | Test   |
|:----------|:-----|:--------|:--------|:-------|:-------|
| AnCora    | NER  | 13,581  | 10,628  | 1,427  | 1,526  |
| AnCora    | POS  | 16,678  | 13,123  | 1,709  | 1,846  |
| STS-ca    | STS  | 3,073   | 2,073   | 500    | 500    |
| TeCla     | TC   | 137,775 | 110,203 | 13,786 | 13,786 |
| TE-ca     | RTE  | 21,163  | 16,930  | 2,116  | 2,117  |
| CatalanQA | QA   | 21,427  | 17,135  | 2,157  | 2,135  |
| XQuAD-ca  | QA   | -       | -       | -      | 1,189  |

### Evaluation results

This is how it compares to its teacher when fine-tuned on the downstream tasks above:

| Model \ Task          | NER (F1)  | POS (F1)  | STS-ca (Comb.) | TeCla (Acc.) | TE-ca (Acc.) | CatalanQA (F1/EM)   | XQuAD-ca <sup>1</sup> (F1/EM) |
|-----------------------|:----------|:----------|:---------------|:-------------|:-------------|:--------------------|:------------------------------|
| RoBERTa-base-ca-v2    | **89.29** | **98.96** | **79.07**      | **74.26**    | **83.14**    | **89.50**/**76.63** | **73.64**/**55.42**           |
| DistilRoBERTa-base-ca | 87.88     | 98.83     | 77.26          | 73.20        | 76.00        | 84.07/70.77         | 62.93/45.08                   |

<sup>1</sup>: Trained on CatalanQA, tested on XQuAD-ca.
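For reference, here is a hedged sketch of how the model can be loaded for fine-tuning on a CLUB-style classification task with the Transformers `Trainer`. The label count and output directory are placeholders, and the dataset preparation is omitted; this is not the exact setup used to produce the results above.

```python
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

model_id = "projecte-aina/distilroberta-base-ca-v2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(
    model_id,
    num_labels=4,  # placeholder: set to the number of classes in your task
)

training_args = TrainingArguments(
    output_dir="distilroberta-ca-finetuned",  # placeholder path
    num_train_epochs=3,
    per_device_train_batch_size=16,
)
trainer = Trainer(
    model=model,
    args=training_args,
    # train_dataset=..., eval_dataset=...  (tokenized CLUB data goes here)
)
# trainer.train()
```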
## Additional information

### Authors

Language Technologies Unit at the Barcelona Supercomputing Center ([langtech@bsc.es](mailto:langtech@bsc.es)).

### Contact information

For further information, send an email to [aina@bsc.es](mailto:aina@bsc.es).

### Copyright

Copyright by the Language Technologies Unit at the Barcelona Supercomputing Center.

### Licensing information

This work is licensed under the [Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0).

### Funding

This work was funded by the [Departament de la Vicepresidència i de Polítiques Digitals i Territori de la Generalitat de Catalunya](https://politiquesdigitals.gencat.cat/ca/inici/index.html#googtrans(ca|en)) within the framework of [Projecte AINA](https://politiquesdigitals.gencat.cat/ca/economia/catalonia-ai/aina).

### Citation information

There is no publication for this specific model, but you can cite the paper in which the teacher model was presented:

```bibtex
@inproceedings{armengol-estape-etal-2021-multilingual,
    title = "Are Multilingual Models the Best Choice for Moderately Under-resourced Languages? {A} Comprehensive Assessment for {C}atalan",
    author = "Armengol-Estap{\'e}, Jordi and
      Carrino, Casimiro Pio and
      Rodriguez-Penagos, Carlos and
      de Gibert Bonet, Ona and
      Armentano-Oller, Carme and
      Gonzalez-Agirre, Aitor and
      Melero, Maite and
      Villegas, Marta",
    booktitle = "Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021",
    month = aug,
    year = "2021",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2021.findings-acl.437",
    doi = "10.18653/v1/2021.findings-acl.437",
    pages = "4933--4946",
}
```

### Disclaimer

The models published in this repository are intended for a generalist purpose and are available to third parties. These models may have bias and/or other undesirable distortions.

When third parties deploy or provide systems and/or services to other parties using any of these models (or systems based on these models), or become users of the models themselves, they should note that it is their responsibility to mitigate the risks arising from their use and, in any event, to comply with applicable regulations, including regulations regarding the use of Artificial Intelligence.

In no event shall the owner and creator of the models (BSC) be liable for any results arising from the use made by third parties of these models.