---
language:
- es
license: cc-by-4.0
tags:
- anglicisms
- loanwords
- borrowing
- codeswitching
- flair
- token-classification
- sequence-tagger-model
- arxiv:2203.16169
datasets:
- coalas
widget:
- text: >-
    Las fake news sobre la celebrity se reprodujeron por los 'mass media' en
    prime time.
- text: Me gusta el cine noir y el anime.
- text: >-
    Benching, estar en el banquillo de tu 'crush' mientras otro juega de
    titular.
- text: Recetas de noviembre para el batch cooking.
- text: Utilizaron técnicas de machine learning, big data o blockchain.
- text: >-
    En la 'red carpet' lució un look muy urban con chunky shoes de inspiración
    anime.
- text: >-
    Buscamos data scientist con conocimientos de machine learning y
    blockchain.
library_name: flair
---

# Model Card for anglicisms-spanish-beto

This is a pretrained model for detecting unassimilated English lexical borrowings (a.k.a. anglicisms) in Spanish newswire. The model labels words of foreign origin (fundamentally from English) used in Spanish, such as *fake news*, *machine learning*, *smartwatch*, *influencer* or *streaming*.

## Model Details

### Model Description

The model is a BiLSTM-CRF fed with Transformer-based BERT and BETO embeddings (along with character and BPE embeddings), trained on the [COALAS](https://github.com/lirondos/coalas/) corpus for the task of detecting lexical borrowings.

The model considers two labels:

* ``ENG``: for English lexical borrowings (*smartphone*, *online*, *podcast*)
* ``OTHER``: for lexical borrowings from any other language (*boutique*, *anime*, *umami*)

The model uses BIO encoding to account for multitoken borrowings (see the illustrative sketch at the end of this section).

**⚠ This is not the best-performing model for this task.** For the best-performing model (F1=85.76), see the [Flair model](https://huggingface.co/lirondos/anglicisms-spanish-flair-cs) trained on codeswitched data; an [mBERT model](https://huggingface.co/lirondos/anglicisms-spanish-mbert) (F1=83.5) is also available.

- **Developed and shared by:** [Elena Álvarez Mellado](https://lirondos.github.io/)
- **Language(s) (NLP):** Spanish
- **License:** cc-by-4.0
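The following is a minimal, illustrative sketch of the BIO scheme described above. It is not part of this model's API (flair decodes tags into spans internally when you call `get_spans()`); the `decode_bio` helper and the hard-coded labels are hypothetical, shown only to make the encoding concrete:

```python
# Illustrative only: BIO labels encode a multitoken borrowing such as
# "fake news" (B-ENG followed by I-ENG) next to a single-token one ("celebrity").
tokens = ["Las", "fake", "news", "sobre", "la", "celebrity"]
labels = ["O", "B-ENG", "I-ENG", "O", "O", "B-ENG"]

def decode_bio(tokens, labels):
    """Group BIO-labelled tokens into (span_text, label) tuples."""
    spans, current, current_label = [], [], None
    for token, label in zip(tokens, labels):
        if label.startswith("B-"):  # a new span starts here
            if current:
                spans.append((" ".join(current), current_label))
            current, current_label = [token], label[2:]
        elif label.startswith("I-") and current:  # the open span continues
            current.append(token)
        else:  # "O": close any open span
            if current:
                spans.append((" ".join(current), current_label))
            current, current_label = [], None
    if current:
        spans.append((" ".join(current), current_label))
    return spans

print(decode_bio(tokens, labels))  # [('fake news', 'ENG'), ('celebrity', 'ENG')]
```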
### Model Sources

- **Paper:** Elena Álvarez-Mellado and Constantine Lignos. 2022. [Detecting Unassimilated Borrowings in Spanish: An Annotated Corpus and Approaches to Modeling](https://aclanthology.org/2022.acl-long.268/). In *Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)*, pages 3868–3888, Dublin, Ireland. Association for Computational Linguistics.
- **Demos:**
  - [Observatory of anglicism usage in the Spanish press](https://observatoriolazaro.es/)
  - [pylazaro Python library](https://pylazaro.readthedocs.io/)

## Metrics (on the test set)

The following table summarizes the results obtained by this model on the test set of the [COALAS](https://github.com/lirondos/coalas/) corpus.

| LABEL | Precision | Recall | F1 |
|:------|----------:|-------:|------:|
| ALL   | 90.35 | 80.16 | 84.95 |
| ENG   | 90.47 | 82.73 | 86.42 |
| OTHER | 71.43 | 10.87 | 18.87 |

## Dataset

This model was trained on [COALAS](https://github.com/lirondos/coalas/), a corpus of Spanish newswire annotated with unassimilated lexical borrowings. The corpus contains 370,000 tokens and covers a variety of written media in European Spanish. The test set was designed to be as difficult as possible: it covers sources and dates not seen in the training set, includes a high number of OOV words (92% of the borrowings in the test set are OOV) and is very borrowing-dense (20 borrowings per 1,000 tokens).

| Set | Tokens | ENG | OTHER | Unique |
|:------------|--------:|------:|----:|------:|
| Training    | 231,126 | 1,493 | 28  | 380   |
| Development | 82,578  | 306   | 49  | 316   |
| Test        | 58,997  | 1,239 | 46  | 987   |
| **Total**   | 372,701 | 3,038 | 123 | 1,683 |

## More info

More information about the dataset, model experimentation and error analysis can be found in the paper *[Detecting Unassimilated Borrowings in Spanish: An Annotated Corpus and Approaches to Modeling](https://aclanthology.org/2022.acl-long.268/)*.

## How to use

```python
import os
import pathlib

from flair.data import Sentence
from flair.models import SequenceTagger

if os.name == "nt":  # Minor patch needed if you are running from Windows
    temp = pathlib.PosixPath
    pathlib.PosixPath = pathlib.WindowsPath

tagger = SequenceTagger.load("lirondos/anglicisms-spanish-flair-bert-beto")

text = "Las fake news sobre la celebrity se reprodujeron por los mass media en prime time."

sentence = Sentence(text)

# predict tags
tagger.predict(sentence)

# print sentence
print(sentence)

# print predicted borrowing spans
print("The following borrowings were found:")
for entity in sentence.get_spans():
    print(entity)
```

## Citation

**BibTeX:** If you use this model, please cite the following reference:

```bibtex
@inproceedings{alvarez-mellado-lignos-2022-detecting,
    title = "Detecting Unassimilated Borrowings in {S}panish: {A}n Annotated Corpus and Approaches to Modeling",
    author = "{\'A}lvarez-Mellado, Elena  and
      Lignos, Constantine",
    editor = "Muresan, Smaranda  and
      Nakov, Preslav  and
      Villavicencio, Aline",
    booktitle = "Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
    month = may,
    year = "2022",
    address = "Dublin, Ireland",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.acl-long.268",
    doi = "10.18653/v1/2022.acl-long.268",
    pages = "3868--3888",
    abstract = "This work presents a new resource for borrowing identification and analyzes the performance and errors of several models on this task. We introduce a new annotated corpus of Spanish newswire rich in unassimilated lexical borrowings{---}words from one language that are introduced into another without orthographic adaptation{---}and use it to evaluate how several sequence labeling models (CRF, BiLSTM-CRF, and Transformer-based models) perform. The corpus contains 370,000 tokens and is larger, more borrowing-dense, OOV-rich, and topic-varied than previous corpora available for this task. Our results show that a BiLSTM-CRF model fed with subword embeddings along with either Transformer-based embeddings pretrained on codeswitched data or a combination of contextualized word embeddings outperforms results obtained by a multilingual BERT-based model.",
}
```