---
language:
- es
license: apache-2.0
tags:
- "longformer"
- "national library of spain"
- "spanish"
- "bne"
datasets:
- "bne"
widget:
- text: "Este año las campanadas de La Sexta las presentará <mask>."
- text: "David Broncano es un presentador de La <mask>."
- text: "Gracias a los datos de la BNE se ha podido <mask> este modelo del lenguaje."
- text: "Hay base legal dentro del marco <mask> actual."
---

# Longformer base trained with data from the National Library of Spain (BNE)

## Model Description

The longformer-base-4096-bne-es is the [Longformer](https://huggingface.co/allenai/longformer-base-4096) version of the [roberta-base-bne](https://huggingface.co/PlanTL-GOB-ES/roberta-base-bne) masked language model for the Spanish language. The model started from the **roberta-base-bne** checkpoint and was pretrained for MLM on long documents from the National Library of Spain (BNE) corpus.

## Intended Uses and Limitations

The longformer-base-4096-bne-es model is ready-to-use only for masked language modeling to perform the Fill Mask task (try the inference API or read the next section). However, it is intended to be fine-tuned on non-generative downstream tasks such as Question Answering, Text Classification, or Named Entity Recognition.

## How to Use

Here is how to use this model:

```python
from transformers import AutoModelForMaskedLM
from transformers import AutoTokenizer, FillMaskPipeline
from pprint import pprint

tokenizer_hf = AutoTokenizer.from_pretrained('PlanTL-GOB-ES/longformer-base-4096-bne-es')
model = AutoModelForMaskedLM.from_pretrained('PlanTL-GOB-ES/longformer-base-4096-bne-es')
model.eval()
pipeline = FillMaskPipeline(model, tokenizer_hf)
# The pipeline fills the <mask> token with the most likely candidates.
text = "Hay base legal dentro del marco <mask> actual."
res_hf = pipeline(text)
pprint([r['token_str'] for r in res_hf])
```

## Limitations and bias

At the time of submission, no measures have been taken to estimate the bias and toxicity embedded in the model. However, we are well aware that our models may be biased since the corpora have been collected using crawling techniques on multiple web sources. We intend to conduct research in these areas in the future, and if completed, this model card will be updated.

## Training corpora and preprocessing

The [National Library of Spain (Biblioteca Nacional de España)](http://www.bne.es/en/Inicio/index.html) crawls all .es domains once a year. The training corpus consists of 59TB of WARC files from these crawls, carried out from 2009 to 2019.

To obtain a high-quality training corpus, the data was preprocessed with a pipeline of operations including, among others, sentence splitting, language detection, filtering of ill-formed sentences, and deduplication of repetitive content. Document boundaries were kept throughout the process. This resulted in 2TB of clean Spanish corpus. A further global deduplication across the corpus was applied, resulting in 570GB of text.

Some statistics of the corpus:

| Corpora | Number of documents | Number of tokens | Size (GB) |
|---------|---------------------|------------------|-----------|
| BNE     | 201,080,084         | 135,733,450,668  | 570GB     |

For this Longformer, we used a small random partition of 7.2GB containing documents with fewer than 4,096 tokens as the training split.

## Tokenization and pre-training

The training corpus has been tokenized using the byte-level version of Byte-Pair Encoding (BPE) used in the original [RoBERTa](https://arxiv.org/abs/1907.11692) model, with a vocabulary size of 50,262 tokens.
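As an illustrative sketch (not part of the training code), the snippet below loads the tokenizer with the standard `transformers` API and inspects its vocabulary size and configured maximum input length; the repeated example sentence is purely illustrative.

```python
from transformers import AutoTokenizer

# Load the model's byte-level BPE tokenizer.
tokenizer = AutoTokenizer.from_pretrained('PlanTL-GOB-ES/longformer-base-4096-bne-es')

# Vocabulary size reported by the tokenizer; should match the 50,262 tokens described above.
print(tokenizer.vocab_size)

# Maximum input length the tokenizer is configured for (the Longformer context window).
print(tokenizer.model_max_length)

# Tokenize a long document, truncating to the 4,096-token context window.
long_text = "Gracias a los datos de la BNE se ha podido entrenar este modelo del lenguaje. " * 200
encoding = tokenizer(long_text, truncation=True, max_length=4096)
print(len(encoding["input_ids"]))
```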
The pre-training of longformer-base-4096-bne-es consists of masked language model training, following the approach employed for the RoBERTa base model. The training lasted a total of 40 hours on 8 computing nodes, each with 2 AMD MI50 GPUs of 32GB VRAM.

## Copyright

Copyright by the Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA) (2022)

## Licensing information

[Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0)

## Funding

This work was funded by the Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA) within the framework of the Plan-TL.