# Longformer base trained with data from National Library of Spain (BNE)

## Table of contents
<details>
<summary>Click to expand</summary>

- [Model description](#model-description)
- [Intended uses and limitations](#intended-uses-and-limitations)
- [How to use](#how-to-use)
- [Limitations and bias](#limitations-and-bias)
- [Training](#training)
- [Evaluation](#evaluation)
- [Additional information](#additional-information)
  - [Author](#author)
  - [Contact information](#contact-information)
  - [Copyright](#copyright)
  - [Licensing information](#licensing-information)
  - [Funding](#funding)
  - [Disclaimer](#disclaimer)

</details>

## Model description
The longformer-base-4096-bne-es model is the [Longformer](https://huggingface.co/allenai/longformer-base-4096) version of the [roberta-base-bne](https://huggingface.co/PlanTL-GOB-ES/roberta-base-bne) masked language model for the Spanish language. The model started from the **roberta-base-bne** checkpoint and was pretrained for MLM on long documents from the National Library of Spain (BNE) corpus described below.
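As a quick orientation, the extended context window can be checked from the published configuration. This is a small sketch, not part of the original card; the commented values are what the description above implies:

```python
from transformers import AutoConfig

# Load only the model configuration; no weights are downloaded.
config = AutoConfig.from_pretrained("PlanTL-GOB-ES/longformer-base-4096-bne-es")

print(config.model_type)               # expected: "longformer"
print(config.max_position_embeddings)  # on the order of the 4,096-token window
```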

## Intended uses and limitations

The longformer-base-4096-bne-es model is ready to use only for masked language modeling to perform the Fill Mask task (try the inference API or read the next section).

However, it is intended to be fine-tuned on non-generative downstream tasks such as Question Answering, Text Classification, or Named Entity Recognition.

## How to use

Here is how to use this model:
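(A minimal fill-mask sketch, assuming the `PlanTL-GOB-ES/longformer-base-4096-bne-es` Hub identifier; the closing `pprint` call is preserved from this card, while the example sentence is an illustrative assumption.)

```python
from pprint import pprint
from transformers import AutoModelForMaskedLM, AutoTokenizer, FillMaskPipeline

# Load the tokenizer and the masked-language model from the Hugging Face Hub.
tokenizer_hf = AutoTokenizer.from_pretrained("PlanTL-GOB-ES/longformer-base-4096-bne-es")
model = AutoModelForMaskedLM.from_pretrained("PlanTL-GOB-ES/longformer-base-4096-bne-es")
model.eval()

# Build a fill-mask pipeline and query it with a masked Spanish sentence.
pipeline = FillMaskPipeline(model, tokenizer_hf)
text = "Hay base legal dentro del marco <mask> actual."  # illustrative example
res_hf = pipeline(text)

# Print the top predicted tokens for the <mask> position.
pprint([r["token_str"] for r in res_hf])
```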

## Limitations and bias

At the time of submission, no measures have been taken to estimate the bias and toxicity embedded in the model. However, we are well aware that our models may be biased since the corpora have been collected using crawling techniques on multiple web sources. We intend to conduct research in these areas in the future, and if completed, this model card will be updated.

## Training

### Training corpora and preprocessing

The [National Library of Spain (Biblioteca Nacional de España)](http://www.bne.es/en/Inicio/index.html) crawls all .es domains once a year. The training corpus consists of 59TB of WARC files from these crawls, carried out from 2009 to 2019.

To obtain a high-quality training corpus, the corpus was preprocessed with a pipeline of operations including, among others, sentence splitting, language detection, filtering of badly formed sentences, and deduplication of repetitive content. Document boundaries were kept during the process. This resulted in 2TB of clean Spanish corpus. A further global deduplication across the corpus was applied, resulting in 570GB of text.
 
For this Longformer, we used a small random partition of 7.2GB containing documents with fewer than 4,096 tokens as the training split.
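As an illustration of that selection step, a document-length filter might look like the following sketch; the actual partitioning code was not released, so `keep_document` and the example documents are hypothetical:

```python
from transformers import AutoTokenizer

# The Longformer shares its tokenizer with the roberta-base-bne checkpoint.
tokenizer = AutoTokenizer.from_pretrained("PlanTL-GOB-ES/longformer-base-4096-bne-es")

def keep_document(text: str, max_tokens: int = 4096) -> bool:
    """Keep only documents that fit within the Longformer context window."""
    n_tokens = len(tokenizer(text, add_special_tokens=True)["input_ids"])
    return n_tokens < max_tokens

# Example: filter an in-memory list of documents into the training split.
docs = ["Primer documento de ejemplo.", "Segundo documento de ejemplo."]
train_split = [d for d in docs if keep_document(d)]
```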

### Tokenization and pre-training

The training corpus was tokenized using the byte-level version of Byte-Pair Encoding (BPE) used in the original [RoBERTa](https://arxiv.org/abs/1907.11692) model, with a vocabulary size of 50,262 tokens. The pre-training consists of masked language model training following the approach employed for RoBERTa base. Training lasted a total of 40 hours on 8 computing nodes, each with 2 AMD MI50 GPUs with 32GB of VRAM.
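For reference, the tokenizer can be inspected directly; a small sketch, assuming the same Hub identifier as above (the example sentence is arbitrary):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("PlanTL-GOB-ES/longformer-base-4096-bne-es")

# Vocabulary size; should match the 50,262 figure quoted above.
print(len(tokenizer))

# Byte-level BPE splits words into subword pieces.
print(tokenizer.tokenize("La Biblioteca Nacional de España rastrea los dominios .es"))
```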

## Additional information

### Author
Text Mining Unit (TeMU) at the Barcelona Supercomputing Center (bsc-temu@bsc.es)

### Contact information
For further information, send an email to <plantl-gob-es@bsc.es>

### Copyright
Copyright by the Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA) (2022)

### Licensing information
[Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0)

### Funding
This work was funded by the Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA) within the framework of the Plan-TL.

### Disclaimer

<details>
<summary>Click to expand</summary>

The models published in this repository are intended for a generalist purpose and are available to third parties. These models may have bias and/or other undesirable distortions.

When third parties deploy or provide systems and/or services to other parties using any of these models (or using systems based on these models), or become users of the models themselves, they should note that it is their responsibility to mitigate the risks arising from their use and, in any event, to comply with applicable regulations, including regulations regarding the use of Artificial Intelligence.

In no event shall the owner of the models (SEDIA – State Secretariat for Digitalization and Artificial Intelligence) or the creator (BSC – Barcelona Supercomputing Center) be liable for any results arising from the use made by third parties of these models.
</details>