mmarimon committed on
Commit
1366758
1 Parent(s): f04b02b

Update README.md

Files changed (1)
  1. README.md +9 -9
README.md CHANGED
@@ -21,19 +21,19 @@ widget:
 <details>
 <summary>Click to expand</summary>
 
-- [Model Description](#model-description)
-- [Intended Uses and Limitations](#intended-use)
+- [Model description](#model-description)
+- [Intended Uses and limitations](#intended-use)
 - [How to Use](#how-to-use)
 - [Limitations and bias](#limitations-and-bias)
 - [Training](#training)
 - [Evaluation](#evaluation)
-- [Additional Information](#additional-information)
+- [Additional information](#additional-information)
 - [Author](#author)
-- [Contact Information](#contact-information)
+- [Contact information](#contact-information)
 - [Copyright](#copyright)
-- [Licensing Information](#licensing-information)
+- [Licensing information](#licensing-information)
 - [Funding](#funding)
-- [Citation Information](#citation-information)
+- [Citation information](#citation-information)
 - [Disclaimer](#disclaimer)
 
 </details>
@@ -41,7 +41,7 @@ widget:
 ## Model description
 Biomedical pretrained language model for Spanish. This model is a [RoBERTa-based](https://github.com/pytorch/fairseq/tree/master/examples/roberta) model trained on a **biomedical-clinical** corpus in Spanish collected from several sources.
 
-## Intended uses & limitations
+## Intended uses and limitations
 The model is ready-to-use only for masked language modelling to perform the Fill Mask task (try the inference API or read the next section). However, it is intended to be fine-tuned on downstream tasks such as Named Entity Recognition or Text Classification.
 
 ## How to use
@@ -148,7 +148,7 @@ The evaluation results are compared against the [mBERT](https://huggingface.co/b
 ### Author
 Text Mining Unit (TeMU) at the Barcelona Supercomputing Center (bsc-temu@bsc.es)
 
-### Contact Information
+### Contact information
 For further information, send an email to <plantl-gob-es@bsc.es>
 
 ### Copyright
@@ -160,7 +160,7 @@ Copyright by the Spanish State Secretariat for Digitalization and Artificial Int
 ### Funding
 This work was funded by the Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA) within the framework of the Plan-TL.
 
-### Citation Information
+### Citation information
 If you use our models, please cite our latest preprint:
 
 ```bibtex
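
The "Intended uses and limitations" section touched by this diff says the model is ready to use for masked language modelling (the Fill Mask task). As a quick illustration of that usage, here is a minimal fill-mask sketch with the transformers pipeline; the model ID below is an assumption based on the Plan-TL naming in this README, so substitute this repo's actual ID:

```python
# Minimal fill-mask sketch for the model this README describes.
# ASSUMPTION: the model ID is illustrative; replace it with this repo's actual ID.
from transformers import pipeline

model_id = "PlanTL-GOB-ES/roberta-base-biomedical-clinical-es"  # hypothetical ID
unmasker = pipeline("fill-mask", model=model_id)

# RoBERTa-style tokenizers use "<mask>" as the mask token.
for pred in unmasker("El médico le recetó un <mask> para el dolor."):
    print(f"{pred['token_str']!r}  score={pred['score']:.3f}")
```

The fine-tuning path mentioned in the same section follows the usual transformers recipe: AutoModelForTokenClassification for Named Entity Recognition and AutoModelForSequenceClassification for Text Classification.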