ccasimiro committed on
Commit 8287103
1 Parent(s): 8f603cb

Update README.md

Files changed (1)
  1. README.md +25 -1
README.md CHANGED
@@ -45,7 +45,31 @@ F1 Score: 0.8913
 For evaluation details visit our [GitHub repository](https://github.com/PlanTL-GOB-ES/lm-biomedical-clinical-es).
 
 ## Citing
- To be announced soon!
+ If you use these models, please cite our work:
+ 
+ ```bibtex
+ @inproceedings{carrino-etal-2022-pretrained,
+     title = "Pretrained Biomedical Language Models for Clinical {NLP} in {S}panish",
+     author = "Carrino, Casimiro Pio and
+       Llop, Joan and
+       P{\`a}mies, Marc and
+       Guti{\'e}rrez-Fandi{\~n}o, Asier and
+       Armengol-Estap{\'e}, Jordi and
+       Silveira-Ocampo, Joaqu{\'\i}n and
+       Valencia, Alfonso and
+       Gonzalez-Agirre, Aitor and
+       Villegas, Marta",
+     booktitle = "Proceedings of the 21st Workshop on Biomedical Language Processing",
+     month = may,
+     year = "2022",
+     address = "Dublin, Ireland",
+     publisher = "Association for Computational Linguistics",
+     url = "https://aclanthology.org/2022.bionlp-1.19",
+     doi = "10.18653/v1/2022.bionlp-1.19",
+     pages = "193--199",
+     abstract = "This work presents the first large-scale biomedical Spanish language models trained from scratch, using large biomedical corpora consisting of a total of 1.1B tokens and an EHR corpus of 95M tokens. We compared them against general-domain and other domain-specific models for Spanish on three clinical NER tasks. As main results, our models are superior across the NER tasks, rendering them more convenient for clinical NLP applications. Furthermore, our findings indicate that when enough data is available, pre-training from scratch is better than continual pre-training when tested on clinical tasks, raising an exciting research question about which approach is optimal. Our models and fine-tuning scripts are publicly available at HuggingFace and GitHub.",
+ }
+ ```
 
 ## Funding
 This work was partially funded by the Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA) within the framework of the Plan-TL, and the Future of Computing Center, a Barcelona Supercomputing Center and IBM initiative (2020).