Commit 1973820 (verified), committed by UMCU · Parent(s): 9e8a98d

Upload README.md with huggingface_hub

Files changed (1): README.md ADDED (+96, -0)
---
id: sap_umls_MedRoBERTa.nl_meantoken
name: sap_umls_MedRoBERTa.nl_meantoken
description: MedRoBERTa.nl with continued pre-training on hard medical term pairs
  from the UMLS ontology, using the multi-similarity loss function
license: gpl-3.0
language: nl
tags:
- bionlp
- lexical semantic
- biology
- embedding
- biomedical
- science
- entity linking
pipeline_tag: feature-extraction
---

# Model Card for sap_umls_MedRoBERTa.nl_meantoken

The model was trained on medical entity triplets (anchor, term, synonym) using the multi-similarity loss, following the SapBERT self-alignment pretraining approach.

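The card does not ship the training script itself, but the objective can be sketched with the multi-similarity loss implementation from `pytorch_metric_learning`. In this minimal sketch the base-model identifier, the toy batch, and the hyperparameters are illustrative assumptions rather than the actual training configuration:

```python
# Rough sketch of the training objective (not the project's training code).
# Terms that share a UMLS CUI get the same integer label; the multi-similarity
# loss pulls their embeddings together and pushes other terms apart.
import torch
from transformers import AutoTokenizer, AutoModel
from pytorch_metric_learning import losses  # pip install pytorch-metric-learning

base_model = "CLTL/MedRoBERTa.nl"                 # assumed base-model identifier
tokenizer = AutoTokenizer.from_pretrained(base_model)
encoder = AutoModel.from_pretrained(base_model)

# toy batch: two Dutch terms per concept, labels index the shared CUI
terms = ["hartinfarct", "myocardinfarct", "hoge koorts", "pyrexie"]
labels = torch.tensor([0, 0, 1, 1])

toks = tokenizer(terms, padding=True, truncation=True, max_length=25, return_tensors="pt")
embeddings = encoder(**toks).last_hidden_state.mean(1)   # mean-token pooling

loss_fn = losses.MultiSimilarityLoss(alpha=2, beta=50, base=0.5)  # illustrative hyperparameters
loss = loss_fn(embeddings, labels)
loss.backward()
```
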
### Expected input and output
The input should be a string containing a biomedical entity name, e.g. "covid infection" or "Hydroxychloroquine". The mean of the token embeddings from the last layer (mean-token pooling) is regarded as the output.

#### Extracting embeddings from sap_umls_MedRoBERTa.nl_meantoken

The following script converts a list of strings (entity names) into embeddings.
```python
import numpy as np
import torch
from tqdm.auto import tqdm
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("UMCU/sap_umls_MedRoBERTa.nl_meantoken")
model = AutoModel.from_pretrained("UMCU/sap_umls_MedRoBERTa.nl_meantoken").cuda()

# replace with your own list of entity names
all_names = ["covid-19", "Coronavirus infection", "high fever", "Tumor of posterior wall of oropharynx"]

bs = 128  # batch size during inference
all_embs = []
for i in tqdm(np.arange(0, len(all_names), bs)):
    toks = tokenizer.batch_encode_plus(all_names[i:i+bs],
                                       padding="max_length",
                                       max_length=25,
                                       truncation=True,
                                       return_tensors="pt")
    toks_cuda = {k: v.cuda() for k, v in toks.items()}
    # mean-token pooling over the last hidden states
    mean_rep = model(**toks_cuda)[0].mean(1)
    all_embs.append(mean_rep.cpu().detach().numpy())

all_embs = np.concatenate(all_embs, axis=0)
```
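
Since the card lists entity linking among the intended uses, a natural follow-up is nearest-neighbour retrieval over the embeddings. The snippet below continues from the script above and is a minimal sketch; the choice of query and the cosine-similarity ranking are illustrative, not part of the original card:

```python
# Continues from the script above: rank all names by cosine similarity to a query.
import numpy as np

# L2-normalise so that a dot product equals cosine similarity
normed = all_embs / np.linalg.norm(all_embs, axis=1, keepdims=True)

query = "covid-19"                       # illustrative query mention
q = normed[all_names.index(query)]

scores = normed @ q                      # cosine similarity to every name
for idx in np.argsort(-scores)[:3]:      # the query itself will rank first
    print(f"{all_names[idx]}\t{scores[idx]:.3f}")
```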


# Data description

Hard Dutch UMLS synonym pairs (terms referring to the same CUI). The Dutch UMLS was extended with matching Dutch SNOMED CT terms, and English medication names were included.

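To illustrate what "terms referring to the same CUI" means in practice, synonym pairs can be derived from a UMLS `MRCONSO.RRF` export roughly as follows. This is a hypothetical sketch: the file path, the Dutch-language filter, and the pair construction are assumptions, not the project's actual preprocessing:

```python
# Hypothetical sketch: group Dutch UMLS terms by CUI and emit synonym pairs.
# MRCONSO.RRF is pipe-separated; field 0 is the CUI, field 1 the language (LAT),
# and field 14 the term string (STR).
from collections import defaultdict
from itertools import combinations

terms_by_cui = defaultdict(set)
with open("MRCONSO.RRF", encoding="utf-8") as f:     # path is an assumption
    for line in f:
        fields = line.rstrip("\n").split("|")
        cui, lang, term = fields[0], fields[1], fields[14]
        if lang == "DUT":                            # keep Dutch entries only
            terms_by_cui[cui].add(term)

pairs = [(cui, a, b)
         for cui, terms in terms_by_cui.items()
         for a, b in combinations(sorted(terms), 2)]
print(len(pairs), pairs[:3])
```
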

# Acknowledgement

This model is part of the [DT4H project](https://www.datatools4heart.eu/).

# DOI and reference

For more details about training and evaluation, see the SapBERT [GitHub repository](https://github.com/cambridgeltl/sapbert).

### Citation
```bibtex
@inproceedings{liu-etal-2021-self,
    title = "Self-Alignment Pretraining for Biomedical Entity Representations",
    author = "Liu, Fangyu and
      Shareghi, Ehsan and
      Meng, Zaiqiao and
      Basaldella, Marco and
      Collier, Nigel",
    booktitle = "Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
    month = jun,
    year = "2021",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "https://www.aclweb.org/anthology/2021.naacl-main.334",
    pages = "4228--4238",
    abstract = "Despite the widespread success of self-supervised learning via masked language models (MLM), accurately capturing fine-grained semantic relationships in the biomedical domain remains a challenge. This is of paramount importance for entity-level tasks such as entity linking where the ability to model entity relations (especially synonymy) is pivotal. To address this challenge, we propose SapBERT, a pretraining scheme that self-aligns the representation space of biomedical entities. We design a scalable metric learning framework that can leverage UMLS, a massive collection of biomedical ontologies with 4M+ concepts. In contrast with previous pipeline-based hybrid systems, SapBERT offers an elegant one-model-for-all solution to the problem of medical entity linking (MEL), achieving a new state-of-the-art (SOTA) on six MEL benchmarking datasets. In the scientific domain, we achieve SOTA even without task-specific supervision. With substantial improvement over various domain-specific pretrained MLMs such as BioBERT, SciBERT and PubMedBERT, our pretraining scheme proves to be both effective and robust.",
}
```