Pre-trained word embeddings learned from the text of published clinical case reports. The embeddings have 300 dimensions and were trained with the GloVe algorithm on clinical case reports in the PMC Open Access Subset. See the paper here: https://pubmed.ncbi.nlm.nih.gov/34920127/

Citation:

@article{flamholz2022word,
  title={Word embeddings trained on published case reports are lightweight, effective for clinical tasks, and free of protected health information},
  author={Flamholz, Zachary N and Crane-Droesch, Andrew and Ungar, Lyle H and Weissman, Gary E},
  journal={Journal of Biomedical Informatics},
  volume={125},
  pages={103971},
  year={2022},
  publisher={Elsevier}
}

Quick start

The word embeddings are distributed in a format compatible with the gensim Python package.

First download the files from this archive. Then load the embeddings into Python.


from gensim.models import KeyedVectors # KeyedVectors is used to load the GloVe vectors

# Load the model
model = KeyedVectors.load_word2vec_format('gl_300_cr.txt')

# Return the 300-dimensional vector representation of each word
model.get_vector('diabetes')
model.get_vector('cardiac_arrest')
model.get_vector('lymphangioleiomyomatosis')

# Try out cosine similarity
model.similarity('copd', 'chronic_obstructive_pulmonary_disease')
model.similarity('myocardial_infarction', 'heart_attack')
model.similarity('lymphangioleiomyomatosis', 'lam')
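
Beyond pairwise similarity, gensim's KeyedVectors also supports nearest-neighbor queries and vocabulary membership checks. The snippet below is a small illustrative sketch rather than part of the original documentation: the query terms are examples in the same lowercased, underscore-joined style as above and may or may not be in the vocabulary, which is why the membership check is included.

# Find the terms closest to a query term in the embedding space
model.most_similar('sepsis', topn=10)

# Multi-word clinical concepts appear as underscore-joined tokens;
# check that a term is in the vocabulary before querying it
if 'acute_kidney_injury' in model:
    print(model.most_similar('acute_kidney_injury', topn=5))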