lsacy committed
Commit b028793
Parent: e9bc418

Upload 5 files

Files changed (5)
  1. README.md +54 -0
  2. config.json +18 -0
  3. flax_model.msgpack +3 -0
  4. pytorch_model.bin +3 -0
  5. vocab.txt +0 -0
README.md ADDED
@@ -0,0 +1,54 @@
+ ---
+ language: "en"
+ tags:
+ - bert
+ - medical
+ - clinical
+ thumbnail: "https://core.app.datexis.com/static/paper.png"
+ ---
+
+ # CORe Model - BioBERT + Clinical Outcome Pre-Training
+
+ ## Model description
+
+ The CORe (_Clinical Outcome Representations_) model is introduced in the paper [Clinical Outcome Prediction from Admission Notes using Self-Supervised Knowledge Integration](https://www.aclweb.org/anthology/2021.eacl-main.75.pdf).
+ It is based on BioBERT and further pre-trained on clinical notes, disease descriptions and medical articles with a specialised _Clinical Outcome Pre-Training_ objective.
+
+ #### How to use CORe
+
+ You can load the model via the transformers library:
+ ```python
+ from transformers import AutoTokenizer, AutoModel
+ tokenizer = AutoTokenizer.from_pretrained("bvanaken/CORe-clinical-outcome-biobert-v1")
+ model = AutoModel.from_pretrained("bvanaken/CORe-clinical-outcome-biobert-v1")
+ ```
+ From there, you can fine-tune it on clinical tasks that benefit from patient outcome knowledge.
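+
+ For instance, a minimal sketch of setting CORe up for outcome classification (the task, label count and example note below are illustrative assumptions, not part of this repository):
+ ```python
+ from transformers import AutoTokenizer, AutoModelForSequenceClassification
+
+ tokenizer = AutoTokenizer.from_pretrained("bvanaken/CORe-clinical-outcome-biobert-v1")
+ # Pre-trained encoder plus a freshly initialised classification head,
+ # e.g. for a binary outcome such as in-hospital mortality
+ model = AutoModelForSequenceClassification.from_pretrained(
+     "bvanaken/CORe-clinical-outcome-biobert-v1", num_labels=2
+ )
+
+ # Hypothetical admission note; real inputs would come from your clinical data
+ note = "CHIEF COMPLAINT: Chest pain. HISTORY: 65 yo male with hypertension."
+ inputs = tokenizer(note, truncation=True, max_length=512, return_tensors="pt")
+ logits = model(**inputs).logits  # shape (1, 2); train with your usual loop or Trainer
+ ```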
+
+ ### Pre-Training Data
+
+ The model is based on [BioBERT](https://huggingface.co/dmis-lab/biobert-v1.1) pre-trained on PubMed data.
+ The _Clinical Outcome Pre-Training_ included discharge summaries from the MIMIC III training set (specified [here](https://github.com/bvanaken/clinical-outcome-prediction/blob/master/tasks/mimic_train.csv)), medical transcriptions from [MTSamples](https://mtsamples.com/) and clinical notes from the i2b2 challenges 2006-2012. It further includes ~10k case reports from PubMed Central (PMC), disease articles from Wikipedia and article sections from the [MedQuAD](https://github.com/abachaa/MedQuAD) dataset extracted from NIH websites.
31
+
32
+ ### More Information
33
+
34
+ For all the details about CORe and contact info, please visit [CORe.app.datexis.com](http://core.app.datexis.com/).
35
+
36
+ ### Cite
37
+
38
+ ```bibtex
39
+ @inproceedings{vanaken21,
40
+ author = {Betty van Aken and
41
+ Jens-Michalis Papaioannou and
42
+ Manuel Mayrdorfer and
43
+ Klemens Budde and
44
+ Felix A. Gers and
45
+ Alexander Löser},
46
+ title = {Clinical Outcome Prediction from Admission Notes using Self-Supervised
47
+ Knowledge Integration},
48
+ booktitle = {Proceedings of the 16th Conference of the European Chapter of the
49
+ Association for Computational Linguistics: Main Volume, {EACL} 2021,
50
+ Online, April 19 - 23, 2021},
51
+ publisher = {Association for Computational Linguistics},
52
+ year = {2021},
53
+ }
54
+ ```
config.json ADDED
@@ -0,0 +1,18 @@
+ {
+   "attention_probs_dropout_prob": 0.1,
+   "hidden_act": "gelu",
+   "hidden_dropout_prob": 0.1,
+   "hidden_size": 768,
+   "initializer_range": 0.02,
+   "intermediate_size": 3072,
+   "language": "english",
+   "layer_norm_eps": 1e-12,
+   "max_position_embeddings": 512,
+   "model_type": "bert",
+   "name": "Bert",
+   "num_attention_heads": 12,
+   "num_hidden_layers": 12,
+   "pad_token_id": 0,
+   "type_vocab_size": 2,
+   "vocab_size": 28996
+ }
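This is a standard BERT-base configuration (12 layers, 12 heads, hidden size 768) with the cased vocabulary size of 28,996 inherited from BioBERT; the "language" and "name" keys are nonstandard extras that transformers simply carries along as attributes. A quick sanity-check sketch, assuming the model id from the README above:

```python
from transformers import AutoConfig

config = AutoConfig.from_pretrained("bvanaken/CORe-clinical-outcome-biobert-v1")
assert config.model_type == "bert"
print(config.num_hidden_layers, config.num_attention_heads, config.hidden_size)  # 12 12 768
print(config.vocab_size)  # 28996
```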
flax_model.msgpack ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:277cd5a458f41be8e9bb108cab0c7dd6feebccd0f3389ec7c42f4543a17e198d
+ size 433248237
pytorch_model.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:8f84af74e6d3aad1ea53de52df6e24e9e56f4d0bc457ada858d633da9e7d2d44
+ size 433289285
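Both weight files are Git LFS pointers: the repository stores only the spec version, the sha256 object id and the byte size, while the ~433 MB binaries are fetched at checkout. If needed, a downloaded checkpoint can be verified against the pointer's oid; a sketch, with the local path as a placeholder:

```python
import hashlib

# Expected digest, copied from the pytorch_model.bin pointer above
EXPECTED = "8f84af74e6d3aad1ea53de52df6e24e9e56f4d0bc457ada858d633da9e7d2d44"

def sha256_of(path, chunk_size=1 << 20):
    """Hash the file in 1 MiB chunks so the large checkpoint is never fully in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

assert sha256_of("pytorch_model.bin") == EXPECTED
```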
vocab.txt ADDED
The diff for this file is too large to render.