Commit fa603a0 (parent: 40688e3): Update README.md
MeDa-We was trained on a Danish medical corpus of 123M tokens.
The embeddings were trained for 10 epochs using a window size of 5 and 10 negative samples.
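
The hyperparameters above describe skip-gram-style training with negative sampling. The README does not show MeDa-We's actual training code, so the following is only an illustrative NumPy sketch of that setup (window size 5, 10 negative samples, 10 epochs) on a toy corpus; the embedding dimension, learning rate, and the corpus itself are assumptions.

```python
# Minimal skip-gram-with-negative-sampling sketch. NOT the MeDa-We training
# code: toy corpus, assumed dimension (50) and learning rate (0.05); only the
# window size, negative-sample count, and epoch count come from the README.
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the tokenized Danish medical corpus.
corpus = "patienten blev behandlet med antibiotika og patienten fik det bedre".split()
vocab = sorted(set(corpus))
idx = {w: i for i, w in enumerate(vocab)}
tokens = np.array([idx[w] for w in corpus])

V, dim = len(vocab), 50                    # dim is an assumption
window, negatives, epochs = 5, 10, 10      # as stated in the README
lr = 0.05                                  # assumption

W_in = rng.normal(scale=0.1, size=(V, dim))   # target-word embeddings
W_out = rng.normal(scale=0.1, size=(V, dim))  # context-word embeddings

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

for _ in range(epochs):
    for pos, center in enumerate(tokens):
        lo, hi = max(0, pos - window), min(len(tokens), pos + window + 1)
        for ctx_pos in range(lo, hi):
            if ctx_pos == pos:
                continue
            # One positive context word plus `negatives` random negatives.
            samples = np.concatenate(
                ([tokens[ctx_pos]], rng.integers(0, V, negatives)))
            labels = np.zeros(len(samples))
            labels[0] = 1.0
            v = W_in[center]                 # (dim,)
            u = W_out[samples]               # (negatives + 1, dim)
            grad = (sigmoid(u @ v) - labels)[:, None]  # dL/dscore per sample
            W_in[center] -= lr * (grad * u).sum(axis=0)
            W_out[samples] -= lr * grad * v  # duplicates get one update (sketch)

# Each row of W_in is now a word vector.
print(W_in[idx["patienten"]].shape)  # (50,)
```

In the real setting this loop would run over the full corpus with subsampling and a unigram-based negative-sampling distribution, which is what off-the-shelf word2vec implementations provide.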

The development of the corpus and word embeddings is described further in our [paper](https://openreview.net/forum?id=cc9USd2ec-).

### Citing

If you find our model helpful, please consider citing this :)

```
@article{li2023comparative,
  title={A comparative study of pretrained language models for long clinical text},
  author={Li, Yikuan and Wehbe, Ramsey M and Ahmad, Faraz S and Wang, Hanyin and Luo, Yuan},
  journal={Journal of the American Medical Informatics Association},
  volume={30},
  number={2},
  pages={340--347},
  year={2023},
  publisher={Oxford University Press}
}
```