add citation
README.md CHANGED
@@ -17,6 +17,18 @@ This is a Japanese RoBERTa base model pre-trained on academic articles in medica
 
 This model is released under the [Creative Commons 4.0 International License](https://creativecommons.org/licenses/by-nc-sa/4.0/deed) (CC BY-NC-SA 4.0).
 
+#### Reference
+
+```
+@InProceedings{sugimoto_nlp2023_jmedroberta,
+    author = "杉本海人 and 壹岐太一 and 知田悠生 and 金沢輝一 and 相澤彰子",
+    title = "J{M}ed{R}o{BERT}a: 日本語の医学論文にもとづいた事前学習済み言語モデルの構築と評価",
+    booktitle = "言語処理学会第29回年次大会",
+    year = "2023",
+    url = ""
+}
+```
+
 ## Datasets used for pre-training
 
 - abstracts (train: 1.6GB (10M sentences), validation: 0.2GB (1.3M sentences))
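For orientation, below is a minimal sketch of querying a checkpoint like this one for masked-token prediction with the Hugging Face Transformers library. The repository ID is a placeholder, since this diff does not name it, and the example sentence is illustrative; it is not the authors' published usage snippet.

```python
# Minimal sketch: masked-token prediction with a JMedRoBERTa-style checkpoint.
# The model ID below is a placeholder; substitute the actual repository name.
from transformers import AutoModelForMaskedLM, AutoTokenizer

model_id = "<org>/jmedroberta-base"  # hypothetical ID, not taken from this diff

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForMaskedLM.from_pretrained(model_id)

# Fill a masked token in a Japanese medical sentence (illustrative example).
text = f"この患者は{tokenizer.mask_token}と診断された。"
inputs = tokenizer(text, return_tensors="pt")
outputs = model(**inputs)

# Locate the masked position and report the top-scoring token for it.
mask_index = (inputs["input_ids"][0] == tokenizer.mask_token_id).nonzero()[0].item()
predicted_id = outputs.logits[0, mask_index].argmax().item()
print(tokenizer.decode([predicted_id]))
```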