Files changed (1)
  1. README.md +26 -26
README.md CHANGED
@@ -1,46 +1,46 @@
  ---
- language: "en"
- tags:
- - fill-mask
- license: mit

  ---

- # ClinicalBERT - Bio + Clinical BERT Model

- The [Publicly Available Clinical BERT Embeddings](https://arxiv.org/abs/1904.03323) paper contains four unique clinicalBERT models: initialized with BERT-Base (`cased_L-12_H-768_A-12`) or BioBERT (`BioBERT-Base v1.0 + PubMed 200K + PMC 270K`) & trained on either all MIMIC notes or only discharge summaries.

- This model card describes the Bio+Clinical BERT model, which was initialized from [BioBERT](https://arxiv.org/abs/1901.08746) & trained on all MIMIC notes.

- ## Pretraining Data
- The `Bio_ClinicalBERT` model was trained on all notes from [MIMIC III](https://www.nature.com/articles/sdata201635), a database containing electronic health records from ICU patients at the Beth Israel Hospital in Boston, MA. For more details on MIMIC, see [here](https://mimic.physionet.org/). All notes from the `NOTEEVENTS` table were included (~880M words).

- ## Model Pretraining

- ### Note Preprocessing
- Each note in MIMIC was first split into sections using a rules-based section splitter (e.g., discharge summary notes were split into "History of Present Illness", "Family History", "Brief Hospital Course", etc. sections). Then each section was split into sentences using SciSpacy (`en_core_sci_md` tokenizer).

- ### Pretraining Procedures
- The model was trained using code from [Google's BERT repository](https://github.com/google-research/bert) on a GeForce GTX TITAN X 12 GB GPU. Model parameters were initialized with BioBERT (`BioBERT-Base v1.0 + PubMed 200K + PMC 270K`).

- ### Pretraining Hyperparameters
- We used a batch size of 32, a maximum sequence length of 128, and a learning rate of 5e-5 for pre-training our models. The models trained on all MIMIC notes were trained for 150,000 steps. The dup factor for duplicating input data with different masks was set to 5. All other default parameters were used (specifically, masked language model probability = 0.15 and max predictions per sequence = 20).
 
- ## How to use the model

- Load the model via the transformers library:
  ```
- from transformers import AutoTokenizer, AutoModel
- tokenizer = AutoTokenizer.from_pretrained("emilyalsentzer/Bio_ClinicalBERT")
- model = AutoModel.from_pretrained("emilyalsentzer/Bio_ClinicalBERT")
  ```

- ## More Information

- Refer to the original paper, [Publicly Available Clinical BERT Embeddings](https://arxiv.org/abs/1904.03323) (NAACL Clinical NLP Workshop 2019) for additional details and performance on NLI and NER tasks.

- ## Questions?

- Post a Github issue on the [clinicalBERT repo](https://github.com/EmilyAlsentzer/clinicalBERT) or email emilya@mit.edu with any questions.
 
 
  ---
+ language: "en"
+ tags:
+ - fill-mask
+ license: mit

  ---

+ # ClinicalBERT - Bio + Clinical BERT Model

+ The [Publicly Available Clinical BERT Embeddings](https://arxiv.org/abs/1904.03323) paper contains four unique clinicalBERT models: initialized with BERT-Base (`cased_L-12_H-768_A-12`) or BioBERT (`BioBERT-Base v1.0 + PubMed 200K + PMC 270K`) & trained on either all MIMIC notes or only discharge summaries.

+ This model card describes the Bio+Clinical BERT model, which was initialized from [BioBERT](https://arxiv.org/abs/1901.08746) & trained on all MIMIC notes.

+ ## Pretraining Data
+ The `Bio_ClinicalBERT` model was trained on all notes from [MIMIC III](https://www.nature.com/articles/sdata201635), a database containing electronic health records from ICU patients at the Beth Israel Hospital in Boston, MA. For more details on MIMIC, see [here](https://mimic.physionet.org/). All notes from the `NOTEEVENTS` table were included (~880M words).

+ ## Model Pretraining

+ ### Note Preprocessing
+ Each note in MIMIC was first split into sections using a rules-based section splitter (e.g., discharge summary notes were split into "History of Present Illness", "Family History", "Brief Hospital Course", etc. sections). Then each section was split into sentences using SciSpacy (`en_core_sci_md` tokenizer).
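
To make this concrete, here is a minimal sketch of the sentence-splitting step, assuming `scispacy` and its `en_core_sci_md` model are installed; the example text and variable names are illustrative only and are not taken from MIMIC.

```
import spacy

# Load the SciSpacy pipeline used for sentence splitting
# (assumes the en_core_sci_md model package is installed).
nlp = spacy.load("en_core_sci_md")

# Illustrative section text; real input would be one section of a MIMIC note.
section_text = (
    "Pt is a 67 yo male with a history of CHF who presented with shortness of breath. "
    "He was started on IV furosemide and his symptoms improved."
)

# Split the section into sentences.
sentences = [sent.text for sent in nlp(section_text).sents]
print(sentences)
```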

+ ### Pretraining Procedures
+ The model was trained using code from [Google's BERT repository](https://github.com/google-research/bert) on a GeForce GTX TITAN X 12 GB GPU. Model parameters were initialized with BioBERT (`BioBERT-Base v1.0 + PubMed 200K + PMC 270K`).

+ ### Pretraining Hyperparameters
+ We used a batch size of 32, a maximum sequence length of 128, and a learning rate of 5e-5 for pre-training our models. The models trained on all MIMIC notes were trained for 150,000 steps. The dup factor for duplicating input data with different masks was set to 5. All other default parameters were used (specifically, masked language model probability = 0.15 and max predictions per sequence = 20).
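
For quick reference, the hyperparameters above can be collected in one place; the key names below are assumed to mirror the corresponding flags of the scripts in Google's BERT repository (`create_pretraining_data.py` and `run_pretraining.py`), and this is only a summary sketch, not the exact training invocation.

```
# Summary of the pretraining hyperparameters stated in this card.
# Key names are assumed to correspond to flags of Google's BERT scripts.
pretraining_hparams = {
    "train_batch_size": 32,
    "max_seq_length": 128,
    "learning_rate": 5e-5,
    "num_train_steps": 150_000,     # models trained on all MIMIC notes
    "dupe_factor": 5,               # duplicate input data with different masks
    "masked_lm_prob": 0.15,         # default
    "max_predictions_per_seq": 20,  # default
}
```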

+ ## How to use the model

+ Load the model via the transformers library:
  ```
+ from transformers import AutoTokenizer, AutoModel
+ tokenizer = AutoTokenizer.from_pretrained("emilyalsentzer/Bio_ClinicalBERT")
+ model = AutoModel.from_pretrained("emilyalsentzer/Bio_ClinicalBERT")
  ```
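
As a quick usage sketch, the snippet below encodes a short example sentence and extracts token-level embeddings; the sentence is illustrative only, and the 768-dimensional hidden size follows the BERT-Base architecture this model inherits.

```
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("emilyalsentzer/Bio_ClinicalBERT")
model = AutoModel.from_pretrained("emilyalsentzer/Bio_ClinicalBERT")

# Encode an illustrative clinical sentence.
inputs = tokenizer("The patient was admitted with shortness of breath.", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Token-level contextual embeddings: (batch_size, num_tokens, 768)
print(outputs.last_hidden_state.shape)
```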

+ ## More Information

+ Refer to the original paper, [Publicly Available Clinical BERT Embeddings](https://arxiv.org/abs/1904.03323) (NAACL Clinical NLP Workshop 2019), for additional details and performance on NLI and NER tasks.

+ ## Questions?

+ Post a Github issue on the [clinicalBERT repo](https://github.com/EmilyAlsentzer/clinicalBERT) or email emilya@mit.edu with any questions.