olivierb committed
Commit e5844c7
0 Parent(s)

initial commit
.gitattributes ADDED
@@ -0,0 +1,35 @@
+ *.7z filter=lfs diff=lfs merge=lfs -text
+ *.arrow filter=lfs diff=lfs merge=lfs -text
+ *.bin filter=lfs diff=lfs merge=lfs -text
+ *.bz2 filter=lfs diff=lfs merge=lfs -text
+ *.ckpt filter=lfs diff=lfs merge=lfs -text
+ *.ftz filter=lfs diff=lfs merge=lfs -text
+ *.gz filter=lfs diff=lfs merge=lfs -text
+ *.h5 filter=lfs diff=lfs merge=lfs -text
+ *.joblib filter=lfs diff=lfs merge=lfs -text
+ *.lfs.* filter=lfs diff=lfs merge=lfs -text
+ *.mlmodel filter=lfs diff=lfs merge=lfs -text
+ *.model filter=lfs diff=lfs merge=lfs -text
+ *.msgpack filter=lfs diff=lfs merge=lfs -text
+ *.npy filter=lfs diff=lfs merge=lfs -text
+ *.npz filter=lfs diff=lfs merge=lfs -text
+ *.onnx filter=lfs diff=lfs merge=lfs -text
+ *.ot filter=lfs diff=lfs merge=lfs -text
+ *.parquet filter=lfs diff=lfs merge=lfs -text
+ *.pb filter=lfs diff=lfs merge=lfs -text
+ *.pickle filter=lfs diff=lfs merge=lfs -text
+ *.pkl filter=lfs diff=lfs merge=lfs -text
+ *.pt filter=lfs diff=lfs merge=lfs -text
+ *.pth filter=lfs diff=lfs merge=lfs -text
+ *.rar filter=lfs diff=lfs merge=lfs -text
+ *.safetensors filter=lfs diff=lfs merge=lfs -text
+ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
+ *.tar.* filter=lfs diff=lfs merge=lfs -text
+ *.tar filter=lfs diff=lfs merge=lfs -text
+ *.tflite filter=lfs diff=lfs merge=lfs -text
+ *.tgz filter=lfs diff=lfs merge=lfs -text
+ *.wasm filter=lfs diff=lfs merge=lfs -text
+ *.xz filter=lfs diff=lfs merge=lfs -text
+ *.zip filter=lfs diff=lfs merge=lfs -text
+ *.zst filter=lfs diff=lfs merge=lfs -text
+ *tfevents* filter=lfs diff=lfs merge=lfs -text
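The rules above route large binary formats (model weights, archives, tensor dumps) through Git LFS. As a rough illustration of which filenames such patterns capture, here is a sketch using Python's `fnmatch` (note this is only an approximation: gitattributes matching differs from `fnmatch` for `**` and path-relative rules):

```python
# Approximate check of which filenames the LFS patterns would capture.
# gitattributes matching is not identical to fnmatch; this is a sketch.
from fnmatch import fnmatch

lfs_patterns = ["*.safetensors", "*.bin", "*.onnx", "*tfevents*", "*.tar.*"]

def tracked_by_lfs(filename):
    return any(fnmatch(filename, p) for p in lfs_patterns)

print(tracked_by_lfs("model.safetensors"))  # True  -> stored via LFS
print(tracked_by_lfs("config.json"))        # False -> stored as plain text
```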
README.md ADDED
@@ -0,0 +1,157 @@
+ ---
+ license: mit
+ language:
+ - fr
+ metrics:
+ - seqeval
+ library_name: transformers
+ pipeline_tag: token-classification
+ tags:
+ - medical
+ - biomedical
+ - medkit-lib
+ widget:
+ - text: >-
+     La radiographie et la tomodensitométrie ont montré des micronodules diffus
+   example_title: example 1
+ - text: >-
+     Elle souffre d'asthme mais n'a pas besoin d'Allegra
+   example_title: example 2
+ ---
+
+
+ # DrBERT-CASM2
+
+ ## Model description
+
+ **DrBERT-CASM2** is a French named-entity recognition model fine-tuned from
+ [DrBERT](https://huggingface.co/Dr-BERT/DrBERT-4GB-CP-PubMedBERT), a model pretrained on French biomedical and clinical text.
+ It was trained with the medkit Trainer to detect three entity types: **problem**, **treatment**, and **test**.
+
+ - **Fine-tuned using:** medkit ([GitHub repo](https://github.com/TeamHeka/medkit))
+ - **Developed by:** @camila-ud, medkit, HeKA Research team
+ - **Dataset source:** annotated version from @aneuraz called 'corpusCasM2: A corpus of annotated clinical texts'
+   - The annotation was performed collaboratively by master's students from Université Paris Cité.
+   - The corpus contains documents from CAS:
+ ```
+ Natalia Grabar, Vincent Claveau, and Clément Dalloux. 2018. CAS: French Corpus with Clinical Cases.
+ In Proceedings of the Ninth International Workshop on Health Text Mining and Information Analysis,
+ pages 122–128, Brussels, Belgium. Association for Computational Linguistics.
+ ```
+ # Intended uses & limitations
+
+ ## Limitations and bias
+
+ This model was trained for the **development and test phases** only.
+ It is limited by its training dataset and should be used with caution.
+ Results are not guaranteed, and the model should be used only during data-exploration stages,
+ for example to surface candidate entities in the early analysis of French medical documents.
+
+ The maximum sequence length was reduced to **128 tokens** to minimize training time.
+
+ # How to use
+
+ ## Install medkit
+
+ First, install medkit with the following command:
+
+ ```
+ pip install 'medkit-lib[optional]'
+ ```
+
+ Please check the [documentation](https://medkit.readthedocs.io/en/latest/user_guide/install.html) for more info and examples.
+
+ ## Using the model
+
+ ```python
+ from medkit.core.text import TextDocument
+ from medkit.text.ner.hf_entity_matcher import HFEntityMatcher
+
+ matcher = HFEntityMatcher(model="medkit/DrBERT-CASM2")
+
+ test_doc = TextDocument("Elle souffre d'asthme mais n'a pas besoin d'Allegra")
+ detected_entities = matcher.run([test_doc.raw_segment])
+
+ # show the detected entities with their labels
+ msg = "|".join(f"'{entity.label}':{entity.text}" for entity in detected_entities)
+ print(f"Text: '{test_doc.text}'\n{msg}")
+ ```
+ ```
+ Text: 'Elle souffre d'asthme mais n'a pas besoin d'Allegra'
+ 'problem':asthme|'treatment':Allegra
+ ```
+
+ # Training data
+
+ This model was fine-tuned on **CASM2**, an internal corpus of clinical cases in French annotated by master's students.
+ The corpus contains more than 9,000 medkit documents (roughly one sentence each) with entities to detect.
+
+ **Number of documents (~ sentences) by split**
+
+ | Split      | # medkit docs |
+ | ---------- | ------------- |
+ | Train      | 5824          |
+ | Validation | 1457          |
+ | Test       | 1821          |
+
+ **Number of examples per entity type**
+
+ | Split      | treatment | test | problem |
+ | ---------- | --------- | ---- | ------- |
+ | Train      | 3258      | 3990 | 6808    |
+ | Validation | 842       | 1007 | 1745    |
+ | Test       | 994       | 1289 | 2113    |
+
+ ## Training procedure
+
+ This model was fine-tuned with the medkit trainer on CPU; training took about 3 hours.
+
+ # Model performances
+
+ Model performance computed on the CASM2 test dataset (using the medkit seqeval evaluator):
+
+ | Entity    | precision | recall | f1     |
+ | --------- | --------- | ------ | ------ |
+ | treatment | 0.7492    | 0.7666 | 0.7578 |
+ | test      | 0.7449    | 0.8240 | 0.7824 |
+ | problem   | 0.6884    | 0.7304 | 0.7088 |
+ | Overall   | 0.7188    | 0.7660 | 0.7416 |
+
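As a sanity check, each F1 score above is the harmonic mean of the corresponding precision and recall. A quick verification (values reproduced from the table; any difference past the third decimal is rounding):

```python
# Verify F1 = 2PR / (P + R) for the reported (precision, recall) pairs.
scores = {
    "treatment": (0.7492, 0.7666),
    "test":      (0.7449, 0.8240),
    "problem":   (0.6884, 0.7304),
    "Overall":   (0.7188, 0.7660),
}

for name, (p, r) in scores.items():
    f1 = 2 * p * r / (p + r)
    print(f"{name}: f1 = {f1:.4f}")
```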
+ ## How to evaluate using medkit
+ ```python
+ from medkit.text.metrics.ner import SeqEvalEvaluator
+ from medkit.text.ner.hf_entity_matcher import HFEntityMatcher
+
+ # load the matcher and get predicted entities by document
+ # (test_documents is a list of annotated medkit TextDocument objects)
+ matcher = HFEntityMatcher(model="medkit/DrBERT-CASM2")
+ predicted_entities = [matcher.run([doc.raw_segment]) for doc in test_documents]
+
+ evaluator = SeqEvalEvaluator(tagging_scheme="iob2")
+ evaluator.compute(test_documents, predicted_entities=predicted_entities)
+ ```
+ You can use a tokenizer from Hugging Face to evaluate by tokens instead of characters:
+ ```python
+ from transformers import AutoTokenizer
+
+ tokenizer_drbert = AutoTokenizer.from_pretrained("medkit/DrBERT-CASM2", use_fast=True)
+
+ evaluator = SeqEvalEvaluator(tokenizer=tokenizer_drbert, tagging_scheme="iob2")
+ evaluator.compute(test_documents, predicted_entities=predicted_entities)
+ ```
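For intuition, a seqeval-style evaluator with `tagging_scheme="iob2"` scores whole entity spans, not individual tokens. A minimal, dependency-free sketch of the IOB2 decoding step it relies on (illustrative only, not medkit's actual implementation):

```python
# Decode IOB2 tags into (label, start, end) spans -- the unit that a
# seqeval-style evaluator compares between reference and prediction.
def iob2_spans(tags):
    spans = []
    start, label = None, None
    for i, tag in enumerate(tags + ["O"]):  # trailing "O" flushes the last span
        inside = label is not None and tag == f"I-{label}"
        if not inside:
            if label is not None:
                spans.append((label, start, i))
            if tag.startswith("B-"):
                start, label = i, tag[2:]
            else:
                start, label = None, None
    return spans

tags = ["O", "B-problem", "I-problem", "O", "B-treatment"]
print(iob2_spans(tags))  # [('problem', 1, 3), ('treatment', 4, 5)]
```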
+
+ # Citation
+
+ ```
+ @online{medkit-lib,
+   author = {HeKA Research Team},
+   title = {medkit, A Python library for a learning health system},
+   url = {https://pypi.org/project/medkit-lib/},
+   urldate = {2023-07-24},
+ }
+ ```
+ ```
+ HeKA Research Team, "medkit, a Python library for a learning health system." https://pypi.org/project/medkit-lib/ (accessed Jul. 24, 2023).
+ ```
config.json ADDED
@@ -0,0 +1,43 @@
+ {
+   "_name_or_path": "dcariasvi/DrBERT-CASM2",
+   "architectures": [
+     "BertForTokenClassification"
+   ],
+   "attention_probs_dropout_prob": 0.1,
+   "classifier_dropout": null,
+   "hidden_act": "gelu",
+   "hidden_dropout_prob": 0.1,
+   "hidden_size": 768,
+   "id2label": {
+     "0": "O",
+     "1": "B-problem",
+     "2": "I-problem",
+     "3": "B-treatment",
+     "4": "I-treatment",
+     "5": "B-test",
+     "6": "I-test"
+   },
+   "initializer_range": 0.02,
+   "intermediate_size": 3072,
+   "label2id": {
+     "B-problem": 1,
+     "B-test": 5,
+     "B-treatment": 3,
+     "I-problem": 2,
+     "I-test": 6,
+     "I-treatment": 4,
+     "O": 0
+   },
+   "layer_norm_eps": 1e-12,
+   "max_position_embeddings": 512,
+   "model_type": "bert",
+   "num_attention_heads": 12,
+   "num_hidden_layers": 12,
+   "pad_token_id": 0,
+   "position_embedding_type": "absolute",
+   "torch_dtype": "float32",
+   "transformers_version": "4.26.1",
+   "type_vocab_size": 2,
+   "use_cache": true,
+   "vocab_size": 30522
+ }
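The `id2label`/`label2id` maps above define a 7-class IOB2 tagging scheme: `O` plus `B-`/`I-` tags for each of the three entity types. Decoding per-token class ids back to tags is a plain lookup (the mapping below is copied from the config; the predicted ids are made up for illustration):

```python
# id2label copied from config.json above.
id2label = {
    0: "O",
    1: "B-problem", 2: "I-problem",
    3: "B-treatment", 4: "I-treatment",
    5: "B-test", 6: "I-test",
}
label2id = {tag: i for i, tag in id2label.items()}

predicted_ids = [0, 1, 2, 0, 3]  # e.g. argmax over the 7 logits per token
tags = [id2label[i] for i in predicted_ids]
print(tags)  # ['O', 'B-problem', 'I-problem', 'O', 'B-treatment']
```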
history.json ADDED
@@ -0,0 +1 @@
+ [{"train": {"loss": 0.4125201638787985}, "eval": {"loss": 0.28105667685968394, "overall_precision": 0.6288554677542697, "overall_recall": 0.6964991530208922, "overall_f1-score": 0.6609511051574012, "overall_support": "3542", "overall_acc": 0.9027570584010166, "problem_precision": 0.6217303822937625, "problem_recall": 0.6821192052980133, "problem_f1-score": 0.6505263157894737, "problem_support": "1812", "test_precision": 0.6564245810055865, "test_recall": 0.7524012806830309, "test_f1-score": 0.7011437095972153, "test_support": "937", "treatment_precision": 0.610917537746806, "treatment_recall": 0.6633039092055486, "treatment_f1-score": 0.6360338573155987, "treatment_support": "793"}}, {"train": {"loss": 0.2762815960689264}, "eval": {"loss": 0.2743868621309166, "overall_precision": 0.6495202558635395, "overall_recall": 0.6880293619424054, "overall_f1-score": 0.6682204551686318, "overall_support": "3542", "overall_acc": 0.9052710094480358, "problem_precision": 0.633705475810739, "problem_recall": 0.6578366445916115, "problem_f1-score": 0.6455456268616301, "problem_support": "1812", "test_precision": 0.6851851851851852, "test_recall": 0.7502668089647813, "test_f1-score": 0.7162506367804381, "test_support": "937", "treatment_precision": 0.6414201183431952, "treatment_recall": 0.6834804539722572, "treatment_f1-score": 0.6617826617826619, "treatment_support": "793"}}, {"train": {"loss": 0.24771345957834906}, "eval": {"loss": 0.271967472341519, "overall_precision": 0.6427855711422845, "overall_recall": 0.7244494635798984, "overall_f1-score": 0.6811786567560393, "overall_support": "3542", "overall_acc": 0.9052710094480358, "problem_precision": 0.6335992023928215, "problem_recall": 0.7014348785871964, "problem_f1-score": 0.6657936092194865, "problem_support": "1812", "test_precision": 0.6627379873073436, "test_recall": 0.7801494130202775, "test_f1-score": 0.7166666666666668, "test_support": "937", "treatment_precision": 0.638731596828992, "treatment_recall": 
0.7112232030264817, "treatment_f1-score": 0.6730310262529833, "treatment_support": "793"}}, {"train": {"loss": 0.22672327848620155}, "eval": {"loss": 0.2861445434745806, "overall_precision": 0.6255808266079727, "overall_recall": 0.7221908526256352, "overall_f1-score": 0.670423273489713, "overall_support": "3542", "overall_acc": 0.8994695839549146, "problem_precision": 0.6085686465433301, "problem_recall": 0.6898454746136865, "problem_f1-score": 0.6466632177961718, "problem_support": "1812", "test_precision": 0.666970802919708, "test_recall": 0.7801494130202775, "test_f1-score": 0.719134284308903, "test_support": "937", "treatment_precision": 0.6144834930777423, "treatment_recall": 0.7276166456494325, "treatment_f1-score": 0.6662817551963048, "treatment_support": "793"}}, {"train": {"loss": 0.2031083636096723}, "eval": {"loss": 0.2810970079027238, "overall_precision": 0.6511976047904192, "overall_recall": 0.7368718238283456, "overall_f1-score": 0.6913907284768213, "overall_support": "3542", "overall_acc": 0.9046356152273606, "problem_precision": 0.6431761786600496, "problem_recall": 0.7152317880794702, "problem_f1-score": 0.6772929187353018, "problem_support": "1812", "test_precision": 0.663963963963964, "test_recall": 0.7865528281750267, "test_f1-score": 0.7200781631656081, "test_support": "937", "treatment_precision": 0.6534541336353341, "treatment_recall": 0.7276166456494325, "treatment_f1-score": 0.68854415274463, "treatment_support": "793"}}]
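Each entry in `history.json` corresponds to one training epoch. A small sketch of selecting the best epoch from such a log (the values below are the per-epoch `overall_f1-score` figures excerpted from the JSON above, rounded to four decimals):

```python
# Per-epoch overall eval F1, excerpted from history.json above.
overall_f1 = [0.6610, 0.6682, 0.6812, 0.6704, 0.6914]

# Epochs are 1-indexed; pick the one with the highest overall F1.
best_epoch = max(range(len(overall_f1)), key=lambda e: overall_f1[e]) + 1
print(f"best epoch: {best_epoch} (f1 = {overall_f1[best_epoch - 1]:.4f})")
# best epoch: 5 (f1 = 0.6914)
```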
model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:4ebaffebd7050c6fb415bf3ca942d36bcb5feab62c817138c1bf57b2eca6a72e
+ size 435615652
pytorch_model.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:3526e3c39fd476e1004854f34ad46f2088a51998aff0d7a25aa9e2f25c6c3146
+ size 435655729
special_tokens_map.json ADDED
@@ -0,0 +1,7 @@
+ {
+   "cls_token": "[CLS]",
+   "mask_token": "[MASK]",
+   "pad_token": "[PAD]",
+   "sep_token": "[SEP]",
+   "unk_token": "[UNK]"
+ }
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
tokenizer_config.json ADDED
@@ -0,0 +1,16 @@
+ {
+   "cls_token": "[CLS]",
+   "do_basic_tokenize": true,
+   "do_lower_case": true,
+   "mask_token": "[MASK]",
+   "model_max_length": 128,
+   "name_or_path": "Dr-BERT/DrBERT-4GB-CP-PubMedBERT",
+   "never_split": null,
+   "pad_token": "[PAD]",
+   "sep_token": "[SEP]",
+   "special_tokens_map_file": null,
+   "strip_accents": null,
+   "tokenize_chinese_chars": true,
+   "tokenizer_class": "BertTokenizer",
+   "unk_token": "[UNK]"
+ }
trainer_config.yml ADDED
@@ -0,0 +1,9 @@
+ learning_rate: 5.0e-06
+ nb_training_epochs: 5
+ dataloader_nb_workers: 0
+ batch_size: 4
+ seed: 0
+ gradient_accumulation_steps: 1
+ do_metrics_in_training: false
+ metric_to_track_lr: loss
+ log_step_interval: 100
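Given `batch_size: 4`, `gradient_accumulation_steps: 1`, and the 5,824 training documents reported in the README, the training schedule works out roughly as follows (a back-of-the-envelope sketch that assumes each document yields exactly one training example):

```python
import math

# Values from trainer_config.yml and the README's train split size.
batch_size = 4
grad_accum = 1
epochs = 5
n_train_docs = 5824

effective_batch = batch_size * grad_accum                     # optimizer sees 4 examples per step
steps_per_epoch = math.ceil(n_train_docs / effective_batch)   # 1456 optimizer steps per epoch
total_steps = steps_per_epoch * epochs                        # 7280 steps over 5 epochs
print(effective_batch, steps_per_epoch, total_steps)
```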
vocab.txt ADDED
The diff for this file is too large to render. See raw diff