---
language: nl
license: mit
pipeline_tag: text-classification
inference: false
---

# Regression Model for Eating Functioning Levels (ICF d550)

## Description
A fine-tuned regression model that assigns a functioning level to Dutch sentences describing eating functions. The model is based on a pre-trained Dutch medical language model ([link to be added]()): a RoBERTa model trained from scratch on clinical notes from the Amsterdam UMC. To detect sentences about eating functions in Dutch clinical text, use the [icf-domains](https://huggingface.co/CLTL/icf-domains) classification model.

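As a rough illustration of that two-step pipeline, the sketch below first filters sentences with the icf-domains model and then scores the remaining ones with this model. This is a hedged sketch, not an official recipe: the position of the eating (ETN) domain in the multi-label output (`ETN_INDEX`) and the second example sentence are assumptions to verify against the icf-domains model card.
```python
import numpy as np
from simpletransformers.classification import (
    ClassificationModel,
    MultiLabelClassificationModel,
)

domain_model = MultiLabelClassificationModel('roberta', 'CLTL/icf-domains', use_cuda=False)
level_model = ClassificationModel('roberta', 'CLTL/icf-levels-etn', use_cuda=False)

sentences = [
    'Sondevoeding is geïndiceerd',   # 'Tube feeding is indicated'
    'Patient is vandaag ontslagen',  # hypothetical filler: 'Patient was discharged today'
]

# Each prediction is a binary vector with one entry per ICF domain.
domain_preds, _ = domain_model.predict(sentences)

ETN_INDEX = 4  # assumed index of the eating (ETN) domain; check the icf-domains card
eating = [s for s, p in zip(sentences, domain_preds) if p[ETN_INDEX] == 1]

if eating:
    _, raw_outputs = level_model.predict(eating)
    levels = np.squeeze(raw_outputs)  # one functioning level per eating-related sentence
```
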
## Functioning levels
Level | Meaning
---|---
4 | Can eat independently (in culturally acceptable ways), good intake, eats according to her/his needs.
3 | Can eat independently but with adjustments, and/or somewhat reduced intake (>75% of her/his needs), and/or good intake can be achieved with proper advice.
2 | Reduced intake, and/or stimulus / feeding modules / nutrition drinks are needed (but not tube feeding / TPN).
1 | Intake is severely reduced (<50% of her/his needs), and/or tube feeding / TPN is needed.
0 | Cannot eat, and/or fully dependent on tube feeding / TPN.

The predictions generated by the model can sometimes fall outside the 0-4 scale (e.g. 4.2); this is normal for a regression model.

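If downstream use requires scores within the scale, one simple option (an illustrative suggestion, not part of the model itself) is to clip the raw predictions to the [0, 4] range:
```python
import numpy as np

# Example raw outputs, including one value slightly above the scale.
raw_outputs = np.array([[4.2], [0.89]])

# Illustrative post-processing: clamp predictions to the defined 0-4 scale.
clipped = np.clip(np.squeeze(raw_outputs), 0.0, 4.0)
print(clipped)  # [4.   0.89]
```
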
## Intended uses and limitations
- The model was fine-tuned (trained, validated and tested) on medical records from the Amsterdam UMC (the two academic medical centers of Amsterdam). It might perform differently on text from a different hospital or on text from non-hospital sources (e.g. GP records).
- The model was fine-tuned with the [Simple Transformers](https://simpletransformers.ai/) library. This library is based on Transformers, but the model cannot be used directly with the Transformers `pipeline` and classes; doing so would generate incorrect outputs. For this reason, the inference API on this page is disabled.

## How to use
To generate predictions with the model, use the [Simple Transformers](https://simpletransformers.ai/) library:
```python
import numpy as np
from simpletransformers.classification import ClassificationModel

model = ClassificationModel(
    'roberta',
    'CLTL/icf-levels-etn',
    use_cuda=False,
)

example = 'Sondevoeding is geïndiceerd'  # 'Tube feeding is indicated'
_, raw_outputs = model.predict([example])
predictions = np.squeeze(raw_outputs)
```
The prediction on the example is:
```
0.89
```
The raw outputs look like this:
```
[[0.8872931]]
```

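`predict` accepts a list, so several sentences can be scored in one call. A small usage sketch (the second sentence is a hypothetical example, not from the project data):
```python
sentences = [
    'Sondevoeding is geïndiceerd',      # 'Tube feeding is indicated'
    'Intake is goed, eet zelfstandig',  # hypothetical: 'Intake is good, eats independently'
]
_, raw_outputs = model.predict(sentences)
scores = np.squeeze(raw_outputs)  # one functioning level per sentence
```
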
## Training data
- The training data consists of clinical notes from medical records (in Dutch) of the Amsterdam UMC. Due to privacy constraints, the data cannot be released.
- The annotation guidelines used for the project can be found [here](https://github.com/cltl/a-proof-zonmw/tree/main/resources/annotation_guidelines).

## Training procedure
The default training parameters of Simple Transformers were used, including the following (an illustrative configuration sketch follows the list):
- Optimizer: AdamW
- Learning rate: 4e-5
- Num train epochs: 1
- Train batch size: 8

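A minimal sketch of what an equivalent Simple Transformers setup could look like, assuming a regression head (`regression=True`, `num_labels=1`); the base-model path and the tiny DataFrame are placeholders, not the project's actual data or training script:
```python
import pandas as pd
from simpletransformers.classification import ClassificationArgs, ClassificationModel

model_args = ClassificationArgs(
    regression=True,     # regression head instead of a classifier
    num_train_epochs=1,  # as listed above
    learning_rate=4e-5,  # as listed above
    train_batch_size=8,  # as listed above; the default optimizer is AdamW
)

# 'path/to/medical-roberta' is a placeholder for the pre-trained Dutch medical RoBERTa.
model = ClassificationModel('roberta', 'path/to/medical-roberta',
                            num_labels=1, args=model_args, use_cuda=False)

# Placeholder training data: 'labels' holds functioning levels as floats.
train_df = pd.DataFrame({'text': ['Sondevoeding is geïndiceerd'],
                         'labels': [1.0]})
model.train_model(train_df)
```
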
## Evaluation results
The evaluation is done on the sentence level (the unit of classification) and on the note level (the aggregated unit, which is meaningful for healthcare professionals).

| | Sentence-level | Note-level |
|---|---|---|
| mean absolute error | 0.59 | 0.50 |
| mean squared error | 0.65 | 0.47 |
| root mean squared error | 0.81 | 0.68 |

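For reference, the three metrics can be computed with scikit-learn and NumPy; the arrays below are made-up placeholders, not the project's evaluation data:
```python
import numpy as np
from sklearn.metrics import mean_absolute_error, mean_squared_error

# Placeholder gold labels and predictions (the real evaluation data cannot be released).
y_true = np.array([4.0, 3.0, 1.0, 0.0])
y_pred = np.array([3.6, 3.2, 1.4, 0.9])

mae = mean_absolute_error(y_true, y_pred)
mse = mean_squared_error(y_true, y_pred)
rmse = np.sqrt(mse)  # root mean squared error
print(mae, mse, rmse)
```
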
## Authors and references
### Authors
Jenia Kim, Piek Vossen

### References
TBD