---
language: nl
license: mit
pipeline_tag: text-classification
inference: false
---

# Regression Model for Attention Functioning Levels (ICF b140)

## Description
A fine-tuned regression model that assigns a functioning level to Dutch sentences describing attention functions. The model is based on a pre-trained Dutch medical language model ([link to be added]()): a RoBERTa model trained from scratch on clinical notes of the Amsterdam UMC. To detect sentences about attention functions in Dutch clinical text, use the [icf-domains](https://huggingface.co/CLTL/icf-domains) classification model first.

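The two models can be chained into a detect-then-grade pipeline. The sketch below is illustrative, not the authors' exact setup: it assumes that the icf-domains model is likewise loaded through Simple Transformers (as a multi-label classifier) and that ATT is the second label in its output vector; verify both on the icf-domains model card.
```
import numpy as np
from simpletransformers.classification import (
    ClassificationModel,
    MultiLabelClassificationModel,
)

# Step 1 (assumption: icf-domains is a Simple Transformers multi-label model):
# keep only sentences in which the ATT domain is detected.
domains_model = MultiLabelClassificationModel('roberta', 'CLTL/icf-domains', use_cuda=False)
ATT_INDEX = 1  # assumed position of ATT in the label vector; check the icf-domains card

sentences = ['Snel afgeleid, moeite aandacht te behouden.']  # "Easily distracted, trouble holding attention."
domain_preds, _ = domains_model.predict(sentences)
att_sentences = [s for s, labels in zip(sentences, domain_preds) if labels[ATT_INDEX] == 1]

# Step 2: assign an attention functioning level to the detected sentences.
levels_model = ClassificationModel('roberta', 'CLTL/icf-levels-att', use_cuda=False)
if att_sentences:
    _, raw_outputs = levels_model.predict(att_sentences)
    levels = np.squeeze(raw_outputs, axis=-1)  # one functioning level per sentence
```
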
## Functioning levels
| Level | Meaning |
|---|---|
| 4 | No problem with concentrating / directing / holding / dividing attention. |
| 3 | Slight problem with concentrating / directing / holding / dividing attention for a longer period of time or for complex tasks. |
| 2 | Can concentrate / direct / hold / divide attention only for a short time. |
| 1 | Can barely concentrate / direct / hold / divide attention. |
| 0 | Unable to concentrate / direct / hold / divide attention. |

The model's predictions can sometimes fall outside the 0–4 scale (e.g. 4.2); this is normal for a regression model.

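If a downstream application needs scores strictly within the scale, out-of-range predictions can simply be clipped; a minimal post-processing sketch (not part of the model itself):
```
import numpy as np

raw_levels = np.array([4.2, 2.89, -0.3])  # raw regression outputs
clipped = np.clip(raw_levels, 0, 4)       # force scores into the 0-4 range
print(clipped)                            # [4.   2.89 0.  ]
```
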
## Intended uses and limitations
- The model was fine-tuned (trained, validated and tested) on medical records from the Amsterdam UMC (the two academic medical centers of Amsterdam). It might perform differently on text from a different hospital or from non-hospital sources (e.g. GP records).
- The model was fine-tuned with the [Simple Transformers](https://simpletransformers.ai/) library. This library is based on Transformers, but the model cannot be used directly with the Transformers `pipeline` and classes; doing so would generate incorrect outputs. For this reason, the inference API on this page is disabled.

## How to use
To generate predictions with the model, use the [Simple Transformers](https://simpletransformers.ai/) library:
```
import numpy as np
from simpletransformers.classification import ClassificationModel

model = ClassificationModel(
    'roberta',
    'CLTL/icf-levels-att',
    use_cuda=False,
)

# "Easily distracted, trouble holding attention."
example = 'Snel afgeleid, moeite aandacht te behouden.'
_, raw_outputs = model.predict([example])
predictions = np.squeeze(raw_outputs)
```
|
43 |
+
The prediction on the example is:
|
44 |
+
```
|
45 |
+
2.89
|
46 |
+
```
|
47 |
+
The raw outputs look like this:
|
48 |
+
```
|
49 |
+
[[2.89226103]]
|
50 |
+
```
|
51 |
+
|
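`model.predict` accepts a list, so multiple sentences can be scored in one call; a small sketch (the second sentence is an invented example):
```
# Batch prediction: one regression score per input sentence.
examples = [
    'Snel afgeleid, moeite aandacht te behouden.',  # "Easily distracted, trouble holding attention."
    'Aandacht en concentratie zijn goed.',          # "Attention and concentration are good."
]
_, raw_outputs = model.predict(examples)
levels = np.squeeze(raw_outputs, axis=-1)           # one level per sentence, e.g. array([2.89, ...])
```
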
## Training data
- The training data consists of clinical notes from medical records (in Dutch) of the Amsterdam UMC. Due to privacy constraints, the data cannot be released.
- The annotation guidelines used for the project can be found [here](https://github.com/cltl/a-proof-zonmw/tree/main/resources/annotation_guidelines).

## Training procedure
The default training parameters of Simple Transformers were used, including the following (a fine-tuning sketch with these settings follows the list):
- Optimizer: AdamW
- Learning rate: 4e-5
- Num train epochs: 1
- Train batch size: 8

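For reference, a hedged sketch of a comparable fine-tuning run. This is not the authors' training script: the base checkpoint path and the training dataframe are placeholders, and `regression=True` with `num_labels=1` is the standard way to switch Simple Transformers into regression mode.
```
import pandas as pd
from simpletransformers.classification import ClassificationModel, ClassificationArgs

# Hypothetical training dataframe: one Dutch sentence per row, float level as label.
train_df = pd.DataFrame(
    {'text': ['Snel afgeleid, moeite aandacht te behouden.'], 'labels': [3.0]}
)

model_args = ClassificationArgs(
    regression=True,        # regression head instead of classification
    num_train_epochs=1,     # defaults listed above
    learning_rate=4e-5,
    train_batch_size=8,
)

# 'path/to/dutch-medical-roberta' is a placeholder for the (unreleased) base model.
model = ClassificationModel(
    'roberta',
    'path/to/dutch-medical-roberta',
    num_labels=1,           # single regression output
    args=model_args,
    use_cuda=False,
)
model.train_model(train_df)
```
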
## Evaluation results
The evaluation is done on the sentence level (the classification unit) and on the note level (the aggregated unit, which is the meaningful unit for healthcare professionals); an aggregation sketch follows the table.

| | Sentence-level | Note-level |
|---|---|---|
| mean absolute error | 0.99 | 1.03 |
| mean squared error | 1.35 | 1.47 |
| root mean squared error | 1.16 | 1.21 |

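The card does not state how sentence scores are aggregated into a note score; the sketch below assumes a simple mean per note, purely to illustrate the note-level unit, and may differ from the method behind the reported metrics.
```
import numpy as np

# Hypothetical sentence-level scores grouped by note; mean aggregation is an
# assumption, not necessarily the method used for the note-level metrics above.
note_sentence_scores = {
    'note_001': [2.89, 3.4],
    'note_002': [1.1],
}
note_scores = {note: float(np.mean(s)) for note, s in note_sentence_scores.items()}
print(note_scores)  # {'note_001': 3.145, 'note_002': 1.1}
```
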
## Authors and references
### Authors
Jenia Kim, Piek Vossen

### References
TBD