Regression Model for Walking Functioning Levels (ICF d450)
Description
A fine-tuned regression model that assigns a functioning level to Dutch sentences describing walking functions. The model is based on a pre-trained Dutch medical language model (link to be added): a RoBERTa model trained from scratch on clinical notes from the Amsterdam UMC. To detect sentences about walking functions in Dutch clinical text, use the icf-domains classification model.
Functioning levels
Level | Meaning |
---|---|
5 | Patient can walk independently anywhere: level surface, uneven surface, slopes, stairs. |
4 | Patient can walk independently on level surface but requires help on stairs, inclines, uneven surface; or, patient can walk independently, but the walking is not fully normal. |
3 | Patient requires verbal supervision for walking, without physical contact. |
2 | Patient needs continuous or intermittent support of one person to help with balance and coordination. |
1 | Patient needs firm continuous support from one person who helps carrying weight and with balance. |
0 | Patient cannot walk or needs help from two or more people; or, patient walks on a treadmill. |
The predictions generated by the model might sometimes be outside of the scale (e.g. 5.2); this is normal in a regression model.
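If an application requires values on the original 0-5 scale, the raw regression output can be clipped and, when a discrete level is needed, rounded. The following is a minimal post-processing sketch; clipping and rounding are choices made here for illustration and are not part of the model:

import numpy as np

def to_level(raw_prediction, discrete=False):
    """Map a raw regression output onto the 0-5 scale (post-processing only)."""
    level = float(np.clip(raw_prediction, 0.0, 5.0))
    return int(round(level)) if discrete else level

print(to_level(5.2))                   # 5.0
print(to_level(4.2, discrete=True))    # 4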
Intended uses and limitations
- The model was fine-tuned (trained, validated and tested) on medical records from the Amsterdam UMC (the two academic medical centers of Amsterdam). It might perform differently on text from a different hospital or text from non-hospital sources (e.g. GP records).
- The model was fine-tuned with the Simple Transformers library. This library is based on Transformers, but the model cannot be used directly with the Transformers pipeline and classes; doing so would generate incorrect outputs. For this reason, the API on this page is disabled.
How to use
To generate predictions with the model, use the Simple Transformers library:
import numpy as np
from simpletransformers.classification import ClassificationModel

# Load the fine-tuned regression model from the Hugging Face Hub.
model = ClassificationModel(
    'roberta',
    'CLTL/icf-levels-fac',
    use_cuda=False,
)

# Predict the functioning level for a single sentence.
example = 'kan nog goed traplopen, maar flink ingeleverd aan conditie na Corona'
_, raw_outputs = model.predict([example])
predictions = np.squeeze(raw_outputs)
The prediction on the example is:
4.2
The raw outputs look like this:
[[4.20903111]]
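In practice, this model is combined with the icf-domains classification model mentioned in the description: first detect which sentences mention walking functions, then assign a level to those sentences. The sketch below assumes that icf-domains loads as a Simple Transformers multi-label classifier and that FAC_LABEL_INDEX marks the position of the walking (FAC) domain in its label vector; both are assumptions, so check the icf-domains model card for the actual usage and label order.

import numpy as np
from simpletransformers.classification import (
    ClassificationModel,
    MultiLabelClassificationModel,
)

# Hypothetical index of the walking (FAC) domain in the icf-domains label vector;
# verify against the icf-domains model card before use.
FAC_LABEL_INDEX = 0

# Step 1: a multi-label classifier that flags which ICF domains a sentence mentions.
domains_model = MultiLabelClassificationModel('roberta', 'CLTL/icf-domains', use_cuda=False)
# Step 2: the regression model from this page, which assigns a walking functioning level.
levels_model = ClassificationModel('roberta', 'CLTL/icf-levels-fac', use_cuda=False)

sentences = [
    'kan nog goed traplopen, maar flink ingeleverd aan conditie na Corona',
    'patient is opgenomen ter observatie',  # hypothetical sentence without walking content
]

domain_preds, _ = domains_model.predict(sentences)
walking_sentences = [
    sentence for sentence, labels in zip(sentences, domain_preds)
    if labels[FAC_LABEL_INDEX] == 1
]

if walking_sentences:
    _, raw_outputs = levels_model.predict(walking_sentences)
    levels = np.atleast_1d(np.squeeze(raw_outputs))
    for sentence, level in zip(walking_sentences, levels):
        print(f'{level:.1f}\t{sentence}')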
Training data
- The training data consists of clinical notes from medical records (in Dutch) of the Amsterdam UMC. Due to privacy constraints, the data cannot be released.
- The annotation guidelines used for the project can be found here.
Training procedure
The default training parameters of Simple Transformers were used, including the following (a configuration sketch follows the list):
- Optimizer: AdamW
- Learning rate: 4e-5
- Num train epochs: 1
- Train batch size: 8
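For reference, the sketch below shows how these values map onto a Simple Transformers regression setup. The base-model path and the training data are placeholders: the pre-trained Dutch medical model is the one referenced in the description, and the clinical data itself cannot be released.

from simpletransformers.classification import ClassificationArgs, ClassificationModel

# Hypothetical fine-tuning setup mirroring the listed (default) parameters.
model_args = ClassificationArgs(
    regression=True,        # predict a continuous functioning level
    learning_rate=4e-5,
    num_train_epochs=1,
    train_batch_size=8,
    optimizer='AdamW',
)

model = ClassificationModel(
    'roberta',
    'path/to/pretrained-dutch-medical-roberta',  # placeholder for the base model
    num_labels=1,           # single continuous output for regression
    args=model_args,
    use_cuda=False,
)

# train_df would be a pandas DataFrame with columns ['text', 'labels'],
# where 'labels' holds the (float) functioning level per sentence:
# model.train_model(train_df)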
Evaluation results
The evaluation is done on the sentence level (the classification unit) and on the note level (the aggregated unit that is meaningful for healthcare professionals).
Metric | Sentence-level | Note-level |
---|---|---|
mean absolute error | 0.70 | 0.66 |
mean squared error | 0.91 | 0.93 |
root mean squared error | 0.95 | 0.96 |
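The exact sentence-to-note aggregation is not described on this page. The sketch below assumes a simple mean of the sentence-level predictions per note, purely to illustrate what note-level evaluation refers to; the actual aggregation used in the project may differ.

import pandas as pd

# Hypothetical sentence-level predictions; 'note_id' groups sentences per clinical note.
sentence_preds = pd.DataFrame({
    'note_id': [101, 101, 102],
    'prediction': [4.2, 3.8, 1.1],
})

# Assumed aggregation: average the sentence-level predictions within each note.
note_preds = sentence_preds.groupby('note_id')['prediction'].mean()
print(note_preds)  # note 101 -> 4.0, note 102 -> 1.1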
Authors and references
Authors
Jenia Kim, Piek Vossen
References
TBD