---
language: nl
license: mit
pipeline_tag: text-classification
inference: false
---

# Regression Model for Eating Functioning Levels (ICF d550)

## Description

A fine-tuned regression model that assigns a functioning level to Dutch sentences describing eating functions. The model is based on a pre-trained Dutch medical language model ([link to be added]()): a RoBERTa model trained from scratch on clinical notes of the Amsterdam UMC. To detect sentences about eating functions in Dutch clinical text, first use the [icf-domains](https://huggingface.co/CLTL/icf-domains) classification model.
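
For context, a minimal sketch of this two-stage workflow (the index of the ETN (eating) label in the domain model's output is an assumption here; check the icf-domains model card for the exact label order):

```
from simpletransformers.classification import (
    ClassificationModel,
    MultiLabelClassificationModel,
)

# Stage 1: detect sentences about eating functions with icf-domains.
# ETN_INDEX is a hypothetical position of the ETN label; verify it
# against the icf-domains model card before relying on it.
ETN_INDEX = 4
domain_model = MultiLabelClassificationModel('roberta', 'CLTL/icf-domains', use_cuda=False)

sentences = ['Sondevoeding is geïndiceerd']  # 'Tube feeding is indicated'
domain_preds, _ = domain_model.predict(sentences)
eating = [s for s, p in zip(sentences, domain_preds) if p[ETN_INDEX] == 1]

# Stage 2: assign a functioning level to the detected sentences.
level_model = ClassificationModel('roberta', 'CLTL/icf-levels-etn', use_cuda=False)
if eating:
    _, raw_outputs = level_model.predict(eating)
```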

## Functioning levels

| Level | Meaning |
|---|---|
| 4 | Can eat independently (in culturally acceptable ways), good intake, eats according to her/his needs. |
| 3 | Can eat independently but with adjustments, and/or somewhat reduced intake (>75% of her/his needs), and/or good intake can be achieved with proper advice. |
| 2 | Reduced intake, and/or stimulus / feeding modules / nutrition drinks are needed (but not tube feeding / TPN). |
| 1 | Intake is severely reduced (<50% of her/his needs), and/or tube feeding / TPN is needed. |
| 0 | Cannot eat, and/or fully dependent on tube feeding / TPN. |

The predictions generated by the model can sometimes fall outside the scale (e.g. 4.2); this is normal for a regression model.
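
If discrete levels on the 0-4 scale are needed, out-of-scale predictions can be clipped and rounded; a minimal post-processing sketch (this step is not part of the model itself):

```
import numpy as np

# Hypothetical post-processing: clip a raw model prediction to the 0-4
# scale and round it to the nearest integer functioning level.
raw_prediction = 4.2
level = int(np.clip(round(raw_prediction), 0, 4))  # -> 4
```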

## Intended uses and limitations

- The model was fine-tuned (trained, validated and tested) on medical records from the Amsterdam UMC (the two academic medical centers of Amsterdam). It might perform differently on text from a different hospital or from non-hospital sources (e.g. GP records).
- The model was fine-tuned with the [Simple Transformers](https://simpletransformers.ai/) library. This library is based on Transformers, but the model cannot be used directly with the Transformers `pipeline` and classes; doing so would generate incorrect outputs. For this reason, the inference API on this page is disabled.

## How to use

To generate predictions with the model, use the [Simple Transformers](https://simpletransformers.ai/) library (install it with `pip install simpletransformers`):

```
import numpy as np
from simpletransformers.classification import ClassificationModel

# Load the fine-tuned regression model; set use_cuda=True if a GPU is available.
model = ClassificationModel(
    'roberta',
    'CLTL/icf-levels-etn',
    use_cuda=False,
)

# 'Sondevoeding is geïndiceerd' = 'Tube feeding is indicated'
example = 'Sondevoeding is geïndiceerd'
_, raw_outputs = model.predict([example])
predictions = np.squeeze(raw_outputs)
```

The prediction on the example is:

```
0.89
```

The raw outputs look like this:

```
[[0.8872931]]
```
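
Note that `predict` also accepts several sentences at once; a small self-contained usage sketch (the second sentence is a made-up example meaning 'Eats independently, intake is good'):

```
import numpy as np
from simpletransformers.classification import ClassificationModel

model = ClassificationModel('roberta', 'CLTL/icf-levels-etn', use_cuda=False)

# raw_outputs has shape (n_sentences, 1); squeeze it to one score per sentence.
examples = [
    'Sondevoeding is geïndiceerd',
    'Eet zelfstandig, intake is goed',  # made-up example sentence
]
_, raw_outputs = model.predict(examples)
predictions = np.squeeze(raw_outputs)
```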

## Training data

- The training data consists of clinical notes from medical records (in Dutch) of the Amsterdam UMC. Due to privacy constraints, the data cannot be released.
- The annotation guidelines used for the project can be found [here](https://github.com/cltl/a-proof-zonmw/tree/main/resources/annotation_guidelines).

## Training procedure

The default training parameters of Simple Transformers were used, including the following (see the configuration sketch after this list):

- Optimizer: AdamW
- Learning rate: 4e-5
- Num train epochs: 1
- Train batch size: 8
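
As a rough illustration, these defaults correspond to the Simple Transformers configuration below; the exact training script is not part of this card, and the model path is hypothetical:

```
from simpletransformers.classification import ClassificationArgs, ClassificationModel

# Sketch of a regression fine-tuning setup with the defaults listed above
# (AdamW is the library's default optimizer).
model_args = ClassificationArgs(
    learning_rate=4e-5,
    num_train_epochs=1,
    train_batch_size=8,
    regression=True,  # continuous output instead of class labels
)

model = ClassificationModel(
    'roberta',
    'path/to/dutch-medical-roberta',  # hypothetical path to the pre-trained model
    num_labels=1,  # regression head
    args=model_args,
    use_cuda=False,
)
# model.train_model(train_df) would fine-tune on a DataFrame with
# 'text' and 'labels' columns, where 'labels' holds the float levels.
```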

## Evaluation results

The evaluation is done at the sentence level (the classification unit) and at the note level (the aggregated unit, which is meaningful for healthcare professionals).

| | Sentence-level | Note-level |
|---|---|---|
| mean absolute error | 0.59 | 0.50 |
| mean squared error | 0.65 | 0.47 |
| root mean squared error | 0.81 | 0.68 |
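
The card does not specify how sentence-level predictions are aggregated into note-level scores; purely for illustration, a simple mean over the sentences of a note could look like this (the actual aggregation used in the evaluation may differ):

```
import numpy as np

# Hypothetical note-level aggregation: average the sentence-level
# predictions of one note. Values are made up for illustration.
sentence_predictions = [0.89, 1.40, 1.10]
note_level_score = float(np.mean(sentence_predictions))  # 1.13
```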

## Authors and references

### Authors

Jenia Kim, Piek Vossen

### References

TBD