# Regression Model for Weight Maintenance Functioning Levels (ICF b530)
A fine-tuned regression model that assigns a functioning level to Dutch sentences describing weight maintenance functions. The model is based on a pre-trained Dutch medical language model (link to be added): a RoBERTa model trained from scratch on clinical notes from the Amsterdam UMC. To detect sentences about weight maintenance functions in Dutch clinical text, use the icf-domains classification model.
| Level | Meaning |
|---|---|
| 4 | Healthy weight; no unintentional weight loss or gain; SNAQ 0 or 1. |
| 3 | Some unintentional weight loss or gain, or lost a lot of weight but gained some of it back afterwards. |
| 2 | Moderate unintentional weight loss or gain (more than 3 kg in the last month); SNAQ 2. |
| 1 | Severe unintentional weight loss or gain (more than 6 kg in the last 6 months); SNAQ ≥ 3. |
| 0 | Severe unintentional weight loss or gain (more than 6 kg in the last 6 months) and admitted to the ICU. |
The predictions generated by the model might sometimes be outside of the scale (e.g. 4.2); this is normal in a regression model.
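If integer levels are needed downstream, out-of-range predictions can be clipped back onto the 0–4 scale in post-processing. A minimal sketch (the input values here are illustrative, not actual model outputs):

```python
import numpy as np

# Hypothetical raw regression outputs; values can fall outside the 0-4 scale.
raw_predictions = np.array([4.2, 3.7, -0.1, 2.5])

# Clip to the valid range of functioning levels (0-4).
clipped = np.clip(raw_predictions, 0, 4)

# Optionally round to the nearest integer level.
levels = np.rint(clipped).astype(int)
```

Note that `np.rint` rounds halfway values to the nearest even integer, so 2.5 becomes 2.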
## Intended uses and limitations
- The model was fine-tuned (trained, validated and tested) on medical records from the Amsterdam UMC (the two academic medical centers of Amsterdam). It might perform differently on text from a different hospital or text from non-hospital sources (e.g. GP records).
- The model was fine-tuned with the Simple Transformers library. This library is based on Transformers, but the model cannot be used directly with the Transformers `pipeline` and classes; doing so would generate incorrect outputs. For this reason, the Inference API on this page is disabled.
## How to use
To generate predictions with the model, use the Simple Transformers library:
```python
import numpy as np
from simpletransformers.classification import ClassificationModel

model = ClassificationModel(
    'roberta',
    'CLTL/icf-levels-mbw',
    use_cuda=False,
)

example = 'Tijdens opname >10 kg afgevallen.'  # 'Lost >10 kg during admission.'
_, raw_outputs = model.predict([example])
predictions = np.squeeze(raw_outputs)
```
The prediction on the example is the squeezed value of `raw_outputs`: a single regression score on the levels scale.
## Training data

- The training data consists of clinical notes from medical records (in Dutch) of the Amsterdam UMC. Due to privacy constraints, the data cannot be released.
- The annotation guidelines used for the project can be found here.
## Training procedure

The default training parameters of Simple Transformers were used, including:
- Optimizer: AdamW
- Learning rate: 4e-5
- Num train epochs: 1
- Train batch size: 8
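The parameters above can be expressed as a Simple Transformers `args` dictionary, which would be passed to `ClassificationModel` when fine-tuning. This is a sketch of that configuration, not the exact training script; the `regression` flag is an assumption based on the model type:

```python
# Default Simple Transformers fine-tuning parameters as listed above.
train_args = {
    "optimizer": "AdamW",
    "learning_rate": 4e-5,
    "num_train_epochs": 1,
    "train_batch_size": 8,
    "regression": True,  # assumption: single-output regression head
}
```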
## Evaluation results

The evaluation is done at the sentence level (the classification unit) and at the note level (the aggregated unit, which is meaningful to healthcare professionals).
| | Sentence-level | Note-level |
|---|---|---|
| mean absolute error | 0.81 | 0.60 |
| mean squared error | 0.83 | 0.56 |
| root mean squared error | 0.91 | 0.75 |
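As a quick sanity check, the reported root mean squared errors are the square roots of the reported mean squared errors:

```python
import math

# Reported mean squared errors (sentence-level, note-level).
mse_sentence, mse_note = 0.83, 0.56

# RMSE is the square root of MSE; these reproduce the reported 0.91 and 0.75.
rmse_sentence = math.sqrt(mse_sentence)
rmse_note = math.sqrt(mse_note)
```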
## Authors and references
Jenia Kim, Piek Vossen