# ADAPMIT-multilabel-climatebert
This model is a fine-tuned version of climatebert/distilroberta-base-climate-f on the Policy-Classification dataset. It achieves the following results on the evaluation set (a sketch of the micro/samples/weighted averaging follows the list):
- Loss: 0.3535
- Precision-micro: 0.8999
- Precision-samples: 0.8559
- Precision-weighted: 0.9001
- Recall-micro: 0.9173
- Recall-samples: 0.8592
- Recall-weighted: 0.9173
- F1-micro: 0.9085
- F1-samples: 0.8521
- F1-weighted: 0.9085
## Model description
The purpose of this model is to predict multiple labels simultaneously for a given input text. Specifically, the model predicts two labels, AdaptationLabel and MitigationLabel, indicating whether a passage is relevant to climate adaptation and/or climate mitigation.
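A minimal inference sketch with the Hugging Face Transformers API is shown below. It assumes the repository id `GIZ/ADAPMIT-multilabel-climatebert_f` (from this card's model tree) and a sigmoid threshold of 0.5; both are assumptions you may need to adjust.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "GIZ/ADAPMIT-multilabel-climatebert_f"  # assumed repository id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

text = "The plan funds coastal flood defenses and expands renewable energy capacity."
inputs = tokenizer(text, return_tensors="pt", truncation=True)

with torch.no_grad():
    logits = model(**inputs).logits

# Multi-label setup: apply a sigmoid per label and threshold each
# independently, rather than taking a softmax over labels.
probs = torch.sigmoid(logits).squeeze(0)
for label_id, p in enumerate(probs.tolist()):
    label = model.config.id2label[label_id]
    print(f"{label}: {p:.3f} -> {'relevant' if p >= 0.5 else 'not relevant'}")
```

Because the two labels are predicted independently, a passage can be tagged as both adaptation- and mitigation-relevant, or as neither.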
## Intended uses & limitations
More information needed
## Training and evaluation data

Training dataset: 12538 examples

| Class | Positive count |
|---|---|
| AdaptationLabel | 5439 |
| MitigationLabel | 6659 |

Validation dataset: 1190 examples

| Class | Positive count |
|---|---|
| AdaptationLabel | 533 |
| MitigationLabel | 604 |
## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a configuration sketch follows the list):
- learning_rate: 6.03e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 300
- num_epochs: 5
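These settings map onto Hugging Face `TrainingArguments` roughly as sketched below; `output_dir` is a placeholder, and the Adam betas/epsilon shown are the library defaults restated for completeness.

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="adapmit-multilabel",  # hypothetical output path
    learning_rate=6.03e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="cosine",
    warmup_steps=300,
    num_train_epochs=5,
    adam_beta1=0.9,                   # Adam settings as reported above
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)
```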
### Training results

| Training Loss | Epoch | Step | Validation Loss | Precision-micro | Precision-samples | Precision-weighted | Recall-micro | Recall-samples | Recall-weighted | F1-micro | F1-samples | F1-weighted |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 0.3512 | 1.0 | 784 | 0.3253 | 0.8530 | 0.8273 | 0.8572 | 0.8883 | 0.8311 | 0.8883 | 0.8703 | 0.8238 | 0.8703 |
| 0.2152 | 2.0 | 1568 | 0.2604 | 0.8999 | 0.8580 | 0.9002 | 0.9094 | 0.8521 | 0.9094 | 0.9046 | 0.8510 | 0.9046 |
| 0.1348 | 3.0 | 2352 | 0.2908 | 0.9038 | 0.8626 | 0.9059 | 0.9173 | 0.8588 | 0.9173 | 0.9105 | 0.8566 | 0.9107 |
| 0.0767 | 4.0 | 3136 | 0.3367 | 0.8999 | 0.8563 | 0.9000 | 0.9173 | 0.8588 | 0.9173 | 0.9085 | 0.8524 | 0.9085 |
| 0.0475 | 5.0 | 3920 | 0.3535 | 0.8999 | 0.8559 | 0.9001 | 0.9173 | 0.8592 | 0.9173 | 0.9085 | 0.8521 | 0.9085 |
Per-label results on the validation set:

| label | precision | recall | f1-score | support |
|---|---|---|---|---|
| AdaptationLabel | 0.909 | 0.908 | 0.909 | 533 |
| MitigationLabel | 0.891 | 0.925 | 0.908 | 604 |
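The per-label table above has the shape of scikit-learn's classification report. A sketch of how such a report can be produced, with `y_true`/`y_pred` as hypothetical stand-ins for the binarized validation labels and thresholded predictions:

```python
import numpy as np
from sklearn.metrics import classification_report

y_true = np.array([[1, 0], [1, 1], [0, 1]])  # hypothetical gold labels
y_pred = np.array([[1, 0], [1, 0], [0, 1]])  # hypothetical predictions

print(classification_report(
    y_true,
    y_pred,
    target_names=["AdaptationLabel", "MitigationLabel"],
    zero_division=0,
))
```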
## Environmental Impact

Carbon emissions were measured using CodeCarbon (a tracking sketch follows the list).
- Carbon Emitted: 0.0375 kg of CO2
- Hours Used: 0.659 hours
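A minimal sketch of how such a measurement can be taken with CodeCarbon's `EmissionsTracker`, wrapped around the training run (the workload below is a stand-in):

```python
from codecarbon import EmissionsTracker

tracker = EmissionsTracker()  # also writes an emissions.csv report by default
tracker.start()
try:
    # Stand-in workload; in the real run this would be trainer.train().
    sum(i * i for i in range(10_000_000))
finally:
    emissions_kg = tracker.stop()  # estimated emissions in kg CO2eq
    print(f"Carbon emitted: {emissions_kg:.4f} kg CO2")
```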
### Training Hardware
- On Cloud: yes
- GPU Model: 1 x Tesla T4
- CPU Model: Intel(R) Xeon(R) CPU @ 2.00GHz
- RAM Size: 12.67 GB
## Framework versions
- Transformers 4.38.1
- Pytorch 2.1.0+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2