
# dit-tiny_tobacco3482_kd_CEKD_t1.5_a0.5

This model is a fine-tuned version of [microsoft/dit-base](https://huggingface.co/microsoft/dit-base) on an unspecified dataset (the `tobacco3482` in the model name suggests Tobacco3482). It achieves the following results on the evaluation set:

- Loss: 2.9246
- Accuracy: 0.18
- Brier Loss: 0.8755
- NLL: 6.7967
- F1 Micro: 0.18
- F1 Macro: 0.0306
- ECE: 0.2497
- AURC: 0.8499
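The suffix `kd_CEKD_t1.5_a0.5` in the model name suggests a cross-entropy knowledge-distillation objective with temperature 1.5 and mixing weight 0.5, though the card does not document this. A minimal sketch of such a loss, assuming the conventional blend of hard-label cross-entropy with a temperature-scaled KL term (the names `kd_loss`, `temperature`, and `alpha` are illustrative, not taken from the training code):

```python
import torch
import torch.nn.functional as F

def kd_loss(student_logits: torch.Tensor,
            teacher_logits: torch.Tensor,
            labels: torch.Tensor,
            temperature: float = 1.5,
            alpha: float = 0.5) -> torch.Tensor:
    """Blend hard-label cross-entropy with a softened teacher-matching term.

    Illustrative only: the actual objective behind 'kd_CEKD_t1.5_a0.5'
    is not documented in this card.
    """
    # Hard-label term: ordinary cross-entropy against ground truth.
    ce = F.cross_entropy(student_logits, labels)
    # Soft-label term: KL divergence between temperature-scaled student and
    # teacher distributions, rescaled by T^2 as is conventional so gradient
    # magnitudes stay comparable across temperatures.
    t = temperature
    kl = F.kl_div(
        F.log_softmax(student_logits / t, dim=-1),
        F.softmax(teacher_logits / t, dim=-1),
        reduction="batchmean",
    ) * (t * t)
    return alpha * ce + (1.0 - alpha) * kl
```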

## Model description

More information needed

## Intended uses & limitations

More information needed
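In the absence of documented usage, the checkpoint should load like any Transformers image-classification model. A minimal sketch, where `your-namespace/...` is a placeholder for the actual Hub repo id and `document.png` is an example input:

```python
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForImageClassification

# Hypothetical repo id; substitute the actual Hub path of this checkpoint.
ckpt = "your-namespace/dit-tiny_tobacco3482_kd_CEKD_t1.5_a0.5"

processor = AutoImageProcessor.from_pretrained(ckpt)
model = AutoModelForImageClassification.from_pretrained(ckpt)

# Classify a single document image.
image = Image.open("document.png").convert("RGB")
inputs = processor(images=image, return_tensors="pt")
logits = model(**inputs).logits
print(model.config.id2label[logits.argmax(-1).item()])
```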

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a `TrainingArguments` sketch reproducing them follows the list):

- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 25
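Assuming the run used the Hugging Face `Trainer` (which this card's auto-generated layout suggests), the values above map onto `TrainingArguments` as follows; `output_dir` is illustrative:

```python
from transformers import TrainingArguments

# Sketch of a TrainingArguments setup matching the listed values.
# Note: total_train_batch_size (256) is derived, not set directly:
# per_device_train_batch_size (16) x gradient_accumulation_steps (16).
args = TrainingArguments(
    output_dir="dit-tiny_tobacco3482_kd_CEKD_t1.5_a0.5",
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    gradient_accumulation_steps=16,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=25,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)
```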

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | Brier Loss | NLL     | F1 Micro | F1 Macro | ECE    | AURC   |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:----------:|:-------:|:--------:|:--------:|:------:|:------:|
| No log        | 0.96  | 3    | 3.1239          | 0.145    | 0.8999     | 10.1580 | 0.145    | 0.0253   | 0.2222 | 0.8467 |
| No log        | 1.96  | 6    | 3.0895          | 0.145    | 0.8946     | 10.5934 | 0.145    | 0.0253   | 0.2303 | 0.8470 |
| No log        | 2.96  | 9    | 3.0385          | 0.165    | 0.8866     | 8.6307  | 0.165    | 0.0502   | 0.2200 | 0.8458 |
| No log        | 3.96  | 12   | 2.9972          | 0.21     | 0.8806     | 6.5449  | 0.2100   | 0.0615   | 0.2512 | 0.8364 |
| No log        | 4.96  | 15   | 2.9719          | 0.155    | 0.8776     | 6.7565  | 0.155    | 0.0271   | 0.2414 | 0.8884 |
| No log        | 5.96  | 18   | 2.9579          | 0.215    | 0.8768     | 7.0870  | 0.2150   | 0.0643   | 0.2713 | 0.8778 |
| No log        | 6.96  | 21   | 2.9485          | 0.18     | 0.8768     | 7.0291  | 0.18     | 0.0308   | 0.2482 | 0.8532 |
| No log        | 7.96  | 24   | 2.9417          | 0.18     | 0.8770     | 6.9706  | 0.18     | 0.0306   | 0.2559 | 0.8525 |
| No log        | 8.96  | 27   | 2.9360          | 0.18     | 0.8768     | 6.9349  | 0.18     | 0.0306   | 0.2498 | 0.8527 |
| No log        | 9.96  | 30   | 2.9326          | 0.18     | 0.8767     | 6.9268  | 0.18     | 0.0306   | 0.2635 | 0.8533 |
| No log        | 10.96 | 33   | 2.9303          | 0.18     | 0.8765     | 6.9226  | 0.18     | 0.0306   | 0.2637 | 0.8531 |
| No log        | 11.96 | 36   | 2.9289          | 0.18     | 0.8764     | 6.9217  | 0.18     | 0.0306   | 0.2591 | 0.8524 |
| No log        | 12.96 | 39   | 2.9279          | 0.18     | 0.8762     | 6.8547  | 0.18     | 0.0306   | 0.2505 | 0.8526 |
| No log        | 13.96 | 42   | 2.9270          | 0.18     | 0.8760     | 6.8491  | 0.18     | 0.0306   | 0.2500 | 0.8520 |
| No log        | 14.96 | 45   | 2.9263          | 0.18     | 0.8759     | 6.8471  | 0.18     | 0.0306   | 0.2463 | 0.8518 |
| No log        | 15.96 | 48   | 2.9258          | 0.18     | 0.8758     | 6.8445  | 0.18     | 0.0306   | 0.2462 | 0.8520 |
| No log        | 16.96 | 51   | 2.9255          | 0.18     | 0.8758     | 6.8452  | 0.18     | 0.0306   | 0.2587 | 0.8511 |
| No log        | 17.96 | 54   | 2.9256          | 0.18     | 0.8758     | 6.7940  | 0.18     | 0.0306   | 0.2585 | 0.8513 |
| No log        | 18.96 | 57   | 2.9256          | 0.18     | 0.8758     | 6.7930  | 0.18     | 0.0306   | 0.2625 | 0.8508 |
| No log        | 19.96 | 60   | 2.9252          | 0.18     | 0.8757     | 6.7945  | 0.18     | 0.0306   | 0.2580 | 0.8506 |
| No log        | 20.96 | 63   | 2.9250          | 0.18     | 0.8756     | 6.7999  | 0.18     | 0.0306   | 0.2539 | 0.8505 |
| No log        | 21.96 | 66   | 2.9248          | 0.18     | 0.8756     | 6.8441  | 0.18     | 0.0306   | 0.2538 | 0.8502 |
| No log        | 22.96 | 69   | 2.9247          | 0.18     | 0.8755     | 6.8439  | 0.18     | 0.0306   | 0.2497 | 0.8500 |
| No log        | 23.96 | 72   | 2.9247          | 0.18     | 0.8755     | 6.7977  | 0.18     | 0.0306   | 0.2497 | 0.8500 |
| No log        | 24.96 | 75   | 2.9246          | 0.18     | 0.8755     | 6.7967  | 0.18     | 0.0306   | 0.2497 | 0.8499 |
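For context on the calibration column, ECE here is presumably the standard binned expected calibration error; the exact estimator used is not documented. A minimal sketch of the common equal-width-bin version, assuming softmax probabilities:

```python
import torch

def expected_calibration_error(probs: torch.Tensor,
                               labels: torch.Tensor,
                               n_bins: int = 10) -> float:
    """Equal-width-bin ECE: weighted mean |accuracy - confidence| per bin.

    Illustrative; the estimator behind this card's ECE column is unknown.
    """
    confidences, predictions = probs.max(dim=-1)
    correct = predictions.eq(labels).float()
    ece = torch.zeros(1)
    edges = torch.linspace(0, 1, n_bins + 1)
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (confidences > lo) & (confidences <= hi)
        if in_bin.any():
            weight = in_bin.float().mean()     # fraction of samples in bin
            acc = correct[in_bin].mean()       # bin accuracy
            conf = confidences[in_bin].mean()  # bin mean confidence
            ece += weight * (acc - conf).abs()
    return ece.item()
```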

### Framework versions

- Transformers 4.26.1
- Pytorch 1.13.1.post200
- Datasets 2.9.0
- Tokenizers 0.13.2