---
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: dit-small_tobacco3482_kd_CEKD_t2.5_a0.7
  results: []
---

# dit-small_tobacco3482_kd_CEKD_t2.5_a0.7

This model is a fine-tuned version of [microsoft/dit-base](https://huggingface.co/microsoft/dit-base) on an unspecified dataset (the model name suggests Tobacco-3482 document classification).
It achieves the following results on the evaluation set:
- Loss: 3.1993
- Accuracy: 0.185
- Brier Loss: 0.8672
- Nll: 6.5703
- F1 Micro: 0.185
- F1 Macro: 0.0488
- Ece: 0.2594
- Aurc: 0.7367

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 25

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | Brier Loss | Nll | F1 Micro | F1 Macro | Ece | Aurc |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:----------:|:------:|:--------:|:--------:|:------:|:------:|
| No log | 0.96 | 3 | 3.4684 | 0.06 | 0.9042 | 9.2910 | 0.06 | 0.0114 | 0.1755 | 0.9033 |
| No log | 1.96 | 6 | 3.3741 | 0.18 | 0.8886 | 6.5491 | 0.18 | 0.0305 | 0.2324 | 0.8055 |
| No log | 2.96 | 9 | 3.2779 | 0.18 | 0.8767 | 7.2662 | 0.18 | 0.0305 | 0.2493 | 0.8196 |
| No log | 3.96 | 12 | 3.2605 | 0.18 | 0.8816 | 7.0963 | 0.18 | 0.0305 | 0.2628 | 0.8140 |
| No log | 4.96 | 15 | 3.2592 | 0.185 | 0.8814 | 6.9350 | 0.185 | 0.0488 | 0.2584 | 0.7850 |
| No log | 5.96 | 18 | 3.2576 | 0.185 | 0.8782 | 6.3113 | 0.185 | 0.0488 | 0.2561 | 0.7731 |
| No log | 6.96 | 21 | 3.2540 | 0.185 | 0.8747 | 6.0058 | 0.185 | 0.0488 | 0.2446 | 0.7705 |
| No log | 7.96 | 24 | 3.2500 | 0.185 | 0.8731 | 5.9849 | 0.185 | 0.0488 | 0.2442 | 0.7669 |
| No log | 8.96 | 27 | 3.2430 | 0.185 | 0.8717 | 5.9785 | 0.185 | 0.0488 | 0.2483 | 0.7626 |
| No log | 9.96 | 30 | 3.2377 | 0.185 | 0.8711 | 6.2837 | 0.185 | 0.0488 | 0.2462 | 0.7609 |
| No log | 10.96 | 33 | 3.2332 | 0.185 | 0.8713 | 6.8641 | 0.185 | 0.0488 | 0.2560 | 0.7601 |
| No log | 11.96 | 36 | 3.2293 | 0.185 | 0.8719 | 6.8631 | 0.185 | 0.0488 | 0.2523 | 0.7587 |
| No log | 12.96 | 39 | 3.2246 | 0.185 | 0.8717 | 6.8535 | 0.185 | 0.0488 | 0.2526 | 0.7558 |
| No log | 13.96 | 42 | 3.2190 | 0.185 | 0.8709 | 6.8177 | 0.185 | 0.0488 | 0.2565 | 0.7533 |
| No log | 14.96 | 45 | 3.2134 | 0.185 | 0.8700 | 6.7894 | 0.185 | 0.0488 | 0.2630 | 0.7533 |
| No log | 15.96 | 48 | 3.2091 | 0.185 | 0.8691 | 6.7672 | 0.185 | 0.0488 | 0.2585 | 0.7500 |
| No log | 16.96 | 51 | 3.2069 | 0.185 | 0.8687 | 6.6512 | 0.185 | 0.0488 | 0.2536 | 0.7466 |
| No log | 17.96 | 54 | 3.2063 | 0.185 | 0.8682 | 6.5227 | 0.185 | 0.0488 | 0.2520 | 0.7429 |
| No log | 18.96 | 57 | 3.2057 | 0.185 | 0.8682 | 6.5119 | 0.185 | 0.0488 | 0.2514 | 0.7406 |
| No log | 19.96 | 60 | 3.2036 | 0.185 | 0.8678 | 6.5674 | 0.185 | 0.0488 | 0.2501 | 0.7385 |
| No log | 20.96 | 63 | 3.2023 | 0.185 | 0.8677 | 6.5709 | 0.185 | 0.0488 | 0.2506 | 0.7385 |
| No log | 21.96 | 66 | 3.2010 | 0.185 | 0.8675 | 6.5731 | 0.185 | 0.0488 | 0.2631 | 0.7376 |
| No log | 22.96 | 69 | 3.2000 | 0.185 | 0.8673 | 6.5723 | 0.185 | 0.0488 | 0.2591 | 0.7371 |
| No log | 23.96 | 72 | 3.1996 | 0.185 | 0.8673 | 6.5715 | 0.185 | 0.0488 | 0.2593 | 0.7368 |
| No log | 24.96 | 75 | 3.1993 | 0.185 | 0.8672 | 6.5703 | 0.185 | 0.0488 | 0.2594 | 0.7367 |
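The `kd_CEKD_t2.5_a0.7` suffix in the model name suggests a combined cross-entropy / knowledge-distillation objective with temperature 2.5 and distillation weight 0.7. The card itself does not document the loss, so the sketch below is only an illustration of the conventional form of such an objective; the function name and the weighting convention are assumptions, not the confirmed training code.

```python
import torch
import torch.nn.functional as F

def cekd_loss(student_logits, teacher_logits, labels, temperature=2.5, alpha=0.7):
    """Hypothetical CE + KD objective implied by the model name; not confirmed by this card."""
    # Hard-label cross-entropy on the student's predictions.
    ce = F.cross_entropy(student_logits, labels)
    # Soft-label KL divergence against the teacher's temperature-softened
    # distribution, scaled by T^2 as in Hinton et al. (2015).
    kd = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * temperature ** 2
    # alpha weights the distillation term against the hard-label term
    # (assumed convention for the a0.7 suffix).
    return alpha * kd + (1 - alpha) * ce
```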
### Framework versions

- Transformers 4.26.1
- Pytorch 1.13.1.post200
- Datasets 2.9.0
- Tokenizers 0.13.2
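For completeness, a minimal sketch of loading the checkpoint for image classification with the Transformers version listed above; the Hub repository id and the example image path are assumptions and may need to be adjusted.

```python
from transformers import AutoImageProcessor, AutoModelForImageClassification
from PIL import Image

# Repository id assumed to match the model name; replace with the actual Hub path.
model_id = "dit-small_tobacco3482_kd_CEKD_t2.5_a0.7"

processor = AutoImageProcessor.from_pretrained(model_id)
model = AutoModelForImageClassification.from_pretrained(model_id)

# "document.png" is a placeholder for a document image to classify.
image = Image.open("document.png").convert("RGB")
inputs = processor(images=image, return_tensors="pt")
logits = model(**inputs).logits
predicted = logits.argmax(-1).item()
print(model.config.id2label[predicted])
```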