---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- filter_sort
metrics:
- f1
- accuracy
model-index:
- name: favs-filtersort-multilabel-classification-bert-base-cased
  results:
  - task:
      name: Text Classification
      type: text-classification
    dataset:
      name: filter_sort
      type: filter_sort
      config: default
      split: train
      args: default
    metrics:
    - name: F1
      type: f1
      value: 0.7714285714285716
    - name: Accuracy
      type: accuracy
      value: 0.4
---

# favs-filtersort-multilabel-classification-bert-base-cased

This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the filter_sort dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2711
- F1: 0.7714
- Roc Auc: 0.8309
- Accuracy: 0.4

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 1.5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 25

### Training results

| Training Loss | Epoch | Step | Validation Loss | F1     | Roc Auc | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|:--------:|
| 0.6535        | 1.0   | 12   | 0.5860          | 0.4524 | 0.6444  | 0.0      |
| 0.5843        | 2.0   | 24   | 0.5121          | 0.5    | 0.6708  | 0.0      |
| 0.5308        | 3.0   | 36   | 0.4460          | 0.5484 | 0.6950  | 0.0      |
| 0.4663        | 4.0   | 48   | 0.4023          | 0.5574 | 0.6989  | 0.0      |
| 0.4116        | 5.0   | 60   | 0.3769          | 0.5806 | 0.7117  | 0.0      |
| 0.3936        | 6.0   | 72   | 0.3620          | 0.6032 | 0.7245  | 0.0      |
| 0.3691        | 7.0   | 84   | 0.3519          | 0.625  | 0.7373  | 0.0      |
| 0.3565        | 8.0   | 96   | 0.3410          | 0.6269 | 0.7425  | 0.0      |
| 0.3548        | 9.0   | 108  | 0.3324          | 0.6562 | 0.7540  | 0.0      |
| 0.3235        | 10.0  | 120  | 0.3229          | 0.6866 | 0.7758  | 0.1      |
| 0.3157        | 11.0  | 132  | 0.3115          | 0.7164 | 0.7924  | 0.2      |
| 0.297         | 12.0  | 144  | 0.3055          | 0.7164 | 0.7924  | 0.2      |
| 0.2923        | 13.0  | 156  | 0.2988          | 0.7246 | 0.8014  | 0.2      |
| 0.2848        | 14.0  | 168  | 0.2903          | 0.7164 | 0.7924  | 0.2      |
| 0.2715        | 15.0  | 180  | 0.2908          | 0.7429 | 0.8142  | 0.3      |
| 0.2696        | 16.0  | 192  | 0.2807          | 0.7353 | 0.8052  | 0.3      |
| 0.2543        | 17.0  | 204  | 0.2794          | 0.7536 | 0.8181  | 0.3      |
| 0.2504        | 18.0  | 216  | 0.2711          | 0.7714 | 0.8309  | 0.4      |
| 0.2577        | 19.0  | 228  | 0.2708          | 0.7536 | 0.8181  | 0.3      |
| 0.2401        | 20.0  | 240  | 0.2693          | 0.7536 | 0.8181  | 0.3      |
| 0.2415        | 21.0  | 252  | 0.2669          | 0.7714 | 0.8309  | 0.4      |
| 0.241         | 22.0  | 264  | 0.2691          | 0.7536 | 0.8181  | 0.3      |
| 0.2341        | 23.0  | 276  | 0.2669          | 0.7536 | 0.8181  | 0.3      |
| 0.2355        | 24.0  | 288  | 0.2660          | 0.7536 | 0.8181  | 0.3      |
| 0.232         | 25.0  | 300  | 0.2655          | 0.7536 | 0.8181  | 0.3      |

### Framework versions

- Transformers 4.21.1
- Pytorch 1.12.1
- Datasets 2.4.0
- Tokenizers 0.12.1
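The evaluation table reports both F1 and a much lower "Accuracy" column, which is typical for multi-label classification: accuracy there is usually *subset* accuracy, counting an example as correct only when every label matches. The sketch below shows one common way such metrics are computed (sigmoid over the logits, a 0.5 threshold, micro-averaged F1). The card does not state the exact threshold or averaging used, so those details are assumptions, not a description of this model's actual evaluation code.

```python
import numpy as np

def multilabel_metrics(logits, labels, threshold=0.5):
    """Sigmoid + threshold, then micro-averaged F1 and subset accuracy.

    A sketch of a typical multi-label evaluation; the threshold (0.5)
    and micro averaging are assumptions, not confirmed by the card.
    """
    probs = 1.0 / (1.0 + np.exp(-logits))       # independent sigmoid per label
    preds = (probs >= threshold).astype(int)    # per-label 0/1 decisions

    # Micro-averaged F1: pool true/false positives/negatives over all labels.
    tp = np.sum((preds == 1) & (labels == 1))
    fp = np.sum((preds == 1) & (labels == 0))
    fn = np.sum((preds == 0) & (labels == 1))
    f1 = 2 * tp / (2 * tp + fp + fn) if (2 * tp + fp + fn) else 0.0

    # Subset accuracy: an example counts only if the full label vector
    # matches, which is why it sits far below F1 in the table above.
    accuracy = np.mean(np.all(preds == labels, axis=1))
    return f1, accuracy

# Toy example with 2 examples and 3 labels (illustrative data only).
logits = np.array([[2.0, -1.0, 0.5], [-2.0, 1.5, -0.5]])
labels = np.array([[1, 0, 1], [0, 1, 1]])
f1, acc = multilabel_metrics(logits, labels)
# First example is fully correct; second misses one positive label,
# so F1 = 6/7 while subset accuracy is only 0.5.
```

A partially correct prediction still contributes to F1 but scores zero on subset accuracy, which explains why accuracy stays at 0.0 for the first nine epochs even as F1 climbs steadily.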