categor_ai

This model is a fine-tuned version of distilbert-base-uncased on an unspecified dataset. It achieves the following results on the evaluation set (a brief usage sketch follows the results):

  • Loss: 0.6307
  • Accuracy: 0.8901
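
This card ships without a usage example, so the snippet below is a minimal inference sketch using the transformers pipeline API. It assumes the checkpoint is published on the Hugging Face Hub as tinutmap/categor_ai (the repository this card belongs to); the label names it returns depend on the undocumented fine-tuning dataset.

```python
from transformers import pipeline

# Load the fine-tuned DistilBERT sequence classifier from the Hub.
# Adjust the repository id if the weights are hosted elsewhere.
classifier = pipeline("text-classification", model="tinutmap/categor_ai")

# The returned labels come from the fine-tuning dataset,
# which is not documented in this card.
print(classifier("Example text to categorize"))
```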

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (an illustrative Trainer sketch follows the list):

  • learning_rate: 2e-05
  • train_batch_size: 16
  • eval_batch_size: 16
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 100
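
For reference, the sketch below shows how the hyperparameters listed above map onto TrainingArguments and Trainer from transformers. The dataset objects and the number of labels are placeholders, since the training data is not documented in this card; the Adam betas/epsilon and linear schedule listed above correspond to the Trainer defaults.

```python
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")

num_labels = ...       # depends on the undocumented label set
train_dataset = ...    # placeholder: training data is not documented
eval_dataset = ...     # placeholder: evaluation data is not documented

model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased", num_labels=num_labels
)

# Mirrors the hyperparameters listed above; per-epoch evaluation produces
# the validation-loss/accuracy table in the next section.
args = TrainingArguments(
    output_dir="categor_ai",
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=100,
    evaluation_strategy="epoch",
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    eval_dataset=eval_dataset,
    tokenizer=tokenizer,
)
trainer.train()
```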

Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 14 | 2.4323 | 0.4615 |
| No log | 2.0 | 28 | 2.0883 | 0.5495 |
| No log | 3.0 | 42 | 1.7465 | 0.7033 |
| No log | 4.0 | 56 | 1.4428 | 0.7363 |
| No log | 5.0 | 70 | 1.1838 | 0.8242 |
| No log | 6.0 | 84 | 0.9881 | 0.8132 |
| No log | 7.0 | 98 | 0.8446 | 0.8571 |
| No log | 8.0 | 112 | 0.7304 | 0.8791 |
| No log | 9.0 | 126 | 0.6456 | 0.8681 |
| No log | 10.0 | 140 | 0.6267 | 0.8352 |
| No log | 11.0 | 154 | 0.5656 | 0.8791 |
| No log | 12.0 | 168 | 0.5412 | 0.8901 |
| No log | 13.0 | 182 | 0.5301 | 0.8901 |
| No log | 14.0 | 196 | 0.5190 | 0.8791 |
| No log | 15.0 | 210 | 0.5175 | 0.8901 |
| No log | 16.0 | 224 | 0.5295 | 0.8681 |
| No log | 17.0 | 238 | 0.5147 | 0.8901 |
| No log | 18.0 | 252 | 0.5094 | 0.8901 |
| No log | 19.0 | 266 | 0.5130 | 0.8791 |
| No log | 20.0 | 280 | 0.5212 | 0.8901 |
| No log | 21.0 | 294 | 0.5421 | 0.8791 |
| No log | 22.0 | 308 | 0.5439 | 0.8791 |
| No log | 23.0 | 322 | 0.5516 | 0.8791 |
| No log | 24.0 | 336 | 0.5544 | 0.8791 |
| No log | 25.0 | 350 | 0.5441 | 0.8901 |
| No log | 26.0 | 364 | 0.5497 | 0.8901 |
| No log | 27.0 | 378 | 0.5502 | 0.8791 |
| No log | 28.0 | 392 | 0.5345 | 0.8901 |
| No log | 29.0 | 406 | 0.5444 | 0.8901 |
| No log | 30.0 | 420 | 0.5489 | 0.8901 |
| No log | 31.0 | 434 | 0.5838 | 0.8681 |
| No log | 32.0 | 448 | 0.5444 | 0.9011 |
| No log | 33.0 | 462 | 0.6005 | 0.8681 |
| No log | 34.0 | 476 | 0.5633 | 0.8901 |
| No log | 35.0 | 490 | 0.5701 | 0.8791 |
| 0.4178 | 36.0 | 504 | 0.5805 | 0.8901 |
| 0.4178 | 37.0 | 518 | 0.5919 | 0.8791 |
| 0.4178 | 38.0 | 532 | 0.5729 | 0.8901 |
| 0.4178 | 39.0 | 546 | 0.5805 | 0.8901 |
| 0.4178 | 40.0 | 560 | 0.5940 | 0.8901 |
| 0.4178 | 41.0 | 574 | 0.5816 | 0.8901 |
| 0.4178 | 42.0 | 588 | 0.5754 | 0.8901 |
| 0.4178 | 43.0 | 602 | 0.5838 | 0.8901 |
| 0.4178 | 44.0 | 616 | 0.5901 | 0.8901 |
| 0.4178 | 45.0 | 630 | 0.5942 | 0.8901 |
| 0.4178 | 46.0 | 644 | 0.5922 | 0.8901 |
| 0.4178 | 47.0 | 658 | 0.5908 | 0.8901 |
| 0.4178 | 48.0 | 672 | 0.5921 | 0.8901 |
| 0.4178 | 49.0 | 686 | 0.5916 | 0.8901 |
| 0.4178 | 50.0 | 700 | 0.6024 | 0.8901 |
| 0.4178 | 51.0 | 714 | 0.6012 | 0.8901 |
| 0.4178 | 52.0 | 728 | 0.5998 | 0.8901 |
| 0.4178 | 53.0 | 742 | 0.6031 | 0.8901 |
| 0.4178 | 54.0 | 756 | 0.5967 | 0.8901 |
| 0.4178 | 55.0 | 770 | 0.5950 | 0.8901 |
| 0.4178 | 56.0 | 784 | 0.6018 | 0.8901 |
| 0.4178 | 57.0 | 798 | 0.5989 | 0.8901 |
| 0.4178 | 58.0 | 812 | 0.5945 | 0.8901 |
| 0.4178 | 59.0 | 826 | 0.5948 | 0.8901 |
| 0.4178 | 60.0 | 840 | 0.5930 | 0.8901 |
| 0.4178 | 61.0 | 854 | 0.5961 | 0.8901 |
| 0.4178 | 62.0 | 868 | 0.6010 | 0.8901 |
| 0.4178 | 63.0 | 882 | 0.5973 | 0.8901 |
| 0.4178 | 64.0 | 896 | 0.5997 | 0.8901 |
| 0.4178 | 65.0 | 910 | 0.6016 | 0.8901 |
| 0.4178 | 66.0 | 924 | 0.6069 | 0.8901 |
| 0.4178 | 67.0 | 938 | 0.6095 | 0.8901 |
| 0.4178 | 68.0 | 952 | 0.6117 | 0.8901 |
| 0.4178 | 69.0 | 966 | 0.6160 | 0.8901 |
| 0.4178 | 70.0 | 980 | 0.6166 | 0.8901 |
| 0.4178 | 71.0 | 994 | 0.6177 | 0.8901 |
| 0.0153 | 72.0 | 1008 | 0.6172 | 0.8901 |
| 0.0153 | 73.0 | 1022 | 0.6190 | 0.8901 |
| 0.0153 | 74.0 | 1036 | 0.6213 | 0.8901 |
| 0.0153 | 75.0 | 1050 | 0.6220 | 0.8901 |
| 0.0153 | 76.0 | 1064 | 0.6204 | 0.8901 |
| 0.0153 | 77.0 | 1078 | 0.6193 | 0.8901 |
| 0.0153 | 78.0 | 1092 | 0.6198 | 0.8901 |
| 0.0153 | 79.0 | 1106 | 0.6235 | 0.8901 |
| 0.0153 | 80.0 | 1120 | 0.6260 | 0.8901 |
| 0.0153 | 81.0 | 1134 | 0.6271 | 0.8901 |
| 0.0153 | 82.0 | 1148 | 0.6290 | 0.8901 |
| 0.0153 | 83.0 | 1162 | 0.6288 | 0.8901 |
| 0.0153 | 84.0 | 1176 | 0.6298 | 0.8901 |
| 0.0153 | 85.0 | 1190 | 0.6301 | 0.8901 |
| 0.0153 | 86.0 | 1204 | 0.6324 | 0.8901 |
| 0.0153 | 87.0 | 1218 | 0.6332 | 0.8901 |
| 0.0153 | 88.0 | 1232 | 0.6333 | 0.8901 |
| 0.0153 | 89.0 | 1246 | 0.6347 | 0.8901 |
| 0.0153 | 90.0 | 1260 | 0.6337 | 0.8901 |
| 0.0153 | 91.0 | 1274 | 0.6332 | 0.8901 |
| 0.0153 | 92.0 | 1288 | 0.6329 | 0.8901 |
| 0.0153 | 93.0 | 1302 | 0.6316 | 0.8901 |
| 0.0153 | 94.0 | 1316 | 0.6315 | 0.8901 |
| 0.0153 | 95.0 | 1330 | 0.6312 | 0.8901 |
| 0.0153 | 96.0 | 1344 | 0.6310 | 0.8901 |
| 0.0153 | 97.0 | 1358 | 0.6305 | 0.8901 |
| 0.0153 | 98.0 | 1372 | 0.6306 | 0.8901 |
| 0.0153 | 99.0 | 1386 | 0.6307 | 0.8901 |
| 0.0153 | 100.0 | 1400 | 0.6307 | 0.8901 |
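
The accuracy column above is presumably produced by a metrics callback passed to the Trainer; the exact code is not documented in this card. A plausible sketch using the evaluate library:

```python
import numpy as np
import evaluate

# Hugging Face "evaluate" accuracy metric, matching the Accuracy column above.
accuracy = evaluate.load("accuracy")

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    predictions = np.argmax(logits, axis=-1)
    return accuracy.compute(predictions=predictions, references=labels)

# Passing compute_metrics=compute_metrics to the Trainer sketched earlier
# reports accuracy at every evaluation step.
```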

Framework versions

  • Transformers 4.37.2
  • Pytorch 2.3.1
  • Datasets 2.19.1
  • Tokenizers 0.15.1