distilbert-base-uncased-textclassification_adalora

This model is an AdaLoRA (PEFT) adapter fine-tuned from distilbert-base-uncased on an unknown dataset. It achieves the following results on the evaluation set:

  • Loss: 0.6697
  • Precision: 0.6503
  • Recall: 0.5072
  • F1: 0.5699
  • Accuracy: 0.9524

Model description

More information needed

Intended uses & limitations

More information needed
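
Since no usage details are published, the snippet below is only a minimal inference sketch. It assumes the adapter targets sequence classification (the repo name suggests text classification, although the pipeline type is undeclared and the metric set would also fit token classification) and that the classification head was saved alongside the adapter; verify both against the repo's adapter config before relying on it.

```python
# Minimal inference sketch; sequence classification is an assumption,
# as the card does not declare a pipeline type or label set.
import torch
from peft import AutoPeftModelForSequenceClassification
from transformers import AutoTokenizer

repo_id = "Yeji-Seong/distilbert-base-uncased-textclassification_adalora"
tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
model = AutoPeftModelForSequenceClassification.from_pretrained(repo_id)
model.eval()

inputs = tokenizer("Example input text.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(logits.softmax(dim=-1))  # class probabilities; label names unknown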

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 2e-05
  • train_batch_size: 16
  • eval_batch_size: 16
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 50
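
The card does not publish the AdaLoRA configuration or the dataset, so the following is only a sketch of how the hyperparameters above map onto a Trainer setup. The label count, AdaLoRA ranks, and datasets are placeholders; note that the Trainer's default AdamW optimizer already uses betas=(0.9, 0.999) and epsilon=1e-08 as listed.

```python
# Reproduction sketch under stated assumptions: binary labels and
# default AdaLoRA ranks are placeholders, not values from the card.
from peft import AdaLoraConfig, TaskType, get_peft_model
from transformers import (
    AutoModelForSequenceClassification,
    Trainer,
    TrainingArguments,
)

base = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased", num_labels=2  # label count is an assumption
)
peft_config = AdaLoraConfig(task_type=TaskType.SEQ_CLS)  # default ranks; actual settings unknown
model = get_peft_model(base, peft_config)

args = TrainingArguments(
    output_dir="distilbert-base-uncased-textclassification_adalora",
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=50,
    evaluation_strategy="epoch",
)
# trainer = Trainer(model=model, args=args,
#                   train_dataset=..., eval_dataset=...)  # datasets unknown
# trainer.train()
```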

Training results

| Training Loss | Epoch | Step  | Validation Loss | Precision | Recall | F1     | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log        | 1.0   | 213   | 1.9179          | 0.4803    | 0.2620 | 0.3390 | 0.9371   |
| No log        | 2.0   | 426   | 1.6876          | 0.5057    | 0.3194 | 0.3915 | 0.9404   |
| 1.7592        | 3.0   | 639   | 1.4144          | 0.5161    | 0.3445 | 0.4132 | 0.9420   |
| 1.7592        | 4.0   | 852   | 1.1653          | 0.5455    | 0.3517 | 0.4276 | 0.9428   |
| 1.1731        | 5.0   | 1065  | 1.0656          | 0.575     | 0.3852 | 0.4613 | 0.9443   |
| 1.1731        | 6.0   | 1278  | 0.9946          | 0.5878    | 0.4163 | 0.4874 | 0.9461   |
| 1.1731        | 7.0   | 1491  | 0.9620          | 0.6467    | 0.4139 | 0.5047 | 0.9464   |
| 0.8895        | 8.0   | 1704  | 0.9450          | 0.6587    | 0.4294 | 0.5199 | 0.9473   |
| 0.8895        | 9.0   | 1917  | 0.9187          | 0.6382    | 0.4557 | 0.5318 | 0.9488   |
| 0.8124        | 10.0  | 2130  | 0.9042          | 0.6528    | 0.4522 | 0.5343 | 0.9493   |
| 0.8124        | 11.0  | 2343  | 0.8847          | 0.6443    | 0.4701 | 0.5436 | 0.9500   |
| 0.7741        | 12.0  | 2556  | 0.8773          | 0.6594    | 0.4677 | 0.5472 | 0.9502   |
| 0.7741        | 13.0  | 2769  | 0.8642          | 0.6672    | 0.4653 | 0.5483 | 0.9502   |
| 0.7741        | 14.0  | 2982  | 0.8439          | 0.6694    | 0.4821 | 0.5605 | 0.9514   |
| 0.7346        | 15.0  | 3195  | 0.8381          | 0.6735    | 0.4737 | 0.5562 | 0.9512   |
| 0.7346        | 16.0  | 3408  | 0.8199          | 0.6773    | 0.4844 | 0.5649 | 0.9517   |
| 0.6966        | 17.0  | 3621  | 0.8007          | 0.6744    | 0.4856 | 0.5647 | 0.9521   |
| 0.6966        | 18.0  | 3834  | 0.7845          | 0.6618    | 0.4916 | 0.5642 | 0.9520   |
| 0.6575        | 19.0  | 4047  | 0.7677          | 0.6491    | 0.5    | 0.5649 | 0.9522   |
| 0.6575        | 20.0  | 4260  | 0.7573          | 0.6624    | 0.4904 | 0.5636 | 0.9524   |
| 0.6575        | 21.0  | 4473  | 0.7419          | 0.6561    | 0.4928 | 0.5628 | 0.9522   |
| 0.6218        | 22.0  | 4686  | 0.7282          | 0.6435    | 0.4988 | 0.5620 | 0.9522   |
| 0.6218        | 23.0  | 4899  | 0.7142          | 0.6346    | 0.5048 | 0.5623 | 0.9520   |
| 0.5894        | 24.0  | 5112  | 0.7173          | 0.6474    | 0.4964 | 0.5619 | 0.9521   |
| 0.5894        | 25.0  | 5325  | 0.7132          | 0.6562    | 0.4976 | 0.5660 | 0.9526   |
| 0.5728        | 26.0  | 5538  | 0.7051          | 0.6453    | 0.5048 | 0.5664 | 0.9523   |
| 0.5728        | 27.0  | 5751  | 0.7032          | 0.6462    | 0.5024 | 0.5653 | 0.9524   |
| 0.5728        | 28.0  | 5964  | 0.6984          | 0.6405    | 0.5072 | 0.5661 | 0.9524   |
| 0.5629        | 29.0  | 6177  | 0.6973          | 0.6502    | 0.5024 | 0.5668 | 0.9523   |
| 0.5629        | 30.0  | 6390  | 0.6928          | 0.6459    | 0.5084 | 0.5689 | 0.9527   |
| 0.5543        | 31.0  | 6603  | 0.6935          | 0.6483    | 0.5072 | 0.5691 | 0.9528   |
| 0.5543        | 32.0  | 6816  | 0.6893          | 0.6448    | 0.5060 | 0.5670 | 0.9526   |
| 0.5465        | 33.0  | 7029  | 0.6893          | 0.6593    | 0.5024 | 0.5703 | 0.9524   |
| 0.5465        | 34.0  | 7242  | 0.6863          | 0.6594    | 0.5048 | 0.5718 | 0.9526   |
| 0.5465        | 35.0  | 7455  | 0.6829          | 0.6543    | 0.5072 | 0.5714 | 0.9526   |
| 0.5414        | 36.0  | 7668  | 0.6780          | 0.6464    | 0.5096 | 0.5699 | 0.9528   |
| 0.5414        | 37.0  | 7881  | 0.6776          | 0.6508    | 0.5084 | 0.5709 | 0.9526   |
| 0.5341        | 38.0  | 8094  | 0.6764          | 0.6549    | 0.5084 | 0.5724 | 0.9525   |
| 0.5341        | 39.0  | 8307  | 0.6749          | 0.6549    | 0.5084 | 0.5724 | 0.9526   |
| 0.5301        | 40.0  | 8520  | 0.6773          | 0.6640    | 0.5012 | 0.5712 | 0.9525   |
| 0.5301        | 41.0  | 8733  | 0.6730          | 0.6518    | 0.5084 | 0.5712 | 0.9525   |
| 0.5301        | 42.0  | 8946  | 0.6717          | 0.6509    | 0.5108 | 0.5724 | 0.9526   |
| 0.5268        | 43.0  | 9159  | 0.6721          | 0.6544    | 0.5096 | 0.5730 | 0.9525   |
| 0.5268        | 44.0  | 9372  | 0.6694          | 0.6480    | 0.5108 | 0.5712 | 0.9526   |
| 0.5236        | 45.0  | 9585  | 0.6709          | 0.6528    | 0.5084 | 0.5716 | 0.9525   |
| 0.5236        | 46.0  | 9798  | 0.6694          | 0.6494    | 0.5096 | 0.5710 | 0.9525   |
| 0.5231        | 47.0  | 10011 | 0.6693          | 0.6514    | 0.5096 | 0.5718 | 0.9525   |
| 0.5231        | 48.0  | 10224 | 0.6696          | 0.6503    | 0.5072 | 0.5699 | 0.9524   |
| 0.5231        | 49.0  | 10437 | 0.6699          | 0.6513    | 0.5072 | 0.5703 | 0.9524   |
| 0.5224        | 50.0  | 10650 | 0.6697          | 0.6503    | 0.5072 | 0.5699 | 0.9524   |
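
The card does not state how precision, recall, F1, and accuracy were computed. Purely as an illustration, the snippet below reproduces that metric set with scikit-learn under the assumption of binary labels and a positive-class average; the actual evaluation code and averaging scheme are unknown.

```python
# Illustration only: the four reported metric names computed with
# scikit-learn. Labels and predictions are placeholders; the averaging
# scheme behind the card's numbers is not documented.
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

y_true = [0, 1, 1, 0, 1]  # placeholder gold labels
y_pred = [0, 1, 0, 0, 1]  # placeholder model predictions

precision, recall, f1, _ = precision_recall_fscore_support(
    y_true, y_pred, average="binary"
)
accuracy = accuracy_score(y_true, y_pred)
print(f"precision={precision:.4f} recall={recall:.4f} "
      f"f1={f1:.4f} accuracy={accuracy:.4f}")
```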

Framework versions

  • PEFT 0.7.1
  • Transformers 4.36.2
  • PyTorch 2.0.0+cu117
  • Datasets 2.16.1
  • Tokenizers 0.15.0