
STS-Lora-Fine-Tuning-Capstone-bert-testing-70-with-lower-r-mid

This model is a fine-tuned version of google-bert/bert-base-cased, trained as a LoRA adapter via PEFT on an unspecified dataset. It achieves the following results on the evaluation set:

  • Loss: 1.2732
  • Accuracy: 0.4706
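
The adapter in this repository can be attached to the base model with PEFT. Below is a minimal loading sketch, assuming a sequence-classification head; the label count (num_labels=6) is a placeholder guess, since the card does not document the task setup.

```python
# Minimal sketch: load the base model, then attach this repo's LoRA adapter.
# num_labels=6 is an assumption, not documented in this card.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer
from peft import PeftModel

base = AutoModelForSequenceClassification.from_pretrained(
    "google-bert/bert-base-cased", num_labels=6
)
tokenizer = AutoTokenizer.from_pretrained("google-bert/bert-base-cased")

# Load the LoRA adapter weights from this repository on top of the base model.
model = PeftModel.from_pretrained(
    base, "rajevan123/STS-Lora-Fine-Tuning-Capstone-bert-testing-70-with-lower-r-mid"
)
model.eval()

# Score a sentence pair (illustrative input only).
inputs = tokenizer(
    "A man is playing a guitar.", "A person plays an instrument.", return_tensors="pt"
)
with torch.no_grad():
    predicted_class = model(**inputs).logits.argmax(dim=-1)
print(predicted_class)
```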

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a configuration sketch follows the list):

  • learning_rate: 3e-05
  • train_batch_size: 64
  • eval_batch_size: 64
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 40
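
A rough reconstruction of this setup in code is sketched below. The LoRA settings (task_type, r, lora_alpha, lora_dropout) and the label count are placeholders, since the card only indicates that a lower rank was used and does not describe the data.

```python
# Sketch of the training configuration; LoRA values and num_labels are assumptions.
from transformers import AutoModelForSequenceClassification, TrainingArguments
from peft import LoraConfig, get_peft_model

base = AutoModelForSequenceClassification.from_pretrained(
    "google-bert/bert-base-cased", num_labels=6  # placeholder label count
)
peft_model = get_peft_model(
    base,
    LoraConfig(task_type="SEQ_CLS", r=4, lora_alpha=16, lora_dropout=0.1),  # placeholders
)

# The Trainer's default optimizer already uses betas=(0.9, 0.999) and eps=1e-8,
# matching the Adam settings listed above.
training_args = TrainingArguments(
    output_dir="sts-lora-bert",
    learning_rate=3e-5,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=40,
    evaluation_strategy="epoch",
    logging_strategy="epoch",
)
# peft_model and training_args would then be passed to a transformers Trainer
# together with tokenized train/eval datasets (not specified in this card).
```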

Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log        | 1.0   | 197  | 1.7537          | 0.2444   |
| No log        | 2.0   | 394  | 1.7017          | 0.2886   |
| 1.6735        | 3.0   | 591  | 1.6479          | 0.2988   |
| 1.6735        | 4.0   | 788  | 1.5870          | 0.3169   |
| 1.6735        | 5.0   | 985  | 1.5191          | 0.3328   |
| 1.5268        | 6.0   | 1182 | 1.4680          | 0.3611   |
| 1.5268        | 7.0   | 1379 | 1.4300          | 0.3887   |
| 1.3747        | 8.0   | 1576 | 1.4043          | 0.4039   |
| 1.3747        | 9.0   | 1773 | 1.3854          | 0.4039   |
| 1.3747        | 10.0  | 1970 | 1.3713          | 0.4104   |
| 1.2814        | 11.0  | 2167 | 1.3599          | 0.4191   |
| 1.2814        | 12.0  | 2364 | 1.3560          | 0.4199   |
| 1.2408        | 13.0  | 2561 | 1.3407          | 0.4228   |
| 1.2408        | 14.0  | 2758 | 1.3234          | 0.4380   |
| 1.2408        | 15.0  | 2955 | 1.3233          | 0.4329   |
| 1.2136        | 16.0  | 3152 | 1.3146          | 0.4373   |
| 1.2136        | 17.0  | 3349 | 1.3181          | 0.4409   |
| 1.1914        | 18.0  | 3546 | 1.3267          | 0.4387   |
| 1.1914        | 19.0  | 3743 | 1.3103          | 0.4467   |
| 1.1914        | 20.0  | 3940 | 1.3056          | 0.4525   |
| 1.1759        | 21.0  | 4137 | 1.2887          | 0.4605   |
| 1.1759        | 22.0  | 4334 | 1.2917          | 0.4648   |
| 1.1661        | 23.0  | 4531 | 1.2955          | 0.4576   |
| 1.1661        | 24.0  | 4728 | 1.2841          | 0.4634   |
| 1.1661        | 25.0  | 4925 | 1.2850          | 0.4634   |
| 1.1566        | 26.0  | 5122 | 1.2998          | 0.4554   |
| 1.1566        | 27.0  | 5319 | 1.2854          | 0.4656   |
| 1.1482        | 28.0  | 5516 | 1.2792          | 0.4750   |
| 1.1482        | 29.0  | 5713 | 1.2809          | 0.4677   |
| 1.1482        | 30.0  | 5910 | 1.2777          | 0.4735   |
| 1.1407        | 31.0  | 6107 | 1.2799          | 0.4677   |
| 1.1407        | 32.0  | 6304 | 1.2816          | 0.4699   |
| 1.1417        | 33.0  | 6501 | 1.2802          | 0.4692   |
| 1.1417        | 34.0  | 6698 | 1.2739          | 0.4685   |
| 1.1417        | 35.0  | 6895 | 1.2739          | 0.4699   |
| 1.1391        | 36.0  | 7092 | 1.2745          | 0.4692   |
| 1.1391        | 37.0  | 7289 | 1.2733          | 0.4714   |
| 1.1391        | 38.0  | 7486 | 1.2729          | 0.4714   |
| 1.134         | 39.0  | 7683 | 1.2719          | 0.4706   |
| 1.134         | 40.0  | 7880 | 1.2732          | 0.4706   |
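
The accuracy column suggests evaluation with a classification metric. One way such per-epoch values could be produced is a compute_metrics callback like the sketch below (an assumption; the card does not document it, and the evaluate library is not listed under framework versions).

```python
# Hypothetical compute_metrics callback producing the accuracy column above.
import numpy as np
import evaluate

accuracy = evaluate.load("accuracy")

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    predictions = np.argmax(logits, axis=-1)
    return accuracy.compute(predictions=predictions, references=labels)
```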

Framework versions

  • PEFT 0.10.0
  • Transformers 4.38.2
  • Pytorch 2.2.1+cu121
  • Datasets 2.18.0
  • Tokenizers 0.15.2
