STS-Lora-Fine-Tuning-Capstone-bert-testing-23-with-lower-r-mid

This model is a LoRA fine-tuned version of dslim/bert-base-NER on an unspecified dataset. It achieves the following results on the evaluation set:

  • Loss: 1.3610
  • Accuracy: 0.4300
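
The adapter is intended to be loaded on top of the base checkpoint with PEFT. The sketch below is a minimal loading example and is not taken from the card: the sequence-classification head, `num_labels=6`, and the example sentence pair are assumptions inferred from the accuracy metric above, and should be adjusted to the label set actually used in training.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification
from peft import PeftModel

base_id = "dslim/bert-base-NER"
adapter_id = "rajevan123/STS-Lora-Fine-Tuning-Capstone-bert-testing-23-with-lower-r-mid"

tokenizer = AutoTokenizer.from_pretrained(base_id)

# Assumption: the task is sentence-pair classification (STS scores bucketed
# into 6 classes); the classification head is newly initialized on top of the
# NER base checkpoint, so num_labels is a guess.
base_model = AutoModelForSequenceClassification.from_pretrained(base_id, num_labels=6)
model = PeftModel.from_pretrained(base_model, adapter_id)
model.eval()

# Hypothetical usage: score one sentence pair.
inputs = tokenizer("A man is playing a guitar.", "A person plays an instrument.",
                   return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(logits.argmax(dim=-1))
```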

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (see the sketch after this list):

  • learning_rate: 3e-05
  • train_batch_size: 32
  • eval_batch_size: 32
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 40
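
For reference, these values map onto transformers.TrainingArguments (argument names as of Transformers 4.38) roughly as follows. This is a sketch, not the original training script: the output directory is a placeholder, and the per-epoch evaluation schedule is inferred from the results table below.

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="sts-lora-bert-testing-23",  # placeholder, not from the card
    learning_rate=3e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=40,
    evaluation_strategy="epoch",  # matches the per-epoch rows in the results table
)
```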

Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log        | 1.0   | 180  | 1.7491          | 0.2429   |
| No log        | 2.0   | 360  | 1.7395          | 0.2451   |
| 1.7055        | 3.0   | 540  | 1.7242          | 0.2451   |
| 1.7055        | 4.0   | 720  | 1.6937          | 0.2980   |
| 1.7055        | 5.0   | 900  | 1.6446          | 0.3038   |
| 1.6419        | 6.0   | 1080 | 1.6173          | 0.3176   |
| 1.6419        | 7.0   | 1260 | 1.5638          | 0.3401   |
| 1.6419        | 8.0   | 1440 | 1.5355          | 0.3524   |
| 1.5258        | 9.0   | 1620 | 1.5112          | 0.3590   |
| 1.5258        | 10.0  | 1800 | 1.4870          | 0.3742   |
| 1.5258        | 11.0  | 1980 | 1.4729          | 0.3749   |
| 1.4424        | 12.0  | 2160 | 1.4664          | 0.3938   |
| 1.4424        | 13.0  | 2340 | 1.4524          | 0.4003   |
| 1.4002        | 14.0  | 2520 | 1.4390          | 0.4061   |
| 1.4002        | 15.0  | 2700 | 1.4317          | 0.4090   |
| 1.4002        | 16.0  | 2880 | 1.4241          | 0.4155   |
| 1.376         | 17.0  | 3060 | 1.4201          | 0.4148   |
| 1.376         | 18.0  | 3240 | 1.4069          | 0.4083   |
| 1.376         | 19.0  | 3420 | 1.4000          | 0.4184   |
| 1.3533        | 20.0  | 3600 | 1.3978          | 0.4235   |
| 1.3533        | 21.0  | 3780 | 1.3929          | 0.4329   |
| 1.3533        | 22.0  | 3960 | 1.3896          | 0.4329   |
| 1.3336        | 23.0  | 4140 | 1.3856          | 0.4264   |
| 1.3336        | 24.0  | 4320 | 1.3833          | 0.4322   |
| 1.3254        | 25.0  | 4500 | 1.3787          | 0.4235   |
| 1.3254        | 26.0  | 4680 | 1.3744          | 0.4329   |
| 1.3254        | 27.0  | 4860 | 1.3751          | 0.4300   |
| 1.3082        | 28.0  | 5040 | 1.3720          | 0.4336   |
| 1.3082        | 29.0  | 5220 | 1.3687          | 0.4300   |
| 1.3082        | 30.0  | 5400 | 1.3674          | 0.4293   |
| 1.3105        | 31.0  | 5580 | 1.3663          | 0.4373   |
| 1.3105        | 32.0  | 5760 | 1.3643          | 0.4351   |
| 1.3105        | 33.0  | 5940 | 1.3630          | 0.4271   |
| 1.295         | 34.0  | 6120 | 1.3628          | 0.4322   |
| 1.295         | 35.0  | 6300 | 1.3625          | 0.4300   |
| 1.295         | 36.0  | 6480 | 1.3623          | 0.4307   |
| 1.2919        | 37.0  | 6660 | 1.3617          | 0.4322   |
| 1.2919        | 38.0  | 6840 | 1.3613          | 0.4315   |
| 1.2905        | 39.0  | 7020 | 1.3610          | 0.4300   |
| 1.2905        | 40.0  | 7200 | 1.3610          | 0.4300   |

Framework versions

  • PEFT 0.10.0
  • Transformers 4.38.2
  • Pytorch 2.2.1+cu121
  • Datasets 2.18.0
  • Tokenizers 0.15.2
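
A quick way to compare a local environment against these pinned versions (a convenience sketch, not part of the original card; the torch entry includes the CUDA build tag):

```python
import importlib.metadata as md

# Versions listed in the card.
expected = {
    "peft": "0.10.0",
    "transformers": "4.38.2",
    "torch": "2.2.1+cu121",
    "datasets": "2.18.0",
    "tokenizers": "0.15.2",
}

for pkg, version in expected.items():
    try:
        installed = md.version(pkg)
    except md.PackageNotFoundError:
        installed = "not installed"
    print(f"{pkg}: installed {installed}, card lists {version}")
```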