bert-19

This model is a fine-tuned version of deepset/bert-base-cased-squad2 on an unknown dataset. It achieves the following results on the evaluation set:

  • Loss: 11.2343

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 2e-05
  • train_batch_size: 16
  • eval_batch_size: 16
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 4
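
The linear scheduler above decays the learning rate from its initial value to zero over the course of training. As a minimal sketch, assuming no warmup steps (the card lists none) and 220 total optimizer steps (the final step in the training-results table below), the per-step learning rate can be reproduced in plain Python:

```python
# Sketch of the linear LR decay implied by the hyperparameters above.
# Assumptions: zero warmup steps and 220 total optimizer steps
# (4 epochs x 55 steps per epoch, per the training-results table).
BASE_LR = 2e-5
TOTAL_STEPS = 220

def linear_lr(step: int, base_lr: float = BASE_LR,
              total_steps: int = TOTAL_STEPS) -> float:
    """Return the learning rate at a given optimizer step,
    decaying linearly from base_lr at step 0 to 0 at total_steps."""
    return base_lr * max(0.0, 1.0 - step / total_steps)
```

For example, halfway through training (step 110) the learning rate has fallen to half its initial value, 1e-05; in the Transformers library this behavior corresponds to `get_linear_schedule_with_warmup` with `num_warmup_steps=0`.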

Training results

Training Loss   Epoch   Step   Validation Loss
11.3192         0.09    5      12.3265
11.4416         0.18    10     12.2762
11.0285         0.27    15     12.2265
11.1568         0.36    20     12.1786
11.1352         0.45    25     12.1313
11.1596         0.55    30     12.0857
10.4352         0.64    35     12.0412
11.0699         0.73    40     11.9982
10.6195         0.82    45     11.9567
10.5109         0.91    50     11.9161
10.2699         1.0     55     11.8766
10.4784         1.09    60     11.8384
10.5932         1.18    65     11.8018
10.8098         1.27    70     11.7661
10.3369         1.36    75     11.7312
10.7722         1.45    80     11.6981
10.4952         1.55    85     11.6657
10.4398         1.64    90     11.6341
10.2621         1.73    95     11.6045
10.4932         1.82    100    11.5753
10.0321         1.91    105    11.5481
10.3808         2.0     110    11.5216
10.3108         2.09    115    11.4966
10.1234         2.18    120    11.4725
10.2887         2.27    125    11.4492
10.4092         2.36    130    11.4274
9.9991          2.45    135    11.4068
10.3832         2.55    140    11.3872
9.937           2.64    145    11.3692
10.4397         2.73    150    11.3521
10.1919         2.82    155    11.3364
10.1394         2.91    160    11.3214
10.3371         3.0     165    11.3080
10.2649         3.09    170    11.2953
10.2511         3.18    175    11.2844
9.9485          3.27    180    11.2741
9.8203          3.36    185    11.2649
10.559          3.45    190    11.2574
10.1233         3.55    195    11.2504
9.9711          3.64    200    11.2451
9.8388          3.73    205    11.2407
9.7467          3.82    210    11.2373
9.7465          3.91    215    11.2350
10.3259         4.0     220    11.2343

Framework versions

  • Transformers 4.34.1
  • Pytorch 2.0.1+cu118
  • Datasets 2.14.5
  • Tokenizers 0.14.1
Model tree: hung200504/bert-19, fine-tuned from deepset/bert-base-cased-squad2