
roberta-base-on-cuad-finetuned-squad

This model is a fine-tuned version of Rakib/roberta-base-on-cuad on an unspecified dataset. It achieves the following results on the evaluation set:

  • Loss: 0.0793

Model description

More information needed

Intended uses & limitations

More information needed
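A minimal extractive question-answering sketch for this checkpoint. The repo id and the `pipeline` call below are assumptions based on standard Transformers usage, not taken from this card:

```python
def answer(question, context, model_id="ygory/roberta-base-on-cuad-finetuned-squad"):
    # transformers is imported lazily so the sketch only requires it when called.
    from transformers import pipeline
    qa = pipeline("question-answering", model=model_id)
    result = qa(question=question, context=context)
    return result["answer"]
```

For example, `answer("What is the governing law?", contract_text)` would return the extracted answer span from `contract_text`.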

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 2e-05
  • train_batch_size: 16
  • eval_batch_size: 16
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 3
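With lr_scheduler_type: linear and the Trainer's default of zero warmup steps, the learning rate decays linearly from 2e-05 to zero over training. A minimal sketch of that schedule, assuming roughly 1,314 total optimizer steps (consistent with the training log reaching step 1300 at epoch ~2.97):

```python
def linear_lr(step, total_steps=1314, base_lr=2e-05, warmup_steps=0):
    """Linear warmup (none by default), then linear decay to zero."""
    if step < warmup_steps:
        return base_lr * step / max(1, warmup_steps)
    remaining = max(0.0, total_steps - step)
    return base_lr * remaining / max(1, total_steps - warmup_steps)

# Learning rate at the start, middle, and end of training
print(linear_lr(0), linear_lr(657), linear_lr(1314))  # prints: 2e-05 1e-05 0.0
```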

Training results

| Training Loss | Epoch  | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.1503        | 0.1142 | 50   | 0.1216          |
| 0.1305        | 0.2283 | 100  | 0.1138          |
| 0.1693        | 0.3425 | 150  | 0.1135          |
| 0.1986        | 0.4566 | 200  | 0.1063          |
| 0.1089        | 0.5708 | 250  | 0.0963          |
| 0.0799        | 0.6849 | 300  | 0.1018          |
| 0.1527        | 0.7991 | 350  | 0.0986          |
| 0.1387        | 0.9132 | 400  | 0.1064          |
| 0.0938        | 1.0274 | 450  | 0.0951          |
| 0.1533        | 1.1416 | 500  | 0.0805          |
| 0.1329        | 1.2557 | 550  | 0.0800          |
| 0.1254        | 1.3699 | 600  | 0.0763          |
| 0.1247        | 1.4840 | 650  | 0.0789          |
| 0.1185        | 1.5982 | 700  | 0.0817          |
| 0.0808        | 1.7123 | 750  | 0.0835          |
| 0.0622        | 1.8265 | 800  | 0.0815          |
| 0.0455        | 1.9406 | 850  | 0.0809          |
| 0.0846        | 2.0548 | 900  | 0.0851          |
| 0.0453        | 2.1689 | 950  | 0.0832          |
| 0.0808        | 2.2831 | 1000 | 0.0789          |
| 0.0902        | 2.3973 | 1050 | 0.0793          |
| 0.0974        | 2.5114 | 1100 | 0.0787          |
| 0.0508        | 2.6256 | 1150 | 0.0802          |
| 0.0535        | 2.7397 | 1200 | 0.0835          |
| 0.0956        | 2.8539 | 1250 | 0.0815          |
| 0.1126        | 2.9680 | 1300 | 0.0793          |
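The reported evaluation loss (0.0793) corresponds to the final logged step; the minimum validation loss in the log occurs earlier, at step 600. A quick check over the logged values:

```python
# (step, validation_loss) pairs copied from the training log above
log = [
    (50, 0.1216), (100, 0.1138), (150, 0.1135), (200, 0.1063),
    (250, 0.0963), (300, 0.1018), (350, 0.0986), (400, 0.1064),
    (450, 0.0951), (500, 0.0805), (550, 0.0800), (600, 0.0763),
    (650, 0.0789), (700, 0.0817), (750, 0.0835), (800, 0.0815),
    (850, 0.0809), (900, 0.0851), (950, 0.0832), (1000, 0.0789),
    (1050, 0.0793), (1100, 0.0787), (1150, 0.0802), (1200, 0.0835),
    (1250, 0.0815), (1300, 0.0793),
]
best_step, best_loss = min(log, key=lambda pair: pair[1])
print(best_step, best_loss)  # prints: 600 0.0763
```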

Framework versions

  • Transformers 4.42.4
  • Pytorch 2.3.1+cu121
  • Datasets 2.21.0
  • Tokenizers 0.19.1

Model details

  • Model size: 124M params
  • Tensor type: F32
  • Format: Safetensors
