
text-to-sparql-Version-3.0

This model is a fine-tuned version of yazdipour/text-to-sparql-t5-small-qald9 on an unspecified dataset (the auto-generated card lists it as "None"). It achieves the following results on the evaluation set (a minimal usage sketch follows the metric list):

  • Loss: 0.0012
  • Gen Len: 19.0
  • P: 0.4860
  • R: -0.0588
  • F1: 0.2003
  • Bleu-score: 0.7608
  • Bleu-precisions: [96.54320987654322, 93.6231884057971, 92.28070175438596, 91.11111111111111]
  • Bleu-bp: 0.0081
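
The card gives no usage section, so here is a minimal inference sketch, assuming the model exposes the standard transformers seq2seq interface of its t5-small base. The repository id, example question, and generation settings below are illustrative assumptions, not taken from the card.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "text-to-sparql-Version-3.0"  # hypothetical Hub repo id; replace with the actual one
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

question = "Who is the mayor of Berlin?"  # hypothetical natural-language input
inputs = tokenizer(question, return_tensors="pt")
# Beam search and max_length are assumptions; tune them for your queries.
output_ids = model.generate(**inputs, max_length=64, num_beams=4)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```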

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a hedged reproduction sketch follows the list):

  • learning_rate: 0.0003
  • train_batch_size: 12
  • eval_batch_size: 12
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 20
  • mixed_precision_training: Native AMP
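
A sketch of how these hyperparameters map onto the transformers Seq2SeqTrainingArguments API. The actual training script is not published, so this is an assumption; the Adam betas and epsilon listed above are the Trainer defaults and are spelled out explicitly here.

```python
from transformers import Seq2SeqTrainingArguments

# Mirrors the hyperparameter list above; output_dir is a placeholder.
training_args = Seq2SeqTrainingArguments(
    output_dir="text-to-sparql-Version-3.0",
    learning_rate=3e-4,
    per_device_train_batch_size=12,
    per_device_eval_batch_size=12,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=20,
    fp16=True,  # Native AMP mixed-precision training
)
```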

Training results

| Training Loss | Epoch | Step | Validation Loss | Gen Len | P | R | F1 | Bleu-score | Bleu-precisions | Bleu-bp |
|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|
| No log | 1.0 | 12 | 0.9239 | 19.0 | 0.2943 | -0.1644 | 0.0553 | 0.1413 | [68.58316221765914, 39.578454332552695, 4.087193460490464, 0.16286644951140064] | 0.0217 |
| No log | 2.0 | 24 | 0.3111 | 19.0 | 0.3376 | -0.1178 | 0.1005 | 0.5992 | [80.31319910514542, 58.13953488372093, 33.027522935779814, 21.348314606741575] | 0.0141 |
| No log | 3.0 | 36 | 0.0964 | 19.0 | 0.4469 | -0.0580 | 0.1829 | 0.4274 | [89.05852417302799, 74.77477477477477, 53.47985347985348, 43.1924882629108] | 0.0068 |
| No log | 4.0 | 48 | 0.0317 | 19.0 | 0.4665 | -0.0697 | 0.1854 | 0.8582 | [95.2153110047847, 90.50279329608938, 84.56375838926175, 81.9327731092437] | 0.0098 |
| No log | 5.0 | 60 | 0.0162 | 19.0 | 0.4946 | -0.0505 | 0.2087 | 0.7919 | [97.29064039408867, 96.24277456647398, 95.45454545454545, 94.24778761061947] | 0.0083 |
| No log | 6.0 | 72 | 0.0057 | 19.0 | 0.4871 | -0.0620 | 0.1990 | 0.7719 | [96.56019656019656, 93.0835734870317, 90.2439024390244, 88.54625550660793] | 0.0084 |
| No log | 7.0 | 84 | 0.0037 | 19.0 | 0.4860 | -0.0588 | 0.2003 | 0.7608 | [96.54320987654322, 93.6231884057971, 92.28070175438596, 91.11111111111111] | 0.0081 |
| No log | 8.0 | 96 | 0.0028 | 19.0 | 0.4860 | -0.0588 | 0.2003 | 0.7608 | [96.54320987654322, 93.6231884057971, 92.28070175438596, 91.11111111111111] | 0.0081 |
| No log | 9.0 | 108 | 0.0026 | 19.0 | 0.4860 | -0.0588 | 0.2003 | 0.7608 | [96.54320987654322, 93.6231884057971, 92.28070175438596, 91.11111111111111] | 0.0081 |
| No log | 10.0 | 120 | 0.0023 | 19.0 | 0.4860 | -0.0588 | 0.2003 | 0.7608 | [96.54320987654322, 93.6231884057971, 92.28070175438596, 91.11111111111111] | 0.0081 |
| No log | 11.0 | 132 | 0.0018 | 19.0 | 0.4860 | -0.0588 | 0.2003 | 0.7608 | [96.54320987654322, 93.6231884057971, 92.28070175438596, 91.11111111111111] | 0.0081 |
| No log | 12.0 | 144 | 0.0017 | 19.0 | 0.4860 | -0.0588 | 0.2003 | 0.7608 | [96.54320987654322, 93.6231884057971, 92.28070175438596, 91.11111111111111] | 0.0081 |
| No log | 13.0 | 156 | 0.0015 | 19.0 | 0.4860 | -0.0588 | 0.2003 | 0.7608 | [96.54320987654322, 93.6231884057971, 92.28070175438596, 91.11111111111111] | 0.0081 |
| No log | 14.0 | 168 | 0.0013 | 19.0 | 0.4860 | -0.0588 | 0.2003 | 0.7608 | [96.54320987654322, 93.6231884057971, 92.28070175438596, 91.11111111111111] | 0.0081 |
| No log | 15.0 | 180 | 0.0012 | 19.0 | 0.4961 | -0.0508 | 0.2092 | 0.7932 | [98.0246913580247, 97.68115942028986, 97.19298245614036, 96.44444444444444] | 0.0081 |
| No log | 16.0 | 192 | 0.0013 | 19.0 | 0.4860 | -0.0588 | 0.2003 | 0.7608 | [96.54320987654322, 93.6231884057971, 92.28070175438596, 91.11111111111111] | 0.0081 |
| No log | 17.0 | 204 | 0.0014 | 19.0 | 0.4860 | -0.0588 | 0.2003 | 0.7608 | [96.54320987654322, 93.6231884057971, 92.28070175438596, 91.11111111111111] | 0.0081 |
| No log | 18.0 | 216 | 0.0013 | 19.0 | 0.4860 | -0.0588 | 0.2003 | 0.7608 | [96.54320987654322, 93.6231884057971, 92.28070175438596, 91.11111111111111] | 0.0081 |
| No log | 19.0 | 228 | 0.0013 | 19.0 | 0.4860 | -0.0588 | 0.2003 | 0.7608 | [96.54320987654322, 93.6231884057971, 92.28070175438596, 91.11111111111111] | 0.0081 |
| No log | 20.0 | 240 | 0.0012 | 19.0 | 0.4860 | -0.0588 | 0.2003 | 0.7608 | [96.54320987654322, 93.6231884057971, 92.28070175438596, 91.11111111111111] | 0.0081 |
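
The Bleu-score, Bleu-precisions, and Bleu-bp columns match the output fields of the sacrebleu metric as exposed by the evaluate library ('score', 'precisions', 'bp'), so that is assumed in the reproduction sketch below; the queries shown are hypothetical, not from the evaluation set.

```python
import evaluate

sacrebleu = evaluate.load("sacrebleu")
predictions = ["SELECT ?x WHERE { ?x a dbo:City }"]   # hypothetical model output
references = [["SELECT ?x WHERE { ?x a dbo:City }"]]  # hypothetical gold query
results = sacrebleu.compute(predictions=predictions, references=references)
print(results["score"])       # corresponds to Bleu-score (0-100, includes brevity penalty)
print(results["precisions"])  # corresponds to Bleu-precisions (per n-gram order, in %)
print(results["bp"])          # corresponds to Bleu-bp (brevity penalty)
```

Note that the very small brevity penalty (0.0081) dominates the score: the n-gram precisions are above 90%, but the generated queries are much shorter than the references, which pulls the overall Bleu-score down to 0.7608.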

Framework versions

  • Transformers 4.38.1
  • PyTorch 2.1.2
  • Datasets 2.1.0
  • Tokenizers 0.15.2
Model size: 60.5M parameters (F32, Safetensors)
