
text-to-sparql-Version-3.5

This model is a fine-tuned version of yazdipour/text-to-sparql-t5-small-qald9 on an unspecified dataset. It achieves the following results on the evaluation set (an inference sketch follows the list):

  • Loss: 0.0689
  • Gen Len: 19.0
  • Bleu-score: 0.3061
  • Bleu-precisions: [91.41630901287553, 82.88177339901478, 72.54335260115607, 64.86013986013987]
  • Bleu-bp: 0.0040
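
As a minimal illustration of how to load this checkpoint, the sketch below uses the standard transformers seq2seq API. The repo id is an assumption inferred from the card title and the base model's namespace, and the example question is made up; replace both with real values.

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

# Assumed Hub id (inferred from the card title); replace with the actual repo path.
model_id = "yazdipour/text-to-sparql-Version-3.5"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# Made-up natural-language question; the model is expected to emit a SPARQL query.
question = "Who is the mayor of Paris?"
inputs = tokenizer(question, return_tensors="pt")

# The card reports Gen Len 19.0, so a modest generation budget is sufficient.
outputs = model.generate(**inputs, max_length=64, num_beams=4)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```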

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a Seq2SeqTrainingArguments sketch follows the list):

  • learning_rate: 0.0003
  • train_batch_size: 16
  • eval_batch_size: 16
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 6
  • mixed_precision_training: Native AMP
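
The list above maps directly onto Seq2SeqTrainingArguments. The sketch below is an assumption about how the run was configured, with dataset loading, preprocessing, and the Trainer itself omitted; output_dir is a placeholder, and fp16=True stands in for "Native AMP".

```python
from transformers import Seq2SeqTrainingArguments

# Sketch only: mirrors the hyperparameters listed above.
# Adam with betas=(0.9, 0.999) and epsilon=1e-08 is the Trainer's default optimizer.
training_args = Seq2SeqTrainingArguments(
    output_dir="text-to-sparql-Version-3.5",  # placeholder
    learning_rate=3e-4,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=6,
    fp16=True,  # "Native AMP" mixed-precision training
    predict_with_generate=True,  # needed to report Gen Len and BLEU at eval time
    evaluation_strategy="epoch",  # assumption: the table below logs metrics per epoch
)
```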

Training results

| Training Loss | Epoch | Step | Validation Loss | Gen Len | Bleu-score | Bleu-precisions | Bleu-bp |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:----------:|:----------------|:-------:|
| No log | 1.0 | 18 | 0.9058 | 19.0 | 0.3206 | [77.68014059753955, 42.92730844793713, 18.040089086859687, 6.298200514138817] | 0.0129 |
| No log | 2.0 | 36 | 0.3255 | 19.0 | 0.1908 | [85.59046587215602, 65.37982565379825, 42.60614934114202, 29.129662522202487] | 0.0037 |
| No log | 3.0 | 54 | 0.1637 | 19.0 | 0.1969 | [89.16478555304741, 76.76240208877284, 62.538699690402474, 55.32319391634981] | 0.0028 |
| No log | 4.0 | 72 | 0.1002 | 19.0 | 0.2395 | [91.40022050716648, 79.41550190597205, 66.11694152923538, 57.22120658135283] | 0.0033 |
| No log | 5.0 | 90 | 0.0761 | 19.0 | 0.2712 | [91.28540305010893, 81.57894736842105, 70.64896755162242, 62.18637992831541] | 0.0036 |
| No log | 6.0 | 108 | 0.0689 | 19.0 | 0.3061 | [91.41630901287553, 82.88177339901478, 72.54335260115607, 64.86013986013987] | 0.0040 |
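
Two reading notes on the table, both inferences rather than statements from the card. First, the Bleu-score appears to share the 0-100 scale of the precisions: the geometric mean of the final-epoch precisions is about 77.3, and 77.3 × 0.0040 ≈ 0.31, consistent with the reported 0.3061. Second, the very small brevity penalty (Bleu-bp) indicates that generated queries are much shorter than the references, which dominates the final score despite the high n-gram precisions. The sketch below shows how such fields are produced with a BLEU implementation such as the evaluate library's bleu metric (which returns the score and precisions as fractions in [0, 1]); the prediction/reference pair is made up.

```python
import math

import evaluate  # Hugging Face `evaluate` library

bleu = evaluate.load("bleu")

# Made-up prediction/reference pair, only to show the output fields.
result = bleu.compute(
    predictions=["SELECT ?x WHERE { ?x a dbo:City }"],
    references=[["SELECT ?city WHERE { ?city a dbo:City }"]],
)
# `result` contains 'bleu', 'precisions', and 'brevity_penalty',
# matching the Bleu-score, Bleu-precisions, and Bleu-bp columns above.
print(result["bleu"], result["precisions"], result["brevity_penalty"])

# Sanity check against the final-epoch row of the table:
precisions = [91.41630901287553, 82.88177339901478, 72.54335260115607, 64.86013986013987]
geo_mean = math.exp(sum(math.log(p) for p in precisions) / len(precisions))
print(geo_mean * 0.0040)  # ~0.31, consistent with the reported Bleu-score of 0.3061
```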

Framework versions

  • Transformers 4.38.1
  • Pytorch 2.1.2
  • Datasets 2.1.0
  • Tokenizers 0.15.2