NL-RX-Synth-t5-small-finetuned-en-to-regex

This model is a fine-tuned version of t5-small, presumably on the NL-RX-Synth English-to-regex dataset suggested by the model name (the training data is otherwise undocumented). It achieves the following results on the evaluation set:

  • Loss: 0.0131
  • Semantic-accuracy: 0.36
  • Gen Len: 18.24
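The card does not define the semantic-accuracy metric. In the NL-RX literature it usually means the predicted regex is semantically equivalent to the reference regex (not merely string-identical). As an illustration only, a weaker behavioral proxy can be sketched with the standard library; the function names, probe strings, and example pairs below are hypothetical, not the actual evaluation code:

```python
import re

def behaviorally_equivalent(pred: str, gold: str, probes) -> bool:
    """Approximate semantic comparison: two regexes agree if they
    accept/reject the same probe strings. A true semantic-accuracy
    metric would check full equivalence (e.g. via DFA comparison);
    this is only a cheap proxy over a finite probe set."""
    try:
        p, g = re.compile(pred), re.compile(gold)
    except re.error:
        return False  # an unparseable prediction counts as wrong
    return all(bool(p.fullmatch(s)) == bool(g.fullmatch(s)) for s in probes)

def semantic_accuracy(pairs, probes) -> float:
    """Fraction of (predicted, gold) pairs judged equivalent."""
    return sum(behaviorally_equivalent(p, g, probes) for p, g in pairs) / len(pairs)

probes = ["", "a", "ab", "ba", "aab", "abb"]
pairs = [
    ("a+b", "aa*b"),  # equivalent: one-or-more 'a' then 'b'
    ("a*b", "ab"),    # not equivalent: they disagree on "b" and "aab"
]
print(semantic_accuracy(pairs, probes))  # → 0.5
```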

Model description

More information needed

Intended uses & limitations

More information needed
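No usage example is given. Assuming the checkpoint is published on the Hub under a repo id resembling the model name (the id below is a placeholder, not a confirmed path), inference would follow the standard T5 text-to-text pattern:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Placeholder repo id -- substitute the actual Hub path of this checkpoint.
model_id = "NL-RX-Synth-t5-small-finetuned-en-to-regex"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# An NL-RX-style natural-language description of a regex.
inputs = tokenizer("lines with the word dog", return_tensors="pt")

# Gen Len above averages ~18 tokens, so max_length=64 is a comfortable cap.
outputs = model.generate(**inputs, max_length=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```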

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 0.001
  • train_batch_size: 16
  • eval_batch_size: 16
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • training_steps: 10000
  • mixed_precision_training: Native AMP
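For reference, a linear scheduler with no warmup (none is listed above) decays the learning rate from its initial value to zero over the 10,000 training steps. A minimal sketch of that decay, under those assumptions:

```python
def linear_lr(step: int, base_lr: float = 1e-3, total_steps: int = 10_000) -> float:
    """Linearly decay the learning rate to zero over total_steps,
    assuming zero warmup steps (matching the hyperparameters above)."""
    return base_lr * max(0.0, 1.0 - step / total_steps)

print(linear_lr(0))       # → 0.001 (the configured learning_rate)
print(linear_lr(5_000))   # → 0.0005 (halfway through training)
print(linear_lr(10_000))  # → 0.0 (fully decayed at the final step)
```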

Training results

| Training Loss | Epoch | Step  | Validation Loss | Semantic-accuracy | Gen Len |
|--------------:|------:|------:|----------------:|------------------:|--------:|
| 0.2382        | 1.0   | 563   | 0.0431          | 0.322             | 18.224  |
| 0.0477        | 2.0   | 1126  | 0.0229          | 0.356             | 18.236  |
| 0.0305        | 3.0   | 1689  | 0.0259          | 0.34              | 18.266  |
| 0.0231        | 4.0   | 2252  | 0.0204          | 0.35              | 18.238  |
| 0.0197        | 5.0   | 2815  | 0.0162          | 0.352             | 18.232  |
| 0.02          | 6.0   | 3378  | 0.0162          | 0.354             | 18.238  |
| 0.0172        | 7.0   | 3941  | 0.0147          | 0.356             | 18.24   |
| 0.0145        | 8.0   | 4504  | 0.0259          | 0.34              | 18.246  |
| 0.0133        | 9.0   | 5067  | 0.0129          | 0.358             | 18.238  |
| 0.0131        | 10.0  | 5630  | 0.0121          | 0.366             | 18.242  |
| 0.0122        | 11.0  | 6193  | 0.0128          | 0.354             | 18.242  |
| 0.0123        | 12.0  | 6756  | 0.0129          | 0.356             | 18.222  |
| 0.0113        | 13.0  | 7319  | 0.0131          | 0.362             | 18.232  |
| 0.0095        | 14.0  | 7882  | 0.0124          | 0.358             | 18.238  |
| 0.0102        | 15.0  | 8445  | 0.0127          | 0.362             | 18.244  |
| 0.0089        | 16.0  | 9008  | 0.0126          | 0.358             | 18.242  |
| 0.0086        | 17.0  | 9571  | 0.0133          | 0.358             | 18.242  |
| 0.0084        | 17.76 | 10000 | 0.0131          | 0.36              | 18.24   |

Framework versions

  • Transformers 4.25.1
  • Pytorch 1.12.1+cu113
  • Datasets 2.7.1
  • Tokenizers 0.13.2