
TatvaJoshi-AHS/peft-InstructionTuning-training-1719334832

This model is a fine-tuned version of google/flan-t5-base on an unknown dataset. It achieves the following results on the evaluation set:

  • Loss: nan
  • Rouge1: 6.2665
  • Rouge2: 0.0511
  • Rougel: 5.2669
  • Rougelsum: 5.4181
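The ROUGE scores above are reported on a 0-100 scale, as is conventional for Hugging Face `Trainer` evaluation output. As a rough illustration of what ROUGE-1 measures, here is a simplified, whitespace-tokenized sketch of unigram-overlap F1; this is not the stemmed `rouge_score` implementation actually used during evaluation:

```python
from collections import Counter

def rouge1_f1(prediction: str, reference: str) -> float:
    """Simplified ROUGE-1: F1 over unigram overlap, whitespace tokenization."""
    pred_tokens = prediction.lower().split()
    ref_tokens = reference.lower().split()
    if not pred_tokens or not ref_tokens:
        return 0.0
    # Clipped unigram overlap: each token counts at most as often as it
    # appears in the reference.
    overlap = sum((Counter(pred_tokens) & Counter(ref_tokens)).values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_tokens)
    recall = overlap / len(ref_tokens)
    return 2 * precision * recall / (precision + recall)

score = rouge1_f1("the cat sat", "the cat sat on the mat")
print(round(score, 4))  # 0.6667
```

Multiplied by 100, a score like this lands on the same scale as the `Rouge1: 6.2665` reported above.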

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 5e-05
  • train_batch_size: 6
  • eval_batch_size: 8
  • seed: 42
  • gradient_accumulation_steps: 2
  • total_train_batch_size: 12
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 100
  • num_epochs: 50
  • mixed_precision_training: Native AMP
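A constant `nan` validation loss together with Native AMP is consistent with the known fp16 overflow behavior of T5-family models; bf16 or full precision is the usual workaround. For reference, the hyperparameters above can be restated as a plain config sketch; the key names are assumed to mirror the Transformers `Seq2SeqTrainingArguments` API and are not taken from the original training script:

```python
# Hypothetical restatement of the reported hyperparameters as a dict;
# keys mirror (assumed) Hugging Face Seq2SeqTrainingArguments names.
training_args = {
    "learning_rate": 5e-5,
    "per_device_train_batch_size": 6,
    "per_device_eval_batch_size": 8,
    "seed": 42,
    "gradient_accumulation_steps": 2,
    "lr_scheduler_type": "linear",
    "warmup_steps": 100,
    "num_train_epochs": 50,
    "fp16": True,  # "Native AMP"; T5-family models are prone to fp16 overflow
}

# The reported total_train_batch_size (12) is the per-device train batch
# size times the gradient-accumulation steps, assuming a single device.
effective_batch = (training_args["per_device_train_batch_size"]
                   * training_args["gradient_accumulation_steps"])
print(effective_batch)  # 12
```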

Training results

| Training Loss | Epoch   | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-------:|:----:|:---------------:|:------:|:------:|:------:|:---------:|
| 0.0           | 2.6316  | 50   | nan             | 6.2665 | 0.0511 | 5.2669 | 5.4181    |
| 0.0           | 5.2632  | 100  | nan             | 6.2665 | 0.0511 | 5.2669 | 5.4181    |
| 0.0           | 7.8947  | 150  | nan             | 6.2665 | 0.0511 | 5.2669 | 5.4181    |
| 0.0           | 10.5263 | 200  | nan             | 6.2665 | 0.0511 | 5.2669 | 5.4181    |
| 0.0           | 13.1579 | 250  | nan             | 6.2665 | 0.0511 | 5.2669 | 5.4181    |
| 0.0           | 15.7895 | 300  | nan             | 6.2665 | 0.0511 | 5.2669 | 5.4181    |
| 0.0           | 18.4211 | 350  | nan             | 6.2665 | 0.0511 | 5.2669 | 5.4181    |
| 0.0           | 21.0526 | 400  | nan             | 6.2665 | 0.0511 | 5.2669 | 5.4181    |
| 0.0           | 23.6842 | 450  | nan             | 6.2665 | 0.0511 | 5.2669 | 5.4181    |
| 0.0           | 26.3158 | 500  | nan             | 6.2665 | 0.0511 | 5.2669 | 5.4181    |
| 0.0           | 28.9474 | 550  | nan             | 6.2665 | 0.0511 | 5.2669 | 5.4181    |
| 0.0           | 31.5789 | 600  | nan             | 6.2665 | 0.0511 | 5.2669 | 5.4181    |
| 0.0           | 34.2105 | 650  | nan             | 6.2665 | 0.0511 | 5.2669 | 5.4181    |
| 0.0           | 36.8421 | 700  | nan             | 6.2665 | 0.0511 | 5.2669 | 5.4181    |
| 0.0           | 39.4737 | 750  | nan             | 6.2665 | 0.0511 | 5.2669 | 5.4181    |
| 0.0           | 42.1053 | 800  | nan             | 6.2665 | 0.0511 | 5.2669 | 5.4181    |
| 0.0           | 44.7368 | 850  | nan             | 6.2665 | 0.0511 | 5.2669 | 5.4181    |
| 0.0           | 47.3684 | 900  | nan             | 6.2665 | 0.0511 | 5.2669 | 5.4181    |
| 0.0           | 50.0    | 950  | nan             | 6.2665 | 0.0511 | 5.2669 | 5.4181    |

Framework versions

  • PEFT 0.11.1
  • Transformers 4.41.2
  • Pytorch 2.1.2
  • Datasets 2.19.2
  • Tokenizers 0.19.1