
flan-t5-small-instructiongen

Instead of generating questions from text, generate instructions for LLMs!

This model is a fine-tuned version of google/flan-t5-small on the pszemraj/fleece2instructions dataset. It achieves the following results on the evaluation set:

  • Loss: 1.3401
  • Rouge1: 52.201
  • Rouge2: 35.6154
  • Rougel: 50.2334
  • Rougelsum: 50.338
  • Gen Len: 14.0450

Intended uses & limitations

This is just a small model/example. Larger models are likely to perform even better (e.g., pszemraj/bart-base-instructiongen generalizes better).

Additionally, this model was trained only on instruction + output pairs, with the `inputs` fields filtered out. This means that feeding it the text "1) cookies and cream 2) chocolate chip 3) mint chip 4) oreo" will not get you "Rank the following ice cream flavors: oreo, mint chip, chocolate chip, cookies and cream".
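
A minimal inference sketch using the transformers pipeline; the generation settings below are illustrative, not the settings used to produce the reported metrics.

```python
from transformers import pipeline

generator = pipeline(
    "text2text-generation",
    model="pszemraj/flan-t5-small-instructiongen",
)

# pass the *output* text; the model generates a plausible instruction for it
text = (
    "Photosynthesis is the process by which plants use sunlight, water, "
    "and carbon dioxide to produce oxygen and energy in the form of sugar."
)
result = generator(text, max_length=48, num_beams=4)
print(result[0]["generated_text"])
```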

Training and evaluation data

See the linked dataset pszemraj/fleece2instructions - it is a filtered/formatted version of tatsu-lab/alpaca, adapted for generating instructions from arbitrary text. A loading sketch follows the note below.

  • Some of the API examples are intentionally weird to demonstrate the generalizability of the model.
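
To inspect the data without assuming its exact schema, a quick sketch with the datasets library (field names vary between versions, so print an example rather than hard-coding columns):

```python
from datasets import load_dataset

# load the filtered/formatted Alpaca derivative used for training
ds = load_dataset("pszemraj/fleece2instructions")
print(ds)              # available splits and their sizes
print(ds["train"][0])  # inspect field names on a single example
```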

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 8e-05
  • train_batch_size: 8
  • eval_batch_size: 1
  • seed: 42
  • distributed_type: multi-GPU
  • gradient_accumulation_steps: 16
  • total_train_batch_size: 128
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: cosine
  • lr_scheduler_warmup_ratio: 0.02
  • num_epochs: 2.0
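
For reference, a sketch of how these values map onto Seq2SeqTrainingArguments; the surrounding training script (model, tokenizer, data collator, metrics) is assumed and omitted here:

```python
from transformers import Seq2SeqTrainingArguments

# output_dir is hypothetical; the other values come from the list above
training_args = Seq2SeqTrainingArguments(
    output_dir="flan-t5-small-instructiongen",
    learning_rate=8e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=1,
    gradient_accumulation_steps=16,  # 8 x 16 = 128 effective train batch size
    seed=42,
    optim="adamw_torch",             # the Adam betas/epsilon above are the defaults
    lr_scheduler_type="cosine",
    warmup_ratio=0.02,
    num_train_epochs=2.0,
    predict_with_generate=True,      # required for the ROUGE / Gen Len eval metrics
)
```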

Training results

| Training Loss | Epoch | Step | Validation Loss | Rouge1  | Rouge2  | Rougel  | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 1.6161        | 1.0   | 181  | 1.3714          | 51.1003 | 34.5701 | 49.1277 | 49.2466   | 13.8357 |
| 1.539         | 2.0   | 362  | 1.3401          | 52.201  | 35.6154 | 50.2334 | 50.338    | 14.0450 |