
Chatbot_Model_Trial

This model is a fine-tuned version of google/flan-t5-base on an unspecified dataset. It achieves the following results on the evaluation set:

  • Loss: 1.4141
  • Rouge1: 20.2835
  • Rouge2: 9.4794
  • RougeL: 20.2587
  • RougeLsum: 20.2835
  • Gen Len: 14.125
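
The checkpoint loads like any other seq2seq model in Transformers. Below is a minimal inference sketch, assuming this card's repository id (StaticOwl/Chatbot_Model_Trial); the prompt is illustrative only.

```python
# Minimal usage sketch; repo id taken from this card, prompt is just an example.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "StaticOwl/Chatbot_Model_Trial"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

inputs = tokenizer("Hello, how can you help me today?", return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```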

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 5e-05
  • train_batch_size: 8
  • eval_batch_size: 8
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 5
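
For reference, these values map onto Transformers' Seq2SeqTrainingArguments roughly as sketched below. The output_dir, evaluation_strategy, and predict_with_generate settings are assumptions, not taken from the card; the Adam betas and epsilon listed above are the optimizer defaults.

```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="Chatbot_Model_Trial",    # assumed; not stated on the card
    learning_rate=5e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    num_train_epochs=5,
    lr_scheduler_type="linear",
    evaluation_strategy="epoch",         # assumed from the per-epoch rows below
    predict_with_generate=True,          # assumed; needed for ROUGE / Gen Len
)
```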

Training results

Training Loss   Epoch   Step   Validation Loss   Rouge1    Rouge2   RougeL    RougeLsum   Gen Len
No log          1.0     10     1.9123            18.7955   5.5978   17.9735   17.9848     14.75
No log          2.0     20     1.6554            22.5345   7.961    21.3846   21.8442     15.0
No log          3.0     30     1.5016            25.764    8.9312   24.6364   25.0268     13.75
No log          4.0     40     1.4346            20.2835   9.4794   20.2587   20.2835     14.125
No log          5.0     50     1.4141            20.2835   9.4794   20.2587   20.2835     14.125
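
The ROUGE and Gen Len columns are typically produced by a compute_metrics callback passed to Seq2SeqTrainer. A hedged sketch is shown below, assuming the evaluate library's "rouge" metric and the flan-t5-base tokenizer; it is not taken from the card itself.

```python
import numpy as np
import evaluate
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("google/flan-t5-base")
rouge = evaluate.load("rouge")

def compute_metrics(eval_preds):
    preds, labels = eval_preds
    decoded_preds = tokenizer.batch_decode(preds, skip_special_tokens=True)
    # Replace label padding (-100) before decoding the references.
    labels = np.where(labels != -100, labels, tokenizer.pad_token_id)
    decoded_labels = tokenizer.batch_decode(labels, skip_special_tokens=True)

    result = rouge.compute(predictions=decoded_preds, references=decoded_labels,
                           use_stemmer=True)
    result = {k: round(v * 100, 4) for k, v in result.items()}
    # Average generated length in tokens, excluding padding.
    result["gen_len"] = float(np.mean(
        [np.count_nonzero(p != tokenizer.pad_token_id) for p in preds]))
    return result
```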

Framework versions

  • Transformers 4.35.0
  • PyTorch 2.1.0+cu118
  • Datasets 2.14.6
  • Tokenizers 0.14.1

Model size: 248M params (F32, safetensors)