
IC_ver2_coco_swin_gpt2_5pc_1e

This model is a fine-tuned vision-encoder-decoder model (per its name, a Swin encoder paired with a GPT-2 decoder; the base checkpoint is not stated in this card) trained on the COCO dataset. It achieves the following results on the evaluation set (a sketch of how such metrics are computed follows the list):

  • Loss: 0.9972
  • Rouge1: 34.8608
  • Rouge2: 10.9857
  • RougeL: 32.1905
  • RougeLsum: 32.1794
  • Bleu: 6.1162
  • Gen Len: 11.2887
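
For context, scores like these are typically produced by a `compute_metrics` callback during evaluation. Below is a minimal sketch using the Hugging Face `evaluate` library; the captions are placeholders, and scaling the raw 0-1 scores by 100 is an assumption based on how auto-generated cards usually report these metrics.

```python
# Minimal sketch, assuming the card reports evaluate's 0-1 scores scaled by 100.
# The captions below are placeholders, not outputs of this model.
import evaluate

rouge = evaluate.load("rouge")
bleu = evaluate.load("bleu")

predictions = ["a dog runs across a grassy field"]     # placeholder prediction
references = [["a dog is running through the grass"]]  # placeholder reference(s)

rouge_scores = rouge.compute(predictions=predictions, references=references)
bleu_scores = bleu.compute(predictions=predictions, references=references)

# Keys mirror the metrics listed above: rouge1, rouge2, rougeL, rougeLsum, bleu.
for name, value in {**rouge_scores, "bleu": bleu_scores["bleu"]}.items():
    print(f"{name}: {value * 100:.4f}")
```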

Model description

More information needed

Intended uses & limitations

More information needed
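
Absent official usage guidance, here is a hedged starting point for inference: checkpoints named like this one (Swin encoder, GPT-2 decoder) are usually loaded as a `VisionEncoderDecoderModel`. The repo id, image path, and generation length below are assumptions, not confirmed by this card.

```python
# Hedged sketch: loading a Swin+GPT-2 captioning checkpoint with
# VisionEncoderDecoderModel. The repo id below is hypothetical.
from PIL import Image
from transformers import (AutoImageProcessor, AutoTokenizer,
                          VisionEncoderDecoderModel)

repo_id = "your-username/IC_ver2_coco_swin_gpt2_5pc_1e"  # hypothetical repo id
model = VisionEncoderDecoderModel.from_pretrained(repo_id)
image_processor = AutoImageProcessor.from_pretrained(repo_id)
tokenizer = AutoTokenizer.from_pretrained(repo_id)

image = Image.open("example.jpg").convert("RGB")  # placeholder image path
pixel_values = image_processor(images=image, return_tensors="pt").pixel_values

# max_length=16 is an assumption; the Gen Len above is about 11 tokens.
generated_ids = model.generate(pixel_values, max_length=16)
caption = tokenizer.decode(generated_ids[0], skip_special_tokens=True)
print(caption)
```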

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a sketch mapping them onto `Seq2SeqTrainingArguments` follows the list):

  • learning_rate: 5e-05
  • train_batch_size: 64
  • eval_batch_size: 64
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 1
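
A minimal sketch of these settings expressed as transformers `Seq2SeqTrainingArguments`, assuming a standard `Seq2SeqTrainer` setup; the output path, eval cadence, and `predict_with_generate` flag are assumptions inferred from the results table below, not taken from the actual training script.

```python
# Sketch only: the hyperparameters above mapped onto Seq2SeqTrainingArguments.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="IC_ver2_coco_swin_gpt2_5pc_1e",  # placeholder output path
    learning_rate=5e-5,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    seed=42,
    # Adam betas/epsilon match the optimizer line above
    # (these are also the transformers defaults).
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=1,
    evaluation_strategy="steps",  # assumption: eval every 100 steps, per the table
    eval_steps=100,
    predict_with_generate=True,   # assumption: needed for ROUGE/BLEU during eval
)
```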

Training results

| Training Loss | Epoch | Step | Validation Loss | Rouge1  | Rouge2  | RougeL  | RougeLsum | Bleu   | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:------:|:-------:|
| 1.4995        | 0.23  | 100  | 1.1227          | 28.4179 | 5.9026  | 26.5601 | 26.5682   | 3.0237 | 11.2887 |
| 1.136         | 0.45  | 200  | 1.0506          | 31.4866 | 8.9504  | 29.1403 | 29.0996   | 4.2965 | 11.2887 |
| 1.0899        | 0.68  | 300  | 1.0203          | 33.9899 | 10.3576 | 31.6646 | 31.6435   | 5.5456 | 11.2887 |
| 1.057         | 0.9   | 400  | 0.9972          | 34.8608 | 10.9857 | 32.1905 | 32.1794   | 6.1162 | 11.2887 |

Framework versions

  • Transformers 4.30.2
  • PyTorch 2.0.1+cu118
  • Datasets 2.13.1
  • Tokenizers 0.13.3