
IC_ver5b_coco_swin_gpt2_01pc_1e

This model is a fine-tuned version of VK246/IC_ver5a_coco_swin_gpt2_05pc_1e on the COCO dataset. It achieves the following results on the evaluation set:

  • Loss: 1.1266
  • Rouge1: 27.4772
  • Rouge2: 5.9305
  • Rougel: 25.1138
  • Rougelsum: 25.1235
  • Bleu: 2.437
  • Gen Len: 11.1124
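For context, the Rouge1 score above (on a 0–100 scale) measures unigram overlap between a generated caption and its reference. A minimal illustrative implementation of ROUGE-1 F1 — a sketch for intuition, not the tokenized/stemmed implementation that produced the numbers reported here:

```python
from collections import Counter

def rouge1_f1(candidate: str, reference: str) -> float:
    """Unigram ROUGE-1 F1 on a 0-100 scale (illustrative sketch only;
    the card's scores come from the standard ROUGE implementation)."""
    cand = Counter(candidate.lower().split())
    ref = Counter(reference.lower().split())
    overlap = sum((cand & ref).values())  # clipped unigram matches
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 100.0 * 2 * precision * recall / (precision + recall)

print(rouge1_f1("a cat sits on a mat", "a cat is on the mat"))  # ≈ 66.67
```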

Model description

More information needed
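Pending a fuller description: the model id suggests a Swin Transformer encoder paired with a GPT-2 decoder for image captioning. A hedged usage sketch — the encoder/decoder pairing and the availability of a bundled processor/tokenizer are assumptions inferred from the model id, not confirmed by this card:

```python
def caption_image(image_path: str,
                  model_id: str = "VK246/IC_ver5b_coco_swin_gpt2_01pc_1e") -> str:
    """Generate a caption for one image. Heavy imports live inside the
    function so the sketch can be defined without downloading weights.
    The Swin-encoder / GPT-2-decoder pairing is inferred from the model id."""
    from PIL import Image
    from transformers import (AutoImageProcessor, AutoTokenizer,
                              VisionEncoderDecoderModel)

    model = VisionEncoderDecoderModel.from_pretrained(model_id)
    processor = AutoImageProcessor.from_pretrained(model_id)
    tokenizer = AutoTokenizer.from_pretrained(model_id)

    pixel_values = processor(images=Image.open(image_path).convert("RGB"),
                             return_tensors="pt").pixel_values
    output_ids = model.generate(pixel_values, max_length=16)
    return tokenizer.decode(output_ids[0], skip_special_tokens=True)
```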

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 5e-05
  • train_batch_size: 96
  • eval_batch_size: 96
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 1
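The linear scheduler decays the learning rate from 5e-05 toward 0 over the run. A small sketch of that decay, assuming no warmup steps (none are listed above) and an illustrative step count:

```python
def linear_lr(step: int, total_steps: int, base_lr: float = 5e-05) -> float:
    """Linearly decay base_lr to 0 over total_steps (no warmup assumed)."""
    return base_lr * max(0.0, 1.0 - step / total_steps)

total = 100  # illustrative; the real total depends on dataset size and batch size
for step in (0, 50, 100):
    print(f"step {step:3d}: lr = {linear_lr(step, total):.2e}")
# step   0: lr = 5.00e-05
# step  50: lr = 2.50e-05
# step 100: lr = 0.00e+00
```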

Training results

Training Loss  Epoch  Step  Validation Loss  Rouge1   Rouge2  Rougel   Rougelsum  Bleu    Gen Len
1.2093         0.42   25    1.1552           22.8898  3.6353  20.6781  20.6737    1.1554  11.1124
1.2149         0.85   50    1.1358           26.2857  5.2765  24.0266  24.0308    2.1954  11.1124

Framework versions

  • Transformers 4.30.2
  • Pytorch 2.0.1+cu118
  • Datasets 2.13.1
  • Tokenizers 0.13.3