---
license: gemma
base_model: google/paligemma-3b-pt-224
tags:
  - generated_from_trainer
datasets:
  - vq_av2
model-index:
  - name: paligemma-vqa
    results: []
---


# paligemma-vqa

This model is a fine-tuned version of [google/paligemma-3b-pt-224](https://huggingface.co/google/paligemma-3b-pt-224) on the vq_av2 dataset. It achieves the following results on the evaluation set:

- Loss: 0.0001
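
The card does not include an inference example, so here is a minimal sketch of how a PaliGemma fine-tune like this is typically queried with the `transformers` PaliGemma classes. The repo id `statking/paligemma-vqa`, the image path, and the exact prompt wording are assumptions for illustration, not part of the original card.

```python
# Hedged inference sketch: the repo id, image path, and prompt are assumed, not from the card.
import torch
from PIL import Image
from transformers import AutoProcessor, PaliGemmaForConditionalGeneration

model_id = "statking/paligemma-vqa"  # hypothetical repo id for this fine-tune
processor = AutoProcessor.from_pretrained(model_id)
model = PaliGemmaForConditionalGeneration.from_pretrained(model_id, torch_dtype=torch.bfloat16)

image = Image.open("example.jpg").convert("RGB")    # any local image
prompt = "answer en What is shown in the picture?"  # PaliGemma-style VQA prompt

inputs = processor(text=prompt, images=image, return_tensors="pt")
with torch.no_grad():
    output_ids = model.generate(**inputs, max_new_tokens=20)

# Decode only the newly generated tokens, i.e. the predicted answer.
answer = processor.decode(output_ids[0, inputs["input_ids"].shape[-1]:], skip_special_tokens=True)
print(answer)
```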

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 0.02
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9, 0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1200
- num_epochs: 1
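
For readers who want to set up a similar run, the hyperparameters listed above map onto `transformers.TrainingArguments` roughly as follows. This is a sketch of the configuration only; the original training script and Trainer setup are not included in this card.

```python
# Hedged sketch: these fields mirror the hyperparameters listed above; the actual
# training script is not part of this model card.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="paligemma-vqa",
    learning_rate=2e-2,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    gradient_accumulation_steps=4,   # 16 x 4 = 64 effective train batch size
    seed=42,
    adam_beta1=0.9,                  # Adam betas=(0.9, 0.999), epsilon=1e-08
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_steps=1200,
    num_train_epochs=1,
)
```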

### Training results

| Training Loss | Epoch  | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.0019        | 0.0736 | 500  | 0.0081          |
| 0.0004        | 0.1472 | 1000 | 0.0002          |
| 0.0003        | 0.2207 | 1500 | 0.0002          |
| 0.0001        | 0.2943 | 2000 | 0.0001          |
| 0.0001        | 0.3679 | 2500 | 0.0001          |
| 0.0001        | 0.4415 | 3000 | 0.0001          |
| 0.0002        | 0.5151 | 3500 | 0.0002          |
| 0.0001        | 0.5886 | 4000 | 0.0001          |
| 0.0001        | 0.6622 | 4500 | 0.0001          |
| 0.0001        | 0.7358 | 5000 | 0.0001          |
| 0.0001        | 0.8094 | 5500 | 0.0001          |
| 0.0001        | 0.8830 | 6000 | 0.0001          |
| 0.0001        | 0.9566 | 6500 | 0.0001          |

### Framework versions

- Transformers 4.41.0
- Pytorch 2.2.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1