Idefics2-Chatty LoRA Adapter

This is a LoRA adapter for Idefics2-chatty, fine-tuned on the `dawoz/frozenlake_prompts_dataset` dataset. The checkpoint saved here is the one after 1 epoch of training.
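A minimal usage sketch for loading the adapter on top of its base model with 🤗 PEFT. The base-model id (`HuggingFaceM4/idefics2-8b-chatty`) and the adapter repo id are assumptions not confirmed by this card; substitute the actual ids. Running this downloads the full base-model weights.

```python
# Hypothetical loading sketch: both model ids below are assumptions.
from transformers import AutoProcessor, Idefics2ForConditionalGeneration
from peft import PeftModel

base_id = "HuggingFaceM4/idefics2-8b-chatty"      # assumed base checkpoint
adapter_id = "your-username/idefics2-frozenlake"  # placeholder adapter repo id

processor = AutoProcessor.from_pretrained(base_id)
model = Idefics2ForConditionalGeneration.from_pretrained(base_id, torch_dtype="auto")
model = PeftModel.from_pretrained(model, adapter_id)  # attach the LoRA weights
```

Alternatively, `model.merge_and_unload()` can fold the LoRA weights into the base model for adapter-free inference.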

Trainer Hyperparameters

```yaml
num_train_epochs: 3
per_device_train_batch_size: 8
per_device_eval_batch_size: 4
gradient_accumulation_steps: 4
gradient_checkpointing: true
optim: adamw_bnb_8bit
warmup_steps: 50
learning_rate: 1e-4
weight_decay: 0.01
logging_steps: 25
output_dir: "outputs"
run_name: "idefics2-frozenlake"
save_strategy: "epoch"
save_steps: 250
save_total_limit: 4
eval_strategy: "steps"
eval_steps: 250
bf16: true
push_to_hub: false
hub_model_id: "idefics2-frozenlake"
remove_unused_columns: false
report_to: "wandb"
```
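With gradient accumulation, the optimizer steps on a larger effective batch than the per-device size. For the settings above, on a single device:

```python
# Effective optimizer-step batch size implied by the hyperparameters above
# (single device; multiply by the number of GPUs for multi-GPU training).
per_device_train_batch_size = 8
gradient_accumulation_steps = 4
effective_batch_size = per_device_train_batch_size * gradient_accumulation_steps
print(effective_batch_size)  # 32
```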