Repo:

Model Config:

  • LLM: Vicuna_13b_v0
  • Vision Encoder: CLIP ViT-L/14
  • lora_r: 32
  • lora_alpha: 32
  • lora_dropout: 0.1
  • lora_target_modules: ['q_proj', 'k_proj', 'v_proj', 'o_proj']

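These adapter settings correspond to a standard LoRA configuration. Below is a minimal sketch, assuming the Vicuna-13B backbone is loaded as a Hugging Face causal LM and wrapped with peft; the checkpoint path and task type are placeholders, and LAMM's wiring of the CLIP vision encoder is omitted.

```python
# Minimal sketch: the LoRA settings listed above expressed as a peft LoraConfig.
# The checkpoint path and task_type are assumptions; the CLIP ViT-L/14 vision
# encoder used by LAMM is not wired in here.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base_model = AutoModelForCausalLM.from_pretrained("path/to/vicuna-13b-v0")  # placeholder path

lora_config = LoraConfig(
    r=32,                                                     # lora_r
    lora_alpha=32,                                            # lora_alpha
    lora_dropout=0.1,                                         # lora_dropout
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # attention projections
    bias="none",
    task_type="CAUSAL_LM",
)

model = get_peft_model(base_model, lora_config)
model.print_trainable_parameters()  # only the LoRA adapter weights are trainable
```
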
Train Config:

  • Epochs: 2
  • train_batch_size: 64
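
Only the epoch count and the effective batch size of 64 are given above; a minimal training-arguments sketch under those two values (all other settings are placeholders, not taken from this card) might look like:

```python
# Minimal sketch of the training setup. Only num_train_epochs and the effective
# batch size of 64 come from this card; every other value is a placeholder.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="./lamm_13b_lora32_186k",  # placeholder output directory
    num_train_epochs=2,                   # Epochs: 2
    per_device_train_batch_size=8,        # assumed 8 GPUs x 8 per device = global batch of 64
    gradient_accumulation_steps=1,
    learning_rate=2e-4,                   # placeholder, not from the card
    bf16=True,                            # placeholder precision setting
    logging_steps=10,
)
```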