
DinoV2-SigLIP-Phi3(LoRA) VLM

  • Vision Encoder - DinoV2 + SigLIP @ 384px resolution. Why two vision encoders? DinoV2 contributes strong, spatially fine-grained features from self-supervised training, while SigLIP contributes language-aligned semantic features; the two are complementary, giving the language model a richer visual representation than either alone.
  • Connector - MLP (DinoV2 and SigLIP features are concatenated and then projected into the Phi3 representation space; see the sketch after this list)
  • Language Model - Phi3 + LoRA
  • Pre-train (Align) Dataset - LLaVA-CC3M-Pretrain-595K
  • Fine-tune (Instruction) Dataset - LLaVA-v1.5-Instruct + LRV-Instruct
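
A minimal PyTorch sketch of what the connector could look like, assuming DinoV2-L patch features (1024-d), SigLIP-SO400M @384px patch features (1152-d), and the Phi-3-mini hidden size (3072); the encoder variants, dimensions, and patch count below are illustrative assumptions, not confirmed specs:

```python
import torch
import torch.nn as nn

class DinoSigLIPConnector(nn.Module):
    """Hypothetical connector: concatenate DinoV2 and SigLIP patch
    features, then project into the Phi-3 embedding space via an MLP."""

    def __init__(self, dino_dim=1024, siglip_dim=1152, phi3_dim=3072):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(dino_dim + siglip_dim, phi3_dim),
            nn.GELU(),
            nn.Linear(phi3_dim, phi3_dim),
        )

    def forward(self, dino_feats, siglip_feats):
        # Both inputs: (batch, num_patches, dim); the two encoders must
        # yield the same number of patches for channel-wise concatenation.
        fused = torch.cat([dino_feats, siglip_feats], dim=-1)
        return self.mlp(fused)  # (batch, num_patches, phi3_dim)

# Usage with a hypothetical patch count of 729 (27x27 grid):
connector = DinoSigLIPConnector()
tokens = connector(torch.randn(1, 729, 1024), torch.randn(1, 729, 1152))
print(tokens.shape)  # torch.Size([1, 729, 3072])
```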

Scripts to build and train the models are available at NMS05/DinoV2-SigLIP-Phi3-LoRA-VLM.
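
Similarly, a hedged sketch of attaching LoRA adapters to Phi3 with the peft library; the rank, alpha, dropout, and target modules are illustrative assumptions, and the actual hyperparameters live in the training scripts linked above:

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

# Load the base Phi-3 language model (exact checkpoint assumed here).
lm = AutoModelForCausalLM.from_pretrained("microsoft/Phi-3-mini-4k-instruct")

# Hypothetical LoRA config; Phi-3 fuses its attention projections into
# a single "qkv_proj" module, so the adapters target that fused layer.
lora_cfg = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["qkv_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
lm = get_peft_model(lm, lora_cfg)
lm.print_trainable_parameters()  # only the LoRA adapters are trainable
```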
