---
library_name: transformers
license: apache-2.0
language:
  - en
pipeline_tag: image-text-to-text
---

# probe_seg_llava-1.5-pt-0.5ift

This model checkpoint contains the segmentation (seg) probes for the CLIP-ConvNeXT-XXL, Llama-3-8b-based LLaVA-1.5 model after the PT stage and 50% of the IFT stage, i.e., trained on the LLaVA-558K dataset and 50% of the LLaVA-665K dataset. Please refer to the documentation for more details.
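As a minimal sketch, the probe checkpoint files can be fetched with `huggingface_hub`. The repo id below is an assumption inferred from the model name and uploader; substitute the actual repo id from the Hub.

```python
from huggingface_hub import snapshot_download

# Assumed repo id -- replace with the actual Hub repo id if it differs.
REPO_ID = "praeclarumjj3/probe_seg_llava-1.5-pt-0.5ift"

def download_probes(local_dir: str = "./probes") -> str:
    """Download all probe checkpoint files to local_dir and return its path."""
    return snapshot_download(repo_id=REPO_ID, local_dir=local_dir)
```

The downloaded weights are probe checkpoints rather than a standalone model, so they are intended to be loaded by the accompanying probing/evaluation code rather than via `AutoModel`.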

## Citation

If you found our work useful, please consider starring ⭐ us on GitHub and citing 📚 us in your research!

```bibtex
@article{jain2024ola_vlm,
    title={{OLA-VLM: Elevating Visual Perception in Multimodal LLMs with Auxiliary Embedding Distillation}},
    author={Jitesh Jain and Zhengyuan Yang and Humphrey Shi and Jianfeng Gao and Jianwei Yang},
    journal={arXiv},
    year={2024}
}
```