---
library_name: transformers
license: apache-2.0
language:
- en
pipeline_tag: image-text-to-text
---
# probe_depth_llava-1.5-pt-ift
This model checkpoint contains the depth probes for the CLIP-ConvNeXT-XXL + Llama-3-8b based LLaVA-1.5 model after the PT and 50% IFT stages, i.e., trained on the LLaVA-558K dataset and 50% of the LLaVA-665K dataset. Please refer to the [Probing documentation](https://github.com/SHI-Labs/OLA-VLM/blob/main/docs/Probing.md) for more details.
- **GitHub Repo:** [https://github.com/SHI-Labs/OLA-VLM](https://github.com/SHI-Labs/OLA-VLM)
- **Project Page:** [https://praeclarumjj3.github.io/ola_vlm/](https://praeclarumjj3.github.io/ola_vlm/)
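
As a minimal sketch, the checkpoint files can be fetched from the Hugging Face Hub with `huggingface_hub.snapshot_download`; the `repo_id` below is assumed from this card's title and may need to be adjusted to the actual namespace. Loading and evaluating the probes themselves follows the Probing documentation linked above.

```python
# Minimal sketch: download this probe checkpoint from the Hugging Face Hub.
# NOTE: the repo_id is an assumption based on the card title; adjust it to the
# actual org/name if it differs.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(repo_id="shi-labs/probe_depth_llava-1.5-pt-ift")
print(f"Probe checkpoint downloaded to: {local_dir}")
```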
## Citation
If you find our work useful, please consider starring ⭐ us on [GitHub](https://github.com/SHI-Labs/OLA-VLM) and citing 📚 us in your research!
```bibtex
@article{jain2024ola_vlm,
title={{OLA-VLM: Elevating Visual Perception in Multimodal LLMs with Auxiliary Embedding Distillation}},
author={Jitesh Jain and Zhengyuan Yang and Humphrey Shi and Jianfeng Gao and Jianwei Yang},
journal={arXiv},
year={2024}
}
```