Request: Please Make a LLaVA-Like Model from Mistral-7B - It Would be Amazing 🤩

#57
by Joseph717171

LLaVA: Large Language and Vision Assistant Visual Instruction Fine-tuning

Instruction tuning large language models (LLMs) using machine-generated instruction-following data has improved zero-shot capabilities on new tasks in the language domain, but the idea is less explored in the multimodal field.

Multimodal Instruct Data. We present the first attempt to use language-only GPT-4 to generate multimodal language-image instruction-following data.
LLaVA Model. We introduce LLaVA (Large Language-and-Vision Assistant), an end-to-end trained large multimodal model that connects a vision encoder and LLM for general-purpose visual and language understanding (see the sketch after the project link below).
Performance. Our early experiments show that LLaVA demonstrates impressive multimodal chat abilities, sometimes exhibiting the behaviors of multimodal GPT-4 on unseen images/instructions, and yields an 85.1% relative score compared with GPT-4 on a synthetic multimodal instruction-following dataset. When fine-tuned on Science QA, the synergy of LLaVA and GPT-4 achieves a new state-of-the-art accuracy of 92.53%.
Open-source. We make GPT-4 generated visual instruction tuning data, our model and code base publicly available.

https://llava-vl.github.io
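
To make the "connects a vision encoder and LLM" part concrete, here is a rough, unofficial sketch of the idea: patch features from a frozen vision encoder are projected into the LLM's token-embedding space and prepended to the text embeddings. LLaVA v1 uses a single linear projection for this connector (LLaVA-1.5 swaps in a small MLP); the class name and shapes below are illustrative, not the project's actual code.

```python
# Conceptual sketch only (not the official LLaVA implementation).
import torch
import torch.nn as nn


class VisionLanguageConnector(nn.Module):
    """Projects vision-encoder features into the LLM embedding space."""

    def __init__(self, vision_dim: int = 1024, llm_dim: int = 4096):
        super().__init__()
        # LLaVA v1 uses a single linear projection as the connector.
        self.proj = nn.Linear(vision_dim, llm_dim)

    def forward(self, image_features: torch.Tensor, text_embeds: torch.Tensor) -> torch.Tensor:
        # image_features: (batch, num_patches, vision_dim) from e.g. a CLIP ViT
        # text_embeds:    (batch, seq_len, llm_dim) from the LLM's embedding layer
        image_tokens = self.proj(image_features)               # (batch, num_patches, llm_dim)
        return torch.cat([image_tokens, text_embeds], dim=1)   # one sequence fed to the LLM


# Illustrative shapes: 576 patch features of width 1024 (CLIP ViT-L/14-ish)
# prepended to 32 text-token embeddings of a 7B LLM with hidden size 4096.
connector = VisionLanguageConnector()
img = torch.randn(1, 576, 1024)
txt = torch.randn(1, 32, 4096)
print(connector(img, txt).shape)  # torch.Size([1, 608, 4096])
```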

Thanks for your time and consideration.

Mistral AI org · edited Oct 16, 2023

cc @Leyo @VictorSanh 👀

stay tuned

What if they do LLaVA, but also integrate Wav2Vec2 so that the model can understand audio, text, and images?

BakLLaVA is the Mistral-7B version of LLaVA.

Indeed, you can find the BakLLaVA implementation here: https://huggingface.co/llava-hf/bakLlava-v1-hf
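
For anyone who wants to try it, here is a minimal usage sketch against that checkpoint, assuming a recent transformers release with LLaVA support (roughly v4.36 or later) and enough GPU memory for fp16 inference. The vicuna-style prompt with the `<image>` placeholder follows the model card, and the COCO image URL is just a convenient test picture.

```python
# Minimal sketch, not an official example: load llava-hf/bakLlava-v1-hf and
# run a single image-grounded prompt.
import requests
import torch
from PIL import Image
from transformers import AutoProcessor, LlavaForConditionalGeneration

model_id = "llava-hf/bakLlava-v1-hf"
processor = AutoProcessor.from_pretrained(model_id)
model = LlavaForConditionalGeneration.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

# Any test image works; this COCO picture is a common placeholder.
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

# Vicuna-style prompt with the <image> placeholder.
prompt = "USER: <image>\nWhat is shown in this image? ASSISTANT:"

inputs = processor(images=image, text=prompt, return_tensors="pt").to(model.device, torch.float16)
output = model.generate(**inputs, max_new_tokens=100)
print(processor.decode(output[0], skip_special_tokens=True))
```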
