---
tags:
- llava
pipeline_tag: image-text-to-text
inference: false
---
# LLaVA Model Card
## Model details
**Model type:**
LLaVA is an open-source chatbot trained by fine-tuning LLaMA/Vicuna on GPT-generated multimodal instruction-following data.
It is an auto-regressive language model, based on the transformer architecture.

**Model date:**
LLaVA-v1.5-13B was trained in September 2023.

**Paper or resources for more information:**
https://llava-vl.github.io/
## License
Llama 2 is licensed under the LLAMA 2 Community License,
Copyright (c) Meta Platforms, Inc. All Rights Reserved.

**Where to send questions or comments about the model:**
https://github.com/haotian-liu/LLaVA/issues
## Intended use
**Primary intended uses:**
The primary use of LLaVA is research on large multimodal models and chatbots.

**Primary intended users:**
The primary intended users of the model are researchers and hobbyists in computer vision, natural language processing, machine learning, and artificial intelligence.
## Training dataset
- 558K filtered image-text pairs from LAION/CC/SBU, captioned by BLIP.
- 158K GPT-generated multimodal instruction-following data.
- 450K academic-task-oriented VQA data mixture.
- 40K ShareGPT data.
## Evaluation dataset
A collection of 12 benchmarks, including 5 academic VQA benchmarks and 7 recent benchmarks specifically proposed for instruction-following LMMs.
## llava-v1.5-13b-GGUF
This repo contains GGUF files for running inference on llava-v1.5-13b with llama.cpp end-to-end, without any extra dependencies.

Stirred by twobob.

**Note:** The mmproj-model-f16.gguf file structure is experimental and may change. Always use the latest llama.cpp code.

Props to @mys.
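As a minimal sketch of how the two GGUF files fit together, here is one way to load them from Python through the llama-cpp-python bindings (an assumption, since this repo itself only requires llama.cpp; the model file name, quantization, and image URL below are placeholders):

```python
# Minimal sketch: load the LLaVA GGUF pair via llama-cpp-python
# (pip install llama-cpp-python). File names are placeholders --
# point them at the files you actually downloaded.
from llama_cpp import Llama
from llama_cpp.llama_chat_format import Llava15ChatHandler

# mmproj-model-f16.gguf carries the CLIP vision projector;
# the main GGUF carries the Vicuna-based language model.
chat_handler = Llava15ChatHandler(clip_model_path="./mmproj-model-f16.gguf")
llm = Llama(
    model_path="./llava-v1.5-13b-Q4_K_M.gguf",  # placeholder quantization
    chat_handler=chat_handler,
    n_ctx=2048,  # leave room for the image embedding tokens
)

response = llm.create_chat_completion(
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "image_url", "image_url": {"url": "https://example.com/photo.jpg"}},
                {"type": "text", "text": "Describe this image in detail."},
            ],
        }
    ],
)
print(response["choices"][0]["message"]["content"])
```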
![image/png](https://cdn-uploads.huggingface.co/production/uploads/64a22257d3149e05bc6d259f/QuoYvv46QmwgAS6d3LYxj.png)