---
library_name: sglang
tags:
  - llava
inference: false
pipeline_tag: image-text-to-text
---

Inference Preparation

This is a fork of liuhaotian/llava-v1.6-vicuna-7b made fully compatible with SGLang for inference. No other changes were made.
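A minimal usage sketch with SGLang's frontend language is shown below. The server port, image file name, and model path are placeholder assumptions, and the exact launch flags may differ across SGLang versions; treat this as a sketch rather than the official recipe.

```python
import sglang as sgl

# Assumes an SGLang server is already running locally, started separately with
# something like (model path and flags are placeholders):
#   python -m sglang.launch_server --model-path <path-to-this-repo> --port 30000
sgl.set_default_backend(sgl.RuntimeEndpoint("http://localhost:30000"))

@sgl.function
def image_qa(s, image_path, question):
    # Send the image and the question together as a single user turn
    s += sgl.user(sgl.image(image_path) + question)
    # Let the model generate the answer
    s += sgl.assistant(sgl.gen("answer", max_tokens=256))

state = image_qa.run(image_path="example.jpg", question="What is shown in this image?")
print(state["answer"])
```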



LLaVA Model Card

Model details

Model type: LLaVA is an open-source chatbot trained by fine-tuning an LLM on multimodal instruction-following data. It is an auto-regressive language model based on the transformer architecture.

Base LLM: lmsys/vicuna-7b-v1.5

Model date: LLaVA-v1.6-Vicuna-7B was trained in December 2023.

Paper or resources for more information: https://llava-vl.github.io/

License

Llama 2 is licensed under the LLAMA 2 Community License, Copyright (c) Meta Platforms, Inc. All Rights Reserved.

Where to send questions or comments about the model: https://github.com/haotian-liu/LLaVA/issues

Intended use

Primary intended uses: The primary use of LLaVA is research on large multimodal models and chatbots.

Primary intended users: The primary intended users of the model are researchers and hobbyists in computer vision, natural language processing, machine learning, and artificial intelligence.

Training dataset

  - 558K filtered image-text pairs from LAION/CC/SBU, captioned by BLIP.
  - 158K GPT-generated multimodal instruction-following data.
  - 500K academic-task-oriented VQA data mixture.
  - 50K GPT-4V data mixture.
  - 40K ShareGPT data.

Evaluation dataset

A collection of 12 benchmarks, including 5 academic VQA benchmarks and 7 recent benchmarks specifically proposed for instruction-following LMMs.