open-llava-next-vicuna-7b Model Card

Model details

Model type: open-llava-next-vicuna-7b is an open-source chatbot trained by fine-tuning the entire model on the open-source Open-LLaVA-Next-mix1M data.

Model date: open-llava-next-vicuna-7b was trained in May 2024.

Paper or resources for more information: [Code]

| Name | LLM | Checkpoint | MME | SEED-image | SQA-image | MMBench | MMBench-CN | TextVQA | GQA |
|---|---|---|---|---|---|---|---|---|---|
| llava-next-vicuna-7b | Vicuna-7B | HF | 1519 | 70.2 | 70.1 | 67.4 | 60.6 | 64.9 | 64.2 |
| open-llava-next-vicuna-7b | Vicuna-7B | HF | 1540 | 71.1 | 70.7 | 68.5 | 60.7 | 67.2 | 64.3 |

Usage

You can use this model as described in our [repository]. Alternatively, you can load this model directly and use it in the [LLaVA repository].
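For example, the checkpoint can be loaded with the loading helper from the LLaVA codebase. The snippet below is a minimal sketch, assuming the `llava` package from the LLaVA repository is installed; `load_pretrained_model` and `get_model_name_from_path` are helpers provided by that codebase, not by this model card.

```python
from llava.model.builder import load_pretrained_model
from llava.mm_utils import get_model_name_from_path

# Hugging Face model id of this checkpoint
model_path = "Lin-Chen/open-llava-next-vicuna-7b"

# Load tokenizer, model, and image processor through the LLaVA codebase
tokenizer, model, image_processor, context_len = load_pretrained_model(
    model_path=model_path,
    model_base=None,
    model_name=get_model_name_from_path(model_path),
)
```

The returned tokenizer, model, and image processor can then be passed to the LLaVA repository's standard inference or evaluation scripts.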

Training dataset

All training data are open-sourced in our repository.

Intended use

Primary intended uses: The primary use of open-llava-next-vicuna-7b is research on large multimodal models and chatbots.

Primary intended users: The primary intended users of the model are researchers and hobbyists in computer vision, natural language processing, machine learning, and artificial intelligence.
