---
license: apache-2.0
inference: false
datasets:
  - Lin-Chen/ShareGPT4V
pipeline_tag: image-text-to-text
---


# open-llava-next-llama3-8b Model Card

## Model details

Model type: open-llava-next-llama3-8b is an open-source chatbot trained by fine-tuning the entire model on the open-source Open-LLaVA-Next-mix1M data.

Model date: open-llava-next-llama3-8b was trained in May 2024.

Paper or resources for more information: [Code]

| Name | ViT | LLM | Weights | MME | SEED | SQA | MMB | MMB-CN | TextVQA | GQA |
|---|---|---|---|---|---|---|---|---|---|---|
| llava-next-vicuna-7b | CLIP-L-336 | Vicuna-7B | HF | 1519 | 70.2 | 70.1 | 67.4 | 60.6 | 64.9 | 64.2 |
| open-llava-next-vicuna-7b | CLIP-L-336 | Vicuna-7B | HF | 1540 | 71.1 | 70.7 | 68.5 | 60.7 | 67.2 | 64.3 |
| open-llava-next-llama3-8b | CLIP-L-336 | LLaMA3-8B | HF | 1552 | 74.4 | 77.3 | 74.4 | 70.4 | 69.8 | 65.9 |

## Usage

You can use this model as described in our [repository]. You can also load it directly and use it in the [LLaVA repository], as sketched below.
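A minimal loading sketch, assuming the LLaVA codebase is installed and that the checkpoint is published under the Hugging Face repo id `Lin-Chen/open-llava-next-llama3-8b` (the exact repo id is an assumption and may differ):

```python
# Sketch: load the checkpoint with the LLaVA codebase's builder utilities.
# Assumes the LLaVA repository is installed and a GPU is available;
# the repo id below is an assumption and may need to be adjusted.
from llava.model.builder import load_pretrained_model
from llava.mm_utils import get_model_name_from_path

model_path = "Lin-Chen/open-llava-next-llama3-8b"  # assumed Hugging Face repo id
tokenizer, model, image_processor, context_len = load_pretrained_model(
    model_path=model_path,
    model_base=None,
    model_name=get_model_name_from_path(model_path),
)
```

The returned `tokenizer`, `model`, and `image_processor` can then be passed to the standard LLaVA inference scripts (e.g. `llava.eval.run_llava`).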

## Training dataset

All training data are open-sourced in our repository.

## Intended use

Primary intended uses: The primary use of open-llava-next-llama3-8b is research on large multimodal models and chatbots.

Primary intended users: The primary intended users of the model are researchers and hobbyists in computer vision, natural language processing, machine learning, and artificial intelligence.