---
inference: false
license: apache-2.0
---


LLaVA-Next-Video Model Card

Model details

Model type: LLaVA-Next-Video is an open-source chatbot trained by fine-tuning an LLM on multimodal instruction-following data.

Base LLM: NousResearch/Nous-Hermes-2-Yi-34B

Model date: LLaVA-Next-Video-34B was trained in April 2024.

Paper or resources for more information: https://github.com/LLaVA-VL/LLaVA-NeXT

License

This model follows the license of its base LLM, NousResearch/Nous-Hermes-2-Yi-34B.

Where to send questions or comments about the model: https://github.com/LLaVA-VL/LLaVA-NeXT/issues

Intended use

Primary intended uses: The primary use of LLaVA is research on large multimodal models and chatbots.

Primary intended users: The primary intended users of the model are researchers and hobbyists in computer vision, natural language processing, machine learning, and artificial intelligence.
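
Below is a minimal inference sketch for research use. It assumes a transformers-compatible conversion of this checkpoint; the checkpoint ID llava-hf/LLaVA-NeXT-Video-34B-hf, the uniform frame-sampling helper, and the example file name are assumptions and are not part of this card. The official inference code is in the LLaVA-NeXT repository linked above.

```python
# Hypothetical inference sketch; checkpoint ID, frame sampling, and input file are assumptions.
import av
import numpy as np
import torch
from transformers import (
    LlavaNextVideoForConditionalGeneration,
    LlavaNextVideoProcessor,
)

MODEL_ID = "llava-hf/LLaVA-NeXT-Video-34B-hf"  # assumed transformers-format checkpoint

processor = LlavaNextVideoProcessor.from_pretrained(MODEL_ID)
model = LlavaNextVideoForConditionalGeneration.from_pretrained(
    MODEL_ID, torch_dtype=torch.float16, device_map="auto"
)

def sample_frames(path: str, num_frames: int = 8) -> np.ndarray:
    """Decode a video and return num_frames uniformly spaced RGB frames."""
    container = av.open(path)
    stream = container.streams.video[0]
    indices = set(np.linspace(0, stream.frames - 1, num_frames, dtype=int).tolist())
    frames = [
        frame.to_ndarray(format="rgb24")
        for i, frame in enumerate(container.decode(stream))
        if i in indices
    ]
    return np.stack(frames)

# Build a chat-style prompt with one video placeholder.
conversation = [
    {
        "role": "user",
        "content": [
            {"type": "video"},
            {"type": "text", "text": "Describe what happens in this video."},
        ],
    }
]
prompt = processor.apply_chat_template(conversation, add_generation_prompt=True)

clip = sample_frames("example.mp4")  # hypothetical input video
inputs = processor(text=prompt, videos=clip, return_tensors="pt").to(model.device)

output_ids = model.generate(**inputs, max_new_tokens=128)
print(processor.decode(output_ids[0], skip_special_tokens=True))
```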

Training dataset

Image

  • 558K filtered image-text pairs from LAION/CC/SBU, captioned by BLIP.
  • 158K GPT-generated multimodal instruction-following data.
  • 500K academic-task-oriented VQA data mixture.
  • 50K GPT-4V data mixture.
  • 40K ShareGPT data.

Video

  • 100K video instruction-following samples from VideoChatGPT-Instruct.

Evaluation dataset

A collection of 4 benchmarks, including 3 academic VQA benchmarks and 1 captioning benchmark.