---
inference: false
license: apache-2.0
---

<br>

# LLaVA-Next-Video Model Card

## Model details

**Model type:**
<br>
LLaVA-Next-Video is an open-source chatbot trained by fine-tuning a large language model (LLM) on multimodal instruction-following data.
<br>
Base LLM: [NousResearch/Nous-Hermes-2-Yi-34B](https://huggingface.co/NousResearch/Nous-Hermes-2-Yi-34B)

**Model date:**
<br>
LLaVA-Next-Video-34B was trained in April 2024.

**Paper or resources for more information:**
<br>
https://github.com/LLaVA-VL/LLaVA-NeXT
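
**Example inference:**
<br>
The checkpoint in this repository targets the original LLaVA-NeXT codebase (hence `inference: false` above). The snippet below is only a minimal sketch using the `transformers` `LlavaNextVideo*` classes instead; it assumes a transformers-converted checkpoint, and the `llava-hf/LLaVA-NeXT-Video-34B-hf` model id is an assumption, not part of this repository.

```python
import numpy as np
import torch
from transformers import LlavaNextVideoForConditionalGeneration, LlavaNextVideoProcessor

# Assumed transformers-converted checkpoint; swap in the id you actually use.
model_id = "llava-hf/LLaVA-NeXT-Video-34B-hf"
processor = LlavaNextVideoProcessor.from_pretrained(model_id)
model = LlavaNextVideoForConditionalGeneration.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

# Placeholder clip: 8 random RGB frames. In practice, sample frames from a
# real video (e.g. with PyAV or decord) into a (num_frames, H, W, 3) array.
video = np.random.randint(0, 256, (8, 336, 336, 3), dtype=np.uint8)

conversation = [
    {
        "role": "user",
        "content": [
            {"type": "video"},
            {"type": "text", "text": "What is happening in this video?"},
        ],
    }
]
prompt = processor.apply_chat_template(conversation, add_generation_prompt=True)

inputs = processor(text=prompt, videos=video, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=100)
print(processor.decode(output[0], skip_special_tokens=True))
```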

## License
This model follows the license of its base LLM, [NousResearch/Nous-Hermes-2-Yi-34B](https://huggingface.co/NousResearch/Nous-Hermes-2-Yi-34B) (Apache 2.0).


## Where to send questions or comments about the model
https://github.com/LLaVA-VL/LLaVA-NeXT/issues

## Intended use
**Primary intended uses:**
<br>
The primary use of LLaVA-Next-Video is research on large multimodal models and chatbots.

**Primary intended users:**
<br>
The primary intended users of the model are researchers and hobbyists in computer vision, natural language processing, machine learning, and artificial intelligence.

## Training dataset

### Image
- 558K filtered image-text pairs from LAION/CC/SBU, captioned by BLIP.
- 158K GPT-generated multimodal instruction-following data.
- 500K academic-task-oriented VQA data mixture.
- 50K GPT-4V data mixture.
- 40K ShareGPT data.
### Video
- 100K VideoChatGPT-Instruct.

## Evaluation dataset
A collection of four benchmarks: three academic VQA benchmarks and one captioning benchmark.