ermu2001 committed eb3aa65 (1 parent: dbdf958)

Create README.md

Files changed (1): README.md added (+39 lines)
---
license: apache-2.0
tags:
- video LLM
datasets:
- OpenGVLab/VideoChat2-IT
---


# PLLaVA Model Card
## Model details
**Model type:**
PLLaVA-13B is an open-source video-language chatbot trained by fine-tuning an Image-LLM on video instruction-following data. It is an auto-regressive language model based on the transformer architecture. Base LLM: llava-hf/llava-v1.6-vicuna-13b-hf

**Model date:**
PLLaVA-13B was trained in April 2024.

**Paper or resources for more information:**
- github repo: https://github.com/magic-research/PLLaVA
- project page: https://pllava.github.io/
- paper link: https://arxiv.org/abs/2404.16994
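
The released checkpoint is meant to be used with the training and inference code in the GitHub repo above. As a minimal sketch, the weights can be fetched from the Hub with `huggingface_hub`; the repo id `ermu2001/pllava-13b` and the local directory below are assumptions, so check this model page and the repo's documentation for the exact names.

```python
# Minimal sketch: fetch the PLLaVA-13B checkpoint for use with the
# magic-research/PLLaVA codebase. The repo_id and local_dir below are
# assumptions -- check the model page and repo docs for the exact values.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="ermu2001/pllava-13b",   # assumed Hub repo id
    local_dir="MODELS/pllava-13b",   # assumed path expected by the repo's scripts
)
print(f"Checkpoint files downloaded to {local_dir}")
```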

## License
This model follows the license of its base model, llava-hf/llava-v1.6-vicuna-13b-hf.

**Where to send questions or comments about the model:**
https://github.com/magic-research/PLLaVA/issues

## Intended use
**Primary intended uses:**
The primary use of PLLaVA is research on large multimodal models and chatbots.

**Primary intended users:**
The primary intended users of the model are researchers and hobbyists in computer vision, natural language processing, machine learning, and artificial intelligence.

## Training dataset
The video instruction-tuning data from OpenGVLab/VideoChat2-IT.
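
The instruction-tuning annotations are hosted on the Hugging Face Hub under OpenGVLab/VideoChat2-IT and are organized into several subsets. The short sketch below lists the annotation files available in that dataset repo; it assumes nothing beyond the dataset id given above.

```python
# Minimal sketch: list the annotation files available in the
# OpenGVLab/VideoChat2-IT instruction-tuning dataset.
from huggingface_hub import list_repo_files

files = list_repo_files("OpenGVLab/VideoChat2-IT", repo_type="dataset")
for path in files[:20]:  # show the first few entries
    print(path)
```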

## Evaluation dataset
A collection of 6 benchmarks: 5 VQA benchmarks and 1 recent benchmark proposed specifically for Video-LMMs.