Xidong committed (verified) · commit acd9ad9 · 1 parent: c590d40

Upload README.md

Files changed (1): README.md (+105, −0)

---
license: mit
library_name: transformers
pipeline_tag: image-text-to-text
---

![header](./assets/assets_header.png)

<p align="center">
📃 <a href="https://arxiv.org/abs/2409.02889" target="_blank">Paper</a> • 🌐 <a href="" target="_blank">Demo</a> • 📃 <a href="https://github.com/FreedomIntelligence/LongLLaVA" target="_blank">GitHub</a> • 🤗 <a href="https://huggingface.co/FreedomIntelligence/LongLLaVA-53B-A13B" target="_blank">LongLLaVA-53B-A13B</a>
</p>

![efficiency](./assets/singleGPU.png)

## 🌈 Update

* **[2024.09.05]** The LongLLaVA repo is published! 🎉
* **[2024.10.12]** [LongLLaVA-53B-A13B](https://huggingface.co/FreedomIntelligence/LongLLaVA-53B-A13B), [LongLLaVA-9B](https://huggingface.co/FreedomIntelligence/LongLLaVA-9B) and [Jamba-9B-Instruct](https://huggingface.co/FreedomIntelligence/Jamba-9B-Instruct) are released! 🎉

## Architecture

<details>
<summary>Click to view the architecture image</summary>

![Architecture Image](./assets/arch.png)

</details>

## Results

<details>
<summary>Click to view the results</summary>

- Main Results
  ![Main Results](./assets/result1.png)
- Diagnostic Results
  ![Diagnostic Results](./assets/diaresult.png)
- Video-NIAH
  ![Video-NIAH](./assets/NIAH.png)

</details>

## Results reproduction

### Evaluation

- Preparation

Get the model inference code from [GitHub](https://github.com/FreedomIntelligence/LongLLaVA).

```bash
# Clone the inference code and enter the repository.
git clone https://github.com/FreedomIntelligence/LongLLaVA.git
cd LongLLaVA
```

- Environment Setup

```bash
# Install the Python dependencies listed by the repository.
pip install -r requirements.txt
```
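
The requirements can also be installed into a fresh, isolated environment. A minimal sketch, where the environment name and Python version are assumptions rather than versions pinned by the repository:

```bash
# Optional: isolate the dependencies in a dedicated conda environment.
# Python 3.10 is an assumption; check the repository for the supported version.
conda create -n longllava python=3.10 -y
conda activate longllava
pip install -r requirements.txt
```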

- Command Line Interface

```bash
# Launch the interactive CLI, pointing --model_dir at a local copy of the weights.
python cli.py --model_dir path-to-longllava
```
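
Here `path-to-longllava` stands for a local directory containing the model weights. One way to obtain them, sketched below with `huggingface-cli` (the target directory name is arbitrary and any download method works):

```bash
# Download the LongLLaVA-9B checkpoint from the Hugging Face Hub
# (requires the huggingface_hub package, e.g. `pip install -U huggingface_hub`).
huggingface-cli download FreedomIntelligence/LongLLaVA-9B --local-dir ./LongLLaVA-9B

# Point the CLI at the downloaded weights.
python cli.py --model_dir ./LongLLaVA-9B
```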

- Model Inference

```python
from cli import Chatbot

query = 'What does the picture show?'
image_paths = ['image_path1']  # path(s) to the input image(s) or video

bot = Chatbot('path-to-longllava')  # local directory containing the model weights
output = bot.chat(query, image_paths)
print(output)  # prints the model's response
```
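
Because `image_paths` is a list, the same call can take many images at once, which is the long-context setting LongLLaVA targets. A sketch under the assumption that `Chatbot.chat` accepts multiple paths exactly as above; the frame filenames are placeholders:

```python
from cli import Chatbot

# Placeholder paths, e.g. frames sampled from a long video or pages of a document.
image_paths = [f'frames/frame_{i:04d}.jpg' for i in range(128)]
query = 'Summarize what happens across these frames.'

bot = Chatbot('path-to-longllava')  # same local checkpoint directory as above
output = bot.chat(query, image_paths)
print(output)
```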

## Acknowledgement

- [LLaVA](https://github.com/haotian-liu/LLaVA): Visual Instruction Tuning (LLaVA) built towards GPT-4V level capabilities and beyond.

## Citation

```
@misc{wang2024longllavascalingmultimodalllms,
      title={LongLLaVA: Scaling Multi-modal LLMs to 1000 Images Efficiently via Hybrid Architecture},
      author={Xidong Wang and Dingjie Song and Shunian Chen and Chen Zhang and Benyou Wang},
      year={2024},
      eprint={2409.02889},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2409.02889},
}
```