void0721 committed
Commit 438eb49
•
1 Parent(s): a9d26e3

Update README.md

Files changed (1)
  1. README.md +6 -15
README.md CHANGED
@@ -3,7 +3,7 @@ license: mit
 ---
 # 🔥 SPHINX: A Mixer of Tasks, Domains, and Embeddings
 
-Official implementation of ['SPHINX: A Mixer of Tasks, Domains, and Embeddings Advances Multi-modal Large Language Models']().
+Official implementation of ['SPHINX: A Mixer of Tasks, Domains, and Embeddings Advances Multi-modal Large Language Models'](https://github.com/Alpha-VLLM/LLaMA2-Accessory/tree/main/SPHINX).
 
 Try out our [web demo 🚀](http://imagebind-llm.opengvlab.com/) here!
 
@@ -20,23 +20,14 @@ We present $\color{goldenrod}{SPHINX}$, a versatile multi-modal large language m
 
 - **Domain Mix.** For data from real-world and synthetic domains, we mix the weights of two domain-specific models for complementarity.
 
-<p align="center"> <img src="https://github.com/Alpha-VLLM/LLaMA2-Accessory/blob/main/SPHINX/figs/pipeline.png" width="90%"> <br>
+<p align="center">
+<img src="https://github.com/Alpha-VLLM/LLaMA2-Accessory/blob/main/SPHINX/figs/pipeline.png" width="90%"> <br>
 </p>
-<p align="center"> <img src="https://github.com/Alpha-VLLM/LLaMA2-Accessory/blob/main/SPHINX/figs/pipeline2.png" width="90%"> <br>
-</p>
-## Demo
-Via our proposed three-fold mixer, $\color{goldenrod}{SPHINX}$ exhibits superior multi-modal understanding and reasoning powers.
-<p align="center"> <img src="figs/1.png" width="70%"> <br>
-</p>
-<p align="center"> <img src="figs/2.png" width="70%"> <br>
-</p>
-<p align="center"> <img src="figs/3.png" width="70%"> <br>
-</p>
-<p align="center"> <img src="figs/4.png" width="50%"> <br>
-</p>
-<p align="center"> <img src="figs/5.png" width="60%"> <br>
+<p align="center">
+<img src="https://github.com/Alpha-VLLM/LLaMA2-Accessory/blob/main/SPHINX/figs/pipeline2.png" width="90%"> <br>
 </p>
 
+
 ## Inference
 This section provides a step-by-step guide for hosting a local SPHINX demo. If you're already familiar with the LLaMA2-Accessory toolkit, note that hosting a SPHINX demo follows the same pipeline as hosting demos for the other models supported by LLaMA2-Accessory.
 
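The **Domain Mix** item in the diff above describes blending the weights of two domain-specific models. As a rough illustration of that kind of checkpoint interpolation — a minimal sketch only, not the repository's actual code; the checkpoint paths, function name, and mixing ratio below are hypothetical:

```python
import torch

def mix_domain_weights(state_dict_a, state_dict_b, alpha=0.5):
    """Linearly interpolate two checkpoints trained on different domains.

    alpha weights the first model; (1 - alpha) weights the second.
    Both checkpoints must share the same architecture and parameter names.
    """
    mixed = {}
    for name, param_a in state_dict_a.items():
        param_b = state_dict_b[name]
        mixed[name] = alpha * param_a + (1.0 - alpha) * param_b
    return mixed

# Load two domain-specific checkpoints and blend them evenly (paths are hypothetical).
real_ckpt = torch.load("sphinx_real.pth", map_location="cpu")
synth_ckpt = torch.load("sphinx_synthetic.pth", map_location="cpu")
mixed_weights = mix_domain_weights(real_ckpt, synth_ckpt, alpha=0.5)
```

The mixed state dict can then be loaded into a single model of the shared architecture, giving one set of weights that draws on both the real-world and synthetic domains.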