czczup committed
Commit 5861e03
Parent: 2731ac7

Update README.md

Files changed (1): README.md (+2, -2)
README.md CHANGED
@@ -9,7 +9,7 @@ inference: false
 
 ## What is InternVL?
 
- \[[Paper](https://arxiv.org/abs/2312.14238)\] \[[GitHub](https://github.com/OpenGVLab/InternVL)\]
+ \[[Paper](https://arxiv.org/abs/2312.14238)\] \[[GitHub](https://github.com/OpenGVLab/InternVL)\] \[[Chat Demo](https://internvl.opengvlab.com/)\]
 
 InternVL scales up the ViT to _**6B parameters**_ and aligns it with LLM.
 
@@ -21,7 +21,7 @@ It is _**the largest open-source vision/vision-language foundation model (14B)**_
 
 ## How to Run?
 
- Please refer to this [README](https://github.com/OpenGVLab/InternVL/tree/main/llava#internvl-for-multimodal-dialogue-using-llava) to run this model.
+ Please refer to this [README](https://github.com/OpenGVLab/InternVL/tree/main/internvl_chat_llava#internvl-for-multimodal-dialogue-using-llava) to run this model.
 
 Note: We have retained the original documentation of LLaVA 1.5 as a more detailed manual. In most cases, you will only need to refer to the new documentation that we have added.
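
Since the updated README keeps the LLaVA 1.5 workflow, loading this checkpoint most likely goes through LLaVA's standard model builder. The sketch below is an assumption-laden illustration, not the official recipe: it assumes the InternVL fork leaves LLaVA's `load_pretrained_model` and `get_model_name_from_path` entry points unchanged, and the checkpoint path is a placeholder, not a confirmed repo id.

```python
# Minimal sketch: load an InternVL-Chat checkpoint through the LLaVA 1.5
# builder that the linked README retains. Assumes the InternVL fork keeps
# these entry points unchanged; replace the placeholder path with your
# local checkpoint directory.
from llava.mm_utils import get_model_name_from_path
from llava.model.builder import load_pretrained_model

model_path = "path/to/internvl-chat-checkpoint"  # placeholder, not a real repo id

# As in upstream LLaVA 1.5, the builder returns the tokenizer, the multimodal
# language model, the image preprocessor, and the maximum context length.
tokenizer, model, image_processor, context_len = load_pretrained_model(
    model_path=model_path,
    model_base=None,
    model_name=get_model_name_from_path(model_path),
)
```

From there, inference follows the retained LLaVA 1.5 documentation (CLI, eval scripts, and serving), plus the InternVL-specific steps in the README linked in the diff above.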