Update README.md
README.md CHANGED
````diff
@@ -17,7 +17,7 @@ pipeline_tag: visual-question-answering
 
 ## Model
 
-
+llava-internlm-7b is a LLaVA model fine-tuned from [InternLM-Chat-7B](https://huggingface.co/internlm/internlm-chat-7b) and [CLIP-ViT-Large-patch14-336](https://huggingface.co/openai/clip-vit-large-patch14-336) with [LLaVA-Pretrain](https://huggingface.co/datasets/liuhaotian/LLaVA-Pretrain) and [LLaVA-Instruct](https://huggingface.co/datasets/liuhaotian/LLaVA-Instruct-150K) by [XTuner](https://github.com/InternLM/xtuner).
 
 
 ## Quickstart
@@ -33,7 +33,7 @@ pip install -U 'xtuner[deepspeed]'
 ```shell
 xtuner chat internlm/internlm-chat-7b \
   --visual-encoder openai/clip-vit-large-patch14 \
-  --llava xtuner/llava-internlm-
+  --llava xtuner/llava-internlm-7b \
   --prompt-template internlm_chat \
   --image $IMAGE_PATH
 ```
@@ -60,7 +60,7 @@ XTuner integrates the MMBench evaluation, and you can perform evaluations with t
 ```bash
 xtuner mmbench internlm/internlm-chat-7b \
   --visual-encoder openai/clip-vit-large-patch14 \
-  --llava xtuner/llava-internlm-
+  --llava xtuner/llava-internlm-7b \
   --prompt-template internlm_chat \
   --data-path $MMBENCH_DATA_PATH \
   --language en \
````
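For reference, here is the Quickstart chat command as it reads after this change, written out as a runnable sketch. The flags and model IDs are taken verbatim from the diff; the image filename is an assumption, and `xtuner` is assumed to be installed as shown earlier in the README (`pip install -U 'xtuner[deepspeed]'`):

```shell
# Sketch of the corrected Quickstart invocation; only the image path is assumed.
export IMAGE_PATH=./example.jpg   # placeholder: any local image file

xtuner chat internlm/internlm-chat-7b \
  --visual-encoder openai/clip-vit-large-patch14 \
  --llava xtuner/llava-internlm-7b \
  --prompt-template internlm_chat \
  --image $IMAGE_PATH
```

The MMBench evaluation command takes the same corrected value, `--llava xtuner/llava-internlm-7b`, alongside its `--data-path` and `--language` options.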