LZHgrla committed
Commit 9baee66 (parent: 3fb7768)

Update README.md

Files changed (1): README.md (+10, −12)
README.md CHANGED
@@ -19,7 +19,14 @@ library_name: xtuner
 
 llava-llama-3-8b is a LLaVA model fine-tuned from [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) and [CLIP-ViT-Large-patch14-336](https://huggingface.co/openai/clip-vit-large-patch14-336) with [LLaVA-Pretrain](https://huggingface.co/datasets/liuhaotian/LLaVA-Pretrain) and [LLaVA-Instruct](https://huggingface.co/datasets/liuhaotian/LLaVA-Instruct-150K) by [XTuner](https://github.com/InternLM/xtuner).
 
-⚠️⚠️⚠️ LLaVA-format LLaVA-Llama-3-8B can be found on [xtuner/llava-llama-3-8b-hf](https://huggingface.co/xtuner/llava-llama-3-8b-hf), which is compatible with downstream deployment and evaluation toolkits.
+**Note: This model is in XTuner LLaVA format.**
+
+Resources:
+
+- GitHub: [xtuner](https://github.com/InternLM/xtuner)
+- HuggingFace LLaVA format model: [xtuner/llava-llama-3-8b-transformers](https://huggingface.co/xtuner/llava-llama-3-8b-transformers)
+- Official LLaVA format model: [xtuner/llava-llama-3-8b-hf](https://huggingface.co/xtuner/llava-llama-3-8b-hf)
+
 
 ## Details
 
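For reference, the links in the added list point to the same model exported in different LLaVA formats; the XTuner-format weights in this repository are consumed through XTuner's own CLI rather than plain `transformers`. A minimal sketch follows, assuming the `xtuner chat` flag names used in XTuner's LLaVA examples; verify the exact invocation against the QuickStart section of this README or `xtuner chat --help`:

```bash
# Sketch: chat with the XTuner-format model about an image.
# Flag names follow XTuner's LLaVA examples and are assumptions here;
# $IMAGE_PATH is a placeholder for a local image file.
pip install -U 'xtuner[deepspeed]'

xtuner chat xtuner/llava-llama-3-8b \
  --visual-encoder openai/clip-vit-large-patch14-336 \
  --llava xtuner/llava-llama-3-8b \
  --prompt-template llama3_chat \
  --image $IMAGE_PATH
```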
 
@@ -76,19 +83,10 @@ xtuner mmbench xtuner/llava-llama-3-8b \
 
 After the evaluation is completed, if it's a development set, it will directly print out the results; If it's a test set, you need to submit `mmbench_result.xlsx` to the official MMBench for final evaluation to obtain precision results!
 
-### Training
-
-1. Pretrain (saved by default in `./work_dirs/llava_llama3_8b_instruct_clip_vit_large_p14_336_e1_gpu8_pretrain/`)
-
-```bash
-NPROC_PER_NODE=8 xtuner train llava_llama3_8b_instruct_clip_vit_large_p14_336_e1_gpu8_pretrain --deepspeed deepspeed_zero2 --seed 1024
-```
-
-2. Fine-tune (saved by default in `./work_dirs/llava_llama3_8b_instruct_full_clip_vit_large_p14_336_lora_e1_gpu8_finetune/`)
-
-```bash
-NPROC_PER_NODE=8 xtuner train llava_llama3_8b_instruct_full_clip_vit_large_p14_336_lora_e1_gpu8_finetune --deepspeed deepspeed_zero2 --seed 1024
-```
+### Reproduce
+
+Please refer to [docs](https://github.com/InternLM/xtuner/tree/main/xtuner/configs/llava/llama3_8b_instruct_clip_vit_large_p14_336#readme).
+
 
 ## Citation
 
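The hunk header above shows only the first line of the MMBench command from the unchanged part of the README. For orientation, a plausible full invocation is sketched below; the flags and placeholder variables are assumptions patterned on XTuner's LLaVA examples, not a verbatim quote of this README:

```bash
# Sketch: MMBench evaluation with XTuner's built-in runner.
# Flag names are assumptions based on XTuner's LLaVA examples;
# $MMBENCH_DATA_PATH and $RESULT_PATH are placeholders.
xtuner mmbench xtuner/llava-llama-3-8b \
  --visual-encoder openai/clip-vit-large-patch14-336 \
  --llava xtuner/llava-llama-3-8b \
  --prompt-template llama3_chat \
  --data-path $MMBENCH_DATA_PATH \
  --work-dir $RESULT_PATH
```

A development-set run prints results directly, while a test-set run produces the `mmbench_result.xlsx` that the context line above says to submit to the official MMBench.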
 
 