---
license: apache-2.0
tags:
- llava
datasets:
- liuhaotian/LLaVA-Pretrain
- liuhaotian/LLaVA-Instruct-150K
pipeline_tag: image-text-to-text
---

## Model
llava-siglip-internlm2-1_8b-pretrain-v1 is a LLaVA checkpoint finetuned from [internlm2-chat-1_8b](https://huggingface.co/internlm/internlm2-chat-1_8b) and [siglip-so400m-patch14-384](https://huggingface.co/google/siglip-so400m-patch14-384) with [LLaVA-Pretrain](https://huggingface.co/datasets/liuhaotian/LLaVA-Pretrain) and [LLaVA-Instruct-150K](https://huggingface.co/datasets/liuhaotian/LLaVA-Instruct-150K) by [Xtuner](https://github.com/InternLM/xtuner). The pretraining phase took 5.5 hours on 4 Nvidia RTX 4090 GPUs (see this [intermediate checkpoint](https://huggingface.co/StarCycle/llava-clip-internlm2-1_8b-pretrain-v1/)). The finetuning phase took 16 hours on 4 Nvidia RTX 4090 GPUs.

The total size of the model is around 2.2B parameters, which makes it suitable for embedded applications like robotics. This model performs slightly better than [llava-clip-internlm2-1_8b-v1](https://huggingface.co/StarCycle/llava-clip-internlm2-1_8b-v1).

I have not carefully tuned the hyperparameters during training. If you have any ideas to improve the model, please open an issue or send an email to zhuohengli@foxmail.com. Your suggestions are welcome!

## Example
![image/png](https://cdn-uploads.huggingface.co/production/uploads/642a298ae5f33939cf3ee600/AEw4i1rkIcUY74hFLhXLW.png)
Explain this photo in English and Chinese:
![image/png](https://cdn-uploads.huggingface.co/production/uploads/642a298ae5f33939cf3ee600/AnrlQbychHvf7gkARdhMV.png)

## Results
Model | MMBench Test (EN) | MMBench Dev (EN) | MMBench Test (CN) | MMBench Dev (CN) | CCBench Dev
------------- | ------------- | ------------- | ------------- | ------------- | -------------
LLaVA-v1.5-7B | 67.7 | 69.2 | 61.0 | 59.7 | 28.4
LLaVA-InternLM-7B | 69.0 | 68.5 | 66.7 | 63.8 | 37.3
LLaVA-InternLM2-7B | 73.3 | 74.6 | 71.7 | 72.0 | 42.5
Bunny-3B | 69.2 | 68.6 | - | - | -
MiniCPM-V | 64.1 | 67.9 | 62.6 | 65.3 | 41.4
llava-clip-internlm2-1_8b-v1 | 63.3 | 63.1 | 63.6 | 61.7 | 35.3
llava-siglip-internlm2-1_8b-v1 (this model) | - | 63.5 | - | 62.9 | 36.3

## Installation
```
git clone https://github.com/huggingface/transformers/
git clone https://github.com/huggingface/peft
git clone https://github.com/InternLM/xtuner
```
Now please replace the files in transformers and xtuner with the source code files in modified_transformers and modified_xtuner.

Then run
```
pip install -e ./xtuner[deepspeed]
apt install git-lfs
```

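The editable install above only covers xtuner. If you also want the patched transformers clone (and the cloned peft) to take effect, one option is to install them in editable mode as well; this is my assumption about the intended setup rather than an official step. A quick import check then confirms the environment:
```
# Assumption: install the patched clones in editable mode so the replaced files are picked up
pip install -e ./transformers
pip install -e ./peft

# Sanity check that all three packages import from this environment
python -c "import transformers, peft, xtuner; print('environment OK')"
```
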
## Chat
```
xtuner chat internlm/internlm2-chat-1_8b \
  --visual-encoder google/siglip-so400m-patch14-384 \
  --llava ./lora_and_projector \
  --prompt-template internlm2_chat \
  --image $IMAGE_PATH
```

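`./lora_and_projector` above refers to the LoRA adapter and projector weights: either the files shipped in this repository or the ones you export yourself in the conversion step below. A minimal sketch for fetching the published weights (the repository id is a placeholder; substitute the id of this model page):
```
# Placeholder repo id: replace <this-model-repo> with the id of this model page
git lfs install
git clone https://huggingface.co/<this-model-repo>
# then point --llava at the adapter folder inside the clone (lora_and_projector here)
```
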
## Common Errors
1.
```
command error: 'libGL.so.1: cannot open shared object file: No such file or directory'!
```
You can solve it by
```
# For Ubuntu
sudo apt-get update
sudo apt-get install libgl1-mesa-glx

# For CentOS and Fedora
sudo yum install mesa-libGL
```

2.
```
Error: mkl-service + Intel(R) MKL: MKL_THREADING_LAYER=INTEL is incompatible with libgomp.so.1 library.
Try to import numpy first or set the threading layer accordingly. Set MKL_SERVICE_FORCE_INTEL to force it.
```
You can solve it by reinstalling numpy.

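For example (the second option is the workaround that the error message itself suggests):
```
# Reinstall numpy so it is imported cleanly before mkl-service picks a threading layer
pip uninstall -y numpy
pip install numpy

# Alternatively, force the MKL threading layer, as the error message suggests
export MKL_SERVICE_FORCE_INTEL=1
```
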
3.
```
ImportError:
InternLM2Converter requires the protobuf library but it was not found in your environment. Checkout the instructions on the
```
You just need
```
pip install protobuf
```
4.
To use tensorboard to visualize the training loss curve:
```
pip install future tensorboard
```

5. If your training process is killed during data preprocessing, you can modify the `map_num_proc` in `xtuner/xtuner/dataset/huggingface.py`:
```
def process(dataset,
            do_dataset_tokenization=True,
            tokenizer=None,
            max_length=None,
            dataset_map_fn=None,
            template_map_fn=None,
            max_dataset_length=None,
            split='train',
            remove_unused_columns=False,
            rename_maps=[],
            shuffle_before_pack=True,
            pack_to_max_length=True,
            use_varlen_attn=False,
            input_ids_with_output=True,
            with_image_token=False,
            map_num_proc=32): # modify it to a smaller number, e.g., 4
```

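If you prefer not to edit the file by hand, here is a hypothetical one-liner that lowers the default shown above (it assumes the parameter still reads literally `map_num_proc=32` in your checkout):
```
# Lower the number of preprocessing workers in the cloned xtuner source
sed -i 's/map_num_proc=32/map_num_proc=4/' ./xtuner/xtuner/dataset/huggingface.py
```
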
6. If you fail to load the model, check whether you installed git-lfs and actually downloaded the model file.

## Data preparation
1. File structure

```
# . means the repository folder you cloned
./data/llava_data
├── LLaVA-Pretrain
│   ├── blip_laion_cc_sbu_558k.json
│   ├── blip_laion_cc_sbu_558k_meta.json
│   └── images
├── LLaVA-Instruct-150K
│   └── llava_v1_5_mix665k.json
└── llava_images
    ├── coco
    │   └── train2017
    ├── gqa
    │   └── images
    ├── ocr_vqa
    │   └── images
    ├── textvqa
    │   └── train_images
    └── vg
        ├── VG_100K
        └── VG_100K_2
```

2. Pretrain Data

LLaVA-Pretrain

```shell
# Make sure you have git-lfs installed (https://git-lfs.com)
git lfs install
git clone https://huggingface.co/datasets/liuhaotian/LLaVA-Pretrain --depth=1
```

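The file structure above expects an `images` folder inside `LLaVA-Pretrain`. As far as I know the pretraining images ship as a zip archive inside that dataset repo, so you will likely need to unpack it; a sketch (the archive name is an assumption, adjust to what the clone actually contains):
```shell
# Assumption: the cloned dataset contains images.zip; unpack it to LLaVA-Pretrain/images
cd LLaVA-Pretrain
unzip -q images.zip -d images
cd ..
```
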
3. Finetune Data

3.1 Text data

LLaVA-Instruct-150K

```shell
# Make sure you have git-lfs installed (https://git-lfs.com)
git lfs install
git clone https://huggingface.co/datasets/liuhaotian/LLaVA-Instruct-150K --depth=1
```

3.2 Image data

3.2.1 COCO (coco): [train2017](http://images.cocodataset.org/zips/train2017.zip)

3.2.2 GQA (gqa): [images](https://downloads.cs.stanford.edu/nlp/data/gqa/images.zip)

3.2.3 OCR-VQA (ocr_vqa): [download script](https://drive.google.com/drive/folders/1_GYPY5UkUy7HIcR0zq3ZCFgeZN7BAfm_?usp=sharing)

⚠️⚠️⚠️ Rename OCR-VQA's images so that every file keeps the `.jpg` extension!

```shell
#!/bin/bash
# Copy every non-jpg OCR-VQA image to a .jpg file alongside it
ocr_vqa_path="<your-directory-path>"

find "$ocr_vqa_path" -type f | while read -r file; do
    extension="${file##*.}"
    if [ "$extension" != "jpg" ]
    then
        cp -- "$file" "${file%.*}.jpg"
    fi
done
```

3.2.4 TextVQA (textvqa): [train_val_images](https://dl.fbaipublicfiles.com/textvqa/images/train_val_images.zip)

3.2.5 VisualGenome (VG): [part1](https://cs.stanford.edu/people/rak248/VG_100K_2/images.zip), [part2](https://cs.stanford.edu/people/rak248/VG_100K_2/images2.zip)

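A sketch of downloading and unpacking these archives into the layout shown in step 1 (run from the repository root; the unzip targets follow my reading of the tree above, and OCR-VQA is handled separately by its download script plus the rename snippet):
```shell
cd ./data/llava_data/llava_images

# COCO -> coco/train2017
wget http://images.cocodataset.org/zips/train2017.zip
mkdir -p coco && unzip -q train2017.zip -d coco

# GQA -> gqa/images
wget https://downloads.cs.stanford.edu/nlp/data/gqa/images.zip
mkdir -p gqa && unzip -q images.zip -d gqa

# TextVQA -> textvqa/train_images
wget https://dl.fbaipublicfiles.com/textvqa/images/train_val_images.zip
mkdir -p textvqa && unzip -q train_val_images.zip -d textvqa

# Visual Genome -> vg/VG_100K and vg/VG_100K_2
wget https://cs.stanford.edu/people/rak248/VG_100K_2/images.zip -O vg_part1.zip
wget https://cs.stanford.edu/people/rak248/VG_100K_2/images2.zip -O vg_part2.zip
mkdir -p vg && unzip -q vg_part1.zip -d vg && unzip -q vg_part2.zip -d vg
```
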
## Cheers! Now train your own model!
1. Alignment module pretraining
```
# single GPU
xtuner train ./pretrain.py --deepspeed deepspeed_zero2

# multiple GPUs
NPROC_PER_NODE=4 xtuner train ./pretrain.py --deepspeed deepspeed_zero2
```

#### Remember to change the batch size and gradient accumulation parameters to fit your hardware, so that your GPU_num * batch_size * gradient_accumulation roughly matches mine if you want to reproduce the result.

The checkpoints and tensorboard logs are saved in ./work_dirs/ by default. I only train for 1 epoch, the same as the original LLaVA paper. Some studies also report that training for multiple epochs makes the model overfit the training dataset and perform worse in other domains.

This is my loss curve for llava-siglip-internlm2-1_8b-pretrain-v1:
![image/png](https://cdn-uploads.huggingface.co/production/uploads/642a298ae5f33939cf3ee600/iNxPxfOvSJq8ZPz8uP_sP.png)

And the learning rate curve:
![image/png](https://cdn-uploads.huggingface.co/production/uploads/642a298ae5f33939cf3ee600/U1U9Kapcd6AIEUySvt2RS.png)

2. Instruction following fine-tuning
```
NPROC_PER_NODE=4 xtuner train ./finetune.py --deepspeed deepspeed_zero2
```
Here is my loss curve (the curve fluctuates strongly because the batch size is small, and I record batch loss instead of epoch loss):
![image/png](https://cdn-uploads.huggingface.co/production/uploads/642a298ae5f33939cf3ee600/kby2Y1dixeTaALliZ4pJa.png)

And the learning rate curve:
![image/png](https://cdn-uploads.huggingface.co/production/uploads/642a298ae5f33939cf3ee600/7ue98bikCOU7ub2jEHrom.png)

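The curves above are read from the tensorboard logs that xtuner writes under ./work_dirs/. To inspect your own runs (assuming you installed tensorboard as described in item 4 of Common Errors), something like the following should work:
```
# Point tensorboard at the work directory; it discovers the event files recursively
tensorboard --logdir ./work_dirs --port 6006
```
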
## Transfer the checkpoints to Huggingface safetensor format
```
xtuner convert pth_to_hf ./finetune.py ./work_dirs/iter_xxx.pth ./my_lora_and_projector
```
The adapter still needs to be used together with internlm/internlm2-chat-1_8b and the vision encoder. I have not tried to merge them yet, but it is possible with Xtuner; see this [tutorial](https://github.com/InternLM/xtuner/blob/f63859b3d0cb39cbac709e3850f3fe01de1023aa/xtuner/configs/llava/README.md#L4).

## MMBench Evaluation
You can first download the MMBench data:
```
wget https://opencompass.openxlab.space/utils/VLMEval/MMBench_DEV_EN.tsv
wget https://opencompass.openxlab.space/utils/VLMEval/MMBench_TEST_EN.tsv
wget https://opencompass.openxlab.space/utils/VLMEval/MMBench_DEV_CN.tsv
wget https://opencompass.openxlab.space/utils/VLMEval/MMBench_TEST_CN.tsv
wget https://opencompass.openxlab.space/utils/VLMEval/CCBench.tsv
```
Then run:
```
NPROC_PER_NODE=8 xtuner mmbench internlm/internlm2-chat-1_8b \
  --visual-encoder google/siglip-so400m-patch14-384 \
  --llava ./my_lora_and_projector \
  --prompt-template internlm2_chat \
  --data-path $MMBENCH_DATA_PATH \
  --work-dir $RESULT_PATH
```
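Each run scores one tsv file. For reference, an illustrative loop over all of the downloaded splits (the output folders are my own naming, not part of xtuner):
```
for split in MMBench_DEV_EN MMBench_TEST_EN MMBench_DEV_CN MMBench_TEST_CN CCBench; do
    NPROC_PER_NODE=8 xtuner mmbench internlm/internlm2-chat-1_8b \
      --visual-encoder google/siglip-so400m-patch14-384 \
      --llava ./my_lora_and_projector \
      --prompt-template internlm2_chat \
      --data-path ./${split}.tsv \
      --work-dir ./results/${split}
done
```
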
You can also use [VLMEvalKit](https://github.com/open-compass/VLMEvalKit) to evaluate it on other benchmarks.

## Deployment
The Xtuner team is developing an HF chatbot (based on Huggingface transformers) and an LMDeploy chatbot (based on TurboMind). I am waiting for the final version of their API.