czczup committed
Commit 9089edf
1 Parent(s): f54d62f

Update README.md

Files changed (1)
  1. README.md +51 -5
README.md CHANGED
@@ -17,6 +17,57 @@ datasets:

InternVL scales up the ViT to _**6B parameters**_ and aligns it with LLM.

## Model Details
- **Model Type:** vision large language model, multimodal chatbot
- **Model Stats:**
@@ -77,11 +128,6 @@ response = model.chat(tokenizer, pixel_values, question, generation_config)
```


- ## Evaluation
-
- TODO
-
-
## Citation

If you find this project useful in your research, please consider citing:
 
InternVL scales up the ViT to _**6B parameters**_ and aligns it with LLM.

+ ## InternVL-Chat-V1.2 Blog
+
+ > Date: 2024/02/12<br>
+ > Developed by: Zhe Chen, Weiyun Wang, Wenhai Wang
+
+ In January 2024, we released [InternVL-Chat-V1.1](https://huggingface.co/OpenGVLab/InternVL-Chat-Chinese-V1-1), featuring a structure similar to LLaVA, including a ViT, an MLP projector, and an LLM. In that version, we explored increasing the resolution to 448x448, enhancing OCR capabilities, and improving support for Chinese conversations. However, it still lagged behind the existing SOTA on some benchmarks.
+
+ <img width="600" alt="image" src="https://cdn-uploads.huggingface.co/production/uploads/64119264f0f81eb569e0d569/GIEKCvNc1Y5iMQqLv645p.png">
+
+ Today, we are excited to introduce InternVL-Chat-V1.2. Inspired by [LLaVA-NeXT-34B](https://llava-vl.github.io/blog/2024-01-30-llava-next/), we have also adopted [Nous-Hermes-2-Yi-34B](https://huggingface.co/NousResearch/Nous-Hermes-2-Yi-34B) as the language model.
+ From the experimental results, **we've observed that a stronger language model (34B) can better leverage the powerful capabilities of our vision foundation model ([InternViT-6B](https://huggingface.co/OpenGVLab/InternViT-6B-448px-V1-2)).**
+
+ For better training reproducibility, we follow a minimalist design and data-efficient recipe similar to LLaVA-NeXT. To reduce training costs, we provide a pre-trained MLP projector and employ only around 1 million visual instruction tuning samples for SFT. Our model has a total of 40 billion parameters and can be trained within 1.5 days using 32 A100 GPUs. The code, data, and model will be made publicly available.
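The resulting model keeps the LLaVA-style wiring described above: InternViT-6B encodes the 448x448 image, an MLP projector maps the visual tokens into the LLM embedding space, and Nous-Hermes-2-Yi-34B consumes the combined sequence. The PyTorch sketch below illustrates this forward path only; the module names, hidden sizes, and projector depth are assumptions for exposition, not the released InternVL-Chat-V1.2 code.

```python
import torch
import torch.nn as nn


class LLaVAStyleVLM(nn.Module):
    """Illustrative LLaVA-style wiring: vision encoder -> MLP projector -> LLM.

    Module names, hidden sizes, and projector depth are assumptions for
    exposition; they are not the released implementation.
    """

    def __init__(self, vision_encoder, llm, vision_dim=3200, llm_dim=7168):
        super().__init__()
        self.vision_encoder = vision_encoder  # e.g. an InternViT-6B backbone
        self.llm = llm                        # e.g. Nous-Hermes-2-Yi-34B
        # MLP projector mapping visual features into the LLM embedding space;
        # in the recipe above it is pre-trained before SFT to cut training cost.
        self.projector = nn.Sequential(
            nn.Linear(vision_dim, llm_dim),
            nn.GELU(),
            nn.Linear(llm_dim, llm_dim),
        )

    def forward(self, pixel_values, text_embeds):
        # 1) Encode the 448x448 image into a sequence of visual tokens.
        vision_tokens = self.vision_encoder(pixel_values)   # (B, N, vision_dim)
        # 2) Project the visual tokens into the LLM embedding space.
        visual_embeds = self.projector(vision_tokens)        # (B, N, llm_dim)
        # 3) Prepend the visual embeddings to the text embeddings and run the LLM.
        inputs_embeds = torch.cat([visual_embeds, text_embeds], dim=1)
        return self.llm(inputs_embeds=inputs_embeds)
```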
+
+ ### Data Preparation
+
+ Inspired by LLaVA-NeXT, we adopted a data-efficient SFT strategy to train InternVL-Chat-V1.2, utilizing approximately 1.2M visual instruction tuning samples in total, all of which are fully open-source. In a macro sense, we build upon [ShareGPT-4V](https://github.com/InternLM/InternLM-XComposer/blob/main/projects/ShareGPT4V/docs/Data.md#prepare-images) and additionally integrate [LLaVA-ZH](https://huggingface.co/datasets/openbmb/llava_zh), [DVQA](https://github.com/kushalkafle/DVQA_dataset), [ChartQA](https://github.com/vis-nlp/ChartQA), [AI2D](https://allenai.org/data/diagrams), [DocVQA](https://www.docvqa.org/datasets), [GeoQA+](https://github.com/SCNU203/GeoQA-Plus), and [SynthDoG-EN](https://huggingface.co/datasets/naver-clova-ix/synthdog-en). Most of the data remains consistent with LLaVA-NeXT.
+
+ For more details about data preparation, please see [here](./internvl_chat#prepare-training-datasets).
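To make the mixture above concrete, the sketch below merges several LLaVA-format conversation files into a single SFT set. The file names and the record schema are hypothetical; the project's actual dataset list and preparation steps are documented at the link above.

```python
import json
import random

# Hypothetical file names; the real dataset list and formats are described
# at the data-preparation link above.
SOURCES = [
    "sharegpt4v.json",
    "llava_zh.json",
    "dvqa.json",
    "chartqa.json",
    "ai2d.json",
    "docvqa.json",
    "geoqa_plus.json",
    "synthdog_en.json",
]

merged = []
for src in SOURCES:
    # Each file is assumed to hold a list of LLaVA-style records:
    # {"image": ..., "conversations": [{"from": "human", ...}, {"from": "gpt", ...}]}
    with open(src) as f:
        samples = json.load(f)
    for sample in samples:
        sample["source"] = src  # keep provenance for later inspection
    merged.extend(samples)

random.seed(0)
random.shuffle(merged)  # mix the sources before SFT

with open("internvl_sft_mix.json", "w") as f:
    json.dump(merged, f, ensure_ascii=False)

print(f"total SFT samples: {len(merged)}")
```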
+
+ ### Performance
+
+ \* Proprietary Model
+
+ | name | image size | MMMU<br>(val) | MMMU<br>(test) | MathVista<br>(testmini) | MMB<br>(test) | MMB-CN<br>(test) | MMVP | MME | ScienceQA<br>(image) | POPE | SEEDv1<br>(image) | TextVQA | VizWiz | GQA |
+ | ------------------ | ---------- | ------------- | -------------- | ----------------------- | ------------- | ---------------- | ---- | -------- | -------------------- | ---- | ----------------- | ------- | ------ | ---- |
+ | GPT-4V\* | unknown | 56.8 | 55.7 | 49.9 | 77.0 | 74.4 | 38.7 | 1409/517 | - | - | 71.6 | 78.0 | - | - |
+ | Gemini Ultra\* | unknown | 59.4 | - | 53.0 | - | - | - | - | - | - | - | 82.3 | - | - |
+ | Gemini Pro\* | unknown | 47.9 | - | 45.2 | 73.6 | 74.3 | 40.7 | 1497/437 | - | - | 70.7 | 74.6 | - | - |
+ | Qwen-VL-Plus\* | unknown | 45.2 | 40.8 | 43.3 | 67.0 | 70.7 | - | 1681/502 | - | - | 65.7 | 78.9 | - | - |
+ | Qwen-VL-Max\* | unknown | 51.4 | 46.8 | 51.0 | 77.6 | 75.7 | - | - | - | - | - | 79.5 | - | - |
+ | | | | | | | | | | | | | | | | |
+ | LLaVA-NeXT-34B | 672x672 | 51.1 | 44.7 | 46.5 | 79.3 | 79.0 | - | 1631/397 | 81.8 | 87.7 | 75.9 | 69.5 | 63.8 | 67.1 |
+ | InternVL-Chat-V1.2 | 448x448 | 51.6 | 46.2 | 47.7 | 82.2 | 81.2 | 56.7 | 1672/509 | 83.3 | 88.0 | TODO | 69.7 | 60.0 | 64.0 |
+
+ - MMBench results are collected from the [leaderboard](https://mmbench.opencompass.org.cn/leaderboard).
+ - On most benchmarks, InternVL-Chat-V1.2 achieves better performance than LLaVA-NeXT-34B.
+
+ ### Training (SFT)
+
+ We provide [slurm scripts](./internvl_chat/shell/hermes2_yi34b/internvl_chat_v1_2_hermes2_yi34b_448_finetune.sh) for multi-node, multi-GPU training. You can use either 32 or 64 GPUs to train this model; with 64 GPUs, training takes approximately 18 hours.
+
+ For more details about training, please see [here](./internvl_chat#start-training).
+
+ The hyperparameters used for fine-tuning are listed in the following table.
+
+ | Hyperparameter | Trainable Params | Global Batch Size | Learning Rate | Epochs | Max Length | Weight Decay |
+ | ------------------ | ---------------- | ----------------- | ------------- | ------ | ---------- | ------------ |
+ | InternVL-Chat-V1.2 | 40B (full model) | 512 | 1e-5 | 1 | 2048 | 0.05 |
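As a rough illustration of how this recipe maps onto a standard setup, the sketch below expresses the table's values as Hugging Face `TrainingArguments` and sanity-checks the compute budget (32 GPUs for 1.5 days and 64 GPUs for 18 hours both come to about 1,152 GPU-hours). The per-device batch size, precision flag, output path, and the mapping itself are assumptions; the authoritative configuration is the slurm script linked above.

```python
from transformers import TrainingArguments

# Global batch size 512 split across GPUs (values below are illustrative assumptions).
NUM_GPUS = 32
PER_DEVICE_BATCH = 4
GRAD_ACCUM = 512 // (NUM_GPUS * PER_DEVICE_BATCH)  # 512 / (32 * 4) = 4

args = TrainingArguments(
    output_dir="./internvl_chat_v1_2_sft",           # hypothetical output path
    per_device_train_batch_size=PER_DEVICE_BATCH,
    gradient_accumulation_steps=GRAD_ACCUM,          # effective global batch size = 512
    learning_rate=1e-5,
    num_train_epochs=1,
    weight_decay=0.05,
    bf16=True,                                       # assumed mixed-precision setting
    save_strategy="epoch",
)

# Sanity check on the stated compute budget: both schedules are ~1,152 GPU-hours.
assert 32 * 1.5 * 24 == 64 * 18 == 1152
# The 2048 max sequence length is enforced by the tokenizer/data collator,
# not by TrainingArguments.
```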
+
+
## Model Details
- **Model Type:** vision large language model, multimodal chatbot
- **Model Stats:**
 
```


## Citation

If you find this project useful in your research, please consider citing: