---
license: mit
datasets:
- laion/laion2B-en
- laion/laion-coco
- laion/laion2B-multi
- kakaobrain/coyo-700m
- conceptual_captions
- wanng/wukong100m
pipeline_tag: image-feature-extraction
---

# Model Card for InternViT-6B-448px-V1-5
<p align="center">
  <img src="https://cdn-uploads.huggingface.co/production/uploads/64119264f0f81eb569e0d569/AUE-3OBtfr9vDA7Elgkhd.webp" alt="Image Description" width="300" height="300">
</p>

[\[🆕 Blog\]](https://internvl.github.io/blog/)  [\[📜 InternVL 1.0 Paper\]](https://arxiv.org/abs/2312.14238)  [\[📜 InternVL 1.5 Report\]](https://arxiv.org/abs/2404.16821)  [\[🗨️ Chat Demo\]](https://internvl.opengvlab.com/)

[\[🤗 HF Demo\]](https://huggingface.co/spaces/OpenGVLab/InternVL)  [\[🚀 Quick Start\]](#model-usage)  [\[🌐 Community-hosted API\]](https://rapidapi.com/adushar1320/api/internvl-chat)  [\[📖 Chinese Explanation\]](https://zhuanlan.zhihu.com/p/675877376)

InternViT-6B-448px-V1-5 is built on the strong foundation of [InternViT-6B-448px-V1-2](https://huggingface.co/OpenGVLab/InternViT-6B-448px-V1-2) and continues its pre-training. In this update, the training resolution is expanded from a fixed 448&times;448 to dynamic resolution, where the basic tile size is 448&times;448 and the number of tiles ranges from 1 to 12.
Additionally, we enhance the scale, quality, and diversity of the pre-training data, giving the 1.5 version model strong robustness, OCR capability, and high-resolution processing capability.
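
The dynamic-resolution scheme can be pictured as a tiling step in front of the ViT: the input image is resized onto a grid of 448&times;448 tiles whose shape roughly matches its aspect ratio, and each tile is then encoded at the base resolution. The sketch below is an illustrative simplification under that reading; the function and constant names are ours, not the released preprocessing code, which also handles tie-breaking between candidate grids and an optional thumbnail tile.

```python
from PIL import Image

TILE = 448      # basic tile size used by InternViT-6B-448px-V1-5
MAX_TILES = 12  # maximum number of tiles per image in this release

def dynamic_tiles(image: Image.Image, max_tiles: int = MAX_TILES, tile: int = TILE):
    """Resize `image` to the grid of `tile`-sized patches whose aspect ratio is
    closest to the input, then crop it into individual tiles (row-major order)."""
    w, h = image.size
    aspect = w / h
    # Candidate grids (cols, rows) containing between 1 and max_tiles tiles.
    grids = [(c, r) for c in range(1, max_tiles + 1)
                    for r in range(1, max_tiles + 1) if c * r <= max_tiles]
    cols, rows = min(grids, key=lambda g: abs(g[0] / g[1] - aspect))
    resized = image.resize((cols * tile, rows * tile))
    return [resized.crop((c * tile, r * tile, (c + 1) * tile, (r + 1) * tile))
            for r in range(rows) for c in range(cols)]

# Under this simple nearest-aspect-ratio rule, a 1920x1080 image is resized to
# 896x448 and split into a 2x1 grid, i.e. two 448x448 tiles.
```

Each resulting tile can then be preprocessed into pixel values exactly as in the usage example further down.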

## Model Details
- **Model Type:** vision foundation model, feature backbone
- **Model Stats:**
  - Params (M): 5540 (the last 3 blocks are discarded)
  - Image size: 448 x 448, training with 1 - 12 tiles
- **Pretrain Dataset:** LAION-en, LAION-zh, COYO, GRIT, COCO, TextCaps, Objects365, OpenImages, All-Seeing, Wukong-OCR, LaionCOCO-OCR, and other OCR-related datasets. 
To enhance the OCR capability of the model, we have incorporated additional OCR data alongside the general caption datasets. Specifically, we utilized PaddleOCR to perform Chinese OCR on images from Wukong and English OCR on images from LAION-COCO. 
- **Note:** InternViT-6B originally had 48 blocks, and we found that using the output after the fourth-to-last block worked best for MLLMs. For ease of use and to save GPU memory, we simply discarded the last 3 blocks. The model now has 45 blocks, and the number of parameters has been reduced from 5.9B to 5.5B. Therefore, if you want to build an MLLM based on this model, **please make use of the features from the last layer.**

## Released Models
### Vision Foundation model
| Model                   | Date       | Download                                                               | Note                             |
| ----------------------- | ---------- | ---------------------------------------------------------------------- | -------------------------------- |
| InternViT-6B-448px-V1-5 | 2024.04.20 | 🤗 [HF link](https://huggingface.co/OpenGVLab/InternViT-6B-448px-V1-5) | supports dynamic resolution, super strong OCR (🔥new) |
| InternViT-6B-448px-V1-2 | 2024.02.11 | 🤗 [HF link](https://huggingface.co/OpenGVLab/InternViT-6B-448px-V1-2) | 448 resolution                   |
| InternViT-6B-448px-V1-0 | 2024.01.30 | 🤗 [HF link](https://huggingface.co/OpenGVLab/InternViT-6B-448px-V1-0) | 448 resolution                   |
| InternViT-6B-224px      | 2023.12.22 | 🤗 [HF link](https://huggingface.co/OpenGVLab/InternViT-6B-224px)      | vision foundation model          |
| InternVL-14B-224px      | 2023.12.22 | 🤗 [HF link](https://huggingface.co/OpenGVLab/InternVL-14B-224px)      | vision-language foundation model |

### Multimodal Large Language Model (MLLM)
| Model                   | Date       | Download                                                                    | Note                               |
| ----------------------- | ---------- | --------------------------------------------------------------------------- | ---------------------------------- |
| InternVL-Chat-V1-5      | 2024.04.18 | 🤗 [HF link](https://huggingface.co/OpenGVLab/InternVL-Chat-V1-5)            | supports 4K images; super strong OCR; approaches the performance of GPT-4V and Gemini Pro on various benchmarks like MMMU, DocVQA, ChartQA, MathVista, etc. (🔥new) |
| InternVL-Chat-V1-2-Plus | 2024.02.21 | 🤗 [HF link](https://huggingface.co/OpenGVLab/InternVL-Chat-V1-2-Plus)       | more SFT data and stronger performance |
| InternVL-Chat-V1-2      | 2024.02.11 | 🤗 [HF link](https://huggingface.co/OpenGVLab/InternVL-Chat-V1-2)            | scales the LLM up to 34B       |
| InternVL-Chat-V1-1      | 2024.01.24 | 🤗 [HF link](https://huggingface.co/OpenGVLab/InternVL-Chat-V1-1)            | supports Chinese and stronger OCR   |

## Model Usage (Image Embeddings)

```python
import torch
from PIL import Image
from transformers import AutoModel, CLIPImageProcessor

# Load the vision backbone in bfloat16 on the GPU; trust_remote_code is required
# because InternViT ships its model implementation with the checkpoint.
model = AutoModel.from_pretrained(
    'OpenGVLab/InternViT-6B-448px-V1-5',
    torch_dtype=torch.bfloat16,
    low_cpu_mem_usage=True,
    trust_remote_code=True).cuda().eval()

image = Image.open('./examples/image1.jpg').convert('RGB')

image_processor = CLIPImageProcessor.from_pretrained('OpenGVLab/InternViT-6B-448px-V1-5')

# Preprocess the image and move the pixel values to the model's dtype and device.
pixel_values = image_processor(images=image, return_tensors='pt').pixel_values
pixel_values = pixel_values.to(torch.bfloat16).cuda()

outputs = model(pixel_values)
```
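
As noted under Model Details, downstream MLLMs should consume the last-layer features of this 45-block backbone. Below is a minimal continuation of the snippet above, assuming the remote-code model returns a standard `transformers` output object exposing `last_hidden_state` and `pooler_output`; inspect `outputs` to confirm this for your installed version.

```python
# Continuing from the snippet above; the field names assume a standard
# BaseModelOutputWithPooling-style return value from the remote-code model.
last_hidden_state = outputs.last_hidden_state  # per-token features, shape (1, num_tokens, hidden_dim)
pooled_embedding = outputs.pooler_output       # pooled image embedding, shape (1, hidden_dim)
print(last_hidden_state.shape, pooled_embedding.shape)
```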

## Citation

If you find this project useful in your research, please consider citing:

```BibTeX
@article{chen2023internvl,
  title={InternVL: Scaling up Vision Foundation Models and Aligning for Generic Visual-Linguistic Tasks},
  author={Chen, Zhe and Wu, Jiannan and Wang, Wenhai and Su, Weijie and Chen, Guo and Xing, Sen and Zhong, Muyan and Zhang, Qinglong and Zhu, Xizhou and Lu, Lewei and Li, Bin and Luo, Ping and Lu, Tong and Qiao, Yu and Dai, Jifeng},
  journal={arXiv preprint arXiv:2312.14238},
  year={2023}
}
@article{chen2024far,
  title={How Far Are We to GPT-4V? Closing the Gap to Commercial Multimodal Models with Open-Source Suites},
  author={Chen, Zhe and Wang, Weiyun and Tian, Hao and Ye, Shenglong and Gao, Zhangwei and Cui, Erfei and Tong, Wenwen and Hu, Kongzhi and Luo, Jiapeng and Ma, Zheng and others},
  journal={arXiv preprint arXiv:2404.16821},
  year={2024}
}
```


## Acknowledgement

InternVL is built with reference to the code of the following projects: [OpenAI CLIP](https://github.com/openai/CLIP), [Open CLIP](https://github.com/mlfoundations/open_clip), [CLIP Benchmark](https://github.com/LAION-AI/CLIP_benchmark), [EVA](https://github.com/baaivision/EVA/tree/master), [InternImage](https://github.com/OpenGVLab/InternImage), [ViT-Adapter](https://github.com/czczup/ViT-Adapter), [MMSegmentation](https://github.com/open-mmlab/mmsegmentation), [Transformers](https://github.com/huggingface/transformers), [DINOv2](https://github.com/facebookresearch/dinov2), [BLIP-2](https://github.com/salesforce/LAVIS/tree/main/projects/blip2), [Qwen-VL](https://github.com/QwenLM/Qwen-VL/tree/master/eval_mm), and [LLaVA-1.5](https://github.com/haotian-liu/LLaVA). Thanks for their awesome work!