czczup committed: Update README.md (commit 511756c, parent c0a6b37)

Files changed (1): README.md (+1 −29)
README.md CHANGED

````diff
@@ -10,10 +10,7 @@ datasets:
 pipeline_tag: image-feature-extraction
 ---
 
-# Model Card for InternViT-6B-448px-V1-2
-<p align="center">
-<img src="https://cdn-uploads.huggingface.co/production/uploads/64119264f0f81eb569e0d569/re658pVjHaJEnJerlmRco.webp" alt="Image Description" width="300" height="300">
-</p>
+# InternViT-6B-448px-V1-2
 
 [\[πŸ†• Blog\]](https://internvl.github.io/blog/) [\[πŸ“œ InternVL 1.0 Paper\]](https://arxiv.org/abs/2312.14238) [\[πŸ“œ InternVL 1.5 Report\]](https://arxiv.org/abs/2404.16821) [\[πŸ—¨οΈ Chat Demo\]](https://internvl.opengvlab.com/)
 
@@ -31,26 +28,6 @@ To equip the model with high-resolution processing and OCR capabilities, both th
 To enhance the OCR capability of the model, we have incorporated additional OCR data alongside the general caption datasets. Specifically, we utilized PaddleOCR to perform Chinese OCR on images from Wukong and English OCR on images from LAION-COCO.
 - **Note:** InternViT-6B originally had 48 blocks, and we found that using the output after the fourth-to-last block worked best for MLLM. For ease of use and to save GPU memory, we simply discarded the last 3 blocks. Now, the model has only 45 blocks and the number of parameters has been reduced from 5.9B to 5.5B. Therefore, if you want to build a MLLM based on this model, **please make use of the features from the last layer.**
 
-## Released Models
-
-### Vision Foundation model
-| Model | Date | Download | Note |
-| ----------------------- | ---------- | ---------------------------------------------------------------------- | -------------------------------- |
-| InternViT-6B-448px-V1-5 | 2024.04.20 | πŸ€— [HF link](https://huggingface.co/OpenGVLab/InternViT-6B-448px-V1-5) | support dynamic resolution, super strong OCR (πŸ”₯new) |
-| InternViT-6B-448px-V1-2 | 2024.02.11 | πŸ€— [HF link](https://huggingface.co/OpenGVLab/InternViT-6B-448px-V1-2) | 448 resolution |
-| InternViT-6B-448px-V1-0 | 2024.01.30 | πŸ€— [HF link](https://huggingface.co/OpenGVLab/InternViT-6B-448px-V1-0) | 448 resolution |
-| InternViT-6B-224px | 2023.12.22 | πŸ€— [HF link](https://huggingface.co/OpenGVLab/InternViT-6B-224px) | vision foundation model |
-| InternVL-14B-224px | 2023.12.22 | πŸ€— [HF link](https://huggingface.co/OpenGVLab/InternVL-14B-224px) | vision-language foundation model |
-
-### Multimodal Large Language Model (MLLM)
-| Model | Date | Download | Note |
-| ----------------------- | ---------- | --------------------------------------------------------------------------- | ---------------------------------- |
-| InternVL-Chat-V1-5 | 2024.04.18 | πŸ€— [HF link](https://huggingface.co/OpenGVLab/InternVL-Chat-V1-5) | support 4K image; super strong OCR; approaching the performance of GPT-4V and Gemini Pro on various benchmarks like MMMU, DocVQA, ChartQA, MathVista, etc. (πŸ”₯new) |
-| InternVL-Chat-V1-2-Plus | 2024.02.21 | πŸ€— [HF link](https://huggingface.co/OpenGVLab/InternVL-Chat-V1-2-Plus) | more SFT data and stronger |
-| InternVL-Chat-V1-2 | 2024.02.11 | πŸ€— [HF link](https://huggingface.co/OpenGVLab/InternVL-Chat-V1-2) | scaling up LLM to 34B |
-| InternVL-Chat-V1-1 | 2024.01.24 | πŸ€— [HF link](https://huggingface.co/OpenGVLab/InternVL-Chat-V1-1) | support Chinese and stronger OCR |
-
-
 ## Model Usage (Image Embeddings)
 
 ```python
@@ -92,8 +69,3 @@ If you find this project useful in your research, please consider citing:
   year={2024}
 }
 ```
-
-
-## Acknowledgement
-
-InternVL is built with reference to the code of the following projects: [OpenAI CLIP](https://github.com/openai/CLIP), [Open CLIP](https://github.com/mlfoundations/open_clip), [CLIP Benchmark](https://github.com/LAION-AI/CLIP_benchmark), [EVA](https://github.com/baaivision/EVA/tree/master), [InternImage](https://github.com/OpenGVLab/InternImage), [ViT-Adapter](https://github.com/czczup/ViT-Adapter), [MMSegmentation](https://github.com/open-mmlab/mmsegmentation), [Transformers](https://github.com/huggingface/transformers), [DINOv2](https://github.com/facebookresearch/dinov2), [BLIP-2](https://github.com/salesforce/LAVIS/tree/main/projects/blip2), [Qwen-VL](https://github.com/QwenLM/Qwen-VL/tree/master/eval_mm), and [LLaVA-1.5](https://github.com/haotian-liu/LLaVA). Thanks for their awesome work!
````
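The README's note about the trimmed architecture (48 blocks reduced to 45, so the checkpoint's last layer is the fourth-to-last block of the original model) can be sketched with a toy stand-in. This is purely illustrative: the tiny residual "blocks" and width of 8 are hypothetical substitutes for the real 6B-parameter transformer, not the actual InternViT code.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_block(dim):
    # Toy stand-in for one ViT encoder block: a residual linear map.
    w = rng.normal(scale=0.02, size=(dim, dim))
    return lambda x: x + x @ w

# Hypothetical miniature: 48 blocks like the original InternViT-6B,
# but with width 8 instead of the real hidden size.
dim, original_depth = 8, 48
blocks = [make_block(dim) for _ in range(original_depth)]

# As the note explains, the released checkpoint drops the last 3 blocks,
# so this model's final layer corresponds to the original fourth-to-last block.
blocks = blocks[:-3]
assert len(blocks) == 45

x = rng.normal(size=(1, 16, dim))  # (batch, tokens, dim)
for block in blocks:
    x = block(x)
last_hidden_state = x              # use these last-layer features for the MLLM
print(last_hidden_state.shape)     # (1, 16, 8)
```

The takeaway matches the bolded advice in the note: there is no "fourth-to-last block" left to tap into, so downstream MLLMs should simply consume the final layer's output.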
 
 
 
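The "Model Usage (Image Embeddings)" section's Python snippet is truncated in this diff view, but its purpose is to produce image embeddings for downstream comparison. A common follow-up step, sketched here with random stand-in vectors rather than real model outputs, is scoring embedding pairs by cosine similarity:

```python
import numpy as np

def cosine_similarity(a, b):
    # L2-normalize both vectors, then take their dot product.
    a = a / np.linalg.norm(a)
    b = b / np.linalg.norm(b)
    return float(a @ b)

rng = np.random.default_rng(0)
# Hypothetical stand-ins for two image embeddings; in practice these
# would come from the model's output on two preprocessed images.
emb_a = rng.normal(size=3200)
emb_b = rng.normal(size=3200)

self_sim = cosine_similarity(emb_a, emb_a)   # identical inputs: ~1.0
cross_sim = cosine_similarity(emb_a, emb_b)  # unrelated random vectors: near 0
print(round(self_sim, 4), round(cross_sim, 4))
```

Normalizing before the dot product makes the score scale-invariant, which is why cosine similarity is the usual choice for retrieval over image-feature-extraction outputs.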