Update README.md
README.md
CHANGED
@@ -15,10 +15,14 @@ new_version: OpenGVLab/InternViT-6B-448px-V2_5

# InternViT-6B-448px-V1-2

-[\[📂 GitHub\]](https://github.com/OpenGVLab/InternVL) [\[🆕 Blog\]](https://internvl.github.io/blog/) [\[📜 InternVL 1.0\]](https://arxiv.org/abs/2312.14238)
+[\[📂 GitHub\]](https://github.com/OpenGVLab/InternVL) [\[🆕 Blog\]](https://internvl.github.io/blog/) [\[📜 InternVL 1.0\]](https://arxiv.org/abs/2312.14238) [\[📜 InternVL 1.5\]](https://arxiv.org/abs/2404.16821) [\[📜 Mini-InternVL\]](https://arxiv.org/abs/2410.16261)

[\[🗨️ Chat Demo\]](https://internvl.opengvlab.com/) [\[🤗 HF Demo\]](https://huggingface.co/spaces/OpenGVLab/InternVL) [\[🚀 Quick Start\]](#quick-start) [\[📖 中文解读\]](https://zhuanlan.zhihu.com/p/706547971) [\[📖 Documents\]](https://internvl.readthedocs.io/en/latest/)

+<div align="center">
+<img width="500" alt="image" src="https://cdn-uploads.huggingface.co/production/uploads/64006c09330a45b03605bba3/zJsd2hqd3EevgXo6fNgC-.png">
+</div>
+
We release our new InternViT weights as InternViT-6B-448px-V1-2. The continuous pre-training of the InternViT-6B model was part of the [InternVL 1.2](https://huggingface.co/OpenGVLab/InternVL-Chat-V1-2) update. Specifically, we increased the resolution of InternViT-6B from 224 to 448 and integrated it with [Nous-Hermes-2-Yi-34B](https://huggingface.co/NousResearch/Nous-Hermes-2-Yi-34B).
To equip the model with high-resolution processing and OCR capabilities, both the vision encoder and the MLP were activated for training, utilizing a mix of image captioning and OCR-specific datasets.

@@ -59,6 +63,12 @@ outputs = model(pixel_values)
If you find this project useful in your research, please consider citing:

```BibTeX
+@article{gao2024mini,
+  title={Mini-internvl: A flexible-transfer pocket multimodal model with 5\% parameters and 90\% performance},
+  author={Gao, Zhangwei and Chen, Zhe and Cui, Erfei and Ren, Yiming and Wang, Weiyun and Zhu, Jinguo and Tian, Hao and Ye, Shenglong and He, Junjun and Zhu, Xizhou and others},
+  journal={arXiv preprint arXiv:2410.16261},
+  year={2024}
+}
@article{chen2023internvl,
  title={InternVL: Scaling up Vision Foundation Models and Aligning for Generic Visual-Linguistic Tasks},
  author={Chen, Zhe and Wu, Jiannan and Wang, Wenhai and Su, Weijie and Chen, Guo and Xing, Sen and Zhong, Muyan and Zhang, Qinglong and Zhu, Xizhou and Lu, Lewei and Li, Bin and Luo, Ping and Lu, Tong and Qiao, Yu and Dai, Jifeng},
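
The second hunk's header shows the citation change lands just after the card's quick-start snippet, whose last line is `outputs = model(pixel_values)`. For readers landing on this diff without the full card, here is a minimal sketch of that usage, assuming the standard `transformers` remote-code flow used on the InternViT cards; the image path is a placeholder:

```python
import torch
from PIL import Image
from transformers import AutoModel, CLIPImageProcessor

# Load the 6B vision encoder in bf16; trust_remote_code fetches the custom
# InternViT model class from the Hub repo. The weights alone occupy
# roughly 12 GB of GPU memory in bf16.
model = AutoModel.from_pretrained(
    'OpenGVLab/InternViT-6B-448px-V1-2',
    torch_dtype=torch.bfloat16,
    low_cpu_mem_usage=True,
    trust_remote_code=True).cuda().eval()

# Placeholder image path; any RGB image works.
image = Image.open('./examples/image1.jpg').convert('RGB')

# The repo ships a CLIP-style preprocessor config for the 448px input.
image_processor = CLIPImageProcessor.from_pretrained('OpenGVLab/InternViT-6B-448px-V1-2')

pixel_values = image_processor(images=image, return_tensors='pt').pixel_values
pixel_values = pixel_values.to(torch.bfloat16).cuda()

outputs = model(pixel_values)  # the line the hunk header anchors on
```

`outputs` carries the encoder's hidden states, which the MLP projector consumes when the encoder is paired with Nous-Hermes-2-Yi-34B as described above.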