Update README.md
README.md
CHANGED
@@ -9,17 +9,24 @@ datasets:
   - wanng/wukong100m
 ---

-# Model
+# Model Card for InternViT-6B-224px
+
+## What is InternVL?
+
+\[[Paper](https://arxiv.org/abs/2312.14238)\] \[[GitHub](https://github.com/OpenGVLab/InternVL)\]
+
+InternVL scales up the ViT to _**6B parameters**_ and aligns it with an LLM.
+
+It is trained using web-scale, noisy image-text pairs. The data are all publicly available and comprise multilingual content, including LAION-en, LAION-multi, LAION-COCO, COYO, Wukong, CC12M, CC3M, and SBU.
+
+It is _**the largest open-source vision/vision-language foundation model (14B parameters)**_ to date, achieving _**32 state-of-the-art**_ results on a wide range of tasks, including visual perception, cross-modal retrieval, and multimodal dialogue.
+

 ## Model Details
 - **Model Type:** feature backbone
 - **Model Stats:**
   - Params (M): 5903
   - Image size: 224 x 224
-- **Papers:**
-  - InternVL: Scaling up Vision Foundation Models and Aligning for Generic Visual-Linguistic Tasks
-- **GitHub:**
-  - https://github.com/OpenGVLab/InternVL
 - **Pretrain Dataset:** LAION-en, LAION-COCO, COYO, CC12M, CC3M, SBU, Wukong, LAION-multi

 ## Model Usage
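The parameter count and input resolution listed under **Model Details** can be checked directly against the released weights. A minimal sketch, assuming the checkpoint is published on the Hugging Face Hub as `OpenGVLab/InternViT-6B-224px` (the repository id is not shown in this diff) and that it ships custom modeling code loaded via `trust_remote_code=True`:

```python
import torch
from transformers import AutoModel

# Assumed repository id; adjust if the checkpoint lives elsewhere.
model = AutoModel.from_pretrained(
    'OpenGVLab/InternViT-6B-224px',
    torch_dtype=torch.bfloat16,   # bfloat16 keeps a ~6B-parameter backbone manageable
    trust_remote_code=True)       # assumed: the repo provides its own modeling code

n_params = sum(p.numel() for p in model.parameters())
print(f'Params (M): {n_params / 1e6:.0f}')   # expected to be close to 5903
```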
@@ -45,4 +52,22 @@ pixel_values = image_processor(images=image, return_tensors='pt').pixel_values
 pixel_values = pixel_values.to(torch.bfloat16).cuda()

 outputs = model(pixel_values)
-```
+```
+
+## Citation
+
+If you find this project useful in your research, please consider citing:
+
+```BibTeX
+@article{chen2023internvl,
+  title={InternVL: Scaling up Vision Foundation Models and Aligning for Generic Visual-Linguistic Tasks},
+  author={Chen, Zhe and Wu, Jiannan and Wang, Wenhai and Su, Weijie and Chen, Guo and Xing, Sen and Zhong, Muyan and Zhang, Qinglong and Zhu, Xizhou and Lu, Lewei and Li, Bin and Luo, Ping and Lu, Tong and Qiao, Yu and Dai, Jifeng},
+  journal={arXiv preprint arXiv:2312.14238},
+  year={2023}
+}
+```
+
+
+## Acknowledgement
+
+InternVL is built with reference to the code of the following projects: [OpenAI CLIP](https://github.com/openai/CLIP), [Open CLIP](https://github.com/mlfoundations/open_clip), [CLIP Benchmark](https://github.com/LAION-AI/CLIP_benchmark), [EVA](https://github.com/baaivision/EVA/tree/master), [InternImage](https://github.com/OpenGVLab/InternImage), [ViT-Adapter](https://github.com/czczup/ViT-Adapter), [MMSegmentation](https://github.com/open-mmlab/mmsegmentation), [Transformers](https://github.com/huggingface/transformers), [DINOv2](https://github.com/facebookresearch/dinov2), [BLIP-2](https://github.com/salesforce/LAVIS/tree/main/projects/blip2), [Qwen-VL](https://github.com/QwenLM/Qwen-VL/tree/master/eval_mm), and [LLaVA-1.5](https://github.com/haotian-liu/LLaVA). Thanks for their awesome work!
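The second hunk shows only the tail of the card's usage example; the lines that create `model` and `image_processor` fall outside the diff context. A self-contained sketch of the same flow, assuming the repository id `OpenGVLab/InternViT-6B-224px` and loading via `transformers`' `AutoModel` / `CLIPImageProcessor` (both assumptions, not shown in this diff):

```python
import torch
from PIL import Image
from transformers import AutoModel, CLIPImageProcessor

repo_id = 'OpenGVLab/InternViT-6B-224px'  # assumed repository id

# Load the 6B-parameter feature backbone in bfloat16 on the GPU.
model = AutoModel.from_pretrained(
    repo_id,
    torch_dtype=torch.bfloat16,
    trust_remote_code=True).cuda().eval()  # assumed: custom modeling code in the repo

# CLIP-style preprocessing down to 224 x 224 pixel values.
image_processor = CLIPImageProcessor.from_pretrained(repo_id)
image = Image.open('example.jpg').convert('RGB')  # any local RGB image

# These lines mirror the snippet shown in the hunk above.
pixel_values = image_processor(images=image, return_tensors='pt').pixel_values
pixel_values = pixel_values.to(torch.bfloat16).cuda()

outputs = model(pixel_values)  # feature outputs from the vision backbone
```

Depending on the remote modeling code, `outputs` would be expected to expose the usual fields of a `transformers` vision backbone, such as `last_hidden_state`.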