czczup committed on
Commit 31da47b
1 Parent(s): ab5bff1

Update README.md

Files changed (1)
  1. README.md +2 -0
README.md CHANGED
@@ -27,6 +27,8 @@ It is _**the largest open-source vision/vision-language foundation model (14B)**
 - Params (M): 5903
 - Image size: 224 x 224
 - **Pretrain Dataset:** LAION-en, LAION-COCO, COYO, CC12M, CC3M, SBU, Wukong, LAION-multi
+- **Note:** This model has 48 blocks, and we found that using the output after the fourth-to-last block worked best for VLLM. Therefore, **please set mm_vision_select_layer=-4 when using this model to build VLLM.**
+
 
 ## Linear Probing Performance
 
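
For readers wiring this checkpoint into a LLaVA-style vision-language model, the sketch below shows what the added note amounts to in practice: collect the per-block hidden states from the vision tower and take the fourth-to-last one. This is a hedged illustration, not part of the commit; the function name, tensor shapes, and the assumption that per-block hidden states are exposed are ours.

```python
import torch

# Illustrative sketch only: applying the mm_vision_select_layer=-4
# recommendation, assuming the vision tower returns one hidden-state
# tensor per block. Shapes are placeholders, not InternViT-6B's real ones.
MM_VISION_SELECT_LAYER = -4  # fourth-to-last block, as recommended above

def select_vision_features(hidden_states, layer=MM_VISION_SELECT_LAYER):
    """Return the output of the chosen transformer block."""
    return hidden_states[layer]

# Dummy per-block outputs standing in for a 48-block vision transformer:
# 48 tensors of shape (batch, tokens, dim).
dummy_hidden_states = [torch.randn(1, 257, 1024) for _ in range(48)]
features = select_vision_features(dummy_hidden_states)
print(features.shape)  # torch.Size([1, 257, 1024])
```

In LLaVA-style training scripts this typically corresponds to passing -4 for the mm_vision_select_layer argument rather than its default value.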