tinyllava committed
Commit faca8fc
1 Parent(s): cd7a15c

Update README.md

Files changed (1): README.md (+11 -3)
README.md CHANGED
@@ -4,8 +4,7 @@ pipeline_tag: image-text-to-text
---

### TinyLLaVA
-
- We trained a TinyLLaVA model with 3.1B parameters, employing the same training settings as [TinyLLaVA](https://github.com/DLCV-BUAA/TinyLLaVABench). For the Language and Vision models, we chose [Phi-2](microsoft/phi-2) and [siglip-so400m-patch14-384](https://huggingface.co/google/siglip-so400m-patch14-384), respectively. The Connector was configured with a 2-layer MLP. The dataset used for training is the [ShareGPT4V](https://github.com/InternLM/InternLM-XComposer/blob/main/projects/ShareGPT4V/docs/Data.md) dataset.
+ We introduce TinyLLaVA-Phi-2-SigLIP-3.1B, a small-scale large multimodal model (LMM) trained with the TinyLLaVA Factory codebase. For the LLM and vision tower, we choose [Phi-2](https://huggingface.co/microsoft/phi-2) and [siglip-so400m-patch14-384](https://huggingface.co/google/siglip-so400m-patch14-384), respectively. The dataset used for training this model is the [ShareGPT4V](https://github.com/InternLM/InternLM-XComposer/blob/main/projects/ShareGPT4V/docs/Data.md) dataset.

### Usage
Execute the following test code:
@@ -29,4 +28,13 @@ print('runing time: ', genertaion_time)
| model_name | vqav2 | gqa | sqa | textvqa | MM-VET | POPE | MME | MMMU |
| :----------------------------------------------------------: | ----- | ------- | ----- | ----- | ------- | ----- | ------ | ------ |
| [bczhou/TinyLLaVA-3.1B](https://huggingface.co/bczhou/TinyLLaVA-3.1B) | 79.9 | 62.0 | 69.1 | 59.1 | 32.0 | 86.4 | 1464.9 | - |
- | [tinyllava/TinyLLaVA-Phi-2-SigLIP-3.1B](https://huggingface.co/tinyllava/TinyLLaVA-Phi-2-SigLIP-3.1B) | 80.1 | 62.1 | 73.0 | 60.3 | 37.5 | 87.2 | 1466.4 | 38.4 |
+ | [tinyllava/TinyLLaVA-Phi-2-SigLIP-3.1B](https://huggingface.co/tinyllava/TinyLLaVA-Phi-2-SigLIP-3.1B) | 80.1 | 62.1 | 73.0 | 60.3 | 37.5 | 87.2 | 1466.4 | 38.4 |
+
+ P.S. TinyLLaVA Factory is an open-source modular codebase for small-scale LMMs, with a focus on simplicity of code implementation, extensibility to new features, and reproducibility of training results. The repository provides standard training and evaluation pipelines, flexible data preprocessing and model configurations, and easily extensible architectures, so users can customize their own LMMs with minimal coding effort and fewer coding mistakes.
+
+ TinyLLaVA Factory integrates a suite of cutting-edge models and methods, as sketched below:
+ - LLM: currently supports OpenELM, TinyLlama, StableLM, Qwen, Gemma, and Phi.
+ - Vision tower: currently supports CLIP, SigLIP, Dino, and a combination of CLIP and Dino.
+ - Connector: currently supports MLP, Q-Former, and Resampler.
+
+ We will release the TinyLLaVA Factory codebase very soon!
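
The test code under `### Usage` is untouched by this commit, so the diff shows only its last line in the second hunk header (`print('runing time: ', genertaion_time)`). The following is a minimal sketch of what that block plausibly contains, not the verbatim README code: it assumes the model repo ships remote code (hence `trust_remote_code=True`) exposing a `chat` helper that returns the generated text and the generation time, which this diff does not confirm.

```python
# Minimal sketch, assuming the repo's remote code provides a `chat` helper
# that returns (generated_text, generation_time) -- an assumption inferred
# from the variables quoted in the hunk header, not confirmed by the diff.
from transformers import AutoModelForCausalLM, AutoTokenizer

hf_path = 'tinyllava/TinyLLaVA-Phi-2-SigLIP-3.1B'
model = AutoModelForCausalLM.from_pretrained(hf_path, trust_remote_code=True)
model.cuda()
tokenizer = AutoTokenizer.from_pretrained(hf_path, use_fast=False)

prompt = "What are these?"  # example question (assumed)
image_url = "http://images.cocodataset.org/val2017/000000039769.jpg"  # example COCO image (assumed)

# Assumed helper from the repo's remote code; see the caveat above.
output_text, genertaion_time = model.chat(prompt=prompt, image=image_url, tokenizer=tokenizer)

print('model output:', output_text)
print('runing time: ', genertaion_time)  # spelling kept as quoted in the hunk header
```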
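The P.S. describes TinyLLaVA Factory's factorization of an LMM into an LLM, a vision tower, and a connector. Since the codebase is unreleased at the time of this commit, the following is a purely hypothetical illustration of that factorization; every name in it is invented here and is not TinyLLaVA Factory's actual API.

```python
# Purely hypothetical illustration -- these names are invented and are NOT
# TinyLLaVA Factory's real API. It only shows the LLM / vision tower /
# connector factorization described in the P.S. above.
from dataclasses import dataclass

SUPPORTED_LLMS = {"openelm", "tinyllama", "stablelm", "qwen", "gemma", "phi-2"}
SUPPORTED_VISION_TOWERS = {"clip", "siglip", "dino", "clip+dino"}
SUPPORTED_CONNECTORS = {"mlp", "qformer", "resampler"}

@dataclass
class LMMConfig:
    llm: str           # language backbone, e.g. "phi-2"
    vision_tower: str  # image encoder, e.g. "siglip"
    connector: str     # bridge from vision features to LLM inputs, e.g. "mlp"

    def validate(self) -> None:
        # Reject unsupported components up front -- the kind of coding
        # mistake a modular factory is meant to catch early.
        for value, allowed in [(self.llm, SUPPORTED_LLMS),
                               (self.vision_tower, SUPPORTED_VISION_TOWERS),
                               (self.connector, SUPPORTED_CONNECTORS)]:
            if value not in allowed:
                raise ValueError(f"unsupported component: {value!r}")

# TinyLLaVA-Phi-2-SigLIP-3.1B corresponds to this combination:
config = LMMConfig(llm="phi-2", vision_tower="siglip", connector="mlp")
config.validate()
```

Under this kind of factorization, swapping Phi-2 for, say, Gemma is a configuration change rather than a code change, which is what makes combinations like those in the table above cheap to train and reproduce.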