
TinyLLaVA

TinyLLaVA has released a family of small-scale Large Multimodal Models (LMMs), ranging from 0.55B to 3.1B parameters. Our best model, TinyLLaVA-Phi-2-SigLIP-3.1B, achieves better overall performance than existing 7B models such as LLaVA-1.5 and Qwen-VL.


Here, we introduce TinyLLaVA-OpenELM-450M-CLIP-0.55B, trained with the TinyLLaVA Factory codebase. For the LLM and vision tower, we chose OpenELM-450M-Instruct and clip-vit-base-patch16, respectively. The model was trained on the LLaVA dataset.
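As a quick sanity check of this composition, the configuration can be inspected without downloading the full weights. A minimal sketch (the exact fields printed come from the repo's custom config class, so treat them as illustrative):

from transformers import AutoConfig

hf_path = 'jiajunlong/TinyLLaVA-OpenELM-450M-CLIP-0.55B'
# trust_remote_code pulls in the custom TinyLLaVA config class shipped with the repo
config = AutoConfig.from_pretrained(hf_path, trust_remote_code=True)
# printing the config shows the LLM, vision tower, and connector settings baked into the checkpoint
print(config)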

Usage

Execute the following test code:

from transformers import AutoTokenizer, AutoModelForCausalLM

hf_path = 'jiajunlong/TinyLLaVA-OpenELM-450M-CLIP-0.55B'
# trust_remote_code is required: the repo ships custom modeling code
model = AutoModelForCausalLM.from_pretrained(hf_path, trust_remote_code=True)
model.cuda()
config = model.config
tokenizer = AutoTokenizer.from_pretrained(hf_path, use_fast=False,
                                          model_max_length=config.tokenizer_model_max_length,
                                          padding_side=config.tokenizer_padding_side)
prompt = "What are these?"
image_url = "http://images.cocodataset.org/test-stuff2017/000000000001.jpg"
# chat() is defined in the repo's custom code; it returns the generated text
# and the generation time
output_text, generation_time = model.chat(prompt=prompt, image=image_url, tokenizer=tokenizer)
print('model output:', output_text)
print('running time:', generation_time)
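If GPU memory is tight, the checkpoint can also be loaded in half precision. A hedged variant, assuming the standard torch_dtype argument of from_pretrained works with this repo's custom code:

import torch
from transformers import AutoModelForCausalLM

hf_path = 'jiajunlong/TinyLLaVA-OpenELM-450M-CLIP-0.55B'
# load the weights directly in float16 to roughly halve GPU memory usage
model = AutoModelForCausalLM.from_pretrained(hf_path, trust_remote_code=True,
                                             torch_dtype=torch.float16)
model.cuda()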

Results

Model             GQA    TextVQA  SQA    VQAv2  MME      MMB    MM-Vet
TinyLLaVA-1.5B    60.3   51.7     60.3   76.9   1276.5   55.2   25.8
TinyLLaVA-0.55B   50.38  36.37    50.02  65.44  1056.69  26.29  15.4

P.S. TinyLLaVA Factory is an open-source modular codebase for small-scale LMMs with a focus on simplicity of code implementation, extensibility of new features, and reproducibility of training results. This code repository provides standard training and evaluation pipelines, flexible data preprocessing and model configuration, and easily extensible architectures. Users can customize their own LMMs with minimal coding effort and fewer coding mistakes. TinyLLaVA Factory integrates a suite of cutting-edge models and methods.

  • LLM currently supports OpenELM, TinyLlama, StableLM, Qwen, Gemma, and Phi.
  • Vision tower currently supports CLIP, SigLIP, Dino, and combinations of CLIP and Dino.
  • Connector currently supports MLP, Qformer, and Resampler (see the MLP sketch after this list).
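For intuition, a connector is simply a small network that projects vision-tower features into the LLM's embedding space. Below is a minimal sketch of a two-layer MLP connector in PyTorch; it is illustrative only, not TinyLLaVA Factory's actual implementation, and the dimensions are examples (768 matches clip-vit-base-patch16's hidden size, 1536 is assumed for the LLM):

import torch
import torch.nn as nn

class MLPConnector(nn.Module):
    """Projects vision features into the LLM embedding space."""
    def __init__(self, vision_dim: int, llm_dim: int):
        super().__init__()
        self.proj = nn.Sequential(
            nn.Linear(vision_dim, llm_dim),
            nn.GELU(),
            nn.Linear(llm_dim, llm_dim),
        )

    def forward(self, image_features: torch.Tensor) -> torch.Tensor:
        # image_features: (batch, num_patches, vision_dim)
        return self.proj(image_features)

# example: 196 CLIP ViT-B/16 patch features (768-d) mapped to a 1536-d LLM space
connector = MLPConnector(vision_dim=768, llm_dim=1536)
tokens = connector(torch.randn(1, 196, 768))  # -> shape (1, 196, 1536)

The projected tokens are then concatenated with the text token embeddings before being fed to the LLM, which is what lets a frozen or lightly tuned language model attend to image content.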