---
license: apache-2.0
datasets:
  - liuhaotian/LLaVA-Pretrain
  - liuhaotian/LLaVA-Instruct-150K
language:
  - en
  - zh
library_name: transformers
---

WORK IN PROGRESS

We present TinyLLaVA, a small (1.4B-parameter) vision-language chatbot that achieves performance comparable to contemporary vision-language models on common benchmarks while using fewer parameters. TinyLLaVA was trained by finetuning TinyLlama on the LLaVA-1.5 dataset, following the LLaVA-1.5 training recipe. For more details, please refer to the LLaVA-1.5 paper.

Model Performance

We have evaluated TinyLLaVA on GQA, VizWiz, VQAv2, TextVQA and SQA.

| Model | VQAv2 | GQA | SQA | TextVQA | VizWiz |
|---|---|---|---|---|---|
| TinyLLaVA-v1-tinyllama | 73.41 | 57.54 | 59.40 | 46.37 | - |
| TinyLLaVA-v1-stablelm | 74.9 | 58.86 | 62.82 | 49.52 | 35.6 |
| TinyLLaVA-v1.1-tinyllama | 75.24 | 59.43 | 58.80 | 48.05 | 34.74 |
| TinyLLaVA-v1.1-stablelm | 76.34 | 60.26 | 63.06 | 51.6 | 36.34 |
| BLIP-2 | 41.00 | 41.00 | 61.00 | 42.50 | 19.60 |
| LLaVA-v1.5-7B | 78.50 | 62.00 | 66.80 | 61.3 | 50 |
| LLaVA-v1.5-13B | 80.00 | 63.30 | 71.60 | 61.3 | 53.6 |
| Qwen-VL-7B | 78.80 | 59.30 | 67.10 | 63.8 | 35.2 |
| Qwen-VL-13B | 78.20 | 57.50 | 68.20 | 61.5 | 38.9 |

More evaluations are ongoing.

Model Preparations

- Transformers Version

Make sure to have transformers >= 4.35.3.

- Prompt Template

The model supports multi-image and multi-prompt generation. When using the model, make sure to follow the correct prompt template (USER: <image>xxx\nASSISTANT:), where the <image> token is a special placeholder that stands in for the image embeddings (see the sketch below).
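As a brief illustration of the template, the prompts below use placeholder questions that are not from the original card; the multi-image form assumes that each additional image simply contributes another <image> token, as in other LLaVA-style Hugging Face checkpoints:

# Single-image queries: one <image> token each (placeholder questions for illustration).
prompt_1 = "USER: <image>\nWhat is shown in this image?\nASSISTANT:"
prompt_2 = "USER: <image>\nHow many objects are visible?\nASSISTANT:"

# A query about two images: one <image> token per attached image.
prompt_multi = "USER: <image><image>\nWhat differs between these two images?\nASSISTANT:"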

Model Inference with pipeline and transformers

- Using pipeline:

Below we use the "bczhou/tiny-llava-v1-hf" checkpoint.

from transformers import pipeline
from PIL import Image
import requests

model_id = "bczhou/tiny-llava-v1-hf"
pipe = pipeline("image-to-text", model=model_id)

# Load an example image and ask a question about it, following the prompt template.
url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/ai2d-demo.jpg"
image = Image.open(requests.get(url, stream=True).raw)
prompt = "USER: <image>\nWhat does the label 15 represent? (1) lava (2) core (3) tunnel (4) ash cloud\nASSISTANT:"

outputs = pipe(image, prompt=prompt, generate_kwargs={"max_new_tokens": 200})
print(outputs[0])
>>> {'generated_text': 'USER:  \nWhat does the label 15 represent? (1) lava (2) core (3) tunnel (4) ash cloud\nASSISTANT: The label 15 represents lava, which is a type of volcanic rock.'}

- Using pure transformers:

Below is an example script to run generation in float16 precision on a GPU device:

import requests
from PIL import Image
import torch
from transformers import AutoProcessor, LlavaForConditionalGeneration

model_id = "bczhou/tiny-llava-v1-hf"
prompt = "USER: <image>\nWhat are these?\nASSISTANT:"
image_file = "http://images.cocodataset.org/val2017/000000039769.jpg"

# Load the model in float16 and move it to the first GPU.
model = LlavaForConditionalGeneration.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    low_cpu_mem_usage=True,
).to(0)

processor = AutoProcessor.from_pretrained(model_id)

# Prepare the image and prompt, then generate greedily.
raw_image = Image.open(requests.get(image_file, stream=True).raw)
inputs = processor(prompt, raw_image, return_tensors='pt').to(0, torch.float16)

output = model.generate(**inputs, max_new_tokens=200, do_sample=False)
# Skip the first two tokens so only the conversation text is printed.
print(processor.decode(output[0][2:], skip_special_tokens=True))
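If GPU memory is limited, the same checkpoint can typically also be loaded in 4-bit precision via bitsandbytes. The snippet below is a minimal sketch, assuming bitsandbytes and accelerate are installed; it is not part of the original recipe:

import torch
from transformers import AutoProcessor, BitsAndBytesConfig, LlavaForConditionalGeneration

model_id = "bczhou/tiny-llava-v1-hf"

# 4-bit loading configuration; adjust the compute dtype to your hardware.
quantization_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.float16,
)

model = LlavaForConditionalGeneration.from_pretrained(
    model_id,
    quantization_config=quantization_config,
    device_map="auto",
)
processor = AutoProcessor.from_pretrained(model_id)
# Generation then proceeds exactly as in the float16 example above.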

Contact

This model was trained by Baichuan Zhou, from Beihang University, under the supervision of Prof. Lei Huang.

✏ Citation

If you find our paper and code useful in your research, please consider giving us a star :star: and a citation :pencil:.

@misc{zhou2024tinyllava,
      title={TinyLLaVA: A Framework of Small-scale Large Multimodal Models}, 
      author={Baichuan Zhou and Ying Hu and Xi Weng and Junlong Jia and Jie Luo and Xien Liu and Ji Wu and Lei Huang},
      year={2024},
      eprint={2402.14289},
      archivePrefix={arXiv},
      primaryClass={cs.LG}
}