LLaVA-OneVision

Play with the model on the LLaVA OneVision Chat.

Table of Contents

  1. Model Summary
  2. Use
  3. Limitations
  4. Training
  5. License
  6. Citation

Model Summary

The LLaVA-OneVision models are 0.5B-, 7B-, and 72B-parameter models trained on the LLaVA-OneVision dataset, built on the Qwen2 language model with a context window of 32K tokens.

Use

Intended use

The model was trained on the LLaVA-OneVision dataset and can interact with single images, multi-image inputs, and videos.

Feel free to share your generations in the Community tab!

Generation

# pip install git+https://github.com/LLaVA-VL/LLaVA-NeXT.git
from llava.model.builder import load_pretrained_model
from llava.mm_utils import process_images, tokenizer_image_token
from llava.constants import IMAGE_TOKEN_INDEX, DEFAULT_IMAGE_TOKEN
from llava.conversation import conv_templates

from PIL import Image
import requests
import copy
import torch

import warnings

warnings.filterwarnings("ignore")
pretrained = "lmms-lab/llava-onevision-qwen2-72b-ov-sft"  # the checkpoint described by this card
model_name = "llava_qwen"
device = "cuda"
device_map = "auto"
tokenizer, model, image_processor, max_length = load_pretrained_model(pretrained, None, model_name, device_map=device_map)  # pass any additional llava_model_args here if needed

model.eval()

url = "https://github.com/haotian-liu/LLaVA/blob/1a91fc274d7c35a9b50b3cb29c4247ae5837ce39/images/llava_v1_5_radar.jpg?raw=true"
image = Image.open(requests.get(url, stream=True).raw)
image_tensor = process_images([image], image_processor, model.config)
image_tensor = [_image.to(dtype=torch.float16, device=device) for _image in image_tensor]

conv_template = "qwen_1_5"  # make sure to use the correct chat template for the model
question = DEFAULT_IMAGE_TOKEN + "\nWhat is shown in this image?"
conv = copy.deepcopy(conv_templates[conv_template])
conv.append_message(conv.roles[0], question)
conv.append_message(conv.roles[1], None)
prompt_question = conv.get_prompt()

input_ids = tokenizer_image_token(prompt_question, tokenizer, IMAGE_TOKEN_INDEX, return_tensors="pt").unsqueeze(0).to(device)
image_sizes = [image.size]


cont = model.generate(
    input_ids,
    images=image_tensor,
    image_sizes=image_sizes,
    do_sample=False,
    temperature=0,
    max_new_tokens=4096,
)
text_outputs = tokenizer.batch_decode(cont, skip_special_tokens=True)
print(text_outputs)
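
The same loop extends to multiple images. The sketch below is illustrative rather than official: it assumes the objects created above (tokenizer, model, image_processor, conv_template, url) are still in scope, and it simply passes one processed tensor, one image size, and one <image> placeholder per input image; the exact multi-image prompt formatting may vary by checkpoint. Video inference follows the same pattern with sampled frames preprocessed as images (the LLaVA-NeXT repository examples additionally pass modalities=["video"] to generate).

# Hedged multi-image sketch, reusing the variables defined in the snippet above.
# Replace the repeated `url` with your own image URLs.
images = [Image.open(requests.get(u, stream=True).raw) for u in [url, url]]
image_tensors = process_images(images, image_processor, model.config)
image_tensors = [t.to(dtype=torch.float16, device=device) for t in image_tensors]
image_sizes = [img.size for img in images]

# One <image> placeholder per image, in the same order as the tensors.
question = DEFAULT_IMAGE_TOKEN + "\n" + DEFAULT_IMAGE_TOKEN + "\nWhat differs between these two images?"
conv = copy.deepcopy(conv_templates[conv_template])
conv.append_message(conv.roles[0], question)
conv.append_message(conv.roles[1], None)
input_ids = tokenizer_image_token(conv.get_prompt(), tokenizer, IMAGE_TOKEN_INDEX, return_tensors="pt").unsqueeze(0).to(device)

out = model.generate(
    input_ids,
    images=image_tensors,
    image_sizes=image_sizes,
    do_sample=False,
    max_new_tokens=512,
)
print(tokenizer.batch_decode(out, skip_special_tokens=True))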

Training

Model

  • Architecture: SigLIP SO400M vision encoder + Qwen2 language model (see the sketch after this list)
  • Pretraining Stage: LCS-558K, 1 epoch, projector only
  • Mid Stage: A mixture of 4.7M high-quality synthetic samples, 1 epoch, full model
  • Final-Image Stage: A mixture of 3.6M single-image samples, 1 epoch, full model
  • OneVision Stage: A mixture of 1.6M single-image/multi-image/video samples, 1 epoch, full model
  • Precision: bfloat16
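
For intuition, here is a conceptual sketch (not the actual LLaVA-NeXT implementation; class and argument names are illustrative) of how the components listed above compose: the SigLIP SO400M encoder produces patch features, the projector maps them into the Qwen2 embedding space, and the projected visual tokens are fed to the language model alongside the text tokens.

import torch
import torch.nn as nn

class OneVisionSketch(nn.Module):
    """Illustrative composition of the listed components; not the real implementation."""

    def __init__(self, vision_encoder: nn.Module, projector: nn.Module, llm: nn.Module):
        super().__init__()
        self.vision_encoder = vision_encoder  # SigLIP SO400M; trained with the full model from the Mid Stage onward
        self.projector = projector            # the only trainable part in the Pretraining Stage
        self.llm = llm                        # Qwen2 language model (32K context)

    def forward(self, pixel_values: torch.Tensor, text_embeds: torch.Tensor) -> torch.Tensor:
        visual_feats = self.vision_encoder(pixel_values)   # patch features
        visual_tokens = self.projector(visual_feats)       # map into the LLM embedding space
        inputs = torch.cat([visual_tokens, text_embeds], dim=1)
        return self.llm(inputs_embeds=inputs)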

Hardware & Software

Citation

@article{li2024llavaonevision,
      title={LLaVA-OneVision: Easy Visual Task Transfer},
      author={Li, Bo and Zhang, Yuanhan and Guo, Dong and Zhang, Renrui and Li, Feng and Zhang, Hao and Zhang, Kaichen and Li, Yanwei and Liu, Ziwei and Li, Chunyuan},
      journal={arXiv preprint arXiv:2408.03326},
      year={2024}
}