Llama-3-MixSense

Introduction

MixSense is a series of models based on the widely adopted vision encoder-projector-LLM architecture. In this release we provide the Llama-3-MixSense checkpoint, which is built with Meta Llama 3 as the language model and SigLIP 400M as the vision encoder. We have developed an innovative data processing method that complements the training process, reducing training cost while improving training effectiveness. The models are trained on our restructured dataset; details of the data organization and the related research papers will be available soon.
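For orientation, here is a minimal sketch of the encoder-projector-LLM flow described above. The module names and the dimensions (1152-dim SigLIP patch features, 4096-dim Llama 3 8B embeddings) are illustrative assumptions, not the actual MixSense implementation:

import torch
import torch.nn as nn

class EncoderProjectorLLM(nn.Module):
    """Illustrative sketch only; not the actual MixSense module layout."""

    def __init__(self, vision_encoder, llm, vision_dim=1152, llm_dim=4096):
        super().__init__()
        self.vision_encoder = vision_encoder  # e.g. SigLIP 400M
        self.projector = nn.Linear(vision_dim, llm_dim)  # maps vision features into the LLM embedding space
        self.llm = llm  # e.g. Meta Llama 3

    def forward(self, images, text_embeds):
        patch_feats = self.vision_encoder(images)   # (B, num_patches, vision_dim)
        image_embeds = self.projector(patch_feats)  # (B, num_patches, llm_dim)
        # Projected image tokens are concatenated with the text embeddings
        # and decoded jointly by the LLM.
        inputs_embeds = torch.cat([image_embeds, text_embeds], dim=1)
        return self.llm(inputs_embeds=inputs_embeds)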

QuickStart

Requirements

conda create -n mixsense python=3.10 -y
conda activate mixsense
pip install torch transformers==4.37.2 accelerate pillow
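
As an optional sanity check (not part of the official instructions), the following snippet verifies that the pinned transformers version is installed and whether a CUDA device is visible before running the fp16 demo below:

import torch
import transformers

# Expect transformers 4.37.2, as pinned above.
print("transformers:", transformers.__version__)
# The demo defaults to fp16 on GPU; if this prints False, use the CPU
# variant noted in the comments (device="cpu", torch.float32).
print("CUDA available:", torch.cuda.is_available())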

Usage

Llama-3-Mixsense/demo.py

import torch
import transformers
from transformers import AutoModelForCausalLM, AutoTokenizer
from PIL import Image
import warnings
import os


# disable some warnings
transformers.logging.set_verbosity_error()
transformers.logging.disable_progress_bar()
warnings.filterwarnings("ignore")

# set device
device = "cuda"  # or cpu

# create model
model = AutoModelForCausalLM.from_pretrained(
    "Zero-Vision/Llama-3-MixSense",
    torch_dtype=torch.float16,  # float32 for cpu
    device_map="auto",
    trust_remote_code=True,
)
tokenizer = AutoTokenizer.from_pretrained(
    "Zero-Vision/Llama-3-MixSense",
    trust_remote_code=True,
)

qs = "describe the image detailly."
input_ids = model.text_process(qs, tokenizer).to(device)

image = Image.open("example.jpg")
image_tensor = model.image_process([image]).to(dtype=model.dtype, device=device)

# generate
with torch.inference_mode():
    output_ids = model.generate(
        input_ids,
        images=image_tensor,
        max_new_tokens=2048,
        use_cache=True,
        eos_token_id=[
            tokenizer.eos_token_id,
            tokenizer.convert_tokens_to_ids(["<|eot_id|>"])[0],
        ],
    )

print(tokenizer.batch_decode(output_ids, skip_special_tokens=True)[0].strip())
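
The comments in demo.py note that CPU inference is also possible. A minimal variant of the loading step under that assumption (an untested sketch, not an officially documented path) would be:

# CPU-only loading, following the "or cpu" / "float32 for cpu" comments above.
device = "cpu"
model = AutoModelForCausalLM.from_pretrained(
    "Zero-Vision/Llama-3-MixSense",
    torch_dtype=torch.float32,  # fp16 is poorly supported on CPU
    trust_remote_code=True,
).to(device)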

Eval

We provide Llama-3-Mixsense/llama3mixsense.py as an adapter for VLMEvalKit; a sketch of a typical invocation follows.
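
The exact file location and model key depend on how you register the adapter in vlmeval/config.py, so treat the commands below as an assumption-laden sketch rather than verified instructions:

git clone https://github.com/open-compass/VLMEvalKit.git
cd VLMEvalKit && pip install -e .
# Copy the adapter in and register it in vlmeval/config.py under a key
# such as "llama3mixsense" (the key name here is an assumption).
cp /path/to/Llama-3-Mixsense/llama3mixsense.py vlmeval/vlm/
python run.py --data MMBench_DEV_EN --model llama3mixsense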

License

This project uses certain datasets and checkpoints that are subject to their respective original licenses, including but not limited to Llama 3 and SigLIP; users must comply with all terms and conditions of those licenses. Meta Llama 3 is licensed under the Meta Llama 3 Community License, Copyright © Meta Platforms, Inc. All Rights Reserved. The SigLIP model is licensed under the Apache License 2.0. The project itself is also licensed under the Apache License 2.0.

Acknowledgement

Our code is largely borrowed from LLaVA. We built this demo following Bunny.
