---
license: creativeml-openrail-m
language:
  - en
metrics:
  - bleu
tags:
  - endpoints
  - text-generation-inference
inference: true
---

# Blazing Fast Tiny Vision Language Model

A custom 3B-parameter model, built by @Manish. The model is released for research purposes only; commercial use is not allowed.

## How to use

### Install dependencies

```bash
pip install transformers # latest version is ok, but we recommend v4.31.0
pip install -q pillow accelerate einops
```
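To confirm that a compatible `transformers` version was installed, a quick optional check (a minimal sketch, nothing model-specific) is:

```python
# Optional sanity check: print the installed transformers version
import transformers

print(transformers.__version__)  # the README recommends v4.31.0
```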

You can use the following code for model inference. The text-instruction format is similar to LLaVA's.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from PIL import Image

torch.set_default_device("cuda")

# Create the model
model = AutoModelForCausalLM.from_pretrained(
    "ManishThota/CustomModel",
    torch_dtype=torch.float16,
    device_map="auto",
    trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained("ManishThota/CustomModel", trust_remote_code=True)

# Function to generate the answer for a question about an image
def predict(question, image_path):
    # Build the LLaVA-style prompt and load the image
    text = f"USER: <image>\n{question}? ASSISTANT:"
    image = Image.open(image_path)

    input_ids = tokenizer(text, return_tensors='pt').input_ids.to('cuda')
    image_tensor = model.image_preprocess(image)

    # Generate the answer
    output_ids = model.generate(
        input_ids,
        max_new_tokens=25,
        images=image_tensor,
        use_cache=True)[0]

    # Decode only the newly generated tokens, skipping the prompt
    return tokenizer.decode(output_ids[input_ids.shape[1]:], skip_special_tokens=True).strip()
```
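With the helper defined, inference is a single call. A minimal usage sketch follows; the question and the image path are placeholders, not files shipped with the model:

```python
# Example usage; replace the path with a real image on your machine
answer = predict("What is shown in this image", "path/to/your_image.jpg")
print(answer)
```

Note that `predict` appends the `?` itself when building the prompt, so pass the question without a trailing question mark.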