
How can I deploy idefics2-8b with TensorRT + Triton?

#31
by marksuccsmfewercoc - opened

How can I deploy idefics2-8b with TensorRT + Triton? It would be cool if you guys wrote a blog about deploying VLMs with TensorRT + Triton.

HuggingFaceM4 org

Hi @marksuccsmfewercoc
I am not familiar with TensorRT and Triton.
@mfuntowicz or @regisss do we have resources on how someone would do that?

@mfuntowicz or @regisss Any idea about this?

You have two routes:
1. (Most preferred) Export the HF model to ONNX, use TensorRT to generate an optimized engine file, and deploy it on Triton with the required preprocessing. (Current challenge: you cannot directly export to ONNX yet, since Optimum hasn't added export support for this model.)
2. (Less preferred, due to lower performance) Create a Python backend on Triton using the HF libraries and run it with Triton. There will be no acceleration, just better inference serving (see the sketch below).
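
For route 2, here is a rough, untested sketch of what the Triton Python backend's `model.py` could look like. The tensor names (`PROMPT`, `IMAGE_URL`, `GENERATED_TEXT`) and the single-image-per-request interface are placeholder assumptions that have to match whatever you declare in `config.pbtxt`. For route 1, once Optimum adds export support, the starting point would be something like `optimum-cli export onnx --model HuggingFaceM4/idefics2-8b idefics2-onnx/`.

```python
# model.py for a Triton Python backend serving idefics2-8b (route 2).
# Untested sketch: tensor names and shapes must match the model's config.pbtxt.
import numpy as np
import torch
import triton_python_backend_utils as pb_utils

from transformers import AutoProcessor, AutoModelForVision2Seq
from transformers.image_utils import load_image


class TritonPythonModel:
    def initialize(self, args):
        # Load the processor and model once per model instance.
        self.device = "cuda:0" if torch.cuda.is_available() else "cpu"
        self.processor = AutoProcessor.from_pretrained("HuggingFaceM4/idefics2-8b")
        self.model = AutoModelForVision2Seq.from_pretrained(
            "HuggingFaceM4/idefics2-8b",
            torch_dtype=torch.float16,  # half precision, as in the model card example
        ).to(self.device)

    def execute(self, requests):
        responses = []
        for request in requests:
            # Assumed interface: one prompt string and one image URL per request.
            prompt = pb_utils.get_input_tensor_by_name(request, "PROMPT").as_numpy()[0].decode("utf-8")
            image_url = pb_utils.get_input_tensor_by_name(request, "IMAGE_URL").as_numpy()[0].decode("utf-8")

            image = load_image(image_url)
            messages = [
                {
                    "role": "user",
                    "content": [
                        {"type": "image"},
                        {"type": "text", "text": prompt},
                    ],
                }
            ]
            text = self.processor.apply_chat_template(messages, add_generation_prompt=True)
            inputs = self.processor(text=text, images=[image], return_tensors="pt").to(self.device)

            generated_ids = self.model.generate(**inputs, max_new_tokens=256)
            generated_text = self.processor.batch_decode(generated_ids, skip_special_tokens=True)[0]

            out = pb_utils.Tensor(
                "GENERATED_TEXT",
                np.array([generated_text.encode("utf-8")], dtype=np.object_),
            )
            responses.append(pb_utils.InferenceResponse(output_tensors=[out]))
        return responses
```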

Hey @VictorSanh, how do I give idefics2-8b previous chat context with images?

HuggingFaceM4 org

Hi @marksuccsmfewercoc
Is https://huggingface.co/HuggingFaceM4/idefics2-8b#how-to-get-started (and more specifically the messages list for idefics2-8b) useful?

@VictorSanh I saw that, but I don't think it's working properly. Here is my code; it responded "I'm not sure what you mean by that. Can you please clarify?" when I asked whether all of its answers in our previous conversations were correct.

```python
import requests
import torch
from PIL import Image
from io import BytesIO

from transformers import AutoProcessor, AutoModelForVision2Seq
from transformers.image_utils import load_image

DEVICE = "cuda:0"

# Note that passing the image urls (instead of the actual pil images) to the processor is also possible
image1 = load_image("https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg")
image2 = load_image("https://cdn.britannica.com/59/94459-050-DBA42467/Skyline-Chicago.jpg")
image3 = load_image("https://cdn.britannica.com/68/170868-050-8DDE8263/Golden-Gate-Bridge-San-Francisco.jpg")

processor = AutoProcessor.from_pretrained("HuggingFaceM4/idefics2-8b")
model = AutoModelForVision2Seq.from_pretrained(
    "HuggingFaceM4/idefics2-8b",
).to(DEVICE)

messages = [
    {
        "role": "user",
        "content": [
            {"type": "image"},
            {"type": "text", "text": "Where is this place?"},
        ],
    },
    {
        "role": "assistant",
        "content": [
            {"type": "text", "text": "London"},
        ],
    },
    {
        "role": "user",
        "content": [
            {"type": "image"},
            {"type": "text", "text": "Where is this place?"},
        ],
    },
    {
        "role": "assistant",
        "content": [
            {"type": "text", "text": "San Francisco"},
        ],
    },
    {
        "role": "user",
        "content": [
            {"type": "image"},
            {"type": "text", "text": "Do you think all the previous conversations we had all your answers were correct? what were the images in our previous conversations "},
        ],
    },
]
prompt = processor.apply_chat_template(messages, add_generation_prompt=True)
inputs = processor(text=prompt, images=[image1, image2, image3], return_tensors="pt")
inputs = {k: v.to(DEVICE) for k, v in inputs.items()}

generated_ids = model.generate(**inputs, max_new_tokens=500)
generated_texts = processor.batch_decode(generated_ids, skip_special_tokens=True)

print(generated_texts)
```

HuggingFaceM4 org

Hi @marksuccsmfewercoc,
I think it would be worth reformulating your last query into a more grammatical sentence; I think the current phrasing is confusing the model.
For instance, I tried "Do you think that in all the previous conversations we had, your answers were correct? If not, where were these images taken?"
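Concretely, that just means swapping out the text of the last user turn in your messages list before applying the chat template, something like:

```python
# Replace only the text of the final user turn with a clearer question.
messages[-1]["content"][1]["text"] = (
    "Do you think that in all the previous conversations we had, your answers were correct? "
    "If not, where were these images taken?"
)
```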
