---
license: llama3
datasets:
- swap-uniba/the_cauldron_ita
language:
- it
base_model:
- meta-llama/Meta-Llama-3-8B
- openai/clip-vit-large-patch14-336
pipeline_tag: text-generation
---

# Model Card for LLaVA-NDiNO_pt_short_it
## Model description
LLaVA-NDiNO is a family of Large Vision Language Models (LVLMs) that have been trained for the Italian language.
The model was trained by instruction-tuning swap-uniba/LLaVA-NDiNO_pt.
If you are interested in more details regarding the training procedure, you can find the code we used at the following link:

- Repository: https://github.com/swapUniba/LLaVA-NDiNO

**NOTICE**: the code has not been released yet; we apologize for the delay, it will be available as soon as possible!
- **Developed by:** Elio Musacchio, Lucia Siciliani, Pierpaolo Basile, Giovanni Semeraro
- **Funded by:** PNRR project FAIR - Future AI Research
- **Compute infrastructure:** Leonardo supercomputer
- **Model type:** LLaMA 3 + CLIP
- **Language(s) (NLP):** Italian
- **License:** Llama 3 Community License
- **Finetuned from model:** swap-uniba/LLaVA-NDiNO_pt
## Example Usage
The following example requires the `torch`, `requests`, `Pillow`, and `transformers` libraries:
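If these packages are not already available, they can be installed with pip. The package names below are inferred from the imports in the example (`accelerate` is additionally assumed, since `device_map` and `low_cpu_mem_usage` rely on it); pinning versions may be advisable for reproducibility:

```shell
# Install the dependencies used by the example below
pip install torch requests pillow transformers accelerate
```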
```python
import torch
import requests
from PIL import Image
from transformers import LlavaNextProcessor, LlavaNextForConditionalGeneration

model_name = "m-elio/LLaVA-NDiNO_pt_short_it"

processor = LlavaNextProcessor.from_pretrained(model_name)
model = LlavaNextForConditionalGeneration.from_pretrained(model_name, torch_dtype=torch.bfloat16, low_cpu_mem_usage=True, device_map="cuda")

# Load an example image
url = "https://www.barnorama.com/wp-content/uploads/2016/12/03-Confusing-Pictures.jpg"
image = Image.open(requests.get(url, stream=True).raw)

# The <image> placeholder marks where the image features are inserted.
# The Italian question asks: "What is strange about this image?"
conversation = [
    {
        "role": "user",
        "content": "<image>\nCosa c'è di strano in questa immagine?"
    },
]

prompt = processor.apply_chat_template(conversation, add_generation_prompt=True)

# Use keyword arguments to avoid ambiguity in the processor's argument order,
# and move the inputs to the same device as the model
inputs = processor(text=prompt, images=image, return_tensors="pt").to(model.device)

output = model.generate(**inputs, max_new_tokens=4096)

# Decode only the newly generated tokens, skipping the input prompt
print(processor.decode(output[0][inputs.input_ids.shape[1]:], skip_special_tokens=True))
```
## Citation
TBD