---
license: llama3
language:
- it
base_model:
- meta-llama/Meta-Llama-3-8B
- openai/clip-vit-large-patch14-336
pipeline_tag: text-generation
---

# Model Card for LLaVA-NDiNO_pt_short_long

## Model description

<!-- Provide a quick summary of what the model is/does. -->

**LLaVA-NDiNO** is a family of *Large Vision Language Models (LVLMs)* that have been trained for the Italian language.

The model was trained by instruction-tuning [LLaVA-NDiNO_pt_short](https://huggingface.co/swap-uniba/LLaVA-NDiNO_pt_short) on an Italian machine-translated version of [LLaVA Conversation 58k](https://huggingface.co/datasets/jxu124/llava_conversation_58k).

If you are interested in more details regarding the training procedure, you can find the code we used at the following link:
- **Repository:** https://github.com/swapUniba/LLaVA-NDiNO

- **Developed by:** Elio Musacchio, Lucia Siciliani, Pierpaolo Basile, Giovanni Semeraro
- **Funded by:** PNRR project FAIR - Future AI Research
- **Compute infrastructure:** [Leonardo](https://www.hpc.cineca.it/systems/hardware/leonardo/) supercomputer
- **Model type:** LLaMA 3 + CLIP
- **Language(s) (NLP):** Italian
- **License:** Llama 3 Community License
- **Finetuned from model:** [swap-uniba/LLaVA-NDiNO_pt_short](https://huggingface.co/swap-uniba/LLaVA-NDiNO_pt_short)

## Example Usage

```python
import torch
import requests

from PIL import Image
from transformers import LlavaNextProcessor, LlavaNextForConditionalGeneration, set_seed

model_name = "swap-uniba/LLaVA-NDiNO_pt_short_long"

# Load the processor (tokenizer + image processor) and the model in bfloat16,
# letting device_map="auto" place the weights on the available devices.
processor = LlavaNextProcessor.from_pretrained(model_name)
model = LlavaNextForConditionalGeneration.from_pretrained(
    model_name,
    torch_dtype=torch.bfloat16,
    low_cpu_mem_usage=True,
    device_map="auto",
)

# Download an example image.
url = "https://www.barnorama.com/wp-content/uploads/2016/12/03-Confusing-Pictures.jpg"
image = Image.open(requests.get(url, stream=True).raw)

# LLaMA 3 chat template; the <image> placeholder in the user turn marks where
# the image is inserted by the processor.
chat_template = "{% set loop_messages = messages %}{% for message in loop_messages %}{% set content = '<|start_header_id|>' + message['role'] + '<|end_header_id|>\n\n'+ message['content'] | trim + '<|eot_id|>' %}{% if loop.index0 == 0 %}{% set content = bos_token + content %}{% endif %}{{ content }}{% endfor %}{% if add_generation_prompt %}{{ '<|start_header_id|>assistant<|end_header_id|>\n\n' }}{% endif %}"

conversation = [
    {
        "role": "user",
        "content": "<image>\nCosa c'è di strano in questa immagine?"  # "What is strange about this image?"
    },
]

# Build the prompt, preprocess text and image, and move the tensors to the model's device.
prompt = processor.apply_chat_template(conversation, chat_template, add_generation_prompt=True)
inputs = processor(prompt, image, return_tensors="pt").to(model.device)

set_seed(42)
output = model.generate(**inputs, max_new_tokens=4096)

# Decode only the newly generated tokens, i.e. everything after the prompt.
print(processor.decode(output[0][inputs.input_ids.shape[1]:]))
```
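
If GPU memory is limited, the same model can also be loaded with 4-bit quantization. The snippet below is a minimal sketch rather than part of the original setup: it assumes the `bitsandbytes` package is installed and reuses the `model_name` from the example above; the rest of the generation code is unchanged.

```python
import torch
from transformers import BitsAndBytesConfig, LlavaNextForConditionalGeneration

model_name = "swap-uniba/LLaVA-NDiNO_pt_short_long"

# Assumption: bitsandbytes is installed. Quantize the weights to 4-bit NF4 and
# compute in bfloat16 to reduce GPU memory at a small cost in output quality.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model = LlavaNextForConditionalGeneration.from_pretrained(
    model_name,
    quantization_config=bnb_config,
    device_map="auto",
)
```
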
## Citation

```bibtex
@inproceedings{musacchioLLaVANDiNO,
    title={LLaVA-NDiNO: Empowering LLMs with Multimodality for the Italian Language},
    author={Musacchio, Elio and Siciliani, Lucia and Basile, Pierpaolo and Semeraro, Giovanni},
    booktitle={Proceedings of the Eighth Workshop on Natural Language for Artificial Intelligence (NL4AI 2024) co-located with the 23rd International Conference of the Italian Association for Artificial Intelligence (AI*IA 2024)},
    year={2024}
}
```