---
library_name: transformers
license: apache-2.0
datasets:
- lmms-lab/textvqa
language:
- en
tags:
- multimodal
- vision
- image-text-to-text
---

# Idefics2-8B-SFT

![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/64e380b2e12618b261fa6ba0/TIxlOOLWmd_k_0grtzejN.jpeg)

Idefics2-8B-SFT is an SFT fine-tune of [HuggingFaceM4/idefics2-8b](https://huggingface.co/HuggingFaceM4/idefics2-8b) on 35k samples from the [TextVQA dataset](https://huggingface.co/datasets/textvqa). Training was performed on an RTX A5000 for 10 hours. The Weights & Biases training report is shown below:

![image/png](https://cdn-uploads.huggingface.co/production/uploads/64e380b2e12618b261fa6ba0/SjeZW06TBY2RmXPHVzxF1.png)
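
The exact training script is not included in this card. As a reference point, the sketch below shows what a LoRA-based SFT run on TextVQA with the Hugging Face `Trainer` can look like; the hyperparameters, dataset split and field names, and the collator are illustrative assumptions, not the recipe that produced this checkpoint.

```python
import torch
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForVision2Seq, AutoProcessor, Trainer, TrainingArguments

processor = AutoProcessor.from_pretrained("HuggingFaceM4/idefics2-8b", do_image_splitting=False)
model = AutoModelForVision2Seq.from_pretrained("HuggingFaceM4/idefics2-8b", torch_dtype=torch.bfloat16)

# LoRA keeps the 8B base frozen and trains small adapter matrices on the
# attention/MLP projections of the text tower, connector and resampler.
model = get_peft_model(model, LoraConfig(
    r=8,
    lora_alpha=8,
    lora_dropout=0.1,
    init_lora_weights="gaussian",
    target_modules=r".*(text_model|modality_projection|perceiver_resampler)"
                   r".*(down_proj|gate_proj|up_proj|k_proj|q_proj|v_proj|o_proj).*$",
))

# Split and column names ("image", "question", "answers") are assumptions and may
# differ depending on the TextVQA mirror used.
train_ds = load_dataset("lmms-lab/textvqa", split="train")

def collate_fn(examples):
    # Turn each (image, question, answer) triple into a chat-formatted training sample.
    texts, images = [], []
    for ex in examples:
        messages = [
            {"role": "user", "content": [{"type": "image"},
                                         {"type": "text", "text": ex["question"]}]},
            {"role": "assistant", "content": [{"type": "text", "text": ex["answers"][0]}]},
        ]
        texts.append(processor.apply_chat_template(messages, add_generation_prompt=False))
        images.append([ex["image"]])
    batch = processor(text=texts, images=images, return_tensors="pt", padding=True)
    labels = batch["input_ids"].clone()
    labels[labels == processor.tokenizer.pad_token_id] = -100  # no loss on padding
    image_token_id = processor.tokenizer.additional_special_tokens_ids[
        processor.tokenizer.additional_special_tokens.index("<image>")
    ]
    labels[labels == image_token_id] = -100  # no loss on image placeholder tokens
    batch["labels"] = labels
    return batch

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="idefics2-8b-sft-textvqa",
        per_device_train_batch_size=2,
        gradient_accumulation_steps=8,
        learning_rate=1e-4,
        num_train_epochs=1,
        bf16=True,
        remove_unused_columns=False,  # keep image/question columns for the collator
        report_to="wandb",
    ),
    train_dataset=train_ds,
    data_collator=collate_fn,
)
trainer.train()
```
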
This fine-tuned model achieves a Levenshtein score of 82.29%.
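
The evaluation script is not published here; a Levenshtein score of this kind is typically computed as a normalized edit-distance similarity between each generated answer and its reference answer, averaged over the evaluation set and reported as a percentage. A dependency-free sketch of such a metric:

```python
def levenshtein_distance(a: str, b: str) -> int:
    """Classic two-row dynamic-programming edit distance (insertions, deletions, substitutions)."""
    if len(a) < len(b):
        a, b = b, a
    previous = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        current = [i]
        for j, cb in enumerate(b, start=1):
            current.append(min(
                previous[j] + 1,               # deletion
                current[j - 1] + 1,            # insertion
                previous[j - 1] + (ca != cb),  # substitution (free if characters match)
            ))
        previous = current
    return previous[-1]


def levenshtein_score(prediction: str, reference: str) -> float:
    """Similarity in [0, 1]: 1 minus the edit distance normalized by the longer string."""
    prediction, reference = prediction.strip().lower(), reference.strip().lower()
    if not prediction and not reference:
        return 1.0
    return 1.0 - levenshtein_distance(prediction, reference) / max(len(prediction), len(reference))


# Averaging levenshtein_score over all (prediction, reference) pairs and multiplying
# by 100 yields a percentage comparable to the figure reported above.
```
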
# Model Summary

- **Developed by:** Hugging Face
- **Model type:** Multi-modal model (image+text)
- **Language(s) (NLP):** en
- **License:** Apache 2.0
- **Parent Models:** [google/siglip-so400m-patch14-384](https://huggingface.co/google/siglip-so400m-patch14-384) and [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1)
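
Idefics2 combines the SigLIP vision encoder and the Mistral-7B language backbone listed above. A quick, weight-free way to see how a checkpoint is composed is to inspect its configuration; the sketch below only assumes the standard `transformers` Idefics2 config layout.

```python
from transformers import AutoConfig

# Load only the configuration (no weights) to inspect how the checkpoint is composed.
config = AutoConfig.from_pretrained("Syed-Hasan-8503/Idefics2-8B-SFT")

print(type(config).__name__)                # top-level Idefics2 configuration
print(type(config.vision_config).__name__)  # vision tower (SigLIP-derived)
print(type(config.text_config).__name__)    # language backbone (Mistral-derived)
```
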
## Usage

```python
import torch
from transformers import AutoModelForVision2Seq, AutoProcessor
from transformers.image_utils import load_image

DEVICE = "cuda" if torch.cuda.is_available() else "cpu"

processor = AutoProcessor.from_pretrained("Syed-Hasan-8503/Idefics2-8B-SFT")
model = AutoModelForVision2Seq.from_pretrained("Syed-Hasan-8503/Idefics2-8B-SFT").to(DEVICE)

# Load your own images (local paths or URLs); the two paths below are placeholders
image1 = load_image("path/to/first_image.jpg")
image2 = load_image("path/to/second_image.jpg")

# Create inputs
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image"},
            {"type": "text", "text": "What do we see in this image?"},
        ]
    },
    {
        "role": "assistant",
        "content": [
            {"type": "text", "text": "In this image, we can see the city of New York, and more specifically the Statue of Liberty."},
        ]
    },
    {
        "role": "user",
        "content": [
            {"type": "image"},
            {"type": "text", "text": "And how about this image?"},
        ]
    },
]
prompt = processor.apply_chat_template(messages, add_generation_prompt=True)
inputs = processor(text=prompt, images=[image1, image2], return_tensors="pt")
inputs = {k: v.to(DEVICE) for k, v in inputs.items()}

# Generate
generated_ids = model.generate(**inputs, max_new_tokens=500)
generated_texts = processor.batch_decode(generated_ids, skip_special_tokens=True)

print(generated_texts)
# ['User: What do we see in this image? \nAssistant: In this image, we can see the city of New York, and more specifically the Statue of Liberty. \nUser: And how about this image? \nAssistant: In this image we can see buildings, trees, lights, water and sky.']
```
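
The snippet above loads the model in full precision. On GPUs with limited memory, the same checkpoint can also be loaded in 4-bit; the sketch below assumes the `bitsandbytes` and `accelerate` packages are installed.

```python
import torch
from transformers import AutoModelForVision2Seq, AutoProcessor, BitsAndBytesConfig

quantization_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)

processor = AutoProcessor.from_pretrained("Syed-Hasan-8503/Idefics2-8B-SFT")
model = AutoModelForVision2Seq.from_pretrained(
    "Syed-Hasan-8503/Idefics2-8B-SFT",
    torch_dtype=torch.float16,
    quantization_config=quantization_config,
    device_map="auto",  # place the quantized weights on the available GPU(s)
)
```
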
## Evaluation

Coming Soon!