---
library_name: transformers
metrics:
- meteor
base_model:
- meta-llama/Llama-3.2-11B-Vision-Instruct
---

# Model Card


- **Developed by:** [Genloop.ai](https://huggingface.co/genloop)
- **Funded by:** [Genloop Labs, Inc.](https://genloop.ai/)
- **Model type:** Vision Language Model (VLM)
- **Finetuned from model:** [Meta Llama 3.2 11B Vision Instruct](https://huggingface.co/meta-llama/Llama-3.2-11B-Vision-Instruct)
- **Usage:** This model is intended for product cataloging, i.e., generating product descriptions from product images.



## How to Get Started with the Model

Make sure to update your `transformers` installation via `pip install --upgrade transformers`.

```python
import requests
import torch
from PIL import Image
from transformers import MllamaForConditionalGeneration, AutoProcessor

# NOTE: this id points at the base model; replace it with this repository's
# id to load the finetuned checkpoint.
model_id = "meta-llama/Llama-3.2-11B-Vision-Instruct"

# Load the model in bfloat16 and let Accelerate place it across available devices.
model = MllamaForConditionalGeneration.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
processor = AutoProcessor.from_pretrained(model_id)

# Download the product image.
url = "insert_your_image_link_here"
image = Image.open(requests.get(url, stream=True).raw)

user_prompt = """Create a SHORT product description based on the given ##PRODUCT NAME##, ##CATEGORY##, and an image of the product.
Only return the description. The description should be SEO optimized and suited to a better mobile search experience.

##PRODUCT NAME##: {product_name}
##CATEGORY##: {product_category}"""

product_name = "insert_your_product_name_here"
product_category = "insert_your_product_category_here"

# Build a single-turn chat message containing one image plus the formatted prompt.
messages = [
    {"role": "user", "content": [
        {"type": "image"},
        {"type": "text", "text": user_prompt.format(product_name=product_name, product_category=product_category)},
    ]}
]
input_text = processor.apply_chat_template(messages, add_generation_prompt=True)
inputs = processor(
    image,
    input_text,
    add_special_tokens=False,
    return_tensors="pt",
).to(model.device)

output = model.generate(**inputs, max_new_tokens=30)
print(processor.decode(output[0]))
```
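
By default, `generate` returns the prompt tokens followed by the completion, so the decoded string echoes the prompt. To print only the generated description, slice off the prompt tokens first (a small sketch reusing `inputs` and `output` from above):

```python
# Keep only the tokens produced after the prompt, then decode them.
generated_tokens = output[0][inputs["input_ids"].shape[-1]:]
print(processor.decode(generated_tokens, skip_special_tokens=True))
```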


## Training Details

This model has been finetuned on the [Amazon-Product-Descriptions](https://huggingface.co/datasets/philschmid/amazon-product-descriptions-vlm) dataset. The reference descriptions were generated using Gemini Flash.
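
For reference, the dataset can be loaded directly with the `datasets` library; the `train` split name is an assumption:

```python
from datasets import load_dataset

# Load the product-description dataset used for finetuning
# (assumes a "train" split exists).
dataset = load_dataset("philschmid/amazon-product-descriptions-vlm", split="train")
print(dataset[0].keys())
```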

### Training hyperparameters

The following hyperparameters were used during training (see the configuration sketch after the list):
- learning_rate: 0.0002
- train_batch_size: 2
- seed: 3407
- gradient_accumulation_steps: 4
- gradient_checkpointing: True
- total_train_batch_size: 8
- lr_scheduler_type: linear
- num_epochs: 3
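
These settings map onto a standard `transformers` `TrainingArguments` configuration roughly as follows. This is a hedged reconstruction, not the actual training script: the output directory, precision flag, and trainer choice are assumptions.

```python
from transformers import TrainingArguments

# Hedged reconstruction of the hyperparameters listed above;
# output_dir and bf16 are assumptions, not confirmed details.
training_args = TrainingArguments(
    output_dir="llama-3.2-11b-vision-product-descriptions",  # assumed name
    learning_rate=2e-4,
    per_device_train_batch_size=2,
    gradient_accumulation_steps=4,  # 2 per device x 4 steps = total batch of 8
    gradient_checkpointing=True,
    lr_scheduler_type="linear",
    num_train_epochs=3,
    seed=3407,
    bf16=True,  # assumption: matches the bfloat16 inference dtype
)
```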



### Results

| Model                         | Finetuned     | Inference latency | METEOR score |
|-------------------------------|---------------|-------------------|--------------|
| Llama-3.2-11B-Vision-Instruct | Not finetuned | 1.68              | 0.38         |
| Llama-3.2-11B-Vision-Instruct | Finetuned     | 1.68              | 0.53         |
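
The METEOR scores can be reproduced with the Hugging Face `evaluate` library. This is a generic sketch of the metric computation, not the exact evaluation script used for the numbers above:

```python
import evaluate

# Compare generated descriptions against the dataset's reference descriptions.
meteor = evaluate.load("meteor")
predictions = ["a generated product description ..."]  # model outputs
references = ["a reference product description ..."]   # Gemini Flash references
result = meteor.compute(predictions=predictions, references=references)
print(result["meteor"])
```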