---
license: mit
language:
- en
pipeline_tag: image-to-text
widget:
  - src: >-
      https://www.xtrafondos.com/wallpapers/perro-en-el-pasto-5797.jpg
    example_title: Dog
  - src: >-
      https://static.flickr.com/1126/5157409353_805483d0e4.jpg
    example_title: Water
---

## **Description**

This is a ViT-GPT2 encoder-decoder model fine-tuned with **LoRA** on a **Stable Diffusion 2.0** image dataset to generate text prompts from images.
It produces good results in a reasonable time, and it is straightforward to use from PyTorch.


<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/lora-assets/latent-diffusion.png" alt="Image" width="600">

* Reference: https://huggingface.co/blog/lora
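For context, a LoRA fine-tune of this kind can be set up with the `peft` library. The sketch below is illustrative only: the base checkpoint (`nlpconnect/vit-gpt2-image-captioning`), the target modules, and the hyperparameters are assumptions, not the exact recipe used to train this model.

```python
# Illustrative sketch: configuring LoRA adapters on a ViT-GPT2 captioner with peft.
# Base model, target modules, and hyperparameters are assumptions, not this
# checkpoint's actual training recipe.
from transformers import VisionEncoderDecoderModel
from peft import LoraConfig, get_peft_model

base = VisionEncoderDecoderModel.from_pretrained("nlpconnect/vit-gpt2-image-captioning")

lora_config = LoraConfig(
    r=8,                                # low-rank dimension of the adapters
    lora_alpha=16,                      # scaling factor applied to the adapter output
    lora_dropout=0.1,
    target_modules=["query", "value"],  # attention projections to adapt
)
model = get_peft_model(base, lora_config)
model.print_trainable_parameters()      # only the small LoRA adapters are trainable
```

Training then proceeds as usual (e.g. with `Trainer`); only the adapter weights are updated, which is what keeps LoRA fine-tuning cheap.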

## **Usage**

```python
# Libraries
import torch
from PIL import Image
from transformers import ViTFeatureExtractor, AutoTokenizer, VisionEncoderDecoderModel

# Model
model_id = "nttdataspain/vit-gpt2-stablediffusion2-lora"
model = VisionEncoderDecoderModel.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)
feature_extractor = ViTFeatureExtractor.from_pretrained(model_id)

# Predict function
def predict_prompts(list_images, max_length=16):
    model.eval()
    pixel_values = feature_extractor(images=list_images, return_tensors="pt").pixel_values
    with torch.no_grad():
        output_ids = model.generate(pixel_values, max_length=max_length, num_beams=4, return_dict_in_generate=True).sequences

    preds = tokenizer.batch_decode(output_ids, skip_special_tokens=True)
    preds = [pred.strip() for pred in preds]
    return preds

# Get an image and predict (set image_path to a local image file)
img = Image.open(image_path).convert('RGB')
pred_prompts = predict_prompts([img], max_length=16)
```
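As a quick check, you can run the predictor on one of the example images from the widget above. This snippet assumes network access and the `requests` package:

```python
# Quick check using one of the widget example images (requires `requests`).
import requests
from PIL import Image

url = "https://static.flickr.com/1126/5157409353_805483d0e4.jpg"
img = Image.open(requests.get(url, stream=True).raw).convert("RGB")
print(predict_prompts([img], max_length=16))
```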