---
license: apache-2.0
language:
- en
datasets:
- HuggingFaceM4/RAVEN
tags:
- code
---

**Try out the [demo](https://huggingface.co/spaces/HuggingFaceM4/ai_raven)!**

# Model Description

This model was trained to solve Raven's Progressive Matrices. It is based on an early checkpoint of our upcoming vision-language foundation model. 

We train the system on the [RAVEN](https://huggingface.co/datasets/HuggingFaceM4/RAVEN) dataset of procedurally generated Raven puzzles. On the validation set, the model reaches 91% accuracy.
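For reference, here is a minimal sketch of how one might load puzzles from this dataset; the split name and feature keys are assumptions, so check the dataset card for the exact values.

```python
from datasets import load_dataset

# Minimal sketch, assuming the dataset exposes a "validation" split and that each
# example contains the puzzle image plus its label; verify the exact configuration
# names and feature keys on the dataset card before relying on them.
raven_val = load_dataset("HuggingFaceM4/RAVEN", split="validation")
example = raven_val[0]
print(example.keys())
```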

# Code snippet

The model has been fine-tuned specifically for solving Raven puzzles, and we cannot guarantee that it will behave accurately outside of this use case without proper adaptation.

The code snippet below shows how to do batch inference with the model. Much of the input preparation will be encapsulated once we integrate the model into HF Transformers.

```python
import os

import torch
from PIL import Image
from transformers import AutoModelForCausalLM, AutoProcessor

from transformers.image_utils import to_numpy_array, PILImageResampling, ChannelDimension
from transformers.image_transforms import resize, to_channel_dimension_format


DEVICE = torch.device("cuda")
# Access token used to download the checkpoint; assumed here to be available in
# the HF_TOKEN environment variable.
API_TOKEN = os.environ.get("HF_TOKEN")
PROCESSOR = AutoProcessor.from_pretrained(
    "HuggingFaceM4/tr_272_bis_opt_step_15000_merge",
    token=API_TOKEN,
)
MODEL = AutoModelForCausalLM.from_pretrained(
    "HuggingFaceM4/tr_272_bis_opt_step_15000_merge",
    token=API_TOKEN,
    trust_remote_code=True,
    torch_dtype=torch.bfloat16,
).to(DEVICE)
image_seq_len = MODEL.config.perceiver_config.resampler_n_latents
BOS_TOKEN = PROCESSOR.tokenizer.bos_token
BAD_WORDS_IDS = PROCESSOR.tokenizer(["<image>", "<fake_token_around_image>"], add_special_tokens=False).input_ids


def convert_to_rgb(image):
    # `image.convert("RGB")` would only work for .jpg images, as it creates a wrong background
    # for transparent images. The call to `alpha_composite` handles this case
    if image.mode == "RGB":
        return image

    image_rgba = image.convert("RGBA")
    background = Image.new("RGBA", image_rgba.size, (255, 255, 255))
    alpha_composite = Image.alpha_composite(background, image_rgba)
    alpha_composite = alpha_composite.convert("RGB")
    return alpha_composite


# The processor is the same as the Idefics processor except for the BILINEAR interpolation,
# so this is a hack in order to redefine ONLY the transform method
def custom_transform(x):
    x = convert_to_rgb(x)
    x = to_numpy_array(x)

    height, width = x.shape[:2]
    aspect_ratio = width / height
    if width >= height and width > 980:
        width = 980
        height = int(width / aspect_ratio)
    elif height > width and height > 980:
        height = 980
        width = int(height * aspect_ratio)
    width = max(width, 378)
    height = max(height, 378)

    x = resize(x, (height, width), resample=PILImageResampling.BILINEAR)
    x = PROCESSOR.image_processor.rescale(x, scale=1 / 255)
    x = PROCESSOR.image_processor.normalize(
        x,
        mean=PROCESSOR.image_processor.image_mean,
        std=PROCESSOR.image_processor.image_std
    )
    x = to_channel_dimension_format(x, ChannelDimension.FIRST)
    x = torch.tensor(x)
    return x


# Create text token inputs
image_seq = '<image>' * image_seq_len
inputs = PROCESSOR.tokenizer(
    [
        f"{BOS_TOKEN}User:<fake_token_around_image>{image_seq}<fake_token_around_image>Which figure should complete the logical sequence?<end_of_utterance>\nAssistant:",
        f"{BOS_TOKEN}User:<fake_token_around_image>{image_seq}<fake_token_around_image>Which figure should complete the logical sequence?<end_of_utterance>\nAssistant:",
    ],
    return_tensors="pt",
    add_special_tokens=False,
    padding=True,
)

# Create pixel inputs
raw_images = [
    [your_raven_puzzle_as_a_pil_image1],
    [your_raven_puzzle_as_a_pil_image2],
]
output_images = [
    [PROCESSOR.image_processor(img, transform=custom_transform) for img in img_list]
    for img_list in raw_images
]
total_batch_size = len(output_images)
max_num_images = max([len(img_l) for img_l in output_images])
max_height = max([i.size(2) for img_l in output_images for i in img_l])
max_width = max([i.size(3) for img_l in output_images for i in img_l])
padded_image_tensor = torch.zeros(total_batch_size, max_num_images, 3, max_height, max_width)
padded_pixel_attention_masks = torch.zeros(
    total_batch_size, max_num_images, max_height, max_width, dtype=torch.bool
)
for batch_idx, img_l in enumerate(output_images):
    for img_idx, img in enumerate(img_l):
        im_height, im_width = img.size()[2:]
        padded_image_tensor[batch_idx, img_idx, :, :im_height, :im_width] = img
        padded_pixel_attention_masks[batch_idx, img_idx, :im_height, :im_width] = True

inputs["pixel_values"] = padded_image_tensor
inputs["pixel_attention_mask"] = padded_pixel_attention_masks
inputs = {k: v.to(DEVICE) for k, v in inputs.items()}

generated_ids = MODEL.generate(**inputs, bad_words_ids=BAD_WORDS_IDS, max_new_tokens=10)
generated_texts = PROCESSOR.batch_decode(generated_ids, skip_special_tokens=True)

print(generated_texts)
```
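Since the decoded strings contain the prompt as well as the generation, one typically keeps only the text after the `Assistant:` marker. A minimal sketch (the exact answer format, e.g. a panel index, is an assumption to verify on a few outputs):

```python
# Keep only the generated answer, i.e. the text after the "Assistant:" marker.
# The answer format (e.g. the index of the completing panel) is an assumption;
# inspect a few outputs to confirm it.
answers = [text.split("Assistant:")[-1].strip() for text in generated_texts]
print(answers)
```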

# Model Details

- **Developed by:** Hugging Face
- **Model type:** Multi-modal model
- **Language(s) (NLP):** en
- **License:** see [License section](#license)
- **Parent Models:** [SigLIP](https://github.com/huggingface/transformers/pull/26522) and [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1)
- **Resources for more information:**
    - RAVEN dataset: [Dataset card](https://huggingface.co/datasets/HuggingFaceM4/RAVEN)

# License

The model is built on top of two pre-trained models: [SigLIP](https://github.com/huggingface/transformers/pull/26522) and [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1), which are released under the Apache-2.0 license. As such, users should comply with the licenses of these models.

The two pre-trained models are connected to each other with newly initialized parameters that we train. These parameters are not based on either of the two frozen base models forming the composite model. We release the additional weights we trained under an Apache-2.0 license.