VictorSanh committed
Commit
061ffa4
1 Parent(s): ccf5303

Update README.md

Files changed (1): README.md (+143 -1)
README.md CHANGED

tags:
  - code
---
 
**Try out the [demo](https://huggingface.co/spaces/HuggingFaceM4/ai_raven)!**

# Model Description

This model was trained to solve Raven's Progressive Matrices. It is based on an early checkpoint of our upcoming vision-language foundation model (idefics2).

We use the [RAVEN](https://huggingface.co/datasets/HuggingFaceM4/RAVEN) dataset of procedurally generated Raven puzzles to train the system. The model reaches 91% accuracy on the validation set.
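
The RAVEN puzzles can be loaded directly from the Hub with the `datasets` library. The snippet below is only an illustrative sketch: the existence of a `validation` split and the `image`/`label` column names are assumptions, so check the dataset card for the exact configurations and schema.

```python
from datasets import load_dataset

# Illustrative sketch: load the procedurally generated RAVEN puzzles.
# The split and column names below are assumptions; a configuration name
# may also be required (see the dataset card).
raven = load_dataset("HuggingFaceM4/RAVEN", split="validation")

example = raven[0]
print(example.keys())             # inspect the actual schema
puzzle_image = example["image"]   # assumed: the rendered puzzle as a PIL image
correct_panel = example["label"]  # assumed: index of the correct candidate panel
```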
# Code snippet

The model has been specifically fine-tuned for solving Raven puzzles, and we cannot guarantee that it will behave accurately outside of this use case without proper adaptation.

The code snippet below shows how to do batch inference with the model. Much of the input preparation will be encapsulated once we integrate the model into HF Transformers.
```python
import torch
import requests

from io import BytesIO
from PIL import Image
from transformers import AutoModelForCausalLM, AutoProcessor

from transformers.image_utils import to_numpy_array, PILImageResampling, ChannelDimension
from transformers.image_transforms import resize, to_channel_dimension_format


API_TOKEN = "<your_hf_access_token>"  # your Hugging Face access token
DEVICE = torch.device("cuda")
PROCESSOR = AutoProcessor.from_pretrained(
    "HuggingFaceM4/tr_272_bis_opt_step_15000_merge",
    token=API_TOKEN,
)
MODEL = AutoModelForCausalLM.from_pretrained(
    "HuggingFaceM4/tr_272_bis_opt_step_15000_merge",
    token=API_TOKEN,
    trust_remote_code=True,
    torch_dtype=torch.bfloat16,
).to(DEVICE)
image_seq_len = MODEL.config.perceiver_config.resampler_n_latents
BOS_TOKEN = PROCESSOR.tokenizer.bos_token
BAD_WORDS_IDS = PROCESSOR.tokenizer(["<image>", "<fake_token_around_image>"], add_special_tokens=False).input_ids


def convert_to_rgb(image):
    # `image.convert("RGB")` would only work for .jpg images, as it creates a wrong background
    # for transparent images. The call to `alpha_composite` handles this case
    if image.mode == "RGB":
        return image

    image_rgba = image.convert("RGBA")
    background = Image.new("RGBA", image_rgba.size, (255, 255, 255))
    alpha_composite = Image.alpha_composite(background, image_rgba)
    alpha_composite = alpha_composite.convert("RGB")
    return alpha_composite


# The processor is the same as the Idefics processor except for the BILINEAR interpolation,
# so this is a hack in order to redefine ONLY the transform method
def custom_transform(x):
    x = convert_to_rgb(x)
    x = to_numpy_array(x)

    # Resize so that the longest side is at most 980 px (keeping the aspect ratio)
    # and each side is at least 378 px
    height, width = x.shape[:2]
    aspect_ratio = width / height
    if width >= height and width > 980:
        width = 980
        height = int(width / aspect_ratio)
    elif height > width and height > 980:
        height = 980
        width = int(height * aspect_ratio)
    width = max(width, 378)
    height = max(height, 378)

    x = resize(x, (height, width), resample=PILImageResampling.BILINEAR)
    x = PROCESSOR.image_processor.rescale(x, scale=1 / 255)
    x = PROCESSOR.image_processor.normalize(
        x,
        mean=PROCESSOR.image_processor.image_mean,
        std=PROCESSOR.image_processor.image_std,
    )
    x = to_channel_dimension_format(x, ChannelDimension.FIRST)
    x = torch.tensor(x)
    return x


# Create text token inputs
image_seq = '<image>' * image_seq_len
inputs = PROCESSOR.tokenizer(
    [
        f"{BOS_TOKEN}User:<fake_token_around_image>{image_seq}<fake_token_around_image>Which figure should complete the logical sequence?<end_of_utterance>\nAssistant:",
        f"{BOS_TOKEN}User:<fake_token_around_image>{image_seq}<fake_token_around_image>Which figure should complete the logical sequence?<end_of_utterance>\nAssistant:",
    ],
    return_tensors="pt",
    add_special_tokens=False,
    padding=True,
)

# Create pixel inputs
# Replace the placeholders below with your own Raven puzzles as PIL images
raw_images = [
    [your_raven_puzzle_as_a_pil_image1],
    [your_raven_puzzle_as_a_pil_image2],
]
output_images = [
    [PROCESSOR.image_processor(img, transform=custom_transform) for img in img_list]
    for img_list in raw_images
]
# Pad all images in the batch to the same size and build the matching pixel attention masks
total_batch_size = len(output_images)
max_num_images = max([len(img_l) for img_l in output_images])
max_height = max([i.size(2) for img_l in output_images for i in img_l])
max_width = max([i.size(3) for img_l in output_images for i in img_l])
padded_image_tensor = torch.zeros(total_batch_size, max_num_images, 3, max_height, max_width)
padded_pixel_attention_masks = torch.zeros(
    total_batch_size, max_num_images, max_height, max_width, dtype=torch.bool
)
for batch_idx, img_l in enumerate(output_images):
    for img_idx, img in enumerate(img_l):
        im_height, im_width = img.size()[2:]
        padded_image_tensor[batch_idx, img_idx, :, :im_height, :im_width] = img
        padded_pixel_attention_masks[batch_idx, img_idx, :im_height, :im_width] = True

inputs["pixel_values"] = padded_image_tensor
inputs["pixel_attention_mask"] = padded_pixel_attention_masks
inputs = {k: v.to(DEVICE) for k, v in inputs.items()}

generated_ids = MODEL.generate(**inputs, bad_words_ids=BAD_WORDS_IDS, max_new_tokens=10)
generated_texts = PROCESSOR.batch_decode(generated_ids, skip_special_tokens=True)

print(generated_texts)
```
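
The decoded strings contain the prompt followed by the model's continuation. As a minimal post-processing sketch, assuming the answer is whatever follows the final `Assistant:` marker, you can keep only the generated part of each sequence:

```python
# Minimal sketch: strip the prompt and keep only the generated answer.
# Assumption: each decoded string contains "Assistant:" followed by the answer.
answers = [text.split("Assistant:")[-1].strip() for text in generated_texts]
print(answers)
```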

# Model Details

- **Developed by:** Hugging Face
- **Model type:** Multi-modal model
- **Language(s) (NLP):** en
- **License:** see [License section](#license)
- **Parent Models:** [SigLIP](https://github.com/huggingface/transformers/pull/26522) and [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1)
- **Resources for more information:**
  - RAVEN dataset: [Dataset card](https://huggingface.co/datasets/HuggingFaceM4/RAVEN)

# License

The model is built on top of two pre-trained models: [SigLIP](https://github.com/huggingface/transformers/pull/26522) and [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1), both released under an Apache-2.0 license. As such, users should comply with the licenses of these models.

The two pre-trained models are connected to each other with newly initialized parameters that we train. These parameters are not derived from either of the two frozen base models that form the composite model. We release the additional weights we trained under an Apache-2.0 license.