VictorSanh committed
Commit feb26d1
1 Parent(s): 5b86ca4

Update README.md

Files changed (1)
  1. README.md +2 -157
README.md CHANGED
@@ -3,165 +3,10 @@ license: apache-2.0
  language:
  - en
  datasets:
- - HuggingFaceM4/WebSight
+ - HuggingFaceM4/RAVEN
  tags:
  - code
  ---


- # Model Description
-
- This model converts screenshots of website components into HTML/Tailwind CSS code.
-
- It is based on an early checkpoint of our forthcoming vision-language foundation model, further fine-tuned with DoRA on the [WebSight-v1](https://huggingface.co/datasets/HuggingFaceM4/Websight) dataset.
-
- The base model is built upon Mistral-7B and SigLIP-SO400M, and uses the Patch n’ Pack strategy to preserve the original aspect ratio of the input images at a resolution of up to 980 pixels per side.
- Further details on the model’s architecture and training process will be shared upon its release.
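-
- As a rough illustration of this resizing rule (the example dimensions are arbitrary; the same logic appears in the `custom_transform` function of the code snippet below):
-
- ```python
- # Illustration only: cap the longer side at 980 px while preserving the aspect ratio
- width, height = 1960, 1080  # arbitrary example screenshot size
- aspect_ratio = width / height
- if width >= height and width > 980:
-     width = 980
-     height = int(width / aspect_ratio)
- elif height > width and height > 980:
-     height = 980
-     width = int(height * aspect_ratio)
- print(width, height)  # roughly 980 x 540 for a 1960 x 1080 input
- ```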
-
- By open-sourcing the WebSight dataset together with the Sightseer model, we aim to kick off an effort to develop better models for converting a website screenshot into working code.
-
- Try out the [demo](https://huggingface.co/spaces/HuggingFaceM4/screenshot2html)!
-
-
- # Code snippet
-
- The code snippet below demonstrates how to perform batched generation, converting screenshots of websites into the corresponding HTML + Tailwind code.
-
- Note that the logic to process and pad the inputs will be encapsulated in a user-friendly processor upon the release of our vision-language model.
-
- ```python
- import torch
-
- from datasets import load_dataset
- from transformers import AutoModelForCausalLM, AutoProcessor
- from PIL import Image
-
- from transformers.image_utils import to_numpy_array, PILImageResampling, ChannelDimension
- from transformers.image_transforms import resize, to_channel_dimension_format
-
-
- DEVICE = torch.device("cuda")
- API_TOKEN = "<YOUR_HF_ACCESS_TOKEN>"  # your Hugging Face access token
- PROCESSOR = AutoProcessor.from_pretrained(
-     "HuggingFaceM4/Sightseer",
-     token=API_TOKEN,
- )
- MODEL = AutoModelForCausalLM.from_pretrained(
-     "HuggingFaceM4/Sightseer",
-     token=API_TOKEN,
-     trust_remote_code=True,
-     torch_dtype=torch.bfloat16,
- ).to(DEVICE)
- # Number of <image> placeholder tokens that stand in for a single image in the prompt
- image_seq_len = MODEL.config.perceiver_config.resampler_n_latents
- BOS_TOKEN = PROCESSOR.tokenizer.bos_token
- # Forbid the model from emitting the raw image placeholder tokens in its output
- BAD_WORDS_IDS = PROCESSOR.tokenizer(["<image>", "<fake_token_around_image>"], add_special_tokens=False).input_ids
-
-
- # Composite images that have an alpha channel onto a white background, then convert to RGB
- def convert_to_rgb(image):
-     if image.mode == "RGB":
-         return image
-
-     image_rgba = image.convert("RGBA")
-     background = Image.new("RGBA", image_rgba.size, (255, 255, 255))
-     alpha_composite = Image.alpha_composite(background, image_rgba)
-     alpha_composite = alpha_composite.convert("RGB")
-     return alpha_composite
-
-
- # The processor is the same as the Idefics processor, except for the BILINEAR interpolation,
- # so this is a hack to redefine ONLY the transform method
- def custom_transform(x):
-     x = convert_to_rgb(x)
-     x = to_numpy_array(x)
-
-     # Resize so that the longer side is at most 980 px while preserving the aspect ratio,
-     # and keep each side at least 378 px
-     height, width = x.shape[:2]
-     aspect_ratio = width / height
-     if width >= height and width > 980:
-         width = 980
-         height = int(width / aspect_ratio)
-     elif height > width and height > 980:
-         height = 980
-         width = int(height * aspect_ratio)
-     width = max(width, 378)
-     height = max(height, 378)
-
-     x = resize(x, (height, width), resample=PILImageResampling.BILINEAR)
-     x = PROCESSOR.image_processor.rescale(x, scale=1 / 255)
-     x = PROCESSOR.image_processor.normalize(
-         x,
-         mean=PROCESSOR.image_processor.image_mean,
-         std=PROCESSOR.image_processor.image_std
-     )
-     x = to_channel_dimension_format(x, ChannelDimension.FIRST)
-     x = torch.tensor(x)
-     return x
-
-
- # Create text token inputs
- # The prompts below are placeholders that only illustrate the expected format;
- # each sample here contains exactly one image, represented by `image_seq_len` <image> tokens
- image_seq = '<image>' * image_seq_len
- inputs = PROCESSOR.tokenizer(
-     [
-         f"{BOS_TOKEN}<fake_token_around_image>{image_seq}<fake_token_around_image>In this image, we see",
-         f"{BOS_TOKEN}bla bla<fake_token_around_image>{image_seq}<fake_token_around_image>",
-     ],
-     return_tensors="pt",
-     add_special_tokens=False,
-     padding=True,
- )
-
-
- # Create pixel inputs
- # We load images from WebSight (each example's "image" column is a PIL image),
- # but any screenshot in the form of a PIL image will work
- dataset = load_dataset("HuggingFaceM4/WebSight", split="train", streaming=True)
- dataset = iter(dataset)
- image1 = next(dataset)["image"]
- image2 = next(dataset)["image"]
- raw_images = [
-     [image1],
-     [image2],
- ]
- output_images = [
-     [PROCESSOR.image_processor(img, transform=custom_transform) for img in img_list]
-     for img_list in raw_images
- ]
- # Zero-pad every image in the batch to the largest height/width and build a pixel attention mask
- # marking which pixels are real (True) and which are padding (False)
- total_batch_size = len(output_images)
- max_num_images = max([len(img_l) for img_l in output_images])
- max_height = max([i.size(2) for img_l in output_images for i in img_l])
- max_width = max([i.size(3) for img_l in output_images for i in img_l])
- padded_image_tensor = torch.zeros(total_batch_size, max_num_images, 3, max_height, max_width)
- padded_pixel_attention_masks = torch.zeros(
-     total_batch_size, max_num_images, max_height, max_width, dtype=torch.bool
- )
- for batch_idx, img_l in enumerate(output_images):
-     for img_idx, img in enumerate(img_l):
-         im_height, im_width = img.size()[2:]
-         padded_image_tensor[batch_idx, img_idx, :, :im_height, :im_width] = img
-         padded_pixel_attention_masks[batch_idx, img_idx, :im_height, :im_width] = True
-
- inputs["pixel_values"] = padded_image_tensor
- inputs["pixel_attention_mask"] = padded_pixel_attention_masks
- inputs = {k: v.to(DEVICE) for k, v in inputs.items()}
-
- # A tiny generation budget (10 tokens), just to check that the pipeline runs end to end
- generated_ids = MODEL.generate(**inputs, bad_words_ids=BAD_WORDS_IDS, max_new_tokens=10)
- generated_texts = PROCESSOR.batch_decode(generated_ids, skip_special_tokens=True)
-
- print(generated_texts)
- ```
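-
- For actual screenshot-to-code conversion, the generation budget has to be much larger than the 10-token smoke test above. A minimal sketch of such a pass, reusing the `MODEL`, `PROCESSOR`, `BAD_WORDS_IDS`, and padded `inputs` defined above (the 4096-token budget is an illustrative choice, not a recommendation from the model card):
-
- ```python
- # Sketch: generate a full page instead of a handful of tokens (budget chosen arbitrarily)
- generated_ids = MODEL.generate(
-     **inputs,
-     bad_words_ids=BAD_WORDS_IDS,
-     max_new_tokens=4096,
- )
- # generate() returns the prompt followed by the new tokens; keep only the new tokens
- new_tokens = generated_ids[:, inputs["input_ids"].shape[1]:]
- html_outputs = PROCESSOR.batch_decode(new_tokens, skip_special_tokens=True)
- for html in html_outputs:
-     print(html)
- ```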
-
-
- # Model Details
-
- - **Developed by:** Hugging Face
- - **Model type:** Multi-modal model (screenshot of a website component to HTML/Tailwind CSS code)
- - **Language(s) (NLP):** en
- - **License:** see [License section](#license)
- - **Parent Models:** [SigLIP](https://github.com/huggingface/transformers/pull/26522) and [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1)
- - **Resources for more information:**
-   - WebSight dataset: [Dataset card](https://huggingface.co/datasets/HuggingFaceM4/Websight)
-
- # License
-
- The model is built on top of two pre-trained models, [SigLIP](https://github.com/huggingface/transformers/pull/26522) and [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1), both released under the Apache 2.0 license. Users should therefore comply with the licenses of these models.
-
- The two pre-trained models are connected by newly initialized parameters that we train; these parameters are not derived from either of the two frozen base models that form the composite model. We release the additional weights we trained under the Apache 2.0 license.
+ idefics2 (upcoming) fine-tuned on RAVEN visual reasoning problems