Ligeng-Zhu committed on
Commit a6447a4 · verified · 1 Parent(s): a8bb425

Upload files with `vila-upload`.


Upload vicuna_v1.jinja
Upload conversation.py
Upload media_encoder.py
Upload media.py
Upload utils.py
Upload modeling_vila.py
Upload main.py
Upload constants.py
Upload config.json
Upload README.md
Upload configuration_vila.py
Upload builder.py
Upload base_projector.py
Upload trainer_state.json
Upload mm_utils.py
Upload tokenizer_utils.py
Upload siglip_encoder.py
Upload llm/model-00005-of-00006.safetensors
Upload llm/model-00003-of-00006.safetensors
Upload llm/generation_config.json
Upload llm/tokenizer.model
Upload llm/special_tokens_map.json
Upload llm/config.json
Upload llm/model-00001-of-00006.safetensors
Upload llm/tokenizer_config.json
Upload llm/model-00004-of-00006.safetensors
Upload llm/model-00006-of-00006.safetensors
Upload llm/model-00002-of-00006.safetensors
Upload llm/model.safetensors.index.json
Upload mm_projector/config.json
Upload mm_projector/model.safetensors
Upload vision_tower/config.json
Upload vision_tower/preprocessor_config.json
Upload vision_tower/model.safetensors

README.md ADDED
@@ -0,0 +1,114 @@
1
+ ---
2
+ license: cc-by-nc-4.0
3
+ library_name: transformers
4
+ pipeline_tag: text-generation
5
+ tags:
6
+ - VILA
7
+ - VLM
8
+ ---
9
+
10
+ # VILA Model Card
11
+
12
+ ## Model details
13
+
14
+ **Model type:**
15
+ VILA is a visual language model (VLM) pretrained with interleaved image-text data at scale, enabling multi-image VLM capabilities. VILA is deployable on the edge, including Jetson Orin and laptops, via AWQ 4-bit quantization through the TinyChat framework. We find that: (1) image-text pairs are not enough; interleaved image-text data is essential; (2) unfreezing the LLM during interleaved image-text pre-training enables in-context learning; (3) re-blending text-only instruction data is crucial to boost both VLM and text-only performance. VILA unveils appealing capabilities, including multi-image reasoning, in-context learning, visual chain-of-thought, and better world knowledge.
16
+
17
+ **Model date:**
18
+ VILA1.5-40b was trained in May 2024.
19
+
20
+ **Paper or resources for more information:**
21
+ https://github.com/NVLabs/VILA
22
+
23
+ ```
24
+ @misc{lin2023vila,
25
+ title={VILA: On Pre-training for Visual Language Models},
26
+ author={Ji Lin and Hongxu Yin and Wei Ping and Yao Lu and Pavlo Molchanov and Andrew Tao and Huizi Mao and Jan Kautz and Mohammad Shoeybi and Song Han},
27
+ year={2023},
28
+ eprint={2312.07533},
29
+ archivePrefix={arXiv},
30
+ primaryClass={cs.CV}
31
+ }
32
+ ```
33
+
34
+ ## License
35
+ - The code is released under the Apache 2.0 license as found in the [LICENSE](./LICENSE) file.
36
+ - The pretrained weights are released under the [CC-BY-NC-SA-4.0 license](https://creativecommons.org/licenses/by-nc-sa/4.0/deed.en).
37
+ - The service is a research preview intended for non-commercial use only, and is subject to the following licenses and terms:
38
+ - [Model License](https://github.com/facebookresearch/llama/blob/main/MODEL_CARD.md) of LLaMA
39
+ - [Terms of Use](https://openai.com/policies/terms-of-use) of the data generated by OpenAI
40
+ - [Dataset Licenses](https://github.com/Efficient-Large-Model/VILA/blob/main/data_prepare/LICENSE) for each one used during training.
41
+
42
+ **Where to send questions or comments about the model:**
43
+ https://github.com/NVLabs/VILA/issues
44
+
45
+ ## Intended use
46
+ **Primary intended uses:**
47
+ The primary use of VILA is research on large multimodal models and chatbots.
48
+
49
+ **Primary intended users:**
50
+ The primary intended users of the model are researchers and hobbyists in computer vision, natural language processing, machine learning, and artificial intelligence.
51
+
52
+ ## Model Architecture:
53
+ **Architecture Type:** Transformer
54
+ **Network Architecture:** SigLIP, Vicuna-1.5
55
+
56
+ ## Input:
57
+ **Input Type:** Image, Video, Text
58
+ **Input Format:** Red, Green, Blue; MP4; String
59
+ **Input Parameters:** 2D, 3D
60
+
61
+ ## Output:
62
+ **Output Type:** Text
63
+ **Output Format:** String
64
+
65
+ **Supported Hardware Microarchitecture Compatibility:**
66
+ * Ampere
67
+ * Jetson
68
+ * Hopper
69
+ * Lovelace
70
+
71
+ **Supported Operating System(s):** <br>
72
+ Linux
73
+
74
+ ## Model Version(s):
75
+ * VILA1.5-3B
76
+ * VILA1.5-3B-s2
77
+ * Llama-3-VILA1.5-8B
78
+ * VILA1.5-13B
79
+ * VILA1.5-40B
80
+ * VILA1.5-3B-AWQ
81
+ * VILA1.5-3B-s2-AWQ
82
+ * Llama-3-VILA1.5-8B-AWQ
83
+ * VILA1.5-13B-AWQ
84
+ * VILA1.5-40B-AWQ
85
+
86
+ ## Training dataset
87
+ See [Dataset Preparation](https://github.com/NVLabs/VILA/blob/main/data_prepare/README.md) for more details.
88
+
89
+ **Data Collection Method by dataset:**
90
+ * Hybrid: Automated, Human
91
+
92
+ **Labeling Method by dataset:**
93
+ * Hybrid: Automated, Human
94
+
95
+ **Properties (Quantity, Dataset Descriptions, Sensor(s)):**
96
+ 53 million image-text pairs or interleaved image-text content.
97
+
98
+
99
+ ## Evaluation dataset
100
+ A collection of 12 benchmarks: 5 academic VQA benchmarks and 7 recent benchmarks specifically proposed for instruction-following LMMs.
101
+
102
+ ## Inference:
103
+ **Engine:**
104
+ * PyTorch
105
+ * TensorRT-LLM
106
+ * TinyChat
107
+
108
+ **Test Hardware:**
109
+ * A100
110
+ * Jetson Orin
111
+ * RTX 4090
112
+
113
+ ## Ethical Considerations
114
+ NVIDIA believes Trustworthy AI is a shared responsibility and we have established policies and practices to enable development for a wide array of AI applications. When downloaded or used in accordance with our terms of service, developers should work with their internal model team to ensure this model meets requirements for the relevant industry and use case and addresses unforeseen product misuse.
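Below is a minimal, hedged sketch of loading this checkpoint with the remote code uploaded in this commit (`modeling_vila.py`, `configuration_vila.py`). The class routing comes from the `auto_map` in the uploaded `config.json`; the repo id below is a placeholder, not something stated in this card.

```python
# Hypothetical loading sketch; the repo id is a placeholder, substitute the actual repo or a local path.
import torch
from transformers import AutoConfig, AutoModelForCausalLM

repo_id = "path/or/repo-id-of-this-checkpoint"  # placeholder

# trust_remote_code pulls in the modeling_vila.py / configuration_vila.py files from this upload
config = AutoConfig.from_pretrained(repo_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    trust_remote_code=True,
    torch_dtype=torch.bfloat16,  # matches "model_dtype": "torch.bfloat16" in config.json
)
```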
base_projector.py ADDED
@@ -0,0 +1,228 @@
1
+ # Copyright 2024 NVIDIA CORPORATION & AFFILIATES
2
+ #
3
+ # Licensed under the Apache License, Version 2.0 (the "License");
4
+ # you may not use this file except in compliance with the License.
5
+ # You may obtain a copy of the License at
6
+ #
7
+ # http://www.apache.org/licenses/LICENSE-2.0
8
+ #
9
+ # Unless required by applicable law or agreed to in writing, software
10
+ # distributed under the License is distributed on an "AS IS" BASIS,
11
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12
+ # See the License for the specific language governing permissions and
13
+ # limitations under the License.
14
+ #
15
+ # SPDX-License-Identifier: Apache-2.0
16
+
17
+ import re
18
+
19
+ import torch
20
+ import torch.nn as nn
21
+ from transformers import AutoConfig, AutoModel, PretrainedConfig, PreTrainedModel
22
+
23
+
24
+ class IdentityMap(nn.Module):
25
+ def __init__(self):
26
+ super().__init__()
27
+
28
+ def forward(self, x, *args, **kwargs):
29
+ return x
30
+
31
+ @property
32
+ def config(self):
33
+ return {"mm_projector_type": "identity"}
34
+
35
+
36
+ class SimpleResBlock(nn.Module):
37
+ def __init__(self, channels):
38
+ super().__init__()
39
+ self.pre_norm = nn.LayerNorm(channels)
40
+
41
+ self.proj = nn.Sequential(nn.Linear(channels, channels), nn.GELU(), nn.Linear(channels, channels))
42
+
43
+ def forward(self, x):
44
+ x = self.pre_norm(x)
45
+ return x + self.proj(x)
46
+
47
+
48
+ class DownSampleBlock(nn.Module):
49
+ def forward(self, x):
50
+ vit_embeds = x
51
+ h = w = int(vit_embeds.shape[1] ** 0.5)
52
+ vit_embeds = vit_embeds.reshape(vit_embeds.shape[0], h, w, -1)
53
+ vit_embeds = self.flat_square(vit_embeds)
54
+ vit_embeds = vit_embeds.reshape(vit_embeds.shape[0], -1, vit_embeds.shape[-1])
55
+ return vit_embeds
56
+
57
+ def flat_square(self, x):
58
+ n, w, h, c = x.size()
59
+ if w % 2 == 1:
60
+ x = torch.concat([x, torch.zeros((n, 1, h, c), dtype=x.dtype).to(x.device)], dim=1).contiguous()
61
+ n, w, h, c = x.size()
62
+ if h % 2 == 1:
63
+ x = torch.concat([x, torch.zeros((n, w, 1, c), dtype=x.dtype).to(x.device)], dim=2).contiguous()
64
+ n, w, h, c = x.size()
65
+ x = x.contiguous()
66
+ x = x.view(n, w, int(h / 2), int(c * 2))
67
+ x = x.permute(0, 2, 1, 3).contiguous()
68
+ x = x.view(n, int(h / 2), int(w / 2), int(c * 4))
69
+ x = x.permute(0, 2, 1, 3).contiguous()
70
+ return x
71
+
72
+
73
+ class DownSample2x2BlockFix(nn.Module):
74
+ def forward(self, x):
75
+ vit_embeds = x
76
+ h = w = int(vit_embeds.shape[1] ** 0.5)
77
+ vit_embeds = vit_embeds.reshape(vit_embeds.shape[0], h, w, -1)
78
+ vit_embeds = flat_square_2x2(vit_embeds)
79
+ vit_embeds = vit_embeds.reshape(vit_embeds.shape[0], -1, vit_embeds.shape[-1])
80
+ return vit_embeds
81
+
82
+
83
+ def flat_square_2x2(x):
84
+ n, w, h, c = x.size()
85
+ if w % 2 == 1:
86
+ x = torch.concat([x, torch.zeros((n, 1, h, c), dtype=x.dtype).to(x.device)], dim=1).contiguous()
87
+ n, w, h, c = x.size()
88
+ x = x.contiguous()
89
+ if h % 2 == 1:
90
+ x = torch.concat([x, torch.zeros((n, w, 1, c), dtype=x.dtype).to(x.device)], dim=2).contiguous()
91
+ n, w, h, c = x.size()
92
+ x = x.view(n, w, int(h / 2), int(c * 2))
93
+ x = x.permute(0, 2, 1, 3).contiguous()
94
+ x = x.view(n, int(h / 2), int(w / 2), int(c * 4))
95
+ x = x.permute(0, 2, 1, 3).contiguous()
96
+ return x
97
+
98
+
99
+ class DownSample3x3BlockFix(nn.Module):
100
+ def forward(self, x):
101
+ vit_embeds = x
102
+ h = w = int(vit_embeds.shape[1] ** 0.5)
103
+ vit_embeds = vit_embeds.reshape(vit_embeds.shape[0], h, w, -1)
104
+ vit_embeds = flat_square_3x3(vit_embeds)
105
+ vit_embeds = vit_embeds.reshape(vit_embeds.shape[0], -1, vit_embeds.shape[-1])
106
+ return vit_embeds
107
+
108
+
109
+ def flat_square_3x3(x):
110
+ n, w, h, c = x.size()
111
+ if w % 3 != 0:
112
+ x = torch.concat([x, torch.zeros((n, 3 - (w % 3), h, c), dtype=x.dtype).to(x.device)], dim=1).contiguous()
113
+ n, w, h, c = x.size()
114
+ x = x.contiguous()
115
+ if h % 3 != 0:
116
+ x = torch.concat([x, torch.zeros((n, w, 3 - (h % 3), c), dtype=x.dtype).to(x.device)], dim=2).contiguous()
117
+ n, w, h, c = x.size()
118
+ x = x.view(n, w, int(h / 3), int(c * 3))
119
+ x = x.permute(0, 2, 1, 3).contiguous()
120
+ x = x.view(n, int(h / 3), int(w / 3), int(c * 9))
121
+ x = x.permute(0, 2, 1, 3).contiguous()
122
+ return x
123
+
124
+
125
+ class MultimodalProjectorConfig(PretrainedConfig):
126
+ model_type = "v2l_projector"
127
+
128
+ def __init__(self, mm_projector_type: str = None, **kwargs):
129
+ super().__init__()
130
+ self.mm_projector_type = mm_projector_type
131
+
132
+
133
+ class MultimodalProjector(PreTrainedModel):
134
+ config_class = MultimodalProjectorConfig
135
+
136
+ def __init__(self, mm_projector_cfg: MultimodalProjectorConfig, config: PretrainedConfig):
137
+ super().__init__(mm_projector_cfg)
138
+ mm_projector_type = mm_projector_cfg.mm_projector_type
139
+ self.downsample_rate = 1
140
+ if mm_projector_type == "identity":
141
+ self.layers = IdentityMap()
142
+ elif mm_projector_type == "linear":
143
+ self.layers = nn.Linear(config.mm_hidden_size, config.hidden_size)
144
+ elif mm_projector_type == "mlp_downsample":
145
+ self.layers = nn.Sequential(
146
+ DownSampleBlock(),
147
+ nn.LayerNorm(config.mm_hidden_size * 4),
148
+ nn.Linear(config.mm_hidden_size * 4, config.hidden_size),
149
+ nn.GELU(),
150
+ nn.Linear(config.hidden_size, config.hidden_size),
151
+ )
152
+ self.downsample_rate = 2
153
+ elif mm_projector_type == "mlp_downsample_2x2_fix":
154
+ self.layers = nn.Sequential(
155
+ DownSample2x2BlockFix(),
156
+ nn.LayerNorm(config.mm_hidden_size * 4),
157
+ nn.Linear(config.mm_hidden_size * 4, config.hidden_size),
158
+ nn.GELU(),
159
+ nn.Linear(config.hidden_size, config.hidden_size),
160
+ )
161
+ self.downsample_rate = 2
162
+ elif mm_projector_type == "mlp_downsample_3x3_fix":
163
+ self.layers = nn.Sequential(
164
+ DownSample3x3BlockFix(),
165
+ nn.LayerNorm(config.mm_hidden_size * 9),
166
+ nn.Linear(config.mm_hidden_size * 9, config.mm_hidden_size * 3),
167
+ nn.GELU(),
168
+ nn.LayerNorm(config.mm_hidden_size * 3),
169
+ nn.Linear(config.mm_hidden_size * 3, config.hidden_size),
170
+ nn.GELU(),
171
+ nn.Linear(config.hidden_size, config.hidden_size),
172
+ )
173
+ self.downsample_rate = 3
174
+ elif mm_projector_type == "mlp_downsample_3x3_s2":
175
+ self.layers = nn.Sequential(
176
+ DownSample3x3BlockFix(),
177
+ nn.LayerNorm(config.mm_hidden_size * 9),
178
+ nn.Linear(config.mm_hidden_size * 9, config.mm_hidden_size * 3),
179
+ nn.GELU(),
180
+ nn.LayerNorm(config.mm_hidden_size * 3),
181
+ nn.Linear(config.mm_hidden_size * 3, config.mm_hidden_size),
182
+ nn.GELU(),
183
+ nn.LayerNorm(config.mm_hidden_size),
184
+ nn.Linear(config.mm_hidden_size, config.mm_hidden_size // 3),
185
+ nn.GELU(),
186
+ nn.LayerNorm(config.mm_hidden_size // 3),
187
+ nn.Linear(config.mm_hidden_size // 3, config.hidden_size),
188
+ nn.GELU(),
189
+ nn.Linear(config.hidden_size, config.hidden_size),
190
+ )
191
+ elif mm_projector_type == "mlp_downsample_3x3_s2_new":
192
+ self.layers = nn.Sequential(
193
+ DownSample3x3BlockFix(),
194
+ nn.LayerNorm(config.mm_hidden_size * 9),
195
+ nn.Linear(config.mm_hidden_size * 9, config.mm_hidden_size * 4),
196
+ nn.GELU(),
197
+ nn.LayerNorm(config.mm_hidden_size * 4),
198
+ nn.Linear(config.mm_hidden_size * 4, config.mm_hidden_size * 2),
199
+ nn.GELU(),
200
+ nn.LayerNorm(config.mm_hidden_size * 2),
201
+ nn.Linear(config.mm_hidden_size * 2, config.mm_hidden_size),
202
+ nn.GELU(),
203
+ nn.LayerNorm(config.mm_hidden_size),
204
+ nn.Linear(config.mm_hidden_size, config.mm_hidden_size // 3),
205
+ nn.GELU(),
206
+ nn.LayerNorm(config.mm_hidden_size // 3),
207
+ nn.Linear(config.mm_hidden_size // 3, config.hidden_size),
208
+ nn.GELU(),
209
+ nn.Linear(config.hidden_size, config.hidden_size),
210
+ )
211
+ else:
212
+ mlp_gelu_match = re.match(r"^mlp(\d+)x_gelu$", mm_projector_type)
213
+ if mlp_gelu_match:
214
+ mlp_depth = int(mlp_gelu_match.group(1))
215
+ modules = [nn.Linear(config.mm_hidden_size, config.hidden_size)]
216
+ for _ in range(1, mlp_depth):
217
+ modules.append(nn.GELU())
218
+ modules.append(nn.Linear(config.hidden_size, config.hidden_size))
219
+ self.layers = nn.Sequential(*modules)
220
+ else:
221
+ raise ValueError(f"Unknown projector type: {mm_projector_type}")
222
+
223
+ def forward(self, x, *args, **kwargs):
224
+ return self.layers(x)
225
+
226
+
227
+ # AutoConfig.register("v2l_projector", MultimodalProjectorConfig)
228
+ # AutoModel.register(MultimodalProjectorConfig, MultimodalProjector)
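A quick shape-check sketch, assuming `base_projector.py` above is importable as a module, showing how the 2x2 spatial downsampling used by the `mlp_downsample` projectors trades tokens for channels:

```python
# Shape check for the 2x2 downsample block defined above (illustrative only).
import torch
from base_projector import DownSample2x2BlockFix  # assumes this file is on the import path

x = torch.randn(1, 729, 1152)  # e.g. 27x27 SigLIP patch embeddings, hidden size 1152
y = DownSample2x2BlockFix()(x)
print(y.shape)                 # torch.Size([1, 196, 4608]): 27 is zero-padded to 28,
                               # giving 14*14 tokens with 4x the channel dimension
```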
builder.py ADDED
@@ -0,0 +1,246 @@
1
+ # Copyright 2024 NVIDIA CORPORATION & AFFILIATES
2
+ #
3
+ # Licensed under the Apache License, Version 2.0 (the "License");
4
+ # you may not use this file except in compliance with the License.
5
+ # You may obtain a copy of the License at
6
+ #
7
+ # http://www.apache.org/licenses/LICENSE-2.0
8
+ #
9
+ # Unless required by applicable law or agreed to in writing, software
10
+ # distributed under the License is distributed on an "AS IS" BASIS,
11
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12
+ # See the License for the specific language governing permissions and
13
+ # limitations under the License.
14
+ #
15
+ # SPDX-License-Identifier: Apache-2.0
16
+
17
+ import math
18
+ import os
19
+ import os.path as osp
20
+ import warnings
21
+ from dataclasses import asdict
22
+ from typing import Any, Dict, List, Optional, Sequence, Tuple
23
+
24
+ import torch
25
+ import transformers
26
+ from huggingface_hub import file_exists, repo_exists
27
+ from huggingface_hub.utils import HFValidationError
28
+ from transformers import (
29
+ AutoConfig,
30
+ AutoModelForCausalLM,
31
+ AutoTokenizer,
32
+ PretrainedConfig,
33
+ PreTrainedModel,
34
+ PreTrainedTokenizer,
35
+ )
36
+
37
+ # from .conversation import *
38
+ from .conversation import SeparatorStyle, default_conversation
39
+
40
+ SENTINEL_TOKEN = "<vila/sentinel>"
41
+ MEDIA_TOKENS = {
42
+ "image": "<image>",
43
+ "video": "<vila/video>",
44
+ }
45
+
46
+ # from llava.model.utils import packing
47
+ # from llava.utils.logging import logger
48
+ # from llava.utils.tokenizer import infer_stop_tokens
49
+
50
+ DUMMY_CONVERSATION = [
51
+ {"from": "human", "value": "question"},
52
+ {"from": "gpt", "value": "answer"},
53
+ ] * 10
54
+
55
+
56
+ def tokenizer_image_token(prompt, tokenizer, return_tensors=None):
57
+ return tokenizer(prompt, return_tensors=return_tensors).input_ids[0]
58
+
59
+
60
+ def has_tokenizer(repo_id_or_path: str) -> bool:
61
+ # Check if the tokenizer is in a local directory
62
+ if osp.exists(osp.join(repo_id_or_path, "tokenizer_config.json")):
63
+ return True
64
+
65
+ # Check if the tokenizer is in a Hugging Face Hub repo
66
+ try:
67
+ return repo_exists(repo_id_or_path) and file_exists(repo_id_or_path, "tokenizer_config.json")
68
+ except HFValidationError:
69
+ return False
70
+
71
+
72
+ def _maybe_add_sentinel_token(tokenizer: transformers.PreTrainedTokenizer) -> None:
73
+ if not hasattr(tokenizer, "sentinel_token"):
74
+ tokenizer.add_tokens([SENTINEL_TOKEN], special_tokens=True)
75
+ tokenizer.sentinel_token = SENTINEL_TOKEN
76
+ tokenizer.sentinel_token_id = tokenizer.convert_tokens_to_ids(SENTINEL_TOKEN)
77
+
78
+
79
+ def tokenize_conversation_legacy(
80
+ messages: Sequence[Dict[str, str]],
81
+ tokenizer: transformers.PreTrainedTokenizer,
82
+ add_generation_prompt: bool = False,
83
+ overrides: Optional[Dict[str, str]] = None,
84
+ no_system_prompt: bool = False,
85
+ ) -> torch.Tensor:
86
+ conv = default_conversation.copy()
87
+ roles = {"human": conv.roles[0], "gpt": conv.roles[1]}
88
+
89
+ if no_system_prompt:
90
+ conv.system = ""
91
+
92
+ # Skip the first message if it is not from human
93
+ if messages[0]["from"] != "human":
94
+ messages = messages[1:]
95
+
96
+ # Add a generation prompt if needed
97
+ if add_generation_prompt:
98
+ messages.append({"from": "gpt", "value": None})
99
+
100
+ conv.messages = []
101
+ for turn, message in enumerate(messages):
102
+ role = roles[message["from"]]
103
+ assert role == conv.roles[turn % 2]
104
+ if overrides is not None and message["from"] in overrides:
105
+ conv.append_message(role, overrides[message["from"]])
106
+ else:
107
+ conv.append_message(role, message["value"])
108
+
109
+ return tokenizer_image_token(conv.get_prompt(), tokenizer, return_tensors="pt")
110
+
111
+
112
+ def tokenize_conversation(
113
+ messages: Sequence[Dict[str, str]],
114
+ tokenizer: transformers.PreTrainedTokenizer,
115
+ add_generation_prompt: bool = False,
116
+ overrides: Optional[Dict[str, str]] = None,
117
+ no_system_prompt: bool = False,
118
+ ) -> torch.Tensor:
119
+ # Normalize the conversation before tokenization
120
+ for message in messages:
121
+ message["value"] = message["value"].strip()
122
+
123
+ if default_conversation.sep_style != SeparatorStyle.AUTO:
124
+ return tokenize_conversation_legacy(
125
+ messages,
126
+ tokenizer,
127
+ add_generation_prompt=add_generation_prompt,
128
+ overrides=overrides,
129
+ no_system_prompt=no_system_prompt,
130
+ )
131
+
132
+ conversation = []
133
+ for m in messages:
134
+ message = {}
135
+ if m["from"] == "human":
136
+ message["role"] = "user"
137
+ elif m["from"] == "gpt":
138
+ message["role"] = "assistant"
139
+ else:
140
+ raise ValueError(f"Unexpected sender '{m['from']}' in conversation entry.")
141
+
142
+ message["content"] = m["value"]
143
+ if overrides is not None and m["from"] in overrides:
144
+ message["content"] = overrides[m["from"]]
145
+ conversation.append(message)
146
+
147
+ if no_system_prompt:
148
+ conversation = [{"role": "system", "content": ""}] + conversation
149
+
150
+ text = tokenizer.apply_chat_template(
151
+ conversation,
152
+ add_generation_prompt=add_generation_prompt,
153
+ tokenize=False,
154
+ )
155
+ return tokenizer_image_token(text, tokenizer, return_tensors="pt")
156
+
157
+
158
+ def infer_stop_tokens(tokenizer: transformers.PreTrainedTokenizer) -> List[str]:
159
+ _maybe_add_sentinel_token(tokenizer)
160
+ template = tokenize_conversation(DUMMY_CONVERSATION, tokenizer, overrides={"gpt": SENTINEL_TOKEN})
161
+
162
+ stop_tokens = {tokenizer.eos_token}
163
+ for k in range(template.size(0) - 1):
164
+ if template[k] == tokenizer.sentinel_token_id:
165
+ stop_token = tokenizer.decode(template[k + 1])
166
+ stop_tokens.add(stop_token)
167
+ return list(stop_tokens)
168
+
169
+
170
+ def context_length_extension(config):
171
+ orig_ctx_len = getattr(config, "max_position_embeddings", None)
172
+ model_max_length = getattr(config, "model_max_length", None)
173
+ if orig_ctx_len and model_max_length > orig_ctx_len:
174
+ print(f"Scaling RoPE from {orig_ctx_len} to {model_max_length}")
175
+ scaling_factor = float(math.ceil(model_max_length / orig_ctx_len))
176
+ config.rope_scaling = {"type": "linear", "factor": scaling_factor}
177
+ return config
178
+
179
+
180
+ def build_llm_and_tokenizer(
181
+ model_name_or_path: str,
182
+ config: PretrainedConfig,
183
+ attn_implementation=None,
184
+ model_max_length=None,
185
+ *args,
186
+ **kwargs,
187
+ ) -> Tuple[PreTrainedModel, PreTrainedTokenizer]:
188
+ # print(model_name_or_path)
189
+ llm_cfg = AutoConfig.from_pretrained(model_name_or_path)
190
+ llm_cfg._attn_implementation = attn_implementation
191
+ llm_cfg.model_max_length = model_max_length
192
+ if model_max_length is not None:
193
+ context_length_extension(llm_cfg)
194
+
195
+ # Quantization related
196
+ quantization_restore_from_checkpoint = False
197
+
198
+ if quantization_restore_from_checkpoint:
199
+ fp8_model_name_or_path = kwargs.pop("fp8_llm_cfg", None)
200
+
201
+ llm = AutoModelForCausalLM.from_pretrained(
202
+ fp8_model_name_or_path, config=llm_cfg, torch_dtype=eval(config.model_dtype), *args, **kwargs
203
+ )
204
+ else:
205
+ llm = AutoModelForCausalLM.from_pretrained(
206
+ model_name_or_path, config=llm_cfg, torch_dtype=eval(config.model_dtype), *args, **kwargs
207
+ )
208
+ # NOTE(ligeng): not sure whether it affects the training
209
+ # packing.patch(llm)
210
+
211
+ # Locate the tokenizer.
212
+ llm_path = model_name_or_path
213
+ if not has_tokenizer(llm_path):
214
+ llm_path = osp.join(llm_path, "llm")
215
+ if not has_tokenizer(llm_path):
216
+ raise ValueError(f"Cannot find tokenizer in {llm_path}.")
217
+
218
+ tokenizer = AutoTokenizer.from_pretrained(llm_path, padding_side="right", use_fast=True, legacy=False)
219
+ if model_max_length is not None:
220
+ tokenizer.model_max_length = model_max_length
221
+
222
+ # Load chat template if specified.
223
+ if getattr(config, "chat_template", None) is not None:
224
+ print(f"Using chat template: {config.chat_template}")
225
+ fpath = os.path.join(os.path.dirname(__file__), "chat_templates", f"{config.chat_template}.jinja")
226
+ if not os.path.exists(fpath):
227
+ fpath = os.path.join(os.path.dirname(model_name_or_path), f"{config.chat_template}.jinja")
228
+ with open(fpath) as fd:
229
+ chat_template = fd.read()
230
+ tokenizer.chat_template = chat_template.replace(" ", "").replace("\n", "")
231
+
232
+ # NOTE(ligeng): disable temporarily; let's see whether any bugs are introduced
233
+ # Set stop tokens for the tokenizer
234
+ tokenizer.stop_tokens = infer_stop_tokens(tokenizer)
235
+ tokenizer.stop_token_ids = tokenizer.convert_tokens_to_ids(tokenizer.stop_tokens)
236
+
237
+ # Add media tokens to the tokenizer
238
+ tokenizer.media_tokens = MEDIA_TOKENS
239
+ tokenizer.media_token_ids = {}
240
+ for name, token in MEDIA_TOKENS.items():
241
+ tokenizer.add_tokens([token], special_tokens=True)
242
+ tokenizer.media_token_ids[name] = tokenizer.convert_tokens_to_ids(token)
243
+
244
+ # TODO(ligeng): is this necessary for llava?
245
+ config.hidden_size = llm.config.hidden_size
246
+ return llm, tokenizer
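A hedged usage sketch for the helpers above. It assumes the `llm/` sub-folder and `vicuna_v1.jinja` from this upload are available locally and that the chat template has been attached to the tokenizer, as `build_llm_and_tokenizer` does when `config.chat_template` is set.

```python
# Illustrative only: exercises tokenize_conversation / infer_stop_tokens from builder.py above.
from transformers import AutoTokenizer
from builder import infer_stop_tokens, tokenize_conversation  # assumes this file is importable

tokenizer = AutoTokenizer.from_pretrained("./llm", padding_side="right", use_fast=True, legacy=False)
with open("vicuna_v1.jinja") as fd:                 # template uploaded in this commit
    tokenizer.chat_template = fd.read()

messages = [{"from": "human", "value": "Describe the scene."}]
input_ids = tokenize_conversation(messages, tokenizer, add_generation_prompt=True)
print(input_ids.shape)
print(infer_stop_tokens(tokenizer))                 # always includes the eos token, e.g. "</s>"
```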
config.json ADDED
@@ -0,0 +1,260 @@
1
+ {
2
+ "_name_or_path": "./vlm",
3
+ "architectures": [
4
+ "VILAForCasualLM"
5
+ ],
6
+ "drop_path_rate": 0.0,
7
+ "hidden_size": 5120,
8
+ "image_aspect_ratio": "resize",
9
+ "interpolate_mode": "linear",
10
+ "llm_cfg": {
11
+ "_name_or_path": "./llm",
12
+ "add_cross_attention": false,
13
+ "architectures": [
14
+ "LlamaForCausalLM"
15
+ ],
16
+ "attention_bias": false,
17
+ "attention_dropout": 0.0,
18
+ "bad_words_ids": null,
19
+ "begin_suppress_tokens": null,
20
+ "bos_token_id": 1,
21
+ "chunk_size_feed_forward": 0,
22
+ "cross_attention_hidden_size": null,
23
+ "decoder_start_token_id": null,
24
+ "diversity_penalty": 0.0,
25
+ "do_sample": false,
26
+ "early_stopping": false,
27
+ "encoder_no_repeat_ngram_size": 0,
28
+ "eos_token_id": 2,
29
+ "exponential_decay_length_penalty": null,
30
+ "finetuning_task": null,
31
+ "forced_bos_token_id": null,
32
+ "forced_eos_token_id": null,
33
+ "hidden_act": "silu",
34
+ "hidden_size": 5120,
35
+ "id2label": {
36
+ "0": "LABEL_0",
37
+ "1": "LABEL_1"
38
+ },
39
+ "initializer_range": 0.02,
40
+ "intermediate_size": 13824,
41
+ "is_decoder": false,
42
+ "is_encoder_decoder": false,
43
+ "label2id": {
44
+ "LABEL_0": 0,
45
+ "LABEL_1": 1
46
+ },
47
+ "length_penalty": 1.0,
48
+ "max_length": 4096,
49
+ "max_position_embeddings": 4096,
50
+ "min_length": 0,
51
+ "model_max_length": 4096,
52
+ "model_type": "llama",
53
+ "no_repeat_ngram_size": 0,
54
+ "num_attention_heads": 40,
55
+ "num_beam_groups": 1,
56
+ "num_beams": 1,
57
+ "num_hidden_layers": 40,
58
+ "num_key_value_heads": 40,
59
+ "num_return_sequences": 1,
60
+ "output_attentions": false,
61
+ "output_hidden_states": false,
62
+ "output_scores": false,
63
+ "pad_token_id": 0,
64
+ "prefix": null,
65
+ "pretraining_tp": 1,
66
+ "problem_type": null,
67
+ "pruned_heads": {},
68
+ "remove_invalid_values": false,
69
+ "repetition_penalty": 1.0,
70
+ "return_dict": true,
71
+ "return_dict_in_generate": false,
72
+ "rms_norm_eps": 1e-05,
73
+ "rope_scaling": null,
74
+ "rope_theta": 10000.0,
75
+ "sep_token_id": null,
76
+ "suppress_tokens": null,
77
+ "task_specific_params": null,
78
+ "temperature": 1.0,
79
+ "tf_legacy_loss": false,
80
+ "tie_encoder_decoder": false,
81
+ "tie_word_embeddings": false,
82
+ "tokenizer_class": null,
83
+ "tokenizer_model_max_length": 4096,
84
+ "tokenizer_padding_side": "right",
85
+ "top_k": 50,
86
+ "top_p": 1.0,
87
+ "torch_dtype": "bfloat16",
88
+ "torchscript": false,
89
+ "typical_p": 1.0,
90
+ "use_bfloat16": false,
91
+ "use_cache": true,
92
+ "vocab_size": 32000
93
+ },
94
+ "mm_hidden_size": 1152,
95
+ "mm_projector_cfg": {
96
+ "_name_or_path": "./mm_projector",
97
+ "add_cross_attention": false,
98
+ "architectures": [
99
+ "MultimodalProjector"
100
+ ],
101
+ "bad_words_ids": null,
102
+ "begin_suppress_tokens": null,
103
+ "bos_token_id": null,
104
+ "chunk_size_feed_forward": 0,
105
+ "cross_attention_hidden_size": null,
106
+ "decoder_start_token_id": null,
107
+ "diversity_penalty": 0.0,
108
+ "do_sample": false,
109
+ "early_stopping": false,
110
+ "encoder_no_repeat_ngram_size": 0,
111
+ "eos_token_id": null,
112
+ "exponential_decay_length_penalty": null,
113
+ "finetuning_task": null,
114
+ "forced_bos_token_id": null,
115
+ "forced_eos_token_id": null,
116
+ "id2label": {
117
+ "0": "LABEL_0",
118
+ "1": "LABEL_1"
119
+ },
120
+ "is_decoder": false,
121
+ "is_encoder_decoder": false,
122
+ "label2id": {
123
+ "LABEL_0": 0,
124
+ "LABEL_1": 1
125
+ },
126
+ "length_penalty": 1.0,
127
+ "max_length": 20,
128
+ "min_length": 0,
129
+ "mm_projector_type": "mlp_downsample",
130
+ "model_type": "v2l_projector",
131
+ "no_repeat_ngram_size": 0,
132
+ "num_beam_groups": 1,
133
+ "num_beams": 1,
134
+ "num_return_sequences": 1,
135
+ "output_attentions": false,
136
+ "output_hidden_states": false,
137
+ "output_scores": false,
138
+ "pad_token_id": null,
139
+ "prefix": null,
140
+ "problem_type": null,
141
+ "pruned_heads": {},
142
+ "remove_invalid_values": false,
143
+ "repetition_penalty": 1.0,
144
+ "return_dict": true,
145
+ "return_dict_in_generate": false,
146
+ "sep_token_id": null,
147
+ "suppress_tokens": null,
148
+ "task_specific_params": null,
149
+ "temperature": 1.0,
150
+ "tf_legacy_loss": false,
151
+ "tie_encoder_decoder": false,
152
+ "tie_word_embeddings": true,
153
+ "tokenizer_class": null,
154
+ "top_k": 50,
155
+ "top_p": 1.0,
156
+ "torch_dtype": "bfloat16",
157
+ "torchscript": false,
158
+ "typical_p": 1.0,
159
+ "use_bfloat16": false
160
+ },
161
+ "mm_projector_lr": null,
162
+ "mm_use_im_patch_token": false,
163
+ "mm_use_im_start_end": false,
164
+ "mm_vision_select_feature": "cls_patch",
165
+ "mm_vision_select_layer": -2,
166
+ "model_dtype": "torch.bfloat16",
167
+ "model_type": "vila",
168
+ "num_video_frames": 8,
169
+ "resume_path": "./vlm",
170
+ "s2": false,
171
+ "s2_max_split_size": 336,
172
+ "s2_scales": "336,672,1008",
173
+ "transformers_version": "4.36.2",
174
+ "tune_language_model": true,
175
+ "tune_mm_projector": true,
176
+ "tune_vision_tower": true,
177
+ "vision_resolution": -1,
178
+ "vision_tower_cfg": {
179
+ "_name_or_path": "./vision_tower",
180
+ "add_cross_attention": false,
181
+ "architectures": [
182
+ "SiglipVisionModel"
183
+ ],
184
+ "attention_dropout": 0.0,
185
+ "bad_words_ids": null,
186
+ "begin_suppress_tokens": null,
187
+ "bos_token_id": null,
188
+ "chunk_size_feed_forward": 0,
189
+ "cross_attention_hidden_size": null,
190
+ "decoder_start_token_id": null,
191
+ "diversity_penalty": 0.0,
192
+ "do_sample": false,
193
+ "early_stopping": false,
194
+ "encoder_no_repeat_ngram_size": 0,
195
+ "eos_token_id": null,
196
+ "exponential_decay_length_penalty": null,
197
+ "finetuning_task": null,
198
+ "forced_bos_token_id": null,
199
+ "forced_eos_token_id": null,
200
+ "hidden_act": "gelu_pytorch_tanh",
201
+ "hidden_size": 1152,
202
+ "id2label": {
203
+ "0": "LABEL_0",
204
+ "1": "LABEL_1"
205
+ },
206
+ "image_size": 384,
207
+ "intermediate_size": 4304,
208
+ "is_decoder": false,
209
+ "is_encoder_decoder": false,
210
+ "label2id": {
211
+ "LABEL_0": 0,
212
+ "LABEL_1": 1
213
+ },
214
+ "layer_norm_eps": 1e-06,
215
+ "length_penalty": 1.0,
216
+ "max_length": 20,
217
+ "min_length": 0,
218
+ "model_type": "siglip_vision_model",
219
+ "no_repeat_ngram_size": 0,
220
+ "num_attention_heads": 16,
221
+ "num_beam_groups": 1,
222
+ "num_beams": 1,
223
+ "num_channels": 3,
224
+ "num_hidden_layers": 27,
225
+ "num_return_sequences": 1,
226
+ "output_attentions": false,
227
+ "output_hidden_states": false,
228
+ "output_scores": false,
229
+ "pad_token_id": null,
230
+ "patch_size": 14,
231
+ "prefix": null,
232
+ "problem_type": null,
233
+ "pruned_heads": {},
234
+ "remove_invalid_values": false,
235
+ "repetition_penalty": 1.0,
236
+ "return_dict": true,
237
+ "return_dict_in_generate": false,
238
+ "sep_token_id": null,
239
+ "suppress_tokens": null,
240
+ "task_specific_params": null,
241
+ "temperature": 1.0,
242
+ "tf_legacy_loss": false,
243
+ "tie_encoder_decoder": false,
244
+ "tie_word_embeddings": true,
245
+ "tokenizer_class": null,
246
+ "top_k": 50,
247
+ "top_p": 1.0,
248
+ "torch_dtype": "bfloat16",
249
+ "torchscript": false,
250
+ "typical_p": 1.0,
251
+ "use_bfloat16": false
252
+ },
253
+ "version": "2.0",
254
+ "auto_map": {
255
+ "AutoConfig": "modeling_vila.VILAConfig",
256
+ "AutoModel": "modeling_vila.VILAForCasualLM",
257
+ "AutoModelForCausalLM": "modeling_vila.VILAForCasualLM"
258
+ },
259
+ "chat_template": "vicuna_v1"
260
+ }
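The top-level config nests the three component configs (`llm_cfg`, `mm_projector_cfg`, `vision_tower_cfg`); a small sketch of inspecting them directly from the JSON uploaded above:

```python
# Inspect the nested component configs in config.json (printed values come from this upload).
import json

with open("config.json") as f:
    cfg = json.load(f)

print(cfg["model_type"], cfg["chat_template"])                       # vila vicuna_v1
print(cfg["llm_cfg"]["model_type"], cfg["llm_cfg"]["hidden_size"])   # llama 5120
print(cfg["vision_tower_cfg"]["model_type"], cfg["mm_hidden_size"])  # siglip_vision_model 1152
print(cfg["mm_projector_cfg"]["mm_projector_type"])                  # mlp_downsample
```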
configuration_vila.py ADDED
@@ -0,0 +1,93 @@
1
+ import json
2
+ import math
3
+ import os
4
+ import os.path as osp
5
+ from copy import deepcopy
6
+ from threading import Thread
7
+ from typing import List, Optional
8
+
9
+ import torch
10
+ import torchvision
11
+ from PIL import Image
12
+ from transformers import (
13
+ AutoProcessor,
14
+ PretrainedConfig,
15
+ PreTrainedModel,
16
+ Qwen2Config,
17
+ Qwen2ForCausalLM,
18
+ Qwen2PreTrainedModel,
19
+ TextIteratorStreamer,
20
+ )
21
+
22
+
23
+ class VILAConfig(PretrainedConfig):
24
+ model_type = "vila"
25
+ keys_to_ignore_at_inference = ["past_key_values"]
26
+
27
+ def __init__(
28
+ self,
29
+ llm_cfg=None,
30
+ vision_tower_cfg=None,
31
+ mm_projector_cfg=None,
32
+ architectures=None,
33
+ resume_path=None,
34
+ hidden_size=None,
35
+ mm_hidden_size=None,
36
+ image_aspect_ratio=None,
37
+ num_video_frames=None,
38
+ fps=None,
39
+ mm_vision_select_layer=None,
40
+ mm_vision_select_feature=None,
41
+ mm_use_im_start_end=False,
42
+ mm_use_im_patch_token=False,
43
+ mm_projector_lr=None,
44
+ vision_tower_lr=None,
45
+ vision_resolution=None,
46
+ interpolate_mode=None,
47
+ s2=None,
48
+ dynamic_s2=None,
49
+ s2_scales=None,
50
+ s2_max_split_size=None,
51
+ s2_resize_output_to_scale_idx=0,
52
+ min_tiles: Optional[int] = 1,
53
+ max_tiles: Optional[int] = 12,
54
+ num_time_tokens=None,
55
+ time_token_format=None,
56
+ image_encoder: str = '{"_target_": "llava.model.encoders.BasicImageEncoder"}',
57
+ video_encoder: str = '{"_target_": "llava.model.encoders.BasicVideoEncoder"}',
58
+ **kwargs,
59
+ ):
60
+ super().__init__()
61
+ self.architectures = architectures
62
+ self.llm_cfg = llm_cfg
63
+ self.vision_tower_cfg = vision_tower_cfg
64
+ self.mm_projector_cfg = mm_projector_cfg
65
+ self.resume_path = resume_path
66
+
67
+ self.hidden_size = hidden_size
68
+ self.mm_hidden_size = mm_hidden_size
69
+ self.image_aspect_ratio = image_aspect_ratio
70
+ self.num_video_frames = num_video_frames
71
+ self.fps = fps
72
+ self.mm_vision_select_layer = mm_vision_select_layer
73
+ self.mm_vision_select_feature = mm_vision_select_feature
74
+ self.mm_use_im_start_end = mm_use_im_start_end
75
+ self.mm_use_im_patch_token = mm_use_im_patch_token
76
+ self.mm_projector_lr = mm_projector_lr
77
+ self.vision_tower_lr = vision_tower_lr
78
+ self.vision_resolution = vision_resolution
79
+ self.interpolate_mode = interpolate_mode
80
+ self.s2 = s2
81
+ self.dynamic_s2 = dynamic_s2
82
+ self.s2_scales = s2_scales
83
+ self.s2_max_split_size = s2_max_split_size
84
+ self.s2_resize_output_to_scale_idx = s2_resize_output_to_scale_idx
85
+ self.min_tiles = min_tiles
86
+ self.max_tiles = max_tiles
87
+ self.num_time_tokens = num_time_tokens
88
+ self.time_token_format = time_token_format
89
+
90
+ self.image_encoder = image_encoder
91
+ self.video_encoder = video_encoder
92
+
93
+ super().__init__(**kwargs)
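A minimal construction sketch for `VILAConfig`. Field values below are illustrative only; importing this module also requires `torch`, `torchvision`, Pillow, and a `transformers` version providing `Qwen2Config`, because of its top-level imports.

```python
# Illustrative only: constructs a VILAConfig with a few of the fields seen in config.json.
from configuration_vila import VILAConfig

config = VILAConfig(
    hidden_size=5120,
    mm_hidden_size=1152,
    image_aspect_ratio="resize",
    num_video_frames=8,
    mm_vision_select_layer=-2,
)
print(config.model_type)        # "vila"
print(config.num_video_frames)  # 8
```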
constants.py ADDED
@@ -0,0 +1,43 @@
1
+ # Copyright 2024 NVIDIA CORPORATION & AFFILIATES
2
+ #
3
+ # Licensed under the Apache License, Version 2.0 (the "License");
4
+ # you may not use this file except in compliance with the License.
5
+ # You may obtain a copy of the License at
6
+ #
7
+ # http://www.apache.org/licenses/LICENSE-2.0
8
+ #
9
+ # Unless required by applicable law or agreed to in writing, software
10
+ # distributed under the License is distributed on an "AS IS" BASIS,
11
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12
+ # See the License for the specific language governing permissions and
13
+ # limitations under the License.
14
+ #
15
+ # SPDX-License-Identifier: Apache-2.0
16
+
17
+ CONTROLLER_HEART_BEAT_EXPIRATION = 30
18
+ WORKER_HEART_BEAT_INTERVAL = 15
19
+
20
+ LOGDIR = "."
21
+
22
+ # Model Constants
23
+ IGNORE_INDEX = -100
24
+ DEFAULT_IMAGE_TOKEN = "<image>"
25
+
26
+ SENTINEL_TOKEN = "<vila/sentinel>"
27
+ MEDIA_TOKENS = {
28
+ "image": "<image>",
29
+ "video": "<vila/video>",
30
+ }
31
+ # <image> <vila/video> <vila/sentinel>
32
+ # TODO(ligeng): need to discuss with Zhijian for the following tokens for different models.
33
+ """
34
+ 151643: AddedToken("<|endoftext|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
35
+ 151644: AddedToken("<|im_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
36
+ 151645: AddedToken("<|im_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
37
+ 151646: AddedToken("[BOS]", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
38
+ 151647: AddedToken("[PAD]", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
39
+ 151648: AddedToken("<vila/sentinel>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
40
+ 151649: AddedToken("<image>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
41
+ 151650: AddedToken("<vila/video>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
42
+ """
43
+ NUM_EXTRA_TOKENS = 8
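A tiny sketch of how these constants are typically used when assembling a multimodal prompt (illustrative only):

```python
# Illustrative use of the constants above.
from constants import DEFAULT_IMAGE_TOKEN, IGNORE_INDEX, MEDIA_TOKENS

prompt = f"{DEFAULT_IMAGE_TOKEN}\nDescribe this image."
assert MEDIA_TOKENS["image"] == DEFAULT_IMAGE_TOKEN == "<image>"
print(prompt)
print(IGNORE_INDEX)  # -100: label value masked out of the LM loss
```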
conversation.py ADDED
@@ -0,0 +1,191 @@
1
+ # Copyright 2024 NVIDIA CORPORATION & AFFILIATES
2
+ #
3
+ # Licensed under the Apache License, Version 2.0 (the "License");
4
+ # you may not use this file except in compliance with the License.
5
+ # You may obtain a copy of the License at
6
+ #
7
+ # http://www.apache.org/licenses/LICENSE-2.0
8
+ #
9
+ # Unless required by applicable law or agreed to in writing, software
10
+ # distributed under the License is distributed on an "AS IS" BASIS,
11
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12
+ # See the License for the specific language governing permissions and
13
+ # limitations under the License.
14
+ #
15
+ # SPDX-License-Identifier: Apache-2.0
16
+ # This file is modified from https://github.com/haotian-liu/LLaVA/
17
+
18
+ import dataclasses
19
+ from enum import Enum, auto
20
+ from typing import List
21
+
22
+ # from llava.utils.logging import logger
23
+
24
+
25
+ class SeparatorStyle(Enum):
26
+ """Different separator style."""
27
+
28
+ AUTO = auto()
29
+ TWO = auto()
30
+ MPT = auto()
31
+ PLAIN = auto()
32
+ LLAMA_3 = auto()
33
+
34
+
35
+ @dataclasses.dataclass
36
+ class Conversation:
37
+ """A class that keeps all conversation history."""
38
+
39
+ system: str
40
+ roles: List[str]
41
+ messages: List[List[str]]
42
+ sep_style: SeparatorStyle = SeparatorStyle.AUTO
43
+ sep: str = "###"
44
+ sep2: str = None
45
+ version: str = "Unknown"
46
+
47
+ def get_prompt(self):
48
+ messages = self.messages
49
+ if len(messages) > 0 and type(messages[0][1]) is tuple:
50
+ messages = self.messages.copy()
51
+ init_role, init_msg = messages[0].copy()
52
+ init_msg = init_msg[0].replace("<image>", "").strip()
53
+ messages[0] = (init_role, "<image>\n" + init_msg)
54
+
55
+ if self.sep_style == SeparatorStyle.TWO:
56
+ seps = [self.sep, self.sep2]
57
+ ret = self.system + seps[0]
58
+ for i, (role, message) in enumerate(messages):
59
+ if message:
60
+ if type(message) is tuple:
61
+ message, _, _ = message
62
+ ret += role + ": " + message + seps[i % 2]
63
+ else:
64
+ ret += role + ":"
65
+ elif self.sep_style == SeparatorStyle.LLAMA_3:
66
+ ret = self.system + self.sep
67
+ for rid, (role, message) in enumerate(messages):
68
+ if message:
69
+ if type(message) is tuple:
70
+ message = message[0]
71
+ sep = self.sep if rid < len(messages) - 1 else self.sep2
72
+ ret += role + message + sep
73
+ else:
74
+ ret += role
75
+ elif self.sep_style == SeparatorStyle.MPT:
76
+ ret = self.system + self.sep
77
+ for role, message in messages:
78
+ if message:
79
+ if type(message) is tuple:
80
+ message, _, _ = message
81
+ ret += role + message + self.sep
82
+ else:
83
+ ret += role
84
+ elif self.sep_style == SeparatorStyle.PLAIN:
85
+ seps = [self.sep, self.sep2]
86
+ ret = self.system
87
+ for i, (role, message) in enumerate(messages):
88
+ if message:
89
+ if type(message) is tuple:
90
+ message, _, _ = message
91
+ ret += message + seps[i % 2]
92
+ else:
93
+ ret += ""
94
+ else:
95
+ raise ValueError(f"Invalid style: {self.sep_style}")
96
+
97
+ return ret
98
+
99
+ def append_message(self, role, message):
100
+ self.messages.append([role, message])
101
+
102
+ def copy(self):
103
+ return Conversation(
104
+ system=self.system,
105
+ roles=self.roles,
106
+ messages=[[x, y] for x, y in self.messages],
107
+ sep_style=self.sep_style,
108
+ sep=self.sep,
109
+ sep2=self.sep2,
110
+ version=self.version,
111
+ )
112
+
113
+
114
+ conv_auto = Conversation(
115
+ system="",
116
+ roles=("", ""),
117
+ messages=(),
118
+ sep_style=SeparatorStyle.AUTO,
119
+ sep="\n",
120
+ )
121
+
122
+ conv_vicuna_v1 = Conversation(
123
+ system="A chat between a curious user and an artificial intelligence assistant. "
124
+ "The assistant gives helpful, detailed, and polite answers to the user's questions.",
125
+ roles=("USER", "ASSISTANT"),
126
+ version="v1",
127
+ messages=(),
128
+ sep_style=SeparatorStyle.TWO,
129
+ sep=" ",
130
+ sep2="</s>",
131
+ )
132
+
133
+ conv_llava_plain = Conversation(
134
+ system="",
135
+ roles=("", ""),
136
+ messages=(),
137
+ sep_style=SeparatorStyle.PLAIN,
138
+ sep="\n",
139
+ )
140
+
141
+ hermes_2 = Conversation(
142
+ system="<|im_start|>system\nAnswer the questions.",
143
+ roles=("<|im_start|>user\n", "<|im_start|>assistant\n"),
144
+ sep_style=SeparatorStyle.MPT,
145
+ sep="<|im_end|>",
146
+ messages=(),
147
+ version="hermes-2",
148
+ )
149
+
150
+ # Template added by Yukang. Note (kentang-mit@): sep is <|eot_id|> for official template.
151
+ llama_3_chat = Conversation(
152
+ system="<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n\nYou are a helpful language and vision assistant. "
153
+ "You are able to understand the visual content that the user provides, "
154
+ "and assist the user with a variety of tasks using natural language.",
155
+ roles=("<|start_header_id|>user<|end_header_id|>\n\n", "<|start_header_id|>assistant<|end_header_id|>\n\n"),
156
+ version="llama_v3",
157
+ messages=(),
158
+ sep_style=SeparatorStyle.LLAMA_3,
159
+ sep="<|eot_id|>",
160
+ sep2="<|end_of_text|>",
161
+ )
162
+
163
+
164
+ default_conversation = conv_auto
165
+ conv_templates = {
166
+ "auto": conv_auto,
167
+ "hermes-2": hermes_2,
168
+ "llama_3": llama_3_chat,
169
+ "v1": conv_vicuna_v1,
170
+ "vicuna_v1": conv_vicuna_v1,
171
+ "plain": conv_llava_plain,
172
+ }
173
+
174
+
175
+ CONVERSATION_MODE_MAPPING = {
176
+ "vila1.5-3b": "vicuna_v1",
177
+ "vila1.5-8b": "llama_3",
178
+ "vila1.5-13b": "vicuna_v1",
179
+ "vila1.5-40b": "hermes-2",
180
+ "llama-3": "llama_3",
181
+ "llama3": "llama_3",
182
+ }
183
+
184
+
185
+ def auto_set_conversation_mode(model_name_or_path: str) -> None:
186
+ global default_conversation
187
+ for k, v in CONVERSATION_MODE_MAPPING.items():
188
+ if k in model_name_or_path.lower():
189
+ print(f"Setting conversation mode to `{v}` based on model name/path `{model_name_or_path}`.")
190
+ default_conversation = conv_templates[v]
191
+ return
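A short sketch of building a prompt with the `vicuna_v1` template defined above (assumes `conversation.py` is importable):

```python
# Build a vicuna_v1-style prompt with the Conversation dataclass above (illustrative only).
from conversation import conv_templates

conv = conv_templates["vicuna_v1"].copy()
conv.append_message(conv.roles[0], "<image>\nWhat is shown in this image?")
conv.append_message(conv.roles[1], None)  # leave the assistant turn open for generation
print(conv.get_prompt())
# "A chat between a curious user and an artificial intelligence assistant. ...
#  USER: <image>\nWhat is shown in this image? ASSISTANT:"
```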
llm/config.json ADDED
@@ -0,0 +1,33 @@
1
+ {
2
+ "_name_or_path": "./checkpoints/vila-siglip-vicuna-13b-r313/llm",
3
+ "architectures": [
4
+ "LlamaForCausalLM"
5
+ ],
6
+ "attention_bias": false,
7
+ "attention_dropout": 0.0,
8
+ "bos_token_id": 1,
9
+ "eos_token_id": 2,
10
+ "hidden_act": "silu",
11
+ "hidden_size": 5120,
12
+ "initializer_range": 0.02,
13
+ "intermediate_size": 13824,
14
+ "max_length": 4096,
15
+ "max_position_embeddings": 4096,
16
+ "model_max_length": 4096,
17
+ "model_type": "llama",
18
+ "num_attention_heads": 40,
19
+ "num_hidden_layers": 40,
20
+ "num_key_value_heads": 40,
21
+ "pad_token_id": 0,
22
+ "pretraining_tp": 1,
23
+ "rms_norm_eps": 1e-05,
24
+ "rope_scaling": null,
25
+ "rope_theta": 10000.0,
26
+ "tie_word_embeddings": false,
27
+ "tokenizer_model_max_length": 4096,
28
+ "tokenizer_padding_side": "right",
29
+ "torch_dtype": "bfloat16",
30
+ "transformers_version": "4.36.2",
31
+ "use_cache": true,
32
+ "vocab_size": 32000
33
+ }
llm/generation_config.json ADDED
@@ -0,0 +1,11 @@
1
+ {
2
+ "_from_model_config": true,
3
+ "bos_token_id": 1,
4
+ "do_sample": true,
5
+ "eos_token_id": 2,
6
+ "max_length": 4096,
7
+ "pad_token_id": 0,
8
+ "temperature": 0.9,
9
+ "top_p": 0.6,
10
+ "transformers_version": "4.36.2"
11
+ }
llm/model-00001-of-00006.safetensors ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:934ec39298065edfe9d43d994055760ec01b92452c0d450f6f41108c987b4d40
3
+ size 4978265800
llm/model-00002-of-00006.safetensors ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:e56179048435c68fd57261f99e3a2f4a460317a19db15deec56a60b24b2b0c7f
3
+ size 4970422232
llm/model-00003-of-00006.safetensors ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:ff2c4e769e2fa622e12036b604d651dadf4efbcbee1b181fa4829179ea17bc5a
3
+ size 4970422256
llm/model-00004-of-00006.safetensors ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:24207f347a7b5775d560702807ca7eec3398f7b1be13a1ebce173add89134816
3
+ size 4933701504
llm/model-00005-of-00006.safetensors ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:45501a52d9daae80250bf7f6b9be5c8be4f4e900a5b03324bee75387623e9a74
3
+ size 4933722216
llm/model-00006-of-00006.safetensors ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:7babff663f05e92487b426b067940f0d20863865ff284bea5c32a4251f747544
3
+ size 1245236920
llm/model.safetensors.index.json ADDED
@@ -0,0 +1,370 @@
1
+ {
2
+ "metadata": {
3
+ "total_size": 26031728640
4
+ },
5
+ "weight_map": {
6
+ "lm_head.weight": "model-00006-of-00006.safetensors",
7
+ "model.embed_tokens.weight": "model-00001-of-00006.safetensors",
8
+ "model.layers.0.input_layernorm.weight": "model-00001-of-00006.safetensors",
9
+ "model.layers.0.mlp.down_proj.weight": "model-00001-of-00006.safetensors",
10
+ "model.layers.0.mlp.gate_proj.weight": "model-00001-of-00006.safetensors",
11
+ "model.layers.0.mlp.up_proj.weight": "model-00001-of-00006.safetensors",
12
+ "model.layers.0.post_attention_layernorm.weight": "model-00001-of-00006.safetensors",
13
+ "model.layers.0.self_attn.k_proj.weight": "model-00001-of-00006.safetensors",
14
+ "model.layers.0.self_attn.o_proj.weight": "model-00001-of-00006.safetensors",
15
+ "model.layers.0.self_attn.q_proj.weight": "model-00001-of-00006.safetensors",
16
+ "model.layers.0.self_attn.v_proj.weight": "model-00001-of-00006.safetensors",
17
+ "model.layers.1.input_layernorm.weight": "model-00001-of-00006.safetensors",
18
+ "model.layers.1.mlp.down_proj.weight": "model-00001-of-00006.safetensors",
19
+ "model.layers.1.mlp.gate_proj.weight": "model-00001-of-00006.safetensors",
20
+ "model.layers.1.mlp.up_proj.weight": "model-00001-of-00006.safetensors",
21
+ "model.layers.1.post_attention_layernorm.weight": "model-00001-of-00006.safetensors",
22
+ "model.layers.1.self_attn.k_proj.weight": "model-00001-of-00006.safetensors",
23
+ "model.layers.1.self_attn.o_proj.weight": "model-00001-of-00006.safetensors",
24
+ "model.layers.1.self_attn.q_proj.weight": "model-00001-of-00006.safetensors",
25
+ "model.layers.1.self_attn.v_proj.weight": "model-00001-of-00006.safetensors",
26
+ "model.layers.10.input_layernorm.weight": "model-00002-of-00006.safetensors",
27
+ "model.layers.10.mlp.down_proj.weight": "model-00002-of-00006.safetensors",
28
+ "model.layers.10.mlp.gate_proj.weight": "model-00002-of-00006.safetensors",
29
+ "model.layers.10.mlp.up_proj.weight": "model-00002-of-00006.safetensors",
30
+ "model.layers.10.post_attention_layernorm.weight": "model-00002-of-00006.safetensors",
31
+ "model.layers.10.self_attn.k_proj.weight": "model-00002-of-00006.safetensors",
32
+ "model.layers.10.self_attn.o_proj.weight": "model-00002-of-00006.safetensors",
33
+ "model.layers.10.self_attn.q_proj.weight": "model-00002-of-00006.safetensors",
34
+ "model.layers.10.self_attn.v_proj.weight": "model-00002-of-00006.safetensors",
35
+ "model.layers.11.input_layernorm.weight": "model-00002-of-00006.safetensors",
36
+ "model.layers.11.mlp.down_proj.weight": "model-00002-of-00006.safetensors",
37
+ "model.layers.11.mlp.gate_proj.weight": "model-00002-of-00006.safetensors",
38
+ "model.layers.11.mlp.up_proj.weight": "model-00002-of-00006.safetensors",
39
+ "model.layers.11.post_attention_layernorm.weight": "model-00002-of-00006.safetensors",
40
+ "model.layers.11.self_attn.k_proj.weight": "model-00002-of-00006.safetensors",
41
+ "model.layers.11.self_attn.o_proj.weight": "model-00002-of-00006.safetensors",
42
+ "model.layers.11.self_attn.q_proj.weight": "model-00002-of-00006.safetensors",
43
+ "model.layers.11.self_attn.v_proj.weight": "model-00002-of-00006.safetensors",
44
+ "model.layers.12.input_layernorm.weight": "model-00002-of-00006.safetensors",
45
+ "model.layers.12.mlp.down_proj.weight": "model-00002-of-00006.safetensors",
46
+ "model.layers.12.mlp.gate_proj.weight": "model-00002-of-00006.safetensors",
47
+ "model.layers.12.mlp.up_proj.weight": "model-00002-of-00006.safetensors",
48
+ "model.layers.12.post_attention_layernorm.weight": "model-00002-of-00006.safetensors",
49
+ "model.layers.12.self_attn.k_proj.weight": "model-00002-of-00006.safetensors",
50
+ "model.layers.12.self_attn.o_proj.weight": "model-00002-of-00006.safetensors",
51
+ "model.layers.12.self_attn.q_proj.weight": "model-00002-of-00006.safetensors",
52
+ "model.layers.12.self_attn.v_proj.weight": "model-00002-of-00006.safetensors",
53
+ "model.layers.13.input_layernorm.weight": "model-00002-of-00006.safetensors",
54
+ "model.layers.13.mlp.down_proj.weight": "model-00002-of-00006.safetensors",
55
+ "model.layers.13.mlp.gate_proj.weight": "model-00002-of-00006.safetensors",
56
+ "model.layers.13.mlp.up_proj.weight": "model-00002-of-00006.safetensors",
57
+ "model.layers.13.post_attention_layernorm.weight": "model-00002-of-00006.safetensors",
58
+ "model.layers.13.self_attn.k_proj.weight": "model-00002-of-00006.safetensors",
59
+ "model.layers.13.self_attn.o_proj.weight": "model-00002-of-00006.safetensors",
60
+ "model.layers.13.self_attn.q_proj.weight": "model-00002-of-00006.safetensors",
61
+ "model.layers.13.self_attn.v_proj.weight": "model-00002-of-00006.safetensors",
62
+ "model.layers.14.input_layernorm.weight": "model-00002-of-00006.safetensors",
63
+ "model.layers.14.mlp.down_proj.weight": "model-00002-of-00006.safetensors",
64
+ "model.layers.14.mlp.gate_proj.weight": "model-00002-of-00006.safetensors",
65
+ "model.layers.14.mlp.up_proj.weight": "model-00002-of-00006.safetensors",
66
+ "model.layers.14.post_attention_layernorm.weight": "model-00002-of-00006.safetensors",
67
+ "model.layers.14.self_attn.k_proj.weight": "model-00002-of-00006.safetensors",
68
+ "model.layers.14.self_attn.o_proj.weight": "model-00002-of-00006.safetensors",
69
+ "model.layers.14.self_attn.q_proj.weight": "model-00002-of-00006.safetensors",
70
+ "model.layers.14.self_attn.v_proj.weight": "model-00002-of-00006.safetensors",
71
+ "model.layers.15.input_layernorm.weight": "model-00003-of-00006.safetensors",
72
+ "model.layers.15.mlp.down_proj.weight": "model-00003-of-00006.safetensors",
73
+ "model.layers.15.mlp.gate_proj.weight": "model-00003-of-00006.safetensors",
74
+ "model.layers.15.mlp.up_proj.weight": "model-00003-of-00006.safetensors",
75
+ "model.layers.15.post_attention_layernorm.weight": "model-00003-of-00006.safetensors",
76
+ "model.layers.15.self_attn.k_proj.weight": "model-00002-of-00006.safetensors",
77
+ "model.layers.15.self_attn.o_proj.weight": "model-00003-of-00006.safetensors",
78
+ "model.layers.15.self_attn.q_proj.weight": "model-00002-of-00006.safetensors",
79
+ "model.layers.15.self_attn.v_proj.weight": "model-00003-of-00006.safetensors",
80
+ "model.layers.16.input_layernorm.weight": "model-00003-of-00006.safetensors",
81
+ "model.layers.16.mlp.down_proj.weight": "model-00003-of-00006.safetensors",
82
+ "model.layers.16.mlp.gate_proj.weight": "model-00003-of-00006.safetensors",
83
+ "model.layers.16.mlp.up_proj.weight": "model-00003-of-00006.safetensors",
84
+ "model.layers.16.post_attention_layernorm.weight": "model-00003-of-00006.safetensors",
85
+ "model.layers.16.self_attn.k_proj.weight": "model-00003-of-00006.safetensors",
86
+ "model.layers.16.self_attn.o_proj.weight": "model-00003-of-00006.safetensors",
87
+ "model.layers.16.self_attn.q_proj.weight": "model-00003-of-00006.safetensors",
88
+ "model.layers.16.self_attn.v_proj.weight": "model-00003-of-00006.safetensors",
89
+ "model.layers.17.input_layernorm.weight": "model-00003-of-00006.safetensors",
90
+ "model.layers.17.mlp.down_proj.weight": "model-00003-of-00006.safetensors",
91
+ "model.layers.17.mlp.gate_proj.weight": "model-00003-of-00006.safetensors",
92
+ "model.layers.17.mlp.up_proj.weight": "model-00003-of-00006.safetensors",
93
+ "model.layers.17.post_attention_layernorm.weight": "model-00003-of-00006.safetensors",
94
+ "model.layers.17.self_attn.k_proj.weight": "model-00003-of-00006.safetensors",
95
+ "model.layers.17.self_attn.o_proj.weight": "model-00003-of-00006.safetensors",
96
+ "model.layers.17.self_attn.q_proj.weight": "model-00003-of-00006.safetensors",
97
+ "model.layers.17.self_attn.v_proj.weight": "model-00003-of-00006.safetensors",
98
+ "model.layers.18.input_layernorm.weight": "model-00003-of-00006.safetensors",
99
+ "model.layers.18.mlp.down_proj.weight": "model-00003-of-00006.safetensors",
100
+ "model.layers.18.mlp.gate_proj.weight": "model-00003-of-00006.safetensors",
101
+ "model.layers.18.mlp.up_proj.weight": "model-00003-of-00006.safetensors",
102
+ "model.layers.18.post_attention_layernorm.weight": "model-00003-of-00006.safetensors",
103
+ "model.layers.18.self_attn.k_proj.weight": "model-00003-of-00006.safetensors",
104
+ "model.layers.18.self_attn.o_proj.weight": "model-00003-of-00006.safetensors",
105
+ "model.layers.18.self_attn.q_proj.weight": "model-00003-of-00006.safetensors",
106
+ "model.layers.18.self_attn.v_proj.weight": "model-00003-of-00006.safetensors",
107
+ "model.layers.19.input_layernorm.weight": "model-00003-of-00006.safetensors",
108
+ "model.layers.19.mlp.down_proj.weight": "model-00003-of-00006.safetensors",
109
+ "model.layers.19.mlp.gate_proj.weight": "model-00003-of-00006.safetensors",
110
+ "model.layers.19.mlp.up_proj.weight": "model-00003-of-00006.safetensors",
111
+ "model.layers.19.post_attention_layernorm.weight": "model-00003-of-00006.safetensors",
112
+ "model.layers.19.self_attn.k_proj.weight": "model-00003-of-00006.safetensors",
113
+ "model.layers.19.self_attn.o_proj.weight": "model-00003-of-00006.safetensors",
114
+ "model.layers.19.self_attn.q_proj.weight": "model-00003-of-00006.safetensors",
115
+ "model.layers.19.self_attn.v_proj.weight": "model-00003-of-00006.safetensors",
116
+ "model.layers.2.input_layernorm.weight": "model-00001-of-00006.safetensors",
117
+ "model.layers.2.mlp.down_proj.weight": "model-00001-of-00006.safetensors",
118
+ "model.layers.2.mlp.gate_proj.weight": "model-00001-of-00006.safetensors",
119
+ "model.layers.2.mlp.up_proj.weight": "model-00001-of-00006.safetensors",
120
+ "model.layers.2.post_attention_layernorm.weight": "model-00001-of-00006.safetensors",
121
+ "model.layers.2.self_attn.k_proj.weight": "model-00001-of-00006.safetensors",
122
+ "model.layers.2.self_attn.o_proj.weight": "model-00001-of-00006.safetensors",
123
+ "model.layers.2.self_attn.q_proj.weight": "model-00001-of-00006.safetensors",
124
+ "model.layers.2.self_attn.v_proj.weight": "model-00001-of-00006.safetensors",
125
+ "model.layers.20.input_layernorm.weight": "model-00003-of-00006.safetensors",
126
+ "model.layers.20.mlp.down_proj.weight": "model-00003-of-00006.safetensors",
127
+ "model.layers.20.mlp.gate_proj.weight": "model-00003-of-00006.safetensors",
128
+ "model.layers.20.mlp.up_proj.weight": "model-00003-of-00006.safetensors",
129
+ "model.layers.20.post_attention_layernorm.weight": "model-00003-of-00006.safetensors",
130
+ "model.layers.20.self_attn.k_proj.weight": "model-00003-of-00006.safetensors",
131
+ "model.layers.20.self_attn.o_proj.weight": "model-00003-of-00006.safetensors",
132
+ "model.layers.20.self_attn.q_proj.weight": "model-00003-of-00006.safetensors",
133
+ "model.layers.20.self_attn.v_proj.weight": "model-00003-of-00006.safetensors",
134
+ "model.layers.21.input_layernorm.weight": "model-00003-of-00006.safetensors",
135
+ "model.layers.21.mlp.down_proj.weight": "model-00003-of-00006.safetensors",
136
+ "model.layers.21.mlp.gate_proj.weight": "model-00003-of-00006.safetensors",
137
+ "model.layers.21.mlp.up_proj.weight": "model-00003-of-00006.safetensors",
138
+ "model.layers.21.post_attention_layernorm.weight": "model-00003-of-00006.safetensors",
139
+ "model.layers.21.self_attn.k_proj.weight": "model-00003-of-00006.safetensors",
140
+ "model.layers.21.self_attn.o_proj.weight": "model-00003-of-00006.safetensors",
141
+ "model.layers.21.self_attn.q_proj.weight": "model-00003-of-00006.safetensors",
142
+ "model.layers.21.self_attn.v_proj.weight": "model-00003-of-00006.safetensors",
143
+ "model.layers.22.input_layernorm.weight": "model-00003-of-00006.safetensors",
144
+ "model.layers.22.mlp.down_proj.weight": "model-00003-of-00006.safetensors",
145
+ "model.layers.22.mlp.gate_proj.weight": "model-00003-of-00006.safetensors",
146
+ "model.layers.22.mlp.up_proj.weight": "model-00003-of-00006.safetensors",
147
+ "model.layers.22.post_attention_layernorm.weight": "model-00003-of-00006.safetensors",
148
+ "model.layers.22.self_attn.k_proj.weight": "model-00003-of-00006.safetensors",
149
+ "model.layers.22.self_attn.o_proj.weight": "model-00003-of-00006.safetensors",
150
+ "model.layers.22.self_attn.q_proj.weight": "model-00003-of-00006.safetensors",
151
+ "model.layers.22.self_attn.v_proj.weight": "model-00003-of-00006.safetensors",
152
+ "model.layers.23.input_layernorm.weight": "model-00004-of-00006.safetensors",
153
+ "model.layers.23.mlp.down_proj.weight": "model-00004-of-00006.safetensors",
154
+ "model.layers.23.mlp.gate_proj.weight": "model-00004-of-00006.safetensors",
155
+ "model.layers.23.mlp.up_proj.weight": "model-00004-of-00006.safetensors",
156
+ "model.layers.23.post_attention_layernorm.weight": "model-00004-of-00006.safetensors",
157
+ "model.layers.23.self_attn.k_proj.weight": "model-00004-of-00006.safetensors",
158
+ "model.layers.23.self_attn.o_proj.weight": "model-00004-of-00006.safetensors",
159
+ "model.layers.23.self_attn.q_proj.weight": "model-00004-of-00006.safetensors",
160
+ "model.layers.23.self_attn.v_proj.weight": "model-00004-of-00006.safetensors",
161
+ "model.layers.24.input_layernorm.weight": "model-00004-of-00006.safetensors",
162
+ "model.layers.24.mlp.down_proj.weight": "model-00004-of-00006.safetensors",
163
+ "model.layers.24.mlp.gate_proj.weight": "model-00004-of-00006.safetensors",
164
+ "model.layers.24.mlp.up_proj.weight": "model-00004-of-00006.safetensors",
165
+ "model.layers.24.post_attention_layernorm.weight": "model-00004-of-00006.safetensors",
166
+ "model.layers.24.self_attn.k_proj.weight": "model-00004-of-00006.safetensors",
167
+ "model.layers.24.self_attn.o_proj.weight": "model-00004-of-00006.safetensors",
168
+ "model.layers.24.self_attn.q_proj.weight": "model-00004-of-00006.safetensors",
169
+ "model.layers.24.self_attn.v_proj.weight": "model-00004-of-00006.safetensors",
170
+ "model.layers.25.input_layernorm.weight": "model-00004-of-00006.safetensors",
171
+ "model.layers.25.mlp.down_proj.weight": "model-00004-of-00006.safetensors",
172
+ "model.layers.25.mlp.gate_proj.weight": "model-00004-of-00006.safetensors",
173
+ "model.layers.25.mlp.up_proj.weight": "model-00004-of-00006.safetensors",
174
+ "model.layers.25.post_attention_layernorm.weight": "model-00004-of-00006.safetensors",
175
+ "model.layers.25.self_attn.k_proj.weight": "model-00004-of-00006.safetensors",
176
+ "model.layers.25.self_attn.o_proj.weight": "model-00004-of-00006.safetensors",
177
+ "model.layers.25.self_attn.q_proj.weight": "model-00004-of-00006.safetensors",
178
+ "model.layers.25.self_attn.v_proj.weight": "model-00004-of-00006.safetensors",
179
+ "model.layers.26.input_layernorm.weight": "model-00004-of-00006.safetensors",
180
+ "model.layers.26.mlp.down_proj.weight": "model-00004-of-00006.safetensors",
181
+ "model.layers.26.mlp.gate_proj.weight": "model-00004-of-00006.safetensors",
182
+ "model.layers.26.mlp.up_proj.weight": "model-00004-of-00006.safetensors",
183
+ "model.layers.26.post_attention_layernorm.weight": "model-00004-of-00006.safetensors",
184
+ "model.layers.26.self_attn.k_proj.weight": "model-00004-of-00006.safetensors",
185
+ "model.layers.26.self_attn.o_proj.weight": "model-00004-of-00006.safetensors",
186
+ "model.layers.26.self_attn.q_proj.weight": "model-00004-of-00006.safetensors",
187
+ "model.layers.26.self_attn.v_proj.weight": "model-00004-of-00006.safetensors",
188
+ "model.layers.27.input_layernorm.weight": "model-00004-of-00006.safetensors",
189
+ "model.layers.27.mlp.down_proj.weight": "model-00004-of-00006.safetensors",
190
+ "model.layers.27.mlp.gate_proj.weight": "model-00004-of-00006.safetensors",
191
+ "model.layers.27.mlp.up_proj.weight": "model-00004-of-00006.safetensors",
192
+ "model.layers.27.post_attention_layernorm.weight": "model-00004-of-00006.safetensors",
193
+ "model.layers.27.self_attn.k_proj.weight": "model-00004-of-00006.safetensors",
194
+ "model.layers.27.self_attn.o_proj.weight": "model-00004-of-00006.safetensors",
195
+ "model.layers.27.self_attn.q_proj.weight": "model-00004-of-00006.safetensors",
196
+ "model.layers.27.self_attn.v_proj.weight": "model-00004-of-00006.safetensors",
197
+ "model.layers.28.input_layernorm.weight": "model-00004-of-00006.safetensors",
198
+ "model.layers.28.mlp.down_proj.weight": "model-00004-of-00006.safetensors",
199
+ "model.layers.28.mlp.gate_proj.weight": "model-00004-of-00006.safetensors",
200
+ "model.layers.28.mlp.up_proj.weight": "model-00004-of-00006.safetensors",
201
+ "model.layers.28.post_attention_layernorm.weight": "model-00004-of-00006.safetensors",
202
+ "model.layers.28.self_attn.k_proj.weight": "model-00004-of-00006.safetensors",
203
+ "model.layers.28.self_attn.o_proj.weight": "model-00004-of-00006.safetensors",
204
+ "model.layers.28.self_attn.q_proj.weight": "model-00004-of-00006.safetensors",
205
+ "model.layers.28.self_attn.v_proj.weight": "model-00004-of-00006.safetensors",
206
+ "model.layers.29.input_layernorm.weight": "model-00004-of-00006.safetensors",
207
+ "model.layers.29.mlp.down_proj.weight": "model-00004-of-00006.safetensors",
208
+ "model.layers.29.mlp.gate_proj.weight": "model-00004-of-00006.safetensors",
209
+ "model.layers.29.mlp.up_proj.weight": "model-00004-of-00006.safetensors",
210
+ "model.layers.29.post_attention_layernorm.weight": "model-00004-of-00006.safetensors",
211
+ "model.layers.29.self_attn.k_proj.weight": "model-00004-of-00006.safetensors",
212
+ "model.layers.29.self_attn.o_proj.weight": "model-00004-of-00006.safetensors",
213
+ "model.layers.29.self_attn.q_proj.weight": "model-00004-of-00006.safetensors",
214
+ "model.layers.29.self_attn.v_proj.weight": "model-00004-of-00006.safetensors",
215
+ "model.layers.3.input_layernorm.weight": "model-00001-of-00006.safetensors",
216
+ "model.layers.3.mlp.down_proj.weight": "model-00001-of-00006.safetensors",
217
+ "model.layers.3.mlp.gate_proj.weight": "model-00001-of-00006.safetensors",
218
+ "model.layers.3.mlp.up_proj.weight": "model-00001-of-00006.safetensors",
219
+ "model.layers.3.post_attention_layernorm.weight": "model-00001-of-00006.safetensors",
220
+ "model.layers.3.self_attn.k_proj.weight": "model-00001-of-00006.safetensors",
221
+ "model.layers.3.self_attn.o_proj.weight": "model-00001-of-00006.safetensors",
222
+ "model.layers.3.self_attn.q_proj.weight": "model-00001-of-00006.safetensors",
223
+ "model.layers.3.self_attn.v_proj.weight": "model-00001-of-00006.safetensors",
224
+ "model.layers.30.input_layernorm.weight": "model-00005-of-00006.safetensors",
225
+ "model.layers.30.mlp.down_proj.weight": "model-00005-of-00006.safetensors",
226
+ "model.layers.30.mlp.gate_proj.weight": "model-00004-of-00006.safetensors",
227
+ "model.layers.30.mlp.up_proj.weight": "model-00004-of-00006.safetensors",
228
+ "model.layers.30.post_attention_layernorm.weight": "model-00005-of-00006.safetensors",
229
+ "model.layers.30.self_attn.k_proj.weight": "model-00004-of-00006.safetensors",
230
+ "model.layers.30.self_attn.o_proj.weight": "model-00004-of-00006.safetensors",
231
+ "model.layers.30.self_attn.q_proj.weight": "model-00004-of-00006.safetensors",
232
+ "model.layers.30.self_attn.v_proj.weight": "model-00004-of-00006.safetensors",
233
+ "model.layers.31.input_layernorm.weight": "model-00005-of-00006.safetensors",
234
+ "model.layers.31.mlp.down_proj.weight": "model-00005-of-00006.safetensors",
235
+ "model.layers.31.mlp.gate_proj.weight": "model-00005-of-00006.safetensors",
236
+ "model.layers.31.mlp.up_proj.weight": "model-00005-of-00006.safetensors",
237
+ "model.layers.31.post_attention_layernorm.weight": "model-00005-of-00006.safetensors",
238
+ "model.layers.31.self_attn.k_proj.weight": "model-00005-of-00006.safetensors",
239
+ "model.layers.31.self_attn.o_proj.weight": "model-00005-of-00006.safetensors",
240
+ "model.layers.31.self_attn.q_proj.weight": "model-00005-of-00006.safetensors",
241
+ "model.layers.31.self_attn.v_proj.weight": "model-00005-of-00006.safetensors",
242
+ "model.layers.32.input_layernorm.weight": "model-00005-of-00006.safetensors",
243
+ "model.layers.32.mlp.down_proj.weight": "model-00005-of-00006.safetensors",
244
+ "model.layers.32.mlp.gate_proj.weight": "model-00005-of-00006.safetensors",
245
+ "model.layers.32.mlp.up_proj.weight": "model-00005-of-00006.safetensors",
246
+ "model.layers.32.post_attention_layernorm.weight": "model-00005-of-00006.safetensors",
247
+ "model.layers.32.self_attn.k_proj.weight": "model-00005-of-00006.safetensors",
248
+ "model.layers.32.self_attn.o_proj.weight": "model-00005-of-00006.safetensors",
249
+ "model.layers.32.self_attn.q_proj.weight": "model-00005-of-00006.safetensors",
250
+ "model.layers.32.self_attn.v_proj.weight": "model-00005-of-00006.safetensors",
251
+ "model.layers.33.input_layernorm.weight": "model-00005-of-00006.safetensors",
252
+ "model.layers.33.mlp.down_proj.weight": "model-00005-of-00006.safetensors",
253
+ "model.layers.33.mlp.gate_proj.weight": "model-00005-of-00006.safetensors",
254
+ "model.layers.33.mlp.up_proj.weight": "model-00005-of-00006.safetensors",
255
+ "model.layers.33.post_attention_layernorm.weight": "model-00005-of-00006.safetensors",
256
+ "model.layers.33.self_attn.k_proj.weight": "model-00005-of-00006.safetensors",
257
+ "model.layers.33.self_attn.o_proj.weight": "model-00005-of-00006.safetensors",
258
+ "model.layers.33.self_attn.q_proj.weight": "model-00005-of-00006.safetensors",
259
+ "model.layers.33.self_attn.v_proj.weight": "model-00005-of-00006.safetensors",
260
+ "model.layers.34.input_layernorm.weight": "model-00005-of-00006.safetensors",
261
+ "model.layers.34.mlp.down_proj.weight": "model-00005-of-00006.safetensors",
262
+ "model.layers.34.mlp.gate_proj.weight": "model-00005-of-00006.safetensors",
263
+ "model.layers.34.mlp.up_proj.weight": "model-00005-of-00006.safetensors",
264
+ "model.layers.34.post_attention_layernorm.weight": "model-00005-of-00006.safetensors",
265
+ "model.layers.34.self_attn.k_proj.weight": "model-00005-of-00006.safetensors",
266
+ "model.layers.34.self_attn.o_proj.weight": "model-00005-of-00006.safetensors",
267
+ "model.layers.34.self_attn.q_proj.weight": "model-00005-of-00006.safetensors",
268
+ "model.layers.34.self_attn.v_proj.weight": "model-00005-of-00006.safetensors",
269
+ "model.layers.35.input_layernorm.weight": "model-00005-of-00006.safetensors",
270
+ "model.layers.35.mlp.down_proj.weight": "model-00005-of-00006.safetensors",
271
+ "model.layers.35.mlp.gate_proj.weight": "model-00005-of-00006.safetensors",
272
+ "model.layers.35.mlp.up_proj.weight": "model-00005-of-00006.safetensors",
273
+ "model.layers.35.post_attention_layernorm.weight": "model-00005-of-00006.safetensors",
274
+ "model.layers.35.self_attn.k_proj.weight": "model-00005-of-00006.safetensors",
275
+ "model.layers.35.self_attn.o_proj.weight": "model-00005-of-00006.safetensors",
276
+ "model.layers.35.self_attn.q_proj.weight": "model-00005-of-00006.safetensors",
277
+ "model.layers.35.self_attn.v_proj.weight": "model-00005-of-00006.safetensors",
278
+ "model.layers.36.input_layernorm.weight": "model-00005-of-00006.safetensors",
279
+ "model.layers.36.mlp.down_proj.weight": "model-00005-of-00006.safetensors",
280
+ "model.layers.36.mlp.gate_proj.weight": "model-00005-of-00006.safetensors",
281
+ "model.layers.36.mlp.up_proj.weight": "model-00005-of-00006.safetensors",
282
+ "model.layers.36.post_attention_layernorm.weight": "model-00005-of-00006.safetensors",
283
+ "model.layers.36.self_attn.k_proj.weight": "model-00005-of-00006.safetensors",
284
+ "model.layers.36.self_attn.o_proj.weight": "model-00005-of-00006.safetensors",
285
+ "model.layers.36.self_attn.q_proj.weight": "model-00005-of-00006.safetensors",
286
+ "model.layers.36.self_attn.v_proj.weight": "model-00005-of-00006.safetensors",
287
+ "model.layers.37.input_layernorm.weight": "model-00005-of-00006.safetensors",
288
+ "model.layers.37.mlp.down_proj.weight": "model-00005-of-00006.safetensors",
289
+ "model.layers.37.mlp.gate_proj.weight": "model-00005-of-00006.safetensors",
290
+ "model.layers.37.mlp.up_proj.weight": "model-00005-of-00006.safetensors",
291
+ "model.layers.37.post_attention_layernorm.weight": "model-00005-of-00006.safetensors",
292
+ "model.layers.37.self_attn.k_proj.weight": "model-00005-of-00006.safetensors",
293
+ "model.layers.37.self_attn.o_proj.weight": "model-00005-of-00006.safetensors",
294
+ "model.layers.37.self_attn.q_proj.weight": "model-00005-of-00006.safetensors",
295
+ "model.layers.37.self_attn.v_proj.weight": "model-00005-of-00006.safetensors",
296
+ "model.layers.38.input_layernorm.weight": "model-00006-of-00006.safetensors",
297
+ "model.layers.38.mlp.down_proj.weight": "model-00006-of-00006.safetensors",
298
+ "model.layers.38.mlp.gate_proj.weight": "model-00005-of-00006.safetensors",
299
+ "model.layers.38.mlp.up_proj.weight": "model-00006-of-00006.safetensors",
300
+ "model.layers.38.post_attention_layernorm.weight": "model-00006-of-00006.safetensors",
301
+ "model.layers.38.self_attn.k_proj.weight": "model-00005-of-00006.safetensors",
302
+ "model.layers.38.self_attn.o_proj.weight": "model-00005-of-00006.safetensors",
303
+ "model.layers.38.self_attn.q_proj.weight": "model-00005-of-00006.safetensors",
304
+ "model.layers.38.self_attn.v_proj.weight": "model-00005-of-00006.safetensors",
305
+ "model.layers.39.input_layernorm.weight": "model-00006-of-00006.safetensors",
306
+ "model.layers.39.mlp.down_proj.weight": "model-00006-of-00006.safetensors",
307
+ "model.layers.39.mlp.gate_proj.weight": "model-00006-of-00006.safetensors",
308
+ "model.layers.39.mlp.up_proj.weight": "model-00006-of-00006.safetensors",
309
+ "model.layers.39.post_attention_layernorm.weight": "model-00006-of-00006.safetensors",
310
+ "model.layers.39.self_attn.k_proj.weight": "model-00006-of-00006.safetensors",
311
+ "model.layers.39.self_attn.o_proj.weight": "model-00006-of-00006.safetensors",
312
+ "model.layers.39.self_attn.q_proj.weight": "model-00006-of-00006.safetensors",
313
+ "model.layers.39.self_attn.v_proj.weight": "model-00006-of-00006.safetensors",
314
+ "model.layers.4.input_layernorm.weight": "model-00001-of-00006.safetensors",
315
+ "model.layers.4.mlp.down_proj.weight": "model-00001-of-00006.safetensors",
316
+ "model.layers.4.mlp.gate_proj.weight": "model-00001-of-00006.safetensors",
317
+ "model.layers.4.mlp.up_proj.weight": "model-00001-of-00006.safetensors",
318
+ "model.layers.4.post_attention_layernorm.weight": "model-00001-of-00006.safetensors",
319
+ "model.layers.4.self_attn.k_proj.weight": "model-00001-of-00006.safetensors",
320
+ "model.layers.4.self_attn.o_proj.weight": "model-00001-of-00006.safetensors",
321
+ "model.layers.4.self_attn.q_proj.weight": "model-00001-of-00006.safetensors",
322
+ "model.layers.4.self_attn.v_proj.weight": "model-00001-of-00006.safetensors",
323
+ "model.layers.5.input_layernorm.weight": "model-00001-of-00006.safetensors",
324
+ "model.layers.5.mlp.down_proj.weight": "model-00001-of-00006.safetensors",
325
+ "model.layers.5.mlp.gate_proj.weight": "model-00001-of-00006.safetensors",
326
+ "model.layers.5.mlp.up_proj.weight": "model-00001-of-00006.safetensors",
327
+ "model.layers.5.post_attention_layernorm.weight": "model-00001-of-00006.safetensors",
328
+ "model.layers.5.self_attn.k_proj.weight": "model-00001-of-00006.safetensors",
329
+ "model.layers.5.self_attn.o_proj.weight": "model-00001-of-00006.safetensors",
330
+ "model.layers.5.self_attn.q_proj.weight": "model-00001-of-00006.safetensors",
331
+ "model.layers.5.self_attn.v_proj.weight": "model-00001-of-00006.safetensors",
332
+ "model.layers.6.input_layernorm.weight": "model-00001-of-00006.safetensors",
333
+ "model.layers.6.mlp.down_proj.weight": "model-00001-of-00006.safetensors",
334
+ "model.layers.6.mlp.gate_proj.weight": "model-00001-of-00006.safetensors",
335
+ "model.layers.6.mlp.up_proj.weight": "model-00001-of-00006.safetensors",
336
+ "model.layers.6.post_attention_layernorm.weight": "model-00001-of-00006.safetensors",
337
+ "model.layers.6.self_attn.k_proj.weight": "model-00001-of-00006.safetensors",
338
+ "model.layers.6.self_attn.o_proj.weight": "model-00001-of-00006.safetensors",
339
+ "model.layers.6.self_attn.q_proj.weight": "model-00001-of-00006.safetensors",
340
+ "model.layers.6.self_attn.v_proj.weight": "model-00001-of-00006.safetensors",
341
+ "model.layers.7.input_layernorm.weight": "model-00002-of-00006.safetensors",
342
+ "model.layers.7.mlp.down_proj.weight": "model-00002-of-00006.safetensors",
343
+ "model.layers.7.mlp.gate_proj.weight": "model-00002-of-00006.safetensors",
344
+ "model.layers.7.mlp.up_proj.weight": "model-00002-of-00006.safetensors",
345
+ "model.layers.7.post_attention_layernorm.weight": "model-00002-of-00006.safetensors",
346
+ "model.layers.7.self_attn.k_proj.weight": "model-00001-of-00006.safetensors",
347
+ "model.layers.7.self_attn.o_proj.weight": "model-00001-of-00006.safetensors",
348
+ "model.layers.7.self_attn.q_proj.weight": "model-00001-of-00006.safetensors",
349
+ "model.layers.7.self_attn.v_proj.weight": "model-00001-of-00006.safetensors",
350
+ "model.layers.8.input_layernorm.weight": "model-00002-of-00006.safetensors",
351
+ "model.layers.8.mlp.down_proj.weight": "model-00002-of-00006.safetensors",
352
+ "model.layers.8.mlp.gate_proj.weight": "model-00002-of-00006.safetensors",
353
+ "model.layers.8.mlp.up_proj.weight": "model-00002-of-00006.safetensors",
354
+ "model.layers.8.post_attention_layernorm.weight": "model-00002-of-00006.safetensors",
355
+ "model.layers.8.self_attn.k_proj.weight": "model-00002-of-00006.safetensors",
356
+ "model.layers.8.self_attn.o_proj.weight": "model-00002-of-00006.safetensors",
357
+ "model.layers.8.self_attn.q_proj.weight": "model-00002-of-00006.safetensors",
358
+ "model.layers.8.self_attn.v_proj.weight": "model-00002-of-00006.safetensors",
359
+ "model.layers.9.input_layernorm.weight": "model-00002-of-00006.safetensors",
360
+ "model.layers.9.mlp.down_proj.weight": "model-00002-of-00006.safetensors",
361
+ "model.layers.9.mlp.gate_proj.weight": "model-00002-of-00006.safetensors",
362
+ "model.layers.9.mlp.up_proj.weight": "model-00002-of-00006.safetensors",
363
+ "model.layers.9.post_attention_layernorm.weight": "model-00002-of-00006.safetensors",
364
+ "model.layers.9.self_attn.k_proj.weight": "model-00002-of-00006.safetensors",
365
+ "model.layers.9.self_attn.o_proj.weight": "model-00002-of-00006.safetensors",
366
+ "model.layers.9.self_attn.q_proj.weight": "model-00002-of-00006.safetensors",
367
+ "model.layers.9.self_attn.v_proj.weight": "model-00002-of-00006.safetensors",
368
+ "model.norm.weight": "model-00006-of-00006.safetensors"
369
+ }
370
+ }
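
The `weight_map` just closed records, for every LLM tensor, which of the six `model-0000X-of-00006.safetensors` shards stores it. Below is a minimal sketch of inspecting that index offline; the relative path `llm/model.safetensors.index.json` follows this repo's layout, and the script only assumes the standard `metadata`/`weight_map` structure shown above.

```python
import json
from collections import Counter

# Path assumed from this repository's layout (llm/ subfolder); adjust for a local checkout.
INDEX_PATH = "llm/model.safetensors.index.json"

with open(INDEX_PATH) as f:
    index = json.load(f)

# "weight_map" maps each tensor name to the shard that stores it, as listed above.
weight_map = index["weight_map"]
tensors_per_shard = Counter(weight_map.values())

for shard, count in sorted(tensors_per_shard.items()):
    print(f"{shard}: {count} tensors")

# Example lookup: the final norm lives in the last shard.
print(weight_map["model.norm.weight"])  # model-00006-of-00006.safetensors
```

In normal use `transformers`/`safetensors` resolve the shards through this index automatically, so manual handling like this is only needed for debugging or partial loading.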
llm/special_tokens_map.json ADDED
@@ -0,0 +1,24 @@
+ {
+   "bos_token": {
+     "content": "<s>",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "eos_token": {
+     "content": "</s>",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "pad_token": "<unk>",
+   "unk_token": {
+     "content": "<unk>",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   }
+ }
llm/tokenizer.model ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:7aedb3582ecda9fa99ee9242c17a9658f6744db083ee6ebdc8fb14857f84d220
+ size 499723
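
Because `llm/tokenizer.model` is tracked with Git LFS, only the pointer above (oid and size) lives in the Git history. The sketch below, with an assumed local path, checks that a downloaded copy matches the pointer; the sha256 and size are copied verbatim from the pointer.

```python
import hashlib
import os

# Values copied from the LFS pointer above; the local path is an assumption.
LOCAL_PATH = "llm/tokenizer.model"
EXPECTED_SHA256 = "7aedb3582ecda9fa99ee9242c17a9658f6744db083ee6ebdc8fb14857f84d220"
EXPECTED_SIZE = 499723


def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()


assert os.path.getsize(LOCAL_PATH) == EXPECTED_SIZE, "size mismatch (still an LFS pointer file?)"
assert sha256_of(LOCAL_PATH) == EXPECTED_SHA256, "sha256 mismatch"
print("llm/tokenizer.model matches its LFS pointer")
```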
llm/tokenizer_config.json ADDED
@@ -0,0 +1,43 @@
+ {
+   "add_bos_token": true,
+   "add_eos_token": false,
+   "add_prefix_space": true,
+   "added_tokens_decoder": {
+     "0": {
+       "content": "<unk>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "1": {
+       "content": "<s>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "2": {
+       "content": "</s>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     }
+   },
+   "bos_token": "<s>",
+   "clean_up_tokenization_spaces": false,
+   "eos_token": "</s>",
+   "legacy": false,
+   "model_max_length": 4096,
+   "pad_token": "<unk>",
+   "padding_side": "right",
+   "sp_model_kwargs": {},
+   "spaces_between_special_tokens": false,
+   "tokenizer_class": "LlamaTokenizer",
+   "unk_token": "<unk>",
+   "use_default_system_prompt": false
+ }
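
The tokenizer config above declares a slow `LlamaTokenizer` that prepends `<s>`, does not append `</s>`, reuses `<unk>` as the padding token, and caps sequences at 4096 tokens. A minimal sketch that loads it and confirms those settings, assuming `./llm` points at the `llm/` subfolder of a local copy of this repository.

```python
from transformers import AutoTokenizer

# Assumption: "./llm" is the llm/ subfolder of a local copy of this repository.
tokenizer = AutoTokenizer.from_pretrained("./llm", use_fast=False)

print(type(tokenizer).__name__)    # LlamaTokenizer
print(tokenizer.bos_token, tokenizer.eos_token, tokenizer.pad_token, tokenizer.unk_token)
print(tokenizer.model_max_length)  # 4096

# add_bos_token=True / add_eos_token=False: <s> is prepended, </s> is not appended.
ids = tokenizer("Hello, VILA!").input_ids
print(ids[0] == tokenizer.bos_token_id, ids[-1] != tokenizer.eos_token_id)  # True True
```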
main.py ADDED
File without changes
media.py ADDED
@@ -0,0 +1,129 @@
+ import glob
+ import os
+ from collections import defaultdict
+ from typing import Any, Dict, List, Optional, Union
+
+ import cv2
+ import numpy as np
+ import PIL
+ import PIL.Image
+ import requests
+ from transformers import PretrainedConfig
+
+ # from llava.constants import MEDIA_TOKENS
+ # from llava.media import Image, Video
+ # from llava.utils import make_list
+ # from llava.utils.logging import logger
+
+ MEDIA_TOKENS = {
+     "image": "<image>",
+     "video": "<vila/video>",
+ }
+
+
+ class Media:
+     pass
+
+
+ class File(Media):
+     def __init__(self, path: str) -> None:
+         self.path = path
+
+
+ class Image(File):
+     pass
+
+
+ class Video(File):
+     pass
+
+
+ def make_list(obj: Any) -> List:
+     return obj if isinstance(obj, list) else [obj]
+
+
+ def _extract_image(image: Union[Image, PIL.Image.Image]) -> PIL.Image.Image:
+     if isinstance(image, Image):
+         if image.path.startswith("http://") or image.path.startswith("https://"):
+             image = PIL.Image.open(requests.get(image.path, stream=True).raw)
+         else:
+             image = PIL.Image.open(image.path)
+     return image
+
+
+ def _load_video(video_path: str, *, num_frames: int) -> List[PIL.Image.Image]:
+     # Load video frames from a directory
+     if os.path.isdir(video_path):
+         frame_paths = sorted(glob.glob(os.path.join(video_path, "*")))
+         indices = np.round(np.linspace(0, len(frame_paths) - 1, num_frames)).astype(int)
+         return [PIL.Image.open(frame_paths[index]) for index in indices]
+
+     # Load video frames from a video file
+     vidcap = cv2.VideoCapture(video_path)
+
+     # Find the last frame as frame count might not be accurate
+     frame_count = int(vidcap.get(cv2.CAP_PROP_FRAME_COUNT))
+     while frame_count > 0:
+         vidcap.set(cv2.CAP_PROP_POS_FRAMES, frame_count - 1)
+         if vidcap.grab():
+             break
+         frame_count -= 1
+     else:
+         raise ValueError(f"Video '{video_path}' has no frames.")
+
+     # Extract frames uniformly
+     indices = np.round(np.linspace(0, frame_count - 1, num_frames)).astype(int)
+     frames = {}
+     for index in indices:
+         if index in frames:
+             continue
+         vidcap.set(cv2.CAP_PROP_POS_FRAMES, index)
+         success, frame = vidcap.read()
+         if not success:
+             print(f"Failed to read frame {index} from video '{video_path}'. Skipped.")
+             continue
+         frame = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
+         frames[index] = PIL.Image.fromarray(frame)
+     return [frames[index] for index in indices if index in frames]
+
+
+ def _extract_video(video: Video, config: PretrainedConfig) -> List[PIL.Image.Image]:
+     num_frames = config.num_video_frames
+     if getattr(config, "fps") != 0:
+         print("Extracting frames from video with specified FPS is not supported yet. Ignored.")
+
+     frames = _load_video(video.path, num_frames=num_frames)
+     return frames
+
+
+ def extract_media(
+     messages: List[Dict[str, Any]],
+     config: Optional[PretrainedConfig] = None,
+     draft: bool = False,
+ ) -> Dict[str, List[Any]]:
+     media = defaultdict(list)
+     for message in messages:
+         text = ""
+         for part in make_list(message["value"]):
+             if isinstance(part, str):
+                 for token in MEDIA_TOKENS.values():
+                     if token in part:
+                         print(f"Media token '{token}' found in text: '{part}'. Removed.")
+                         part = part.replace(token, "").strip()
+                 text += part
+             elif isinstance(part, (Image, PIL.Image.Image)):
+                 if draft:
+                     media["image"].append(part)
+                 else:
+                     media["image"].append(_extract_image(part))
+                 text += MEDIA_TOKENS["image"]
+             elif isinstance(part, Video):
+                 if draft:
+                     media["video"].append(part)
+                 else:
+                     media["video"].append(_extract_video(part, config))
+                 text += MEDIA_TOKENS["video"]
+             else:
+                 raise ValueError(f"Unsupported prompt part type: {type(part)}")
+         message["value"] = text
+     return media
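
`extract_media` above normalizes a list of chat messages: `Image`/`Video` parts are loaded and collected per type, and the corresponding `<image>`/`<vila/video>` token is spliced into the message text. A small usage sketch, assuming a local file `demo.jpg` exists and that `media.py` is importable from the repository root (inside the packaged model it is imported as `.media`); a config carrying `num_video_frames` is only needed when videos are involved.

```python
# Hypothetical usage of media.py; "demo.jpg" is a placeholder image path.
from media import Image, extract_media

messages = [{"from": "human", "value": [Image("demo.jpg"), "Describe this image."]}]

media = extract_media(messages)

# The Image part was replaced by the <image> token inside the message text...
print(messages[0]["value"])            # <image>Describe this image.
# ...and the loaded PIL image was collected under media["image"].
print(len(media["image"]), media["image"][0].size)
```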
media_encoder.py ADDED
@@ -0,0 +1,101 @@
1
+ from functools import partial
2
+ from typing import Any, Dict, List, Optional
3
+
4
+ import torch
5
+ from torch import nn
6
+
7
+
8
+ class BaseEncoder(nn.Module):
9
+ def __init__(self, parent: nn.Module) -> None:
10
+ super().__init__()
11
+ self._parent = [parent]
12
+
13
+ @property
14
+ def parent(self) -> nn.Module:
15
+ return self._parent[0]
16
+
17
+
18
+ class BasicImageEncoder(BaseEncoder):
19
+ def __init__(
20
+ self,
21
+ parent: torch.nn.Module,
22
+ start_tokens: Optional[str] = None,
23
+ end_tokens: Optional[str] = "\n",
24
+ ) -> None:
25
+ super().__init__(parent)
26
+ self.start_tokens = start_tokens
27
+ self.end_tokens = end_tokens
28
+
29
+ def embed_tokens(self, tokens: Optional[str]) -> Optional[torch.Tensor]:
30
+ if tokens is None:
31
+ return None
32
+ token_ids = self.parent.tokenizer(tokens).input_ids
33
+ token_ids = torch.tensor(token_ids, device=self.parent.device)
34
+ return self.parent.llm.model.embed_tokens(token_ids)
35
+
36
+ def _process_features(
37
+ self,
38
+ features: torch.Tensor,
39
+ start_token_embeds: Optional[torch.Tensor],
40
+ end_token_embeds: Optional[torch.Tensor],
41
+ ) -> torch.Tensor:
42
+ if start_token_embeds is not None:
43
+ features = torch.cat([start_token_embeds, features], dim=0)
44
+ if end_token_embeds is not None:
45
+ features = torch.cat([features, end_token_embeds], dim=0)
46
+ return features
47
+
48
+ def forward(self, images: List[torch.Tensor], config: Dict[str, Any]) -> List[torch.Tensor]:
49
+ images = torch.stack(images, dim=0)
50
+ features = self.parent.encode_images(images, block_sizes=config.get("block_sizes"))
51
+ process_features = partial(
52
+ self._process_features,
53
+ start_token_embeds=self.embed_tokens(self.start_tokens),
54
+ end_token_embeds=self.embed_tokens(self.end_tokens),
55
+ )
56
+ return [process_features(f) for f in features]
57
+
58
+
59
+ class BasicVideoEncoder(BaseEncoder):
60
+ def __init__(
61
+ self,
62
+ parent: torch.nn.Module,
63
+ start_tokens: Optional[str] = None,
64
+ end_tokens: Optional[str] = "\n",
65
+ ) -> None:
66
+ super().__init__(parent)
67
+ self.start_tokens = start_tokens
68
+ self.end_tokens = end_tokens
69
+
70
+ def embed_tokens(self, tokens: Optional[str]) -> Optional[torch.Tensor]:
71
+ if tokens is None:
72
+ return None
73
+ token_ids = self.parent.tokenizer(tokens).input_ids
74
+ token_ids = torch.tensor(token_ids, device=self.parent.device)
75
+ return self.parent.llm.model.embed_tokens(token_ids)
76
+
77
+ def _process_features(
78
+ self,
79
+ features: torch.Tensor,
80
+ start_token_embeds: Optional[torch.Tensor],
81
+ end_token_embeds: Optional[torch.Tensor],
82
+ ) -> torch.Tensor:
83
+ if start_token_embeds is not None:
84
+ start_embeds = torch.stack([start_token_embeds] * features.shape[0], dim=0)
85
+ features = torch.cat([start_embeds, features], dim=1)
86
+ if end_token_embeds is not None:
87
+ end_embeds = torch.stack([end_token_embeds] * features.shape[0], dim=0)
88
+ features = torch.cat([features, end_embeds], dim=1)
89
+ return features.flatten(0, 1)
90
+
91
+ def forward(self, videos: List[torch.Tensor], config: Dict[str, Any]) -> List[torch.Tensor]:
92
+ num_frames = [video.shape[0] for video in videos]
93
+ images = torch.cat(videos, dim=0)
94
+ features = self.parent.encode_images(images)
95
+ features = torch.split(features, num_frames)
96
+ process_features = partial(
97
+ self._process_features,
98
+ start_token_embeds=self.embed_tokens(self.start_tokens),
99
+ end_token_embeds=self.embed_tokens(self.end_tokens),
100
+ )
101
+ return [process_features(f) for f in features]
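
`BasicImageEncoder` above delegates the heavy lifting to its parent model: the parent supplies the tokenizer, the LLM embedding table, and `encode_images`, while the encoder merely wraps each image's feature sequence with optional start/end token embeddings. The following shape-only sketch drives it with a hypothetical `DummyParent` that stubs those hooks; the real parent is the VILA model defined in `modeling_vila.py`.

```python
# Shape-only sketch; DummyParent is hypothetical and stands in for the VILA model.
from types import SimpleNamespace

import torch
from torch import nn

from media_encoder import BasicImageEncoder  # assumes the repository root is on PYTHONPATH


class DummyParent(nn.Module):
    def __init__(self, hidden: int = 32) -> None:
        super().__init__()
        self.hidden = hidden
        self.device = "cpu"
        # Mimic parent.llm.model.embed_tokens and parent.tokenizer used by embed_tokens().
        self.llm = SimpleNamespace(model=SimpleNamespace(embed_tokens=nn.Embedding(100, hidden)))
        self.tokenizer = lambda text: SimpleNamespace(input_ids=[2])  # fake: one token id

    def encode_images(self, images: torch.Tensor, block_sizes=None) -> torch.Tensor:
        # Pretend every image is encoded into 16 feature tokens of width `hidden`.
        return torch.randn(images.shape[0], 16, self.hidden)


parent = DummyParent()
encoder = BasicImageEncoder(parent, start_tokens=None, end_tokens="\n")
features = encoder([torch.randn(3, 384, 384) for _ in range(2)], config={})
# 16 image tokens + 1 trailing "\n" embedding per image -> two (17, 32) sequences.
print([tuple(f.shape) for f in features])
```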
mm_projector/config.json ADDED
@@ -0,0 +1,10 @@
+ {
+   "_name_or_path": "./mm_projector",
+   "architectures": [
+     "MultimodalProjector"
+   ],
+   "mm_projector_type": "mlp_downsample",
+   "model_type": "v2l_projector",
+   "torch_dtype": "bfloat16",
+   "transformers_version": "4.36.2"
+ }
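
This small config tells `build_mm_projector` in `modeling_vila.py` to instantiate a `MultimodalProjector` of type `mlp_downsample` in bfloat16 for the weights stored next to it. A minimal sketch of reading the file without instantiating anything; the relative path assumes a local checkout of this repo.

```python
import json

# Assumption: reading from a local checkout of this repository.
with open("mm_projector/config.json") as f:
    cfg = json.load(f)

# The projector maps vision-tower features into the LLM embedding space.
print(cfg["architectures"])      # ['MultimodalProjector']
print(cfg["mm_projector_type"])  # mlp_downsample
print(cfg["torch_dtype"])        # bfloat16
```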
mm_projector/model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:499e29faf2e9e1f08905674a15c34c1c4db51248836f4bf113aaf3c0cb78eabd
+ size 99654160
mm_utils.py ADDED
@@ -0,0 +1,572 @@
1
+ # Copyright 2024 NVIDIA CORPORATION & AFFILIATES
2
+ #
3
+ # Licensed under the Apache License, Version 2.0 (the "License");
4
+ # you may not use this file except in compliance with the License.
5
+ # You may obtain a copy of the License at
6
+ #
7
+ # http://www.apache.org/licenses/LICENSE-2.0
8
+ #
9
+ # Unless required by applicable law or agreed to in writing, software
10
+ # distributed under the License is distributed on an "AS IS" BASIS,
11
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12
+ # See the License for the specific language governing permissions and
13
+ # limitations under the License.
14
+ #
15
+ # SPDX-License-Identifier: Apache-2.0
16
+
17
+ # dynamic_preprocess and find_closest_aspect_ratio are referenced from https://github.com/OpenGVLab/InternVL
18
+
19
+ import base64
20
+ import os
21
+ import tempfile
22
+ from io import BytesIO
23
+
24
+ import numpy as np
25
+ import torch
26
+ from PIL import Image
27
+ from transformers import StoppingCriteria
28
+
29
+ from llava.constants import DEFAULT_IMAGE_TOKEN
30
+
31
+
32
+ def get_frame_from_vcap(vidcap, num_frames=10, max_fps=0.0, fps=None, frame_count=None, video_file_name=None):
33
+ import cv2
34
+
35
+ if fps == None or frame_count == None:
36
+ # if one of fps or frame_count is None, still recompute
37
+ fps = vidcap.get(cv2.CAP_PROP_FPS)
38
+ frame_count = int(vidcap.get(cv2.CAP_PROP_FRAME_COUNT))
39
+ if fps == 0 or frame_count == 0:
40
+ print(f"Video file not found. return empty images. {video_file_name}")
41
+ return [
42
+ Image.new("RGB", (720, 720)),
43
+ ] * num_frames, 0
44
+
45
+ duration = frame_count / fps
46
+ frame_interval = frame_count // num_frames
47
+ if frame_interval == 0 and frame_count <= 1:
48
+ print(f"frame_interval is equal to 0. return empty image. {video_file_name}")
49
+ return [
50
+ Image.new("RGB", (720, 720)),
51
+ ] * num_frames, 0
52
+ # print("duration:", duration, "frames:", frame_count, "intervals:", frame_interval)
53
+
54
+ images = []
55
+ count = 0
56
+ success = True
57
+ frame_indices = np.linspace(0, frame_count - 1, num_frames, dtype=int)
58
+ while success:
59
+ # print("frame_count:", frame_count, "count:", count, "num_frames:", num_frames, "frame_interval:", frame_interval)
60
+ if frame_count >= num_frames:
61
+ success, frame = vidcap.read()
62
+ if count in frame_indices:
63
+ try:
64
+ img = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
65
+ im_pil = Image.fromarray(img)
66
+ images.append(im_pil)
67
+ except BaseException:
68
+ continue
69
+ if len(images) >= num_frames:
70
+ return images, num_frames
71
+ count += 1
72
+ else:
73
+ # Left padding frames if the video is not long enough
74
+ success, frame = vidcap.read()
75
+ if success:
76
+ try:
77
+ img = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
78
+ im_pil = Image.fromarray(img)
79
+ images.append(im_pil)
80
+ except BaseException:
81
+ continue
82
+ count += 1
83
+ else:
84
+ break
85
+ if len(images) == 0:
86
+ raise ValueError("Did not find enough frames in the video. return empty image.")
87
+
88
+ return images, len(images)
89
+
90
+
91
+ def get_frame_from_vcap_with_fps(vidcap, num_frames=10, max_fps=0.0, fps=None, frame_count=None, video_file_name=None):
92
+ """
93
+ num_frames is the max number of frames the model can support.
94
+ frame_count is the number of frames in the input video.
95
+ max_fps is the max FPS of the model can support.
96
+ fps is the fps of the input video.
97
+ """
98
+
99
+ import random
100
+
101
+ import cv2
102
+
103
+ if fps == None or frame_count == None:
104
+ # if one of fps or frame_count is None, still recompute
105
+ fps = vidcap.get(cv2.CAP_PROP_FPS)
106
+ frame_count = int(vidcap.get(cv2.CAP_PROP_FRAME_COUNT))
107
+
108
+ if fps == 0 or frame_count == 0:
109
+ print(f"Video file not found. return empty images. {video_file_name}")
110
+ empty_video_frames = int(random.uniform(2, 8 * max_fps))
111
+ return [
112
+ Image.new("RGB", (720, 720)),
113
+ ] * empty_video_frames, 0
114
+
115
+ duration = frame_count / fps
116
+ # print("duration:", duration, "frames:", frame_count, "fps:", fps, "num_frames:", num_frames, "max_fps:", max_fps)
117
+ # If the video is too long (longer than max_fps and num_frames can support),
118
+ # we will use lower fps to sample frames.
119
+ if duration >= num_frames / max_fps:
120
+ frame_interval = frame_count // num_frames
121
+
122
+ # If the video is too short, we will skip the video if there is only one frame.
123
+ if frame_interval == 0 and frame_count <= 1:
124
+ print(f"frame_interval is equal to 0. return empty image. {video_file_name}")
125
+ empty_video_frames = int(random.uniform(2, 8 * max_fps))
126
+ return [
127
+ Image.new("RGB", (720, 720)),
128
+ ] * empty_video_frames, 0
129
+
130
+ images = []
131
+ count = 0
132
+ success = True
133
+ frame_indices = np.linspace(0, frame_count - 1, num_frames, dtype=int)
134
+
135
+ while success:
136
+ if frame_count >= num_frames:
137
+ # success, frame = vidcap.read()
138
+ if count in frame_indices:
139
+ success, frame = vidcap.read()
140
+ try:
141
+ img = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
142
+ im_pil = Image.fromarray(img)
143
+ images.append(im_pil)
144
+ except:
145
+ # print("Failed to read frame:", count)
146
+ continue
147
+ if len(images) >= num_frames:
148
+ return images, num_frames
149
+ else:
150
+ success = vidcap.grab()
151
+ count += 1
152
+ else:
153
+ # Left padding frames if the video is not long enough
154
+ success, frame = vidcap.read()
155
+ if success:
156
+ try:
157
+ img = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
158
+ im_pil = Image.fromarray(img)
159
+ images.append(im_pil)
160
+ except:
161
+ # print("Failed to read frame:", count)
162
+ continue
163
+ count += 1
164
+ else:
165
+ break
166
+ else:
167
+ frames_required = int(duration * max_fps)
168
+ frame_indices = np.linspace(0, frame_count - 1, frames_required, dtype=int)
169
+ if frames_required == 0:
170
+ print(f"frames_required is fewer than 2. Duration {duration}, return empty image.")
171
+ empty_video_frames = int(random.uniform(2, 8 * max_fps))
172
+ return [
173
+ Image.new("RGB", (720, 720)),
174
+ ] * empty_video_frames, 0
175
+ elif frames_required == 1:
176
+ frame_indices = np.linspace(0, frame_count - 1, 2, dtype=int)
177
+ images = []
178
+ count = 0
179
+ looked = 0
180
+ success = True
181
+
182
+ while success:
183
+ success, frame = vidcap.read()
184
+ if success and (looked in frame_indices):
185
+ try:
186
+ img = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
187
+ im_pil = Image.fromarray(img)
188
+ images.append(im_pil)
189
+ except:
190
+ continue
191
+ count += 1
192
+ looked += 1
193
+
194
+ if len(images) == 0:
195
+ empty_video_frames = int(random.uniform(2, 8 * max_fps))
196
+ return [
197
+ Image.new("RGB", (720, 720)),
198
+ ] * empty_video_frames, 0
199
+ else:
200
+ return images, len(images)
201
+
202
+
203
+ def opencv_extract_frames(vpath_or_bytesio, frames=6, max_fps=0.0, fps=None, frame_count=None):
204
+ """
205
+ Extract frames from a video using OpenCV.
206
+
207
+ Args:
208
+ vpath_or_bytesio (str or BytesIO): Path to the video file or BytesIO object containing the video.
209
+ frames (int): Number of frames to extract from the video.
210
+ fps (float): Frames per second of the video. If 0.0, the function will extract frames at equal intervals.
211
+
212
+ Returns:
213
+ list: List of PIL Images extracted from the video.
214
+
215
+ Raises:
216
+ NotImplementedError: If the type of `vpath_or_bytesio` is not supported.
217
+ """
218
+ import cv2
219
+
220
+ if isinstance(vpath_or_bytesio, str):
221
+ vidcap = cv2.VideoCapture(vpath_or_bytesio)
222
+ if max_fps > 0.0:
223
+ return get_frame_from_vcap_with_fps(
224
+ vidcap, frames, max_fps, fps=fps, frame_count=frame_count, video_file_name=vpath_or_bytesio
225
+ )
226
+ return get_frame_from_vcap(
227
+ vidcap, frames, max_fps, fps=fps, frame_count=frame_count, video_file_name=vpath_or_bytesio
228
+ )
229
+ elif isinstance(vpath_or_bytesio, (BytesIO,)):
230
+ # assuming mp4
231
+ with tempfile.NamedTemporaryFile(delete=True, suffix=".mp4") as temp_video:
232
+ temp_video.write(vpath_or_bytesio.read())
233
+ temp_video_name = temp_video.name
234
+ vidcap = cv2.VideoCapture(temp_video_name)
235
+ if max_fps > 0.0:
236
+ return get_frame_from_vcap_with_fps(
237
+ vidcap, frames, max_fps, fps=fps, frame_count=frame_count, video_file_name=temp_video_name
238
+ )
239
+ return get_frame_from_vcap(
240
+ vidcap, frames, max_fps, fps=fps, frame_count=frame_count, video_file_name=temp_video_name
241
+ )
242
+ else:
243
+ raise NotImplementedError(type(vpath_or_bytesio))
244
+
245
+
246
+ def load_image_from_base64(image):
247
+ return Image.open(BytesIO(base64.b64decode(image)))
248
+
249
+
250
+ def expand2square(pil_img, background_color):
251
+ """
252
+ Expand the given PIL image to a square shape by adding padding.
253
+
254
+ Parameters:
255
+ - pil_img: The PIL image to be expanded.
256
+ - background_color: The color of the padding to be added.
257
+
258
+ Returns:
259
+ - The expanded PIL image.
260
+
261
+ If the image is already square, it is returned as is.
262
+ If the image is wider than it is tall, padding is added to the top and bottom.
263
+ If the image is taller than it is wide, padding is added to the left and right.
264
+ """
265
+ width, height = pil_img.size
266
+ if pil_img.mode == "L":
267
+ background_color = background_color[0]
268
+ if width == height:
269
+ return pil_img
270
+ elif width > height:
271
+ result = Image.new(pil_img.mode, (width, width), background_color)
272
+ result.paste(pil_img, (0, (width - height) // 2))
273
+ return result
274
+ else:
275
+ result = Image.new(pil_img.mode, (height, height), background_color)
276
+ result.paste(pil_img, ((height - width) // 2, 0))
277
+ return result
278
+
279
+
280
+ def find_closest_aspect_ratio(aspect_ratio, target_ratios, width, height, image_size):
281
+ best_ratio_diff = float("inf")
282
+ best_ratio = (1, 1)
283
+ area = width * height
284
+ for ratio in target_ratios:
285
+ target_aspect_ratio = ratio[0] / ratio[1]
286
+ ratio_diff = abs(aspect_ratio - target_aspect_ratio)
287
+ if ratio_diff < best_ratio_diff:
288
+ best_ratio_diff = ratio_diff
289
+ best_ratio = ratio
290
+ elif ratio_diff == best_ratio_diff:
291
+ if area > 0.5 * image_size * image_size * ratio[0] * ratio[1]:
292
+ best_ratio = ratio
293
+ return best_ratio
294
+
295
+
296
+ def dynamic_preprocess(image, min_num=1, max_num=12, image_size=384, use_thumbnail=True):
297
+ orig_width, orig_height = image.size
298
+ aspect_ratio = orig_width / orig_height
299
+
300
+ # calculate the existing image aspect ratio
301
+ target_ratios = {
302
+ (i, j)
303
+ for n in range(min_num, max_num + 1)
304
+ for i in range(1, n + 1)
305
+ for j in range(1, n + 1)
306
+ if i * j <= max_num and i * j >= min_num
307
+ }
308
+ target_ratios = sorted(target_ratios, key=lambda x: x[0] * x[1])
309
+
310
+ # find the closest aspect ratio to the target
311
+ target_aspect_ratio = find_closest_aspect_ratio(aspect_ratio, target_ratios, orig_width, orig_height, image_size)
312
+
313
+ # calculate the target width and height
314
+ target_width = image_size * target_aspect_ratio[0]
315
+ target_height = image_size * target_aspect_ratio[1]
316
+ blocks = target_aspect_ratio[0] * target_aspect_ratio[1]
317
+
318
+ # resize the image
319
+ resized_img = image.resize((target_width, target_height))
320
+ processed_images = []
321
+ for i in range(blocks):
322
+ box = (
323
+ (i % (target_width // image_size)) * image_size,
324
+ (i // (target_width // image_size)) * image_size,
325
+ ((i % (target_width // image_size)) + 1) * image_size,
326
+ ((i // (target_width // image_size)) + 1) * image_size,
327
+ )
328
+ # split the image
329
+ split_img = resized_img.crop(box)
330
+ processed_images.append(split_img)
331
+ assert len(processed_images) == blocks
332
+ if use_thumbnail and len(processed_images) != 1:
333
+ thumbnail_img = image.resize((image_size, image_size))
334
+ processed_images.append(thumbnail_img)
335
+ return processed_images
336
+
337
+
338
+ def dynamic_s2_preprocess(image, s2_scales=[384, 768, 1152], max_num=12, image_size=384):
339
+ orig_width, orig_height = image.size
340
+ aspect_ratio = orig_width / orig_height
341
+ min_num = (s2_scales[-1] // s2_scales[0]) ** 2 # at least use number of tiles as the largest scale
342
+
343
+ processed_images = []
344
+
345
+ ##########################################################################################
346
+ ############# Add tiles for all but the last scale using fixed squre ratio ###############
347
+ ##########################################################################################
348
+
349
+ for scale in s2_scales[:-1]:
350
+ target_width = image_size * (scale // s2_scales[0])
351
+ target_height = image_size * (scale // s2_scales[0])
352
+ blocks = (scale // s2_scales[0]) ** 2
353
+
354
+ # resize the image
355
+ resized_img = image.resize((target_width, target_height))
356
+ for i in range(blocks):
357
+ box = (
358
+ (i % (target_width // image_size)) * image_size,
359
+ (i // (target_width // image_size)) * image_size,
360
+ ((i % (target_width // image_size)) + 1) * image_size,
361
+ ((i // (target_width // image_size)) + 1) * image_size,
362
+ )
363
+ # split the image
364
+ split_img = resized_img.crop(box)
365
+ processed_images.append(split_img)
366
+
367
+ ##########################################################################################
368
+ ################ Add tiles for the last scale using dynamic aspect ratio #################
369
+ ##########################################################################################
370
+
371
+ # calculate the existing image aspect ratio
372
+ target_ratios = {
373
+ (i, j)
374
+ for n in range(min_num, max_num + 1)
375
+ for i in range(1, n + 1)
376
+ for j in range(1, n + 1)
377
+ if i * j <= max_num and i * j >= min_num
378
+ }
379
+ target_ratios = sorted(target_ratios, key=lambda x: x[0] * x[1])
380
+
381
+ # find the closest aspect ratio to the target
382
+ target_aspect_ratio = find_closest_aspect_ratio(aspect_ratio, target_ratios, orig_width, orig_height, image_size)
383
+
384
+ # calculate the target width and height
385
+ target_width = image_size * target_aspect_ratio[0]
386
+ target_height = image_size * target_aspect_ratio[1]
387
+ blocks = target_aspect_ratio[0] * target_aspect_ratio[1]
388
+
389
+ # resize the image
390
+ resized_img = image.resize((target_width, target_height))
391
+ for i in range(blocks):
392
+ box = (
393
+ (i % (target_width // image_size)) * image_size,
394
+ (i // (target_width // image_size)) * image_size,
395
+ ((i % (target_width // image_size)) + 1) * image_size,
396
+ ((i // (target_width // image_size)) + 1) * image_size,
397
+ )
398
+ # split the image
399
+ split_img = resized_img.crop(box)
400
+ processed_images.append(split_img)
401
+
402
+ return processed_images, (target_aspect_ratio[1], target_aspect_ratio[0])
403
+
404
+
405
+ def dynamic_process_images_and_prompt(images, prompt, data_args, image_folder=None, max_tiles=None):
406
+ prompt = prompt.split(DEFAULT_IMAGE_TOKEN)
407
+ idx = 0
408
+ all_images = []
409
+ for img in images:
410
+ processed_images = process_image(img, data_args, image_folder, enable_dynamic_res=True, max_tiles=max_tiles)
411
+ all_images.append(processed_images)
412
+ prompt.insert(idx + 1, f"{DEFAULT_IMAGE_TOKEN}\n" * processed_images.shape[0])
413
+ idx += 2
414
+ prompt = "".join(prompt)
415
+ if all_images:
416
+ all_images = torch.cat(all_images)
417
+ else:
418
+ all_images = None
419
+ prompt = prompt.replace(DEFAULT_IMAGE_TOKEN, "")
420
+ return all_images, prompt
421
+
422
+
423
+ def dynamic_s2_process_images_and_prompt(images, prompt, data_args, image_folder=None):
424
+ idx = 0
425
+ all_images = []
426
+ all_block_size = []
427
+ for img in images:
428
+ processed_images, block_size = process_image(img, data_args, image_folder, enable_dynamic_s2=True)
429
+ all_images.append(processed_images)
430
+ all_block_size.append(block_size)
431
+ idx += 2
432
+ if all_images:
433
+ all_images = torch.cat(all_images)
434
+ else:
435
+ all_images = None
436
+ return all_images, all_block_size
437
+
438
+
439
+ def process_image(
440
+ image_file, data_args, image_folder, enable_dynamic_res=False, enable_dynamic_s2=False, max_tiles=None
441
+ ):
442
+ processor = data_args.image_processor
443
+ if isinstance(image_file, str):
444
+ if image_folder is not None:
445
+ image = Image.open(os.path.join(image_folder, image_file)).convert("RGB")
446
+ else:
447
+ image = Image.open(image_file).convert("RGB")
448
+ else:
449
+ # image is stored in bytearray
450
+ image = image_file
451
+ image = image.convert("RGB")
452
+ if hasattr(data_args.image_processor, "crop_size"):
453
+ # CLIP vision tower
454
+ crop_size = data_args.image_processor.crop_size
455
+ else:
456
+ # SIGLIP vision tower
457
+ assert hasattr(data_args.image_processor, "size")
458
+ crop_size = data_args.image_processor.size
459
+ if "dynamic_s2" in data_args.image_aspect_ratio and enable_dynamic_s2:
460
+ assert crop_size["height"] == crop_size["width"]
461
+ images, block_size = dynamic_s2_preprocess(
462
+ image, s2_scales=data_args.s2_scales, max_num=data_args.max_tiles, image_size=crop_size["height"]
463
+ )
464
+ images = [processor.preprocess(image, return_tensors="pt")["pixel_values"][0] for image in images]
465
+ return torch.stack(images), block_size
466
+ if "dynamic" in data_args.image_aspect_ratio and enable_dynamic_res:
467
+ assert crop_size["height"] == crop_size["width"]
468
+ if max_tiles is not None:
469
+ max_num = max_tiles
470
+ else:
471
+ max_num = data_args.max_tiles
472
+ images = dynamic_preprocess(image, min_num=data_args.min_tiles, max_num=max_num, image_size=crop_size["height"])
473
+ images = [processor.preprocess(image, return_tensors="pt")["pixel_values"][0] for image in images]
474
+ return torch.stack(images)
475
+
476
+ if data_args.image_aspect_ratio == "resize":
477
+ image = image.resize((crop_size["width"], crop_size["height"]))
478
+ if data_args.image_aspect_ratio == "pad":
479
+
480
+ def expand2square(pil_img, background_color):
481
+ width, height = pil_img.size
482
+ if width == height:
483
+ return pil_img
484
+ elif width > height:
485
+ result = Image.new(pil_img.mode, (width, width), background_color)
486
+ result.paste(pil_img, (0, (width - height) // 2))
487
+ return result
488
+ else:
489
+ result = Image.new(pil_img.mode, (height, height), background_color)
490
+ result.paste(pil_img, ((height - width) // 2, 0))
491
+ return result
492
+
493
+ image = expand2square(image, tuple(int(x * 255) for x in processor.image_mean))
494
+ image = processor.preprocess(image, return_tensors="pt")["pixel_values"][0]
495
+ else:
496
+ # Using default behavior of the vision encoder
497
+ # For CLIP, default is central crop
498
+ # For Radio, default is central crop
499
+ # For Siglip, default is resize
500
+ # For InternVIT, default is resize
501
+ image = processor.preprocess(image, return_tensors="pt")["pixel_values"][0]
502
+ return image
503
+
504
+
505
+ def process_images(images, image_processor, model_cfg, enable_dynamic_res=False, max_tiles=None):
506
+ model_cfg.image_processor = image_processor
507
+ new_images = [
508
+ process_image(image, model_cfg, None, enable_dynamic_res=enable_dynamic_res, max_tiles=max_tiles)
509
+ for image in images
510
+ ]
511
+
512
+ if all(x.shape == new_images[0].shape for x in new_images):
513
+ if len(new_images[0].shape) == 4:
514
+ new_images = torch.cat(new_images, dim=0)
515
+ elif len(new_images[0].shape) == 3:
516
+ new_images = torch.stack(new_images, dim=0)
517
+ else:
518
+ raise ValueError(f"new_images rank does not equal to 4, rank: {len(new_images[0].shape)}")
519
+ else:
520
+ raise ValueError("The shape of images in new_images is different!")
521
+ return new_images
522
+
523
+
524
+ def tokenizer_image_token(prompt, tokenizer, return_tensors=None):
525
+ return tokenizer(prompt, return_tensors=return_tensors).input_ids[0]
526
+
527
+
528
+ def is_gemma_tokenizer(tokenizer):
529
+ return "gemma" in tokenizer.__class__.__name__.lower()
530
+
531
+
532
+ def get_model_name_from_path(model_path):
533
+ model_path = model_path.strip("/")
534
+ model_paths = model_path.split("/")
535
+ if model_paths[-1].startswith("checkpoint-"):
536
+ return model_paths[-2] + "_" + model_paths[-1]
537
+ else:
538
+ return model_paths[-1]
539
+
540
+
541
+ class KeywordsStoppingCriteria(StoppingCriteria):
542
+ def __init__(self, keywords, tokenizer, input_ids):
543
+ self.keywords = keywords
544
+ self.keyword_ids = []
545
+ self.max_keyword_len = 0
546
+ for keyword in keywords:
547
+ cur_keyword_ids = tokenizer(keyword).input_ids
548
+ if len(cur_keyword_ids) > 1 and cur_keyword_ids[0] == tokenizer.bos_token_id:
549
+ cur_keyword_ids = cur_keyword_ids[1:]
550
+ if len(cur_keyword_ids) > self.max_keyword_len:
551
+ self.max_keyword_len = len(cur_keyword_ids)
552
+ self.keyword_ids.append(torch.tensor(cur_keyword_ids))
553
+ self.tokenizer = tokenizer
554
+ self.start_len = input_ids.shape[1]
555
+
556
+ def call_for_batch(self, output_ids: torch.LongTensor, scores: torch.FloatTensor, **kwargs) -> bool:
557
+ offset = min(output_ids.shape[1] - self.start_len, self.max_keyword_len)
558
+ self.keyword_ids = [keyword_id.to(output_ids.device) for keyword_id in self.keyword_ids]
559
+ for keyword_id in self.keyword_ids:
560
+ if (output_ids[0, -keyword_id.shape[0] :] == keyword_id).all():
561
+ return True
562
+ outputs = self.tokenizer.batch_decode(output_ids[:, -offset:], skip_special_tokens=True)[0]
563
+ for keyword in self.keywords:
564
+ if keyword in outputs:
565
+ return True
566
+ return False
567
+
568
+ def __call__(self, output_ids: torch.LongTensor, scores: torch.FloatTensor, **kwargs) -> bool:
569
+ outputs = []
570
+ for i in range(output_ids.shape[0]):
571
+ outputs.append(self.call_for_batch(output_ids[i].unsqueeze(0), scores))
572
+ return all(outputs)
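
Among the helpers above, `dynamic_preprocess` tiles an image into up to `max_num` square crops whose grid best matches the image's aspect ratio, optionally appending a global thumbnail, and `KeywordsStoppingCriteria` halts generation once a keyword appears. The sketch below exercises only the tiling path on a synthetic image; it assumes `mm_utils.py` is importable from the repository root and that the `llava` package is installed, since the module imports `DEFAULT_IMAGE_TOKEN` from `llava.constants`.

```python
# Standalone check of the tiling logic; requires the llava package for mm_utils' imports.
from PIL import Image

from mm_utils import dynamic_preprocess  # assumes the repository root is on PYTHONPATH

# A 2:1 synthetic image; the closest tile grid should be wider than it is tall.
img = Image.new("RGB", (1536, 768), color=(127, 127, 127))

tiles = dynamic_preprocess(img, min_num=1, max_num=12, image_size=384, use_thumbnail=True)

# Every tile is a 384x384 crop of the resized image, plus one 384x384 thumbnail at the end.
print(len(tiles), tiles[0].size, tiles[-1].size)
```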
modeling_vila.py ADDED
@@ -0,0 +1,1120 @@
1
+ import copy
2
+ import json
3
+ import logging
4
+ import math
5
+ import os
6
+ import os.path
7
+ import os.path as osp
8
+ import shutil
9
+ import warnings
10
+ from abc import ABC
11
+ from collections import OrderedDict, defaultdict, deque
12
+ from copy import deepcopy
13
+ from itertools import chain
14
+ from threading import Thread
15
+ from typing import Any, Dict, List, Optional, Tuple, Union
16
+
17
+ import torch
18
+ import torch.distributed as dist
19
+ import torch.nn as nn
20
+ import torch.nn.functional as F
21
+ import torchvision
22
+ from einops import rearrange
23
+ from PIL import Image
24
+ from transformers import (
25
+ AutoConfig,
26
+ AutoModel,
27
+ AutoProcessor,
28
+ AutoTokenizer,
29
+ GenerationConfig,
30
+ LogitsProcessor,
31
+ PretrainedConfig,
32
+ PreTrainedModel,
33
+ Qwen2Config,
34
+ Qwen2ForCausalLM,
35
+ Qwen2PreTrainedModel,
36
+ TextIteratorStreamer,
37
+ )
38
+ from transformers.modeling_outputs import CausalLMOutputWithPast
39
+ from transformers.modeling_utils import ContextManagers, no_init_weights
40
+
41
+ from .base_projector import MultimodalProjector, MultimodalProjectorConfig
42
+ from .builder import build_llm_and_tokenizer
43
+ from .configuration_vila import VILAConfig
44
+ from .constants import *
45
+ from .conversation import SeparatorStyle, default_conversation
46
+ from .media import extract_media
47
+ from .media_encoder import BasicImageEncoder, BasicVideoEncoder
48
+ from .mm_utils import process_image, process_images
49
+ from .siglip_encoder import SiglipVisionTower, SiglipVisionTowerDynamicS2, SiglipVisionTowerS2
50
+ from .tokenizer_utils import tokenize_conversation
51
+ from .utils import get_model_config
52
+
53
+
54
+ # from llava.constants import DEFAULT_IMAGE_TOKEN, IGNORE_INDEX, NUM_EXTRA_TOKENS
55
+ # quick hack for remote code
56
+ def get_pg_manager():
57
+ return None
58
+
59
+
60
+ def get_model_weights_dtype(model: nn.Module):
61
+ pass
62
+
63
+
64
+ def build_mm_projector(model_type_or_path: str, config: PretrainedConfig) -> PreTrainedModel:
65
+ if model_type_or_path is None:
66
+ return None
67
+ ## load from pretrained model
68
+ if config.resume_path:
69
+ assert os.path.exists(model_type_or_path), f"Resume mm projector path {model_type_or_path} does not exist!"
70
+ return MultimodalProjector.from_pretrained(model_type_or_path, config)
71
+ ## build from scratch
72
+ else:
73
+ mm_projector_cfg = MultimodalProjectorConfig(model_type_or_path)
74
+ mm_projector = MultimodalProjector(mm_projector_cfg, config)
75
+ return mm_projector
76
+
77
+
78
+ def check_dot_in_model_path(model_path: str):
79
+ """Check whether the model path contains a dot, which would break remote code loading."""
80
+ if osp.isdir(model_path): # local model
81
+ if "." in osp.abspath(model_path):
82
+ return True
83
+ else: # remote model
84
+ if "." in model_path:
85
+ return True
86
+ return False
87
+
88
+
89
+ def get_vila_version(model_path: str) -> str:
90
+ VERSIONS = ["vila1.5", "vila-u", "longvila", "nvila", "vila-m3"]
91
+ for version in VERSIONS:
92
+ if version in model_path.lower():
93
+ return version
94
+ return None
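A quick illustration of the version detection (the paths are hypothetical):

```python
get_vila_version("ckpts/NVILA-8B-dev")   # -> "nvila"
get_vila_version("ckpts/vila1.5-40b")    # -> "vila1.5"
get_vila_version("ckpts/other-model")    # -> None
```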
95
+
96
+
97
+ def generate_jinja_template(conv_mode: str) -> str:
98
+ if conv_mode == "vicuna_v1":
99
+ return """{% set system_prompt = "A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions." %}
100
+ {% set roles = ["USER", "ASSISTANT"] %}
101
+ {% set sep = " " %}
102
+ {% set sep2 = "</s>" %}
103
+
104
+ {{ system_prompt }}
105
+
106
+ {% for message in messages %}
107
+ {% if message['role'] == roles[0] %}
108
+ {{ roles[0] }}{{ sep }}{{ message['content'] }}{{ sep2 }}
109
+ {% else %}
110
+ {{ roles[1] }}{{ sep }}{{ message['content'] }}{{ sep2 }}
111
+ {% endif %}
112
+ {% endfor %}"""
113
+ elif conv_mode == "llama_3":
114
+ return """{% set system_prompt = "<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n\nYou are a helpful language and vision assistant. You are able to understand the visual content that the user provides, and assist the user with a variety of tasks using natural language." %}
115
+ {% set roles = ["<|start_header_id|>user<|end_header_id|>\n\n", "<|start_header_id|>assistant<|end_header_id|>\n\n"] %}
116
+ {% set sep = "<|eot_id|>" %}
117
+ {% set sep2 = "<|end_of_text|>" %}
118
+
119
+ {{ system_prompt }}
120
+
121
+ {% for message in messages %}
122
+ {% if message['role'] == 'user' %}
123
+ {{ roles[0] }}{{ message['content'] }}{{ sep }}
124
+ {% else %}
125
+ {{ roles[1] }}{{ message['content'] }}{{ sep }}
126
+ {% endif %}
127
+ {% endfor %}
128
+
129
+ {{ sep2 }}"""
130
+ elif conv_mode == "hermes_2":
131
+ return """{% set system_prompt = "<|im_start|>system\nAnswer the questions." %}
132
+ {% set roles = ["<|im_start|>user\n", "<|im_start|>assistant\n"] %}
133
+ {% set sep = "<|im_end|>" %}
134
+
135
+ {{ system_prompt }}{{ sep }}
136
+
137
+ {% for message in messages %}
138
+ {% if message['role'] == 'user' %}
139
+ {{ roles[0] }}{{ message['content'] }}{{ sep }}
140
+ {% else %}
141
+ {{ roles[1] }}{{ message['content'] }}{{ sep }}
142
+ {% endif %}
143
+ {% endfor %}"""
144
+ else:
145
+ raise NotImplementedError(f"Jinja template generation is not implemented for {conv_mode}.")
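To inspect the prompt format produced by one of these templates, a minimal rendering sketch (not part of the uploaded files; it assumes the `jinja2` package is installed and `generate_jinja_template` is importable from this module, and the message roles must match the template's role names):

```python
from jinja2 import Template

template_str = generate_jinja_template("vicuna_v1")
rendered = Template(template_str).render(
    messages=[
        {"role": "USER", "content": "What is shown in <image>?"},
        {"role": "ASSISTANT", "content": "A cat sitting on a sofa."},
    ]
)
print(rendered)
```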
146
+
147
+
148
+ def build_vision_tower(model_name_or_path: str, config: PretrainedConfig) -> PreTrainedModel:
149
+ ## skip vision tower instantiation
150
+ if model_name_or_path is None:
151
+ return None
152
+
153
+ vision_tower_arch = None
154
+ if config.resume_path and "radio" not in model_name_or_path:
155
+ assert os.path.exists(model_name_or_path), f"Resume vision tower path {model_name_or_path} does not exist!"
156
+ vision_tower_cfg = AutoConfig.from_pretrained(model_name_or_path, trust_remote_code=True)
157
+ vision_tower_arch = vision_tower_cfg.architectures[0].lower()
158
+ vision_tower_name = vision_tower_arch if vision_tower_arch is not None else model_name_or_path
159
+
160
+ use_s2 = getattr(config, "s2", False)
161
+ use_dynamic_s2 = getattr(config, "dynamic_s2", False)
162
+
163
+ if "siglip" in vision_tower_name:
164
+ if use_dynamic_s2:
165
+ vision_tower = SiglipVisionTowerDynamicS2(model_name_or_path, config)
166
+ elif use_s2:
167
+ vision_tower = SiglipVisionTowerS2(model_name_or_path, config)
168
+ else:
169
+ vision_tower = SiglipVisionTower(model_name_or_path, config)
170
+ else:
171
+ raise NotImplementedError(f"Unknown vision tower: {model_name_or_path}")
172
+
173
+ config.mm_hidden_size = (
174
+ vision_tower.config.hidden_size if not (use_s2 or use_dynamic_s2) else vision_tower.hidden_size
175
+ )
176
+ return vision_tower
177
+
178
+
179
+ class VILAPretrainedModel(PreTrainedModel):
180
+ config_class = VILAConfig
181
+ main_input_name = "input_embeds"
182
+ supports_gradient_checkpointing = True
183
+ _supports_flash_attn_2 = True
184
+
185
+ def __init__(self, config: VILAConfig, *args, **kwargs):
186
+ super().__init__(config)
187
+ self.config = config
188
+ cfgs = get_model_config(config)
189
+ if len(cfgs) == 3:
190
+ llm_cfg, vision_tower_cfg, mm_projector_cfg = cfgs
191
+ else:
192
+ raise ValueError("`llm_cfg`, `mm_projector_cfg`, and `vision_tower_cfg` must all be present in the config.")
193
+
194
+ # loading on cpu by default
195
+ device_map = kwargs.get("device_map", "cpu")
196
+ self.mm_projector = build_mm_projector(mm_projector_cfg, config)
197
+ self.vision_tower = build_vision_tower(vision_tower_cfg, config)
198
+ if "auto" in device_map or "cuda" in device_map:
199
+ self.mm_projector = self.mm_projector.cuda()
200
+ self.vision_tower = self.vision_tower.cuda()
201
+ # setting device_map="auto" automatically shards the LLM across available devices
202
+ self.llm, self.tokenizer = self.init_llm(llm_cfg, config, device_map=device_map)
203
+
204
+ self.encoders = {"image": BasicImageEncoder(self), "video": BasicVideoEncoder(self)}
205
+
206
+ self.post_config()
207
+ self.is_loaded = True
208
+
209
+ assert (
210
+ self.llm is not None or self.vision_tower is not None or self.mm_projector is not None
211
+ ), "At least one of the components must be instantiated."
212
+
213
+ @classmethod
214
+ def convert_vila_dev_ckpt_to_remote(
215
+ self,
216
+ model_path: str,
217
+ output_dir: str = None,
218
+ vila_version: str | None = None,
219
+ conv_mode: str | None = None,
220
+ *model_args,
221
+ **kwargs,
222
+ ):
223
+ # assert type(self) == VILAForCasualLM, "This method is only available for VILAForCasualLM."
224
+ from huggingface_hub import HfApi, snapshot_download
225
+
226
+ if os.path.isdir(model_path):
227
+ model_path = model_path
228
+ api = HfApi()
229
+
230
+ if check_dot_in_model_path(model_path) and output_dir is None:
231
+ raise ValueError(
232
+ f"Model path {model_path} contains a dot, which will affect the remote code loading. Please specify an output directory whose path contains no dots to fix this issue."
233
+ )
234
+ if output_dir is not None and "." in output_dir:
235
+ raise ValueError(
236
+ f"Output directory {output_dir} contains a dot, which will affect the remote code loading. Please specify a valid output directory without dots."
237
+ )
238
+ if vila_version is None:
239
+ vila_version = get_vila_version(model_path)
240
+
241
+ if api.repo_exists(model_path):
242
+ model_path = snapshot_download(model_path, local_dir=output_dir)
243
+ print("downloading HF model to", model_path)
244
+
245
+ cfg_path = os.path.join(model_path, "config.json")
246
+ config = json.load(open(cfg_path))
247
+ config["version"] = "2.0" # nvila tag
248
+ config["architectures"] = ["VILAForCasualLM"]
249
+ config["auto_map"] = {
250
+ "AutoConfig": "modeling_vila.VILAConfig",
251
+ "AutoModel": "modeling_vila.VILAForCasualLM",
252
+ "AutoModelForCausalLM": "modeling_vila.VILAForCasualLM",
253
+ }
254
+ config["model_type"] = "vila"
255
+ if vila_version in ["vila1.5", "vila-m3"]:
256
+ if conv_mode is None:
257
+ raise ValueError(f"Please specify the conversation mode for {model_path}.")
258
+ config["chat_template"] = conv_mode
259
+ jinja_template = generate_jinja_template(conv_mode)
260
+ jinja_path = os.path.join(model_path, f"{conv_mode}.jinja")
261
+ with open(jinja_path, "w") as f:
262
+ f.write(jinja_template)
263
+ json.dump(config, open(cfg_path, "w"), indent=2)
264
+ self.copy_remote_py_files(model_path)
265
+
266
+ @classmethod
267
+ def copy_remote_py_files(cls, output_dir):
268
+ ## copy .py and README files so the remote code can be loaded next time
269
+ current_file_path = os.path.abspath(__file__)
270
+ current_folder = os.path.dirname(current_file_path)
271
+ for file_name in os.listdir(current_folder):
272
+ if file_name.endswith(".py") or file_name.endswith(".jinja"):
273
+ full_file_name = os.path.join(current_folder, file_name)
274
+ if os.path.isfile(full_file_name):
275
+ shutil.copy(full_file_name, output_dir)
276
+ print("[HF remote code] copying", full_file_name, "to", output_dir)
277
+
278
+ def save_pretrained(self, output_dir, state_dict=None):
279
+ if state_dict is None:
280
+ # otherwise fetch from deepspeed
281
+ # state_dict = accelerator.get_state_dict(is_deepspeed_enabled)
282
+ state_dict = self.state_dict()
283
+
284
+ if getattr(self, "tokenizer", None):
285
+ self.tokenizer.save_pretrained(osp.join(output_dir, "llm"))
286
+
287
+ if self.get_llm():
288
+ print(f"saving llm to {osp.join(output_dir, 'llm')}")
289
+ self.llm.config._name_or_path = osp.join(output_dir, "llm")
290
+ llm_state_dict = OrderedDict({k.split("llm.")[-1]: v for k, v in state_dict.items() if "llm" in k})
291
+ self.llm.save_pretrained(os.path.join(output_dir, "llm"), state_dict=llm_state_dict)
292
+ self.config.llm_cfg = self.llm.config
293
+
294
+ if self.get_vision_tower():
295
+ print(f"saving vision_tower to {osp.join(output_dir, 'vision_tower')}")
296
+ self.vision_tower.config._name_or_path = osp.join(output_dir, "vision_tower")
297
+ vision_tower_state_dict = OrderedDict(
298
+ {k.split("vision_tower.vision_tower.")[-1]: v for k, v in state_dict.items() if "vision_tower" in k}
299
+ )
300
+ self.vision_tower.vision_tower.save_pretrained(
301
+ os.path.join(output_dir, "vision_tower"),
302
+ state_dict=vision_tower_state_dict,
303
+ )
304
+ self.vision_tower.image_processor.save_pretrained(os.path.join(output_dir, "vision_tower"))
305
+ self.config.vision_tower_cfg = self.vision_tower.config
306
+ if hasattr(self.config.vision_tower_cfg, "auto_map"):
307
+ if "radio" not in self.get_vision_tower().__class__.__name__.lower():
308
+ delattr(self.config.vision_tower_cfg, "auto_map")
309
+
310
+ if self.get_mm_projector():
311
+ print(f"saving mm_projector to {osp.join(output_dir, 'mm_projector')}")
312
+ self.mm_projector.config._name_or_path = osp.join(output_dir, "mm_projector")
313
+ mm_projector_state_dict = OrderedDict(
314
+ {k.split("mm_projector.")[-1]: v for k, v in state_dict.items() if "mm_projector" in k}
315
+ )
316
+ self.mm_projector.save_pretrained(
317
+ os.path.join(output_dir, "mm_projector"),
318
+ state_dict=mm_projector_state_dict,
319
+ )
320
+ self.config.mm_projector_cfg = self.mm_projector.config
321
+
322
+ ## update and save top-level config
323
+ self.config._name_or_path = output_dir
324
+ self.config.architectures = [self.__class__.__name__]
325
+ self.config.save_pretrained(output_dir)
326
+
327
+ ## copy .py and README files so the remote code can be loaded next time
328
+ self.copy_remote_py_files(output_dir)
329
+
330
+ @classmethod
331
+ def from_pretrained(
332
+ cls,
333
+ pretrained_model_name_or_path: Optional[str] = None,
334
+ *model_args,
335
+ config: Optional[Union[PretrainedConfig, str, os.PathLike]] = None,
336
+ cache_dir: Optional[Union[str, os.PathLike]] = None,
337
+ ignore_mismatched_sizes: bool = False,
338
+ force_download: bool = False,
339
+ local_files_only: bool = False,
340
+ token: Optional[Union[str, bool]] = None,
341
+ revision: str = "main",
342
+ use_safetensors: Optional[bool] = None,
343
+ weights_only: bool = True,
344
+ **kwargs,
345
+ ):
346
+ config = AutoConfig.from_pretrained(pretrained_model_name_or_path, trust_remote_code=True)
347
+ return cls._from_config(config, **kwargs)
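For context, a checkpoint converted by `convert_vila_dev_ckpt_to_remote` is normally loaded through the `auto_map` entries written above. A minimal loading sketch (the path is a placeholder and the extra kwargs are assumptions, not a guaranteed API):

```python
from transformers import AutoConfig, AutoModel

model_path = "path/to/converted-vila-checkpoint"  # placeholder
config = AutoConfig.from_pretrained(model_path, trust_remote_code=True)
model = AutoModel.from_pretrained(model_path, trust_remote_code=True, device_map="auto")
```

The `from_pretrained` override above rebuilds the model from its config; sub-module weights are resolved by the builder functions (`build_llm_and_tokenizer`, `build_vision_tower`, `build_mm_projector`).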
348
+
349
+ def init_llm(self, llm_config, config, *args, **kwargs):
350
+ self.llm, self.tokenizer = build_llm_and_tokenizer(llm_config, config, *args, **kwargs)
351
+ # hard coded for NVILA
352
+ # variables for XGrammar
353
+ # print("DEBUG", len(self.tokenizer.added_tokens_encoder.keys()), self.tokenizer.added_tokens_encoder.keys())
354
+ NUM_EXTRA_TOKENS = len(self.tokenizer.added_tokens_encoder.keys())
355
+
356
+ # TODO: SENTINEL_TOKEN is not added, need to check with Zhijian
357
+ self.vocab_size = self.tokenizer.vocab_size + NUM_EXTRA_TOKENS
358
+ # XGrammar tokenizer and grammar compiler
359
+ # lazy init only when specified json output during inference
360
+ self.grammar_compiler = None
361
+
362
+ self.llm.resize_token_embeddings(len(self.tokenizer))
363
+ return self.llm, self.tokenizer
364
+
365
+ def post_config(self):
366
+ ######################################################################
367
+ # TODO: need to check dtype with jason
368
+ self.llm = self.llm.to(torch.float16)
369
+ self.mm_projector = self.mm_projector.to(torch.float16)
370
+ self.vision_tower = self.vision_tower.to(torch.float16)
371
+ ######################################################################
372
+ self.training = self.llm.training
373
+ ## configuration
374
+ if getattr(self.config, "llm_cfg", None) is None:
375
+ self.config.llm_cfg = self.llm.config
376
+ if getattr(self.config, "vision_tower_cfg", None) is None:
377
+ self.config.vision_tower_cfg = self.vision_tower.config
378
+ if getattr(self.config, "mm_projector_cfg", None) is None:
379
+ self.config.mm_projector_cfg = self.mm_projector.config
380
+
381
+ def get_llm(self):
382
+ llm = getattr(self, "llm", None)
383
+ if type(llm) is list:
384
+ llm = llm[0]
385
+ return llm
386
+
387
+ def get_lm_head(self):
388
+ lm_head = getattr(self.get_llm(), "lm_head", None)
389
+ return lm_head
390
+
391
+ def get_vision_tower(self):
392
+ vision_tower = getattr(self, "vision_tower", None)
393
+ if type(vision_tower) is list:
394
+ vision_tower = vision_tower[0]
395
+ return vision_tower
396
+
397
+ def get_mm_projector(self):
398
+ mm_projector = getattr(self, "mm_projector", None)
399
+ if type(mm_projector) is list:
400
+ mm_projector = mm_projector[0]
401
+ return mm_projector
402
+
403
+ def freezed_module_patch(self):
404
+ """
405
+ Hugging Face calls model.train() at each training step. To keep the expected behavior of modules such as dropout and batchnorm, we call model.eval() on the frozen modules.
406
+ """
407
+ if self.training:
408
+ if self.get_llm() and not getattr(self.config, "tune_language_model", False):
409
+ pass
410
+ # logging.warning("Caution: Your LLM is currently in training mode, ensuring accurate gradient computation. Please be vigilant, particularly regarding BatchNorm and Dropout operations.")
411
+ if self.get_vision_tower() and not getattr(self.config, "tune_vision_tower", False):
412
+ self.get_vision_tower().eval()
413
+ if self.get_mm_projector() and not getattr(self.config, "tune_mm_projector", False):
414
+ self.get_mm_projector().eval()
415
+
416
+
417
+ class VILAForCasualLM(VILAPretrainedModel):
418
+ def __init__(self, config: VILAConfig, *args, **kwargs):
419
+ super().__init__(config, *args, **kwargs)
420
+
421
+ def merge_features_for_dynamic_s2(self, image_features, block_sizes):
422
+ scales = self.get_vision_tower().scales
423
+ resize_output_to_scale_idx = self.get_vision_tower().resize_output_to_scale_idx
424
+
425
+ image_features_each_image = []
426
+ new_block_sizes = []
427
+ block_cnt = 0
428
+ for block_size_each_image in block_sizes:
429
+ if block_size_each_image is None:
430
+ cur_features = image_features[block_cnt : block_cnt + 1]
431
+ cur_features = rearrange(cur_features, "1 (h w) c -> 1 c h w", h=int(cur_features.shape[1] ** 0.5))
432
+ cur_features = cur_features.repeat(1, len(scales), 1, 1)
433
+ image_features_each_image.append(cur_features)
434
+ new_block_sizes.append((1, 1))
435
+ block_cnt += 1
436
+ else:
437
+ cur_features_each_scale = []
438
+ for scale in scales[:-1]:
439
+ num_blocks_this_scale = (scale // scales[0]) ** 2
440
+ cur_features_each_scale.append(
441
+ self.merge_chessboard(
442
+ image_features[block_cnt : block_cnt + num_blocks_this_scale],
443
+ num_split_h=scale // scales[0],
444
+ num_split_w=scale // scales[0],
445
+ )
446
+ ) # 1 * C * H * W
447
+ block_cnt += num_blocks_this_scale
448
+ num_blocks_last_scale = block_size_each_image[0] * block_size_each_image[1]
449
+ cur_features_each_scale.append(
450
+ self.merge_chessboard(
451
+ image_features[block_cnt : block_cnt + num_blocks_last_scale],
452
+ num_split_h=block_size_each_image[0],
453
+ num_split_w=block_size_each_image[1],
454
+ )
455
+ ) # 1 * C * H * W
456
+ block_cnt += num_blocks_last_scale
457
+
458
+ # resize and concat features from different scales
459
+ output_size = cur_features_each_scale[resize_output_to_scale_idx].shape[-2:]
460
+ cur_features = torch.cat(
461
+ [
462
+ F.interpolate(cur_features_each_scale[i].to(torch.float32), size=output_size, mode="area").to(
463
+ cur_features_each_scale[i].dtype
464
+ )
465
+ for i in range(len(cur_features_each_scale))
466
+ ],
467
+ dim=1,
468
+ )
469
+ # cur_features = rearrange(cur_features, "1 c h w -> (h w) c")
470
+
471
+ image_features_each_image.append(cur_features)
472
+
473
+ if resize_output_to_scale_idx == len(scales) - 1 or resize_output_to_scale_idx == -1:
474
+ new_block_sizes.append(block_size_each_image)
475
+ else:
476
+ new_block_sizes.append(
477
+ (
478
+ scales[resize_output_to_scale_idx] // scales[0],
479
+ scales[resize_output_to_scale_idx] // scales[0],
480
+ )
481
+ )
482
+
483
+ assert block_cnt == len(image_features)
484
+
485
+ return image_features_each_image, new_block_sizes
486
+
487
+ def encode_images(self, images, block_sizes: Optional[List[Optional[Tuple[int, int]]]] = None):
488
+ if block_sizes is None:
489
+ block_sizes = [None] * len(images)
490
+ if getattr(self.config, "dynamic_s2", False):
491
+ image_features = self.get_vision_tower()(images)
492
+ image_features, new_block_sizes = self.merge_features_for_dynamic_s2(image_features, block_sizes)
493
+
494
+ image_features = [
495
+ self.split_chessboard(x, block_size[0], block_size[1])
496
+ for x, block_size in zip(image_features, new_block_sizes)
497
+ ] # list of B * C * H * W tensors
498
+ image_features = torch.cat(
499
+ [rearrange(x, "b c h w -> b (h w) c") for x in image_features], dim=0
500
+ ) # B * N * C
501
+ image_features = self.get_mm_projector()(image_features)
502
+ image_features = list(
503
+ image_features.split([block_size[0] * block_size[1] for block_size in new_block_sizes], dim=0)
504
+ )
505
+ image_features = [
506
+ self.merge_chessboard(x, block_size[0], block_size[1])
507
+ for x, block_size in zip(image_features, new_block_sizes)
508
+ ] # list of 1 * C * H * W tensors
509
+ image_features = [rearrange(x, "1 c h w -> (h w) c") for x in image_features] # list of N * C tensors
510
+ if all([feature.shape[0] == image_features[0].shape[0] for feature in image_features]):
511
+ image_features = torch.stack(image_features, dim=0)
512
+ else:
513
+ image_features = self.get_vision_tower()(images)
514
+ image_features = self.get_mm_projector()(image_features)
515
+ return image_features
516
+
517
+ def _embed(
518
+ self,
519
+ input_ids: torch.Tensor,
520
+ media: Dict[str, List[torch.Tensor]],
521
+ media_config: Dict[str, Dict[str, Any]],
522
+ labels: Optional[torch.Tensor],
523
+ attention_mask: Optional[torch.Tensor],
524
+ ) -> Tuple[torch.Tensor, torch.Tensor, torch.Tensor]:
525
+ labels = labels if labels is not None else torch.full_like(input_ids, IGNORE_INDEX)
526
+ attention_mask = attention_mask if attention_mask is not None else torch.ones_like(input_ids, dtype=torch.bool)
527
+
528
+ # PROCESS_GROUP_MANAGER = get_pg_manager()
529
+ PROCESS_GROUP_MANAGER = None
530
+ if PROCESS_GROUP_MANAGER is not None:
531
+ for name in media:
532
+ self.encoders[name].end_tokens = None
533
+
534
+ # Extract text and media embeddings
535
+ text_embeds = self.llm.model.embed_tokens(input_ids)
536
+ media_embeds = self.__embed_media_tokens(media, media_config)
537
+
538
+ # This is a workaround to make sure the dummy embeddings are consumed
539
+ while media_embeds.get("dummy"):
540
+ dummy_embed = media_embeds["dummy"].popleft()
541
+ text_embeds += torch.sum(dummy_embed) * 0
542
+
543
+ # Remove padding
544
+ batch_size = labels.shape[0]
545
+ text_embeds = [text_embeds[k][attention_mask[k]] for k in range(batch_size)]
546
+ labels = [labels[k][attention_mask[k]] for k in range(batch_size)]
547
+
548
+ # Build inverse mapping from token ID to media name
549
+ media_tokens = {}
550
+ for name, token_id in self.tokenizer.media_token_ids.items():
551
+ media_tokens[token_id] = name
552
+
553
+ # Fuse text and media embeddings
554
+ inputs_m, labels_m = [], []
555
+ for k in range(batch_size):
556
+ inputs_mk, labels_mk = [], []
557
+ pos = 0
558
+ while pos < len(labels[k]):
559
+ if input_ids[k][pos].item() in media_tokens:
560
+ end = pos + 1
561
+ name = media_tokens[input_ids[k][pos].item()]
562
+ input = media_embeds[name].popleft()
563
+ label = torch.full([input.shape[0]], IGNORE_INDEX, device=labels[k].device, dtype=labels[k].dtype)
564
+ else:
565
+ end = pos
566
+ while end < len(labels[k]) and input_ids[k][end].item() not in media_tokens:
567
+ end += 1
568
+ input = text_embeds[k][pos:end]
569
+ label = labels[k][pos:end]
570
+ inputs_mk.append(input)
571
+ labels_mk.append(label)
572
+ pos = end
573
+ inputs_m.append(torch.cat(inputs_mk, dim=0))
574
+ labels_m.append(torch.cat(labels_mk, dim=0))
575
+ inputs, labels = inputs_m, labels_m
576
+
577
+ # Check if all media embeddings are consumed
578
+ for name in media_embeds:
579
+ if media_embeds[name]:
580
+ raise ValueError(f"Not all {name} embeddings are consumed!")
581
+
582
+ # Truncate sequences to `model_max_length` as media embeddings are inserted
583
+ inputs, labels = self.__truncate_sequence(inputs, labels)
584
+
585
+ # Pad sequences to the longest one in the batch
586
+ return self.__batchify_sequence(inputs, labels)
587
+
588
+ def __embed_media_tokens(
589
+ self,
590
+ media: Dict[str, List[torch.Tensor]],
591
+ media_config: Dict[str, Dict[str, Any]],
592
+ ) -> Dict[str, List[torch.Tensor]]:
593
+ embeds = defaultdict(deque)
594
+ for name in media:
595
+ if self.training:
596
+ # Gather metainfo of media objects from all ranks
597
+ info = [{"shape": tensor.shape, "dtype": tensor.dtype} for tensor in media.get(name, [])]
598
+ infos = list(chain(*distributed.all_gather(info)))
599
+
600
+ # The entire batch does not contain any media objects of this type.
601
+ if not infos:
602
+ continue
603
+
604
+ # Create a dummy tensor to ensure the encoder is called, otherwise the training will hang.
605
+ if media.get(name) is None or len(media[name]) == 0:
606
+ dummy = torch.zeros(infos[0]["shape"], dtype=infos[0]["dtype"], device=self.device)
607
+ embeds["dummy"].extend(self.encoders[name]([dummy], media_config[name]))
608
+ continue
609
+ embeds[name] = deque(self.encoders[name](media[name], media_config[name]))
610
+ return embeds
611
+
612
+ def __truncate_sequence(
613
+ self, inputs: List[torch.Tensor], labels: List[torch.Tensor]
614
+ ) -> Tuple[torch.Tensor, torch.Tensor]:
615
+ if self.training and any(len(input) > self.tokenizer.model_max_length for input in inputs):
616
+ warnings.warn(f"Truncating sequences to `model_max_length` ({self.tokenizer.model_max_length}).")
617
+ inputs = [input[: self.tokenizer.model_max_length] for input in inputs]
618
+ labels = [label[: self.tokenizer.model_max_length] for label in labels]
619
+ return inputs, labels
620
+
621
+ def __batchify_sequence(
622
+ self, inputs: List[torch.Tensor], labels: List[torch.Tensor]
623
+ ) -> Tuple[torch.Tensor, torch.Tensor, torch.Tensor]:
624
+ batch_size = len(inputs)
625
+ device = inputs[0].device
626
+ hidden_size = inputs[0].shape[1]
627
+ max_length = max(inputs[k].shape[0] for k in range(batch_size))
628
+ attention_mask = torch.ones((batch_size, max_length), dtype=torch.bool, device=device)
629
+
630
+ inputs_p, labels_p = [], []
631
+ for k in range(batch_size):
632
+ size_pk = max_length - inputs[k].shape[0]
633
+ inputs_pk = torch.zeros((size_pk, hidden_size), dtype=inputs[k].dtype, device=device)
634
+ labels_pk = torch.full((size_pk,), IGNORE_INDEX, dtype=labels[k].dtype, device=device)
635
+ if self.tokenizer.padding_side == "right":
636
+ attention_mask[k, inputs[k].shape[0] :] = False
637
+ inputs_pk = torch.cat([inputs[k], inputs_pk], dim=0)
638
+ labels_pk = torch.cat([labels[k], labels_pk], dim=0)
639
+ else:
640
+ attention_mask[k, : -inputs[k].shape[0]] = False
641
+ inputs_pk = torch.cat([inputs_pk, inputs[k]], dim=0)
642
+ labels_pk = torch.cat([labels_pk, labels[k]], dim=0)
643
+ inputs_p.append(inputs_pk)
644
+ labels_p.append(labels_pk)
645
+
646
+ inputs = torch.stack(inputs_p, dim=0)
647
+ labels = torch.stack(labels_p, dim=0)
648
+ return inputs, labels, attention_mask
649
+
650
+ def repack_multimodal_data(self, inputs_embeds, attention_mask, position_ids, labels):
651
+ # Handle sequence parallelism
652
+ PROCESS_GROUP_MANAGER = get_pg_manager()
653
+
654
+ # We do re-sharding instead of packing here to ensure the sequence length is the same across all ranks.
655
+ if PROCESS_GROUP_MANAGER is not None:
656
+ sp_degree = PROCESS_GROUP_MANAGER.sp_degree
657
+ sp_rank = PROCESS_GROUP_MANAGER.sp_rank
658
+ sp_group = PROCESS_GROUP_MANAGER.sp_pg
659
+ ring_degree = PROCESS_GROUP_MANAGER.ring_degree
660
+ ring_rank = PROCESS_GROUP_MANAGER.ring_rank
661
+ ring_type = PROCESS_GROUP_MANAGER.ring_type
662
+ ulysses_degree = PROCESS_GROUP_MANAGER.ulysses_degree
663
+ ulysses_rank = PROCESS_GROUP_MANAGER.ulysses_rank
664
+
665
+ bs, shard_seqlen = position_ids.shape
666
+ sp_seq_len = [torch.zeros(1, dtype=torch.int64, device=position_ids.device) for _ in range(sp_degree)]
667
+ dist.all_gather(sp_seq_len, torch.tensor(shard_seqlen, device=position_ids.device), group=sp_group)
668
+ sp_seq_len_cat = torch.cat(sp_seq_len, dim=0)
669
+
670
+ if sp_rank == 0:
671
+ original_start_id = 0
672
+ else:
673
+ original_start_id = torch.sum(sp_seq_len_cat[:sp_rank]).item()
674
+ original_end_id = torch.sum(sp_seq_len_cat[: sp_rank + 1]).item()
675
+
676
+ # Gather attention_mask, position_ids, labels and input_embeds
677
+ all_inputs_embeds = torch.zeros(
678
+ bs,
679
+ torch.sum(sp_seq_len_cat),
680
+ inputs_embeds.shape[-1],
681
+ dtype=inputs_embeds.dtype,
682
+ device=inputs_embeds.device,
683
+ ).contiguous()
684
+ all_inputs_embeds[:, original_start_id:original_end_id, :] += inputs_embeds
685
+ dist.barrier(group=sp_group)
686
+ dist.all_reduce(all_inputs_embeds, group=sp_group)
687
+ dist.barrier(group=sp_group)
688
+
689
+ attention_mask_list = [
690
+ torch.zeros((bs, sp_seq_len[i]), dtype=attention_mask.dtype, device=attention_mask.device)
691
+ for i in range(sp_degree)
692
+ ]
693
+ position_ids_list = [
694
+ torch.zeros((bs, sp_seq_len[i]), dtype=position_ids.dtype, device=position_ids.device)
695
+ for i in range(sp_degree)
696
+ ]
697
+ labels_list = [
698
+ torch.zeros((bs, sp_seq_len[i]), dtype=labels.dtype, device=labels.device) for i in range(sp_degree)
699
+ ]
700
+
701
+ dist.all_gather(attention_mask_list, attention_mask, group=sp_group)
702
+ dist.all_gather(position_ids_list, position_ids, group=sp_group)
703
+ dist.all_gather(labels_list, labels, group=sp_group)
704
+
705
+ effective_seqlen_list = [attention_mask_list[i].sum(dim=-1) for i in range(sp_degree)]
706
+ effective_seqlen = torch.stack(effective_seqlen_list, dim=-1)
707
+ effective_seqlen_batch_list = torch.unbind(effective_seqlen, dim=0)
708
+
709
+ global_attention_mask_list = []
710
+ global_position_ids_list = []
711
+ global_labels_list = []
712
+ global_inputs_embeds_list = []
713
+ for i in range(bs):
714
+ global_attention_mask_batch_list = []
715
+ global_position_ids_batch_list = []
716
+ global_labels_batch_list = []
717
+ global_inputs_embeds_batch_list = []
718
+ for j in range(sp_degree):
719
+ eff_len = effective_seqlen_batch_list[i][j]
720
+ prev_len = torch.sum(sp_seq_len_cat[:j]).item() if j > 0 else 0
721
+
722
+ global_attention_mask_batch_list.append(attention_mask_list[j][i, :eff_len])
723
+ global_position_ids_batch_list.append(position_ids_list[j][i, :eff_len])
724
+ global_labels_batch_list.append(labels_list[j][i, :eff_len])
725
+ global_inputs_embeds_batch_list.append(all_inputs_embeds[i, prev_len : prev_len + eff_len, :])
726
+ global_attention_mask_list.append(torch.cat(global_attention_mask_batch_list, dim=0))
727
+ global_position_ids_list.append(torch.cat(global_position_ids_batch_list, dim=0))
728
+ global_labels_list.append(torch.cat(global_labels_batch_list, dim=0))
729
+ global_inputs_embeds_list.append(torch.cat(global_inputs_embeds_batch_list, dim=0))
730
+
731
+ global_attention_mask = torch.nn.utils.rnn.pad_sequence(
732
+ global_attention_mask_list, batch_first=True, padding_value=False
733
+ )
734
+ global_position_ids = torch.nn.utils.rnn.pad_sequence(
735
+ global_position_ids_list, batch_first=True, padding_value=-1
736
+ )
737
+ global_labels = torch.nn.utils.rnn.pad_sequence(
738
+ global_labels_list, batch_first=True, padding_value=IGNORE_INDEX
739
+ )
740
+ global_inputs_embeds = torch.nn.utils.rnn.pad_sequence(
741
+ global_inputs_embeds_list, batch_first=True, padding_value=0
742
+ )
743
+
744
+ # Re-shard the inputs
745
+ if ring_degree > 1:
746
+ total_effective_seqlen = torch.sum(effective_seqlen, dim=1)
747
+ new_seqlen_per_rank = total_effective_seqlen // sp_degree
748
+ assert torch.all(
749
+ total_effective_seqlen % sp_degree == 0
750
+ ), "total_effective_seqlen must be divisible by sp_degree"
751
+
752
+ max_new_seqlen = torch.max(new_seqlen_per_rank).item()
753
+
754
+ new_attention_mask = torch.zeros(
755
+ (bs, max_new_seqlen), dtype=global_attention_mask.dtype, device=global_attention_mask.device
756
+ )
757
+ new_position_ids = torch.zeros(
758
+ (bs, max_new_seqlen), dtype=global_position_ids.dtype, device=global_position_ids.device
759
+ )
760
+ new_labels = torch.full(
761
+ (bs, max_new_seqlen), IGNORE_INDEX, dtype=global_labels.dtype, device=global_labels.device
762
+ )
763
+ new_inputs_embeds = torch.zeros(
764
+ (bs, max_new_seqlen, global_inputs_embeds.shape[-1]),
765
+ dtype=global_inputs_embeds.dtype,
766
+ device=global_inputs_embeds.device,
767
+ )
768
+
769
+ if ring_type == "ring_varlen":
770
+ for i in range(bs):
771
+ start_idx = new_seqlen_per_rank[i] * sp_rank
772
+ end_idx = start_idx + new_seqlen_per_rank[i]
773
+ new_attention_mask[i, : new_seqlen_per_rank[i]] = global_attention_mask[i, start_idx:end_idx]
774
+ new_position_ids[i, : new_seqlen_per_rank[i]] = global_position_ids[i, start_idx:end_idx]
775
+ new_labels[i, : new_seqlen_per_rank[i]] = global_labels[i, start_idx:end_idx]
776
+ new_inputs_embeds[i, : new_seqlen_per_rank[i], :] = global_inputs_embeds[
777
+ i, start_idx:end_idx, :
778
+ ]
779
+ elif ring_type == "zigzag_ring_varlen":
780
+ chunk_size = total_effective_seqlen // (2 * sp_degree)
781
+ for i in range(bs):
782
+ # Zigzag pattern indices
783
+ if sp_degree == ring_degree:
784
+ forward_rank_idx = sp_rank
785
+ backward_rank_idx = 2 * sp_degree - sp_rank - 1
786
+ else:
787
+ ulysses_offset = ulysses_rank * ring_degree * 2
788
+ forward_rank_idx = ring_rank + ulysses_offset
789
+ backward_rank_idx = sp_degree - ring_rank - 1 + ulysses_offset
790
+
791
+ # Calculate start and end indices for the forward and backward zigzag
792
+ start_idx_fwd = forward_rank_idx * chunk_size[i]
793
+ end_idx_fwd = start_idx_fwd + chunk_size[i]
794
+
795
+ start_idx_bwd = backward_rank_idx * chunk_size[i]
796
+ end_idx_bwd = start_idx_bwd + chunk_size[i]
797
+
798
+ # Fill new tensors with zigzag data
799
+ new_attention_mask[i, : chunk_size[i]] = global_attention_mask[i, start_idx_fwd:end_idx_fwd]
800
+ new_attention_mask[i, chunk_size[i] : 2 * chunk_size[i]] = global_attention_mask[
801
+ i, start_idx_bwd:end_idx_bwd
802
+ ]
803
+
804
+ new_position_ids[i, : chunk_size[i]] = global_position_ids[i, start_idx_fwd:end_idx_fwd]
805
+ new_position_ids[i, chunk_size[i] : 2 * chunk_size[i]] = global_position_ids[
806
+ i, start_idx_bwd:end_idx_bwd
807
+ ]
808
+
809
+ new_labels[i, : chunk_size[i]] = global_labels[i, start_idx_fwd:end_idx_fwd]
810
+ new_labels[i, chunk_size[i] : 2 * chunk_size[i]] = global_labels[i, start_idx_bwd:end_idx_bwd]
811
+
812
+ new_inputs_embeds[i, : chunk_size[i], :] = global_inputs_embeds[i, start_idx_fwd:end_idx_fwd, :]
813
+ new_inputs_embeds[i, chunk_size[i] : 2 * chunk_size[i], :] = global_inputs_embeds[
814
+ i, start_idx_bwd:end_idx_bwd, :
815
+ ]
816
+ else:
817
+ raise ValueError(f"Invalid ring_type: {ring_type}")
818
+ else:
819
+ global_seq_len = global_attention_mask.shape[-1]
820
+ seq_len_sharded = global_seq_len // sp_degree
821
+ start_idx_reshard = seq_len_sharded * sp_rank
822
+ end_idx_reshard = start_idx_reshard + seq_len_sharded if sp_rank < sp_degree - 1 else global_seq_len
823
+
824
+ new_attention_mask = torch.narrow(
825
+ global_attention_mask, 1, start_idx_reshard, end_idx_reshard - start_idx_reshard
826
+ )
827
+ new_position_ids = torch.narrow(
828
+ global_position_ids, 1, start_idx_reshard, end_idx_reshard - start_idx_reshard
829
+ )
830
+ new_labels = torch.narrow(global_labels, 1, start_idx_reshard, end_idx_reshard - start_idx_reshard)
831
+ new_inputs_embeds = torch.narrow(
832
+ global_inputs_embeds, 1, start_idx_reshard, end_idx_reshard - start_idx_reshard
833
+ )
834
+
835
+ return new_inputs_embeds, new_attention_mask, new_position_ids, new_labels
836
+
837
+ device = inputs_embeds.device
838
+ batch_size = inputs_embeds.shape[0]
839
+ seqlens = [attention_mask[k].sum().item() for k in range(batch_size)]
840
+
841
+ # Pack all sequences together
842
+ inputs_embeds_p = [inputs_embeds[k][attention_mask[k]] for k in range(batch_size)]
843
+ attention_mask_p = [torch.ones(seqlens[k], dtype=torch.int, device=device) for k in range(batch_size)]
844
+ position_ids_p = [torch.arange(seqlens[k], dtype=torch.int, device=device) for k in range(batch_size)]
845
+ labels_p = [labels[k][attention_mask[k]] for k in range(batch_size)]
846
+
847
+ # Add one dummy token at the end of the packed sequence to ensure that `_get_unpacked_data` will be called
848
+ inputs_embeds_p.append(torch.zeros(1, inputs_embeds.shape[-1], dtype=inputs_embeds.dtype, device=device))
849
+ attention_mask_p.append(torch.tensor([0], dtype=torch.int, device=device))
850
+ position_ids_p.append(torch.tensor([0], dtype=torch.int, device=device))
851
+ labels_p.append(torch.tensor([IGNORE_INDEX], dtype=torch.int, device=device))
852
+
853
+ # Mask the first token of each sequence to avoid contamination
854
+ for label in labels_p:
855
+ label[0] = IGNORE_INDEX
856
+
857
+ # Batch the data
858
+ inputs_embeds_p = torch.cat(inputs_embeds_p, dim=0).unsqueeze(0)
859
+ attention_mask_p = torch.cat(attention_mask_p, dim=0).unsqueeze(0)
860
+ position_ids_p = torch.cat(position_ids_p, dim=0).unsqueeze(0)
861
+ labels_p = torch.cat(labels_p, dim=0).unsqueeze(0)
862
+
863
+ if hasattr(
864
+ self, "pad_to_multiple_of"
865
+ ): # related to quantization, please refer to ModelArguments for more information.
866
+ assert len(labels_p.shape) == 2
867
+ batch_size, max_length, cur_length = labels_p.shape[0], labels_p.shape[1], labels_p.shape[1]
868
+ hidden_size = inputs_embeds_p.shape[-1]
869
+
870
+ if max_length % self.pad_to_multiple_of != 0:
871
+ max_length = ((max_length // self.pad_to_multiple_of) + 1) * self.pad_to_multiple_of
872
+ difference = max_length - cur_length
873
+
874
+ inputs_embeds_p = torch.cat(
875
+ (
876
+ inputs_embeds_p,
877
+ torch.full((batch_size, difference, hidden_size), self.llm.pad_token_id).to(inputs_embeds_p),
878
+ ),
879
+ dim=1,
880
+ )
881
+ labels_p = torch.cat((labels_p, torch.full((batch_size, difference), IGNORE_INDEX).to(labels_p)), dim=1)
882
+ attention_mask_p = torch.cat(
883
+ (
884
+ attention_mask_p,
885
+ torch.zeros((batch_size, difference), dtype=torch.bool).to(attention_mask_p),
886
+ ),
887
+ dim=1,
888
+ )
889
+ position_ids_p = torch.cat(
890
+ (position_ids_p, torch.full((batch_size, difference), -1).to(position_ids_p)), dim=1
891
+ )
892
+
893
+ return inputs_embeds_p, attention_mask_p, position_ids_p, labels_p
894
+
895
+ def get_xgr_logits_processor(self, response_format) -> List[LogitsProcessor]:
896
+ raise NotImplementedError("This method is not implemented for VILA model.")
897
+ # Convert response format to logits processor
898
+ import xgrammar as xgr
899
+
900
+ logging.info("[XGrammar] Compiling grammar for constrained output")
901
+
902
+ if self.grammar_compiler is None:
903
+ # logging.info(f"[XGrammar] {self.tokenizer}, {self.tokenizer.vocab_size}, {self.vocab_size}")
904
+ self.grammar_compiler = xgr.GrammarCompiler(
905
+ xgr.TokenizerInfo.from_huggingface(self.tokenizer, vocab_size=self.vocab_size)
906
+ )
907
+
908
+ if response_format.type == "json_schema":
909
+ compiled_grammar = self.grammar_compiler.compile_json_schema(
910
+ response_format.json_schema.schema_,
911
+ indent=2,
912
+ )
913
+ else:
914
+ compiled_grammar = self.grammar_compiler.compile_builtin_json_grammar()
915
+
916
+ return [xgr.contrib.hf.LogitsProcessor(compiled_grammar)]
917
+
918
+ def forward(
919
+ self,
920
+ input_ids: torch.LongTensor = None,
921
+ media: Optional[Dict[str, List[torch.Tensor]]] = None,
922
+ images: Optional[torch.FloatTensor] = None,
923
+ media_config: Optional[List] = None,
924
+ attention_mask: Optional[torch.Tensor] = None,
925
+ position_ids: Optional[torch.LongTensor] = None,
926
+ past_key_values: Optional[List[torch.FloatTensor]] = None,
927
+ inputs_embeds: Optional[torch.FloatTensor] = None,
928
+ labels: Optional[torch.LongTensor] = None,
929
+ packing: bool = True,
930
+ force_packing: bool = False,
931
+ seqlens_in_batch: Optional[torch.LongTensor] = None,
932
+ dpo_forward: bool = False,
933
+ **kwargs,
934
+ ) -> Union[Tuple, CausalLMOutputWithPast]:
935
+ self.freezed_module_patch()
936
+
937
+ if images is not None:
938
+ if media is not None:
939
+ raise ValueError("Both 'media' and 'images' are provided. Please provide only one.")
940
+ print("The 'images' argument is deprecated. Please use 'media' instead.")
941
+ media = {"image": images}
942
+
943
+ if media_config is None:
944
+ media_config = defaultdict(dict)
945
+
946
+ if inputs_embeds is None:
947
+ inputs_embeds, labels, attention_mask = self._embed(input_ids, media, media_config, labels, attention_mask)
948
+
949
+ if force_packing or (packing and self.training and not dpo_forward):
950
+ if seqlens_in_batch is None:
951
+ seqlens_in_batch = torch.sum(attention_mask, dim=1)
952
+ set_seqlens_in_batch(seqlens_in_batch)
953
+
954
+ (inputs_embeds, attention_mask, position_ids, labels) = self.repack_multimodal_data(
955
+ inputs_embeds, attention_mask, position_ids, labels
956
+ )
957
+
958
+ outputs = self.llm(
959
+ inputs_embeds=inputs_embeds,
960
+ attention_mask=attention_mask,
961
+ position_ids=position_ids,
962
+ past_key_values=past_key_values,
963
+ labels=labels,
964
+ **kwargs,
965
+ )
966
+
967
+ if self.training and getattr(self.config, "time_token_ids", []):
968
+ outputs.loss = soft_cross_entropy(
969
+ outputs.logits,
970
+ labels,
971
+ soft_tokens=self.config.time_token_ids,
972
+ std=self.config.soft_ce_std,
973
+ )
974
+
975
+ if dpo_forward:
976
+ return outputs.logits, labels
977
+
978
+ return outputs
979
+
980
+ @torch.inference_mode()
981
+ def generate(
982
+ self,
983
+ input_ids: Optional[torch.FloatTensor] = None,
984
+ media: Optional[Dict[str, List[torch.Tensor]]] = None,
985
+ media_config: Dict[str, Dict[str, Any]] = None,
986
+ attention_mask: Optional[torch.LongTensor] = None,
987
+ **generation_kwargs,
988
+ ):
989
+ inputs_embeds, _, attention_mask = self._embed(input_ids, media, media_config, None, attention_mask)
990
+ return self.llm.generate(inputs_embeds=inputs_embeds, attention_mask=attention_mask, **generation_kwargs)
991
+
992
+ @torch.inference_mode()
993
+ def generate_content(
994
+ self,
995
+ prompt: Union[str, List],
996
+ generation_config: Optional[GenerationConfig] = None,
997
+ response_format=None,
998
+ ) -> str:
999
+ # TODO(zhijianl): Support directly taking conversation as input
1000
+ conversation = [{"from": "human", "value": prompt}]
1001
+
1002
+ # Convert response format to logits processor
1003
+ if response_format:
1004
+ xgr_logits_processor = self.get_xgr_logits_processor(response_format)
1005
+ else:
1006
+ xgr_logits_processor = None
1007
+
1008
+ # Extract media from the conversation
1009
+
1010
+ # TODO: extraction and preprocessing should be done together, since image and video preprocessing can differ (e.g. when dynamic resolution is used)
1011
+ media = extract_media(conversation, self.config)
1012
+
1013
+ # Process media
1014
+ media_config = defaultdict(dict)
1015
+ for name in media:
1016
+ if name == "image":
1017
+ if len(media["image"]) == 1 and self.config.image_aspect_ratio in ["dynamic", "dynamic_s2"]:
1018
+ self.config.image_processor = self.vision_tower.image_processor
1019
+ if self.config.image_aspect_ratio == "dynamic":
1020
+ images = process_image(media["image"][0], self.config, None, enable_dynamic_res=True).half()
1021
+ conversation[0]["value"] = conversation[0]["value"].replace(
1022
+ DEFAULT_IMAGE_TOKEN, f"{DEFAULT_IMAGE_TOKEN}\n" * images.shape[0]
1023
+ )
1024
+ else:
1025
+ if type(self.config.s2_scales) is str:
1026
+ self.config.s2_scales = list(map(int, self.config.s2_scales.split(",")))
1027
+ images, block_sizes = process_image(
1028
+ media["image"][0], self.config, None, enable_dynamic_s2=True
1029
+ )
1030
+ images = images.half()
1031
+ media_config[name]["block_sizes"] = [block_sizes]
1032
+ else:
1033
+ images = process_images(media["image"], self.vision_tower.image_processor, self.config).half()
1034
+ media[name] = [image for image in images]
1035
+ elif name == "video":
1036
+ if self.config.image_aspect_ratio == "dynamic" and self.config.video_max_tiles > 1:
1037
+ media[name] = [
1038
+ process_images(
1039
+ images,
1040
+ self.vision_tower.image_processor,
1041
+ self.config,
1042
+ enable_dynamic_res=True,
1043
+ max_tiles=self.config.video_max_tiles,
1044
+ ).half()
1045
+ for images in media[name]
1046
+ ]
1047
+ elif self.config.image_aspect_ratio == "dynamic_s2" and self.config.video_max_tiles > 1:
1048
+ self.config.image_processor = self.vision_tower.image_processor
1049
+ if type(self.config.s2_scales) is str:
1050
+ self.config.s2_scales = list(map(int, self.config.s2_scales.split(",")))
1051
+ media[name] = [
1052
+ torch.cat(
1053
+ [
1054
+ process_image(
1055
+ image,
1056
+ self.config,
1057
+ None,
1058
+ enable_dynamic_s2=True,
1059
+ max_tiles=self.config.video_max_tiles,
1060
+ )[0].half()
1061
+ for image in images
1062
+ ]
1063
+ )
1064
+ for images in media[name]
1065
+ ]
1066
+ else:
1067
+ media[name] = [
1068
+ process_images(images, self.vision_tower.image_processor, self.config).half()
1069
+ for images in media[name]
1070
+ ]
1071
+ else:
1072
+ raise ValueError(f"Unsupported media type: {name}")
1073
+
1074
+ # Tokenize the conversation
1075
+ input_ids = tokenize_conversation(conversation, self.tokenizer, add_generation_prompt=True).cuda().unsqueeze(0)
1076
+
1077
+ # Set up the generation config
1078
+ generation_config = generation_config or self.default_generation_config
1079
+
1080
+ # Generate the response
1081
+ try:
1082
+ output_ids = self.generate(
1083
+ input_ids=input_ids,
1084
+ media=media,
1085
+ media_config=media_config,
1086
+ generation_config=generation_config,
1087
+ logits_processor=xgr_logits_processor, # structured generation
1088
+ )
1089
+ except ValueError:
1090
+ if not generation_config.do_sample:
1091
+ raise
1092
+ # FIXME(zhijianl): This is a temporary workaround for the sampling issue
1093
+ logging.warning("Generation failed with sampling, retrying with greedy decoding.")
1094
+ generation_config.do_sample = False
1095
+ output_ids = self.generate(
1096
+ input_ids=input_ids,
1097
+ media=media,
1098
+ media_config=media_config,
1099
+ generation_config=generation_config,
1100
+ logits_processor=xgr_logits_processor,
1101
+ )
1102
+
1103
+ # Decode the response
1104
+ response = self.tokenizer.decode(output_ids[0], skip_special_tokens=True).strip()
1105
+ return response
1106
+
1107
+ @property
1108
+ def default_generation_config(self) -> GenerationConfig:
1109
+ generation_config = copy.deepcopy(self.generation_config or GenerationConfig())
1110
+ if self.tokenizer.eos_token_id is None:
1111
+ raise ValueError("Tokenizer must have an EOS token")
1112
+ if generation_config.max_length == GenerationConfig().max_length:
1113
+ generation_config.max_length = self.tokenizer.model_max_length
1114
+ if generation_config.pad_token_id is None:
1115
+ generation_config.pad_token_id = self.tokenizer.pad_token_id or self.tokenizer.eos_token_id
1116
+ if generation_config.bos_token_id is None:
1117
+ generation_config.bos_token_id = self.tokenizer.bos_token_id or self.tokenizer.eos_token_id
1118
+ if generation_config.eos_token_id is None:
1119
+ generation_config.eos_token_id = self.tokenizer.eos_token_id
1120
+ return generation_config
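A hedged end-to-end sketch of single-image inference with the API above (not part of the uploaded files; the image path is a placeholder and `model` is assumed to be a loaded `VILAForCasualLM`):

```python
from PIL import Image

prompt = [Image.open("demo.jpg"), "Describe this image in one sentence."]
response = model.generate_content(prompt)
print(response)
```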
siglip_encoder.py ADDED
@@ -0,0 +1,286 @@
1
+ # Copyright 2024 NVIDIA CORPORATION & AFFILIATES
2
+ #
3
+ # Licensed under the Apache License, Version 2.0 (the "License");
4
+ # you may not use this file except in compliance with the License.
5
+ # You may obtain a copy of the License at
6
+ #
7
+ # http://www.apache.org/licenses/LICENSE-2.0
8
+ #
9
+ # Unless required by applicable law or agreed to in writing, software
10
+ # distributed under the License is distributed on an "AS IS" BASIS,
11
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12
+ # See the License for the specific language governing permissions and
13
+ # limitations under the License.
14
+ #
15
+ # SPDX-License-Identifier: Apache-2.0
16
+
17
+ import torch
18
+ import torch.nn as nn
19
+ import torch.nn.functional as F
20
+ from accelerate.hooks import add_hook_to_module
21
+ from einops import rearrange
22
+ from s2wrapper import forward as multiscale_forward
23
+ from transformers import AutoConfig, PretrainedConfig, PreTrainedModel, SiglipImageProcessor
24
+ from transformers.image_processing_utils import BaseImageProcessor
25
+ from transformers.integrations.deepspeed import is_deepspeed_zero3_enabled
26
+ from transformers.models.siglip import SiglipVisionModel
27
+
28
+
29
+ class VisionTower(nn.Module):
30
+ def __init__(self, vision_tower, args, delay_load=False):
31
+ super().__init__()
32
+
33
+ self.is_loaded = False
34
+
35
+ self.vision_tower_name = vision_tower
36
+ self.select_layer = getattr(args, "mm_vision_select_layer", -2)
37
+ self.select_feature = getattr(args, "mm_vision_select_feature", "patch")
38
+
39
+ self.cfg_only = None
40
+
41
+ def feature_select(self, image_forward_outs):
42
+ image_features = image_forward_outs.hidden_states[self.select_layer]
43
+ if self.select_feature == "patch":
44
+ image_features = image_features[:, 1:]
45
+ elif self.select_feature == "cls_patch":
46
+ image_features = image_features
47
+ else:
48
+ raise ValueError(f"Unexpected select feature: {self.select_feature}")
49
+ return image_features
50
+
51
+ def _maybe_resize_pos_embeds(
52
+ self,
53
+ model: PreTrainedModel,
54
+ image_processor: BaseImageProcessor,
55
+ resolution: int = -1,
56
+ interpolate_mode: str = "linear",
57
+ ):
58
+ if resolution in [model.config.image_size, -1]:
59
+ return
60
+ print(
61
+ f"Resizing vision model's position embeddings to support higher vision resolution: from {model.config.image_size} to {resolution} ..."
62
+ )
63
+ embeddings = model.vision_model.embeddings
64
+ patch_size = embeddings.patch_size
65
+ num_new_tokens = int((resolution // patch_size) ** 2)
66
+
67
+ old_embeddings = embeddings.position_embedding
68
+ match interpolate_mode:
69
+ case "linear":
70
+ ## Step 1: Calculate the corresponding patch ID (pid) in the current resolution (M patches) based on the target resolution (N patches). Formula: pid = pid / N * M
71
+ ## Step 2: Obtain new embeddings by interpolating between the embeddings of the two nearest calculated patch IDs. Formula: new_embeds = (pid - floor(pid)) * embeds[ceil(pid)] + (ceil(pid) - pid) * embeds[floor(pid)]
72
+ import torch
73
+ import torch.nn as nn
74
+
75
+ if is_deepspeed_zero3_enabled():
76
+ import deepspeed
77
+
78
+ with deepspeed.zero.GatheredParameters([old_embeddings.weight], modifier_rank=None):
79
+ old_num_tokens, old_embedding_dim = old_embeddings.weight.size()
80
+ else:
81
+ old_num_tokens, old_embedding_dim = old_embeddings.weight.size()
82
+ new_embeddings = nn.Embedding(
83
+ num_new_tokens,
84
+ old_embedding_dim,
85
+ dtype=old_embeddings.weight.dtype,
86
+ device=old_embeddings.weight.device,
87
+ )
88
+ mapped_indices = (
89
+ torch.arange(num_new_tokens).to(old_embeddings.weight.device)
90
+ / (num_new_tokens - 1)
91
+ * (old_num_tokens - 1)
92
+ )
93
+ floor_indices = torch.clamp(mapped_indices.floor().long(), min=0, max=old_num_tokens - 1)
94
+ ceil_indices = torch.clamp(mapped_indices.ceil().long(), min=0, max=old_num_tokens - 1)
95
+ if is_deepspeed_zero3_enabled():
96
+ params = [old_embeddings.weight, new_embeddings.weight]
97
+ with deepspeed.zero.GatheredParameters(params, modifier_rank=0):
98
+ interpolated_embeds = (mapped_indices - floor_indices)[:, None] * old_embeddings.weight.data[
99
+ ceil_indices, :
100
+ ] + (ceil_indices - mapped_indices)[:, None] * old_embeddings.weight.data[floor_indices, :]
101
+ else:
102
+ interpolated_embeds = (mapped_indices - floor_indices)[:, None] * old_embeddings.weight.data[
103
+ ceil_indices, :
104
+ ] + (ceil_indices - mapped_indices)[:, None] * old_embeddings.weight.data[floor_indices, :]
105
+ new_embeddings.weight.data = interpolated_embeds
106
+ case _:
107
+ raise NotImplementedError
108
+
109
+ if hasattr(old_embeddings, "_hf_hook"):
110
+ hook = old_embeddings._hf_hook
111
+ add_hook_to_module(new_embeddings, hook)
112
+ new_embeddings.requires_grad_(old_embeddings.weight.requires_grad)
113
+ ## update vision encoder's configurations
114
+ model.config.image_size = resolution
115
+ if hasattr(image_processor, "crop_size"):
116
+ # CLIP vision tower
117
+ image_processor.crop_size = resolution
118
+ else:
119
+ # SIGLIP vision tower
120
+ assert hasattr(image_processor, "size")
121
+ image_processor.size = {"height": resolution, "width": resolution}
122
+ ## TODO define a '_reinitialize' method for VisionTower
123
+ embeddings.position_embedding = new_embeddings
124
+ embeddings.image_size = resolution
125
+ embeddings.num_patches = embeddings.num_positions = num_new_tokens
126
+ embeddings.position_ids = (
127
+ torch.arange(embeddings.num_positions).expand((1, -1)).to(old_embeddings.weight.device)
128
+ )
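For intuition, a self-contained sketch of the resizing step above with toy sizes, written as a standard linear interpolation over mapped position indices (an illustration, not the exact expression used in the method):

```python
import torch

old = torch.randn(16, 8)      # 16 old positions, embedding dim 8
num_new = 36                  # target number of positions
idx = torch.arange(num_new) / (num_new - 1) * (old.shape[0] - 1)
lo, hi = idx.floor().long(), idx.ceil().long()
frac = (idx - lo).unsqueeze(1)                 # blend weight toward the ceil index
new = (1 - frac) * old[lo] + frac * old[hi]    # shape: (36, 8)
```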
129
+
130
+    def forward(self, images):
+        if type(images) is list:
+            image_features = []
+            for image in images:
+                image_forward_out = self.vision_tower(
+                    image.to(device=self.device, dtype=self.dtype).unsqueeze(0),
+                    output_hidden_states=True,
+                )
+                image_feature = self.feature_select(image_forward_out).to(image.dtype)
+                image_features.append(image_feature)
+        else:
+            image_forward_outs = self.vision_tower(
+                images.to(device=self.device, dtype=self.dtype),
+                output_hidden_states=True,
+            )
+            image_features = self.feature_select(image_forward_outs).to(images.dtype)
+
+        return image_features
+
+    @property
+    def dummy_feature(self):
+        return torch.zeros(1, self.hidden_size, device=self.device, dtype=self.dtype)
+
+    @property
+    def dtype(self):
+        return self.vision_tower.dtype
+
+    @property
+    def device(self):
+        return self.vision_tower.device
+
+    @property
+    def config(self):
+        if self.is_loaded:
+            return self.vision_tower.config
+        else:
+            return self.cfg_only
+
+    @property
+    def hidden_size(self):
+        return self.config.hidden_size
+
+    @property
+    def num_patches(self):
+        return (self.config.image_size // self.config.patch_size) ** 2
+
+
+class VisionTowerS2(VisionTower):
+    def __init__(self, vision_tower, args, delay_load=False):
+        super().__init__(vision_tower, args, delay_load)
+
+        self.scales = list(map(int, args.s2_scales.split(",")))
+        self.scales.sort()
+        self.max_split_size = args.s2_max_split_size
+        self.resize_output_to_scale_idx = getattr(args, "s2_resize_output_to_scale_idx", 0)
+
+    def forward_feature(self, images):
+        image_forward_outs = self.vision_tower(
+            images.to(device=self.device, dtype=self.dtype), output_hidden_states=True
+        )
+        image_features = self.feature_select(image_forward_outs).to(images.dtype)
+        return image_features
+
+    def forward(self, images):
+        if type(images) is list:
+            image_features = []
+            for image in images:
+                image_feature = multiscale_forward(
+                    self.forward_feature,
+                    image.unsqueeze(0),
+                    img_sizes=self.scales,
+                    max_split_size=self.max_split_size,
+                    resize_output_to_idx=self.resize_output_to_scale_idx,
+                )
+                image_features.append(image_feature)
+        else:
+            image_features = multiscale_forward(
+                self.forward_feature,
+                images,
+                img_sizes=self.scales,
+                max_split_size=self.max_split_size,
+                resize_output_to_idx=self.resize_output_to_scale_idx,
+            )
+
+        return image_features
+
+    @property
+    def hidden_size(self):
+        return self.config.hidden_size * len(self.scales)
+
+
+class VisionTowerDynamicS2(VisionTower):
+    def __init__(self, vision_tower, args, delay_load=False):
+        super().__init__(vision_tower, args, delay_load)
+
+        self.scales = list(map(int, args.s2_scales.split(",")))
+        self.scales.sort()
+        self.max_split_size = args.s2_max_split_size
+        self.resize_output_to_scale_idx = getattr(args, "s2_resize_output_to_scale_idx", 0)
+
+    def forward_feature(self, images):
+        image_forward_outs = self.vision_tower(
+            images.to(device=self.device, dtype=self.dtype), output_hidden_states=True
+        )
+        image_features = self.feature_select(image_forward_outs).to(images.dtype)
+        return image_features
+
+    def forward(self, images):
+        assert type(images) is not list
+        image_features = self.forward_feature(images)
+
+        return image_features
+
+    @property
+    def hidden_size(self):
+        return self.config.hidden_size * len(self.scales)
+
+
+class SiglipVisionTower(VisionTower):
+    def __init__(self, model_name_or_path: str, config: PretrainedConfig) -> None:
+        super().__init__(model_name_or_path, config)
+        # TODO(ligengl): why pass config here leading to errors?
+        self.vision_tower = SiglipVisionModel.from_pretrained(
+            model_name_or_path,
+            attn_implementation=config._attn_implementation,
+            torch_dtype=eval(config.model_dtype),
+        )
+        self.image_processor = SiglipImageProcessor.from_pretrained(model_name_or_path)
+        self.is_loaded = True
+
+
+class SiglipVisionTowerS2(VisionTowerS2):
+    def __init__(self, model_name_or_path: str, config: PretrainedConfig) -> None:
+        super().__init__(model_name_or_path, config)
+        self.vision_tower = SiglipVisionModel.from_pretrained(
+            model_name_or_path,
+            attn_implementation=config._attn_implementation,
+            torch_dtype=eval(config.model_dtype),
+        )
+        self.image_processor = SiglipImageProcessor.from_pretrained(model_name_or_path)
+        # Make sure it crops/resizes the image to the largest scale in self.scales to maintain high-res information
+        self.image_processor.size["height"] = self.image_processor.size["width"] = self.scales[-1]
+        self.is_loaded = True
+
+
+class SiglipVisionTowerDynamicS2(VisionTowerDynamicS2):
+    def __init__(self, model_name_or_path: str, config: PretrainedConfig) -> None:
+        super().__init__(model_name_or_path, config)
+        self.vision_tower = SiglipVisionModel.from_pretrained(
+            model_name_or_path,
+            attn_implementation="flash_attention_2",
+            torch_dtype=eval(config.model_dtype),
+        )
+        self.image_processor = SiglipImageProcessor.from_pretrained(model_name_or_path)
+        # Unlike SiglipVisionTowerS2, resize to the smallest (base) scale in self.scales; larger scales are handled dynamically
+        self.image_processor.size["height"] = self.image_processor.size["width"] = self.scales[0]
+        self.is_loaded = True
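
The SigLIP towers above ultimately run a `SiglipVisionModel` over the preprocessed image and keep its patch tokens. Below is a minimal, self-contained sketch of that flow using the stock `transformers` API; the checkpoint id `google/siglip-so400m-patch14-384` is only an illustrative stand-in for the weights shipped under `vision_tower/`, and the blank image stands in for real input.

```python
# Minimal sketch (not this repo's code path): encode one image with a SigLIP
# backbone and keep the per-patch hidden states, analogous to VisionTower.forward.
import torch
from PIL import Image
from transformers import SiglipImageProcessor, SiglipVisionModel

ckpt = "google/siglip-so400m-patch14-384"  # illustrative stand-in checkpoint
model = SiglipVisionModel.from_pretrained(ckpt).eval()
processor = SiglipImageProcessor.from_pretrained(ckpt)

image = Image.new("RGB", (640, 480), color="white")  # dummy input image
pixel_values = processor(images=image, return_tensors="pt").pixel_values

with torch.no_grad():
    outputs = model(pixel_values, output_hidden_states=True)

# One token per 14x14 patch of the 384x384 input: shape (1, 729, 1152) here.
print(outputs.last_hidden_state.shape)
```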
tokenizer_utils.py ADDED
@@ -0,0 +1,182 @@
+# Copyright 2024 NVIDIA CORPORATION & AFFILIATES
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+# SPDX-License-Identifier: Apache-2.0
+
+from typing import Any, Dict, List, Optional, Sequence
+
+import torch
+import transformers
+
+from .constants import IGNORE_INDEX, SENTINEL_TOKEN
+from .conversation import SeparatorStyle, default_conversation
+from .mm_utils import tokenizer_image_token
+
+# __all__ = [
+#     "tokenize_conversation",
+#     "preprocess_conversation",
+#     "infer_stop_tokens",
+# ]
+
+DUMMY_CONVERSATION = [
+    {"from": "human", "value": "question"},
+    {"from": "gpt", "value": "answer"},
+] * 10
+
+
+def tokenize_conversation_legacy(
+    messages: Sequence[Dict[str, str]],
+    tokenizer: transformers.PreTrainedTokenizer,
+    add_generation_prompt: bool = False,
+    overrides: Optional[Dict[str, str]] = None,
+    no_system_prompt: bool = False,
+) -> torch.Tensor:
+    conv = default_conversation.copy()
+    roles = {"human": conv.roles[0], "gpt": conv.roles[1]}
+
+    if no_system_prompt:
+        conv.system = ""
+
+    # Skip the first message if it is not from human
+    if messages[0]["from"] != "human":
+        messages = messages[1:]
+
+    # Add a generation prompt if needed
+    if add_generation_prompt:
+        messages.append({"from": "gpt", "value": None})
+
+    conv.messages = []
+    for turn, message in enumerate(messages):
+        role = roles[message["from"]]
+        assert role == conv.roles[turn % 2]
+        if overrides is not None and message["from"] in overrides:
+            conv.append_message(role, overrides[message["from"]])
+        else:
+            conv.append_message(role, message["value"])
+
+    return tokenizer_image_token(conv.get_prompt(), tokenizer, return_tensors="pt")
+
+
+def tokenize_conversation(
+    messages: Sequence[Dict[str, str]],
+    tokenizer: transformers.PreTrainedTokenizer,
+    add_generation_prompt: bool = False,
+    overrides: Optional[Dict[str, str]] = None,
+    no_system_prompt: bool = False,
+) -> torch.Tensor:
+    # Normalize the conversation before tokenization
+    for message in messages:
+        message["value"] = message["value"].strip()
+
+    if default_conversation.sep_style != SeparatorStyle.AUTO:
+        return tokenize_conversation_legacy(
+            messages,
+            tokenizer,
+            add_generation_prompt=add_generation_prompt,
+            overrides=overrides,
+            no_system_prompt=no_system_prompt,
+        )
+
+    conversation = []
+    for m in messages:
+        message = {}
+        if m["from"] == "human":
+            message["role"] = "user"
+        elif m["from"] == "gpt":
+            message["role"] = "assistant"
+        else:
+            raise ValueError(f"Unexpected sender '{m['from']}' in conversation entry.")
+
+        message["content"] = m["value"]
+        if overrides is not None and m["from"] in overrides:
+            message["content"] = overrides[m["from"]]
+        conversation.append(message)
+
+    if no_system_prompt:
+        conversation = [{"role": "system", "content": ""}] + conversation
+
+    text = tokenizer.apply_chat_template(
+        conversation,
+        add_generation_prompt=add_generation_prompt,
+        tokenize=False,
+    )
+    return tokenizer_image_token(text, tokenizer, return_tensors="pt")
+
+
+def _maybe_add_sentinel_token(tokenizer: transformers.PreTrainedTokenizer) -> None:
+    if not hasattr(tokenizer, "sentinel_token"):
+        tokenizer.add_tokens([SENTINEL_TOKEN], special_tokens=True)
+        tokenizer.sentinel_token = SENTINEL_TOKEN
+        tokenizer.sentinel_token_id = tokenizer.convert_tokens_to_ids(SENTINEL_TOKEN)
+
+
+def preprocess_conversation(
+    conversation: Sequence[Dict[str, str]],
+    tokenizer: transformers.PreTrainedTokenizer,
+    no_system_prompt: bool = False,
+    retried: bool = False,
+) -> Dict[str, Any]:
+    inputs = tokenize_conversation(conversation, tokenizer, no_system_prompt=no_system_prompt)
+    labels = torch.ones_like(inputs) * IGNORE_INDEX
+
+    # Generate the template by replacing the assistant's response with a sentinel.
+    _maybe_add_sentinel_token(tokenizer)
+    template = tokenize_conversation(
+        conversation, tokenizer, overrides={"gpt": SENTINEL_TOKEN}, no_system_prompt=no_system_prompt
+    )
+
+    # Remove sentinel tokens from the template.
+    mask = torch.ones_like(template, dtype=torch.bool)
+    for k in range(template.size(0) - 1):
+        if template[k] == tokenizer.sentinel_token_id:
+            mask[k : k + 2] = False
+            # NOTE(zhijianl): This is to handle the corner case where there is an empty token before the sentinel token.
+            if k > 0 and retried:
+                mask[k - 1] = False
+    template = template[mask]
+
+    # Match the tokenized conversation with the template (with no assistant's response).
+    # Every token that is not matched will be included in the label for training.
+    p = 0
+    for k in range(inputs.size(0)):
+        if p < template.size(0) and inputs[k] == template[p]:
+            p += 1
+        else:
+            labels[k] = inputs[k]
+
+    # Mask all tokens in the label if the template is not fully matched.
+    if p < template.size(0):
+        if not retried:
+            return preprocess_conversation(
+                conversation,
+                tokenizer,
+                no_system_prompt=no_system_prompt,
+                retried=True,
+            )
+        print(f"Failed to process the conversation: '{conversation}'. All tokens will be masked in the label.")
+        labels[:] = IGNORE_INDEX
+
+    return {"input_ids": inputs, "labels": labels}
+
+
+def infer_stop_tokens(tokenizer: transformers.PreTrainedTokenizer) -> List[str]:
+    _maybe_add_sentinel_token(tokenizer)
+    template = tokenize_conversation(DUMMY_CONVERSATION, tokenizer, overrides={"gpt": SENTINEL_TOKEN})
+
+    stop_tokens = {tokenizer.eos_token}
+    for k in range(template.size(0) - 1):
+        if template[k] == tokenizer.sentinel_token_id:
+            stop_token = tokenizer.decode(template[k + 1])
+            stop_tokens.add(stop_token)
+    return list(stop_tokens)
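
The label-building loop in `preprocess_conversation` is easier to follow on toy data. The sketch below reproduces just that matching step with plain tensors; `IGNORE_INDEX = -100` is an assumed value (the real one lives in `constants.py`), and the token ids are made up.

```python
# Standalone illustration of the masking idea used in preprocess_conversation:
# any token that the prompt-only "template" does not account for becomes a label.
import torch

IGNORE_INDEX = -100  # assumed to match constants.py
inputs = torch.tensor([1, 5, 9, 9, 2, 7, 7, 7, 2])  # full conversation (toy ids)
template = torch.tensor([1, 5, 9, 9, 2])            # same prompt with the answer removed

labels = torch.full_like(inputs, IGNORE_INDEX)
p = 0
for k in range(inputs.size(0)):
    if p < template.size(0) and inputs[k] == template[p]:
        p += 1                 # token explained by the template: stays masked
    else:
        labels[k] = inputs[k]  # unmatched token: supervised during training

print(labels.tolist())  # [-100, -100, -100, -100, -100, 7, 7, 7, 2]
```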
trainer_state.json ADDED
The diff for this file is too large to render. See raw diff
 
utils.py ADDED
@@ -0,0 +1,174 @@
+# Copyright 2024 NVIDIA CORPORATION & AFFILIATES
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+# SPDX-License-Identifier: Apache-2.0
+# This file is modified from https://github.com/haotian-liu/LLaVA/
+import os
+import os.path as osp
+
+from huggingface_hub import repo_exists, snapshot_download
+from huggingface_hub.utils import HFValidationError, validate_repo_id
+from transformers import AutoConfig, PretrainedConfig
+
+
+def get_model_config(config):
+    default_keys = ["llm_cfg", "vision_tower_cfg", "mm_projector_cfg"]
+
+    if hasattr(config, "_name_or_path") and len(config._name_or_path) >= 2:
+        root_path = config._name_or_path
+    else:
+        root_path = config.resume_path
+
+    # download from huggingface
+    if root_path is not None and not osp.exists(root_path):
+        try:
+            valid_hf_repo = repo_exists(root_path)
+        except HFValidationError as e:
+            valid_hf_repo = False
+        if valid_hf_repo:
+            root_path = snapshot_download(root_path)
+
+    return_list = []
+    for key in default_keys:
+        cfg = getattr(config, key, None)
+        if isinstance(cfg, dict):
+            try:
+                return_list.append(os.path.join(root_path, key[:-4]))
+            except:
+                raise ValueError(f"Cannot find resume path in config for {key}!")
+        elif isinstance(cfg, PretrainedConfig):
+            return_list.append(os.path.join(root_path, key[:-4]))
+        elif isinstance(cfg, str):
+            return_list.append(cfg)
+
+    return return_list
+
+
+def get_model_config_fp8(config):
+    default_keys = ["llm_cfg", "vision_tower_cfg", "mm_projector_cfg"]
+
+    if hasattr(config, "_name_or_path") and len(config._name_or_path) >= 2:
+        root_path = config._name_or_path
+    else:
+        root_path = config.resume_path
+
+    # download from huggingface
+    if root_path is not None and not osp.exists(root_path):
+        try:
+            valid_hf_repo = repo_exists(root_path)
+        except HFValidationError as e:
+            valid_hf_repo = False
+        if valid_hf_repo:
+            root_path = snapshot_download(root_path)
+
+    return_list = []
+    for key in default_keys:
+        cfg = getattr(config, key, None)
+        if isinstance(cfg, dict):
+            try:
+                return_list.append(os.path.join(root_path, key[:-4]))
+            except:
+                raise ValueError(f"Cannot find resume path in config for {key}!")
+        elif isinstance(cfg, PretrainedConfig):
+            return_list.append(os.path.join(root_path, key[:-4]))
+        elif isinstance(cfg, str):
+            return_list.append(cfg)
+
+    # fp8_llm
+    key = "fp8_llm_cfg"
+    directory_path = os.path.join(root_path, key[:-4])
+    assert os.path.isdir(directory_path) and os.listdir(
+        directory_path
+    ), "You need to first convert the model weights to FP8 explicitly."
+    return_list.append(directory_path)
+
+    return return_list
+
+
+def get_model_config_fp8(config):
+    default_keys = ["llm_cfg", "vision_tower_cfg", "mm_projector_cfg"]
+
+    if hasattr(config, "_name_or_path") and len(config._name_or_path) >= 2:
+        root_path = config._name_or_path
+    else:
+        root_path = config.resume_path
+
+    # download from huggingface
+    if root_path is not None and not osp.exists(root_path):
+        try:
+            valid_hf_repo = repo_exists(root_path)
+        except HFValidationError as e:
+            valid_hf_repo = False
+        if valid_hf_repo:
+            root_path = snapshot_download(root_path)
+
+    return_list = []
+    for key in default_keys:
+        cfg = getattr(config, key, None)
+        if isinstance(cfg, dict):
+            try:
+                return_list.append(os.path.join(root_path, key[:-4]))
+            except:
+                raise ValueError(f"Cannot find resume path in config for {key}!")
+        elif isinstance(cfg, PretrainedConfig):
+            return_list.append(os.path.join(root_path, key[:-4]))
+        elif isinstance(cfg, str):
+            return_list.append(cfg)
+
+    # fp8_llm
+    key = "fp8_llm_cfg"
+    directory_path = os.path.join(root_path, key[:-4])
+    assert os.path.isdir(directory_path) and os.listdir(
+        directory_path
+    ), "You need to first convert the model weights to FP8 explicitly."
+    return_list.append(directory_path)
+
+    return return_list
+
+
+def is_mm_model(model_path):
+    """
+    Check if the model at the given path is a visual language model.
+
+    Args:
+        model_path (str): The path to the model.
+
+    Returns:
+        bool: True if the model is an MM model, False otherwise.
+    """
+    config = AutoConfig.from_pretrained(model_path)
+    architectures = config.architectures
+    for architecture in architectures:
+        if "llava" in architecture.lower():
+            return True
+    return False
+
+
+def auto_upgrade(config):
+    cfg = AutoConfig.from_pretrained(config)
+    if "llava" in config and "llava" not in cfg.model_type:
+        assert cfg.model_type == "llama"
+        print("You are using newer LLaVA code base, while the checkpoint of v0 is from older code base.")
+        print("You must upgrade the checkpoint to the new code base (this can be done automatically).")
+        confirm = input("Please confirm that you want to upgrade the checkpoint. [Y/N]")
+        if confirm.lower() in ["y", "yes"]:
+            print("Upgrading checkpoint...")
+            assert len(cfg.architectures) == 1
+            setattr(cfg.__class__, "model_type", "llava")
+            cfg.architectures[0] = "LlavaLlamaForCausalLM"
+            cfg.save_pretrained(config)
+            print("Checkpoint upgraded.")
+        else:
+            print("Checkpoint upgrade aborted.")
+            exit(1)
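
For orientation, `get_model_config` is typically called on an already loaded VILA config to resolve the sub-model folders. The sketch below assumes this `utils.py` sits in the working directory, uses an illustrative model id, and assumes `trust_remote_code=True` is acceptable in your environment.

```python
# Hedged usage sketch for get_model_config; the model id is illustrative.
from transformers import AutoConfig

from utils import get_model_config  # this utils.py, importable from the working directory

config = AutoConfig.from_pretrained("Efficient-Large-Model/VILA1.5-40b", trust_remote_code=True)
llm_path, vision_tower_path, mm_projector_path = get_model_config(config)

# Each entry points at a subfolder of the downloaded snapshot, e.g. ".../llm".
print(llm_path, vision_tower_path, mm_projector_path)
```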
vicuna_v1.jinja ADDED
@@ -0,0 +1,14 @@
+{% set system_prompt = "A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions." %}
+{% set roles = ["USER", "ASSISTANT"] %}
+{% set sep = " " %}
+{% set sep2 = "</s>" %}
+
+{{ system_prompt }}
+
+{% for message in messages %}
+{% if message['role'] == roles[0] %}
+{{ roles[0] }}{{ sep }}{{ message['content'] }}{{ sep2 }}
+{% else %}
+{{ roles[1] }}{{ sep }}{{ message['content'] }}{{ sep2 }}
+{% endif %}
+{% endfor %}
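
To see what this template renders, it can be attached to any tokenizer as a custom chat template. Note that the template compares against the literal role names "USER" and "ASSISTANT", so the toy messages below use those; the tokenizer id is only an illustrative stand-in.

```python
# Sketch: render vicuna_v1.jinja through the tokenizer's chat-template machinery.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("lmsys/vicuna-13b-v1.5", use_fast=False)  # illustrative
with open("vicuna_v1.jinja") as f:
    tokenizer.chat_template = f.read()

messages = [
    {"role": "USER", "content": "Describe the image."},
    {"role": "ASSISTANT", "content": "It shows a red bicycle."},
]
print(tokenizer.apply_chat_template(messages, tokenize=False))
```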
vision_tower/config.json ADDED
@@ -0,0 +1,19 @@
+{
+  "_name_or_path": "./vision_tower",
+  "architectures": [
+    "SiglipVisionModel"
+  ],
+  "attention_dropout": 0.0,
+  "hidden_act": "gelu_pytorch_tanh",
+  "hidden_size": 1152,
+  "image_size": 384,
+  "intermediate_size": 4304,
+  "layer_norm_eps": 1e-06,
+  "model_type": "siglip_vision_model",
+  "num_attention_heads": 16,
+  "num_channels": 3,
+  "num_hidden_layers": 27,
+  "patch_size": 14,
+  "torch_dtype": "bfloat16",
+  "transformers_version": "4.36.2"
+}
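
A quick sanity check on the geometry implied by this config: a 384x384 input with 14x14 patches gives a 27x27 grid, i.e. 729 visual tokens per image before any projector-side downsampling. The snippet below only restates the values above.

```python
# Patch grid implied by image_size=384 and patch_size=14 (pure arithmetic).
image_size, patch_size = 384, 14
grid = image_size // patch_size
print(grid, grid * grid)  # 27 729
```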
vision_tower/model.safetensors ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:311d24667a691c38689221cb52788a94223562dd274674401072929751f2793b
+size 856506120
vision_tower/preprocessor_config.json ADDED
@@ -0,0 +1,24 @@
+{
+  "do_convert_rgb": true,
+  "do_normalize": true,
+  "do_rescale": true,
+  "do_resize": true,
+  "image_mean": [
+    0.5,
+    0.5,
+    0.5
+  ],
+  "image_processor_type": "SiglipImageProcessor",
+  "image_std": [
+    0.5,
+    0.5,
+    0.5
+  ],
+  "processor_class": "SiglipProcessor",
+  "resample": 3,
+  "rescale_factor": 0.00392156862745098,
+  "size": {
+    "height": 384,
+    "width": 384
+  }
+}
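
The preprocessor can be loaded straight from this subfolder if needed, for example to inspect the resizing behaviour. The repo id below is illustrative; any local path to this folder works the same way.

```python
# Sketch: load the SigLIP image processor from the vision_tower/ subfolder.
from PIL import Image
from transformers import SiglipImageProcessor

processor = SiglipImageProcessor.from_pretrained(
    "Efficient-Large-Model/VILA1.5-40b", subfolder="vision_tower"  # illustrative repo id
)
pixel_values = processor(images=Image.new("RGB", (640, 480)), return_tensors="pt").pixel_values
print(pixel_values.shape)  # expected: torch.Size([1, 3, 384, 384])
```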