DLight1551 committed
Commit ea7a380
Parent(s): 27e1e9d
README.md CHANGED
@@ -1,3 +1,82 @@
  ---
  license: apache-2.0
+ pipeline_tag: text-generation
  ---
+
+
+ <p align="center">
+ <img src="logo.png" width="400"/>
+ </p>
+
+ <p align="center">
+ <b><font size="6">InternLM-XComposer2</font></b>
+ </p>
+
+ <div align="center">
+
+ [💻Github Repo](https://github.com/InternLM/InternLM-XComposer)
+
+ </div>
+
+ **InternLM-XComposer2** is a vision-language large model (VLLM) based on [InternLM2](https://github.com/InternLM/InternLM) for advanced text-image comprehension and composition.
+
+ We release the InternLM-XComposer2 series in two versions:
+
+ - InternLM-XComposer2-VL: The pretrained VLLM, with InternLM2 as the initialization of the LLM, achieving strong performance on various multimodal benchmarks.
+ - InternLM-XComposer2: The finetuned VLLM for *Free-form Interleaved Text-Image Composition*.
+
+ ### Import from Transformers
+ To load the InternLM-XComposer2-VL-7B model using Transformers, use the following code:
+ ```python
+ import torch
+ from PIL import Image
+ from transformers import AutoTokenizer, AutoModelForCausalLM
+ ckpt_path = "internlm/internlm-xcomposer2-vl-7b"
+ tokenizer = AutoTokenizer.from_pretrained(ckpt_path, trust_remote_code=True)
+ # Set `torch_dtype=torch.float16` to load the model in float16; otherwise it is loaded in float32 and may cause an OOM error.
+ model = AutoModelForCausalLM.from_pretrained(ckpt_path, torch_dtype=torch.float16, trust_remote_code=True).cuda()
+ model = model.eval()
+ model.vit.resize_pos()
+ img_path_list = [
+     './panda.jpg',
+     './bamboo.jpeg',
+ ]
+ images = []
+ for img_path in img_path_list:
+     image = Image.open(img_path).convert("RGB")
+     image = model.vis_processor(image)
+     images.append(image)
+ image = torch.stack(images)
+ query = '<ImageHere> <ImageHere>please write an article based on the images. Title: my favorite animal.'
+ response, history = model.chat(tokenizer, query=query, image=image, history=[], meta_instruction='')
+ print(response)
+ # in this animal kingdom, there are many species of animals. Each animal has its own special charm and characteristics. Among them, I like pandas the most. Pandas have a big black circle on their white furry faces, so they look very cute. It's not surprising that people call them "bearcats." But do you know why they're called pandas? Because pandas only eat bamboo shoots.\n\npandas' favorite food is bamboo shoots. The color of fresh bamboo shoots is light green. There is some starch in it, which can be used to make delicious food. But because panda's stomach doesn't produce amylase, it needs to consume large amounts of bamboo shoots every day to meet its body's nutritional needs. As a result, pandas spend most of their time eating bamboo, as well as sleeping. However, pandas cannot eat any meat except bamboo shoots. When pandas are hungry, they may go into the field to look for ants or other insects to eat. In fact, when pandas really want to eat meat, they can easily get away from it.\n\nbesides their love of eating bamboo, pandas also have another interesting characteristic: they always walk backwards. This makes them look slow and lazy. Although they seem sluggish, they actually run at speeds of up to 35 km/h (21.7 mph) when they need to escape danger! So don't underestimate pandas just because they're lazy!\n\nunfortunately, due to the destruction of natural habitats by humans, there are currently less than 1,800 pandas left in the world. I hope everyone can help me save pandas and protect our environment!
+ ```
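+
+ If a single float16 copy of the model does not fit on one GPU, the standard Transformers `device_map` sharding can be used instead of `.cuda()`. This is a minimal sketch (it assumes the `accelerate` package is installed; `device_map="auto"` is generic Transformers behavior, not something this repository documents):
+ ```python
+ # shard the model across all visible GPUs instead of moving it to a single device
+ model = AutoModelForCausalLM.from_pretrained(
+     ckpt_path, torch_dtype=torch.float16, trust_remote_code=True, device_map="auto"
+ ).eval()
+ ```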
+
+ ### Loading with Transformers (Chinese example)
+ Use the following code to load the InternLM-XComposer2-VL-7B model:
+
+ ```python
+ import torch
+ from PIL import Image
+ from transformers import AutoTokenizer, AutoModelForCausalLM
+ ckpt_path = "internlm/internlm-xcomposer2-vl-7b"
+ tokenizer = AutoTokenizer.from_pretrained(ckpt_path, trust_remote_code=True)
+ # `torch_dtype=torch.float16` loads the model in float16; otherwise Transformers loads it in float32, which may exhaust GPU memory.
+ model = AutoModelForCausalLM.from_pretrained(ckpt_path, torch_dtype=torch.float16, trust_remote_code=True).cuda()
+ model = model.eval()
+ model.vit.resize_pos()
+ img_path_list = [
+     './panda.jpg',
+     './bamboo.jpeg',
+ ]
+ images = []
+ for img_path in img_path_list:
+     image = Image.open(img_path).convert("RGB")
+     image = model.vis_processor(image)
+     images.append(image)
+ image = torch.stack(images)
+ query = '<ImageHere> <ImageHere>请根据图片写一篇作文:我最喜欢的小动物。要求:选准角度,确定立意,明确文体,自拟标题;不要套作,不得抄袭;不得泄露个人信息。'
+ response, history = model.chat(tokenizer, query=query, image=image, history=[], meta_instruction='')
+ print(response)
+ # 我最喜欢的小动物\n说起我喜欢的动物,那可多了,有活泼可爱的小白兔、机灵的猴子、忠诚的狗……但是我最喜欢的还是可爱的大熊猫。\n大熊猫是哺乳动物中的一种,主要分布在中国四川、陕西和甘肃等地的山区。它有着大大的眼睛,圆圆的耳朵,胖乎乎的身子,最特别的是它的身体黑白相间,所以大家都叫它“黑白仔”。\n因为它的长相很呆萌,很多人都特别喜欢它,于是就有了许多关于它的玩具。在动物园里可以看到许多熊猫玩具和熊猫主题的衣服,还看到很多小朋友抱着熊猫玩偶在玩呢!\n说到熊猫吃竹子了,那可是它们的最爱,几乎每天都吃不腻。别看它长得肥肥胖胖的,其实它也很瘦啊,都是被肚子里的竹子给撑大的哦!熊猫每次吃东西的时候,都会用两只前爪抓住竹子,然后津津有味地吃起来。\n熊猫虽然看上去温顺又憨厚,但是它发起脾气来也是不客气的。如果你去逗它,惹得它生气了,它会举起它的两个爪子,往你身上挥舞着。这时候你可不能还手哦,因为它那一巴掌下去,可不是闹着玩的,会把你打得鼻青脸肿的哦!如果它觉得无聊了,也会把自己扔进竹筐里来回滚动,好像一个球一样在地上翻滚。看着就让人忍不住想去摸摸它,抱抱它。\n你们知道吗?现在我们的国宝大熊猫已经濒临灭绝了,所以现在我们要好好保护大熊猫,让他们健康快乐地成长。
+ ```
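+
+ The two-image examples above generalize down to one image. A minimal single-image sketch, reusing `model` and `tokenizer` from the snippets above (same `chat` API; one `<ImageHere>` placeholder per image in the batch):
+ ```python
+ import torch
+ from PIL import Image
+ image = model.vis_processor(Image.open('./panda.jpg').convert("RGB"))
+ image = torch.stack([image])  # batch containing a single processed image
+ response, _ = model.chat(tokenizer, query='<ImageHere>Please describe this image in detail.', image=image, history=[], meta_instruction='')
+ print(response)
+ ```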
build_mlp.py ADDED
@@ -0,0 +1,218 @@
+ import math
+ import re
+
+ import torch
+ import torch.nn as nn
+ from transformers import CLIPVisionModel
+
+
+ def build_vision_tower():
+     # use the public OpenAI checkpoint; the original internal cluster path
+     # ('/mnt/petrelfs/share_data/dongxiaoyi/share_models/clip_l_336') is not
+     # reachable outside the training environment
+     vision_tower = 'openai/clip-vit-large-patch14-336'
+     return CLIPVisionTower(vision_tower)
+
+
+ def build_vision_projector():
+     projector_type = 'mlp2x_gelu'
+     mm_hidden_size = 1024
+     hidden_size = 4096
+
+     mlp_gelu_match = re.match(r'^mlp(\d+)x_gelu$', projector_type)
+     if mlp_gelu_match:
+         mlp_depth = int(mlp_gelu_match.group(1))
+         modules = [nn.Linear(mm_hidden_size, hidden_size)]
+         for _ in range(1, mlp_depth):
+             modules.append(nn.GELU())
+             modules.append(nn.Linear(hidden_size, hidden_size))
+         return nn.Sequential(*modules)
+
+     if projector_type == 'identity':
+         return IdentityMap()
+
+     raise ValueError(f'Unknown projector type: {projector_type}')
+
+
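+ # With the defaults above, the projector maps CLIP patch features of shape
+ # (batch, num_patches, 1024) into the LLM embedding space
+ # (batch, num_patches, 4096); at the 490px input resolution used by this
+ # checkpoint (img_size in config.json), num_patches = 35 * 35 = 1225.
+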
+ class IdentityMap(nn.Module):
+
+     def __init__(self):
+         super().__init__()
+
+     def forward(self, x, *args, **kwargs):
+         return x
+
+     @property
+     def config(self):
+         return {'mm_projector_type': 'identity'}
+
+
+ class CLIPVisionTower(nn.Module):
+
+     def __init__(self, vision_tower):
+         super().__init__()
+
+         self.is_loaded = False
+         self.is_resize_pos = False
+
+         self.vision_tower_name = vision_tower
+         self.select_layer = -1
+         self.select_feature = 'patch'
+         self.load_model()
+         self.resize_pos()
+
+     def load_model(self):
+         self.vision_tower = CLIPVisionModel.from_pretrained(
+             self.vision_tower_name)
+         self.vision_tower.requires_grad_(False)
+
+         self.is_loaded = True
+
+     def resize_pos(self):
+         pos_embed_checkpoint = self.vision_tower.vision_model.embeddings.position_embedding.weight
+         pos_embed_checkpoint = pos_embed_checkpoint.unsqueeze(0)
+         orig_size = 24
+         new_size = 35
+
+         if pos_embed_checkpoint.shape[1] == new_size**2 + 1:
+             self.is_resize_pos = True
+         else:
+             embedding_size = pos_embed_checkpoint.shape[-1]
+             num_extra_tokens = 1
+             new_num = new_size**2 + num_extra_tokens
+             print('Position interpolate from %dx%d to %dx%d' %
+                   (orig_size, orig_size, new_size, new_size))
+             extra_tokens = pos_embed_checkpoint[:, :num_extra_tokens]
+             # only the position tokens are interpolated
+             pos_tokens = pos_embed_checkpoint[:, num_extra_tokens:]
+             pos_tokens = pos_tokens.reshape(-1, orig_size, orig_size,
+                                             embedding_size).permute(0, 3, 1, 2)
+             pos_tokens = torch.nn.functional.interpolate(
+                 pos_tokens,
+                 size=(new_size, new_size),
+                 mode='bicubic',
+                 align_corners=False)
+             pos_tokens = pos_tokens.permute(0, 2, 3, 1).flatten(1, 2)
+             new_pos_embed = torch.cat((extra_tokens, pos_tokens), dim=1)
+
+             new_pos_embed = new_pos_embed.squeeze(0)
+
+             self.vision_tower.vision_model.embeddings.position_embedding = torch.nn.Embedding(
+                 new_num, 1024)
+             self.vision_tower.vision_model.embeddings.position_embedding.weight = torch.nn.Parameter(
+                 new_pos_embed.to(pos_embed_checkpoint.dtype))
+             self.vision_tower.vision_model.embeddings.position_ids = torch.arange(
+                 new_num).expand((1, -1))
+
+             self.is_resize_pos = True
+
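+     # Note: resize_pos() adapts the CLIP-L/14-336 checkpoint (24x24 position
+     # grid, i.e. 336/14 patches per side) to the 490px inputs used by this
+     # model (35x35 grid, 490/14) by bicubically interpolating the patch
+     # position embeddings while keeping the CLS position embedding unchanged.
+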
+     def feature_select(self, image_forward_outs):
+         image_features = image_forward_outs.hidden_states[self.select_layer]
+         if self.select_feature == 'patch':
+             image_features = image_features[:, 1:]
+         elif self.select_feature == 'cls_patch':
+             image_features = image_features
+         else:
+             raise ValueError(
+                 f'Unexpected select feature: {self.select_feature}')
+         return image_features
+
+     def forward(self, images):
+         if not self.is_loaded:
+             self.load_model()
+         if type(images) is list:
+             image_features = []
+             for image in images:
+                 image_forward_out = self.vision_tower(
+                     image.to(device=self.device,
+                              dtype=self.dtype).unsqueeze(0),
+                     output_hidden_states=True)
+                 image_feature = self.feature_select(image_forward_out).to(
+                     image.dtype)
+                 image_features.append(image_feature)
+         else:
+             image_forward_outs = self.vision_tower(
+                 images.to(device=self.device, dtype=self.dtype),
+                 output_hidden_states=True)
+             image_features = self.feature_select(image_forward_outs).to(
+                 images.dtype)
+
+         return image_features
+
+     @property
+     def dummy_feature(self):
+         return torch.zeros(
+             1, self.hidden_size, device=self.device, dtype=self.dtype)
+
+     @property
+     def dtype(self):
+         return self.vision_tower.dtype
+
+     @property
+     def device(self):
+         return self.vision_tower.device
+
+     @property
+     def config(self):
+         if self.is_loaded:
+             return self.vision_tower.config
+         else:
+             return self.cfg_only
+
+     @property
+     def hidden_size(self):
+         return self.config.hidden_size
+
+     @property
+     def num_patches(self):
+         return (self.config.image_size // self.config.patch_size)**2
+
+
+ class PLoRA(nn.Linear):
+
+     def __init__(self,
+                  in_features: int,
+                  out_features: int,
+                  bias: bool = True,
+                  device=None,
+                  dtype=None,
+                  lora_r=8,
+                  lora_alpha=16,
+                  lora_dropout=0.05,
+                  lora_len=0,
+                  **kwargs) -> None:
+         super().__init__(in_features, out_features, bias, device, dtype)
+         self.lora_r = lora_r
+         self.lora_alpha = lora_alpha
+         self.lora_len = lora_len
+         if lora_dropout > 0.:
+             self.lora_dropout = nn.Dropout(p=lora_dropout)
+         else:
+             self.lora_dropout = lambda x: x
+         self.lora_scaling = self.lora_alpha / self.lora_r
+
+         self.Plora_A = nn.Linear(
+             in_features, self.lora_r, bias=False, device=device, dtype=dtype)
+         self.Plora_B = nn.Linear(
+             self.lora_r, out_features, bias=False, device=device, dtype=dtype)
+
+         self.reset_parameters()
+
+     def reset_parameters(self):
+         # guard with hasattr: nn.Linear.__init__ calls reset_parameters()
+         # before Plora_A/Plora_B exist
+         if hasattr(self, 'Plora_A'):
+             # initialize A the same way as the default for nn.Linear and B to zero
+             nn.init.kaiming_uniform_(self.Plora_A.weight, a=math.sqrt(5))
+             nn.init.zeros_(self.Plora_B.weight)
+
+     def forward(self, x, im_mask=None):
+         res = super().forward(x)
+         if im_mask is not None:
+             if torch.sum(im_mask) > 0:
+                 # apply the low-rank branch only at image-token positions
+                 part_x = x[im_mask]
+                 res[im_mask] += self.Plora_B(
+                     self.Plora_A(
+                         self.lora_dropout(part_x))) * self.lora_scaling
+             else:
+                 # no image tokens: run a zero-weighted pass so the LoRA
+                 # parameters still participate in the graph
+                 part_x = x[:, :1]
+                 res[:, :1] += self.Plora_B(
+                     self.Plora_A(self.lora_dropout(part_x))) * 0
+         return res
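+
+
+ # Usage note (illustrative shapes): given hidden states x of shape
+ # (B, L, in_features) and a boolean im_mask of shape (B, L) marking image
+ # tokens, PLoRA applies the base nn.Linear to every token and adds the
+ # scaled low-rank Plora_B(Plora_A(x)) update only where im_mask is True.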
config.json ADDED
@@ -0,0 +1,36 @@
+ {
+   "architectures": [
+     "InternLMXComposer2ForCausalLM"
+   ],
+   "auto_map": {
+     "AutoConfig": "configuration_internlm_xcomposer2.InternLMXcomposer2Config",
+     "AutoModel": "modeling_internlm_xcomposer2.InternLMXComposer2ForCausalLM",
+     "AutoModelForCausalLM": "modeling_internlm_xcomposer2.InternLMXComposer2ForCausalLM"
+   },
+   "bias": false,
+   "bos_token_id": 1,
+   "eos_token_id": 2,
+   "hidden_act": "silu",
+   "hidden_size": 4096,
+   "initializer_range": 0.02,
+   "intermediate_size": 14336,
+   "max_length": 4096,
+   "max_position_embeddings": 32768,
+   "model_type": "internlmxcomposer2",
+   "num_attention_heads": 32,
+   "num_hidden_layers": 32,
+   "num_key_value_heads": 8,
+   "pad_token_id": 2,
+   "rms_norm_eps": 1e-05,
+   "rope_scaling": {
+     "factor": 1.0,
+     "type": "dynamic"
+   },
+   "rope_theta": 1000000,
+   "tie_word_embeddings": false,
+   "torch_dtype": "bfloat16",
+   "transformers_version": "4.33.1",
+   "use_cache": false,
+   "img_size": 490
+ }
configuration_internlm_xcomposer2.py ADDED
@@ -0,0 +1,159 @@
+ # coding=utf-8
+ # Copyright (c) InternLM. All rights reserved.
+ #
+ # This code is based on EleutherAI's GPT-NeoX library and the GPT-NeoX
+ # and OPT implementations in this library. It has been modified from its
+ # original forms to accommodate minor architectural differences compared
+ # to GPT-NeoX and OPT used by the Meta AI team that trained the model.
+ #
+ # Licensed under the Apache License, Version 2.0 (the "License");
+ # you may not use this file except in compliance with the License.
+ # You may obtain a copy of the License at
+ #
+ #     http://www.apache.org/licenses/LICENSE-2.0
+ #
+ # Unless required by applicable law or agreed to in writing, software
+ # distributed under the License is distributed on an "AS IS" BASIS,
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ # See the License for the specific language governing permissions and
+ # limitations under the License.
+ """InternLM model configuration"""
+
+ from transformers.configuration_utils import PretrainedConfig
+ from transformers.utils import logging
+
+ logger = logging.get_logger(__name__)
+
+ INTERNLM_PRETRAINED_CONFIG_ARCHIVE_MAP = {}
+
+
+ class InternLMXcomposer2Config(PretrainedConfig):
+     r"""
+     This is the configuration class to store the configuration of a [`InternLMModel`]. It is used to instantiate
+     an InternLM model according to the specified arguments, defining the model architecture. Instantiating a
+     configuration with the defaults will yield a configuration similar to that of InternLM-7B.
+
+     Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
+     documentation from [`PretrainedConfig`] for more information.
+
+
+     Args:
+         vocab_size (`int`, *optional*, defaults to 103168):
+             Vocabulary size of the InternLM model. Defines the number of different tokens that can be represented by
+             the `inputs_ids` passed when calling [`InternLMModel`]
+         hidden_size (`int`, *optional*, defaults to 4096):
+             Dimension of the hidden representations.
+         intermediate_size (`int`, *optional*, defaults to 11008):
+             Dimension of the MLP representations.
+         num_hidden_layers (`int`, *optional*, defaults to 32):
+             Number of hidden layers in the Transformer encoder.
+         num_attention_heads (`int`, *optional*, defaults to 32):
+             Number of attention heads for each attention layer in the Transformer encoder.
+         num_key_value_heads (`int`, *optional*):
+             This is the number of key_value heads that should be used to implement Grouped Query Attention. If
+             `num_key_value_heads=num_attention_heads`, the model will use Multi Head Attention (MHA); if
+             `num_key_value_heads=1`, the model will use Multi Query Attention (MQA); otherwise GQA is used. When
+             converting a multi-head checkpoint to a GQA checkpoint, each group key and value head should be
+             constructed by mean-pooling all the original heads within that group. For more details, check out
+             [this paper](https://arxiv.org/pdf/2305.13245.pdf). If not specified, defaults to
+             `num_attention_heads`.
+         hidden_act (`str` or `function`, *optional*, defaults to `"silu"`):
+             The non-linear activation function (function or string) in the decoder.
+         max_position_embeddings (`int`, *optional*, defaults to 2048):
+             The maximum sequence length that this model might ever be used with. Typically set this to something
+             large just in case (e.g., 512 or 1024 or 2048).
+         initializer_range (`float`, *optional*, defaults to 0.02):
+             The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
+         rms_norm_eps (`float`, *optional*, defaults to 1e-6):
+             The epsilon used by the rms normalization layers.
+         use_cache (`bool`, *optional*, defaults to `True`):
+             Whether or not the model should return the last key/values attentions (not used by all models). Only
+             relevant if `config.is_decoder=True`.
+         tie_word_embeddings (`bool`, *optional*, defaults to `False`):
+             Whether to tie weight embeddings
+     Example:
+
+     ```python
+     >>> from transformers import InternLMModel, InternLMConfig
+
+     >>> # Initializing an InternLM internlm-7b style configuration
+     >>> configuration = InternLMConfig()
+
+     >>> # Initializing a model from the internlm-7b style configuration
+     >>> model = InternLMModel(configuration)
+
+     >>> # Accessing the model configuration
+     >>> configuration = model.config
+     ```"""
+     model_type = "internlm"
+     _auto_class = "AutoConfig"
+
+     def __init__(  # pylint: disable=W0102
+         self,
+         vocab_size=103168,
+         hidden_size=4096,
+         intermediate_size=11008,
+         num_hidden_layers=32,
+         num_attention_heads=32,
+         num_key_value_heads=None,
+         hidden_act="silu",
+         max_position_embeddings=2048,
+         initializer_range=0.02,
+         rms_norm_eps=1e-6,
+         use_cache=True,
+         pad_token_id=0,
+         bos_token_id=1,
+         eos_token_id=2,
+         tie_word_embeddings=False,
+         bias=True,
+         rope_theta=10000,
+         rope_scaling=None,
+         **kwargs,
+     ):
+         self.vocab_size = vocab_size
+         self.max_position_embeddings = max_position_embeddings
+         self.hidden_size = hidden_size
+         self.intermediate_size = intermediate_size
+         self.num_hidden_layers = num_hidden_layers
+         self.num_attention_heads = num_attention_heads
+         self.bias = bias
+
+         if num_key_value_heads is None:
+             num_key_value_heads = num_attention_heads
+         self.num_key_value_heads = num_key_value_heads
+
+         self.hidden_act = hidden_act
+         self.initializer_range = initializer_range
+         self.rms_norm_eps = rms_norm_eps
+         self.use_cache = use_cache
+         self.rope_theta = rope_theta
+         self.rope_scaling = rope_scaling
+         self._rope_scaling_validation()
+         super().__init__(
+             pad_token_id=pad_token_id,
+             bos_token_id=bos_token_id,
+             eos_token_id=eos_token_id,
+             tie_word_embeddings=tie_word_embeddings,
+             **kwargs,
+         )
+
+     def _rope_scaling_validation(self):
+         """Validate the `rope_scaling` configuration."""
+         if self.rope_scaling is None:
+             return
+
+         if not isinstance(self.rope_scaling, dict) or len(self.rope_scaling) != 2:
+             raise ValueError(
+                 "`rope_scaling` must be a dictionary with two fields, `type` and `factor`, "
+                 f"got {self.rope_scaling}"
+             )
+         rope_scaling_type = self.rope_scaling.get("type", None)
+         rope_scaling_factor = self.rope_scaling.get("factor", None)
+         if rope_scaling_type is None or rope_scaling_type not in ["linear", "dynamic"]:
+             raise ValueError(
+                 f"`rope_scaling`'s type field must be one of ['linear', 'dynamic'], got {rope_scaling_type}"
+             )
+         if rope_scaling_factor is None or not isinstance(rope_scaling_factor, float) or rope_scaling_factor < 1.0:
+             raise ValueError(f"`rope_scaling`'s factor field must be a float >= 1, got {rope_scaling_factor}")
generation_config.json ADDED
@@ -0,0 +1,9 @@
+ {
+   "_from_model_config": true,
+   "bos_token_id": 1,
+   "eos_token_id": 2,
+   "max_length": 1600,
+   "pad_token_id": 2,
+   "transformers_version": "4.33.1",
+   "use_cache": false
+ }
modeling_internlm2.py ADDED
@@ -0,0 +1,965 @@
+ # Copyright (c) InternLM. All rights reserved.
+ #
+ # This code is based on EleutherAI's GPT-NeoX library and the GPT-NeoX
+ # and OPT implementations in this library. It has been modified from its
+ # original forms to accommodate minor architectural differences compared
+ # to GPT-NeoX and OPT used by the Meta AI team that trained the model.
+ #
+ # Licensed under the Apache License, Version 2.0 (the "License");
+ # you may not use this file except in compliance with the License.
+ # You may obtain a copy of the License at
+ #
+ #     http://www.apache.org/licenses/LICENSE-2.0
+ #
+ # Unless required by applicable law or agreed to in writing, software
+ # distributed under the License is distributed on an "AS IS" BASIS,
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ # See the License for the specific language governing permissions and
+ # limitations under the License.
+ """PyTorch InternLM2 model."""
+ import math
+ import warnings
+ from typing import List, Optional, Tuple, Union
+
+ import torch
+ import torch.utils.checkpoint
+ from einops import rearrange
+ from torch import nn
+ from transformers.activations import ACT2FN
+ from transformers.modeling_outputs import BaseModelOutputWithPast
+ from transformers.modeling_utils import PreTrainedModel
+ from transformers.utils import (add_start_docstrings,
+                                 add_start_docstrings_to_model_forward, logging)
+
+ try:
+     from transformers.generation.streamers import BaseStreamer
+ except ImportError:  # noqa
+     BaseStreamer = None
+
+ from .build_mlp import PLoRA
+ from .configuration_internlm_xcomposer2 import InternLMXcomposer2Config as InternLM2Config
+
+ logger = logging.get_logger(__name__)
+
+ _CONFIG_FOR_DOC = 'InternLM2Config'
+
+
+ # Copied from transformers.models.bart.modeling_bart._make_causal_mask
+ def _make_causal_mask(input_ids_shape: torch.Size,
+                       dtype: torch.dtype,
+                       device: torch.device,
+                       past_key_values_length: int = 0):
+     """Make causal mask used for bi-directional self-attention."""
+     bsz, tgt_len = input_ids_shape
+     mask = torch.full((tgt_len, tgt_len),
+                       torch.tensor(torch.finfo(dtype).min, device=device),
+                       device=device)
+     mask_cond = torch.arange(mask.size(-1), device=device)
+     mask.masked_fill_(mask_cond < (mask_cond + 1).view(mask.size(-1), 1), 0)
+     mask = mask.to(dtype)
+
+     if past_key_values_length > 0:
+         mask = torch.cat([
+             torch.zeros(
+                 tgt_len, past_key_values_length, dtype=dtype, device=device),
+             mask
+         ], dim=-1)
+     return mask[None, None, :, :].expand(bsz, 1, tgt_len,
+                                          tgt_len + past_key_values_length)
+
+
+ # Copied from transformers.models.bart.modeling_bart._expand_mask
+ def _expand_mask(mask: torch.Tensor,
+                  dtype: torch.dtype,
+                  tgt_len: Optional[int] = None):
+     """Expands attention_mask from `[bsz, seq_len]` to `[bsz, 1, tgt_seq_len, src_seq_len]`."""
+     bsz, src_len = mask.size()
+     tgt_len = tgt_len if tgt_len is not None else src_len
+
+     expanded_mask = mask[:, None, None, :].expand(bsz, 1, tgt_len,
+                                                   src_len).to(dtype)
+
+     inverted_mask = 1.0 - expanded_mask
+
+     return inverted_mask.masked_fill(
+         inverted_mask.to(torch.bool),
+         torch.finfo(dtype).min)
+
+
+ class InternLM2RMSNorm(nn.Module):
+
+     def __init__(self, hidden_size, eps=1e-6):
+         """InternLM2RMSNorm is equivalent to T5LayerNorm."""
+         super().__init__()
+         self.weight = nn.Parameter(torch.ones(hidden_size))
+         self.variance_epsilon = eps
+
+     def forward(self, hidden_states):
+         input_dtype = hidden_states.dtype
+         hidden_states = hidden_states.to(torch.float32)
+         variance = hidden_states.pow(2).mean(-1, keepdim=True)
+         hidden_states = hidden_states * torch.rsqrt(variance +
+                                                     self.variance_epsilon)
+         return self.weight * hidden_states.to(input_dtype)
+
+
+ class InternLM2RotaryEmbedding(nn.Module):
+
+     def __init__(self,
+                  dim,
+                  max_position_embeddings=2048,
+                  base=10000,
+                  device=None):
+         super().__init__()
+
+         self.dim = dim
+         self.max_position_embeddings = max_position_embeddings
+         self.base = base
+         inv_freq = 1.0 / (
+             self.base
+             **(torch.arange(0, self.dim, 2).float().to(device) / self.dim))
+         self.register_buffer('inv_freq', inv_freq, persistent=False)
+
+         # Build here to make `torch.jit.trace` work.
+         self._set_cos_sin_cache(
+             seq_len=max_position_embeddings,
+             device=self.inv_freq.device,
+             dtype=torch.get_default_dtype())
+
+     def _set_cos_sin_cache(self, seq_len, device, dtype):
+         self.max_seq_len_cached = seq_len
+         t = torch.arange(
+             self.max_seq_len_cached, device=device, dtype=self.inv_freq.dtype)
+
+         freqs = torch.einsum('i,j->ij', t, self.inv_freq)
+         # Different from paper, but it uses a different permutation in order to obtain the same calculation
+         emb = torch.cat((freqs, freqs), dim=-1)
+         self.register_buffer(
+             'cos_cached', emb.cos().to(dtype), persistent=False)
+         self.register_buffer(
+             'sin_cached', emb.sin().to(dtype), persistent=False)
+
+     def forward(self, x, seq_len=None):
+         # x: [bs, num_attention_heads, seq_len, head_size]
+         if seq_len > self.max_seq_len_cached:
+             self._set_cos_sin_cache(
+                 seq_len=seq_len, device=x.device, dtype=x.dtype)
+
+         return (
+             self.cos_cached[:seq_len].to(dtype=x.dtype),
+             self.sin_cached[:seq_len].to(dtype=x.dtype),
+         )
+
+
+ class InternLM2LinearScalingRotaryEmbedding(InternLM2RotaryEmbedding):
+     """InternLM2RotaryEmbedding extended with linear scaling.
+
+     Credits to the Reddit user /u/kaiokendev
+     """
+
+     def __init__(self,
+                  dim,
+                  max_position_embeddings=2048,
+                  base=10000,
+                  device=None,
+                  scaling_factor=1.0):
+         self.scaling_factor = scaling_factor
+         super().__init__(dim, max_position_embeddings, base, device)
+
+     def _set_cos_sin_cache(self, seq_len, device, dtype):
+         self.max_seq_len_cached = seq_len
+         t = torch.arange(
+             self.max_seq_len_cached, device=device, dtype=self.inv_freq.dtype)
+         t = t / self.scaling_factor
+
+         freqs = torch.einsum('i,j->ij', t, self.inv_freq)
+         # Different from paper, but it uses a different permutation in order to obtain the same calculation
+         emb = torch.cat((freqs, freqs), dim=-1)
+         self.register_buffer(
+             'cos_cached', emb.cos().to(dtype), persistent=False)
+         self.register_buffer(
+             'sin_cached', emb.sin().to(dtype), persistent=False)
+
+
+ class InternLM2DynamicNTKScalingRotaryEmbedding(InternLM2RotaryEmbedding):
+     """InternLM2RotaryEmbedding extended with Dynamic NTK scaling.
+
+     Credits to the Reddit users /u/bloc97 and /u/emozilla.
+     """
+
+     def __init__(self,
+                  dim,
+                  max_position_embeddings=2048,
+                  base=10000,
+                  device=None,
+                  scaling_factor=1.0):
+         self.scaling_factor = scaling_factor
+         super().__init__(dim, max_position_embeddings, base, device)
+
+     def _set_cos_sin_cache(self, seq_len, device, dtype):
+         self.max_seq_len_cached = seq_len
+
+         if seq_len > self.max_position_embeddings:
+             base = self.base * ((self.scaling_factor * seq_len /
+                                  self.max_position_embeddings) -
+                                 (self.scaling_factor - 1))**(
+                                     self.dim / (self.dim - 2))
+             inv_freq = 1.0 / (
+                 base
+                 **(torch.arange(0, self.dim, 2).float().to(device) / self.dim))
+             self.register_buffer('inv_freq', inv_freq, persistent=False)
+
+         t = torch.arange(
+             self.max_seq_len_cached, device=device, dtype=self.inv_freq.dtype)
+
+         freqs = torch.einsum('i,j->ij', t, self.inv_freq)
+         # Different from paper, but it uses a different permutation in order to obtain the same calculation
+         emb = torch.cat((freqs, freqs), dim=-1)
+         self.register_buffer(
+             'cos_cached', emb.cos().to(dtype), persistent=False)
+         self.register_buffer(
+             'sin_cached', emb.sin().to(dtype), persistent=False)
+
+
+ def rotate_half(x):
+     """Rotates half the hidden dims of the input."""
+     x1 = x[..., :x.shape[-1] // 2]
+     x2 = x[..., x.shape[-1] // 2:]
+     return torch.cat((-x2, x1), dim=-1)
+
+
+ def apply_rotary_pos_emb(q, k, cos, sin, position_ids):
+     # The first two dimensions of cos and sin are always 1, so we can `squeeze` them.
+     cos = cos.squeeze(1).squeeze(0)  # [seq_len, dim]
+     sin = sin.squeeze(1).squeeze(0)  # [seq_len, dim]
+     cos = cos.unsqueeze(0).unsqueeze(0).expand(len(position_ids), -1, -1, -1)
+     sin = sin.unsqueeze(0).unsqueeze(0).expand(len(position_ids), -1, -1, -1)
+     if q.size(2) == 1:
+         q_embed = (q * cos[:, :, -1:, :]) + (
+             rotate_half(q) * sin[:, :, -1:, :])
+     else:
+         q_embed = (q * cos) + (rotate_half(q) * sin)
+
+     if k.size(2) == 1:
+         k_embed = (k * cos[:, :, -1:, :]) + (
+             rotate_half(k) * sin[:, :, -1:, :])
+     else:
+         k_embed = (k * cos) + (rotate_half(k) * sin)
+
+     return q_embed, k_embed
+
+
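+ # Shape summary for apply_rotary_pos_emb: q and k arrive as
+ # (batch, num_heads, seq_len, head_dim); cos/sin are broadcast to that shape,
+ # and the size(2) == 1 branches handle single-token decoding with a KV cache,
+ # where only the last position's cos/sin row applies.
+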
+ class InternLM2MLP(nn.Module):
+
+     def __init__(self, config):
+         super().__init__()
+         self.config = config
+         self.hidden_size = config.hidden_size
+         self.intermediate_size = config.intermediate_size
+
+         # w1/w3 are the gate/up projections and w2 the down projection of a
+         # SwiGLU-style MLP; all three carry a partial-LoRA branch for image tokens
+         self.w1 = PLoRA(
+             self.hidden_size,
+             self.intermediate_size,
+             bias=False,
+             lora_r=256,
+             lora_alpha=256,
+             lora_len=576)
+         self.w3 = PLoRA(
+             self.hidden_size,
+             self.intermediate_size,
+             bias=False,
+             lora_r=256,
+             lora_alpha=256,
+             lora_len=576)
+         self.w2 = PLoRA(
+             self.intermediate_size,
+             self.hidden_size,
+             bias=False,
+             lora_r=256,
+             lora_alpha=256,
+             lora_len=576)
+
+         self.act_fn = ACT2FN[config.hidden_act]
+
+     def forward(self, x, im_mask):
+         down_proj = self.w2(
+             self.act_fn(self.w1(x, im_mask)) * self.w3(x, im_mask), im_mask)
+
+         return down_proj
+
+
+ def repeat_kv(hidden_states: torch.Tensor, n_rep: int) -> torch.Tensor:
+     """This is the equivalent of torch.repeat_interleave(x, dim=1, repeats=n_rep).
+
+     The hidden states go from (batch, num_key_value_heads, seqlen, head_dim) to
+     (batch, num_attention_heads, seqlen, head_dim)
+     """
+     batch, num_key_value_heads, slen, head_dim = hidden_states.shape
+     if n_rep == 1:
+         return hidden_states
+     hidden_states = hidden_states[:, :, None, :, :].expand(
+         batch, num_key_value_heads, n_rep, slen, head_dim)
+     return hidden_states.reshape(batch, num_key_value_heads * n_rep, slen,
+                                  head_dim)
+
+
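+ # Note on the packed qkv layout used below: `wqkv` produces, per key/value-head
+ # group, `num_key_value_groups` query heads followed by one key and one value
+ # head. With this checkpoint's config (32 attention heads, 8 key/value heads),
+ # each of the 8 groups holds 4 query heads + 1 key + 1 value, i.e. gs = 6.
+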
+ class InternLM2Attention(nn.Module):
+     """Multi-headed attention from 'Attention Is All You Need' paper."""
+
+     def __init__(self, config: InternLM2Config):
+         super().__init__()
+         self.config = config
+         self.hidden_size = config.hidden_size
+         self.num_heads = config.num_attention_heads
+         self.head_dim = self.hidden_size // self.num_heads
+         self.num_key_value_heads = config.num_key_value_heads
+         self.num_key_value_groups = self.num_heads // self.num_key_value_heads
+         self.max_position_embeddings = config.max_position_embeddings
+         self.is_causal = True
+
+         if (self.head_dim * self.num_heads) != self.hidden_size:
+             raise ValueError(
+                 f'hidden_size must be divisible by num_heads (got `hidden_size`: {self.hidden_size}'
+                 f' and `num_heads`: {self.num_heads}).')
+
+         self.wqkv = PLoRA(
+             self.hidden_size,
+             (self.num_heads + 2 * self.num_key_value_heads) * self.head_dim,
+             bias=config.bias,
+             lora_r=256,
+             lora_alpha=256,
+             lora_len=576)
+
+         self.wo = PLoRA(
+             self.num_heads * self.head_dim,
+             self.hidden_size,
+             bias=config.bias,
+             lora_r=256,
+             lora_alpha=256,
+             lora_len=576)
+         self._init_rope()
+
+     def _init_rope(self):
+         if self.config.rope_scaling is None:
+             self.rotary_emb = InternLM2RotaryEmbedding(
+                 self.head_dim,
+                 max_position_embeddings=self.max_position_embeddings,
+                 base=self.config.rope_theta,
+             )
+         else:
+             scaling_type = self.config.rope_scaling['type']
+             scaling_factor = self.config.rope_scaling['factor']
+             if scaling_type == 'dynamic':
+                 self.rotary_emb = InternLM2DynamicNTKScalingRotaryEmbedding(
+                     self.head_dim,
+                     max_position_embeddings=self.max_position_embeddings,
+                     base=self.config.rope_theta,
+                     scaling_factor=scaling_factor)
+             else:
+                 raise ValueError(
+                     "Currently we only support rotary embedding's type being 'dynamic'."
+                 )
+         return self.rotary_emb
+
+     def _shape(self, tensor: torch.Tensor, seq_len: int, bsz: int):
+         return tensor.view(bsz, seq_len, self.num_heads,
+                            self.head_dim).transpose(1, 2).contiguous()
+
+     def forward(
+         self,
+         hidden_states: torch.Tensor,
+         attention_mask: Optional[torch.Tensor] = None,
+         position_ids: Optional[torch.LongTensor] = None,
+         past_key_value: Optional[Tuple[torch.Tensor]] = None,
+         output_attentions: bool = False,
+         use_cache: bool = False,
+         im_mask: Optional[Tuple[torch.Tensor]] = None,
+         **kwargs,
+     ) -> Tuple[torch.Tensor, Optional[torch.Tensor],
+                Optional[Tuple[torch.Tensor]]]:
+         if 'padding_mask' in kwargs:
+             warnings.warn(
+                 'Passing `padding_mask` is deprecated and will be removed in v4.37. '
+                 'Please make sure to use `attention_mask` instead.')
+
+         bsz, q_len, _ = hidden_states.size()
+
+         qkv_states = self.wqkv(hidden_states, im_mask)
+
+         qkv_states = rearrange(
+             qkv_states,
+             'b q (h gs d) -> b q h gs d',
+             gs=2 + self.num_key_value_groups,
+             d=self.head_dim,
+         )
+
+         query_states = qkv_states[..., :self.num_key_value_groups, :]
+         query_states = rearrange(query_states, 'b q h gs d -> b q (h gs) d')
+         key_states = qkv_states[..., -2, :]
+         value_states = qkv_states[..., -1, :]
+
+         query_states = query_states.transpose(1, 2)
+         key_states = key_states.transpose(1, 2)
+         value_states = value_states.transpose(1, 2)
+
+         kv_seq_len = key_states.shape[-2]
+         if past_key_value is not None:
+             kv_seq_len += past_key_value[0].shape[-2]
+         cos, sin = self.rotary_emb(value_states, seq_len=kv_seq_len)
+         query_states, key_states = apply_rotary_pos_emb(
+             query_states, key_states, cos, sin, position_ids)
+
+         if past_key_value is not None:
+             # reuse k, v, self_attention
+             key_states = torch.cat([past_key_value[0], key_states], dim=2)
+             value_states = torch.cat([past_key_value[1], value_states], dim=2)
+
+         past_key_value = (key_states, value_states) if use_cache else None
+
+         key_states = repeat_kv(key_states, self.num_key_value_groups)
+         value_states = repeat_kv(value_states, self.num_key_value_groups)
+
+         attn_weights = torch.matmul(query_states, key_states.transpose(
+             2, 3)) / math.sqrt(self.head_dim)
+
+         if attn_weights.size() != (bsz, self.num_heads, q_len, kv_seq_len):
+             raise ValueError(
+                 f'Attention weights should be of size {(bsz, self.num_heads, q_len, kv_seq_len)}, but is'
+                 f' {attn_weights.size()}')
+
+         if attention_mask is not None:
+             if attention_mask.size() != (bsz, 1, q_len, kv_seq_len):
+                 raise ValueError(
+                     f'Attention mask should be of size {(bsz, 1, q_len, kv_seq_len)}, but is {attention_mask.size()}'
+                 )
+             attn_weights = attn_weights + attention_mask
+
+         # upcast attention to fp32
+         attn_weights = nn.functional.softmax(
+             attn_weights, dim=-1, dtype=torch.float32).to(query_states.dtype)
+         attn_output = torch.matmul(attn_weights, value_states)
+
+         if attn_output.size() != (bsz, self.num_heads, q_len, self.head_dim):
+             raise ValueError(
+                 f'`attn_output` should be of size {(bsz, self.num_heads, q_len, self.head_dim)}, but is'
+                 f' {attn_output.size()}')
+
+         attn_output = attn_output.transpose(1, 2).contiguous()
+         attn_output = attn_output.reshape(bsz, q_len, self.hidden_size)
+
+         attn_output = self.wo(attn_output, im_mask)
+
+         if not output_attentions:
+             attn_weights = None
+
+         return attn_output, attn_weights, past_key_value
+
+
+ class InternLM2FlashAttention2(InternLM2Attention):
+     """InternLM2 flash attention module.
+
+     This module inherits from `InternLM2Attention` as the weights of the module
+     stay untouched. The only required change would be on the forward pass,
+     where it needs to correctly call the public API of flash attention and deal
+     with padding tokens in case the input contains any of them.
+     """
+
+     def forward(
+         self,
+         hidden_states: torch.Tensor,
+         attention_mask: Optional[torch.LongTensor] = None,
+         position_ids: Optional[torch.LongTensor] = None,
+         past_key_value: Optional[Tuple[torch.Tensor]] = None,
+         output_attentions: bool = False,
+         use_cache: bool = False,
+         im_mask: Optional[Tuple[torch.Tensor]] = None,
+         **kwargs,
+     ) -> Tuple[torch.Tensor, Optional[torch.Tensor],
+                Optional[Tuple[torch.Tensor]]]:
+         # InternLM2FlashAttention2 attention does not support output_attentions
+         if 'padding_mask' in kwargs:
+             warnings.warn(
+                 'Passing `padding_mask` is deprecated and will be removed in v4.37. '
+                 'Please make sure to use `attention_mask` instead.')
+
+             # overwrite attention_mask with padding_mask
+             attention_mask = kwargs.pop('padding_mask')
+
+         output_attentions = False
+
+         bsz, q_len, _ = hidden_states.size()
+
+         qkv_states = self.wqkv(hidden_states, im_mask)
+
+         # same packed layout as in InternLM2Attention: gs = 2 + groups
+         qkv_states = rearrange(
+             qkv_states,
+             'b q (h gs d) -> b q h gs d',
+             gs=2 + self.num_key_value_groups,
+             d=self.head_dim,
+         )
+
+         query_states = qkv_states[..., :self.num_key_value_groups, :]
+         query_states = rearrange(query_states, 'b q h gs d -> b q (h gs) d')
+         key_states = qkv_states[..., -2, :]
+         value_states = qkv_states[..., -1, :]
+
+         query_states = query_states.transpose(1, 2)
+         key_states = key_states.transpose(1, 2)
+         value_states = value_states.transpose(1, 2)
+
+         kv_seq_len = key_states.shape[-2]
+         if past_key_value is not None:
+             kv_seq_len += past_key_value[0].shape[-2]
+
+         cos, sin = self.rotary_emb(value_states, seq_len=kv_seq_len)
+
+         query_states, key_states = apply_rotary_pos_emb(
+             query_states, key_states, cos, sin, position_ids)
+
+         if past_key_value is not None:
+             # reuse k, v, self_attention
+             key_states = torch.cat([past_key_value[0], key_states], dim=2)
+             value_states = torch.cat([past_key_value[1], value_states], dim=2)
+
+         past_key_value = (key_states, value_states) if use_cache else None
+
+         # flash attention expects (batch, seq_len, num_heads, head_dim)
+         query_states = query_states.transpose(1, 2)
+         key_states = key_states.transpose(1, 2)
+         value_states = value_states.transpose(1, 2)
+
+         # `attention_dropout` must be defined on the module/config for training
+         dropout_rate = 0.0 if not self.training else self.attention_dropout
+
+         # In PEFT, usually we cast the layer norms in float32 for training stability reasons,
+         # therefore the input hidden states get silently cast in float32. Hence, we need
+         # to cast them back to the correct dtype just to be sure everything works as expected.
+         # This might slow down training & inference so it is recommended to not cast the LayerNorms
+         # in fp32. (InternLM2RMSNorm handles it correctly)
+
+         input_dtype = query_states.dtype
+         if input_dtype == torch.float32:
+             # Handle the case where the model is quantized
+             if hasattr(self.config, '_pre_quantization_dtype'):
+                 target_dtype = self.config._pre_quantization_dtype
+             else:
+                 target_dtype = self.wqkv.weight.dtype
+
+             logger.warning_once(
+                 f'The input hidden states seem to be silently cast to float32; this might be related to'
+                 f' the fact that you have upcast embedding or layer norm layers in float32. We will cast back'
+                 f' the input to {target_dtype}.')
+
+             query_states = query_states.to(target_dtype)
+             key_states = key_states.to(target_dtype)
+             value_states = value_states.to(target_dtype)
+
+         attn_output = self._flash_attention_forward(
+             query_states,
+             key_states,
+             value_states,
+             attention_mask,
+             q_len,
+             dropout=dropout_rate)
+
+         attn_output = attn_output.reshape(bsz, q_len,
+                                           self.hidden_size).contiguous()
+         attn_output = self.wo(attn_output, im_mask)
+
+         if not output_attentions:
+             attn_weights = None
+
+         return attn_output, attn_weights, past_key_value
+
+
+ class InternLM2DecoderLayer(nn.Module):
+
+     def __init__(self, config: InternLM2Config):
+         super().__init__()
+         self.hidden_size = config.hidden_size
+         self.attention = (
+             InternLM2Attention(config=config)
+             if not getattr(config, '_flash_attn_2_enabled', False) else
+             InternLM2FlashAttention2(config=config))
+         self.feed_forward = InternLM2MLP(config)
+         self.attention_norm = InternLM2RMSNorm(
+             config.hidden_size, eps=config.rms_norm_eps)
+         self.ffn_norm = InternLM2RMSNorm(
+             config.hidden_size, eps=config.rms_norm_eps)
+
+     def forward(
+         self,
+         hidden_states: torch.Tensor,
+         attention_mask: Optional[torch.Tensor] = None,
+         position_ids: Optional[torch.LongTensor] = None,
+         past_key_value: Optional[Tuple[torch.Tensor]] = None,
+         output_attentions: Optional[bool] = False,
+         use_cache: Optional[bool] = False,
+         im_mask: Optional[Tuple[torch.Tensor]] = None,
+         **kwargs,
+     ) -> Tuple[torch.FloatTensor, Optional[Tuple[torch.FloatTensor,
+                                                  torch.FloatTensor]]]:
+         """
+         Args:
+             hidden_states (`torch.FloatTensor`): input to the layer of shape `(batch, seq_len, embed_dim)`
+             attention_mask (`torch.FloatTensor`, *optional*):
+                 attention mask of size `(batch_size, sequence_length)` if flash attention is used or `(batch_size, 1,
+                 query_sequence_length, key_sequence_length)` if default attention is used.
+             output_attentions (`bool`, *optional*):
+                 Whether or not to return the attentions tensors of all attention layers. See `attentions` under
+                 returned tensors for more detail.
+             use_cache (`bool`, *optional*):
+                 If set to `True`, `past_key_values` key value states are returned and can be used to speed up decoding
+                 (see `past_key_values`).
+             past_key_value (`Tuple(torch.FloatTensor)`, *optional*): cached past key and value projection states
+         """
+         if 'padding_mask' in kwargs:
+             warnings.warn(
+                 'Passing `padding_mask` is deprecated and will be removed in v4.37. '
+                 'Please make sure to use `attention_mask` instead.')
+
+         residual = hidden_states
+
+         hidden_states = self.attention_norm(hidden_states)
+
+         # Self Attention
+         hidden_states, self_attn_weights, present_key_value = self.attention(
+             hidden_states=hidden_states,
+             attention_mask=attention_mask,
+             position_ids=position_ids,
+             past_key_value=past_key_value,
+             output_attentions=output_attentions,
+             use_cache=use_cache,
+             im_mask=im_mask,
+             **kwargs,
+         )
+         hidden_states = residual + hidden_states
+
+         # Fully Connected
+         residual = hidden_states
+         hidden_states = self.ffn_norm(hidden_states)
+         hidden_states = self.feed_forward(hidden_states, im_mask)
+         hidden_states = residual + hidden_states
+
+         outputs = (hidden_states, )
+
+         if output_attentions:
+             outputs += (self_attn_weights, )
+
+         if use_cache:
+             outputs += (present_key_value, )
+
+         return outputs
+
+
+ InternLM2_START_DOCSTRING = r"""
+     This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
+     library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning
+     heads, etc.)
+
+     This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
+     Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
+     and behavior.
+
+     Parameters:
+         config ([`InternLM2Config`]):
+             Model configuration class with all the parameters of the model. Initializing with a config file does not
+             load the weights associated with the model, only the configuration. Check out the
+             [`~PreTrainedModel.from_pretrained`] method to load the model weights.
+ """
+
+
671
+ @add_start_docstrings(
672
+ 'The bare InternLM2 Model outputting raw hidden-states without any specific head on top.',
673
+ InternLM2_START_DOCSTRING,
674
+ )
675
+ class InternLM2PreTrainedModel(PreTrainedModel):
676
+ config_class = InternLM2Config
677
+ base_model_prefix = 'model'
678
+ supports_gradient_checkpointing = True
679
+ _no_split_modules = ['InternLM2DecoderLayer']
680
+ _skip_keys_device_placement = 'past_key_values'
681
+ _supports_flash_attn_2 = True
682
+
683
+ def _init_weights(self, module):
684
+ std = self.config.initializer_range
685
+ if isinstance(module, nn.Linear):
686
+ module.weight.data.normal_(mean=0.0, std=std)
687
+ if module.bias is not None:
688
+ module.bias.data.zero_()
689
+ elif isinstance(module, nn.Embedding):
690
+ module.weight.data.normal_(mean=0.0, std=std)
691
+ if module.padding_idx is not None:
692
+ module.weight.data[module.padding_idx].zero_()
693
+
694
+
695
+ InternLM2_INPUTS_DOCSTRING = r"""
696
+ Args:
697
+ input_ids (`torch.LongTensor` of shape `(batch_size, sequence_length)`):
698
+ Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide
699
+ it.
700
+
701
+ Indices can be obtained using [`AutoTokenizer`]. See [`PreTrainedTokenizer.encode`] and
702
+ [`PreTrainedTokenizer.__call__`] for details.
703
+
704
+ [What are input IDs?](../glossary#input-ids)
705
+ attention_mask (`torch.Tensor` of shape `(batch_size, sequence_length)`, *optional*):
706
+ Mask to avoid performing attention on padding token indices. Mask values selected in `[0, 1]`:
707
+
708
+ - 1 for tokens that are **not masked**,
709
+ - 0 for tokens that are **masked**.
710
+
711
+ [What are attention masks?](../glossary#attention-mask)
712
+
713
+ Indices can be obtained using [`AutoTokenizer`]. See [`PreTrainedTokenizer.encode`] and
714
+ [`PreTrainedTokenizer.__call__`] for details.
715
+
716
+ If `past_key_values` is used, optionally only the last `input_ids` have to be input (see
717
+ `past_key_values`).
718
+
719
+ If you want to change padding behavior, you should read [`modeling_opt._prepare_decoder_attention_mask`]
720
+ and modify to your needs. See diagram 1 in [the paper](https://arxiv.org/abs/1910.13461) for more
721
+ information on the default strategy.
722
+
723
+ - 1 indicates the head is **not masked**,
724
+ - 0 indicates the head is **masked**.
725
+ position_ids (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*):
726
+ Indices of positions of each input sequence tokens in the position embeddings. Selected in the range `[0,
727
+ config.n_positions - 1]`.
728
+
729
+ [What are position IDs?](../glossary#position-ids)
730
+ past_key_values (`tuple(tuple(torch.FloatTensor))`, *optional*, returned when `use_cache=True` is passed or
731
+ when `config.use_cache=True`):
732
+ Tuple of `tuple(torch.FloatTensor)` of length `config.n_layers`, with each tuple having 2 tensors of shape
733
+ `(batch_size, num_heads, sequence_length, embed_size_per_head)`) and 2 additional tensors of shape
734
+ `(batch_size, num_heads, decoder_sequence_length, embed_size_per_head)`.
735
+
736
+ Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention
737
+ blocks) that can be used (see `past_key_values` input) to speed up sequential decoding.
738
+
739
+ If `past_key_values` are used, the user can optionally input only the last `input_ids` (those that don't
740
+ have their past key value states given to this model) of shape `(batch_size, 1)` instead of all `input_ids`
741
+ of shape `(batch_size, sequence_length)`.
742
+ inputs_embeds (`torch.FloatTensor` of shape `(batch_size, sequence_length, hidden_size)`, *optional*):
743
+ Optionally, instead of passing `input_ids` you can choose to directly pass an embedded representation. This
744
+ is useful if you want more control over how to convert `input_ids` indices into associated vectors than the
745
+ model's internal embedding lookup matrix.
746
+ use_cache (`bool`, *optional*):
747
+ If set to `True`, `past_key_values` key value states are returned and can be used to speed up decoding (see
748
+ `past_key_values`).
749
+ output_attentions (`bool`, *optional*):
750
+ Whether or not to return the attentions tensors of all attention layers. See `attentions` under returned
751
+ tensors for more detail.
752
+ output_hidden_states (`bool`, *optional*):
753
+ Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors for
754
+ more detail.
755
+ return_dict (`bool`, *optional*):
756
+ Whether or not to return a [`~utils.ModelOutput`] instead of a plain tuple.
757
+ """
758
+
759
+
760
+ @add_start_docstrings(
761
+ 'The bare InternLM2 Model outputting raw hidden-states without any specific head on top.',
762
+ InternLM2_START_DOCSTRING,
763
+ )
764
+ class InternLM2Model(InternLM2PreTrainedModel):
765
+ """Transformer decoder consisting of *config.num_hidden_layers* layers.
766
+ Each layer is a [`InternLM2DecoderLayer`]
767
+
768
+ Args:
769
+ config: InternLM2Config
770
+ """
771
+
772
+ _auto_class = 'AutoModel'
773
+
774
+ def __init__(self, config: InternLM2Config):
775
+ super().__init__(config)
776
+ self.padding_idx = config.pad_token_id
777
+ self.vocab_size = config.vocab_size
778
+
779
+ self.tok_embeddings = nn.Embedding(config.vocab_size,
780
+ config.hidden_size,
781
+ self.padding_idx)
782
+ self.layers = nn.ModuleList([
783
+ InternLM2DecoderLayer(config)
784
+ for _ in range(config.num_hidden_layers)
785
+ ])
786
+ self.norm = InternLM2RMSNorm(
787
+ config.hidden_size, eps=config.rms_norm_eps)
788
+
789
+ self.gradient_checkpointing = False
790
+ # Initialize weights and apply final processing
791
+ self.post_init()
792
+
793
+ def get_input_embeddings(self):
794
+ return self.tok_embeddings
795
+
796
+ def set_input_embeddings(self, value):
797
+ self.tok_embeddings = value
798
+
799
+ # Copied from transformers.models.bart.modeling_bart.BartDecoder._prepare_decoder_attention_mask
800
+ def _prepare_decoder_attention_mask(self, attention_mask, input_shape,
801
+ inputs_embeds, past_key_values_length):
802
+ # create causal mask
803
+ # [bsz, seq_len] -> [bsz, 1, tgt_seq_len, src_seq_len]
804
+ combined_attention_mask = None
805
+ if input_shape[-1] > 1:
806
+ combined_attention_mask = _make_causal_mask(
807
+ input_shape,
808
+ inputs_embeds.dtype,
809
+ device=inputs_embeds.device,
810
+ past_key_values_length=past_key_values_length,
811
+ )
812
+
813
+ if attention_mask is not None:
814
+ # [bsz, seq_len] -> [bsz, 1, tgt_seq_len, src_seq_len]
815
+ expanded_attn_mask = _expand_mask(
816
+ attention_mask, inputs_embeds.dtype,
817
+ tgt_len=input_shape[-1]).to(inputs_embeds.device)
818
+ combined_attention_mask = (
819
+ expanded_attn_mask if combined_attention_mask is None else
820
+ expanded_attn_mask + combined_attention_mask)
821
+
822
+ return combined_attention_mask
+ 
+     @add_start_docstrings_to_model_forward(InternLM2_INPUTS_DOCSTRING)
+     def forward(self,
+                 input_ids: torch.LongTensor = None,
+                 attention_mask: Optional[torch.Tensor] = None,
+                 position_ids: Optional[torch.LongTensor] = None,
+                 past_key_values: Optional[List[torch.FloatTensor]] = None,
+                 inputs_embeds: Optional[torch.FloatTensor] = None,
+                 use_cache: Optional[bool] = None,
+                 output_attentions: Optional[bool] = None,
+                 output_hidden_states: Optional[bool] = None,
+                 return_dict: Optional[bool] = None,
+                 **kwargs) -> Union[Tuple, BaseModelOutputWithPast]:
+ 
+         im_mask = kwargs.get('im_mask', None)
+ 
+         output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions
+         output_hidden_states = (
+             output_hidden_states if output_hidden_states is not None else
+             self.config.output_hidden_states)
+         use_cache = use_cache if use_cache is not None else self.config.use_cache
+ 
+         return_dict = return_dict if return_dict is not None else self.config.use_return_dict
+ 
+         # retrieve input_ids and inputs_embeds
+         if input_ids is not None and inputs_embeds is not None:
+             raise ValueError(
+                 'You cannot specify both input_ids and inputs_embeds at the same time'
+             )
+         elif input_ids is not None:
+             batch_size, seq_length = input_ids.shape[:2]
+         elif inputs_embeds is not None:
+             batch_size, seq_length = inputs_embeds.shape[:2]
+         else:
+             raise ValueError(
+                 'You have to specify either input_ids or inputs_embeds')
+ 
+         seq_length_with_past = seq_length
+         past_key_values_length = 0
+         if past_key_values is not None:
+             past_key_values_length = past_key_values[0][0].shape[2]
+             seq_length_with_past = seq_length_with_past + past_key_values_length
+ 
+         if position_ids is None:
+             device = input_ids.device if input_ids is not None else inputs_embeds.device
+             position_ids = torch.arange(
+                 past_key_values_length,
+                 seq_length + past_key_values_length,
+                 dtype=torch.long,
+                 device=device)
+             position_ids = position_ids.unsqueeze(0)
+ 
+         if inputs_embeds is None:
+             inputs_embeds = self.tok_embeddings(input_ids)
+             im_mask = torch.zeros(inputs_embeds.shape[:2]).to(
+                 inputs_embeds.device).bool()
+         if attention_mask is None:
+             attention_mask = torch.ones((batch_size, seq_length_with_past),
+                                         dtype=torch.bool,
+                                         device=inputs_embeds.device)
+         attention_mask = self._prepare_decoder_attention_mask(
+             attention_mask, (batch_size, seq_length), inputs_embeds,
+             past_key_values_length)
+ 
+         # embed positions
+         hidden_states = inputs_embeds
+ 
+         if self.gradient_checkpointing and self.training:
+             if use_cache:
+                 logger.warning_once(
+                     '`use_cache=True` is incompatible with gradient checkpointing. Setting `use_cache=False`...'
+                 )
+                 use_cache = False
+ 
+         # decoder layers
+         all_hidden_states = () if output_hidden_states else None
+         all_self_attns = () if output_attentions else None
+         next_decoder_cache = () if use_cache else None
+ 
+         for idx, decoder_layer in enumerate(self.layers):
+             if output_hidden_states:
+                 all_hidden_states += (hidden_states, )
+ 
+             past_key_value = past_key_values[
+                 idx] if past_key_values is not None else None
+ 
+             if self.gradient_checkpointing and self.training:
+ 
+                 def create_custom_forward(module):
+ 
+                     def custom_forward(*inputs):
+                         # None for past_key_value
+                         return module(*inputs, output_attentions, None,
+                                       im_mask)
+ 
+                     return custom_forward
+ 
+                 layer_outputs = torch.utils.checkpoint.checkpoint(
+                     create_custom_forward(decoder_layer),
+                     hidden_states,
+                     attention_mask,
+                     position_ids,
+                     None,
+                 )
+             else:
+                 layer_outputs = decoder_layer(
+                     hidden_states,
+                     attention_mask=attention_mask,
+                     position_ids=position_ids,
+                     past_key_value=past_key_value,
+                     output_attentions=output_attentions,
+                     use_cache=use_cache,
+                     im_mask=im_mask,
+                 )
+ 
+             hidden_states = layer_outputs[0]
+ 
+             if use_cache:
+                 next_decoder_cache += (
+                     layer_outputs[2 if output_attentions else 1], )
+ 
+             if output_attentions:
+                 all_self_attns += (layer_outputs[1], )
+ 
+         hidden_states = self.norm(hidden_states)
+ 
+         # add hidden states from the last decoder layer
+         if output_hidden_states:
+             all_hidden_states += (hidden_states, )
+ 
+         next_cache = next_decoder_cache if use_cache else None
+         if not return_dict:
+             return tuple(
+                 v for v in
+                 [hidden_states, next_cache, all_hidden_states, all_self_attns]
+                 if v is not None)
+         return BaseModelOutputWithPast(
+             last_hidden_state=hidden_states,
+             past_key_values=next_cache,
+             hidden_states=all_hidden_states,
+             attentions=all_self_attns,
+         )
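
A minimal, self-contained sketch of the additive attention mask that `_prepare_decoder_attention_mask` above hands to the decoder layers (pure `torch`; the `-1e9` stand-in for the dtype minimum and all shapes are illustrative):

```python
import torch

bsz, tgt_len = 1, 4
blocked = -1e9  # stand-in for torch.finfo(dtype).min

# Causal part: position i may only attend to positions <= i.
causal = torch.triu(torch.full((tgt_len, tgt_len), blocked), diagonal=1)
causal = causal[None, None].expand(bsz, 1, tgt_len, tgt_len)

# Padding part: expand a [bsz, seq_len] mask (1 = real token, 0 = pad).
padding = torch.tensor([[1, 1, 1, 0]])
expanded = (1.0 - padding[:, None, None, :].float()) * blocked

# Their sum is what forward() passes down: 0 = attend, -1e9 = blocked.
combined = causal + expanded
print(combined[0, 0])
```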
modeling_internlm_xcomposer2.py ADDED
@@ -0,0 +1,608 @@
+ # Copyright (c) InternLM. All rights reserved.
+ #
+ # This code is based on EleutherAI's GPT-NeoX library and the GPT-NeoX
+ # and OPT implementations in this library. It has been modified from its
+ # original forms to accommodate minor architectural differences compared
+ # to GPT-NeoX and OPT used by the Meta AI team that trained the model.
+ #
+ # Licensed under the Apache License, Version 2.0 (the "License");
+ # you may not use this file except in compliance with the License.
+ # You may obtain a copy of the License at
+ #
+ #     http://www.apache.org/licenses/LICENSE-2.0
+ #
+ # Unless required by applicable law or agreed to in writing, software
+ # distributed under the License is distributed on an "AS IS" BASIS,
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ # See the License for the specific language governing permissions and
+ # limitations under the License.
+ """PyTorch InternLMXComposer2 model."""
+ import copy
+ import queue
+ import threading
+ from typing import List, Optional, Tuple, Union
+ 
+ import torch
+ import torch.utils.checkpoint
+ from PIL import Image
+ from torch import nn
+ from torch.nn import CrossEntropyLoss
+ from torchvision import transforms
+ from torchvision.transforms.functional import InterpolationMode
+ from transformers.modeling_outputs import CausalLMOutputWithPast
+ from transformers.utils import (add_start_docstrings_to_model_forward,
+                                 replace_return_docstrings)
+ 
+ try:
+     from transformers.generation.streamers import BaseStreamer
+ except:  # noqa # pylint: disable=bare-except
+     BaseStreamer = None
+ 
+ from .build_mlp import build_vision_projector, build_vision_tower
+ from .configuration_internlm_xcomposer2 import InternLMXcomposer2Config
+ from .modeling_internlm2 import (InternLM2_INPUTS_DOCSTRING, InternLM2Model,
+                                  InternLM2PreTrainedModel)
+ 
+ _CONFIG_FOR_DOC = 'InternLMXcomposer2Config'
+ 
+ 
+ class InternLMXComposer2ForCausalLM(InternLM2PreTrainedModel):
+     _auto_class = 'AutoModelForCausalLM'
+ 
+     _tied_weights_keys = ['output.weight']
+ 
+     def __init__(self, config):
+         super().__init__(config)
+         self.model = InternLM2Model(config)
+         self.vocab_size = config.vocab_size
+         self.output = nn.Linear(
+             config.hidden_size, config.vocab_size, bias=False)
+         self.tokenizer = None
+ 
+         self.max_length = config.max_length
+         print(f'Set max length to {self.max_length}')
+         # Initialize weights and apply final processing
+         self.post_init()
+ 
+         self.vit = build_vision_tower()
+         self.vision_proj = build_vision_projector()
+ 
+         self.vis_processor = transforms.Compose([
+             transforms.Resize((config.img_size, config.img_size),
+                               interpolation=InterpolationMode.BICUBIC),
+             transforms.ToTensor(),
+             transforms.Normalize((0.48145466, 0.4578275, 0.40821073),
+                                  (0.26862954, 0.26130258, 0.27577711)),
+         ])
+ 
+     def _set_gradient_checkpointing(self, module, value=False):
+         if isinstance(module, InternLM2Model):
+             module.gradient_checkpointing = value
+         if value:
+             self.vit.vision_tower.vision_model.encoder.gradient_checkpointing = value
+ 
+     def get_input_embeddings(self):
+         return self.model.tok_embeddings
+ 
+     def set_input_embeddings(self, value):
+         self.model.tok_embeddings = value
+ 
+     def get_output_embeddings(self):
+         return self.output
+ 
+     def set_output_embeddings(self, new_embeddings):
+         self.output = new_embeddings
+ 
+     def set_decoder(self, decoder):
+         self.model = decoder
+ 
+     def get_decoder(self):
+         return self.model
+ 
+     def encode_text(self, text, add_special_tokens=False):
+         token = self.tokenizer(
+             text, return_tensors='pt',
+             add_special_tokens=add_special_tokens).input_ids.to(self.device)
+         embs = self.model.tok_embeddings(token)
+         return embs
+ 
+     def encode_img(self, image):
+         if image is None:
+             return None
+         if isinstance(image, str):
+             image = Image.open(image).convert('RGB')
+             image = self.vis_processor(image).unsqueeze(0).to(self.device)
+         else:
+             assert isinstance(image, torch.Tensor)
+ 
+         img_embeds, atts_img, img_target = self.img2emb(image)
+         return img_embeds
+ 
+     def img2emb(self, image):
+         img_embeds = self.vision_proj(self.vit(image.to(self.device)))
+         atts_img = torch.ones(
+             img_embeds.size()[:-1], dtype=torch.long).to(img_embeds.device)
+ 
+         img_target = torch.ones(
+             img_embeds.size()[:2], dtype=torch.long).to(
+                 img_embeds.device) * -100
+ 
+         return img_embeds, atts_img, img_target
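+ 
+     # Shape sketch (illustrative): for a batch of B images the vision tower
+     # yields B x N visual tokens and vision_proj lifts them to the LLM
+     # hidden size H, so img_embeds is [B, N, H]; atts_img is [B, N] ones
+     # (attend to every visual token) and img_target is [B, N] of -100
+     # (visual positions are never scored by the language-modeling loss).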
+ 
+     def prompt_wrap(self, img_embeds, prompt):
+         batch_size = img_embeds.shape[0]
+         p_before, p_after = prompt.split('<ImageHere>')
+         p_before_tokens = self.tokenizer(
+             p_before, return_tensors='pt',
+             add_special_tokens=True).to(img_embeds.device)
+ 
+         p_before_embeds = self.model.tok_embeddings(
+             p_before_tokens.input_ids).expand(batch_size, -1, -1)
+         wrapped_img_embeds = torch.cat([p_before_embeds, img_embeds], dim=1)
+ 
+         wrapped_atts_img = torch.ones(
+             wrapped_img_embeds.size()[:-1],
+             dtype=torch.long).to(img_embeds.device)
+ 
+         wrapped_target = torch.ones(
+             batch_size, wrapped_img_embeds.shape[1], dtype=torch.long).to(
+                 img_embeds.device) * -100
+ 
+         return wrapped_img_embeds, wrapped_atts_img, wrapped_target
+ 
+     def text2emb(self, text, add_special=False):
+         to_regress_tokens = self.tokenizer(
+             text,
+             return_tensors='pt',
+             padding='longest',
+             truncation=True,
+             add_special_tokens=add_special).to(self.device)
+ 
+         targets = self.mask_human_targets(to_regress_tokens.input_ids)
+         targets = targets.to(self.device)
+         return to_regress_tokens, targets
+ 
+     def interleav_wrap_chat(self, tokenizer, query, image, history, meta_instruction):
+         prompt = ''
+         if meta_instruction:
+             prompt += f"""[UNUSED_TOKEN_146]system\n{meta_instruction}[UNUSED_TOKEN_145]\n"""
+         for record in history:
+             prompt += f"""[UNUSED_TOKEN_146]user\n{record[0]}[UNUSED_TOKEN_145]\n[UNUSED_TOKEN_146]assistant\n{record[1]}[UNUSED_TOKEN_145]\n"""
+         prompt += f"""[UNUSED_TOKEN_146]user\n{query}[UNUSED_TOKEN_145]\n[UNUSED_TOKEN_146]assistant\n"""
+ 
+         im_len = image.shape[1]
+         image_nums = len(image)
+         parts = prompt.split('<ImageHere>')
+         wrap_embeds, wrap_im_mask = [], []
+         temp_len = 0
+ 
+         for idx, part in enumerate(parts):
+             if len(part) > 0:
+                 part_tokens = tokenizer(part, return_tensors='pt').to(self.device)
+                 part_embeds = self.model.tok_embeddings(
+                     part_tokens.input_ids)
+                 wrap_embeds.append(part_embeds)
+                 wrap_im_mask.append(torch.zeros(part_embeds.shape[:2]))
+                 temp_len += part_embeds.shape[1]
+             if idx < image_nums:
+                 wrap_embeds.append(image[idx].unsqueeze(0))
+                 wrap_im_mask.append(torch.ones(1, image[idx].shape[0]))
+                 temp_len += im_len
+ 
+             if temp_len > self.max_length:
+                 break
+ 
+         wrap_embeds = torch.cat(wrap_embeds, dim=1)
+         wrap_im_mask = torch.cat(wrap_im_mask, dim=1)
+         wrap_embeds = wrap_embeds[:, :self.max_length].to(self.device)
+         wrap_im_mask = wrap_im_mask[:, :self.max_length].to(self.device).bool()
+         inputs = {
+             'inputs_embeds': wrap_embeds
+         }
+         return inputs, wrap_im_mask
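+ 
+     # For reference, the prompt assembled above renders as (markers literal):
+     #     [UNUSED_TOKEN_146]system\n{meta_instruction}[UNUSED_TOKEN_145]\n
+     #     [UNUSED_TOKEN_146]user\n{query}[UNUSED_TOKEN_145]\n
+     #     [UNUSED_TOKEN_146]assistant\n
+     # with each '<ImageHere>' in the text splicing one image's embedding
+     # span into wrap_embeds, and wrap_im_mask flagging those positions.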
+ 
+     def interleav_wrap(self, img_list, text_list):
+         wrap_embeds_list, wrap_atts_list = [], []
+         wrap_target_list, wrap_im_mask_list = [], []
+ 
+         for image, text in zip(img_list, text_list):
+             img_embeds, atts_img, img_target = self.img2emb(image)
+             text = text[0]
+             parts = text.split('<ImageHere>')
+             wrap_tokens, wrap_embeds, wrap_atts, wrap_im_mask = [], [], [], []
+             temp_len = 0
+             image_nums, im_len = img_embeds.shape[:2]
+             need_bos = True
+             for idx, part in enumerate(parts):
+                 if len(part) > 0:
+                     part_tokens = self.tokenizer(
+                         part,
+                         return_tensors='pt',
+                         padding='longest',
+                         add_special_tokens=need_bos).to(self.device)
+                     if need_bos:
+                         need_bos = False
+                     wrap_tokens.append(part_tokens.input_ids)
+                     part_embeds = self.model.tok_embeddings(
+                         part_tokens.input_ids)
+                     wrap_embeds.append(part_embeds)
+                     wrap_atts.append(part_tokens.attention_mask)
+                     wrap_im_mask.append(
+                         torch.zeros(part_embeds.shape[:2]).to(self.device))
+ 
+                     temp_len += part_embeds.shape[1]
+                 if idx < image_nums:
+                     wrap_tokens.append(img_target[idx].unsqueeze(0))
+                     wrap_embeds.append(img_embeds[idx].unsqueeze(0))
+                     wrap_atts.append(atts_img[idx].unsqueeze(0))
+                     wrap_im_mask.append(
+                         torch.ones_like(atts_img[idx].unsqueeze(0)))
+ 
+                     temp_len += im_len
+                 if temp_len > self.max_length:
+                     break
+ 
+             wrap_tokens = torch.cat(wrap_tokens, dim=1)
+             wrap_embeds = torch.cat(wrap_embeds, dim=1)
+             wrap_atts = torch.cat(wrap_atts, dim=1)
+             wrap_im_mask = torch.cat(wrap_im_mask, dim=1)
+ 
+             wrap_target = self.mask_human_targets(wrap_tokens).to(self.device)
+ 
+             wrap_embeds = wrap_embeds[:, :self.max_length].to(self.device)
+             wrap_atts = wrap_atts[:, :self.max_length].to(self.device)
+             wrap_target = wrap_target[:, :self.max_length].to(self.device)
+             wrap_im_mask = wrap_im_mask[:, :self.max_length].to(self.device)
+ 
+             wrap_embeds_list.append(wrap_embeds)
+             wrap_atts_list.append(wrap_atts)
+             wrap_target_list.append(wrap_target)
+             wrap_im_mask_list.append(wrap_im_mask)
+ 
+         wrap_embeds = torch.cat(wrap_embeds_list)
+         wrap_atts = torch.cat(wrap_atts_list)
+         wrap_target = torch.cat(wrap_target_list)
+         wrap_im_mask = torch.cat(wrap_im_mask_list)
+         return wrap_embeds, wrap_atts, wrap_target, wrap_im_mask
+ 
+     def mask_human_targets(self, input_ids, pure=False):
+         target_batch = []
+         for bs in range(input_ids.shape[0]):
+             ids = input_ids[bs]
+             targets = copy.deepcopy(ids)
+             end_count = 0
+             last_eoa = 0
+             for i, temp_id in enumerate(ids):
+                 if temp_id == 92542:
+                     if end_count % 2 == 0:
+                         targets[last_eoa:i + 6] = -100
+                     else:
+                         last_eoa = i + 1
+                     end_count += 1
+                 # eos and following pad
+                 elif temp_id == 2:
+                     # loss on eos, but not on pad
+                     targets[i + 1:] = -100
+                     break
+             # truncation: end at the last question
+             if temp_id != 2 and end_count % 2 == 0:
+                 # mask everything after the last answer
+                 targets[last_eoa + 1:] = -100
+             target_batch.append(targets.unsqueeze(0))
+         target_batch = torch.cat(target_batch, dim=0)
+         return target_batch
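+ 
+     # Toy walk-through (ids illustrative; 92542 appears to be the
+     # [UNUSED_TOKEN_145] end-of-turn id and 2 the eos id):
+     #     u u u 92542 r r r r r a a a 92542 2
+     # The first, even-count 92542 closes the user turn, so indices
+     # last_eoa..i+5 (the user tokens u, the marker, and the five
+     # role-header tokens r) become -100; the odd-count 92542 closes the
+     # answer, so the answer tokens a, that marker, and eos keep their labels.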
+ 
+     @add_start_docstrings_to_model_forward(InternLM2_INPUTS_DOCSTRING)
+     @replace_return_docstrings(
+         output_type=CausalLMOutputWithPast, config_class=_CONFIG_FOR_DOC)
+     def forward(self,
+                 input_ids: torch.LongTensor = None,
+                 attention_mask: Optional[torch.Tensor] = None,
+                 position_ids: Optional[torch.LongTensor] = None,
+                 past_key_values: Optional[List[torch.FloatTensor]] = None,
+                 inputs_embeds: Optional[torch.FloatTensor] = None,
+                 labels: Optional[torch.LongTensor] = None,
+                 use_cache: Optional[bool] = None,
+                 output_attentions: Optional[bool] = None,
+                 output_hidden_states: Optional[bool] = None,
+                 return_dict: Optional[bool] = None,
+                 **kwargs) -> Union[Tuple, CausalLMOutputWithPast]:
+         r"""
+         Args:
+             labels (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*):
+                 Labels for computing the masked language modeling loss. Indices should either be in `[0, ...,
+                 config.vocab_size]` or -100 (see `input_ids` docstring). Tokens with indices set to `-100` are ignored
+                 (masked), the loss is only computed for the tokens with labels in `[0, ..., config.vocab_size]`.
+         Returns:
+         """
+ 
+         samples = kwargs.get('samples', None)
+         if samples:
+             if samples['data_type'][0] == 'text':
+                 has_img = False
+             elif samples['data_type'][0] == 'multi':
+                 has_img = True
+             else:
+                 raise NotImplementedError
+ 
+             # encode text
+             text = samples['text_input']
+             # encode image
+             if has_img:
+                 image = samples['image']
+                 to_regress_embeds, attention_mask, targets, im_mask = self.interleav_wrap(
+                     image, text)
+             else:
+                 to_regress_tokens, targets = self.text2emb(
+                     text, add_special=True)
+                 to_regress_embeds = self.model.tok_embeddings(
+                     to_regress_tokens.input_ids)
+                 attention_mask = to_regress_tokens.attention_mask
+                 im_mask = torch.zeros(to_regress_embeds.shape[:2]).cuda()
+ 
+             inputs_embeds = to_regress_embeds[:, :self.max_length]
+             attention_mask = attention_mask[:, :self.max_length]
+             targets = targets[:, :self.max_length]
+             im_mask = im_mask[:, :self.max_length].bool()
+             labels = targets
+         else:
+             im_mask = kwargs.get('im_mask', None)
+             if im_mask is None and inputs_embeds is not None:
+                 im_mask = torch.zeros(inputs_embeds.shape[:2]).to(
+                     inputs_embeds.device)
+                 im_mask = im_mask.bool()
+ 
+         output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions
+         output_hidden_states = (
+             output_hidden_states if output_hidden_states is not None else
+             self.config.output_hidden_states)
+         return_dict = return_dict if return_dict is not None else self.config.use_return_dict
+ 
+         # decoder outputs consists of (dec_features, layer_state, dec_hidden, dec_attn)
+         outputs = self.model(
+             input_ids=input_ids,
+             attention_mask=attention_mask,
+             position_ids=position_ids,
+             past_key_values=past_key_values,
+             inputs_embeds=inputs_embeds,
+             use_cache=use_cache,
+             output_attentions=output_attentions,
+             output_hidden_states=output_hidden_states,
+             return_dict=return_dict,
+             im_mask=im_mask,
+         )
+ 
+         hidden_states = outputs[0]
+         logits = self.output(hidden_states)
+         logits = logits.float()
+ 
+         loss = None
+         if labels is not None:
+             # Shift so that tokens < n predict n
+             shift_logits = logits[..., :-1, :].contiguous()
+             shift_labels = labels[..., 1:].contiguous()
+             # Flatten the tokens
+             loss_fct = CrossEntropyLoss()
+             shift_logits = shift_logits.view(-1, self.config.vocab_size)
+             shift_labels = shift_labels.view(-1)
+             # Enable model parallelism
+             shift_labels = shift_labels.to(shift_logits.device)
+             loss = loss_fct(shift_logits, shift_labels)
+ 
+         if not return_dict:
+             output = (logits, ) + outputs[1:]
+             return (loss, ) + output if loss is not None else output
+ 
+         return CausalLMOutputWithPast(
+             loss=loss,
+             logits=logits,
+             past_key_values=outputs.past_key_values,
+             hidden_states=outputs.hidden_states,
+             attentions=outputs.attentions,
+         )
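+ 
+     # The loss above is the standard causal shift: the logits at position t
+     # are scored against token t + 1, and CrossEntropyLoss ignores the -100
+     # labels written by mask_human_targets and img_target, so user turns and
+     # image tokens never contribute gradient.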
+ 
+     def prepare_inputs_for_generation(self,
+                                       input_ids,
+                                       past_key_values=None,
+                                       attention_mask=None,
+                                       inputs_embeds=None,
+                                       im_mask=None,
+                                       **kwargs):
+         if past_key_values is not None:
+             past_length = past_key_values[0][0].shape[2]
+ 
+             # Some generation methods already pass only the last input ID
+             if input_ids.shape[1] > past_length:
+                 remove_prefix_length = past_length
+             else:
+                 # Default to old behavior: keep only the final ID
+                 remove_prefix_length = input_ids.shape[1] - 1
+ 
+             input_ids = input_ids[:, remove_prefix_length:]
+ 
+         position_ids = kwargs.get('position_ids', None)
+         if attention_mask is not None and position_ids is None:
+             # create position_ids on the fly for batch generation
+             position_ids = attention_mask.long().cumsum(-1) - 1
+             position_ids.masked_fill_(attention_mask == 0, 1)
+             if past_key_values:
+                 position_ids = position_ids[:, -input_ids.shape[1]:]
+ 
+         # if `inputs_embeds` are passed, we only want to use them in the 1st generation step
+         if inputs_embeds is not None and past_key_values is None:
+             model_inputs = {'inputs_embeds': inputs_embeds}
+         else:
+             model_inputs = {'input_ids': input_ids}
+ 
+         model_inputs.update({
+             'position_ids': position_ids,
+             'past_key_values': past_key_values,
+             'use_cache': kwargs.get('use_cache'),
+             'attention_mask': attention_mask,
+             'im_mask': im_mask,
+         })
+         return model_inputs
+ 
+     @staticmethod
+     def _reorder_cache(past_key_values, beam_idx):
+         reordered_past = ()
+         for layer_past in past_key_values:
+             reordered_past += (tuple(
+                 past_state.index_select(0, beam_idx.to(past_state.device))
+                 for past_state in layer_past), )
+         return reordered_past
+ 
+     def build_inputs(self,
+                      tokenizer,
+                      query: str,
+                      history: List[Tuple[str, str]] = [],
+                      meta_instruction=''):
+         prompt = ''
+         if meta_instruction:
+             prompt += f"""<s>[UNUSED_TOKEN_146]system\n{meta_instruction}[UNUSED_TOKEN_145]\n"""
+         else:
+             prompt += '<s>'
+         for record in history:
+             prompt += f"""[UNUSED_TOKEN_146]user\n{record[0]}[UNUSED_TOKEN_145]\n[UNUSED_TOKEN_146]assistant\n{record[1]}[UNUSED_TOKEN_145]\n"""
+         prompt += f"""[UNUSED_TOKEN_146]user\n{query}[UNUSED_TOKEN_145]\n[UNUSED_TOKEN_146]assistant\n"""
+         return tokenizer([prompt], return_tensors='pt')
+ 
+     @torch.no_grad()
+     def chat(
+         self,
+         tokenizer,
+         query: str,
+         image: torch.Tensor = None,
+         history: List[Tuple[str, str]] = [],
+         streamer: Optional[BaseStreamer] = None,
+         max_new_tokens: int = 1024,
+         do_sample: bool = True,
+         temperature: float = 1.0,
+         top_p: float = 0.8,
+         repetition_penalty: float = 1.005,
+         meta_instruction:
+         str = 'You are an AI assistant whose name is InternLM-XComposer (浦语·灵笔).\n'
+         '- InternLM-XComposer (浦语·灵笔) is a multi-modality conversational language model that is developed by Shanghai AI Laboratory (上海人工智能实验室). It is designed to be helpful, honest, and harmless.\n'
+         '- InternLM-XComposer (浦语·灵笔) can understand and communicate fluently in the language chosen by the user such as English and 中文.\n'
+         '- InternLM-XComposer (浦语·灵笔) is capable of comprehending and articulating responses effectively based on the provided image.',
+         **kwargs,
+     ):
+         if image is None:
+             inputs = self.build_inputs(tokenizer, query, history, meta_instruction)
+             im_mask = torch.zeros(inputs['input_ids'].shape[:2]).cuda().bool()
+         else:
+             image = self.encode_img(image)
+             inputs, im_mask = self.interleav_wrap_chat(tokenizer, query, image, history, meta_instruction)
+         inputs = {
+             k: v.to(self.device)
+             for k, v in inputs.items() if torch.is_tensor(v)
+         }
+         # also add end-of-assistant token in eos token id to avoid unnecessary generation
+         eos_token_id = [
+             tokenizer.eos_token_id,
+             tokenizer.convert_tokens_to_ids(['[UNUSED_TOKEN_145]'])[0]
+         ]
+         outputs = self.generate(
+             **inputs,
+             streamer=streamer,
+             max_new_tokens=max_new_tokens,
+             do_sample=do_sample,
+             temperature=temperature,
+             top_p=top_p,
+             eos_token_id=eos_token_id,
+             repetition_penalty=repetition_penalty,
+             im_mask=im_mask,
+             **kwargs,
+         )
+         if image is None:
+             outputs = outputs[0].cpu().tolist()[len(inputs['input_ids'][0]):]
+         else:
+             outputs = outputs[0].cpu().tolist()
+         response = tokenizer.decode(outputs, skip_special_tokens=True)
+         response = response.split('[UNUSED_TOKEN_145]')[0]
+         history = history + [(query, response)]
+         return response, history
+ 
+     @torch.no_grad()
+     def stream_chat(
+         self,
+         tokenizer,
+         query: str,
+         history: List[Tuple[str, str]] = [],
+         max_new_tokens: int = 1024,
+         do_sample: bool = True,
+         temperature: float = 0.8,
+         top_p: float = 0.8,
+         **kwargs,
+     ):
+         """Return a generator yielding `(response, history)`, e.g.
+ 
+         ('你好，有什么可以帮助您的吗', [('你好', '你好，有什么可以帮助您的吗')])
+         ('你好，有什么可以帮助您的吗？', [('你好', '你好，有什么可以帮助您的吗？')])
+         """
+         if BaseStreamer is None:
+             raise ModuleNotFoundError(
+                 'The version of `transformers` is too low. Please make sure '
+                 'that you have installed `transformers>=4.28.0`.')
+ 
+         response_queue = queue.Queue(maxsize=20)
+ 
+         class ChatStreamer(BaseStreamer):
+ 
+             def __init__(self, tokenizer) -> None:
+                 super().__init__()
+                 self.tokenizer = tokenizer
+                 self.queue = response_queue
+                 self.query = query
+                 self.history = history
+                 self.response = ''
+                 self.received_inputs = False
+                 self.queue.put(
+                     (self.response, history + [(self.query, self.response)]))
+ 
+             def put(self, value):
+                 if len(value.shape) > 1 and value.shape[0] > 1:
+                     raise ValueError('ChatStreamer only supports batch size 1')
+                 elif len(value.shape) > 1:
+                     value = value[0]
+ 
+                 if not self.received_inputs:
+                     # The first received value is input_ids, ignore here
+                     self.received_inputs = True
+                     return
+ 
+                 token = self.tokenizer.decode([value[-1]],
+                                               skip_special_tokens=True)
+                 if token.strip() != '[UNUSED_TOKEN_145]':
+                     self.response = self.response + token
+                     history = self.history + [(self.query, self.response)]
+                     self.queue.put((self.response, history))
+ 
+             def end(self):
+                 self.queue.put(None)
+ 
+         def stream_producer():
+             return self.chat(
+                 tokenizer=tokenizer,
+                 query=query,
+                 streamer=ChatStreamer(tokenizer=tokenizer),
+                 history=history,
+                 max_new_tokens=max_new_tokens,
+                 do_sample=do_sample,
+                 temperature=temperature,
+                 top_p=top_p,
+                 **kwargs,
+             )
+ 
+         def consumer():
+             producer = threading.Thread(target=stream_producer)
+             producer.start()
+             while True:
+                 res = response_queue.get()
+                 if res is None:
+                     return
+                 yield res
+ 
+         return consumer()
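
A hedged usage sketch for `stream_chat` (text-only; assumes `model` and `tokenizer` are loaded with `trust_remote_code=True` as in the README example above):

```python
# stream_chat returns a generator; each yield carries the partial response
# decoded so far together with the updated history.
for response, history in model.stream_chat(tokenizer, 'Hello'):
    print(response)
```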
pytorch_model-00001-of-00002.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:21fc151cb2dc7c497466f8f7d761d6532497b7d324b4a6d9af5b3401b4954fd9
+ size 9983919738
pytorch_model-00002-of-00002.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:da2c539854823c42464c73bbe22e7e2ea800a4448ff0a6be000aa6f49d435364
+ size 7350094452
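
The two `.bin` entries above are Git LFS pointers; the actual shards live in LFS storage. A quick integrity check after downloading (standard library only; the paths assume the shards sit in the current directory):

```python
import hashlib

def sha256_of(path, chunk_size=1 << 20):
    # Stream the file so multi-GB shards do not need to fit in memory.
    h = hashlib.sha256()
    with open(path, 'rb') as f:
        for block in iter(lambda: f.read(chunk_size), b''):
            h.update(block)
    return h.hexdigest()

expected = {
    'pytorch_model-00001-of-00002.bin':
        '21fc151cb2dc7c497466f8f7d761d6532497b7d324b4a6d9af5b3401b4954fd9',
    'pytorch_model-00002-of-00002.bin':
        'da2c539854823c42464c73bbe22e7e2ea800a4448ff0a6be000aa6f49d435364',
}
for name, digest in expected.items():
    assert sha256_of(name) == digest, f'{name} failed verification'
```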
pytorch_model.bin.index.json ADDED
@@ -0,0 +1,947 @@
+ {
+   "metadata": {
+     "total_size": 17333676032
+   },
+   "weight_map": {
+     "model.layers.0.attention.wo.Plora_A.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.0.attention.wo.Plora_B.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.0.attention.wo.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.0.attention.wqkv.Plora_A.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.0.attention.wqkv.Plora_B.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.0.attention.wqkv.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.0.attention_norm.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.0.feed_forward.w1.Plora_A.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.0.feed_forward.w1.Plora_B.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.0.feed_forward.w1.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.0.feed_forward.w2.Plora_A.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.0.feed_forward.w2.Plora_B.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.0.feed_forward.w2.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.0.feed_forward.w3.Plora_A.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.0.feed_forward.w3.Plora_B.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.0.feed_forward.w3.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.0.ffn_norm.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.1.attention.wo.Plora_A.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.1.attention.wo.Plora_B.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.1.attention.wo.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.1.attention.wqkv.Plora_A.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.1.attention.wqkv.Plora_B.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.1.attention.wqkv.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.1.attention_norm.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.1.feed_forward.w1.Plora_A.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.1.feed_forward.w1.Plora_B.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.1.feed_forward.w1.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.1.feed_forward.w2.Plora_A.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.1.feed_forward.w2.Plora_B.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.1.feed_forward.w2.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.1.feed_forward.w3.Plora_A.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.1.feed_forward.w3.Plora_B.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.1.feed_forward.w3.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.1.ffn_norm.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.10.attention.wo.Plora_A.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.10.attention.wo.Plora_B.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.10.attention.wo.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.10.attention.wqkv.Plora_A.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.10.attention.wqkv.Plora_B.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.10.attention.wqkv.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.10.attention_norm.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.10.feed_forward.w1.Plora_A.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.10.feed_forward.w1.Plora_B.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.10.feed_forward.w1.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.10.feed_forward.w2.Plora_A.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.10.feed_forward.w2.Plora_B.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.10.feed_forward.w2.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.10.feed_forward.w3.Plora_A.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.10.feed_forward.w3.Plora_B.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.10.feed_forward.w3.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.10.ffn_norm.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.11.attention.wo.Plora_A.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.11.attention.wo.Plora_B.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.11.attention.wo.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.11.attention.wqkv.Plora_A.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.11.attention.wqkv.Plora_B.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.11.attention.wqkv.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.11.attention_norm.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.11.feed_forward.w1.Plora_A.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.11.feed_forward.w1.Plora_B.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.11.feed_forward.w1.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.11.feed_forward.w2.Plora_A.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.11.feed_forward.w2.Plora_B.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.11.feed_forward.w2.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.11.feed_forward.w3.Plora_A.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.11.feed_forward.w3.Plora_B.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.11.feed_forward.w3.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.11.ffn_norm.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.12.attention.wo.Plora_A.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.12.attention.wo.Plora_B.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.12.attention.wo.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.12.attention.wqkv.Plora_A.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.12.attention.wqkv.Plora_B.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.12.attention.wqkv.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.12.attention_norm.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.12.feed_forward.w1.Plora_A.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.12.feed_forward.w1.Plora_B.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.12.feed_forward.w1.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.12.feed_forward.w2.Plora_A.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.12.feed_forward.w2.Plora_B.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.12.feed_forward.w2.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.12.feed_forward.w3.Plora_A.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.12.feed_forward.w3.Plora_B.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.12.feed_forward.w3.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.12.ffn_norm.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.13.attention.wo.Plora_A.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.13.attention.wo.Plora_B.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.13.attention.wo.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.13.attention.wqkv.Plora_A.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.13.attention.wqkv.Plora_B.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.13.attention.wqkv.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.13.attention_norm.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.13.feed_forward.w1.Plora_A.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.13.feed_forward.w1.Plora_B.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.13.feed_forward.w1.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.13.feed_forward.w2.Plora_A.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.13.feed_forward.w2.Plora_B.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.13.feed_forward.w2.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.13.feed_forward.w3.Plora_A.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.13.feed_forward.w3.Plora_B.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.13.feed_forward.w3.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.13.ffn_norm.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.14.attention.wo.Plora_A.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.14.attention.wo.Plora_B.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.14.attention.wo.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.14.attention.wqkv.Plora_A.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.14.attention.wqkv.Plora_B.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.14.attention.wqkv.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.14.attention_norm.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.14.feed_forward.w1.Plora_A.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.14.feed_forward.w1.Plora_B.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.14.feed_forward.w1.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.14.feed_forward.w2.Plora_A.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.14.feed_forward.w2.Plora_B.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.14.feed_forward.w2.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.14.feed_forward.w3.Plora_A.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.14.feed_forward.w3.Plora_B.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.14.feed_forward.w3.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.14.ffn_norm.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.15.attention.wo.Plora_A.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.15.attention.wo.Plora_B.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.15.attention.wo.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.15.attention.wqkv.Plora_A.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.15.attention.wqkv.Plora_B.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.15.attention.wqkv.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.15.attention_norm.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.15.feed_forward.w1.Plora_A.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.15.feed_forward.w1.Plora_B.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.15.feed_forward.w1.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.15.feed_forward.w2.Plora_A.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.15.feed_forward.w2.Plora_B.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.15.feed_forward.w2.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.15.feed_forward.w3.Plora_A.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.15.feed_forward.w3.Plora_B.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.15.feed_forward.w3.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.15.ffn_norm.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.16.attention.wo.Plora_A.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.16.attention.wo.Plora_B.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.16.attention.wo.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.16.attention.wqkv.Plora_A.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.16.attention.wqkv.Plora_B.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.16.attention.wqkv.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.16.attention_norm.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.16.feed_forward.w1.Plora_A.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.16.feed_forward.w1.Plora_B.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.16.feed_forward.w1.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.16.feed_forward.w2.Plora_A.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.16.feed_forward.w2.Plora_B.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.16.feed_forward.w2.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.16.feed_forward.w3.Plora_A.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.16.feed_forward.w3.Plora_B.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.16.feed_forward.w3.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.16.ffn_norm.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.17.attention.wo.Plora_A.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.17.attention.wo.Plora_B.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.17.attention.wo.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.17.attention.wqkv.Plora_A.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.17.attention.wqkv.Plora_B.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.17.attention.wqkv.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.17.attention_norm.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.17.feed_forward.w1.Plora_A.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.17.feed_forward.w1.Plora_B.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.17.feed_forward.w1.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.17.feed_forward.w2.Plora_A.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.17.feed_forward.w2.Plora_B.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.17.feed_forward.w2.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.17.feed_forward.w3.Plora_A.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.17.feed_forward.w3.Plora_B.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.17.feed_forward.w3.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.17.ffn_norm.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.18.attention.wo.Plora_A.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.18.attention.wo.Plora_B.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.18.attention.wo.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.18.attention.wqkv.Plora_A.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.18.attention.wqkv.Plora_B.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.18.attention.wqkv.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.18.attention_norm.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.18.feed_forward.w1.Plora_A.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.18.feed_forward.w1.Plora_B.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.18.feed_forward.w1.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.18.feed_forward.w2.Plora_A.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.18.feed_forward.w2.Plora_B.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.18.feed_forward.w2.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.18.feed_forward.w3.Plora_A.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.18.feed_forward.w3.Plora_B.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.18.feed_forward.w3.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.18.ffn_norm.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.19.attention.wo.Plora_A.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.19.attention.wo.Plora_B.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.19.attention.wo.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.19.attention.wqkv.Plora_A.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.19.attention.wqkv.Plora_B.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.19.attention.wqkv.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.19.attention_norm.weight": "pytorch_model-00002-of-00002.bin",
+     "model.layers.19.feed_forward.w1.Plora_A.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.19.feed_forward.w1.Plora_B.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.19.feed_forward.w1.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.19.feed_forward.w2.Plora_A.weight": "pytorch_model-00002-of-00002.bin",
+     "model.layers.19.feed_forward.w2.Plora_B.weight": "pytorch_model-00002-of-00002.bin",
+     "model.layers.19.feed_forward.w2.weight": "pytorch_model-00002-of-00002.bin",
+     "model.layers.19.feed_forward.w3.Plora_A.weight": "pytorch_model-00002-of-00002.bin",
+     "model.layers.19.feed_forward.w3.Plora_B.weight": "pytorch_model-00002-of-00002.bin",
+     "model.layers.19.feed_forward.w3.weight": "pytorch_model-00002-of-00002.bin",
+     "model.layers.19.ffn_norm.weight": "pytorch_model-00002-of-00002.bin",
+     "model.layers.2.attention.wo.Plora_A.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.2.attention.wo.Plora_B.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.2.attention.wo.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.2.attention.wqkv.Plora_A.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.2.attention.wqkv.Plora_B.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.2.attention.wqkv.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.2.attention_norm.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.2.feed_forward.w1.Plora_A.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.2.feed_forward.w1.Plora_B.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.2.feed_forward.w1.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.2.feed_forward.w2.Plora_A.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.2.feed_forward.w2.Plora_B.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.2.feed_forward.w2.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.2.feed_forward.w3.Plora_A.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.2.feed_forward.w3.Plora_B.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.2.feed_forward.w3.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.2.ffn_norm.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.20.attention.wo.Plora_A.weight": "pytorch_model-00002-of-00002.bin",
+     "model.layers.20.attention.wo.Plora_B.weight": "pytorch_model-00002-of-00002.bin",
+     "model.layers.20.attention.wo.weight": "pytorch_model-00002-of-00002.bin",
+     "model.layers.20.attention.wqkv.Plora_A.weight": "pytorch_model-00002-of-00002.bin",
+     "model.layers.20.attention.wqkv.Plora_B.weight": "pytorch_model-00002-of-00002.bin",
+     "model.layers.20.attention.wqkv.weight": "pytorch_model-00002-of-00002.bin",
+     "model.layers.20.attention_norm.weight": "pytorch_model-00002-of-00002.bin",
+     "model.layers.20.feed_forward.w1.Plora_A.weight": "pytorch_model-00002-of-00002.bin",
+     "model.layers.20.feed_forward.w1.Plora_B.weight": "pytorch_model-00002-of-00002.bin",
+     "model.layers.20.feed_forward.w1.weight": "pytorch_model-00002-of-00002.bin",
+     "model.layers.20.feed_forward.w2.Plora_A.weight": "pytorch_model-00002-of-00002.bin",
+     "model.layers.20.feed_forward.w2.Plora_B.weight": "pytorch_model-00002-of-00002.bin",
+     "model.layers.20.feed_forward.w2.weight": "pytorch_model-00002-of-00002.bin",
+     "model.layers.20.feed_forward.w3.Plora_A.weight": "pytorch_model-00002-of-00002.bin",
+     "model.layers.20.feed_forward.w3.Plora_B.weight": "pytorch_model-00002-of-00002.bin",
+     "model.layers.20.feed_forward.w3.weight": "pytorch_model-00002-of-00002.bin",
+     "model.layers.20.ffn_norm.weight": "pytorch_model-00002-of-00002.bin",
+     "model.layers.21.attention.wo.Plora_A.weight": "pytorch_model-00002-of-00002.bin",
+     "model.layers.21.attention.wo.Plora_B.weight": "pytorch_model-00002-of-00002.bin",
+     "model.layers.21.attention.wo.weight": "pytorch_model-00002-of-00002.bin",
+     "model.layers.21.attention.wqkv.Plora_A.weight": "pytorch_model-00002-of-00002.bin",
+     "model.layers.21.attention.wqkv.Plora_B.weight": "pytorch_model-00002-of-00002.bin",
+     "model.layers.21.attention.wqkv.weight": "pytorch_model-00002-of-00002.bin",
+     "model.layers.21.attention_norm.weight": "pytorch_model-00002-of-00002.bin",
+     "model.layers.21.feed_forward.w1.Plora_A.weight": "pytorch_model-00002-of-00002.bin",
+     "model.layers.21.feed_forward.w1.Plora_B.weight": "pytorch_model-00002-of-00002.bin",
+     "model.layers.21.feed_forward.w1.weight": "pytorch_model-00002-of-00002.bin",
+     "model.layers.21.feed_forward.w2.Plora_A.weight": "pytorch_model-00002-of-00002.bin",
+     "model.layers.21.feed_forward.w2.Plora_B.weight": "pytorch_model-00002-of-00002.bin",
+     "model.layers.21.feed_forward.w2.weight": "pytorch_model-00002-of-00002.bin",
+     "model.layers.21.feed_forward.w3.Plora_A.weight": "pytorch_model-00002-of-00002.bin",
+     "model.layers.21.feed_forward.w3.Plora_B.weight": "pytorch_model-00002-of-00002.bin",
+     "model.layers.21.feed_forward.w3.weight": "pytorch_model-00002-of-00002.bin",
+     "model.layers.21.ffn_norm.weight": "pytorch_model-00002-of-00002.bin",
+     "model.layers.22.attention.wo.Plora_A.weight": "pytorch_model-00002-of-00002.bin",
+     "model.layers.22.attention.wo.Plora_B.weight": "pytorch_model-00002-of-00002.bin",
+     "model.layers.22.attention.wo.weight": "pytorch_model-00002-of-00002.bin",
+     "model.layers.22.attention.wqkv.Plora_A.weight": "pytorch_model-00002-of-00002.bin",
+     "model.layers.22.attention.wqkv.Plora_B.weight": "pytorch_model-00002-of-00002.bin",
+     "model.layers.22.attention.wqkv.weight": "pytorch_model-00002-of-00002.bin",
+     "model.layers.22.attention_norm.weight": "pytorch_model-00002-of-00002.bin",
+     "model.layers.22.feed_forward.w1.Plora_A.weight": "pytorch_model-00002-of-00002.bin",
+     "model.layers.22.feed_forward.w1.Plora_B.weight": "pytorch_model-00002-of-00002.bin",
+     "model.layers.22.feed_forward.w1.weight": "pytorch_model-00002-of-00002.bin",
+     "model.layers.22.feed_forward.w2.Plora_A.weight": "pytorch_model-00002-of-00002.bin",
+     "model.layers.22.feed_forward.w2.Plora_B.weight": "pytorch_model-00002-of-00002.bin",
+     "model.layers.22.feed_forward.w2.weight": "pytorch_model-00002-of-00002.bin",
+     "model.layers.22.feed_forward.w3.Plora_A.weight": "pytorch_model-00002-of-00002.bin",
+     "model.layers.22.feed_forward.w3.Plora_B.weight": "pytorch_model-00002-of-00002.bin",
+     "model.layers.22.feed_forward.w3.weight": "pytorch_model-00002-of-00002.bin",
+     "model.layers.22.ffn_norm.weight": "pytorch_model-00002-of-00002.bin",
+     "model.layers.23.attention.wo.Plora_A.weight": "pytorch_model-00002-of-00002.bin",
+     "model.layers.23.attention.wo.Plora_B.weight": "pytorch_model-00002-of-00002.bin",
+     "model.layers.23.attention.wo.weight": "pytorch_model-00002-of-00002.bin",
+     "model.layers.23.attention.wqkv.Plora_A.weight": "pytorch_model-00002-of-00002.bin",
+     "model.layers.23.attention.wqkv.Plora_B.weight": "pytorch_model-00002-of-00002.bin",
+     "model.layers.23.attention.wqkv.weight": "pytorch_model-00002-of-00002.bin",
+     "model.layers.23.attention_norm.weight": "pytorch_model-00002-of-00002.bin",
+     "model.layers.23.feed_forward.w1.Plora_A.weight": "pytorch_model-00002-of-00002.bin",
+     "model.layers.23.feed_forward.w1.Plora_B.weight": "pytorch_model-00002-of-00002.bin",
+     "model.layers.23.feed_forward.w1.weight": "pytorch_model-00002-of-00002.bin",
+     "model.layers.23.feed_forward.w2.Plora_A.weight": "pytorch_model-00002-of-00002.bin",
+     "model.layers.23.feed_forward.w2.Plora_B.weight": "pytorch_model-00002-of-00002.bin",
+     "model.layers.23.feed_forward.w2.weight": "pytorch_model-00002-of-00002.bin",
+     "model.layers.23.feed_forward.w3.Plora_A.weight": "pytorch_model-00002-of-00002.bin",
+     "model.layers.23.feed_forward.w3.Plora_B.weight": "pytorch_model-00002-of-00002.bin",
+     "model.layers.23.feed_forward.w3.weight": "pytorch_model-00002-of-00002.bin",
+     "model.layers.23.ffn_norm.weight": "pytorch_model-00002-of-00002.bin",
+     "model.layers.24.attention.wo.Plora_A.weight": "pytorch_model-00002-of-00002.bin",
+     "model.layers.24.attention.wo.Plora_B.weight": "pytorch_model-00002-of-00002.bin",
+     "model.layers.24.attention.wo.weight": "pytorch_model-00002-of-00002.bin",
+     "model.layers.24.attention.wqkv.Plora_A.weight": "pytorch_model-00002-of-00002.bin",
+     "model.layers.24.attention.wqkv.Plora_B.weight": "pytorch_model-00002-of-00002.bin",
+     "model.layers.24.attention.wqkv.weight": "pytorch_model-00002-of-00002.bin",
+     "model.layers.24.attention_norm.weight": "pytorch_model-00002-of-00002.bin",
+     "model.layers.24.feed_forward.w1.Plora_A.weight": "pytorch_model-00002-of-00002.bin",
+     "model.layers.24.feed_forward.w1.Plora_B.weight": "pytorch_model-00002-of-00002.bin",
+     "model.layers.24.feed_forward.w1.weight": "pytorch_model-00002-of-00002.bin",
+     "model.layers.24.feed_forward.w2.Plora_A.weight": "pytorch_model-00002-of-00002.bin",
+     "model.layers.24.feed_forward.w2.Plora_B.weight": "pytorch_model-00002-of-00002.bin",
+     "model.layers.24.feed_forward.w2.weight": "pytorch_model-00002-of-00002.bin",
+     "model.layers.24.feed_forward.w3.Plora_A.weight": "pytorch_model-00002-of-00002.bin",
+     "model.layers.24.feed_forward.w3.Plora_B.weight": "pytorch_model-00002-of-00002.bin",
+     "model.layers.24.feed_forward.w3.weight": "pytorch_model-00002-of-00002.bin",
+     "model.layers.24.ffn_norm.weight": "pytorch_model-00002-of-00002.bin",
+     "model.layers.25.attention.wo.Plora_A.weight": "pytorch_model-00002-of-00002.bin",
+     "model.layers.25.attention.wo.Plora_B.weight": "pytorch_model-00002-of-00002.bin",
+     "model.layers.25.attention.wo.weight": "pytorch_model-00002-of-00002.bin",
+     "model.layers.25.attention.wqkv.Plora_A.weight": "pytorch_model-00002-of-00002.bin",
+     "model.layers.25.attention.wqkv.Plora_B.weight": "pytorch_model-00002-of-00002.bin",
+     "model.layers.25.attention.wqkv.weight": "pytorch_model-00002-of-00002.bin",
+     "model.layers.25.attention_norm.weight": "pytorch_model-00002-of-00002.bin",
+     "model.layers.25.feed_forward.w1.Plora_A.weight": "pytorch_model-00002-of-00002.bin",
+     "model.layers.25.feed_forward.w1.Plora_B.weight": "pytorch_model-00002-of-00002.bin",
+     "model.layers.25.feed_forward.w1.weight": "pytorch_model-00002-of-00002.bin",
+     "model.layers.25.feed_forward.w2.Plora_A.weight": "pytorch_model-00002-of-00002.bin",
+     "model.layers.25.feed_forward.w2.Plora_B.weight": "pytorch_model-00002-of-00002.bin",
+     "model.layers.25.feed_forward.w2.weight": "pytorch_model-00002-of-00002.bin",
+     "model.layers.25.feed_forward.w3.Plora_A.weight": "pytorch_model-00002-of-00002.bin",
+     "model.layers.25.feed_forward.w3.Plora_B.weight": "pytorch_model-00002-of-00002.bin",
+     "model.layers.25.feed_forward.w3.weight": "pytorch_model-00002-of-00002.bin",
+     "model.layers.25.ffn_norm.weight": "pytorch_model-00002-of-00002.bin",
+     "model.layers.26.attention.wo.Plora_A.weight": "pytorch_model-00002-of-00002.bin",
+     "model.layers.26.attention.wo.Plora_B.weight": "pytorch_model-00002-of-00002.bin",
+     "model.layers.26.attention.wo.weight": "pytorch_model-00002-of-00002.bin",
+     "model.layers.26.attention.wqkv.Plora_A.weight": "pytorch_model-00002-of-00002.bin",
+     "model.layers.26.attention.wqkv.Plora_B.weight": "pytorch_model-00002-of-00002.bin",
+     "model.layers.26.attention.wqkv.weight": "pytorch_model-00002-of-00002.bin",
+     "model.layers.26.attention_norm.weight": "pytorch_model-00002-of-00002.bin",
+     "model.layers.26.feed_forward.w1.Plora_A.weight": "pytorch_model-00002-of-00002.bin",
+     "model.layers.26.feed_forward.w1.Plora_B.weight": "pytorch_model-00002-of-00002.bin",
+     "model.layers.26.feed_forward.w1.weight": "pytorch_model-00002-of-00002.bin",
+     "model.layers.26.feed_forward.w2.Plora_A.weight": "pytorch_model-00002-of-00002.bin",
+     "model.layers.26.feed_forward.w2.Plora_B.weight": "pytorch_model-00002-of-00002.bin",
+     "model.layers.26.feed_forward.w2.weight": "pytorch_model-00002-of-00002.bin",
+     "model.layers.26.feed_forward.w3.Plora_A.weight": "pytorch_model-00002-of-00002.bin",
+     "model.layers.26.feed_forward.w3.Plora_B.weight": "pytorch_model-00002-of-00002.bin",
+     "model.layers.26.feed_forward.w3.weight": "pytorch_model-00002-of-00002.bin",
+     "model.layers.26.ffn_norm.weight": "pytorch_model-00002-of-00002.bin",
+     "model.layers.27.attention.wo.Plora_A.weight": "pytorch_model-00002-of-00002.bin",
+     "model.layers.27.attention.wo.Plora_B.weight": "pytorch_model-00002-of-00002.bin",
+     "model.layers.27.attention.wo.weight": "pytorch_model-00002-of-00002.bin",
+     "model.layers.27.attention.wqkv.Plora_A.weight": "pytorch_model-00002-of-00002.bin",
+     "model.layers.27.attention.wqkv.Plora_B.weight": "pytorch_model-00002-of-00002.bin",
+     "model.layers.27.attention.wqkv.weight": "pytorch_model-00002-of-00002.bin",
+     "model.layers.27.attention_norm.weight": "pytorch_model-00002-of-00002.bin",
+     "model.layers.27.feed_forward.w1.Plora_A.weight": "pytorch_model-00002-of-00002.bin",
+     "model.layers.27.feed_forward.w1.Plora_B.weight": "pytorch_model-00002-of-00002.bin",
+     "model.layers.27.feed_forward.w1.weight": "pytorch_model-00002-of-00002.bin",
+     "model.layers.27.feed_forward.w2.Plora_A.weight": "pytorch_model-00002-of-00002.bin",
+     "model.layers.27.feed_forward.w2.Plora_B.weight": "pytorch_model-00002-of-00002.bin",
+     "model.layers.27.feed_forward.w2.weight": "pytorch_model-00002-of-00002.bin",
+     "model.layers.27.feed_forward.w3.Plora_A.weight": "pytorch_model-00002-of-00002.bin",
360
+ "model.layers.27.feed_forward.w3.Plora_B.weight": "pytorch_model-00002-of-00002.bin",
361
+ "model.layers.27.feed_forward.w3.weight": "pytorch_model-00002-of-00002.bin",
362
+ "model.layers.27.ffn_norm.weight": "pytorch_model-00002-of-00002.bin",
363
+ "model.layers.28.attention.wo.Plora_A.weight": "pytorch_model-00002-of-00002.bin",
364
+ "model.layers.28.attention.wo.Plora_B.weight": "pytorch_model-00002-of-00002.bin",
365
+ "model.layers.28.attention.wo.weight": "pytorch_model-00002-of-00002.bin",
366
+ "model.layers.28.attention.wqkv.Plora_A.weight": "pytorch_model-00002-of-00002.bin",
367
+ "model.layers.28.attention.wqkv.Plora_B.weight": "pytorch_model-00002-of-00002.bin",
368
+ "model.layers.28.attention.wqkv.weight": "pytorch_model-00002-of-00002.bin",
369
+ "model.layers.28.attention_norm.weight": "pytorch_model-00002-of-00002.bin",
370
+ "model.layers.28.feed_forward.w1.Plora_A.weight": "pytorch_model-00002-of-00002.bin",
371
+ "model.layers.28.feed_forward.w1.Plora_B.weight": "pytorch_model-00002-of-00002.bin",
372
+ "model.layers.28.feed_forward.w1.weight": "pytorch_model-00002-of-00002.bin",
373
+ "model.layers.28.feed_forward.w2.Plora_A.weight": "pytorch_model-00002-of-00002.bin",
374
+ "model.layers.28.feed_forward.w2.Plora_B.weight": "pytorch_model-00002-of-00002.bin",
375
+ "model.layers.28.feed_forward.w2.weight": "pytorch_model-00002-of-00002.bin",
376
+ "model.layers.28.feed_forward.w3.Plora_A.weight": "pytorch_model-00002-of-00002.bin",
377
+ "model.layers.28.feed_forward.w3.Plora_B.weight": "pytorch_model-00002-of-00002.bin",
378
+ "model.layers.28.feed_forward.w3.weight": "pytorch_model-00002-of-00002.bin",
379
+ "model.layers.28.ffn_norm.weight": "pytorch_model-00002-of-00002.bin",
380
+ "model.layers.29.attention.wo.Plora_A.weight": "pytorch_model-00002-of-00002.bin",
381
+ "model.layers.29.attention.wo.Plora_B.weight": "pytorch_model-00002-of-00002.bin",
382
+ "model.layers.29.attention.wo.weight": "pytorch_model-00002-of-00002.bin",
383
+ "model.layers.29.attention.wqkv.Plora_A.weight": "pytorch_model-00002-of-00002.bin",
384
+ "model.layers.29.attention.wqkv.Plora_B.weight": "pytorch_model-00002-of-00002.bin",
385
+ "model.layers.29.attention.wqkv.weight": "pytorch_model-00002-of-00002.bin",
386
+ "model.layers.29.attention_norm.weight": "pytorch_model-00002-of-00002.bin",
387
+ "model.layers.29.feed_forward.w1.Plora_A.weight": "pytorch_model-00002-of-00002.bin",
388
+ "model.layers.29.feed_forward.w1.Plora_B.weight": "pytorch_model-00002-of-00002.bin",
389
+ "model.layers.29.feed_forward.w1.weight": "pytorch_model-00002-of-00002.bin",
390
+ "model.layers.29.feed_forward.w2.Plora_A.weight": "pytorch_model-00002-of-00002.bin",
391
+ "model.layers.29.feed_forward.w2.Plora_B.weight": "pytorch_model-00002-of-00002.bin",
392
+ "model.layers.29.feed_forward.w2.weight": "pytorch_model-00002-of-00002.bin",
393
+ "model.layers.29.feed_forward.w3.Plora_A.weight": "pytorch_model-00002-of-00002.bin",
394
+ "model.layers.29.feed_forward.w3.Plora_B.weight": "pytorch_model-00002-of-00002.bin",
395
+ "model.layers.29.feed_forward.w3.weight": "pytorch_model-00002-of-00002.bin",
396
+ "model.layers.29.ffn_norm.weight": "pytorch_model-00002-of-00002.bin",
397
+ "model.layers.3.attention.wo.Plora_A.weight": "pytorch_model-00001-of-00002.bin",
398
+ "model.layers.3.attention.wo.Plora_B.weight": "pytorch_model-00001-of-00002.bin",
399
+ "model.layers.3.attention.wo.weight": "pytorch_model-00001-of-00002.bin",
400
+ "model.layers.3.attention.wqkv.Plora_A.weight": "pytorch_model-00001-of-00002.bin",
401
+ "model.layers.3.attention.wqkv.Plora_B.weight": "pytorch_model-00001-of-00002.bin",
402
+ "model.layers.3.attention.wqkv.weight": "pytorch_model-00001-of-00002.bin",
403
+ "model.layers.3.attention_norm.weight": "pytorch_model-00001-of-00002.bin",
404
+ "model.layers.3.feed_forward.w1.Plora_A.weight": "pytorch_model-00001-of-00002.bin",
405
+ "model.layers.3.feed_forward.w1.Plora_B.weight": "pytorch_model-00001-of-00002.bin",
406
+ "model.layers.3.feed_forward.w1.weight": "pytorch_model-00001-of-00002.bin",
407
+ "model.layers.3.feed_forward.w2.Plora_A.weight": "pytorch_model-00001-of-00002.bin",
408
+ "model.layers.3.feed_forward.w2.Plora_B.weight": "pytorch_model-00001-of-00002.bin",
409
+ "model.layers.3.feed_forward.w2.weight": "pytorch_model-00001-of-00002.bin",
410
+ "model.layers.3.feed_forward.w3.Plora_A.weight": "pytorch_model-00001-of-00002.bin",
411
+ "model.layers.3.feed_forward.w3.Plora_B.weight": "pytorch_model-00001-of-00002.bin",
412
+ "model.layers.3.feed_forward.w3.weight": "pytorch_model-00001-of-00002.bin",
413
+ "model.layers.3.ffn_norm.weight": "pytorch_model-00001-of-00002.bin",
414
+ "model.layers.30.attention.wo.Plora_A.weight": "pytorch_model-00002-of-00002.bin",
415
+ "model.layers.30.attention.wo.Plora_B.weight": "pytorch_model-00002-of-00002.bin",
416
+ "model.layers.30.attention.wo.weight": "pytorch_model-00002-of-00002.bin",
417
+ "model.layers.30.attention.wqkv.Plora_A.weight": "pytorch_model-00002-of-00002.bin",
418
+ "model.layers.30.attention.wqkv.Plora_B.weight": "pytorch_model-00002-of-00002.bin",
419
+ "model.layers.30.attention.wqkv.weight": "pytorch_model-00002-of-00002.bin",
420
+ "model.layers.30.attention_norm.weight": "pytorch_model-00002-of-00002.bin",
421
+ "model.layers.30.feed_forward.w1.Plora_A.weight": "pytorch_model-00002-of-00002.bin",
422
+ "model.layers.30.feed_forward.w1.Plora_B.weight": "pytorch_model-00002-of-00002.bin",
423
+ "model.layers.30.feed_forward.w1.weight": "pytorch_model-00002-of-00002.bin",
424
+ "model.layers.30.feed_forward.w2.Plora_A.weight": "pytorch_model-00002-of-00002.bin",
425
+ "model.layers.30.feed_forward.w2.Plora_B.weight": "pytorch_model-00002-of-00002.bin",
426
+ "model.layers.30.feed_forward.w2.weight": "pytorch_model-00002-of-00002.bin",
427
+ "model.layers.30.feed_forward.w3.Plora_A.weight": "pytorch_model-00002-of-00002.bin",
428
+ "model.layers.30.feed_forward.w3.Plora_B.weight": "pytorch_model-00002-of-00002.bin",
429
+ "model.layers.30.feed_forward.w3.weight": "pytorch_model-00002-of-00002.bin",
430
+ "model.layers.30.ffn_norm.weight": "pytorch_model-00002-of-00002.bin",
431
+ "model.layers.31.attention.wo.Plora_A.weight": "pytorch_model-00002-of-00002.bin",
432
+ "model.layers.31.attention.wo.Plora_B.weight": "pytorch_model-00002-of-00002.bin",
433
+ "model.layers.31.attention.wo.weight": "pytorch_model-00002-of-00002.bin",
434
+ "model.layers.31.attention.wqkv.Plora_A.weight": "pytorch_model-00002-of-00002.bin",
435
+ "model.layers.31.attention.wqkv.Plora_B.weight": "pytorch_model-00002-of-00002.bin",
436
+ "model.layers.31.attention.wqkv.weight": "pytorch_model-00002-of-00002.bin",
437
+ "model.layers.31.attention_norm.weight": "pytorch_model-00002-of-00002.bin",
438
+ "model.layers.31.feed_forward.w1.Plora_A.weight": "pytorch_model-00002-of-00002.bin",
439
+ "model.layers.31.feed_forward.w1.Plora_B.weight": "pytorch_model-00002-of-00002.bin",
440
+ "model.layers.31.feed_forward.w1.weight": "pytorch_model-00002-of-00002.bin",
441
+ "model.layers.31.feed_forward.w2.Plora_A.weight": "pytorch_model-00002-of-00002.bin",
442
+ "model.layers.31.feed_forward.w2.Plora_B.weight": "pytorch_model-00002-of-00002.bin",
443
+ "model.layers.31.feed_forward.w2.weight": "pytorch_model-00002-of-00002.bin",
444
+ "model.layers.31.feed_forward.w3.Plora_A.weight": "pytorch_model-00002-of-00002.bin",
445
+ "model.layers.31.feed_forward.w3.Plora_B.weight": "pytorch_model-00002-of-00002.bin",
446
+ "model.layers.31.feed_forward.w3.weight": "pytorch_model-00002-of-00002.bin",
447
+ "model.layers.31.ffn_norm.weight": "pytorch_model-00002-of-00002.bin",
448
+ "model.layers.4.attention.wo.Plora_A.weight": "pytorch_model-00001-of-00002.bin",
449
+ "model.layers.4.attention.wo.Plora_B.weight": "pytorch_model-00001-of-00002.bin",
450
+ "model.layers.4.attention.wo.weight": "pytorch_model-00001-of-00002.bin",
451
+ "model.layers.4.attention.wqkv.Plora_A.weight": "pytorch_model-00001-of-00002.bin",
452
+ "model.layers.4.attention.wqkv.Plora_B.weight": "pytorch_model-00001-of-00002.bin",
453
+ "model.layers.4.attention.wqkv.weight": "pytorch_model-00001-of-00002.bin",
454
+ "model.layers.4.attention_norm.weight": "pytorch_model-00001-of-00002.bin",
455
+ "model.layers.4.feed_forward.w1.Plora_A.weight": "pytorch_model-00001-of-00002.bin",
456
+ "model.layers.4.feed_forward.w1.Plora_B.weight": "pytorch_model-00001-of-00002.bin",
457
+ "model.layers.4.feed_forward.w1.weight": "pytorch_model-00001-of-00002.bin",
458
+ "model.layers.4.feed_forward.w2.Plora_A.weight": "pytorch_model-00001-of-00002.bin",
459
+ "model.layers.4.feed_forward.w2.Plora_B.weight": "pytorch_model-00001-of-00002.bin",
460
+ "model.layers.4.feed_forward.w2.weight": "pytorch_model-00001-of-00002.bin",
461
+ "model.layers.4.feed_forward.w3.Plora_A.weight": "pytorch_model-00001-of-00002.bin",
462
+ "model.layers.4.feed_forward.w3.Plora_B.weight": "pytorch_model-00001-of-00002.bin",
463
+ "model.layers.4.feed_forward.w3.weight": "pytorch_model-00001-of-00002.bin",
464
+ "model.layers.4.ffn_norm.weight": "pytorch_model-00001-of-00002.bin",
465
+ "model.layers.5.attention.wo.Plora_A.weight": "pytorch_model-00001-of-00002.bin",
466
+ "model.layers.5.attention.wo.Plora_B.weight": "pytorch_model-00001-of-00002.bin",
467
+ "model.layers.5.attention.wo.weight": "pytorch_model-00001-of-00002.bin",
468
+ "model.layers.5.attention.wqkv.Plora_A.weight": "pytorch_model-00001-of-00002.bin",
469
+ "model.layers.5.attention.wqkv.Plora_B.weight": "pytorch_model-00001-of-00002.bin",
470
+ "model.layers.5.attention.wqkv.weight": "pytorch_model-00001-of-00002.bin",
471
+ "model.layers.5.attention_norm.weight": "pytorch_model-00001-of-00002.bin",
472
+ "model.layers.5.feed_forward.w1.Plora_A.weight": "pytorch_model-00001-of-00002.bin",
473
+ "model.layers.5.feed_forward.w1.Plora_B.weight": "pytorch_model-00001-of-00002.bin",
474
+ "model.layers.5.feed_forward.w1.weight": "pytorch_model-00001-of-00002.bin",
475
+ "model.layers.5.feed_forward.w2.Plora_A.weight": "pytorch_model-00001-of-00002.bin",
476
+ "model.layers.5.feed_forward.w2.Plora_B.weight": "pytorch_model-00001-of-00002.bin",
477
+ "model.layers.5.feed_forward.w2.weight": "pytorch_model-00001-of-00002.bin",
478
+ "model.layers.5.feed_forward.w3.Plora_A.weight": "pytorch_model-00001-of-00002.bin",
479
+ "model.layers.5.feed_forward.w3.Plora_B.weight": "pytorch_model-00001-of-00002.bin",
480
+ "model.layers.5.feed_forward.w3.weight": "pytorch_model-00001-of-00002.bin",
481
+ "model.layers.5.ffn_norm.weight": "pytorch_model-00001-of-00002.bin",
482
+ "model.layers.6.attention.wo.Plora_A.weight": "pytorch_model-00001-of-00002.bin",
483
+ "model.layers.6.attention.wo.Plora_B.weight": "pytorch_model-00001-of-00002.bin",
484
+ "model.layers.6.attention.wo.weight": "pytorch_model-00001-of-00002.bin",
485
+ "model.layers.6.attention.wqkv.Plora_A.weight": "pytorch_model-00001-of-00002.bin",
486
+ "model.layers.6.attention.wqkv.Plora_B.weight": "pytorch_model-00001-of-00002.bin",
487
+ "model.layers.6.attention.wqkv.weight": "pytorch_model-00001-of-00002.bin",
488
+ "model.layers.6.attention_norm.weight": "pytorch_model-00001-of-00002.bin",
489
+ "model.layers.6.feed_forward.w1.Plora_A.weight": "pytorch_model-00001-of-00002.bin",
490
+ "model.layers.6.feed_forward.w1.Plora_B.weight": "pytorch_model-00001-of-00002.bin",
491
+ "model.layers.6.feed_forward.w1.weight": "pytorch_model-00001-of-00002.bin",
492
+ "model.layers.6.feed_forward.w2.Plora_A.weight": "pytorch_model-00001-of-00002.bin",
493
+ "model.layers.6.feed_forward.w2.Plora_B.weight": "pytorch_model-00001-of-00002.bin",
494
+ "model.layers.6.feed_forward.w2.weight": "pytorch_model-00001-of-00002.bin",
495
+ "model.layers.6.feed_forward.w3.Plora_A.weight": "pytorch_model-00001-of-00002.bin",
496
+ "model.layers.6.feed_forward.w3.Plora_B.weight": "pytorch_model-00001-of-00002.bin",
497
+ "model.layers.6.feed_forward.w3.weight": "pytorch_model-00001-of-00002.bin",
498
+ "model.layers.6.ffn_norm.weight": "pytorch_model-00001-of-00002.bin",
499
+ "model.layers.7.attention.wo.Plora_A.weight": "pytorch_model-00001-of-00002.bin",
500
+ "model.layers.7.attention.wo.Plora_B.weight": "pytorch_model-00001-of-00002.bin",
501
+ "model.layers.7.attention.wo.weight": "pytorch_model-00001-of-00002.bin",
502
+ "model.layers.7.attention.wqkv.Plora_A.weight": "pytorch_model-00001-of-00002.bin",
503
+ "model.layers.7.attention.wqkv.Plora_B.weight": "pytorch_model-00001-of-00002.bin",
504
+ "model.layers.7.attention.wqkv.weight": "pytorch_model-00001-of-00002.bin",
505
+ "model.layers.7.attention_norm.weight": "pytorch_model-00001-of-00002.bin",
506
+ "model.layers.7.feed_forward.w1.Plora_A.weight": "pytorch_model-00001-of-00002.bin",
507
+ "model.layers.7.feed_forward.w1.Plora_B.weight": "pytorch_model-00001-of-00002.bin",
508
+ "model.layers.7.feed_forward.w1.weight": "pytorch_model-00001-of-00002.bin",
509
+ "model.layers.7.feed_forward.w2.Plora_A.weight": "pytorch_model-00001-of-00002.bin",
510
+ "model.layers.7.feed_forward.w2.Plora_B.weight": "pytorch_model-00001-of-00002.bin",
511
+ "model.layers.7.feed_forward.w2.weight": "pytorch_model-00001-of-00002.bin",
512
+ "model.layers.7.feed_forward.w3.Plora_A.weight": "pytorch_model-00001-of-00002.bin",
513
+ "model.layers.7.feed_forward.w3.Plora_B.weight": "pytorch_model-00001-of-00002.bin",
514
+ "model.layers.7.feed_forward.w3.weight": "pytorch_model-00001-of-00002.bin",
515
+ "model.layers.7.ffn_norm.weight": "pytorch_model-00001-of-00002.bin",
516
+ "model.layers.8.attention.wo.Plora_A.weight": "pytorch_model-00001-of-00002.bin",
517
+ "model.layers.8.attention.wo.Plora_B.weight": "pytorch_model-00001-of-00002.bin",
518
+ "model.layers.8.attention.wo.weight": "pytorch_model-00001-of-00002.bin",
519
+ "model.layers.8.attention.wqkv.Plora_A.weight": "pytorch_model-00001-of-00002.bin",
520
+ "model.layers.8.attention.wqkv.Plora_B.weight": "pytorch_model-00001-of-00002.bin",
521
+ "model.layers.8.attention.wqkv.weight": "pytorch_model-00001-of-00002.bin",
522
+ "model.layers.8.attention_norm.weight": "pytorch_model-00001-of-00002.bin",
523
+ "model.layers.8.feed_forward.w1.Plora_A.weight": "pytorch_model-00001-of-00002.bin",
524
+ "model.layers.8.feed_forward.w1.Plora_B.weight": "pytorch_model-00001-of-00002.bin",
525
+ "model.layers.8.feed_forward.w1.weight": "pytorch_model-00001-of-00002.bin",
526
+ "model.layers.8.feed_forward.w2.Plora_A.weight": "pytorch_model-00001-of-00002.bin",
527
+ "model.layers.8.feed_forward.w2.Plora_B.weight": "pytorch_model-00001-of-00002.bin",
528
+ "model.layers.8.feed_forward.w2.weight": "pytorch_model-00001-of-00002.bin",
529
+ "model.layers.8.feed_forward.w3.Plora_A.weight": "pytorch_model-00001-of-00002.bin",
530
+ "model.layers.8.feed_forward.w3.Plora_B.weight": "pytorch_model-00001-of-00002.bin",
531
+ "model.layers.8.feed_forward.w3.weight": "pytorch_model-00001-of-00002.bin",
532
+ "model.layers.8.ffn_norm.weight": "pytorch_model-00001-of-00002.bin",
533
+ "model.layers.9.attention.wo.Plora_A.weight": "pytorch_model-00001-of-00002.bin",
534
+ "model.layers.9.attention.wo.Plora_B.weight": "pytorch_model-00001-of-00002.bin",
535
+ "model.layers.9.attention.wo.weight": "pytorch_model-00001-of-00002.bin",
536
+ "model.layers.9.attention.wqkv.Plora_A.weight": "pytorch_model-00001-of-00002.bin",
537
+ "model.layers.9.attention.wqkv.Plora_B.weight": "pytorch_model-00001-of-00002.bin",
538
+ "model.layers.9.attention.wqkv.weight": "pytorch_model-00001-of-00002.bin",
539
+ "model.layers.9.attention_norm.weight": "pytorch_model-00001-of-00002.bin",
540
+ "model.layers.9.feed_forward.w1.Plora_A.weight": "pytorch_model-00001-of-00002.bin",
541
+ "model.layers.9.feed_forward.w1.Plora_B.weight": "pytorch_model-00001-of-00002.bin",
542
+ "model.layers.9.feed_forward.w1.weight": "pytorch_model-00001-of-00002.bin",
543
+ "model.layers.9.feed_forward.w2.Plora_A.weight": "pytorch_model-00001-of-00002.bin",
544
+ "model.layers.9.feed_forward.w2.Plora_B.weight": "pytorch_model-00001-of-00002.bin",
545
+ "model.layers.9.feed_forward.w2.weight": "pytorch_model-00001-of-00002.bin",
546
+ "model.layers.9.feed_forward.w3.Plora_A.weight": "pytorch_model-00001-of-00002.bin",
547
+ "model.layers.9.feed_forward.w3.Plora_B.weight": "pytorch_model-00001-of-00002.bin",
548
+ "model.layers.9.feed_forward.w3.weight": "pytorch_model-00001-of-00002.bin",
549
+ "model.layers.9.ffn_norm.weight": "pytorch_model-00001-of-00002.bin",
550
+ "model.norm.weight": "pytorch_model-00002-of-00002.bin",
551
+ "model.tok_embeddings.weight": "pytorch_model-00001-of-00002.bin",
552
+ "output.weight": "pytorch_model-00002-of-00002.bin",
553
+ "vision_proj.0.bias": "pytorch_model-00002-of-00002.bin",
554
+ "vision_proj.0.weight": "pytorch_model-00002-of-00002.bin",
555
+ "vision_proj.2.bias": "pytorch_model-00002-of-00002.bin",
556
+ "vision_proj.2.weight": "pytorch_model-00002-of-00002.bin",
557
+ "vit.vision_tower.vision_model.embeddings.class_embedding": "pytorch_model-00002-of-00002.bin",
558
+ "vit.vision_tower.vision_model.embeddings.patch_embedding.weight": "pytorch_model-00002-of-00002.bin",
559
+ "vit.vision_tower.vision_model.embeddings.position_embedding.weight": "pytorch_model-00002-of-00002.bin",
560
+ "vit.vision_tower.vision_model.encoder.layers.0.layer_norm1.bias": "pytorch_model-00002-of-00002.bin",
561
+ "vit.vision_tower.vision_model.encoder.layers.0.layer_norm1.weight": "pytorch_model-00002-of-00002.bin",
562
+ "vit.vision_tower.vision_model.encoder.layers.0.layer_norm2.bias": "pytorch_model-00002-of-00002.bin",
563
+ "vit.vision_tower.vision_model.encoder.layers.0.layer_norm2.weight": "pytorch_model-00002-of-00002.bin",
564
+ "vit.vision_tower.vision_model.encoder.layers.0.mlp.fc1.bias": "pytorch_model-00002-of-00002.bin",
565
+ "vit.vision_tower.vision_model.encoder.layers.0.mlp.fc1.weight": "pytorch_model-00002-of-00002.bin",
566
+ "vit.vision_tower.vision_model.encoder.layers.0.mlp.fc2.bias": "pytorch_model-00002-of-00002.bin",
567
+ "vit.vision_tower.vision_model.encoder.layers.0.mlp.fc2.weight": "pytorch_model-00002-of-00002.bin",
568
+ "vit.vision_tower.vision_model.encoder.layers.0.self_attn.k_proj.bias": "pytorch_model-00002-of-00002.bin",
569
+ "vit.vision_tower.vision_model.encoder.layers.0.self_attn.k_proj.weight": "pytorch_model-00002-of-00002.bin",
570
+ "vit.vision_tower.vision_model.encoder.layers.0.self_attn.out_proj.bias": "pytorch_model-00002-of-00002.bin",
571
+ "vit.vision_tower.vision_model.encoder.layers.0.self_attn.out_proj.weight": "pytorch_model-00002-of-00002.bin",
572
+ "vit.vision_tower.vision_model.encoder.layers.0.self_attn.q_proj.bias": "pytorch_model-00002-of-00002.bin",
573
+ "vit.vision_tower.vision_model.encoder.layers.0.self_attn.q_proj.weight": "pytorch_model-00002-of-00002.bin",
574
+ "vit.vision_tower.vision_model.encoder.layers.0.self_attn.v_proj.bias": "pytorch_model-00002-of-00002.bin",
575
+ "vit.vision_tower.vision_model.encoder.layers.0.self_attn.v_proj.weight": "pytorch_model-00002-of-00002.bin",
576
+ "vit.vision_tower.vision_model.encoder.layers.1.layer_norm1.bias": "pytorch_model-00002-of-00002.bin",
577
+ "vit.vision_tower.vision_model.encoder.layers.1.layer_norm1.weight": "pytorch_model-00002-of-00002.bin",
578
+ "vit.vision_tower.vision_model.encoder.layers.1.layer_norm2.bias": "pytorch_model-00002-of-00002.bin",
579
+ "vit.vision_tower.vision_model.encoder.layers.1.layer_norm2.weight": "pytorch_model-00002-of-00002.bin",
580
+ "vit.vision_tower.vision_model.encoder.layers.1.mlp.fc1.bias": "pytorch_model-00002-of-00002.bin",
581
+ "vit.vision_tower.vision_model.encoder.layers.1.mlp.fc1.weight": "pytorch_model-00002-of-00002.bin",
582
+ "vit.vision_tower.vision_model.encoder.layers.1.mlp.fc2.bias": "pytorch_model-00002-of-00002.bin",
583
+ "vit.vision_tower.vision_model.encoder.layers.1.mlp.fc2.weight": "pytorch_model-00002-of-00002.bin",
584
+ "vit.vision_tower.vision_model.encoder.layers.1.self_attn.k_proj.bias": "pytorch_model-00002-of-00002.bin",
585
+ "vit.vision_tower.vision_model.encoder.layers.1.self_attn.k_proj.weight": "pytorch_model-00002-of-00002.bin",
586
+ "vit.vision_tower.vision_model.encoder.layers.1.self_attn.out_proj.bias": "pytorch_model-00002-of-00002.bin",
587
+ "vit.vision_tower.vision_model.encoder.layers.1.self_attn.out_proj.weight": "pytorch_model-00002-of-00002.bin",
588
+ "vit.vision_tower.vision_model.encoder.layers.1.self_attn.q_proj.bias": "pytorch_model-00002-of-00002.bin",
589
+ "vit.vision_tower.vision_model.encoder.layers.1.self_attn.q_proj.weight": "pytorch_model-00002-of-00002.bin",
590
+ "vit.vision_tower.vision_model.encoder.layers.1.self_attn.v_proj.bias": "pytorch_model-00002-of-00002.bin",
591
+ "vit.vision_tower.vision_model.encoder.layers.1.self_attn.v_proj.weight": "pytorch_model-00002-of-00002.bin",
592
+ "vit.vision_tower.vision_model.encoder.layers.10.layer_norm1.bias": "pytorch_model-00002-of-00002.bin",
593
+ "vit.vision_tower.vision_model.encoder.layers.10.layer_norm1.weight": "pytorch_model-00002-of-00002.bin",
594
+ "vit.vision_tower.vision_model.encoder.layers.10.layer_norm2.bias": "pytorch_model-00002-of-00002.bin",
595
+ "vit.vision_tower.vision_model.encoder.layers.10.layer_norm2.weight": "pytorch_model-00002-of-00002.bin",
596
+ "vit.vision_tower.vision_model.encoder.layers.10.mlp.fc1.bias": "pytorch_model-00002-of-00002.bin",
597
+ "vit.vision_tower.vision_model.encoder.layers.10.mlp.fc1.weight": "pytorch_model-00002-of-00002.bin",
598
+ "vit.vision_tower.vision_model.encoder.layers.10.mlp.fc2.bias": "pytorch_model-00002-of-00002.bin",
599
+ "vit.vision_tower.vision_model.encoder.layers.10.mlp.fc2.weight": "pytorch_model-00002-of-00002.bin",
600
+ "vit.vision_tower.vision_model.encoder.layers.10.self_attn.k_proj.bias": "pytorch_model-00002-of-00002.bin",
601
+ "vit.vision_tower.vision_model.encoder.layers.10.self_attn.k_proj.weight": "pytorch_model-00002-of-00002.bin",
602
+ "vit.vision_tower.vision_model.encoder.layers.10.self_attn.out_proj.bias": "pytorch_model-00002-of-00002.bin",
603
+ "vit.vision_tower.vision_model.encoder.layers.10.self_attn.out_proj.weight": "pytorch_model-00002-of-00002.bin",
604
+ "vit.vision_tower.vision_model.encoder.layers.10.self_attn.q_proj.bias": "pytorch_model-00002-of-00002.bin",
605
+ "vit.vision_tower.vision_model.encoder.layers.10.self_attn.q_proj.weight": "pytorch_model-00002-of-00002.bin",
606
+ "vit.vision_tower.vision_model.encoder.layers.10.self_attn.v_proj.bias": "pytorch_model-00002-of-00002.bin",
607
+ "vit.vision_tower.vision_model.encoder.layers.10.self_attn.v_proj.weight": "pytorch_model-00002-of-00002.bin",
608
+ "vit.vision_tower.vision_model.encoder.layers.11.layer_norm1.bias": "pytorch_model-00002-of-00002.bin",
609
+ "vit.vision_tower.vision_model.encoder.layers.11.layer_norm1.weight": "pytorch_model-00002-of-00002.bin",
610
+ "vit.vision_tower.vision_model.encoder.layers.11.layer_norm2.bias": "pytorch_model-00002-of-00002.bin",
611
+ "vit.vision_tower.vision_model.encoder.layers.11.layer_norm2.weight": "pytorch_model-00002-of-00002.bin",
612
+ "vit.vision_tower.vision_model.encoder.layers.11.mlp.fc1.bias": "pytorch_model-00002-of-00002.bin",
613
+ "vit.vision_tower.vision_model.encoder.layers.11.mlp.fc1.weight": "pytorch_model-00002-of-00002.bin",
614
+ "vit.vision_tower.vision_model.encoder.layers.11.mlp.fc2.bias": "pytorch_model-00002-of-00002.bin",
615
+ "vit.vision_tower.vision_model.encoder.layers.11.mlp.fc2.weight": "pytorch_model-00002-of-00002.bin",
616
+ "vit.vision_tower.vision_model.encoder.layers.11.self_attn.k_proj.bias": "pytorch_model-00002-of-00002.bin",
617
+ "vit.vision_tower.vision_model.encoder.layers.11.self_attn.k_proj.weight": "pytorch_model-00002-of-00002.bin",
618
+ "vit.vision_tower.vision_model.encoder.layers.11.self_attn.out_proj.bias": "pytorch_model-00002-of-00002.bin",
619
+ "vit.vision_tower.vision_model.encoder.layers.11.self_attn.out_proj.weight": "pytorch_model-00002-of-00002.bin",
620
+ "vit.vision_tower.vision_model.encoder.layers.11.self_attn.q_proj.bias": "pytorch_model-00002-of-00002.bin",
621
+ "vit.vision_tower.vision_model.encoder.layers.11.self_attn.q_proj.weight": "pytorch_model-00002-of-00002.bin",
622
+ "vit.vision_tower.vision_model.encoder.layers.11.self_attn.v_proj.bias": "pytorch_model-00002-of-00002.bin",
623
+ "vit.vision_tower.vision_model.encoder.layers.11.self_attn.v_proj.weight": "pytorch_model-00002-of-00002.bin",
624
+ "vit.vision_tower.vision_model.encoder.layers.12.layer_norm1.bias": "pytorch_model-00002-of-00002.bin",
625
+ "vit.vision_tower.vision_model.encoder.layers.12.layer_norm1.weight": "pytorch_model-00002-of-00002.bin",
626
+ "vit.vision_tower.vision_model.encoder.layers.12.layer_norm2.bias": "pytorch_model-00002-of-00002.bin",
627
+ "vit.vision_tower.vision_model.encoder.layers.12.layer_norm2.weight": "pytorch_model-00002-of-00002.bin",
628
+ "vit.vision_tower.vision_model.encoder.layers.12.mlp.fc1.bias": "pytorch_model-00002-of-00002.bin",
629
+ "vit.vision_tower.vision_model.encoder.layers.12.mlp.fc1.weight": "pytorch_model-00002-of-00002.bin",
630
+ "vit.vision_tower.vision_model.encoder.layers.12.mlp.fc2.bias": "pytorch_model-00002-of-00002.bin",
631
+ "vit.vision_tower.vision_model.encoder.layers.12.mlp.fc2.weight": "pytorch_model-00002-of-00002.bin",
632
+ "vit.vision_tower.vision_model.encoder.layers.12.self_attn.k_proj.bias": "pytorch_model-00002-of-00002.bin",
633
+ "vit.vision_tower.vision_model.encoder.layers.12.self_attn.k_proj.weight": "pytorch_model-00002-of-00002.bin",
634
+ "vit.vision_tower.vision_model.encoder.layers.12.self_attn.out_proj.bias": "pytorch_model-00002-of-00002.bin",
635
+ "vit.vision_tower.vision_model.encoder.layers.12.self_attn.out_proj.weight": "pytorch_model-00002-of-00002.bin",
636
+ "vit.vision_tower.vision_model.encoder.layers.12.self_attn.q_proj.bias": "pytorch_model-00002-of-00002.bin",
637
+ "vit.vision_tower.vision_model.encoder.layers.12.self_attn.q_proj.weight": "pytorch_model-00002-of-00002.bin",
638
+ "vit.vision_tower.vision_model.encoder.layers.12.self_attn.v_proj.bias": "pytorch_model-00002-of-00002.bin",
639
+ "vit.vision_tower.vision_model.encoder.layers.12.self_attn.v_proj.weight": "pytorch_model-00002-of-00002.bin",
640
+ "vit.vision_tower.vision_model.encoder.layers.13.layer_norm1.bias": "pytorch_model-00002-of-00002.bin",
641
+ "vit.vision_tower.vision_model.encoder.layers.13.layer_norm1.weight": "pytorch_model-00002-of-00002.bin",
642
+ "vit.vision_tower.vision_model.encoder.layers.13.layer_norm2.bias": "pytorch_model-00002-of-00002.bin",
643
+ "vit.vision_tower.vision_model.encoder.layers.13.layer_norm2.weight": "pytorch_model-00002-of-00002.bin",
644
+ "vit.vision_tower.vision_model.encoder.layers.13.mlp.fc1.bias": "pytorch_model-00002-of-00002.bin",
645
+ "vit.vision_tower.vision_model.encoder.layers.13.mlp.fc1.weight": "pytorch_model-00002-of-00002.bin",
646
+ "vit.vision_tower.vision_model.encoder.layers.13.mlp.fc2.bias": "pytorch_model-00002-of-00002.bin",
647
+ "vit.vision_tower.vision_model.encoder.layers.13.mlp.fc2.weight": "pytorch_model-00002-of-00002.bin",
648
+ "vit.vision_tower.vision_model.encoder.layers.13.self_attn.k_proj.bias": "pytorch_model-00002-of-00002.bin",
649
+ "vit.vision_tower.vision_model.encoder.layers.13.self_attn.k_proj.weight": "pytorch_model-00002-of-00002.bin",
650
+ "vit.vision_tower.vision_model.encoder.layers.13.self_attn.out_proj.bias": "pytorch_model-00002-of-00002.bin",
651
+ "vit.vision_tower.vision_model.encoder.layers.13.self_attn.out_proj.weight": "pytorch_model-00002-of-00002.bin",
652
+ "vit.vision_tower.vision_model.encoder.layers.13.self_attn.q_proj.bias": "pytorch_model-00002-of-00002.bin",
653
+ "vit.vision_tower.vision_model.encoder.layers.13.self_attn.q_proj.weight": "pytorch_model-00002-of-00002.bin",
654
+ "vit.vision_tower.vision_model.encoder.layers.13.self_attn.v_proj.bias": "pytorch_model-00002-of-00002.bin",
655
+ "vit.vision_tower.vision_model.encoder.layers.13.self_attn.v_proj.weight": "pytorch_model-00002-of-00002.bin",
656
+ "vit.vision_tower.vision_model.encoder.layers.14.layer_norm1.bias": "pytorch_model-00002-of-00002.bin",
657
+ "vit.vision_tower.vision_model.encoder.layers.14.layer_norm1.weight": "pytorch_model-00002-of-00002.bin",
658
+ "vit.vision_tower.vision_model.encoder.layers.14.layer_norm2.bias": "pytorch_model-00002-of-00002.bin",
659
+ "vit.vision_tower.vision_model.encoder.layers.14.layer_norm2.weight": "pytorch_model-00002-of-00002.bin",
660
+ "vit.vision_tower.vision_model.encoder.layers.14.mlp.fc1.bias": "pytorch_model-00002-of-00002.bin",
661
+ "vit.vision_tower.vision_model.encoder.layers.14.mlp.fc1.weight": "pytorch_model-00002-of-00002.bin",
662
+ "vit.vision_tower.vision_model.encoder.layers.14.mlp.fc2.bias": "pytorch_model-00002-of-00002.bin",
663
+ "vit.vision_tower.vision_model.encoder.layers.14.mlp.fc2.weight": "pytorch_model-00002-of-00002.bin",
664
+ "vit.vision_tower.vision_model.encoder.layers.14.self_attn.k_proj.bias": "pytorch_model-00002-of-00002.bin",
665
+ "vit.vision_tower.vision_model.encoder.layers.14.self_attn.k_proj.weight": "pytorch_model-00002-of-00002.bin",
666
+ "vit.vision_tower.vision_model.encoder.layers.14.self_attn.out_proj.bias": "pytorch_model-00002-of-00002.bin",
667
+ "vit.vision_tower.vision_model.encoder.layers.14.self_attn.out_proj.weight": "pytorch_model-00002-of-00002.bin",
668
+ "vit.vision_tower.vision_model.encoder.layers.14.self_attn.q_proj.bias": "pytorch_model-00002-of-00002.bin",
669
+ "vit.vision_tower.vision_model.encoder.layers.14.self_attn.q_proj.weight": "pytorch_model-00002-of-00002.bin",
670
+ "vit.vision_tower.vision_model.encoder.layers.14.self_attn.v_proj.bias": "pytorch_model-00002-of-00002.bin",
671
+ "vit.vision_tower.vision_model.encoder.layers.14.self_attn.v_proj.weight": "pytorch_model-00002-of-00002.bin",
672
+ "vit.vision_tower.vision_model.encoder.layers.15.layer_norm1.bias": "pytorch_model-00002-of-00002.bin",
673
+ "vit.vision_tower.vision_model.encoder.layers.15.layer_norm1.weight": "pytorch_model-00002-of-00002.bin",
674
+ "vit.vision_tower.vision_model.encoder.layers.15.layer_norm2.bias": "pytorch_model-00002-of-00002.bin",
675
+ "vit.vision_tower.vision_model.encoder.layers.15.layer_norm2.weight": "pytorch_model-00002-of-00002.bin",
676
+ "vit.vision_tower.vision_model.encoder.layers.15.mlp.fc1.bias": "pytorch_model-00002-of-00002.bin",
677
+ "vit.vision_tower.vision_model.encoder.layers.15.mlp.fc1.weight": "pytorch_model-00002-of-00002.bin",
678
+ "vit.vision_tower.vision_model.encoder.layers.15.mlp.fc2.bias": "pytorch_model-00002-of-00002.bin",
679
+ "vit.vision_tower.vision_model.encoder.layers.15.mlp.fc2.weight": "pytorch_model-00002-of-00002.bin",
680
+ "vit.vision_tower.vision_model.encoder.layers.15.self_attn.k_proj.bias": "pytorch_model-00002-of-00002.bin",
681
+ "vit.vision_tower.vision_model.encoder.layers.15.self_attn.k_proj.weight": "pytorch_model-00002-of-00002.bin",
682
+ "vit.vision_tower.vision_model.encoder.layers.15.self_attn.out_proj.bias": "pytorch_model-00002-of-00002.bin",
683
+ "vit.vision_tower.vision_model.encoder.layers.15.self_attn.out_proj.weight": "pytorch_model-00002-of-00002.bin",
684
+ "vit.vision_tower.vision_model.encoder.layers.15.self_attn.q_proj.bias": "pytorch_model-00002-of-00002.bin",
685
+ "vit.vision_tower.vision_model.encoder.layers.15.self_attn.q_proj.weight": "pytorch_model-00002-of-00002.bin",
686
+ "vit.vision_tower.vision_model.encoder.layers.15.self_attn.v_proj.bias": "pytorch_model-00002-of-00002.bin",
687
+ "vit.vision_tower.vision_model.encoder.layers.15.self_attn.v_proj.weight": "pytorch_model-00002-of-00002.bin",
688
+ "vit.vision_tower.vision_model.encoder.layers.16.layer_norm1.bias": "pytorch_model-00002-of-00002.bin",
689
+ "vit.vision_tower.vision_model.encoder.layers.16.layer_norm1.weight": "pytorch_model-00002-of-00002.bin",
690
+ "vit.vision_tower.vision_model.encoder.layers.16.layer_norm2.bias": "pytorch_model-00002-of-00002.bin",
691
+ "vit.vision_tower.vision_model.encoder.layers.16.layer_norm2.weight": "pytorch_model-00002-of-00002.bin",
692
+ "vit.vision_tower.vision_model.encoder.layers.16.mlp.fc1.bias": "pytorch_model-00002-of-00002.bin",
693
+ "vit.vision_tower.vision_model.encoder.layers.16.mlp.fc1.weight": "pytorch_model-00002-of-00002.bin",
694
+ "vit.vision_tower.vision_model.encoder.layers.16.mlp.fc2.bias": "pytorch_model-00002-of-00002.bin",
695
+ "vit.vision_tower.vision_model.encoder.layers.16.mlp.fc2.weight": "pytorch_model-00002-of-00002.bin",
696
+ "vit.vision_tower.vision_model.encoder.layers.16.self_attn.k_proj.bias": "pytorch_model-00002-of-00002.bin",
697
+ "vit.vision_tower.vision_model.encoder.layers.16.self_attn.k_proj.weight": "pytorch_model-00002-of-00002.bin",
698
+ "vit.vision_tower.vision_model.encoder.layers.16.self_attn.out_proj.bias": "pytorch_model-00002-of-00002.bin",
699
+ "vit.vision_tower.vision_model.encoder.layers.16.self_attn.out_proj.weight": "pytorch_model-00002-of-00002.bin",
700
+ "vit.vision_tower.vision_model.encoder.layers.16.self_attn.q_proj.bias": "pytorch_model-00002-of-00002.bin",
701
+ "vit.vision_tower.vision_model.encoder.layers.16.self_attn.q_proj.weight": "pytorch_model-00002-of-00002.bin",
702
+ "vit.vision_tower.vision_model.encoder.layers.16.self_attn.v_proj.bias": "pytorch_model-00002-of-00002.bin",
703
+ "vit.vision_tower.vision_model.encoder.layers.16.self_attn.v_proj.weight": "pytorch_model-00002-of-00002.bin",
704
+ "vit.vision_tower.vision_model.encoder.layers.17.layer_norm1.bias": "pytorch_model-00002-of-00002.bin",
705
+ "vit.vision_tower.vision_model.encoder.layers.17.layer_norm1.weight": "pytorch_model-00002-of-00002.bin",
706
+ "vit.vision_tower.vision_model.encoder.layers.17.layer_norm2.bias": "pytorch_model-00002-of-00002.bin",
707
+ "vit.vision_tower.vision_model.encoder.layers.17.layer_norm2.weight": "pytorch_model-00002-of-00002.bin",
708
+ "vit.vision_tower.vision_model.encoder.layers.17.mlp.fc1.bias": "pytorch_model-00002-of-00002.bin",
709
+ "vit.vision_tower.vision_model.encoder.layers.17.mlp.fc1.weight": "pytorch_model-00002-of-00002.bin",
710
+ "vit.vision_tower.vision_model.encoder.layers.17.mlp.fc2.bias": "pytorch_model-00002-of-00002.bin",
711
+ "vit.vision_tower.vision_model.encoder.layers.17.mlp.fc2.weight": "pytorch_model-00002-of-00002.bin",
712
+ "vit.vision_tower.vision_model.encoder.layers.17.self_attn.k_proj.bias": "pytorch_model-00002-of-00002.bin",
713
+ "vit.vision_tower.vision_model.encoder.layers.17.self_attn.k_proj.weight": "pytorch_model-00002-of-00002.bin",
714
+ "vit.vision_tower.vision_model.encoder.layers.17.self_attn.out_proj.bias": "pytorch_model-00002-of-00002.bin",
715
+ "vit.vision_tower.vision_model.encoder.layers.17.self_attn.out_proj.weight": "pytorch_model-00002-of-00002.bin",
716
+ "vit.vision_tower.vision_model.encoder.layers.17.self_attn.q_proj.bias": "pytorch_model-00002-of-00002.bin",
717
+ "vit.vision_tower.vision_model.encoder.layers.17.self_attn.q_proj.weight": "pytorch_model-00002-of-00002.bin",
718
+ "vit.vision_tower.vision_model.encoder.layers.17.self_attn.v_proj.bias": "pytorch_model-00002-of-00002.bin",
719
+ "vit.vision_tower.vision_model.encoder.layers.17.self_attn.v_proj.weight": "pytorch_model-00002-of-00002.bin",
720
+ "vit.vision_tower.vision_model.encoder.layers.18.layer_norm1.bias": "pytorch_model-00002-of-00002.bin",
721
+ "vit.vision_tower.vision_model.encoder.layers.18.layer_norm1.weight": "pytorch_model-00002-of-00002.bin",
722
+ "vit.vision_tower.vision_model.encoder.layers.18.layer_norm2.bias": "pytorch_model-00002-of-00002.bin",
723
+ "vit.vision_tower.vision_model.encoder.layers.18.layer_norm2.weight": "pytorch_model-00002-of-00002.bin",
724
+ "vit.vision_tower.vision_model.encoder.layers.18.mlp.fc1.bias": "pytorch_model-00002-of-00002.bin",
725
+ "vit.vision_tower.vision_model.encoder.layers.18.mlp.fc1.weight": "pytorch_model-00002-of-00002.bin",
726
+ "vit.vision_tower.vision_model.encoder.layers.18.mlp.fc2.bias": "pytorch_model-00002-of-00002.bin",
727
+ "vit.vision_tower.vision_model.encoder.layers.18.mlp.fc2.weight": "pytorch_model-00002-of-00002.bin",
728
+ "vit.vision_tower.vision_model.encoder.layers.18.self_attn.k_proj.bias": "pytorch_model-00002-of-00002.bin",
729
+ "vit.vision_tower.vision_model.encoder.layers.18.self_attn.k_proj.weight": "pytorch_model-00002-of-00002.bin",
730
+ "vit.vision_tower.vision_model.encoder.layers.18.self_attn.out_proj.bias": "pytorch_model-00002-of-00002.bin",
731
+ "vit.vision_tower.vision_model.encoder.layers.18.self_attn.out_proj.weight": "pytorch_model-00002-of-00002.bin",
732
+ "vit.vision_tower.vision_model.encoder.layers.18.self_attn.q_proj.bias": "pytorch_model-00002-of-00002.bin",
733
+ "vit.vision_tower.vision_model.encoder.layers.18.self_attn.q_proj.weight": "pytorch_model-00002-of-00002.bin",
734
+ "vit.vision_tower.vision_model.encoder.layers.18.self_attn.v_proj.bias": "pytorch_model-00002-of-00002.bin",
735
+ "vit.vision_tower.vision_model.encoder.layers.18.self_attn.v_proj.weight": "pytorch_model-00002-of-00002.bin",
736
+ "vit.vision_tower.vision_model.encoder.layers.19.layer_norm1.bias": "pytorch_model-00002-of-00002.bin",
737
+ "vit.vision_tower.vision_model.encoder.layers.19.layer_norm1.weight": "pytorch_model-00002-of-00002.bin",
738
+ "vit.vision_tower.vision_model.encoder.layers.19.layer_norm2.bias": "pytorch_model-00002-of-00002.bin",
739
+ "vit.vision_tower.vision_model.encoder.layers.19.layer_norm2.weight": "pytorch_model-00002-of-00002.bin",
740
+ "vit.vision_tower.vision_model.encoder.layers.19.mlp.fc1.bias": "pytorch_model-00002-of-00002.bin",
741
+ "vit.vision_tower.vision_model.encoder.layers.19.mlp.fc1.weight": "pytorch_model-00002-of-00002.bin",
742
+ "vit.vision_tower.vision_model.encoder.layers.19.mlp.fc2.bias": "pytorch_model-00002-of-00002.bin",
743
+ "vit.vision_tower.vision_model.encoder.layers.19.mlp.fc2.weight": "pytorch_model-00002-of-00002.bin",
744
+ "vit.vision_tower.vision_model.encoder.layers.19.self_attn.k_proj.bias": "pytorch_model-00002-of-00002.bin",
745
+ "vit.vision_tower.vision_model.encoder.layers.19.self_attn.k_proj.weight": "pytorch_model-00002-of-00002.bin",
746
+ "vit.vision_tower.vision_model.encoder.layers.19.self_attn.out_proj.bias": "pytorch_model-00002-of-00002.bin",
747
+ "vit.vision_tower.vision_model.encoder.layers.19.self_attn.out_proj.weight": "pytorch_model-00002-of-00002.bin",
748
+ "vit.vision_tower.vision_model.encoder.layers.19.self_attn.q_proj.bias": "pytorch_model-00002-of-00002.bin",
749
+ "vit.vision_tower.vision_model.encoder.layers.19.self_attn.q_proj.weight": "pytorch_model-00002-of-00002.bin",
750
+ "vit.vision_tower.vision_model.encoder.layers.19.self_attn.v_proj.bias": "pytorch_model-00002-of-00002.bin",
751
+ "vit.vision_tower.vision_model.encoder.layers.19.self_attn.v_proj.weight": "pytorch_model-00002-of-00002.bin",
752
+ "vit.vision_tower.vision_model.encoder.layers.2.layer_norm1.bias": "pytorch_model-00002-of-00002.bin",
753
+ "vit.vision_tower.vision_model.encoder.layers.2.layer_norm1.weight": "pytorch_model-00002-of-00002.bin",
754
+ "vit.vision_tower.vision_model.encoder.layers.2.layer_norm2.bias": "pytorch_model-00002-of-00002.bin",
755
+ "vit.vision_tower.vision_model.encoder.layers.2.layer_norm2.weight": "pytorch_model-00002-of-00002.bin",
756
+ "vit.vision_tower.vision_model.encoder.layers.2.mlp.fc1.bias": "pytorch_model-00002-of-00002.bin",
757
+ "vit.vision_tower.vision_model.encoder.layers.2.mlp.fc1.weight": "pytorch_model-00002-of-00002.bin",
758
+ "vit.vision_tower.vision_model.encoder.layers.2.mlp.fc2.bias": "pytorch_model-00002-of-00002.bin",
759
+ "vit.vision_tower.vision_model.encoder.layers.2.mlp.fc2.weight": "pytorch_model-00002-of-00002.bin",
760
+ "vit.vision_tower.vision_model.encoder.layers.2.self_attn.k_proj.bias": "pytorch_model-00002-of-00002.bin",
761
+ "vit.vision_tower.vision_model.encoder.layers.2.self_attn.k_proj.weight": "pytorch_model-00002-of-00002.bin",
762
+ "vit.vision_tower.vision_model.encoder.layers.2.self_attn.out_proj.bias": "pytorch_model-00002-of-00002.bin",
763
+ "vit.vision_tower.vision_model.encoder.layers.2.self_attn.out_proj.weight": "pytorch_model-00002-of-00002.bin",
764
+ "vit.vision_tower.vision_model.encoder.layers.2.self_attn.q_proj.bias": "pytorch_model-00002-of-00002.bin",
765
+ "vit.vision_tower.vision_model.encoder.layers.2.self_attn.q_proj.weight": "pytorch_model-00002-of-00002.bin",
766
+ "vit.vision_tower.vision_model.encoder.layers.2.self_attn.v_proj.bias": "pytorch_model-00002-of-00002.bin",
767
+ "vit.vision_tower.vision_model.encoder.layers.2.self_attn.v_proj.weight": "pytorch_model-00002-of-00002.bin",
768
+ "vit.vision_tower.vision_model.encoder.layers.20.layer_norm1.bias": "pytorch_model-00002-of-00002.bin",
769
+ "vit.vision_tower.vision_model.encoder.layers.20.layer_norm1.weight": "pytorch_model-00002-of-00002.bin",
770
+ "vit.vision_tower.vision_model.encoder.layers.20.layer_norm2.bias": "pytorch_model-00002-of-00002.bin",
771
+ "vit.vision_tower.vision_model.encoder.layers.20.layer_norm2.weight": "pytorch_model-00002-of-00002.bin",
772
+ "vit.vision_tower.vision_model.encoder.layers.20.mlp.fc1.bias": "pytorch_model-00002-of-00002.bin",
773
+ "vit.vision_tower.vision_model.encoder.layers.20.mlp.fc1.weight": "pytorch_model-00002-of-00002.bin",
774
+ "vit.vision_tower.vision_model.encoder.layers.20.mlp.fc2.bias": "pytorch_model-00002-of-00002.bin",
775
+ "vit.vision_tower.vision_model.encoder.layers.20.mlp.fc2.weight": "pytorch_model-00002-of-00002.bin",
776
+ "vit.vision_tower.vision_model.encoder.layers.20.self_attn.k_proj.bias": "pytorch_model-00002-of-00002.bin",
777
+ "vit.vision_tower.vision_model.encoder.layers.20.self_attn.k_proj.weight": "pytorch_model-00002-of-00002.bin",
778
+ "vit.vision_tower.vision_model.encoder.layers.20.self_attn.out_proj.bias": "pytorch_model-00002-of-00002.bin",
779
+ "vit.vision_tower.vision_model.encoder.layers.20.self_attn.out_proj.weight": "pytorch_model-00002-of-00002.bin",
780
+ "vit.vision_tower.vision_model.encoder.layers.20.self_attn.q_proj.bias": "pytorch_model-00002-of-00002.bin",
781
+ "vit.vision_tower.vision_model.encoder.layers.20.self_attn.q_proj.weight": "pytorch_model-00002-of-00002.bin",
782
+ "vit.vision_tower.vision_model.encoder.layers.20.self_attn.v_proj.bias": "pytorch_model-00002-of-00002.bin",
783
+ "vit.vision_tower.vision_model.encoder.layers.20.self_attn.v_proj.weight": "pytorch_model-00002-of-00002.bin",
784
+ "vit.vision_tower.vision_model.encoder.layers.21.layer_norm1.bias": "pytorch_model-00002-of-00002.bin",
785
+ "vit.vision_tower.vision_model.encoder.layers.21.layer_norm1.weight": "pytorch_model-00002-of-00002.bin",
786
+ "vit.vision_tower.vision_model.encoder.layers.21.layer_norm2.bias": "pytorch_model-00002-of-00002.bin",
787
+ "vit.vision_tower.vision_model.encoder.layers.21.layer_norm2.weight": "pytorch_model-00002-of-00002.bin",
788
+ "vit.vision_tower.vision_model.encoder.layers.21.mlp.fc1.bias": "pytorch_model-00002-of-00002.bin",
789
+ "vit.vision_tower.vision_model.encoder.layers.21.mlp.fc1.weight": "pytorch_model-00002-of-00002.bin",
790
+ "vit.vision_tower.vision_model.encoder.layers.21.mlp.fc2.bias": "pytorch_model-00002-of-00002.bin",
791
+ "vit.vision_tower.vision_model.encoder.layers.21.mlp.fc2.weight": "pytorch_model-00002-of-00002.bin",
792
+ "vit.vision_tower.vision_model.encoder.layers.21.self_attn.k_proj.bias": "pytorch_model-00002-of-00002.bin",
793
+ "vit.vision_tower.vision_model.encoder.layers.21.self_attn.k_proj.weight": "pytorch_model-00002-of-00002.bin",
794
+ "vit.vision_tower.vision_model.encoder.layers.21.self_attn.out_proj.bias": "pytorch_model-00002-of-00002.bin",
795
+ "vit.vision_tower.vision_model.encoder.layers.21.self_attn.out_proj.weight": "pytorch_model-00002-of-00002.bin",
796
+ "vit.vision_tower.vision_model.encoder.layers.21.self_attn.q_proj.bias": "pytorch_model-00002-of-00002.bin",
797
+ "vit.vision_tower.vision_model.encoder.layers.21.self_attn.q_proj.weight": "pytorch_model-00002-of-00002.bin",
798
+ "vit.vision_tower.vision_model.encoder.layers.21.self_attn.v_proj.bias": "pytorch_model-00002-of-00002.bin",
799
+ "vit.vision_tower.vision_model.encoder.layers.21.self_attn.v_proj.weight": "pytorch_model-00002-of-00002.bin",
800
+ "vit.vision_tower.vision_model.encoder.layers.22.layer_norm1.bias": "pytorch_model-00002-of-00002.bin",
801
+ "vit.vision_tower.vision_model.encoder.layers.22.layer_norm1.weight": "pytorch_model-00002-of-00002.bin",
802
+ "vit.vision_tower.vision_model.encoder.layers.22.layer_norm2.bias": "pytorch_model-00002-of-00002.bin",
803
+ "vit.vision_tower.vision_model.encoder.layers.22.layer_norm2.weight": "pytorch_model-00002-of-00002.bin",
804
+ "vit.vision_tower.vision_model.encoder.layers.22.mlp.fc1.bias": "pytorch_model-00002-of-00002.bin",
805
+ "vit.vision_tower.vision_model.encoder.layers.22.mlp.fc1.weight": "pytorch_model-00002-of-00002.bin",
806
+ "vit.vision_tower.vision_model.encoder.layers.22.mlp.fc2.bias": "pytorch_model-00002-of-00002.bin",
807
+ "vit.vision_tower.vision_model.encoder.layers.22.mlp.fc2.weight": "pytorch_model-00002-of-00002.bin",
808
+ "vit.vision_tower.vision_model.encoder.layers.22.self_attn.k_proj.bias": "pytorch_model-00002-of-00002.bin",
809
+ "vit.vision_tower.vision_model.encoder.layers.22.self_attn.k_proj.weight": "pytorch_model-00002-of-00002.bin",
810
+ "vit.vision_tower.vision_model.encoder.layers.22.self_attn.out_proj.bias": "pytorch_model-00002-of-00002.bin",
811
+ "vit.vision_tower.vision_model.encoder.layers.22.self_attn.out_proj.weight": "pytorch_model-00002-of-00002.bin",
812
+ "vit.vision_tower.vision_model.encoder.layers.22.self_attn.q_proj.bias": "pytorch_model-00002-of-00002.bin",
813
+ "vit.vision_tower.vision_model.encoder.layers.22.self_attn.q_proj.weight": "pytorch_model-00002-of-00002.bin",
814
+ "vit.vision_tower.vision_model.encoder.layers.22.self_attn.v_proj.bias": "pytorch_model-00002-of-00002.bin",
815
+ "vit.vision_tower.vision_model.encoder.layers.22.self_attn.v_proj.weight": "pytorch_model-00002-of-00002.bin",
816
+ "vit.vision_tower.vision_model.encoder.layers.23.layer_norm1.bias": "pytorch_model-00002-of-00002.bin",
817
+ "vit.vision_tower.vision_model.encoder.layers.23.layer_norm1.weight": "pytorch_model-00002-of-00002.bin",
818
+ "vit.vision_tower.vision_model.encoder.layers.23.layer_norm2.bias": "pytorch_model-00002-of-00002.bin",
819
+ "vit.vision_tower.vision_model.encoder.layers.23.layer_norm2.weight": "pytorch_model-00002-of-00002.bin",
820
+ "vit.vision_tower.vision_model.encoder.layers.23.mlp.fc1.bias": "pytorch_model-00002-of-00002.bin",
821
+ "vit.vision_tower.vision_model.encoder.layers.23.mlp.fc1.weight": "pytorch_model-00002-of-00002.bin",
822
+ "vit.vision_tower.vision_model.encoder.layers.23.mlp.fc2.bias": "pytorch_model-00002-of-00002.bin",
823
+ "vit.vision_tower.vision_model.encoder.layers.23.mlp.fc2.weight": "pytorch_model-00002-of-00002.bin",
824
+ "vit.vision_tower.vision_model.encoder.layers.23.self_attn.k_proj.bias": "pytorch_model-00002-of-00002.bin",
825
+ "vit.vision_tower.vision_model.encoder.layers.23.self_attn.k_proj.weight": "pytorch_model-00002-of-00002.bin",
826
+ "vit.vision_tower.vision_model.encoder.layers.23.self_attn.out_proj.bias": "pytorch_model-00002-of-00002.bin",
827
+ "vit.vision_tower.vision_model.encoder.layers.23.self_attn.out_proj.weight": "pytorch_model-00002-of-00002.bin",
828
+ "vit.vision_tower.vision_model.encoder.layers.23.self_attn.q_proj.bias": "pytorch_model-00002-of-00002.bin",
829
+ "vit.vision_tower.vision_model.encoder.layers.23.self_attn.q_proj.weight": "pytorch_model-00002-of-00002.bin",
830
+ "vit.vision_tower.vision_model.encoder.layers.23.self_attn.v_proj.bias": "pytorch_model-00002-of-00002.bin",
831
+ "vit.vision_tower.vision_model.encoder.layers.23.self_attn.v_proj.weight": "pytorch_model-00002-of-00002.bin",
832
+ "vit.vision_tower.vision_model.encoder.layers.3.layer_norm1.bias": "pytorch_model-00002-of-00002.bin",
833
+ "vit.vision_tower.vision_model.encoder.layers.3.layer_norm1.weight": "pytorch_model-00002-of-00002.bin",
834
+ "vit.vision_tower.vision_model.encoder.layers.3.layer_norm2.bias": "pytorch_model-00002-of-00002.bin",
835
+ "vit.vision_tower.vision_model.encoder.layers.3.layer_norm2.weight": "pytorch_model-00002-of-00002.bin",
836
+ "vit.vision_tower.vision_model.encoder.layers.3.mlp.fc1.bias": "pytorch_model-00002-of-00002.bin",
837
+ "vit.vision_tower.vision_model.encoder.layers.3.mlp.fc1.weight": "pytorch_model-00002-of-00002.bin",
838
+ "vit.vision_tower.vision_model.encoder.layers.3.mlp.fc2.bias": "pytorch_model-00002-of-00002.bin",
839
+ "vit.vision_tower.vision_model.encoder.layers.3.mlp.fc2.weight": "pytorch_model-00002-of-00002.bin",
840
+ "vit.vision_tower.vision_model.encoder.layers.3.self_attn.k_proj.bias": "pytorch_model-00002-of-00002.bin",
841
+ "vit.vision_tower.vision_model.encoder.layers.3.self_attn.k_proj.weight": "pytorch_model-00002-of-00002.bin",
842
+ "vit.vision_tower.vision_model.encoder.layers.3.self_attn.out_proj.bias": "pytorch_model-00002-of-00002.bin",
843
+ "vit.vision_tower.vision_model.encoder.layers.3.self_attn.out_proj.weight": "pytorch_model-00002-of-00002.bin",
844
+ "vit.vision_tower.vision_model.encoder.layers.3.self_attn.q_proj.bias": "pytorch_model-00002-of-00002.bin",
845
+ "vit.vision_tower.vision_model.encoder.layers.3.self_attn.q_proj.weight": "pytorch_model-00002-of-00002.bin",
846
+ "vit.vision_tower.vision_model.encoder.layers.3.self_attn.v_proj.bias": "pytorch_model-00002-of-00002.bin",
847
+ "vit.vision_tower.vision_model.encoder.layers.3.self_attn.v_proj.weight": "pytorch_model-00002-of-00002.bin",
848
+ "vit.vision_tower.vision_model.encoder.layers.4.layer_norm1.bias": "pytorch_model-00002-of-00002.bin",
849
+ "vit.vision_tower.vision_model.encoder.layers.4.layer_norm1.weight": "pytorch_model-00002-of-00002.bin",
850
+ "vit.vision_tower.vision_model.encoder.layers.4.layer_norm2.bias": "pytorch_model-00002-of-00002.bin",
851
+ "vit.vision_tower.vision_model.encoder.layers.4.layer_norm2.weight": "pytorch_model-00002-of-00002.bin",
852
+ "vit.vision_tower.vision_model.encoder.layers.4.mlp.fc1.bias": "pytorch_model-00002-of-00002.bin",
853
+ "vit.vision_tower.vision_model.encoder.layers.4.mlp.fc1.weight": "pytorch_model-00002-of-00002.bin",
854
+ "vit.vision_tower.vision_model.encoder.layers.4.mlp.fc2.bias": "pytorch_model-00002-of-00002.bin",
855
+ "vit.vision_tower.vision_model.encoder.layers.4.mlp.fc2.weight": "pytorch_model-00002-of-00002.bin",
856
+ "vit.vision_tower.vision_model.encoder.layers.4.self_attn.k_proj.bias": "pytorch_model-00002-of-00002.bin",
857
+ "vit.vision_tower.vision_model.encoder.layers.4.self_attn.k_proj.weight": "pytorch_model-00002-of-00002.bin",
858
+ "vit.vision_tower.vision_model.encoder.layers.4.self_attn.out_proj.bias": "pytorch_model-00002-of-00002.bin",
859
+ "vit.vision_tower.vision_model.encoder.layers.4.self_attn.out_proj.weight": "pytorch_model-00002-of-00002.bin",
860
+ "vit.vision_tower.vision_model.encoder.layers.4.self_attn.q_proj.bias": "pytorch_model-00002-of-00002.bin",
861
+ "vit.vision_tower.vision_model.encoder.layers.4.self_attn.q_proj.weight": "pytorch_model-00002-of-00002.bin",
862
+ "vit.vision_tower.vision_model.encoder.layers.4.self_attn.v_proj.bias": "pytorch_model-00002-of-00002.bin",
863
+ "vit.vision_tower.vision_model.encoder.layers.4.self_attn.v_proj.weight": "pytorch_model-00002-of-00002.bin",
864
+ "vit.vision_tower.vision_model.encoder.layers.5.layer_norm1.bias": "pytorch_model-00002-of-00002.bin",
865
+ "vit.vision_tower.vision_model.encoder.layers.5.layer_norm1.weight": "pytorch_model-00002-of-00002.bin",
866
+ "vit.vision_tower.vision_model.encoder.layers.5.layer_norm2.bias": "pytorch_model-00002-of-00002.bin",
867
+ "vit.vision_tower.vision_model.encoder.layers.5.layer_norm2.weight": "pytorch_model-00002-of-00002.bin",
868
+ "vit.vision_tower.vision_model.encoder.layers.5.mlp.fc1.bias": "pytorch_model-00002-of-00002.bin",
869
+ "vit.vision_tower.vision_model.encoder.layers.5.mlp.fc1.weight": "pytorch_model-00002-of-00002.bin",
870
+ "vit.vision_tower.vision_model.encoder.layers.5.mlp.fc2.bias": "pytorch_model-00002-of-00002.bin",
871
+ "vit.vision_tower.vision_model.encoder.layers.5.mlp.fc2.weight": "pytorch_model-00002-of-00002.bin",
872
+ "vit.vision_tower.vision_model.encoder.layers.5.self_attn.k_proj.bias": "pytorch_model-00002-of-00002.bin",
873
+ "vit.vision_tower.vision_model.encoder.layers.5.self_attn.k_proj.weight": "pytorch_model-00002-of-00002.bin",
874
+ "vit.vision_tower.vision_model.encoder.layers.5.self_attn.out_proj.bias": "pytorch_model-00002-of-00002.bin",
875
+ "vit.vision_tower.vision_model.encoder.layers.5.self_attn.out_proj.weight": "pytorch_model-00002-of-00002.bin",
876
+ "vit.vision_tower.vision_model.encoder.layers.5.self_attn.q_proj.bias": "pytorch_model-00002-of-00002.bin",
877
+ "vit.vision_tower.vision_model.encoder.layers.5.self_attn.q_proj.weight": "pytorch_model-00002-of-00002.bin",
878
+ "vit.vision_tower.vision_model.encoder.layers.5.self_attn.v_proj.bias": "pytorch_model-00002-of-00002.bin",
879
+ "vit.vision_tower.vision_model.encoder.layers.5.self_attn.v_proj.weight": "pytorch_model-00002-of-00002.bin",
880
+ "vit.vision_tower.vision_model.encoder.layers.6.layer_norm1.bias": "pytorch_model-00002-of-00002.bin",
881
+ "vit.vision_tower.vision_model.encoder.layers.6.layer_norm1.weight": "pytorch_model-00002-of-00002.bin",
882
+ "vit.vision_tower.vision_model.encoder.layers.6.layer_norm2.bias": "pytorch_model-00002-of-00002.bin",
883
+ "vit.vision_tower.vision_model.encoder.layers.6.layer_norm2.weight": "pytorch_model-00002-of-00002.bin",
884
+ "vit.vision_tower.vision_model.encoder.layers.6.mlp.fc1.bias": "pytorch_model-00002-of-00002.bin",
885
+ "vit.vision_tower.vision_model.encoder.layers.6.mlp.fc1.weight": "pytorch_model-00002-of-00002.bin",
886
+ "vit.vision_tower.vision_model.encoder.layers.6.mlp.fc2.bias": "pytorch_model-00002-of-00002.bin",
887
+ "vit.vision_tower.vision_model.encoder.layers.6.mlp.fc2.weight": "pytorch_model-00002-of-00002.bin",
888
+ "vit.vision_tower.vision_model.encoder.layers.6.self_attn.k_proj.bias": "pytorch_model-00002-of-00002.bin",
889
+ "vit.vision_tower.vision_model.encoder.layers.6.self_attn.k_proj.weight": "pytorch_model-00002-of-00002.bin",
890
+ "vit.vision_tower.vision_model.encoder.layers.6.self_attn.out_proj.bias": "pytorch_model-00002-of-00002.bin",
891
+ "vit.vision_tower.vision_model.encoder.layers.6.self_attn.out_proj.weight": "pytorch_model-00002-of-00002.bin",
892
+ "vit.vision_tower.vision_model.encoder.layers.6.self_attn.q_proj.bias": "pytorch_model-00002-of-00002.bin",
893
+ "vit.vision_tower.vision_model.encoder.layers.6.self_attn.q_proj.weight": "pytorch_model-00002-of-00002.bin",
894
+ "vit.vision_tower.vision_model.encoder.layers.6.self_attn.v_proj.bias": "pytorch_model-00002-of-00002.bin",
895
+ "vit.vision_tower.vision_model.encoder.layers.6.self_attn.v_proj.weight": "pytorch_model-00002-of-00002.bin",
896
+ "vit.vision_tower.vision_model.encoder.layers.7.layer_norm1.bias": "pytorch_model-00002-of-00002.bin",
897
+ "vit.vision_tower.vision_model.encoder.layers.7.layer_norm1.weight": "pytorch_model-00002-of-00002.bin",
898
+ "vit.vision_tower.vision_model.encoder.layers.7.layer_norm2.bias": "pytorch_model-00002-of-00002.bin",
899
+ "vit.vision_tower.vision_model.encoder.layers.7.layer_norm2.weight": "pytorch_model-00002-of-00002.bin",
900
+ "vit.vision_tower.vision_model.encoder.layers.7.mlp.fc1.bias": "pytorch_model-00002-of-00002.bin",
901
+ "vit.vision_tower.vision_model.encoder.layers.7.mlp.fc1.weight": "pytorch_model-00002-of-00002.bin",
902
+ "vit.vision_tower.vision_model.encoder.layers.7.mlp.fc2.bias": "pytorch_model-00002-of-00002.bin",
903
+ "vit.vision_tower.vision_model.encoder.layers.7.mlp.fc2.weight": "pytorch_model-00002-of-00002.bin",
904
+ "vit.vision_tower.vision_model.encoder.layers.7.self_attn.k_proj.bias": "pytorch_model-00002-of-00002.bin",
905
+ "vit.vision_tower.vision_model.encoder.layers.7.self_attn.k_proj.weight": "pytorch_model-00002-of-00002.bin",
906
+ "vit.vision_tower.vision_model.encoder.layers.7.self_attn.out_proj.bias": "pytorch_model-00002-of-00002.bin",
907
+ "vit.vision_tower.vision_model.encoder.layers.7.self_attn.out_proj.weight": "pytorch_model-00002-of-00002.bin",
908
+ "vit.vision_tower.vision_model.encoder.layers.7.self_attn.q_proj.bias": "pytorch_model-00002-of-00002.bin",
909
+ "vit.vision_tower.vision_model.encoder.layers.7.self_attn.q_proj.weight": "pytorch_model-00002-of-00002.bin",
910
+ "vit.vision_tower.vision_model.encoder.layers.7.self_attn.v_proj.bias": "pytorch_model-00002-of-00002.bin",
911
+ "vit.vision_tower.vision_model.encoder.layers.7.self_attn.v_proj.weight": "pytorch_model-00002-of-00002.bin",
912
+ "vit.vision_tower.vision_model.encoder.layers.8.layer_norm1.bias": "pytorch_model-00002-of-00002.bin",
913
+ "vit.vision_tower.vision_model.encoder.layers.8.layer_norm1.weight": "pytorch_model-00002-of-00002.bin",
914
+ "vit.vision_tower.vision_model.encoder.layers.8.layer_norm2.bias": "pytorch_model-00002-of-00002.bin",
915
+ "vit.vision_tower.vision_model.encoder.layers.8.layer_norm2.weight": "pytorch_model-00002-of-00002.bin",
916
+ "vit.vision_tower.vision_model.encoder.layers.8.mlp.fc1.bias": "pytorch_model-00002-of-00002.bin",
917
+ "vit.vision_tower.vision_model.encoder.layers.8.mlp.fc1.weight": "pytorch_model-00002-of-00002.bin",
918
+ "vit.vision_tower.vision_model.encoder.layers.8.mlp.fc2.bias": "pytorch_model-00002-of-00002.bin",
919
+ "vit.vision_tower.vision_model.encoder.layers.8.mlp.fc2.weight": "pytorch_model-00002-of-00002.bin",
920
+ "vit.vision_tower.vision_model.encoder.layers.8.self_attn.k_proj.bias": "pytorch_model-00002-of-00002.bin",
921
+ "vit.vision_tower.vision_model.encoder.layers.8.self_attn.k_proj.weight": "pytorch_model-00002-of-00002.bin",
922
+ "vit.vision_tower.vision_model.encoder.layers.8.self_attn.out_proj.bias": "pytorch_model-00002-of-00002.bin",
923
+ "vit.vision_tower.vision_model.encoder.layers.8.self_attn.out_proj.weight": "pytorch_model-00002-of-00002.bin",
924
+ "vit.vision_tower.vision_model.encoder.layers.8.self_attn.q_proj.bias": "pytorch_model-00002-of-00002.bin",
925
+ "vit.vision_tower.vision_model.encoder.layers.8.self_attn.q_proj.weight": "pytorch_model-00002-of-00002.bin",
926
+ "vit.vision_tower.vision_model.encoder.layers.8.self_attn.v_proj.bias": "pytorch_model-00002-of-00002.bin",
927
+ "vit.vision_tower.vision_model.encoder.layers.8.self_attn.v_proj.weight": "pytorch_model-00002-of-00002.bin",
928
+ "vit.vision_tower.vision_model.encoder.layers.9.layer_norm1.bias": "pytorch_model-00002-of-00002.bin",
929
+ "vit.vision_tower.vision_model.encoder.layers.9.layer_norm1.weight": "pytorch_model-00002-of-00002.bin",
930
+ "vit.vision_tower.vision_model.encoder.layers.9.layer_norm2.bias": "pytorch_model-00002-of-00002.bin",
931
+ "vit.vision_tower.vision_model.encoder.layers.9.layer_norm2.weight": "pytorch_model-00002-of-00002.bin",
932
+ "vit.vision_tower.vision_model.encoder.layers.9.mlp.fc1.bias": "pytorch_model-00002-of-00002.bin",
933
+ "vit.vision_tower.vision_model.encoder.layers.9.mlp.fc1.weight": "pytorch_model-00002-of-00002.bin",
934
+ "vit.vision_tower.vision_model.encoder.layers.9.mlp.fc2.bias": "pytorch_model-00002-of-00002.bin",
935
+ "vit.vision_tower.vision_model.encoder.layers.9.mlp.fc2.weight": "pytorch_model-00002-of-00002.bin",
936
+ "vit.vision_tower.vision_model.encoder.layers.9.self_attn.k_proj.bias": "pytorch_model-00002-of-00002.bin",
937
+ "vit.vision_tower.vision_model.encoder.layers.9.self_attn.k_proj.weight": "pytorch_model-00002-of-00002.bin",
938
+ "vit.vision_tower.vision_model.encoder.layers.9.self_attn.out_proj.bias": "pytorch_model-00002-of-00002.bin",
939
+ "vit.vision_tower.vision_model.encoder.layers.9.self_attn.out_proj.weight": "pytorch_model-00002-of-00002.bin",
940
+ "vit.vision_tower.vision_model.encoder.layers.9.self_attn.q_proj.bias": "pytorch_model-00002-of-00002.bin",
941
+ "vit.vision_tower.vision_model.encoder.layers.9.self_attn.q_proj.weight": "pytorch_model-00002-of-00002.bin",
942
+ "vit.vision_tower.vision_model.encoder.layers.9.self_attn.v_proj.bias": "pytorch_model-00002-of-00002.bin",
943
+ "vit.vision_tower.vision_model.encoder.layers.9.self_attn.v_proj.weight": "pytorch_model-00002-of-00002.bin",
944
+ "vit.vision_tower.vision_model.pre_layrnorm.bias": "pytorch_model-00002-of-00002.bin",
945
+ "vit.vision_tower.vision_model.pre_layrnorm.weight": "pytorch_model-00002-of-00002.bin"
946
+ }
947
+ }
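The entries above close out the `weight_map` of the shard index. For reference, a minimal sketch (assuming the shards and the index file, conventionally `pytorch_model.bin.index.json`, have been downloaded locally) of looking up which shard holds a given tensor:

```python
import json

# The index maps every tensor name to the shard file that stores it.
with open('pytorch_model.bin.index.json') as f:
    index = json.load(f)

# e.g. the ViT pre-layernorm weight lives in the second shard
print(index['weight_map']['vit.vision_tower.vision_model.pre_layrnorm.weight'])
# -> pytorch_model-00002-of-00002.bin
```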
special_tokens_map.json ADDED
@@ -0,0 +1,6 @@
+ {
+   "bos_token": "<s>",
+   "eos_token": "</s>",
+   "pad_token": "</s>",
+   "unk_token": "<unk>"
+ }
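A quick sanity check, as a sketch (assuming the repo is reachable on the Hub), that the loaded tokenizer reports the special tokens declared above:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained('internlm/internlm-xcomposer2-vl-7b', trust_remote_code=True)
# Should print: <s> </s> </s> <unk>, matching special_tokens_map.json
print(tokenizer.bos_token, tokenizer.eos_token, tokenizer.pad_token, tokenizer.unk_token)
```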
tokenization_internlm_xcomposer2.py ADDED
@@ -0,0 +1,252 @@
+ # Copyright (c) InternLM. All rights reserved.
+ #
+ # This code is based on EleutherAI's GPT-NeoX library and the GPT-NeoX
+ # and OPT implementations in this library. It has been modified from its
+ # original forms to accommodate minor architectural differences compared
+ # to GPT-NeoX and OPT used by the Meta AI team that trained the model.
+ #
+ # Licensed under the Apache License, Version 2.0 (the "License");
+ # you may not use this file except in compliance with the License.
+ # You may obtain a copy of the License at
+ #
+ #     http://www.apache.org/licenses/LICENSE-2.0
+ #
+ # Unless required by applicable law or agreed to in writing, software
+ # distributed under the License is distributed on an "AS IS" BASIS,
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ # See the License for the specific language governing permissions and
+ # limitations under the License.
+ """Tokenization classes for InternLM."""
+ import os
+ from shutil import copyfile
+ from typing import Any, Dict, List, Optional, Tuple
+
+ import sentencepiece as spm
+ from transformers.tokenization_utils import PreTrainedTokenizer
+ from transformers.utils import logging
+
+ logger = logging.get_logger(__name__)
+
+ VOCAB_FILES_NAMES = {'vocab_file': './tokenizer.model'}
+
+ PRETRAINED_VOCAB_FILES_MAP = {}
+
+
+ class InternLMXComposer2Tokenizer(PreTrainedTokenizer):
+     """Construct an InternLM tokenizer. Based on byte-level Byte-Pair-Encoding.
+
+     Args:
+         vocab_file (`str`):
+             Path to the vocabulary file.
+     """
+
+     vocab_files_names = VOCAB_FILES_NAMES
+     pretrained_vocab_files_map = PRETRAINED_VOCAB_FILES_MAP
+     model_input_names = ['input_ids', 'attention_mask']
+     _auto_class = 'AutoTokenizer'
+
+     def __init__(
+             self,
+             vocab_file,
+             unk_token='<unk>',
+             bos_token='<s>',
+             eos_token='</s>',
+             pad_token='</s>',
+             sp_model_kwargs: Optional[Dict[str, Any]] = None,
+             add_bos_token=True,
+             add_eos_token=False,
+             decode_with_prefix_space=False,
+             clean_up_tokenization_spaces=False,
+             **kwargs,
+     ):
+         self.sp_model_kwargs = {} if sp_model_kwargs is None else sp_model_kwargs
+         self.vocab_file = vocab_file
+         self.add_bos_token = add_bos_token
+         self.add_eos_token = add_eos_token
+         self.decode_with_prefix_space = decode_with_prefix_space
+         self.sp_model = spm.SentencePieceProcessor(**self.sp_model_kwargs)
+         self.sp_model.Load(vocab_file)
+         self._no_prefix_space_tokens = None
+         super().__init__(
+             bos_token=bos_token,
+             eos_token=eos_token,
+             unk_token=unk_token,
+             pad_token=pad_token,
+             clean_up_tokenization_spaces=clean_up_tokenization_spaces,
+             **kwargs,
+         )
+         """ Initialization"""
+
+     @property
+     def no_prefix_space_tokens(self):
+         if self._no_prefix_space_tokens is None:
+             vocab = self.convert_ids_to_tokens(list(range(self.vocab_size)))
+             self._no_prefix_space_tokens = {
+                 i
+                 for i, tok in enumerate(vocab) if not tok.startswith('▁')
+             }
+         return self._no_prefix_space_tokens
+
+     @property
+     def vocab_size(self):
+         """Returns vocab size."""
+         return self.sp_model.get_piece_size()
+
+     @property
+     def bos_token_id(self) -> Optional[int]:
+         return self.sp_model.bos_id()
+
+     @property
+     def eos_token_id(self) -> Optional[int]:
+         return self.sp_model.eos_id()
+
+     def get_vocab(self):
+         """Returns vocab as a dict."""
+         vocab = {
+             self.convert_ids_to_tokens(i): i
+             for i in range(self.vocab_size)
+         }
+         vocab.update(self.added_tokens_encoder)
+         return vocab
+
+     def _tokenize(self, text):
+         """Returns a tokenized string."""
+         return self.sp_model.encode(text, out_type=str)
+
+     def _convert_token_to_id(self, token):
+         """Converts a token (str) to an id using the vocab."""
+         return self.sp_model.piece_to_id(token)
+
+     def _convert_id_to_token(self, index):
+         """Converts an index (integer) to a token (str) using the vocab."""
+         token = self.sp_model.IdToPiece(index)
+         return token
+
+     def _maybe_add_prefix_space(self, tokens, decoded):
+         if tokens and tokens[0] not in self.no_prefix_space_tokens:
+             return ' ' + decoded
+         else:
+             return decoded
+
+     def convert_tokens_to_string(self, tokens):
+         """Converts a sequence of tokens (string) into a single string."""
+         current_sub_tokens = []
+         out_string = ''
+         prev_is_special = False
+         for token in tokens:
+             # make sure that special tokens are not decoded using sentencepiece model
+             if token in self.all_special_tokens:
+                 if not prev_is_special:
+                     out_string += ' '
+                 out_string += self.sp_model.decode(current_sub_tokens) + token
+                 prev_is_special = True
+                 current_sub_tokens = []
+             else:
+                 current_sub_tokens.append(token)
+                 prev_is_special = False
+         out_string += self.sp_model.decode(current_sub_tokens)
+         out_string = self.clean_up_tokenization(out_string)
+         out_string = self._maybe_add_prefix_space(
+             tokens=tokens, decoded=out_string)
+         return out_string[1:]
+
+     def save_vocabulary(self,
+                         save_directory,
+                         filename_prefix: Optional[str] = None) -> Tuple[str]:
+         """Save the vocabulary and special tokens file to a directory.
+
+         Args:
+             save_directory (`str`):
+                 The directory in which to save the vocabulary.
+
+         Returns:
+             `Tuple(str)`: Paths to the files saved.
+         """
+         if not os.path.isdir(save_directory):
+             logger.error(
+                 f'Vocabulary path ({save_directory}) should be a directory')
+             return
+         out_vocab_file = os.path.join(
+             save_directory,
+             (filename_prefix + '-' if filename_prefix else '') +
+             VOCAB_FILES_NAMES['vocab_file'])
+
+         if os.path.abspath(self.vocab_file) != os.path.abspath(
+                 out_vocab_file) and os.path.isfile(self.vocab_file):
+             copyfile(self.vocab_file, out_vocab_file)
+         elif not os.path.isfile(self.vocab_file):
+             with open(out_vocab_file, 'wb') as fi:
+                 content_spiece_model = self.sp_model.serialized_model_proto()
+                 fi.write(content_spiece_model)
+
+         return (out_vocab_file, )
+
+     def build_inputs_with_special_tokens(self, token_ids_0, token_ids_1=None):
+         if self.add_bos_token:
+             bos_token_ids = [self.bos_token_id]
+         else:
+             bos_token_ids = []
+
+         output = bos_token_ids + token_ids_0
+
+         if token_ids_1 is not None:
+             output = output + token_ids_1
+
+         if self.add_eos_token:
+             output = output + [self.eos_token_id]
+
+         return output
+
+     def get_special_tokens_mask(
+             self,
+             token_ids_0: List[int],
+             token_ids_1: Optional[List[int]] = None,
+             already_has_special_tokens: bool = False) -> List[int]:
+         """Retrieve sequence ids from a token list that has no special tokens
+         added. This method is called when adding special tokens using the
+         tokenizer `prepare_for_model` method.
+
+         Args:
+             token_ids_0 (`List[int]`):
+                 List of IDs.
+             token_ids_1 (`List[int]`, *optional*):
+                 Optional second list of IDs for sequence pairs.
+             already_has_special_tokens (`bool`, *optional*, defaults to `False`):
+                 Whether or not the token list is already formatted with special tokens for the model.
+
+         Returns:
+             `List[int]`: A list of integers in the range [0, 1]: 1 for a special token, 0 for a sequence token.
+         """
+         if already_has_special_tokens:
+             return super().get_special_tokens_mask(
+                 token_ids_0=token_ids_0,
+                 token_ids_1=token_ids_1,
+                 already_has_special_tokens=True)
+
+         if token_ids_1 is None:
+             return [1] + ([0] * len(token_ids_0)) + [1]
+         return [1] + ([0] * len(token_ids_0)) + [1, 1] + (
+             [0] * len(token_ids_1)) + [1]
+
+     def create_token_type_ids_from_sequences(
+             self,
+             token_ids_0: List[int],
+             token_ids_1: Optional[List[int]] = None) -> List[int]:
+         """Create a mask from the two sequences passed to be used in a
+         sequence-pair classification task. InternLM does not make use of token
+         type ids, therefore a list of zeros is returned.
+
+         Args:
+             token_ids_0 (`List[int]`):
+                 List of IDs.
+             token_ids_1 (`List[int]`, *optional*):
+                 Optional second list of IDs for sequence pairs.
+
+         Returns:
+             `List[int]`: List of zeros.
+         """
+         eos = [self.eos_token_id]
+
+         if token_ids_1 is None:
+             return len(token_ids_0 + eos) * [0]
+         return len(token_ids_0 + eos + token_ids_1 + eos) * [0]
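A minimal round-trip sketch for the tokenizer class above (assuming the repo files, including `tokenizer.model`, sit in the working directory):

```python
from tokenization_internlm_xcomposer2 import InternLMXComposer2Tokenizer

tokenizer = InternLMXComposer2Tokenizer(vocab_file='./tokenizer.model')
ids = tokenizer.encode('Hello, world!')  # add_bos_token=True, so BOS is prepended
assert ids[0] == tokenizer.bos_token_id
print(tokenizer.decode(ids, skip_special_tokens=True))  # -> Hello, world!
```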
tokenizer.model ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:f868398fc4e05ee1e8aeba95ddf18ddcc45b8bce55d5093bead5bbf80429b48b
+ size 1477754
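The blob above is a Git LFS pointer, not the SentencePiece model itself; `git lfs pull` (or the Hub download) fetches the real file. Since the LFS `oid` is the SHA-256 of the file's contents, a sketch for checking a local `tokenizer.model` against the pointer:

```python
import hashlib

with open('tokenizer.model', 'rb') as f:
    digest = hashlib.sha256(f.read()).hexdigest()
# Should match the oid recorded in the LFS pointer above
assert digest == 'f868398fc4e05ee1e8aeba95ddf18ddcc45b8bce55d5093bead5bbf80429b48b'
```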
tokenizer_config.json ADDED
@@ -0,0 +1,16 @@
+ {
+   "auto_map": {
+     "AutoTokenizer": [
+       "tokenization_internlm_xcomposer2.InternLMXComposer2Tokenizer",
+       null
+     ]
+   },
+   "bos_token": "<s>",
+   "clean_up_tokenization_spaces": false,
+   "eos_token": "</s>",
+   "model_max_length": 1000000000000000019884624838656,
+   "pad_token": "</s>",
+   "padding_side": "right",
+   "tokenizer_class": "InternLMXComposer2Tokenizer",
+   "unk_token": "<unk>"
+ }
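The `auto_map` entry is what lets `AutoTokenizer` resolve to the custom class shipped in this repo, which is also why loading requires `trust_remote_code=True`. A sketch:

```python
from transformers import AutoTokenizer

# auto_map routes AutoTokenizer to tokenization_internlm_xcomposer2.InternLMXComposer2Tokenizer
tokenizer = AutoTokenizer.from_pretrained('internlm/internlm-xcomposer2-vl-7b', trust_remote_code=True)
print(type(tokenizer).__name__)  # -> InternLMXComposer2Tokenizer
```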
zero_to_fp32.py ADDED
@@ -0,0 +1,587 @@
+ #!/usr/bin/env python
+
+ # Copyright (c) Microsoft Corporation.
+ # SPDX-License-Identifier: Apache-2.0
+
+ # DeepSpeed Team
+
+ # This script extracts fp32 consolidated weights from ZeRO 1, 2 and 3 DeepSpeed checkpoints. It gets
+ # copied into the top level checkpoint dir, so the user can easily do the conversion at any point in
+ # the future. Once extracted, the weights don't require DeepSpeed and can be used in any
+ # application.
+ #
+ # example: python zero_to_fp32.py . pytorch_model.bin
+
+ import argparse
+ import torch
+ import glob
+ import math
+ import os
+ import re
+ from collections import OrderedDict
+ from dataclasses import dataclass
+
+ # while this script doesn't use deepspeed to recover data, since the checkpoints are pickled with
+ # DeepSpeed data structures, DeepSpeed has to be available in the current python environment.
+ from deepspeed.utils import logger
+ from deepspeed.checkpoint.constants import (DS_VERSION, OPTIMIZER_STATE_DICT, SINGLE_PARTITION_OF_FP32_GROUPS,
+                                             FP32_FLAT_GROUPS, ZERO_STAGE, PARTITION_COUNT, PARAM_SHAPES, BUFFER_NAMES,
+                                             FROZEN_PARAM_SHAPES, FROZEN_PARAM_FRAGMENTS)
+
+
+ @dataclass
+ class zero_model_state:
+     buffers: dict()
+     param_shapes: dict()
+     shared_params: list
+     ds_version: int
+     frozen_param_shapes: dict()
+     frozen_param_fragments: dict()
+
+
+ debug = 0
+
+ # load to cpu
+ device = torch.device('cpu')
+
+
+ def atoi(text):
+     return int(text) if text.isdigit() else text
+
+
+ def natural_keys(text):
+     '''
+     alist.sort(key=natural_keys) sorts in human order
+     http://nedbatchelder.com/blog/200712/human_sorting.html
+     (See Toothy's implementation in the comments)
+     '''
+     return [atoi(c) for c in re.split(r'(\d+)', text)]
+
+
+ def get_model_state_file(checkpoint_dir, zero_stage):
+     if not os.path.isdir(checkpoint_dir):
+         raise FileNotFoundError(f"Directory '{checkpoint_dir}' doesn't exist")
+
+     # there should be only one file
+     if zero_stage <= 2:
+         file = os.path.join(checkpoint_dir, "mp_rank_00_model_states.pt")
+     elif zero_stage == 3:
+         file = os.path.join(checkpoint_dir, "zero_pp_rank_0_mp_rank_00_model_states.pt")
+
+     if not os.path.exists(file):
+         raise FileNotFoundError(f"can't find model states file at '{file}'")
+
+     return file
+
+
+ def get_checkpoint_files(checkpoint_dir, glob_pattern):
+     # XXX: need to test that this simple glob rule works for multi-node setup too
+     ckpt_files = sorted(glob.glob(os.path.join(checkpoint_dir, glob_pattern)), key=natural_keys)
+
+     if len(ckpt_files) == 0:
+         raise FileNotFoundError(f"can't find {glob_pattern} files in directory '{checkpoint_dir}'")
+
+     return ckpt_files
+
+
+ def get_optim_files(checkpoint_dir):
+     return get_checkpoint_files(checkpoint_dir, "*_optim_states.pt")
+
+
+ def get_model_state_files(checkpoint_dir):
+     return get_checkpoint_files(checkpoint_dir, "*_model_states.pt")
+
+
+ def parse_model_states(files):
+     zero_model_states = []
+     for file in files:
+         state_dict = torch.load(file, map_location=device)
+
+         if BUFFER_NAMES not in state_dict:
+             raise ValueError(f"{file} is not a model state checkpoint")
+         buffer_names = state_dict[BUFFER_NAMES]
+         if debug:
+             print("Found buffers:", buffer_names)
+
+         # recover just the buffers while restoring them to fp32 if they were saved in fp16
+         buffers = {k: v.float() for k, v in state_dict["module"].items() if k in buffer_names}
+         param_shapes = state_dict[PARAM_SHAPES]
+
+         # collect parameters that are included in param_shapes
+         param_names = []
+         for s in param_shapes:
+             for name in s.keys():
+                 param_names.append(name)
+
+         # update with frozen parameters
+         frozen_param_shapes = state_dict.get(FROZEN_PARAM_SHAPES, None)
+         if frozen_param_shapes is not None:
+             if debug:
+                 print(f"Found frozen_param_shapes: {frozen_param_shapes}")
+             param_names += list(frozen_param_shapes.keys())
+
+         # handle shared params
+         shared_params = [[k, v] for k, v in state_dict["shared_params"].items()]
+
+         ds_version = state_dict.get(DS_VERSION, None)
+
+         frozen_param_fragments = state_dict.get(FROZEN_PARAM_FRAGMENTS, None)
+
+         z_model_state = zero_model_state(buffers=buffers,
+                                          param_shapes=param_shapes,
+                                          shared_params=shared_params,
+                                          ds_version=ds_version,
+                                          frozen_param_shapes=frozen_param_shapes,
+                                          frozen_param_fragments=frozen_param_fragments)
+         zero_model_states.append(z_model_state)
+
+     return zero_model_states
+
+
+ def parse_optim_states(files, ds_checkpoint_dir):
+
+     total_files = len(files)
+     state_dicts = []
+     for f in files:
+         state_dict = torch.load(f, map_location=device)
+         # immediately discard the potentially huge optimizer states, as we only care about the fp32 master weights,
+         # and also handle the case where they were already removed by another helper script
+         state_dict["optimizer_state_dict"].pop("optimizer_state_dict", None)
+         state_dicts.append(state_dict)
+
+     if ZERO_STAGE not in state_dicts[0][OPTIMIZER_STATE_DICT]:
+         raise ValueError(f"{files[0]} is not a zero checkpoint")
+     zero_stage = state_dicts[0][OPTIMIZER_STATE_DICT][ZERO_STAGE]
+     world_size = state_dicts[0][OPTIMIZER_STATE_DICT][PARTITION_COUNT]
+
+     # For ZeRO-2 each param group can have different partition_count as data parallelism for expert
+     # parameters can be different from data parallelism for non-expert parameters. So we can just
+     # use the max of the partition_count to get the dp world_size.
+
+     if type(world_size) is list:
+         world_size = max(world_size)
+
+     if world_size != total_files:
+         raise ValueError(
+             f"Expected {world_size} of '*_optim_states.pt' under '{ds_checkpoint_dir}' but found {total_files} files. "
+             "Possibly due to an overwrite of an old checkpoint, or a checkpoint didn't get saved by one or more processes."
+         )
+
+     # the groups are named differently in each stage
+     if zero_stage <= 2:
+         fp32_groups_key = SINGLE_PARTITION_OF_FP32_GROUPS
+     elif zero_stage == 3:
+         fp32_groups_key = FP32_FLAT_GROUPS
+     else:
+         raise ValueError(f"unknown zero stage {zero_stage}")
+
+     if zero_stage <= 2:
+         fp32_flat_groups = [state_dicts[i][OPTIMIZER_STATE_DICT][fp32_groups_key] for i in range(len(state_dicts))]
+     elif zero_stage == 3:
+         # if there is more than one param group, there will be multiple flattened tensors - one
+         # flattened tensor per group - for simplicity merge them into a single tensor
+         #
+         # XXX: could make the script more memory efficient for when there are multiple groups - it
+         # will require matching the sub-lists of param_shapes for each param group flattened tensor
+
+         fp32_flat_groups = [
+             torch.cat(state_dicts[i][OPTIMIZER_STATE_DICT][fp32_groups_key], 0) for i in range(len(state_dicts))
+         ]
+
+     return zero_stage, world_size, fp32_flat_groups
+
+
+ def _get_fp32_state_dict_from_zero_checkpoint(ds_checkpoint_dir):
+     """
+     Returns fp32 state_dict reconstructed from ds checkpoint
+
+     Args:
+         - ``ds_checkpoint_dir``: path to the deepspeed checkpoint folder (where the optimizer files are)
+
+     """
+     print(f"Processing zero checkpoint '{ds_checkpoint_dir}'")
+
+     optim_files = get_optim_files(ds_checkpoint_dir)
+     zero_stage, world_size, fp32_flat_groups = parse_optim_states(optim_files, ds_checkpoint_dir)
+     print(f"Detected checkpoint of type zero stage {zero_stage}, world_size: {world_size}")
+
+     model_files = get_model_state_files(ds_checkpoint_dir)
+
+     zero_model_states = parse_model_states(model_files)
+     print(f'Parsing checkpoint created by deepspeed=={zero_model_states[0].ds_version}')
+
+     if zero_stage <= 2:
+         return _get_fp32_state_dict_from_zero2_checkpoint(world_size, fp32_flat_groups, zero_model_states)
+     elif zero_stage == 3:
+         return _get_fp32_state_dict_from_zero3_checkpoint(world_size, fp32_flat_groups, zero_model_states)
+
+
+ def _zero2_merge_frozen_params(state_dict, zero_model_states):
+     if zero_model_states[0].frozen_param_shapes is None or len(zero_model_states[0].frozen_param_shapes) == 0:
+         return
+
+     frozen_param_shapes = zero_model_states[0].frozen_param_shapes
+     frozen_param_fragments = zero_model_states[0].frozen_param_fragments
+
+     if debug:
+         num_elem = sum(s.numel() for s in frozen_param_shapes.values())
+         print(f'rank 0: {FROZEN_PARAM_SHAPES}.numel = {num_elem}')
+
+     wanted_params = len(frozen_param_shapes)
+     wanted_numel = sum(s.numel() for s in frozen_param_shapes.values())
+     avail_numel = sum([p.numel() for p in frozen_param_fragments.values()])
+     print(f'Frozen params: Have {avail_numel} numels to process.')
+     print(f'Frozen params: Need {wanted_numel} numels in {wanted_params} params')
+
+     total_params = 0
+     total_numel = 0
+     for name, shape in frozen_param_shapes.items():
+         total_params += 1
+         unpartitioned_numel = shape.numel()
+         total_numel += unpartitioned_numel
+
+         state_dict[name] = frozen_param_fragments[name]
+
+         if debug:
+             print(f"{name} full shape: {shape} unpartitioned numel {unpartitioned_numel} ")
+
+     print(f"Reconstructed Frozen fp32 state dict with {total_params} params {total_numel} elements")
+
+
+ def _zero2_merge_trainable_params(state_dict, world_size, fp32_flat_groups, zero_model_states):
+     param_shapes = zero_model_states[0].param_shapes
+
+     # Reconstruction protocol:
+     #
+     # XXX: document this
+
+     if debug:
+         for i in range(world_size):
+             for j in range(len(fp32_flat_groups[0])):
+                 print(f"{FP32_FLAT_GROUPS}[{i}][{j}].shape={fp32_flat_groups[i][j].shape}")
+
+     # XXX: memory usage doubles here (zero2)
+     num_param_groups = len(fp32_flat_groups[0])
+     merged_single_partition_of_fp32_groups = []
+     for i in range(num_param_groups):
+         merged_partitions = [sd[i] for sd in fp32_flat_groups]
+         full_single_fp32_vector = torch.cat(merged_partitions, 0)
+         merged_single_partition_of_fp32_groups.append(full_single_fp32_vector)
+     avail_numel = sum(
+         [full_single_fp32_vector.numel() for full_single_fp32_vector in merged_single_partition_of_fp32_groups])
+
+     if debug:
+         wanted_params = sum([len(shapes) for shapes in param_shapes])
+         wanted_numel = sum([sum(shape.numel() for shape in shapes.values()) for shapes in param_shapes])
+         # not asserting if there is a mismatch due to possible padding
+         print(f"Have {avail_numel} numels to process.")
+         print(f"Need {wanted_numel} numels in {wanted_params} params.")
+
+     # params
+     # XXX: for huge models that can't fit into the host's RAM we will have to recode this to support
+     # out-of-core computing solution
+     total_numel = 0
+     total_params = 0
+     for shapes, full_single_fp32_vector in zip(param_shapes, merged_single_partition_of_fp32_groups):
+         offset = 0
+         avail_numel = full_single_fp32_vector.numel()
+         for name, shape in shapes.items():
+
+             unpartitioned_numel = shape.numel()
+             total_numel += unpartitioned_numel
+             total_params += 1
+
+             if debug:
+                 print(f"{name} full shape: {shape} unpartitioned numel {unpartitioned_numel} ")
+             state_dict[name] = full_single_fp32_vector.narrow(0, offset, unpartitioned_numel).view(shape)
+             offset += unpartitioned_numel
+
+         # Z2 started to align to 2*world_size to improve nccl performance. Therefore both offset and
+         # avail_numel can differ by anywhere between 0..2*world_size. Due to two unrelated complex
+         # paddings performed in the code it's almost impossible to predict the exact numbers w/o the
+         # live optimizer object, so we are checking that the numbers are within the right range
+         align_to = 2 * world_size
+
+         def zero2_align(x):
+             return align_to * math.ceil(x / align_to)
+
+         if debug:
+             print(f"original offset={offset}, avail_numel={avail_numel}")
+
+         offset = zero2_align(offset)
+         avail_numel = zero2_align(avail_numel)
+
+         if debug:
+             print(f"aligned offset={offset}, avail_numel={avail_numel}")
+
+         # Sanity check
+         if offset != avail_numel:
+             raise ValueError(f"consumed {offset} numels out of {avail_numel} - something is wrong")
+
+     print(f"Reconstructed fp32 state dict with {total_params} params {total_numel} elements")
+
+
+ def _get_fp32_state_dict_from_zero2_checkpoint(world_size, fp32_flat_groups, zero_model_states):
+     state_dict = OrderedDict()
+
+     # buffers
+     buffers = zero_model_states[0].buffers
+     state_dict.update(buffers)
+     if debug:
+         print(f"added {len(buffers)} buffers")
+
+     _zero2_merge_frozen_params(state_dict, zero_model_states)
+
+     _zero2_merge_trainable_params(state_dict, world_size, fp32_flat_groups, zero_model_states)
+
+     # recover shared parameters
+     for pair in zero_model_states[0].shared_params:
+         if pair[1] in state_dict:
+             state_dict[pair[0]] = state_dict[pair[1]]
+
+     return state_dict
+
+
+ def zero3_partitioned_param_info(unpartitioned_numel, world_size):
+     remainder = unpartitioned_numel % world_size
+     padding_numel = (world_size - remainder) if remainder else 0
+     partitioned_numel = math.ceil(unpartitioned_numel / world_size)
+     return partitioned_numel, padding_numel
+
+
+ def _zero3_merge_frozen_params(state_dict, world_size, zero_model_states):
+     if zero_model_states[0].frozen_param_shapes is None or len(zero_model_states[0].frozen_param_shapes) == 0:
+         return
+
+     if debug:
+         for i in range(world_size):
+             num_elem = sum(s.numel() for s in zero_model_states[i].frozen_param_fragments.values())
+             print(f'rank {i}: {FROZEN_PARAM_SHAPES}.numel = {num_elem}')
+
+     frozen_param_shapes = zero_model_states[0].frozen_param_shapes
+     wanted_params = len(frozen_param_shapes)
+     wanted_numel = sum(s.numel() for s in frozen_param_shapes.values())
+     avail_numel = sum([p.numel() for p in zero_model_states[0].frozen_param_fragments.values()]) * world_size
+     print(f'Frozen params: Have {avail_numel} numels to process.')
+     print(f'Frozen params: Need {wanted_numel} numels in {wanted_params} params')
+
+     total_params = 0
+     total_numel = 0
+     for name, shape in zero_model_states[0].frozen_param_shapes.items():
+         total_params += 1
+         unpartitioned_numel = shape.numel()
+         total_numel += unpartitioned_numel
+
+         param_frags = tuple(model_state.frozen_param_fragments[name] for model_state in zero_model_states)
+         state_dict[name] = torch.cat(param_frags, 0).narrow(0, 0, unpartitioned_numel).view(shape)
+
+         partitioned_numel, partitioned_padding_numel = zero3_partitioned_param_info(unpartitioned_numel, world_size)
+
+         if debug:
+             print(
+                 f"Frozen params: {total_params} {name} full shape: {shape} partition0 numel={partitioned_numel} partitioned_padding_numel={partitioned_padding_numel}"
+             )
+
+     print(f"Reconstructed Frozen fp32 state dict with {total_params} params {total_numel} elements")
+
+
+ def _zero3_merge_trainable_params(state_dict, world_size, fp32_flat_groups, zero_model_states):
+     param_shapes = zero_model_states[0].param_shapes
+     avail_numel = fp32_flat_groups[0].numel() * world_size
+     # Reconstruction protocol: For zero3 we need to zip the partitions together at boundary of each
+     # param, re-consolidating each param, while dealing with padding if any
+
+     # merge list of dicts, preserving order
+     param_shapes = {k: v for d in param_shapes for k, v in d.items()}
+
+     if debug:
+         for i in range(world_size):
+             print(f"{FP32_FLAT_GROUPS}[{i}].shape={fp32_flat_groups[i].shape}")
+
+     wanted_params = len(param_shapes)
+     wanted_numel = sum(shape.numel() for shape in param_shapes.values())
+     # not asserting if there is a mismatch due to possible padding
+     avail_numel = fp32_flat_groups[0].numel() * world_size
+     print(f"Trainable params: Have {avail_numel} numels to process.")
+     print(f"Trainable params: Need {wanted_numel} numels in {wanted_params} params.")
+
+     # params
+     # XXX: for huge models that can't fit into the host's RAM we will have to recode this to support
+     # out-of-core computing solution
+     offset = 0
+     total_numel = 0
+     total_params = 0
+     for name, shape in param_shapes.items():
+
+         unpartitioned_numel = shape.numel()
+         total_numel += unpartitioned_numel
+         total_params += 1
+
+         partitioned_numel, partitioned_padding_numel = zero3_partitioned_param_info(unpartitioned_numel, world_size)
+
+         if debug:
+             print(
+                 f"Trainable params: {total_params} {name} full shape: {shape} partition0 numel={partitioned_numel} partitioned_padding_numel={partitioned_padding_numel}"
+             )
+
+         # XXX: memory usage doubles here
+         state_dict[name] = torch.cat(
+             tuple(fp32_flat_groups[i].narrow(0, offset, partitioned_numel) for i in range(world_size)),
+             0).narrow(0, 0, unpartitioned_numel).view(shape)
+         offset += partitioned_numel
+
+     offset *= world_size
+
+     # Sanity check
+     if offset != avail_numel:
+         raise ValueError(f"consumed {offset} numels out of {avail_numel} - something is wrong")
+
+     print(f"Reconstructed Trainable fp32 state dict with {total_params} params {total_numel} elements")
+
+
+ def _get_fp32_state_dict_from_zero3_checkpoint(world_size, fp32_flat_groups, zero_model_states):
+     state_dict = OrderedDict()
+
+     # buffers
+     buffers = zero_model_states[0].buffers
+     state_dict.update(buffers)
+     if debug:
+         print(f"added {len(buffers)} buffers")
+
+     _zero3_merge_frozen_params(state_dict, world_size, zero_model_states)
+
+     _zero3_merge_trainable_params(state_dict, world_size, fp32_flat_groups, zero_model_states)
+
+     # recover shared parameters
+     for pair in zero_model_states[0].shared_params:
+         if pair[1] in state_dict:
+             state_dict[pair[0]] = state_dict[pair[1]]
+
+     return state_dict
+
+
+ def get_fp32_state_dict_from_zero_checkpoint(checkpoint_dir, tag=None):
+     """
+     Convert ZeRO 2 or 3 checkpoint into a single fp32 consolidated state_dict that can be loaded with
+     ``load_state_dict()`` and used for training without DeepSpeed or shared with others, for example
+     via a model hub.
+
+     Args:
+         - ``checkpoint_dir``: path to the desired checkpoint folder
+         - ``tag``: checkpoint tag used as a unique identifier for checkpoint. If not provided will attempt to load tag in 'latest' file. e.g., ``global_step14``
+
+     Returns:
+         - pytorch ``state_dict``
+
+     Note: this approach may not work if your application doesn't have sufficient free CPU memory and
+     you may need to use the offline approach using the ``zero_to_fp32.py`` script that is saved with
+     the checkpoint.
+
+     A typical usage might be ::
+
+         from deepspeed.utils.zero_to_fp32 import get_fp32_state_dict_from_zero_checkpoint
+         # do the training and checkpoint saving
+         state_dict = get_fp32_state_dict_from_zero_checkpoint(checkpoint_dir) # already on cpu
+         model = model.cpu() # move to cpu
+         model.load_state_dict(state_dict)
+         # submit to model hub or save the model to share with others
+
+     In this example the ``model`` will no longer be usable in the deepspeed context of the same
+     application, i.e. you will need to re-initialize the deepspeed engine, since
+     ``model.load_state_dict(state_dict)`` will remove all the deepspeed magic from it.
+
+     If you want it all done for you, use ``load_state_dict_from_zero_checkpoint`` instead.
+
+     """
+     if tag is None:
+         latest_path = os.path.join(checkpoint_dir, 'latest')
+         if os.path.isfile(latest_path):
+             with open(latest_path, 'r') as fd:
+                 tag = fd.read().strip()
+         else:
+             raise ValueError(f"Unable to find 'latest' file at {latest_path}")
+
+     ds_checkpoint_dir = os.path.join(checkpoint_dir, tag)
+
+     if not os.path.isdir(ds_checkpoint_dir):
+         raise FileNotFoundError(f"Directory '{ds_checkpoint_dir}' doesn't exist")
+
+     return _get_fp32_state_dict_from_zero_checkpoint(ds_checkpoint_dir)
+
+
+ def convert_zero_checkpoint_to_fp32_state_dict(checkpoint_dir, output_file, tag=None):
+     """
+     Convert ZeRO 2 or 3 checkpoint into a single fp32 consolidated ``state_dict`` file that can be
+     loaded with ``torch.load(file)`` + ``load_state_dict()`` and used for training without DeepSpeed.
+
+     Args:
+         - ``checkpoint_dir``: path to the desired checkpoint folder. (one that contains the tag-folder, like ``global_step14``)
+         - ``output_file``: path to the pytorch fp32 state_dict output file (e.g. path/pytorch_model.bin)
+         - ``tag``: checkpoint tag used as a unique identifier for checkpoint. If not provided will attempt to load tag in the file named ``latest`` in the checkpoint folder, e.g., ``global_step14``
+     """
+
+     state_dict = get_fp32_state_dict_from_zero_checkpoint(checkpoint_dir, tag)
+     print(f"Saving fp32 state dict to {output_file}")
+     torch.save(state_dict, output_file)
+
+
+ def load_state_dict_from_zero_checkpoint(model, checkpoint_dir, tag=None):
+     """
+     1. Put the provided model to cpu
+     2. Convert ZeRO 2 or 3 checkpoint into a single fp32 consolidated ``state_dict``
+     3. Load it into the provided model
+
+     Args:
+         - ``model``: the model object to update
+         - ``checkpoint_dir``: path to the desired checkpoint folder. (one that contains the tag-folder, like ``global_step14``)
+         - ``tag``: checkpoint tag used as a unique identifier for checkpoint. If not provided will attempt to load tag in the file named ``latest`` in the checkpoint folder, e.g., ``global_step14``
+
+     Returns:
+         - ``model``: modified model
+
+     Make sure you have plenty of CPU memory available before you call this function. If you don't
+     have enough, use the ``zero_to_fp32.py`` utility to do the conversion. You will find it
+     conveniently placed for you in the checkpoint folder.
+
+     A typical usage might be ::
+
+         from deepspeed.utils.zero_to_fp32 import load_state_dict_from_zero_checkpoint
+         model = load_state_dict_from_zero_checkpoint(trainer.model, checkpoint_dir)
+         # submit to model hub or save the model to share with others
+
+     Note that once this has been run, the ``model`` will no longer be usable in the deepspeed context
+     of the same application, i.e. you will need to re-initialize the deepspeed engine, since
+     ``model.load_state_dict(state_dict)`` will remove all the deepspeed magic from it.
+
+     """
+     logger.info("Extracting fp32 weights")
+     state_dict = get_fp32_state_dict_from_zero_checkpoint(checkpoint_dir, tag)
+
+     logger.info("Overwriting model with fp32 weights")
+     model = model.cpu()
+     model.load_state_dict(state_dict, strict=False)
+
+     return model
+
+
+ if __name__ == "__main__":
+
+     parser = argparse.ArgumentParser()
+     parser.add_argument("checkpoint_dir",
+                         type=str,
+                         help="path to the desired checkpoint folder, e.g., path/checkpoint-12")
+     parser.add_argument(
+         "output_file",
+         type=str,
+         help="path to the pytorch fp32 state_dict output file (e.g. path/checkpoint-12/pytorch_model.bin)")
+     parser.add_argument("-t",
+                         "--tag",
+                         type=str,
+                         default=None,
+                         help="checkpoint tag used as a unique identifier for checkpoint. e.g., global_step1")
+     parser.add_argument("-d", "--debug", action='store_true', help="enable debug")
+     args = parser.parse_args()
+
+     debug = args.debug
+
+     convert_zero_checkpoint_to_fp32_state_dict(args.checkpoint_dir, args.output_file, tag=args.tag)
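For reference, a usage sketch for the conversion helper above; the checkpoint path is hypothetical and should point at a folder containing a tag sub-folder such as `global_step14`:

```python
# Equivalent CLI invocation, per the script's own example:
#   python zero_to_fp32.py path/to/checkpoint-12 path/to/checkpoint-12/pytorch_model.bin
from zero_to_fp32 import convert_zero_checkpoint_to_fp32_state_dict

# 'path/to/checkpoint-12' is a hypothetical DeepSpeed checkpoint directory
convert_zero_checkpoint_to_fp32_state_dict('path/to/checkpoint-12', 'pytorch_model.bin')
```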