irexyc bibimbap committed on
Commit
7ceb6fd
0 Parent(s):

Duplicate from bibimbap/Qwen-VL-Chat


Co-authored-by: bibimbap <bibimbap@users.noreply.huggingface.co>

.gitattributes ADDED
@@ -0,0 +1,37 @@
1
+ *.7z filter=lfs diff=lfs merge=lfs -text
2
+ *.arrow filter=lfs diff=lfs merge=lfs -text
3
+ *.bin filter=lfs diff=lfs merge=lfs -text
4
+ *.bz2 filter=lfs diff=lfs merge=lfs -text
5
+ *.ckpt filter=lfs diff=lfs merge=lfs -text
6
+ *.ftz filter=lfs diff=lfs merge=lfs -text
7
+ *.gz filter=lfs diff=lfs merge=lfs -text
8
+ *.h5 filter=lfs diff=lfs merge=lfs -text
9
+ *.joblib filter=lfs diff=lfs merge=lfs -text
10
+ *.lfs.* filter=lfs diff=lfs merge=lfs -text
11
+ *.mlmodel filter=lfs diff=lfs merge=lfs -text
12
+ *.model filter=lfs diff=lfs merge=lfs -text
13
+ *.msgpack filter=lfs diff=lfs merge=lfs -text
14
+ *.npy filter=lfs diff=lfs merge=lfs -text
15
+ *.npz filter=lfs diff=lfs merge=lfs -text
16
+ *.onnx filter=lfs diff=lfs merge=lfs -text
17
+ *.ot filter=lfs diff=lfs merge=lfs -text
18
+ *.parquet filter=lfs diff=lfs merge=lfs -text
19
+ *.pb filter=lfs diff=lfs merge=lfs -text
20
+ *.pickle filter=lfs diff=lfs merge=lfs -text
21
+ *.pkl filter=lfs diff=lfs merge=lfs -text
22
+ *.pt filter=lfs diff=lfs merge=lfs -text
23
+ *.pth filter=lfs diff=lfs merge=lfs -text
24
+ *.rar filter=lfs diff=lfs merge=lfs -text
25
+ *.safetensors filter=lfs diff=lfs merge=lfs -text
26
+ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
27
+ *.tar.* filter=lfs diff=lfs merge=lfs -text
28
+ *.tar filter=lfs diff=lfs merge=lfs -text
29
+ *.tflite filter=lfs diff=lfs merge=lfs -text
30
+ *.tgz filter=lfs diff=lfs merge=lfs -text
31
+ *.wasm filter=lfs diff=lfs merge=lfs -text
32
+ *.xz filter=lfs diff=lfs merge=lfs -text
33
+ *.zip filter=lfs diff=lfs merge=lfs -text
34
+ *.zst filter=lfs diff=lfs merge=lfs -text
35
+ *tfevents* filter=lfs diff=lfs merge=lfs -text
36
+ SimSun.ttf filter=lfs diff=lfs merge=lfs -text
37
+ assets/apple.jpeg filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,708 @@
1
+ ---
2
+ language:
3
+ - zh
4
+ - en
5
+ tags:
6
+ - qwen
7
+ pipeline_tag: text-generation
8
+ inference: false
9
+ ---
10
+
11
+ # Qwen-VL-Chat
12
+
13
+ <br>
14
+
15
+ <p align="center">
16
+ <img src="https://qianwen-res.oss-cn-beijing.aliyuncs.com/logo_vl.jpg" width="400"/>
17
+ </p>
18
+ <br>
19
+
20
+ <p align="center">
21
+ Qwen-VL <a href="https://modelscope.cn/models/qwen/Qwen-VL/summary">🤖</a> | <a href="https://huggingface.co/Qwen/Qwen-VL">🤗</a>&nbsp; | Qwen-VL-Chat <a href="https://modelscope.cn/models/qwen/Qwen-VL-Chat/summary">🤖</a> | <a href="https://huggingface.co/Qwen/Qwen-VL-Chat">🤗</a>&nbsp; | Qwen-VL-Chat-Int4 <a href="https://huggingface.co/Qwen/Qwen-VL-Chat-Int4">🤗</a>
22
+ <br>
23
+ <a href="assets/wechat.png">WeChat</a>&nbsp;&nbsp; | &nbsp;&nbsp;<a href="https://discord.gg/z3GAxXZ9Ce">Discord</a>&nbsp;&nbsp; | &nbsp;&nbsp;<a href="https://modelscope.cn/studios/qwen/Qwen-VL-Chat-Demo/summary">Demo</a>&nbsp; | &nbsp;<a href="https://arxiv.org/abs/2308.12966">Report</a>
24
+ </p>
25
+ <br>
26
+
27
+ **Qwen-VL** 是阿里云研发的大规模视觉语言模型(Large Vision Language Model, LVLM)。Qwen-VL 可以以图像、文本、检测框作为输入,并以文本和检测框作为输出。Qwen-VL 系列模型性能强大,具备多语言对话、多图交错对话等能力,并支持中文开放域定位和细粒度图像识别与理解。
28
+
29
+ **Qwen-VL** (Qwen Large Vision Language Model) is the visual multimodal version of the large model series Qwen (abbr. Tongyi Qianwen) proposed by Alibaba Cloud. Qwen-VL accepts images, text, and bounding boxes as inputs and outputs text and bounding boxes. The Qwen-VL series delivers strong performance, supports multilingual and interleaved multi-image conversation, and provides Chinese open-domain grounding as well as fine-grained image recognition and understanding.
30
+
31
+ 目前,我们提供了Qwen-VL和Qwen-VL-Chat两个模型,分别为预训练模型和Chat模型。如果想了解更多关于模型的信息,请点击[链接](https://github.com/QwenLM/Qwen-VL/blob/master/visual_memo.md)查看我们的技术备忘录。本仓库为Qwen-VL-Chat仓库。
32
+
33
+ We release Qwen-VL and Qwen-VL-Chat, which are pretrained model and Chat model respectively. For more details about Qwen-VL, please refer to our [technical memo](https://github.com/QwenLM/Qwen-VL/blob/master/visual_memo.md). This repo is the one for Qwen-VL-Chat.
34
+ <br>
35
+
36
+ ## 安装要求 (Requirements)
37
+
38
+ * python 3.8及以上版本
39
+ * pytorch 1.12及以上版本,推荐2.0及以上版本
40
+ * 建议使用CUDA 11.4及以上(GPU用户需考虑此选项)
41
+ * python 3.8 and above
42
+ * pytorch 1.12 and above, 2.0 and above are recommended
43
+ * CUDA 11.4 and above are recommended (this is for GPU users)
44
+ <br>
45
+
46
+ ## 快速开始 (Quickstart)
47
+
48
+ 我们提供简单的示例来说明如何利用 🤗 Transformers 快速使用Qwen-VL-Chat。
49
+
50
+ 在开始前,请确保你已经配置好环境并安装好相关的代码包。最重要的是,确保你满足上述要求,然后安装相关的依赖库。
51
+
52
+ Below, we provide simple examples to show how to use Qwen-VL-Chat with 🤗 Transformers.
53
+
54
+ Before running the code, make sure you have set up the environment and installed the required packages. In particular, make sure you meet the above requirements, and then install the dependent libraries.
55
+
56
+ ```bash
57
+ pip install -r requirements.txt
58
+ ```
59
+
60
+ 接下来你可以开始使用Transformers来使用我们的模型。关于视觉模块的更多用法,请参考[教程](TUTORIAL.md)。
61
+
62
+ Now you can start with Transformers. For more usage of the vision encoder, please refer to the [tutorial](TUTORIAL_zh.md).
63
+
64
+ #### 🤗 Transformers
65
+
66
+ To use Qwen-VL-Chat for inference, all you need to do is run the few lines of code demonstrated below. However, **please make sure that you are using the latest code.**
67
+
68
+ ```python
69
+ from transformers import AutoModelForCausalLM, AutoTokenizer
70
+ from transformers.generation import GenerationConfig
71
+ import torch
72
+ torch.manual_seed(1234)
73
+
74
+ # Note: The default behavior now has injection attack prevention off.
75
+ tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen-VL-Chat", trust_remote_code=True)
76
+
77
+ # use bf16
78
+ # model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen-VL-Chat", device_map="auto", trust_remote_code=True, bf16=True).eval()
79
+ # use fp16
80
+ # model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen-VL-Chat", device_map="auto", trust_remote_code=True, fp16=True).eval()
81
+ # use cpu only
82
+ # model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen-VL-Chat", device_map="cpu", trust_remote_code=True).eval()
83
+ # use cuda device
84
+ model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen-VL-Chat", device_map="cuda", trust_remote_code=True).eval()
85
+
86
+ # Specify hyperparameters for generation (No need to do this if you are using transformers>=4.32.0)
87
+ # model.generation_config = GenerationConfig.from_pretrained("Qwen/Qwen-VL-Chat", trust_remote_code=True)
88
+
89
+ # 1st dialogue turn
90
+ query = tokenizer.from_list_format([
91
+ {'image': 'https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen-VL/assets/demo.jpeg'},
92
+ {'text': '这是什么'},
93
+ ])
94
+ response, history = model.chat(tokenizer, query=query, history=None)
95
+ print(response)
96
+ # 图中是一名年轻女子在沙滩上和她的狗玩耍,狗的品种可能是拉布拉多。她们坐在沙滩上,狗的前腿抬起来,似乎在和人类击掌。两人之间充满了信任和爱。
97
+
98
+ # 2nd dialogue turn
99
+ response, history = model.chat(tokenizer, '输出"击掌"的检测框', history=history)
100
+ print(response)
101
+ # <ref>击掌</ref><box>(517,508),(589,611)</box>
102
+ image = tokenizer.draw_bbox_on_latest_picture(response, history)
103
+ if image:
104
+ image.save('1.jpg')
105
+ else:
106
+ print("no box")
107
+ ```
108
+
109
+ <p align="center">
110
+ <img src="https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen-VL/assets/demo_highfive.jpg" width="500"/>
111
+ </p>
112
+ <br>
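+
+ The second dialogue turn above returns grounding results inline as `<ref>...</ref><box>(x1,y1),(x2,y2)</box>` tags, and `tokenizer.draw_bbox_on_latest_picture` takes care of rendering them. If you only need the raw numbers, a small regex is enough; the sketch below is our own illustration (the `parse_boxes` helper is not part of the released API).
+
+ ```python
+ import re
+
+ def parse_boxes(response: str):
+     """Extract (label, box) pairs from a Qwen-VL-Chat response string."""
+     pattern = r"<ref>(.*?)</ref><box>\((\d+),(\d+)\),\((\d+),(\d+)\)</box>"
+     return [
+         {"label": m[0], "box": tuple(int(v) for v in m[1:])}
+         for m in re.findall(pattern, response)
+     ]
+
+ print(parse_boxes('<ref>击掌</ref><box>(517,508),(589,611)</box>'))
+ # [{'label': '击掌', 'box': (517, 508, 589, 611)}]
+ ```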
113
+
114
+ ## 量化 (Quantization)
115
+
116
+ ### 用法 (Usage)
117
+
118
+ 当前我们提供了基于[AutoGPTQ](https://github.com/PanQiWei/AutoGPTQ)的量化方案,并提供了Qwen-VL-Chat的Int4量化版本Qwen-VL-Chat-Int4 [点击此处](https://huggingface.co/Qwen/Qwen-VL-Chat-Int4)。该模型在效果评测上几乎无损,并在显存占用和推理速度上具有明显优势。
119
+
120
+ 下文说明如何使用该量化模型。开始之前,请确保你满足要求(如torch2.0及以上、transformers 4.32.0及以上,等)并安装所需的代码库:
121
+
122
+ We provide a solution based on [AutoGPTQ](https://github.com/PanQiWei/AutoGPTQ) and release an Int4 quantized model for Qwen-VL-Chat, Qwen-VL-Chat-Int4 ([click here](https://huggingface.co/Qwen/Qwen-VL-Chat-Int4)), which achieves nearly lossless model quality while reducing memory cost and improving inference speed.
123
+
124
+ Here we demonstrate how to use our provided quantized models for inference. Before you start, make sure you meet the requirements (e.g., torch 2.0 and above, transformers 4.32.0 and above, etc.) and install the required packages:
125
+
126
+ ```bash
127
+ pip install optimum
128
+ git clone https://github.com/JustinLin610/AutoGPTQ.git && cd AutoGPTQ
129
+ pip install -v .
130
+ ```
131
+
132
+ 如遇到安装 `auto-gptq` 的问题,建议您前往官方[repo](https://github.com/PanQiWei/AutoGPTQ) 寻找合适的wheel。
133
+
134
+ 随后你便可以按照上述用法,轻松调用量化模型:
135
+
136
+ If you meet problems installing `auto-gptq`, we advise you to check out the official [repo](https://github.com/PanQiWei/AutoGPTQ) to find a wheel.
137
+
138
+ Then you can load the quantized model easily and run inference in the same way as above:
139
+
140
+ ```python
141
+ model = AutoModelForCausalLM.from_pretrained(
142
+ "Qwen/Qwen-VL-Chat-Int4",
143
+ device_map="auto",
144
+ trust_remote_code=True
145
+ ).eval()
146
+ # Either a local path or a URL between <img></img> tags.
147
+ image_path = 'https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen-VL/assets/demo.jpeg'
148
+ response, history = model.chat(tokenizer, query=f'<img>{image_path}</img>这是什么', history=None)
149
+ print(response)
150
+ ```
151
+
152
+ ### 效果评测 (Performance)
153
+
154
+ 我们列出不同精度下模型在评测基准 **[TouchStone](https://github.com/OFA-Sys/TouchStone)** 上的表现,并发现量化模型并没有显著性能损失。结果如下所示:
155
+
156
+ We illustrate the model performance of both BF16 and Int4 models on the benchmark **[TouchStone](https://github.com/OFA-Sys/TouchStone)**, and we find that the quantized model does not suffer from significant performance degradation. Results are shown below:
157
+
158
+ | Quantization | ZH | EN |
159
+ | ------------ | :--------: | :-----------: |
160
+ | BF16 | 401.2 | 645.2 |
161
+ | Int4 | 386.6 | 651.4 |
162
+
163
+ ### 推理速度 (Inference Speed)
164
+
165
+ 我们测算了在输入一张图片(即258个token)的条件下BF16和Int4的模型生成1792 (2048-258) 和 7934 (8192-258) 个token的平均速度。
166
+
167
+ We measured the average inference speed (tokens/s) of generating 1792 (2048-258) and 7934 (8192-258) tokens with the context of an image (which takes 258 tokens) under BF16 precision and Int4 quantization, respectively.
168
+
169
+ | Quantization | Speed (2048 tokens) | Speed (8192 tokens) |
170
+ | ------------ | :-----------------: | :-----------------: |
171
+ | BF16 | 28.87 | 24.32 |
172
+ | Int4 | 37.79 | 34.34 |
173
+
174
+ 推理速度测算是在单卡 A100-SXM4-80G GPU上运行,使用PyTorch 2.0.1及CUDA 11.4。
175
+
176
+ The profiling runs on a single A100-SXM4-80G GPU with PyTorch 2.0.1 and CUDA 11.4.
177
+
178
+ ### GPU显存占用 (GPU Memory Usage)
179
+
180
+ 我们还测算了在一张图片输入的条件下BF16和Int4模型生成1792 (2048-258) 和 7934 (8192-258) 个token所需显存。结果如下所示:
181
+
182
+ We also profile the peak GPU memory usage for encoding 1792 (2048-258) tokens (including an image) as context (and generating a single token) and for generating 7934 (8192-258) tokens (with an image as context) under BF16 or Int4 quantization, respectively. The results are shown below.
183
+
184
+ | Quantization | Peak Usage for Encoding 2048 Tokens | Peak Usage for Generating 8192 Tokens |
185
+ | ------------ | :---------------------------------: | :-----------------------------------: |
186
+ | BF16 | 22.60GB | 28.01GB |
187
+ | Int4 | 11.82GB | 17.23GB |
188
+
189
+ 上述速度和显存测算使用[此脚本](https://qianwen-res.oss-cn-beijing.aliyuncs.com/profile_mm.py)完成。
190
+
191
+ The above speed and memory profiling are conducted using [this script](https://qianwen-res.oss-cn-beijing.aliyuncs.com/profile_mm.py).
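+
+ If you just want a quick estimate without the linked script, the measurement can be approximated in a few lines of PyTorch. The sketch below is our own simplification (it skips the chat template and forces a fixed number of new tokens), not the official `profile_mm.py`:
+
+ ```python
+ import time
+ import torch
+
+ # Assumes `model` and `tokenizer` are already loaded as in the Quickstart above.
+ image_path = 'https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen-VL/assets/demo.jpeg'
+ inputs = tokenizer(f'<img>{image_path}</img>Describe the image.', return_tensors='pt').to(model.device)
+
+ torch.cuda.reset_peak_memory_stats()
+ start = time.time()
+ output = model.generate(**inputs, do_sample=False,
+                         min_new_tokens=1792, max_new_tokens=1792)
+ elapsed = time.time() - start
+
+ new_tokens = output.shape[1] - inputs['input_ids'].shape[1]
+ print(f"{new_tokens / elapsed:.2f} tokens/s, "
+       f"peak memory {torch.cuda.max_memory_allocated() / 1024 ** 3:.2f} GB")
+ ```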
192
+ <br>
193
+
194
+ ## 评测 (Evaluation)
195
+
196
+ 我们从两个角度评测了两个模型的能力:
197
+
198
+ 1. 在**英文标准 Benchmark** 上评测模型的基础任务能力。目前评测了四大类多模态任务:
199
+
200
+ - Zero-shot Caption: 评测模型在未见过数据集上的零样本图片描述能力;
201
+ - General VQA: 评测模型的通用问答能力,例如判断题、颜色、个数、类目等问答能力;
202
+ - Text-based VQA:评测模型对于图片中文字相关的识别/问答能力,例如文档问答、图表问答、文字问答等;
203
+ - Referring Expression Comprehension:评测模型给定物体描述画检测框的能力;
204
+ 2. **试金石 (TouchStone)**:为了评测模型整体的图文对话能力和人类对齐水平。我们为此构建了一个基于 GPT4 打分来评测 LVLM 模型的 Benchmark:TouchStone。在 TouchStone-v0.1 中:
205
+
206
+ - 评测基准总计涵盖 300+张图片、800+道题目、27个类别。包括基础属性问答、人物地标问答、影视作品问答、视觉推理、反事实推理、诗歌创作、故事写作,商品比较、图片解题等**尽可能广泛的类别**。
207
+ - 为了弥补目前 GPT4 无法直接读取图片的缺陷,我们给所有的带评测图片提供了**人工标注的充分详细描述**,并且将图片的详细描述、问题和模型的输出结果一起交给 GPT4 打分。
208
+ - 评测同时包含英文版本和中文版本。
209
+
210
+ 评测结果如下:
211
+
212
+ We evaluated the model's ability from two perspectives:
213
+
214
+ 1. **Standard Benchmarks**: We evaluate the model's basic task capabilities on four major categories of multimodal tasks:
215
+
216
+ - Zero-shot Caption: Evaluate model's zero-shot image captioning ability on unseen datasets;
217
+ - General VQA: Evaluate the model's general question-answering ability on images, such as judgment, color, counting, and category questions;
218
+ - Text-based VQA: Evaluate the model's ability to recognize text in pictures, such as document QA, chart QA, etc;
219
+ - Referring Expression Comprehension: Evaluate the ability to localize a target object in an image described by a referring expression.
220
+ 2. **TouchStone**: To evaluate the overall text-image dialogue capability and alignment level with humans, we have constructed a benchmark called TouchStone, which is based on scoring with GPT4 to evaluate the LVLM model.
221
+
222
+ - The TouchStone benchmark covers a total of 300+ images, 800+ questions, and 27 categories, such as attribute-based Q&A, celebrity recognition, poetry writing, summarizing multiple images, product comparison, and math problem solving;
223
+ - In order to work around GPT4's current inability to take images directly, TouchStone provides fine-grained image annotations written by human labelers. These detailed annotations, along with the question and the model's output, are then presented to GPT4 for scoring (a minimal sketch of such a scoring request follows this list).
224
+ - The benchmark includes both English and Chinese versions.
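+
+ The following is a minimal, hypothetical sketch of how such a scoring request can be assembled. Everything in it (the function name, wording, and the 0-to-10 scale) is a placeholder for illustration only; the actual prompts and rubric are defined by TouchStone itself.
+
+ ```python
+ def build_scoring_prompt(image_description: str, question: str, model_answer: str) -> str:
+     """Compose a text-only judging prompt; the judge (e.g. GPT-4) never sees the image."""
+     return (
+         "You are grading a vision-language model.\n"
+         f"Human-written description of the image: {image_description}\n"
+         f"Question: {question}\n"
+         f"Model answer: {model_answer}\n"
+         "Rate the answer from 0 to 10 for correctness and helpfulness, then justify briefly."
+     )
+ ```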
225
+
226
+ The results of the evaluation are as follows:
227
+
228
+ Qwen-VL outperforms current SOTA generalist models on multiple VL tasks and covers a broader range of capabilities.
229
+
230
+ <p align="center">
231
+ <img src="https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen-VL/assets/radar.png" width="600"/>
232
+ </p>
233
+
234
+ ### 零样本图像描述 & 通用视觉问答 (Zero-shot Captioning & General VQA)
235
+
236
+ <table>
237
+ <thead>
238
+ <tr>
239
+ <th rowspan="2">Model type</th>
240
+ <th rowspan="2">Model</th>
241
+ <th colspan="2">Zero-shot Captioning</th>
242
+ <th colspan="5">General VQA</th>
243
+ </tr>
244
+ <tr>
245
+ <th>NoCaps</th>
246
+ <th>Flickr30K</th>
247
+ <th>VQAv2<sup>dev</sup></th>
248
+ <th>OK-VQA</th>
249
+ <th>GQA</th>
250
+ <th>SciQA-Img<br>(0-shot)</th>
251
+ <th>VizWiz<br>(0-shot)</th>
252
+ </tr>
253
+ </thead>
254
+ <tbody align="center">
255
+ <tr>
256
+ <td rowspan="10">Generalist<br>Models</td>
257
+ <td>Flamingo-9B</td>
258
+ <td>-</td>
259
+ <td>61.5</td>
260
+ <td>51.8</td>
261
+ <td>44.7</td>
262
+ <td>-</td>
263
+ <td>-</td>
264
+ <td>28.8</td>
265
+ </tr>
266
+ <tr>
267
+ <td>Flamingo-80B</td>
268
+ <td>-</td>
269
+ <td>67.2</td>
270
+ <td>56.3</td>
271
+ <td>50.6</td>
272
+ <td>-</td>
273
+ <td>-</td>
274
+ <td>31.6</td>
275
+ </tr>
276
+ <tr>
277
+ <td>Unified-IO-XL</td>
278
+ <td>100.0</td>
279
+ <td>-</td>
280
+ <td>77.9</td>
281
+ <td>54.0</td>
282
+ <td>-</td>
283
+ <td>-</td>
284
+ <td>-</td>
285
+ </tr>
286
+ <tr>
287
+ <td>Kosmos-1</td>
288
+ <td>-</td>
289
+ <td>67.1</td>
290
+ <td>51.0</td>
291
+ <td>-</td>
292
+ <td>-</td>
293
+ <td>-</td>
294
+ <td>29.2</td>
295
+ </tr>
296
+ <tr>
297
+ <td>Kosmos-2</td>
298
+ <td>-</td>
299
+ <td>66.7</td>
300
+ <td>45.6</td>
301
+ <td>-</td>
302
+ <td>-</td>
303
+ <td>-</td>
304
+ <td>-</td>
305
+ </tr>
306
+ <tr>
307
+ <td>BLIP-2 (Vicuna-13B)</td>
308
+ <td>103.9</td>
309
+ <td>71.6</td>
310
+ <td>65.0</td>
311
+ <td>45.9</td>
312
+ <td>32.3</td>
313
+ <td>61.0</td>
314
+ <td>19.6</td>
315
+ </tr>
316
+ <tr>
317
+ <td>InstructBLIP (Vicuna-13B)</td>
318
+ <td><strong>121.9</strong></td>
319
+ <td>82.8</td>
320
+ <td>-</td>
321
+ <td>-</td>
322
+ <td>49.5</td>
323
+ <td>63.1</td>
324
+ <td>33.4</td>
325
+ </tr>
326
+ <tr>
327
+ <td>Shikra (Vicuna-13B)</td>
328
+ <td>-</td>
329
+ <td>73.9</td>
330
+ <td>77.36</td>
331
+ <td>47.16</td>
332
+ <td>-</td>
333
+ <td>-</td>
334
+ <td>-</td>
335
+ </tr>
336
+ <tr>
337
+ <td><strong>Qwen-VL (Qwen-7B)</strong></td>
338
+ <td>121.4</td>
339
+ <td><b>85.8</b></td>
340
+ <td><b>78.8</b></td>
341
+ <td><b>58.6</b></td>
342
+ <td><b>59.3</b></td>
343
+ <td>67.1</td>
344
+ <td>35.2</td>
345
+ </tr>
346
+ <!-- <tr>
347
+ <td>Qwen-VL (4-shot)</td>
348
+ <td>-</td>
349
+ <td>-</td>
350
+ <td>-</td>
351
+ <td>63.6</td>
352
+ <td>-</td>
353
+ <td>-</td>
354
+ <td>39.1</td>
355
+ </tr> -->
356
+ <tr>
357
+ <td>Qwen-VL-Chat</td>
358
+ <td>120.2</td>
359
+ <td>81.0</td>
360
+ <td>78.2</td>
361
+ <td>56.6</td>
362
+ <td>57.5</td>
363
+ <td><b>68.2</b></td>
364
+ <td><b>38.9</b></td>
365
+ </tr>
366
+ <!-- <tr>
367
+ <td>Qwen-VL-Chat (4-shot)</td>
368
+ <td>-</td>
369
+ <td>-</td>
370
+ <td>-</td>
371
+ <td>60.6</td>
372
+ <td>-</td>
373
+ <td>-</td>
374
+ <td>44.45</td>
375
+ </tr> -->
376
+ <tr>
377
+ <td>Previous SOTA<br>(Per Task Fine-tuning)</td>
378
+ <td>-</td>
379
+ <td>127.0<br>(PALI-17B)</td>
380
+ <td>84.5<br>(InstructBLIP<br>-FlanT5-XL)</td>
381
+ <td>86.1<br>(PALI-X<br>-55B)</td>
382
+ <td>66.1<br>(PALI-X<br>-55B)</td>
383
+ <td>72.1<br>(CFR)</td>
384
+ <td>92.53<br>(LLaVa+<br>GPT-4)</td>
385
+ <td>70.9<br>(PALI-X<br>-55B)</td>
386
+ </tr>
387
+ </tbody>
388
+ </table>
389
+
390
+ - 在 Zero-shot Caption 中,Qwen-VL 在 Flickr30K 数据集上取得了 **SOTA** 的结果,并在 Nocaps 数据集上取得了和 InstructBlip 可竞争的结果。
391
+ - 在 General VQA 中,Qwen-VL 取得了 LVLM 模型同等量级和设定下 **SOTA** 的结果。
392
+ - For zero-shot image captioning, Qwen-VL achieves the **SOTA** on Flickr30K and results on NoCaps that are competitive with InstructBLIP.
393
+ - For general VQA, Qwen-VL achieves the **SOTA** under the same generalist LVLM scale settings.
394
+
395
+ ### 文本导向的视觉问答 (Text-oriented VQA)
396
+
397
+ <table>
398
+ <thead>
399
+ <tr>
400
+ <th>Model type</th>
401
+ <th>Model</th>
402
+ <th>TextVQA</th>
403
+ <th>DocVQA</th>
404
+ <th>ChartQA</th>
405
+ <th>AI2D</th>
406
+ <th>OCR-VQA</th>
407
+ </tr>
408
+ </thead>
409
+ <tbody align="center">
410
+ <tr>
411
+ <td rowspan="5">Generalist Models</td>
412
+ <td>BLIP-2 (Vicuna-13B)</td>
413
+ <td>42.4</td>
414
+ <td>-</td>
415
+ <td>-</td>
416
+ <td>-</td>
417
+ <td>-</td>
418
+ </tr>
419
+ <tr>
420
+ <td>InstructBLIP (Vicuna-13B)</td>
421
+ <td>50.7</td>
422
+ <td>-</td>
423
+ <td>-</td>
424
+ <td>-</td>
425
+ <td>-</td>
426
+ </tr>
427
+ <tr>
428
+ <td>mPLUG-DocOwl (LLaMA-7B)</td>
429
+ <td>52.6</td>
430
+ <td>62.2</td>
431
+ <td>57.4</td>
432
+ <td>-</td>
433
+ <td>-</td>
434
+ </tr>
435
+ <tr>
436
+ <td>Pic2Struct-Large (1.3B)</td>
437
+ <td>-</td>
438
+ <td><b>76.6</b></td>
439
+ <td>58.6</td>
440
+ <td>42.1</td>
441
+ <td>71.3</td>
442
+ </tr>
443
+ <tr>
444
+ <td>Qwen-VL (Qwen-7B)</td>
445
+ <td><b>63.8</b></td>
446
+ <td>65.1</td>
447
+ <td><b>65.7</b></td>
448
+ <td><b>62.3</b></td>
449
+ <td><b>75.7</b></td>
450
+ </tr>
451
+ <tr>
452
+ <td>Specialist SOTAs<br>(Specialist/Finetuned)</td>
453
+ <td>PALI-X-55B (Single-task FT)<br>(Without OCR Pipeline)</td>
454
+ <td>71.44</td>
455
+ <td>80.0</td>
456
+ <td>70.0</td>
457
+ <td>81.2</td>
458
+ <td>75.0</td>
459
+ </tr>
460
+ </tbody>
461
+ </table>
462
+
463
+ - 在文字相关的识别/问答评测上,取得了当前规模下通用 LVLM 达到的最好结果。
464
+ - 分辨率对上述某几个评测非常重要,大部分 224 分辨率的开源 LVLM 模型无法完成以上评测,或只能通过切图的方式解决。Qwen-VL 将分辨率提升到 448,可以直接以端到端的方式进行以上评测。Qwen-VL 在很多任务上甚至超过了 1024 分辨率的 Pic2Struct-Large 模型。
465
+ - In text-related recognition/QA evaluation, Qwen-VL achieves the SOTA under the generalist LVLM scale settings.
466
+ - Resolution matters for several of the above evaluations. Most open-source LVLMs at 224 resolution cannot handle them, or can only do so by tiling the image, whereas Qwen-VL scales the resolution to 448 and can be evaluated end-to-end. On some tasks Qwen-VL even outperforms Pic2Struct-Large, which operates at 1024 resolution.
467
+
468
+ ### 细粒度视觉定位 (Referring Expression Comprehension)
469
+
470
+ <table>
471
+ <thead>
472
+ <tr>
473
+ <th rowspan="2">Model type</th>
474
+ <th rowspan="2">Model</th>
475
+ <th colspan="3">RefCOCO</th>
476
+ <th colspan="3">RefCOCO+</th>
477
+ <th colspan="2">RefCOCOg</th>
478
+ <th>GRIT</th>
479
+ </tr>
480
+ <tr>
481
+ <th>val</th>
482
+ <th>test-A</th>
483
+ <th>test-B</th>
484
+ <th>val</th>
485
+ <th>test-A</th>
486
+ <th>test-B</th>
487
+ <th>val-u</th>
488
+ <th>test-u</th>
489
+ <th>refexp</th>
490
+ </tr>
491
+ </thead>
492
+ <tbody align="center">
493
+ <tr>
494
+ <td rowspan="8">Generalist Models</td>
495
+ <td>GPV-2</td>
496
+ <td>-</td>
497
+ <td>-</td>
498
+ <td>-</td>
499
+ <td>-</td>
500
+ <td>-</td>
501
+ <td>-</td>
502
+ <td>-</td>
503
+ <td>-</td>
504
+ <td>51.50</td>
505
+ </tr>
506
+ <tr>
507
+ <td>OFA-L*</td>
508
+ <td>79.96</td>
509
+ <td>83.67</td>
510
+ <td>76.39</td>
511
+ <td>68.29</td>
512
+ <td>76.00</td>
513
+ <td>61.75</td>
514
+ <td>67.57</td>
515
+ <td>67.58</td>
516
+ <td>61.70</td>
517
+ </tr>
518
+ <tr>
519
+ <td>Unified-IO</td>
520
+ <td>-</td>
521
+ <td>-</td>
522
+ <td>-</td>
523
+ <td>-</td>
524
+ <td>-</td>
525
+ <td>-</td>
526
+ <td>-</td>
527
+ <td>-</td>
528
+ <td><b>78.61</b></td>
529
+ </tr>
530
+ <tr>
531
+ <td>VisionLLM-H</td>
532
+ <td></td>
533
+ <td>86.70</td>
534
+ <td>-</td>
535
+ <td>-</td>
536
+ <td>-</td>
537
+ <td>-</td>
538
+ <td>-</td>
539
+ <td>-</td>
540
+ <td>-</td>
541
+ </tr>
542
+ <tr>
543
+ <td>Shikra-7B</td>
544
+ <td>87.01</td>
545
+ <td>90.61</td>
546
+ <td>80.24 </td>
547
+ <td>81.60</td>
548
+ <td>87.36</td>
549
+ <td>72.12</td>
550
+ <td>82.27</td>
551
+ <td>82.19</td>
552
+ <td>69.34</td>
553
+ </tr>
554
+ <tr>
555
+ <td>Shikra-13B</td>
556
+ <td>87.83 </td>
557
+ <td>91.11</td>
558
+ <td>81.81</td>
559
+ <td>82.89</td>
560
+ <td>87.79</td>
561
+ <td>74.41</td>
562
+ <td>82.64</td>
563
+ <td>83.16</td>
564
+ <td>69.03</td>
565
+ </tr>
566
+ <tr>
567
+ <td>Qwen-VL-7B</td>
568
+ <td><b>89.36</b></td>
569
+ <td>92.26</td>
570
+ <td><b>85.34</b></td>
571
+ <td><b>83.12</b></td>
572
+ <td>88.25</td>
573
+ <td><b>77.21</b></td>
574
+ <td>85.58</td>
575
+ <td>85.48</td>
576
+ <td>78.22</td>
577
+ </tr>
578
+ <tr>
579
+ <td>Qwen-VL-7B-Chat</td>
580
+ <td>88.55</td>
581
+ <td><b>92.27</b></td>
582
+ <td>84.51</td>
583
+ <td>82.82</td>
584
+ <td><b>88.59</b></td>
585
+ <td>76.79</td>
586
+ <td><b>85.96</b></td>
587
+ <td><b>86.32</b></td>
588
+ <td>-</td>
589
+ </tr>
+ <tr>
590
+ <td rowspan="3">Specialist SOTAs<br>(Specialist/Finetuned)</td>
591
+ <td>G-DINO-L</td>
592
+ <td>90.56&nbsp;&nbsp;</td>
593
+ <td>93.19</td>
594
+ <td>88.24</td>
595
+ <td>82.75</td>
596
+ <td>88.95</td>
597
+ <td>75.92</td>
598
+ <td>86.13</td>
599
+ <td>87.02</td>
600
+ <td>-</td>
601
+ </tr>
602
+ <tr>
603
+ <td>UNINEXT-H</td>
604
+ <td>92.64 </td>
605
+ <td>94.33</td>
606
+ <td>91.46</td>
607
+ <td>85.24</td>
608
+ <td>89.63</td>
609
+ <td>79.79</td>
610
+ <td>88.73</td>
611
+ <td>89.37</td>
612
+ <td>-</td>
613
+ </tr>
614
+ <tr>
615
+ <td>ONE-PEACE</td>
616
+ <td>92.58 </td>
617
+ <td>94.18</td>
618
+ <td>89.26</td>
619
+ <td>88.77</td>
620
+ <td>92.21</td>
621
+ <td>83.23</td>
622
+ <td>89.22</td>
623
+ <td>89.27</td>
624
+ <td>-</td>
625
+ </tr>
626
+ </tbody>
627
+ </table>
628
+
629
+ - 在定位任务上,Qwen-VL 全面超过 Shikra-13B,取得了目前 Generalist LVLM 模型上在 Refcoco 上的 **SOTA**。
630
+ - Qwen-VL 并没有在任何中文定位数据上训练过,但通过中文 Caption 数据和 英文 Grounding 数据的训练,可以 Zero-shot 泛化出中文 Grounding 能力。
631
+
632
+ 我们提供了以上**所有**评测脚本以供复现我们的实验结果。请阅读 [eval/EVALUATION.md](eval/EVALUATION.md) 了解更多信息。
633
+
634
+ - Qwen-VL achieves the **SOTA** among generalist LVLMs on the RefCOCO, RefCOCO+, and RefCOCOg benchmarks above, surpassing Shikra-13B across the board.
635
+ - Qwen-VL has not been trained on any Chinese grounding data, yet it generalizes to Chinese grounding tasks zero-shot, thanks to training on Chinese caption data and English grounding data.
636
+
637
+ We provide all of the above evaluation scripts for reproducing our experimental results. Please read [eval/EVALUATION.md](eval/EVALUATION.md) for more information.
638
+
639
+ ### 闲聊能力测评 (Chat Evaluation)
640
+
641
+ TouchStone 是一个基于 GPT4 打分来评测 LVLM 模型的图文对话能力和人类对齐水平的基准。它涵盖了 300+张图片、800+道题目、27个类别,包括基础属性、人物地标、视觉推理、诗歌创作、故事写作、商品比较、图片解题等**尽可能广泛的类别**。关于 TouchStone 的详细介绍,请参考[touchstone/README_CN.md](touchstone/README_CN.md)了解更多信息。
642
+
643
+ TouchStone is a benchmark that uses GPT4 scoring to evaluate the text-image dialogue abilities of LVLMs and their alignment with humans. It covers a total of 300+ images, 800+ questions, and 27 categories, such as attribute-based Q&A, celebrity recognition, writing poetry, summarizing multiple images, product comparison, math problem solving, etc. Please read [touchstone/README.md](touchstone/README.md) for more information.
644
+
645
+ #### 英语 (English)
646
+
647
+ | Model | Score |
648
+ |---------------|-------|
649
+ | PandaGPT | 488.5 |
650
+ | MiniGPT4 | 531.7 |
651
+ | InstructBLIP | 552.4 |
652
+ | LLaMA-AdapterV2 | 590.1 |
653
+ | mPLUG-Owl | 605.4 |
654
+ | LLaVA | 602.7 |
655
+ | Qwen-VL-Chat | 645.2 |
656
+
657
+ #### 中文 (Chinese)
658
+
659
+ | Model | Score |
660
+ |---------------|-------|
661
+ | VisualGLM | 247.1 |
662
+ | Qwen-VL-Chat | 401.2 |
663
+
664
+ Qwen-VL-Chat 模型在中英文的对齐评测中均取得当前 LVLM 模型下的最好结果。
665
+
666
+ Qwen-VL-Chat achieves the best results among the evaluated LVLMs in both the Chinese and English alignment evaluations.
667
+ <br>
668
+
669
+ ## 常见问题 (FAQ)
670
+
671
+ 如遇到问题,敬请查阅 [FAQ](https://github.com/QwenLM/Qwen-VL/blob/master/FAQ_zh.md)以及issue区,如仍无法解决再提交issue。
672
+
673
+ If you run into problems, please first consult the [FAQ](https://github.com/QwenLM/Qwen-VL/blob/master/FAQ.md) and the existing issues to look for a solution before opening a new issue.
674
+ <br>
675
+
676
+ ## 使用协议 (License Agreement)
677
+
678
+ 研究人员与开发者可使用Qwen-VL和Qwen-VL-Chat或进行二次开发。我们同样允许商业使用,具体细节请查看[LICENSE](https://github.com/QwenLM/Qwen-VL/blob/master/LICENSE)。如需商用,请填写[问卷](https://dashscope.console.aliyun.com/openModelApply/qianwen)申请。
679
+
680
+ Researchers and developers are free to use the code and model weights of both Qwen-VL and Qwen-VL-Chat, and commercial use is also allowed. Check our license at [LICENSE](LICENSE) for more details, and fill out the [questionnaire](https://dashscope.console.aliyun.com/openModelApply/qianwen) to apply for commercial use.
681
+ <br>
682
+
683
+ ## 引用 (Citation)
684
+
685
+ 如果你觉得我们的论文和代码对你的研究有帮助,请考虑:star: 和引用 :pencil: :)
686
+
687
+ If you find our paper and code useful in your research, please consider giving a star :star: and citation :pencil: :)
688
+
689
+ ```BibTeX
690
+ @article{Qwen-VL,
691
+ title={Qwen-VL: A Frontier Large Vision-Language Model with Versatile Abilities},
692
+ author={Bai, Jinze and Bai, Shuai and Yang, Shusheng and Wang, Shijie and Tan, Sinan and Wang, Peng and Lin, Junyang and Zhou, Chang and Zhou, Jingren},
693
+ journal={arXiv preprint arXiv:2308.12966},
694
+ year={2023}
695
+ }
696
+ ```
697
+ <br>
698
+
699
+ ## 联系我们 (Contact Us)
700
+
701
+ 如果你想给我们的研发团队和产品团队留言,请通过邮件(qianwen_opensource@alibabacloud.com)联系我们。
702
+
703
+ If you are interested in leaving a message for either our research team or product team, feel free to send an email to qianwen_opensource@alibabacloud.com.
704
+
SimSun.ttf ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:ca4da082cd970f0c8abaa79f213ddcbc475f7b5afabcb81b385998f9ebfbb53f
3
+ size 10499104
config.json ADDED
@@ -0,0 +1,49 @@
1
+ {
2
+ "_name_or_path": "./",
3
+ "architectures": [
4
+ "QWenLMHeadModel"
5
+ ],
6
+ "attn_dropout_prob": 0.0,
7
+ "auto_map": {
8
+ "AutoConfig": "configuration_qwen.QWenConfig",
9
+ "AutoModelForCausalLM": "modeling_qwen.QWenLMHeadModel"
10
+ },
11
+ "bf16": false,
12
+ "emb_dropout_prob": 0.0,
13
+ "fp16": false,
14
+ "fp32": false,
15
+ "hidden_size": 4096,
16
+ "initializer_range": 0.02,
17
+ "intermediate_size": 22016,
18
+ "kv_channels": 128,
19
+ "layer_norm_epsilon": 1e-06,
20
+ "max_position_embeddings": 8192,
21
+ "model_type": "qwen",
22
+ "no_bias": true,
23
+ "num_attention_heads": 32,
24
+ "num_hidden_layers": 32,
25
+ "onnx_safe": null,
26
+ "rotary_emb_base": 10000,
27
+ "rotary_pct": 1.0,
28
+ "scale_attn_weights": true,
29
+ "seq_length": 2048,
30
+ "tie_word_embeddings": false,
31
+ "tokenizer_type": "QWenTokenizer",
32
+ "torch_dtype": "bfloat16",
33
+ "transformers_version": "4.31.0",
34
+ "use_cache": true,
35
+ "use_dynamic_ntk": true,
36
+ "use_flash_attn": false,
37
+ "use_logn_attn": true,
38
+ "visual": {
39
+ "heads": 16,
40
+ "image_size": 448,
41
+ "image_start_id": 151857,
42
+ "layers": 48,
43
+ "mlp_ratio": 4.9231,
44
+ "output_dim": 4096,
45
+ "patch_size": 14,
46
+ "width": 1664
47
+ },
48
+ "vocab_size": 151936
49
+ }
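
A quick, illustrative way to sanity-check the vision settings declared above is to load the config through `AutoConfig`. This is a sketch only; `trust_remote_code=True` is required because `QWenConfig` lives in this repository.

```python
from transformers import AutoConfig

# Illustrative sketch: inspect the vision tower settings from config.json.
config = AutoConfig.from_pretrained("Qwen/Qwen-VL-Chat", trust_remote_code=True)
visual = config.visual                                             # stored as a plain dict
patches_per_side = visual["image_size"] // visual["patch_size"]    # 448 // 14 = 32
print(patches_per_side ** 2, "patches enter the ViT per image")    # 1024
print("ViT width:", visual["width"], "-> projected output_dim:", visual["output_dim"])
```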
configuration_qwen.py ADDED
@@ -0,0 +1,65 @@
1
+ # Copyright (c) Alibaba Cloud.
2
+ #
3
+ # This source code is licensed under the license found in the
4
+ # LICENSE file in the root directory of this source tree.
5
+
6
+ from transformers import PretrainedConfig
7
+
8
+
9
+ class QWenConfig(PretrainedConfig):
10
+ model_type = "qwen"
11
+ keys_to_ignore_at_inference = ["past_key_values"]
12
+
13
+ def __init__(
14
+ self,
15
+ vocab_size=151936,
16
+ hidden_size=4096,
17
+ num_hidden_layers=32,
18
+ num_attention_heads=32,
19
+ emb_dropout_prob=0.0,
20
+ attn_dropout_prob=0.0,
21
+ layer_norm_epsilon=1e-6,
22
+ initializer_range=0.02,
23
+ max_position_embeddings=8192,
24
+ scale_attn_weights=True,
25
+ use_cache=True,
26
+ bf16=False,
27
+ fp16=False,
28
+ fp32=False,
29
+ kv_channels=128,
30
+ rotary_pct=1.0,
31
+ rotary_emb_base=10000,
32
+ use_dynamic_ntk=True,
33
+ use_logn_attn=True,
34
+ use_flash_attn="auto",
35
+ intermediate_size=22016,
36
+ no_bias=True,
37
+ tie_word_embeddings=False,
38
+ **kwargs,
39
+ ):
40
+ self.vocab_size = vocab_size
41
+ self.hidden_size = hidden_size
42
+ self.intermediate_size = intermediate_size
43
+ self.num_hidden_layers = num_hidden_layers
44
+ self.num_attention_heads = num_attention_heads
45
+ self.emb_dropout_prob = emb_dropout_prob
46
+ self.attn_dropout_prob = attn_dropout_prob
47
+ self.layer_norm_epsilon = layer_norm_epsilon
48
+ self.initializer_range = initializer_range
49
+ self.scale_attn_weights = scale_attn_weights
50
+ self.use_cache = use_cache
51
+ self.max_position_embeddings = max_position_embeddings
52
+ self.bf16 = bf16
53
+ self.fp16 = fp16
54
+ self.fp32 = fp32
55
+ self.kv_channels = kv_channels
56
+ self.rotary_pct = rotary_pct
57
+ self.rotary_emb_base = rotary_emb_base
58
+ self.use_dynamic_ntk = use_dynamic_ntk
59
+ self.use_logn_attn = use_logn_attn
60
+ self.use_flash_attn = use_flash_attn
61
+ self.no_bias = no_bias
62
+ super().__init__(
63
+ tie_word_embeddings=tie_word_embeddings,
64
+ **kwargs
65
+ )
generation_config.json ADDED
@@ -0,0 +1,11 @@
1
+ {
2
+ "chat_format": "chatml",
3
+ "do_sample": true,
4
+ "eos_token_id": 151643,
5
+ "max_new_tokens": 512,
6
+ "max_window_size": 6144,
7
+ "pad_token_id": 151643,
8
+ "top_k": 0,
9
+ "top_p": 0.3,
10
+ "transformers_version": "4.31.0"
11
+ }
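
These are the generation defaults that `model.chat()` picks up. As an illustrative sketch (not an official recipe), they can be inspected or overridden per model instance:

```python
from transformers.generation import GenerationConfig

# Load the defaults shown above; tweak them on a loaded model if desired.
gen_cfg = GenerationConfig.from_pretrained("Qwen/Qwen-VL-Chat", trust_remote_code=True)
print(gen_cfg.top_p, gen_cfg.max_new_tokens)   # 0.3 512
# e.g. make sampling more conservative for a particular run:
# model.generation_config.top_p = 0.1
```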
modeling_qwen.py ADDED
@@ -0,0 +1,1154 @@
1
+ # Copyright (c) Alibaba Cloud.
2
+ #
3
+ # This source code is licensed under the license found in the
4
+ # LICENSE file in the root directory of this source tree.
5
+
6
+ import importlib
7
+ import math
8
+ from typing import TYPE_CHECKING, Optional, Tuple, Union, Callable, List, Any, Generator
9
+
10
+ import torch
11
+ import torch.nn.functional as F
12
+ import torch.utils.checkpoint
13
+ from torch.cuda.amp import autocast
14
+
15
+ from torch.nn import CrossEntropyLoss
16
+ from transformers import PreTrainedTokenizer, GenerationConfig, StoppingCriteriaList
17
+ from transformers.generation.logits_process import LogitsProcessorList
18
+
19
+ if TYPE_CHECKING:
20
+ from transformers.generation.streamers import BaseStreamer
21
+ from transformers.generation.utils import GenerateOutput
22
+ from transformers.modeling_outputs import (
23
+ BaseModelOutputWithPast,
24
+ CausalLMOutputWithPast,
25
+ )
26
+ from transformers.modeling_utils import PreTrainedModel
27
+ from transformers.utils import logging
28
+
29
+ try:
30
+ from einops import rearrange
31
+ except ImportError:
32
+ rearrange = None
33
+ from torch import nn
34
+
35
+ SUPPORT_CUDA = torch.cuda.is_available()
36
+ SUPPORT_BF16 = SUPPORT_CUDA and torch.cuda.is_bf16_supported()
37
+ SUPPORT_FP16 = SUPPORT_CUDA and torch.cuda.get_device_capability(0)[0] >= 7
38
+
39
+ from .configuration_qwen import QWenConfig
40
+ from .qwen_generation_utils import (
41
+ HistoryType,
42
+ make_context,
43
+ decode_tokens,
44
+ get_stop_words_ids,
45
+ StopWordsLogitsProcessor,
46
+ )
47
+ from .visual import VisionTransformer
48
+
49
+
50
+ logger = logging.get_logger(__name__)
51
+
52
+ _CHECKPOINT_FOR_DOC = "qwen"
53
+ _CONFIG_FOR_DOC = "QWenConfig"
54
+
55
+ QWen_PRETRAINED_MODEL_ARCHIVE_LIST = ["qwen-7b"]
56
+
57
+ _ERROR_BAD_CHAT_FORMAT = """\
58
+ We detect you are probably using the pretrained model (rather than chat model) for chatting, since the chat_format in generation_config is not "chatml".
59
+ If you are directly using the model downloaded from Huggingface, please make sure you are using our "Qwen/Qwen-7B-Chat" Huggingface model (rather than "Qwen/Qwen-7B") when you call model.chat().
60
+ 我们检测到您可能在使用预训练模型(而非chat模型)进行多轮chat,因为您当前在generation_config指定的chat_format,并未设置为我们在对话中所支持的"chatml"格式。
61
+ 如果您在直接使用我们从Huggingface提供的模型,请确保您在调用model.chat()时,使用的是"Qwen/Qwen-7B-Chat"模型(而非"Qwen/Qwen-7B"预训练模型)。
62
+ """
63
+
64
+ _SENTINEL = object()
65
+ _ERROR_STREAM_IN_CHAT = """\
66
+ Pass argument `stream` to model.chat() is buggy, deprecated, and marked for removal. Please use model.chat_stream(...) instead of model.chat(..., stream=True).
67
+ 向model.chat()传入参数stream的用法可能存在Bug,该用法已被废弃,将在未来被移除。请使用model.chat_stream(...)代替model.chat(..., stream=True)。
68
+ """
69
+
70
+ apply_rotary_emb_func = None
71
+ rms_norm = None
72
+
73
+
74
+ # Copied from transformers.models.bart.modeling_bart._make_causal_mask
75
+ def _make_causal_mask(
76
+ input_ids_shape: torch.Size, dtype: torch.dtype, device: torch.device, past_key_values_length: int = 0
77
+ ):
78
+ """
79
+ Make causal mask used for bi-directional self-attention.
80
+ """
81
+ bsz, tgt_len = input_ids_shape
82
+ mask = torch.full((tgt_len, tgt_len), torch.finfo(dtype).min, device=device)
83
+ mask_cond = torch.arange(mask.size(-1), device=device)
84
+ mask.masked_fill_(mask_cond < (mask_cond + 1).view(mask.size(-1), 1), 0)
85
+ mask = mask.to(dtype)
86
+
87
+ if past_key_values_length > 0:
88
+ mask = torch.cat([torch.zeros(tgt_len, past_key_values_length, dtype=dtype, device=device), mask], dim=-1)
89
+ return mask[None, None, :, :].expand(bsz, 1, tgt_len, tgt_len + past_key_values_length)
90
+
91
+
92
+ # Copied from transformers.models.bart.modeling_bart._expand_mask
93
+ def _expand_mask(mask: torch.Tensor, dtype: torch.dtype, tgt_len: Optional[int] = None):
94
+ """
95
+ Expands attention_mask from `[bsz, seq_len]` to `[bsz, 1, tgt_seq_len, src_seq_len]`.
96
+ """
97
+ bsz, src_len = mask.size()
98
+ tgt_len = tgt_len if tgt_len is not None else src_len
99
+
100
+ expanded_mask = mask[:, None, None, :].expand(bsz, 1, tgt_len, src_len).to(dtype)
101
+
102
+ inverted_mask = 1.0 - expanded_mask
103
+
104
+ return inverted_mask.masked_fill(inverted_mask.to(torch.bool), torch.finfo(dtype).min)
105
+
106
+
107
+ class QWenAttention(nn.Module):
108
+ def __init__(self, config):
109
+ super().__init__()
110
+
111
+ self.register_buffer("masked_bias", torch.tensor(-1e4), persistent=False)
112
+ self.seq_length = config.seq_length
113
+
114
+ self.hidden_size = config.hidden_size
115
+ self.split_size = config.hidden_size
116
+ self.num_heads = config.num_attention_heads
117
+ self.head_dim = self.hidden_size // self.num_heads
118
+
119
+ self.scale_attn_weights = True
120
+
121
+ self.projection_size = config.kv_channels * config.num_attention_heads
122
+
123
+ assert self.projection_size % config.num_attention_heads == 0
124
+ self.hidden_size_per_attention_head = (
125
+ self.projection_size // config.num_attention_heads
126
+ )
127
+
128
+ self.c_attn = nn.Linear(config.hidden_size, 3 * self.projection_size)
129
+
130
+ self.c_proj = nn.Linear(
131
+ config.hidden_size, self.projection_size, bias=not config.no_bias
132
+ )
133
+
134
+ self.is_fp32 = not (config.bf16 or config.fp16)
135
+ self.bf16 = config.bf16
136
+
137
+ self.use_dynamic_ntk = config.use_dynamic_ntk
138
+ self.use_logn_attn = config.use_logn_attn
139
+
140
+ logn_list = [
141
+ math.log(i, self.seq_length) if i > self.seq_length else 1
142
+ for i in range(1, 32768)
143
+ ]
144
+ self.logn_tensor = torch.tensor(logn_list)[None, :, None, None]
145
+
146
+ self.attn_dropout = nn.Dropout(config.attn_dropout_prob)
147
+
148
+ def _attn(self, query, key, value, registered_causal_mask, attention_mask=None, head_mask=None):
149
+ attn_weights = torch.matmul(query, key.transpose(-1, -2))
150
+
151
+ if self.scale_attn_weights:
152
+ attn_weights = attn_weights / torch.full(
153
+ [],
154
+ value.size(-1) ** 0.5,
155
+ dtype=attn_weights.dtype,
156
+ device=attn_weights.device,
157
+ )
158
+
159
+ query_length, key_length = query.size(-2), key.size(-2)
160
+ # causal_mask = self.bias[
161
+ # :, :, key_length - query_length : key_length, :key_length
162
+ # ]
163
+ # mask_value = torch.finfo(attn_weights.dtype).min
164
+ # mask_value = torch.full([], mask_value, dtype=attn_weights.dtype).to(
165
+ # attn_weights.device
166
+ # )
167
+ # attn_weights = torch.where(
168
+ # causal_mask, attn_weights.to(attn_weights.dtype), mask_value
169
+ # )
170
+ attn_weights = attn_weights + attention_mask
171
+
172
+ attn_weights = nn.functional.softmax(attn_weights, dim=-1)
173
+
174
+ attn_weights = attn_weights.type(value.dtype)
175
+ attn_weights = self.attn_dropout(attn_weights)
176
+
177
+ if head_mask is not None:
178
+ attn_weights = attn_weights * head_mask
179
+
180
+ attn_output = torch.matmul(attn_weights, value)
181
+ attn_output = attn_output.transpose(1, 2)
182
+
183
+ return attn_output, attn_weights
184
+
185
+ def _upcast_and_reordered_attn(
186
+ self, query, key, value, registered_causal_mask, attention_mask=None, head_mask=None
187
+ ):
188
+ bsz, num_heads, q_seq_len, dk = query.size()
189
+ _, _, k_seq_len, _ = key.size()
190
+
191
+ attn_weights = torch.empty(
192
+ bsz * num_heads,
193
+ q_seq_len,
194
+ k_seq_len,
195
+ dtype=torch.float32,
196
+ device=query.device,
197
+ )
198
+
199
+ scale_factor = 1.0
200
+ if self.scale_attn_weights:
201
+ scale_factor /= float(value.size(-1)) ** 0.5
202
+
203
+ with autocast(enabled=False):
204
+ q, k = query.reshape(-1, q_seq_len, dk), key.transpose(-1, -2).reshape(
205
+ -1, dk, k_seq_len
206
+ )
207
+ attn_weights = torch.baddbmm(
208
+ attn_weights, q.float(), k.float(), beta=0, alpha=scale_factor
209
+ )
210
+ attn_weights = attn_weights.reshape(bsz, num_heads, q_seq_len, k_seq_len)
211
+
212
+ query_length, key_length = query.size(-2), key.size(-2)
213
+ causal_mask = registered_causal_mask[
214
+ :, :, key_length - query_length : key_length, :key_length
215
+ ]
216
+ mask_value = torch.finfo(attn_weights.dtype).min
217
+ mask_value = torch.tensor(mask_value, dtype=attn_weights.dtype).to(
218
+ attn_weights.device
219
+ )
220
+ attn_weights = torch.where(causal_mask, attn_weights, mask_value)
221
+
222
+ if attention_mask is not None:
223
+ attn_weights = attn_weights + attention_mask
224
+
225
+ attn_weights = nn.functional.softmax(attn_weights, dim=-1)
226
+
227
+ if attn_weights.dtype != torch.float32:
228
+ raise RuntimeError(
229
+ "Error with upcasting, attn_weights does not have dtype torch.float32"
230
+ )
231
+ attn_weights = attn_weights.type(value.dtype)
232
+ attn_weights = self.attn_dropout(attn_weights)
233
+
234
+ if head_mask is not None:
235
+ attn_weights = attn_weights * head_mask
236
+
237
+ attn_output = torch.matmul(attn_weights, value)
238
+
239
+ return attn_output, attn_weights
240
+
241
+ def _split_heads(self, tensor, num_heads, attn_head_size):
242
+ new_shape = tensor.size()[:-1] + (num_heads, attn_head_size)
243
+ tensor = tensor.view(new_shape)
244
+ return tensor
245
+
246
+ def _merge_heads(self, tensor, num_heads, attn_head_size):
247
+ tensor = tensor.contiguous()
248
+ new_shape = tensor.size()[:-2] + (num_heads * attn_head_size,)
249
+ return tensor.view(new_shape)
250
+
251
+ def forward(
252
+ self,
253
+ hidden_states: Optional[Tuple[torch.FloatTensor]],
254
+ rotary_pos_emb: Optional[List[torch.Tensor]] = None,
255
+ registered_causal_mask: Optional[torch.Tensor] = None,
256
+ layer_past: Optional[Tuple[torch.Tensor]] = None,
257
+ attention_mask: Optional[torch.FloatTensor] = None,
258
+ head_mask: Optional[torch.FloatTensor] = None,
259
+ encoder_hidden_states: Optional[torch.Tensor] = None,
260
+ encoder_attention_mask: Optional[torch.FloatTensor] = None,
261
+ output_attentions: Optional[bool] = False,
262
+ use_cache: Optional[bool] = False,
263
+ ):
264
+
265
+ mixed_x_layer = self.c_attn(hidden_states)
266
+
267
+ query, key, value = mixed_x_layer.split(self.split_size, dim=2)
268
+
269
+ query = self._split_heads(query, self.num_heads, self.head_dim)
270
+ key = self._split_heads(key, self.num_heads, self.head_dim)
271
+ value = self._split_heads(value, self.num_heads, self.head_dim)
272
+
273
+ if rotary_pos_emb is not None:
274
+ cur_len = query.shape[1]
275
+ rotary_pos_emb = [i[:, -cur_len:, :, :] for i in rotary_pos_emb]
276
+ rotary_pos_emb = (rotary_pos_emb,) * 2
277
+ q_pos_emb, k_pos_emb = rotary_pos_emb
278
+ # Slice the pos emb for current inference
279
+ query = apply_rotary_pos_emb(query, q_pos_emb)
280
+ key = apply_rotary_pos_emb(key, k_pos_emb)
281
+
282
+ if layer_past is not None:
283
+ past_key, past_value = layer_past[0], layer_past[1]
284
+ key = torch.cat((past_key, key), dim=1)
285
+ value = torch.cat((past_value, value), dim=1)
286
+
287
+ if use_cache:
288
+ present = (key, value)
289
+ else:
290
+ present = None
291
+
292
+ if self.use_logn_attn and not self.training:
293
+ if self.logn_tensor.device != query.device or self.logn_tensor.dtype != query.dtype:
294
+ self.logn_tensor = self.logn_tensor.to(query.device).type_as(query)
295
+ seq_start = key.size(1) - query.size(1)
296
+ seq_end = key.size(1)
297
+ logn_tensor = self.logn_tensor[:, seq_start:seq_end, :, :]
298
+ query = query * logn_tensor.expand_as(query)
299
+
300
+ query = query.permute(0, 2, 1, 3)
301
+ key = key.permute(0, 2, 1, 3)
302
+ value = value.permute(0, 2, 1, 3)
303
+ attn_output, attn_weight = self._attn(
304
+ query, key, value, registered_causal_mask, attention_mask, head_mask
305
+ )
306
+ context_layer = self._merge_heads(
307
+ attn_output, self.num_heads, self.head_dim
308
+ )
309
+
310
+ attn_output = self.c_proj(context_layer)
311
+
312
+ outputs = (attn_output, present)
313
+ if output_attentions:
314
+ outputs += (attn_weight,)
315
+
316
+ return outputs
317
+
318
+
319
+ class QWenMLP(nn.Module):
320
+ def __init__(self, config):
321
+ super().__init__()
322
+ self.w1 = nn.Linear(
323
+ config.hidden_size, config.intermediate_size // 2, bias=not config.no_bias
324
+ )
325
+ self.w2 = nn.Linear(
326
+ config.hidden_size, config.intermediate_size // 2, bias=not config.no_bias
327
+ )
328
+ ff_dim_in = config.intermediate_size // 2
329
+ self.c_proj = nn.Linear(ff_dim_in, config.hidden_size, bias=not config.no_bias)
330
+
331
+ def forward(self, hidden_states):
332
+ a1 = self.w1(hidden_states)
333
+ a2 = self.w2(hidden_states)
334
+ intermediate_parallel = a1 * F.silu(a2)
335
+ output = self.c_proj(intermediate_parallel)
336
+ return output
337
+
338
+ class QWenBlock(nn.Module):
339
+ def __init__(self, config):
340
+ super().__init__()
341
+ hidden_size = config.hidden_size
342
+ self.bf16 = config.bf16
343
+
344
+ self.ln_1 = RMSNorm(
345
+ hidden_size,
346
+ eps=config.layer_norm_epsilon,
347
+ )
348
+ self.attn = QWenAttention(config)
349
+ self.ln_2 = RMSNorm(
350
+ hidden_size,
351
+ eps=config.layer_norm_epsilon,
352
+ )
353
+
354
+ self.mlp = QWenMLP(config)
355
+
356
+ def forward(
357
+ self,
358
+ hidden_states: Optional[Tuple[torch.FloatTensor]],
359
+ rotary_pos_emb: Optional[List[torch.Tensor]] = None,
360
+ registered_causal_mask: Optional[torch.Tensor] = None,
361
+ layer_past: Optional[Tuple[torch.Tensor]] = None,
362
+ attention_mask: Optional[torch.FloatTensor] = None,
363
+ head_mask: Optional[torch.FloatTensor] = None,
364
+ encoder_hidden_states: Optional[torch.Tensor] = None,
365
+ encoder_attention_mask: Optional[torch.FloatTensor] = None,
366
+ use_cache: Optional[bool] = False,
367
+ output_attentions: Optional[bool] = False,
368
+ ):
369
+ layernorm_output = self.ln_1(hidden_states)
370
+
371
+ attn_outputs = self.attn(
372
+ layernorm_output,
373
+ rotary_pos_emb,
374
+ registered_causal_mask=registered_causal_mask,
375
+ layer_past=layer_past,
376
+ attention_mask=attention_mask,
377
+ head_mask=head_mask,
378
+ use_cache=use_cache,
379
+ output_attentions=output_attentions,
380
+ )
381
+ attn_output = attn_outputs[0]
382
+
383
+ outputs = attn_outputs[1:]
384
+
385
+ residual = hidden_states
386
+ layernorm_input = attn_output + residual
387
+
388
+ layernorm_output = self.ln_2(layernorm_input)
389
+
390
+ residual = layernorm_input
391
+ mlp_output = self.mlp(layernorm_output)
392
+ hidden_states = residual + mlp_output
393
+
394
+ if use_cache:
395
+ outputs = (hidden_states,) + outputs
396
+ else:
397
+ outputs = (hidden_states,) + outputs[1:]
398
+
399
+ return outputs
400
+
401
+
402
+ class QWenPreTrainedModel(PreTrainedModel):
403
+ config_class = QWenConfig
404
+ base_model_prefix = "transformer"
405
+ is_parallelizable = False
406
+ supports_gradient_checkpointing = True
407
+ _no_split_modules = ["QWenBlock"]
408
+
409
+ def __init__(self, *inputs, **kwargs):
410
+ super().__init__(*inputs, **kwargs)
411
+
412
+ def _init_weights(self, module):
413
+ """Initialize the weights."""
414
+ if isinstance(module, nn.Linear):
415
+ module.weight.data.normal_(mean=0.0, std=self.config.initializer_range)
416
+ if module.bias is not None:
417
+ module.bias.data.zero_()
418
+ elif isinstance(module, nn.Embedding):
419
+ module.weight.data.normal_(mean=0.0, std=self.config.initializer_range)
420
+ if module.padding_idx is not None:
421
+ module.weight.data[module.padding_idx].zero_()
422
+ elif isinstance(module, RMSNorm):
423
+ module.weight.data.fill_(1.0)
424
+
425
+ for name, p in module.named_parameters():
426
+ if name == "c_proj.weight":
427
+ p.data.normal_(
428
+ mean=0.0,
429
+ std=(
430
+ self.config.initializer_range
431
+ / math.sqrt(2 * self.config.num_hidden_layers)
432
+ ),
433
+ )
434
+
435
+ def _set_gradient_checkpointing(self, module, value=False):
436
+ if isinstance(module, QWenModel):
437
+ module.gradient_checkpointing = value
438
+
439
+
440
+ class QWenModel(QWenPreTrainedModel):
441
+ _keys_to_ignore_on_load_missing = ["attn.masked_bias"]
442
+
443
+ def __init__(self, config):
444
+ super().__init__(config)
445
+ self.vocab_size = config.vocab_size
446
+ self.num_hidden_layers = config.num_hidden_layers
447
+ self.embed_dim = config.hidden_size
448
+
449
+ self.gradient_checkpointing = False
450
+ self.use_dynamic_ntk = config.use_dynamic_ntk
451
+ self.seq_length = config.seq_length
452
+
453
+ self.wte = nn.Embedding(self.vocab_size, self.embed_dim)
454
+
455
+ self.drop = nn.Dropout(config.emb_dropout_prob)
456
+
457
+ if config.rotary_pct == 1.0:
458
+ self.rotary_ndims = None
459
+ else:
460
+ assert config.rotary_pct < 1
461
+ self.rotary_ndims = int(
462
+ config.kv_channels * config.rotary_pct
463
+ )
464
+ dim = (
465
+ self.rotary_ndims
466
+ if self.rotary_ndims is not None
467
+ else config.kv_channels
468
+ )
469
+ self.rotary_emb = RotaryEmbedding(dim, base=config.rotary_emb_base)
470
+
471
+ self.use_flash_attn = config.use_flash_attn
472
+ self.is_fp32 = not (config.bf16 or config.fp16)
473
+ self.registered_causal_mask = None
474
+ # if (
475
+ # self.use_flash_attn
476
+ # and flash_attn_unpadded_func is not None
477
+ # and not self.is_fp32
478
+ # ):
479
+ # self.registered_causal_mask = None
480
+ # else:
481
+ # max_positions = config.max_position_embeddings
482
+ # self.register_buffer(
483
+ # "registered_causal_mask",
484
+ # torch.tril(
485
+ # torch.ones((max_positions, max_positions), dtype=torch.bool)
486
+ # ).view(1, 1, max_positions, max_positions),
487
+ # persistent=False,
488
+ # )
489
+
490
+ self.h = nn.ModuleList(
491
+ [
492
+ QWenBlock(
493
+ config
494
+ )
495
+ for i in range(config.num_hidden_layers)
496
+ ]
497
+ )
498
+ self.ln_f = RMSNorm(
499
+ self.embed_dim,
500
+ eps=config.layer_norm_epsilon,
501
+ )
502
+
503
+ self.visual = VisionTransformer(**config.visual)
504
+
505
+ self.post_init()
506
+
507
+ def get_input_embeddings(self):
508
+ return self.wte
509
+
510
+ def set_input_embeddings(self, new_embeddings):
511
+ self.wte = new_embeddings
512
+
513
+ # Copied from transformers.models.bart.modeling_bart.BartDecoder._prepare_decoder_attention_mask
514
+ def _prepare_decoder_attention_mask(self, attention_mask, input_shape, inputs_embeds, past_key_values_length):
515
+ # create causal mask
516
+ # [bsz, seq_len] -> [bsz, 1, tgt_seq_len, src_seq_len]
517
+ combined_attention_mask = None
518
+ if input_shape[-1] > 1:
519
+ combined_attention_mask = _make_causal_mask(
520
+ input_shape,
521
+ inputs_embeds.dtype,
522
+ device=inputs_embeds.device,
523
+ past_key_values_length=past_key_values_length,
524
+ )
525
+
526
+ if attention_mask is not None:
527
+ # [bsz, seq_len] -> [bsz, 1, tgt_seq_len, src_seq_len]
528
+ expanded_attn_mask = _expand_mask(attention_mask, inputs_embeds.dtype, tgt_len=input_shape[-1]).to(
529
+ inputs_embeds.device
530
+ )
531
+ combined_attention_mask = (
532
+ expanded_attn_mask if combined_attention_mask is None else expanded_attn_mask + combined_attention_mask
533
+ )
534
+
535
+ return combined_attention_mask
536
+
537
+
538
+ def forward(
539
+ self,
540
+ input_ids: Optional[torch.LongTensor] = None,
541
+ past_key_values: Optional[Tuple[Tuple[torch.Tensor]]] = None,
542
+ attention_mask: Optional[torch.FloatTensor] = None,
543
+ token_type_ids: Optional[torch.LongTensor] = None,
544
+ position_ids: Optional[torch.LongTensor] = None,
545
+ head_mask: Optional[torch.FloatTensor] = None,
546
+ inputs_embeds: Optional[torch.FloatTensor] = None,
547
+ encoder_hidden_states: Optional[torch.Tensor] = None,
548
+ encoder_attention_mask: Optional[torch.FloatTensor] = None,
549
+ use_cache: Optional[bool] = None,
550
+ output_attentions: Optional[bool] = None,
551
+ output_hidden_states: Optional[bool] = None,
552
+ return_dict: Optional[bool] = None,
553
+ ):
554
+ if past_key_values is None and torch.any(input_ids == self.config.visual['image_start_id']):
555
+ bos_pos = torch.where(input_ids == self.config.visual['image_start_id'])
556
+ eos_pos = torch.where(input_ids == self.config.visual['image_start_id'] + 1)
557
+ assert (bos_pos[0] == eos_pos[0]).all()
558
+ img_pos = torch.stack((bos_pos[0], bos_pos[1], eos_pos[1]), dim=1)
559
+ images = []
560
+ for i, a, b in img_pos:
561
+ image = input_ids[i][a + 1 : b - 1].tolist()
562
+ image = image[ : image.index(self.config.visual['image_start_id'] + 2)]
563
+ images.append(bytes(image).decode('utf-8'))
564
+
565
+ images = self.visual.encode(images)
566
+ assert images.shape[0] == len(images)
567
+ else:
568
+ images = None
569
+
570
+ output_attentions = (
571
+ output_attentions
572
+ if output_attentions is not None
573
+ else self.config.output_attentions
574
+ )
575
+ output_hidden_states = (
576
+ output_hidden_states
577
+ if output_hidden_states is not None
578
+ else self.config.output_hidden_states
579
+ )
580
+ use_cache = use_cache if use_cache is not None else self.config.use_cache
581
+ return_dict = (
582
+ return_dict if return_dict is not None else self.config.use_return_dict
583
+ )
584
+
585
+ if input_ids is not None and inputs_embeds is not None:
586
+ raise ValueError(
587
+ "You cannot specify both input_ids and inputs_embeds at the same time"
588
+ )
589
+ elif input_ids is not None:
590
+ input_shape = input_ids.size()
591
+ input_ids = input_ids.view(-1, input_shape[-1])
592
+ batch_size = input_ids.shape[0]
593
+ elif inputs_embeds is not None:
594
+ input_shape = inputs_embeds.size()[:-1]
595
+ batch_size = inputs_embeds.shape[0]
596
+ else:
597
+ raise ValueError("You have to specify either input_ids or inputs_embeds")
598
+
599
+ device = input_ids.device if input_ids is not None else inputs_embeds.device
600
+
601
+ if token_type_ids is not None:
602
+ token_type_ids = token_type_ids.view(-1, input_shape[-1])
603
+ if position_ids is not None:
604
+ position_ids = position_ids.view(-1, input_shape[-1])
605
+
606
+ if past_key_values is None:
607
+ past_length = 0
608
+ past_key_values = tuple([None] * len(self.h))
609
+ else:
610
+ past_length = past_key_values[0][0].size(-2)
611
+
612
+ if position_ids is None:
613
+ position_ids = torch.arange(
614
+ past_length,
615
+ input_shape[-1] + past_length,
616
+ dtype=torch.long,
617
+ device=device,
618
+ )
619
+ position_ids = position_ids.unsqueeze(0).view(-1, input_shape[-1])
620
+
621
+ encoder_attention_mask = None
622
+ head_mask = self.get_head_mask(head_mask, self.config.num_hidden_layers)
623
+
624
+ if inputs_embeds is None:
625
+ inputs_embeds = self.wte(input_ids)
626
+
627
+ if batch_size <= 0:
628
+ raise ValueError("batch_size has to be defined and > 0")
629
+ attention_mask = self._prepare_decoder_attention_mask(
630
+ attention_mask, input_shape, inputs_embeds, past_length
631
+ )
632
+
633
+ hidden_states = inputs_embeds
634
+
635
+ kv_seq_len = hidden_states.size()[1]
636
+ if past_key_values[0] is not None:
637
+ # past key values[0][0] shape: bs * seq_len * head_num * dim
638
+ kv_seq_len += past_key_values[0][0].shape[1]
639
+ if (
640
+ self.use_dynamic_ntk
641
+ and kv_seq_len == hidden_states.size()[1]
642
+ and not self.training
643
+ ):
644
+ context_value = math.log(kv_seq_len / self.seq_length, 2) + 1
645
+ ntk_alpha = 2 ** math.ceil(context_value) - 1
646
+ ntk_alpha = max(ntk_alpha, 1)
647
+ else:
648
+ ntk_alpha = self.rotary_emb._ntk_alpha_cached
649
+
650
+ rotary_pos_emb = self.rotary_emb(kv_seq_len, ntk_alpha=ntk_alpha)
651
+ for idx in range(len(rotary_pos_emb)):
652
+ rotary_pos_emb[idx] = rotary_pos_emb[idx].to(hidden_states.device)
653
+
654
+ hidden_states = self.drop(hidden_states)
655
+ if images is not None:
656
+ for idx, (i, a, b) in enumerate(img_pos):
657
+ hidden_states[i][a + 1 : b] = images[idx]
658
+ output_shape = input_shape + (hidden_states.size(-1),)
659
+
660
+ if self.gradient_checkpointing and self.training:
661
+ if use_cache:
662
+ logger.warning_once(
663
+ "`use_cache=True` is incompatible with gradient checkpointing. Setting `use_cache=False`..."
664
+ )
665
+ use_cache = False
666
+
667
+ presents = () if use_cache else None
668
+ all_self_attentions = () if output_attentions else None
669
+ all_hidden_states = () if output_hidden_states else None
670
+ for i, (block, layer_past) in enumerate(zip(self.h, past_key_values)):
671
+
672
+ if output_hidden_states:
673
+ all_hidden_states = all_hidden_states + (hidden_states,)
674
+
675
+ if self.gradient_checkpointing and self.training:
676
+
677
+ def create_custom_forward(module):
678
+ def custom_forward(*inputs):
679
+ # None for past_key_value
680
+ return module(*inputs, use_cache, output_attentions)
681
+
682
+ return custom_forward
683
+
684
+ outputs = torch.utils.checkpoint.checkpoint(
685
+ create_custom_forward(block),
686
+ hidden_states,
687
+ rotary_pos_emb,
688
+ self.registered_causal_mask,
689
+ None,
690
+ attention_mask,
691
+ head_mask[i],
692
+ encoder_hidden_states,
693
+ encoder_attention_mask,
694
+ )
695
+ else:
696
+ outputs = block(
697
+ hidden_states,
698
+ layer_past=layer_past,
699
+ rotary_pos_emb=rotary_pos_emb,
700
+ registered_causal_mask=self.registered_causal_mask,
701
+ attention_mask=attention_mask,
702
+ head_mask=head_mask[i],
703
+ encoder_hidden_states=encoder_hidden_states,
704
+ encoder_attention_mask=encoder_attention_mask,
705
+ use_cache=use_cache,
706
+ output_attentions=output_attentions,
707
+ )
708
+
709
+ hidden_states = outputs[0]
710
+ if use_cache is True:
711
+ presents = presents + (outputs[1],)
712
+
713
+ if output_attentions:
714
+ all_self_attentions = all_self_attentions + (outputs[2 if use_cache else 1],)
715
+
716
+ hidden_states = self.ln_f(hidden_states)
717
+ hidden_states = hidden_states.view(output_shape)
718
+ # Add last hidden state
719
+ if output_hidden_states:
720
+ all_hidden_states = all_hidden_states + (hidden_states,)
721
+
722
+ if not return_dict:
723
+ return tuple(
724
+ v for v in [hidden_states, presents, all_hidden_states] if v is not None
725
+ )
726
+
727
+ return BaseModelOutputWithPast(
728
+ last_hidden_state=hidden_states,
729
+ past_key_values=presents,
730
+ hidden_states=all_hidden_states,
731
+ attentions=all_self_attentions,
732
+ )
733
+
734
+
735
+ class QWenLMHeadModel(QWenPreTrainedModel):
736
+ _keys_to_ignore_on_load_missing = [r"h\.\d+\.attn\.rotary_emb\.inv_freq"]
737
+ _keys_to_ignore_on_load_unexpected = [r"h\.\d+\.attn\.masked_bias"]
738
+
739
+ def __init__(self, config):
740
+ super().__init__(config)
741
+ assert (
742
+ config.bf16 + config.fp16 + config.fp32 <= 1
743
+ ), "Only one of \"bf16\", \"fp16\", \"fp32\" can be true"
744
+
745
+ autoset_precision = config.bf16 + config.fp16 + config.fp32 == 0
746
+
747
+ if autoset_precision:
748
+ if SUPPORT_BF16:
749
+ logger.warn(
750
+ "The model is automatically converting to bf16 for faster inference. "
751
+ "If you want to disable the automatic precision, please manually add bf16/fp16/fp32=True to \"AutoModelForCausalLM.from_pretrained\"."
752
+ )
753
+ config.bf16 = True
754
+ elif SUPPORT_FP16:
755
+ logger.warn(
756
+ "The model is automatically converting to fp16 for faster inference. "
757
+ "If you want to disable the automatic precision, please manually add bf16/fp16/fp32=True to \"AutoModelForCausalLM.from_pretrained\"."
758
+ )
759
+ config.fp16 = True
760
+ else:
761
+ config.fp32 = True
762
+
763
+ if config.bf16 and SUPPORT_CUDA and not SUPPORT_BF16:
764
+ logger.warn("Your device does NOT seem to support bf16, you can switch to fp16 or fp32 by by passing fp16/fp32=True in \"AutoModelForCausalLM.from_pretrained\".")
765
+ if config.fp16 and SUPPORT_CUDA and not SUPPORT_FP16:
766
+ logger.warn("Your device does NOT support faster inference with fp16, please switch to fp32 which is likely to be faster")
767
+ if config.fp32:
768
+ if SUPPORT_BF16:
769
+ logger.warn("Your device support faster inference by passing bf16=True in \"AutoModelForCausalLM.from_pretrained\".")
770
+ elif SUPPORT_FP16:
771
+ logger.warn("Your device support faster inference by passing fp16=True in \"AutoModelForCausalLM.from_pretrained\".")
772
+
773
+ self.transformer = QWenModel(config)
774
+ self.lm_head = nn.Linear(config.hidden_size, config.vocab_size, bias=False)
775
+
776
+ if config.bf16:
777
+ self.transformer.bfloat16()
778
+ self.lm_head.bfloat16()
779
+ if config.fp16:
780
+ self.transformer.half()
781
+ self.lm_head.half()
782
+ self.post_init()
783
+
784
+ def get_output_embeddings(self):
785
+ return self.lm_head
786
+
787
+ def set_output_embeddings(self, new_embeddings):
788
+ self.lm_head = new_embeddings
789
+
790
+ def prepare_inputs_for_generation(
791
+ self, input_ids, past_key_values=None, inputs_embeds=None, **kwargs
792
+ ):
793
+ token_type_ids = kwargs.get("token_type_ids", None)
794
+ if past_key_values:
795
+ input_ids = input_ids[:, -1].unsqueeze(-1)
796
+ if token_type_ids is not None:
797
+ token_type_ids = token_type_ids[:, -1].unsqueeze(-1)
798
+
799
+ attention_mask = kwargs.get("attention_mask", None)
800
+ position_ids = kwargs.get("position_ids", None)
801
+
802
+ if attention_mask is not None and position_ids is None:
803
+ position_ids = attention_mask.long().cumsum(-1) - 1
804
+ position_ids.masked_fill_(attention_mask == 0, 1)
805
+ if past_key_values:
806
+ position_ids = position_ids[:, -1].unsqueeze(-1)
807
+ else:
808
+ position_ids = None
809
+
810
+ if inputs_embeds is not None and past_key_values is None:
811
+ model_inputs = {"inputs_embeds": inputs_embeds}
812
+ else:
813
+ model_inputs = {"input_ids": input_ids}
814
+
815
+ model_inputs.update(
816
+ {
817
+ "past_key_values": past_key_values,
818
+ "use_cache": kwargs.get("use_cache"),
819
+ "position_ids": position_ids,
820
+ "attention_mask": attention_mask,
821
+ "token_type_ids": token_type_ids,
822
+ }
823
+ )
824
+ return model_inputs
825
+
826
+ def forward(
827
+ self,
828
+ input_ids: Optional[torch.LongTensor] = None,
829
+ past_key_values: Optional[Tuple[Tuple[torch.Tensor]]] = None,
830
+ attention_mask: Optional[torch.FloatTensor] = None,
831
+ token_type_ids: Optional[torch.LongTensor] = None,
832
+ position_ids: Optional[torch.LongTensor] = None,
833
+ head_mask: Optional[torch.FloatTensor] = None,
834
+ inputs_embeds: Optional[torch.FloatTensor] = None,
835
+ encoder_hidden_states: Optional[torch.Tensor] = None,
836
+ encoder_attention_mask: Optional[torch.FloatTensor] = None,
837
+ labels: Optional[torch.LongTensor] = None,
838
+ use_cache: Optional[bool] = None,
839
+ output_attentions: Optional[bool] = None,
840
+ output_hidden_states: Optional[bool] = None,
841
+ return_dict: Optional[bool] = None,
842
+ ) -> Union[Tuple, CausalLMOutputWithPast]:
843
+
844
+ return_dict = (
845
+ return_dict if return_dict is not None else self.config.use_return_dict
846
+ )
847
+
848
+ transformer_outputs = self.transformer(
849
+ input_ids,
850
+ past_key_values=past_key_values,
851
+ attention_mask=attention_mask,
852
+ token_type_ids=token_type_ids,
853
+ position_ids=position_ids,
854
+ head_mask=head_mask,
855
+ inputs_embeds=inputs_embeds,
856
+ encoder_hidden_states=encoder_hidden_states,
857
+ encoder_attention_mask=encoder_attention_mask,
858
+ use_cache=use_cache,
859
+ output_attentions=output_attentions,
860
+ output_hidden_states=output_hidden_states,
861
+ return_dict=return_dict,
862
+ )
863
+ hidden_states = transformer_outputs[0]
864
+
865
+ lm_logits = self.lm_head(hidden_states)
866
+
867
+ loss = None
868
+ if labels is not None:
869
+ labels = labels.to(lm_logits.device)
870
+ shift_logits = lm_logits[..., :-1, :].contiguous()
871
+ shift_labels = labels[..., 1:].contiguous()
872
+ loss_fct = CrossEntropyLoss()
873
+ loss = loss_fct(
874
+ shift_logits.view(-1, shift_logits.size(-1)), shift_labels.view(-1)
875
+ )
876
+
877
+ if not return_dict:
878
+ output = (lm_logits,) + transformer_outputs[1:]
879
+ return ((loss,) + output) if loss is not None else output
880
+
881
+ return CausalLMOutputWithPast(
882
+ loss=loss,
883
+ logits=lm_logits,
884
+ past_key_values=transformer_outputs.past_key_values,
885
+ hidden_states=transformer_outputs.hidden_states,
886
+ attentions=transformer_outputs.attentions,
887
+ )
888
+
889
+ @staticmethod
890
+ def _reorder_cache(
891
+ past_key_values: Tuple[Tuple[torch.Tensor]], beam_idx: torch.Tensor
892
+ ) -> Tuple[Tuple[torch.Tensor]]:
893
+
894
+ return tuple(
895
+ tuple(
896
+ past_state.index_select(0, beam_idx.to(past_state.device))
897
+ for past_state in layer_past
898
+ )
899
+ for layer_past in past_key_values
900
+ )
901
+
902
+ def chat(
903
+ self,
904
+ tokenizer: PreTrainedTokenizer,
905
+ query: str,
906
+ history: Optional[HistoryType],
907
+ system: str = "You are a helpful assistant.",
908
+ append_history: bool = True,
909
+ stream: Optional[bool] = _SENTINEL,
910
+ stop_words_ids: Optional[List[List[int]]] = None,
911
+ generation_config: Optional[GenerationConfig] = None,
912
+ **kwargs,
913
+ ) -> Tuple[str, HistoryType]:
914
+ generation_config = generation_config if generation_config is not None else self.generation_config
915
+
916
+ assert stream is _SENTINEL, _ERROR_STREAM_IN_CHAT
917
+ assert generation_config.chat_format == 'chatml', _ERROR_BAD_CHAT_FORMAT
918
+ if history is None:
919
+ history = []
920
+ if stop_words_ids is None:
921
+ stop_words_ids = []
922
+
923
+ max_window_size = kwargs.get('max_window_size', None)
924
+ if max_window_size is None:
925
+ max_window_size = generation_config.max_window_size
926
+ raw_text, context_tokens = make_context(
927
+ tokenizer,
928
+ query,
929
+ history=history,
930
+ system=system,
931
+ max_window_size=max_window_size,
932
+ chat_format=generation_config.chat_format,
933
+ )
934
+
935
+ stop_words_ids.extend(get_stop_words_ids(
936
+ generation_config.chat_format, tokenizer
937
+ ))
938
+ input_ids = torch.tensor([context_tokens]).to(self.device)
939
+ outputs = self.generate(
940
+ input_ids,
941
+ stop_words_ids=stop_words_ids,
942
+ return_dict_in_generate=False,
943
+ generation_config=generation_config,
944
+ **kwargs,
945
+ )
946
+
947
+ response = decode_tokens(
948
+ outputs[0],
949
+ tokenizer,
950
+ raw_text_len=len(raw_text),
951
+ context_length=len(context_tokens),
952
+ chat_format=generation_config.chat_format,
953
+ verbose=False,
954
+ errors='replace'
955
+ )
956
+
957
+ if append_history:
958
+ history.append((query, response))
959
+
960
+ return response, history
961
+
962
+ def chat_stream(
963
+ self,
964
+ tokenizer: PreTrainedTokenizer,
965
+ query: str,
966
+ history: Optional[HistoryType],
967
+ system: str = "You are a helpful assistant.",
968
+ stop_words_ids: Optional[List[List[int]]] = None,
969
+ logits_processor: Optional[LogitsProcessorList] = None,
970
+ generation_config: Optional[GenerationConfig] = None,
971
+ **kwargs,
972
+ ) -> Generator[str, Any, None]:
973
+ generation_config = generation_config if generation_config is not None else self.generation_config
974
+ assert generation_config.chat_format == 'chatml', _ERROR_BAD_CHAT_FORMAT
975
+ if history is None:
976
+ history = []
977
+ if stop_words_ids is None:
978
+ stop_words_ids = []
979
+
980
+ max_window_size = kwargs.get('max_window_size', None)
981
+ if max_window_size is None:
982
+ max_window_size = generation_config.max_window_size
983
+ raw_text, context_tokens = make_context(
984
+ tokenizer,
985
+ query,
986
+ history=history,
987
+ system=system,
988
+ max_window_size=max_window_size,
989
+ chat_format=generation_config.chat_format,
990
+ )
991
+
992
+ stop_words_ids.extend(get_stop_words_ids(
993
+ generation_config.chat_format, tokenizer
994
+ ))
995
+ if stop_words_ids is not None:
996
+ stop_words_logits_processor = StopWordsLogitsProcessor(
997
+ stop_words_ids=stop_words_ids,
998
+ eos_token_id=generation_config.eos_token_id,
999
+ )
1000
+ if logits_processor is None:
1001
+ logits_processor = LogitsProcessorList([stop_words_logits_processor])
1002
+ else:
1003
+ logits_processor.append(stop_words_logits_processor)
1004
+ input_ids = torch.tensor([context_tokens]).to(self.device)
1005
+
1006
+ from transformers_stream_generator.main import NewGenerationMixin, StreamGenerationConfig
1007
+ self.__class__.generate_stream = NewGenerationMixin.generate
1008
+ self.__class__.sample_stream = NewGenerationMixin.sample_stream
1009
+ stream_config = StreamGenerationConfig(**generation_config.to_dict(), do_stream=True)
1010
+
1011
+ def stream_generator():
1012
+ outputs = []
1013
+ for token in self.generate_stream(
1014
+ input_ids,
1015
+ return_dict_in_generate=False,
1016
+ generation_config=stream_config,
1017
+ logits_processor=logits_processor,
1018
+ seed=-1,
1019
+ **kwargs):
1020
+ outputs.append(token.item())
1021
+ yield tokenizer.decode(outputs, skip_special_tokens=True, errors='ignore')
1022
+
1023
+ return stream_generator()
1024
+
1025
+ def generate(
1026
+ self,
1027
+ inputs: Optional[torch.Tensor] = None,
1028
+ generation_config: Optional[GenerationConfig] = None,
1029
+ logits_processor: Optional[LogitsProcessorList] = None,
1030
+ stopping_criteria: Optional[StoppingCriteriaList] = None,
1031
+ prefix_allowed_tokens_fn: Optional[
1032
+ Callable[[int, torch.Tensor], List[int]]
1033
+ ] = None,
1034
+ synced_gpus: Optional[bool] = None,
1035
+ assistant_model: Optional["PreTrainedModel"] = None,
1036
+ streamer: Optional["BaseStreamer"] = None,
1037
+ **kwargs,
1038
+ ) -> Union[GenerateOutput, torch.LongTensor]:
1039
+ generation_config = generation_config if generation_config is not None else self.generation_config
1040
+
1041
+ # Process stop_words_ids.
1042
+ stop_words_ids = kwargs.pop("stop_words_ids", None)
1043
+ if stop_words_ids is None and generation_config is not None:
1044
+ stop_words_ids = getattr(generation_config, "stop_words_ids", None)
1045
+ if stop_words_ids is None:
1046
+ stop_words_ids = getattr(generation_config, "stop_words_ids", None)
1047
+
1048
+ if stop_words_ids is not None:
1049
+ stop_words_logits_processor = StopWordsLogitsProcessor(
1050
+ stop_words_ids=stop_words_ids,
1051
+ eos_token_id=generation_config.eos_token_id,
1052
+ )
1053
+ if logits_processor is None:
1054
+ logits_processor = LogitsProcessorList([stop_words_logits_processor])
1055
+ else:
1056
+ logits_processor.append(stop_words_logits_processor)
1057
+
1058
+ return super().generate(
1059
+ inputs,
1060
+ generation_config=generation_config,
1061
+ logits_processor=logits_processor,
1062
+ stopping_criteria=stopping_criteria,
1063
+ prefix_allowed_tokens_fn=prefix_allowed_tokens_fn,
1064
+ synced_gpus=synced_gpus,
1065
+ assistant_model=assistant_model,
1066
+ streamer=streamer,
1067
+ **kwargs,
1068
+ )
1069
+
1070
+
1071
+ class RotaryEmbedding(torch.nn.Module):
1072
+ def __init__(self, dim, base=10000):
1073
+ super().__init__()
1074
+ self.dim = dim
1075
+ self.base = base
1076
+ self.inv_freq = 1.0 / (base ** (torch.arange(0, dim, 2).float() / dim))
1077
+ if importlib.util.find_spec("einops") is None:
1078
+ raise RuntimeError("einops is required for Rotary Embedding")
1079
+
1080
+ self._rotary_pos_emb_cache = None
1081
+ self._seq_len_cached = 0
1082
+ self._ntk_alpha_cached = 1.0
1083
+
1084
+ def update_rotary_pos_emb_cache(self, max_seq_len, offset=0, ntk_alpha=1.0):
1085
+ seqlen = max_seq_len + offset
1086
+ if seqlen > self._seq_len_cached or ntk_alpha != self._ntk_alpha_cached:
1087
+ base = self.base * ntk_alpha ** (self.dim / (self.dim - 2))
1088
+ self.inv_freq = 1.0 / (
1089
+ base
1090
+ ** (
1091
+ torch.arange(0, self.dim, 2, device=self.inv_freq.device).float()
1092
+ / self.dim
1093
+ )
1094
+ )
1095
+ self._seq_len_cached = max(2 * seqlen, 16)
1096
+ self._ntk_alpha_cached = ntk_alpha
1097
+ seq = torch.arange(self._seq_len_cached, device=self.inv_freq.device)
1098
+ freqs = torch.outer(seq.type_as(self.inv_freq), self.inv_freq)
1099
+
1100
+ emb = torch.cat((freqs, freqs), dim=-1)
1101
+ from einops import rearrange
1102
+
1103
+ emb = rearrange(emb, "n d -> 1 n 1 d")
1104
+
1105
+ cos, sin = emb.cos(), emb.sin()
1106
+ self._rotary_pos_emb_cache = [cos, sin]
1107
+
1108
+ def forward(self, max_seq_len, offset=0, ntk_alpha=1.0):
1109
+ self.update_rotary_pos_emb_cache(max_seq_len, offset, ntk_alpha)
1110
+ cos, sin = self._rotary_pos_emb_cache
1111
+ return [cos[:, offset : offset + max_seq_len], sin[:, offset : offset + max_seq_len]]
1112
+
1113
+
1114
+ def _rotate_half(x):
1115
+ from einops import rearrange
1116
+
1117
+ x = rearrange(x, "... (j d) -> ... j d", j=2)
1118
+ x1, x2 = x.unbind(dim=-2)
1119
+ return torch.cat((-x2, x1), dim=-1)
1120
+
1121
+
1122
+ def apply_rotary_pos_emb(t, freqs):
1123
+ cos, sin = freqs
1124
+ if apply_rotary_emb_func is not None and t.is_cuda:
1125
+ t_ = t.float()
1126
+ cos = cos.squeeze(0).squeeze(1)[:, : cos.shape[-1] // 2]
1127
+ sin = sin.squeeze(0).squeeze(1)[:, : sin.shape[-1] // 2]
1128
+ output = apply_rotary_emb_func(t_, cos, sin).type_as(t)
1129
+ return output
1130
+ else:
1131
+ rot_dim = freqs[0].shape[-1]
1132
+ cos, sin = freqs
1133
+ t_, t_pass_ = t[..., :rot_dim], t[..., rot_dim:]
1134
+ t_ = t_.float()
1135
+ t_pass_ = t_pass_.float()
1136
+ t_ = (t_ * cos) + (_rotate_half(t_) * sin)
1137
+ return torch.cat((t_, t_pass_), dim=-1).type_as(t)
1138
+
1139
+
1140
+ class RMSNorm(torch.nn.Module):
1141
+ def __init__(self, dim: int, eps: float = 1e-6):
1142
+ super().__init__()
1143
+ self.eps = eps
1144
+ self.weight = nn.Parameter(torch.ones(dim))
1145
+
1146
+ def _norm(self, x):
1147
+ return x * torch.rsqrt(x.pow(2).mean(-1, keepdim=True) + self.eps)
1148
+
1149
+ def forward(self, x):
1150
+ if rms_norm is not None and x.is_cuda:
1151
+ return rms_norm(x, self.weight, self.eps)
1152
+ else:
1153
+ output = self._norm(x.float()).type_as(x)
1154
+ return output * self.weight
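
For reference, the `chat()` method defined above wraps `make_context`, `generate`, and `decode_tokens` into a single turn-based call that returns the response together with the updated history. A minimal usage sketch follows; the repo id, the `trust_remote_code` loading path, and the `<img>...</img>` prompt convention handled by the bundled tokenizer are assumptions for illustration, not something this diff defines.

```python
# Minimal sketch of driving the chat() API from modeling_qwen.py above.
# The repo id and the <img>...</img> prompt convention are assumptions.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "Qwen/Qwen-VL-Chat"  # placeholder: point this at the checkpoint you downloaded
tokenizer = AutoTokenizer.from_pretrained(repo_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    repo_id, device_map="auto", trust_remote_code=True
).eval()

# First turn: history=None starts a fresh conversation. chat() returns the
# decoded response plus the updated history for the next turn.
query = "Describe this picture: <img>assets/apple.jpeg</img>"
response, history = model.chat(tokenizer, query, history=None)
print(response)

# Follow-up turn reuses the returned history.
response, history = model.chat(tokenizer, "What color is it?", history=history)
print(response)
```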
pytorch_model-00001-of-00010.bin ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:8096d6aefba5b263f06ecd9f19b76e1780fefe2aa7010eab4f1235b18253e8f7
3
+ size 1964070447
pytorch_model-00002-of-00010.bin ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:244b3c40adad8ec6f8e94d59a443607e2d6d11fee4838ca787ea1873ef3c3c20
3
+ size 1933792141
pytorch_model-00003-of-00010.bin ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:7fce23c694efdf7b28c152f9ac77ba40a5e114d066708fab8079db0ee967db93
3
+ size 1933792141
pytorch_model-00004-of-00010.bin ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:0fcc2950c12804b413838b291bbd302b375a2caf80aa5ec5dd5588e9083675bd
3
+ size 1990406779
pytorch_model-00005-of-00010.bin ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:8a83706861cd18b0b7a58a67022b92a13d94d6c740fa003a0c4a44bc18938988
3
+ size 1923281531
pytorch_model-00006-of-00010.bin ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:4af6fbf1423f86de3f88942ec2460641a22570d1c41347625043d9933e51c3ca
3
+ size 1933783675
pytorch_model-00007-of-00010.bin ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:5171efa571c59248f314a4ebfb56d5f4dede27f89a126d17ce5400eb6220d82f
3
+ size 1933792205
pytorch_model-00008-of-00010.bin ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:5f79876d7cc162272687daac6dd09778b4bce04895bc44f67c53cc4a9b803fc5
3
+ size 1975364669
pytorch_model-00009-of-00010.bin ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:02b652b95c6dffc2754d2cc3d8166b35c542726a571b979fc2b59417c218f196
3
+ size 1994925011
pytorch_model-00010-of-00010.bin ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:23baadda455bbc1f5238bd7b1eb2386e1d17312c8cfaa3f9a3e5bdde44883aa4
3
+ size 1730968463
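
The ten `pytorch_model-000XX-of-00010.bin` entries above are Git LFS pointer files rather than the weights themselves: each records a `version` line, the blob's `oid sha256`, and its `size` in bytes. The `pytorch_model.bin.index.json` added next ties the shards together via its `weight_map`, which assigns every parameter name to one shard. A minimal sketch of inspecting that index after the repo has been cloned with LFS (file names as in this commit):

```python
# Sketch: inspect the sharded-checkpoint index to see which shard holds a
# given parameter. from_pretrained() reads this index automatically; the
# manual inspection here is only for illustration.
import json
from collections import Counter

with open("pytorch_model.bin.index.json") as f:
    index = json.load(f)

print(index["metadata"]["total_size"])        # total checkpoint size in bytes
print(index["weight_map"]["lm_head.weight"])  # -> "pytorch_model-00010-of-00010.bin"

# How many tensors each shard contains.
print(Counter(index["weight_map"].values()))
```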
pytorch_model.bin.index.json ADDED
@@ -0,0 +1,860 @@
1
+ {
2
+ "metadata": {
3
+ "total_size": 19313870336
4
+ },
5
+ "weight_map": {
6
+ "lm_head.weight": "pytorch_model-00010-of-00010.bin",
7
+ "transformer.h.0.attn.c_attn.bias": "pytorch_model-00001-of-00010.bin",
8
+ "transformer.h.0.attn.c_attn.weight": "pytorch_model-00001-of-00010.bin",
9
+ "transformer.h.0.attn.c_proj.weight": "pytorch_model-00001-of-00010.bin",
10
+ "transformer.h.0.ln_1.weight": "pytorch_model-00001-of-00010.bin",
11
+ "transformer.h.0.ln_2.weight": "pytorch_model-00001-of-00010.bin",
12
+ "transformer.h.0.mlp.c_proj.weight": "pytorch_model-00001-of-00010.bin",
13
+ "transformer.h.0.mlp.w1.weight": "pytorch_model-00001-of-00010.bin",
14
+ "transformer.h.0.mlp.w2.weight": "pytorch_model-00001-of-00010.bin",
15
+ "transformer.h.1.attn.c_attn.bias": "pytorch_model-00001-of-00010.bin",
16
+ "transformer.h.1.attn.c_attn.weight": "pytorch_model-00001-of-00010.bin",
17
+ "transformer.h.1.attn.c_proj.weight": "pytorch_model-00001-of-00010.bin",
18
+ "transformer.h.1.ln_1.weight": "pytorch_model-00001-of-00010.bin",
19
+ "transformer.h.1.ln_2.weight": "pytorch_model-00001-of-00010.bin",
20
+ "transformer.h.1.mlp.c_proj.weight": "pytorch_model-00002-of-00010.bin",
21
+ "transformer.h.1.mlp.w1.weight": "pytorch_model-00001-of-00010.bin",
22
+ "transformer.h.1.mlp.w2.weight": "pytorch_model-00001-of-00010.bin",
23
+ "transformer.h.10.attn.c_attn.bias": "pytorch_model-00003-of-00010.bin",
24
+ "transformer.h.10.attn.c_attn.weight": "pytorch_model-00003-of-00010.bin",
25
+ "transformer.h.10.attn.c_proj.weight": "pytorch_model-00003-of-00010.bin",
26
+ "transformer.h.10.ln_1.weight": "pytorch_model-00003-of-00010.bin",
27
+ "transformer.h.10.ln_2.weight": "pytorch_model-00003-of-00010.bin",
28
+ "transformer.h.10.mlp.c_proj.weight": "pytorch_model-00003-of-00010.bin",
29
+ "transformer.h.10.mlp.w1.weight": "pytorch_model-00003-of-00010.bin",
30
+ "transformer.h.10.mlp.w2.weight": "pytorch_model-00003-of-00010.bin",
31
+ "transformer.h.11.attn.c_attn.bias": "pytorch_model-00003-of-00010.bin",
32
+ "transformer.h.11.attn.c_attn.weight": "pytorch_model-00003-of-00010.bin",
33
+ "transformer.h.11.attn.c_proj.weight": "pytorch_model-00003-of-00010.bin",
34
+ "transformer.h.11.ln_1.weight": "pytorch_model-00003-of-00010.bin",
35
+ "transformer.h.11.ln_2.weight": "pytorch_model-00003-of-00010.bin",
36
+ "transformer.h.11.mlp.c_proj.weight": "pytorch_model-00004-of-00010.bin",
37
+ "transformer.h.11.mlp.w1.weight": "pytorch_model-00004-of-00010.bin",
38
+ "transformer.h.11.mlp.w2.weight": "pytorch_model-00004-of-00010.bin",
39
+ "transformer.h.12.attn.c_attn.bias": "pytorch_model-00004-of-00010.bin",
40
+ "transformer.h.12.attn.c_attn.weight": "pytorch_model-00004-of-00010.bin",
41
+ "transformer.h.12.attn.c_proj.weight": "pytorch_model-00004-of-00010.bin",
42
+ "transformer.h.12.ln_1.weight": "pytorch_model-00004-of-00010.bin",
43
+ "transformer.h.12.ln_2.weight": "pytorch_model-00004-of-00010.bin",
44
+ "transformer.h.12.mlp.c_proj.weight": "pytorch_model-00004-of-00010.bin",
45
+ "transformer.h.12.mlp.w1.weight": "pytorch_model-00004-of-00010.bin",
46
+ "transformer.h.12.mlp.w2.weight": "pytorch_model-00004-of-00010.bin",
47
+ "transformer.h.13.attn.c_attn.bias": "pytorch_model-00004-of-00010.bin",
48
+ "transformer.h.13.attn.c_attn.weight": "pytorch_model-00004-of-00010.bin",
49
+ "transformer.h.13.attn.c_proj.weight": "pytorch_model-00004-of-00010.bin",
50
+ "transformer.h.13.ln_1.weight": "pytorch_model-00004-of-00010.bin",
51
+ "transformer.h.13.ln_2.weight": "pytorch_model-00004-of-00010.bin",
52
+ "transformer.h.13.mlp.c_proj.weight": "pytorch_model-00004-of-00010.bin",
53
+ "transformer.h.13.mlp.w1.weight": "pytorch_model-00004-of-00010.bin",
54
+ "transformer.h.13.mlp.w2.weight": "pytorch_model-00004-of-00010.bin",
55
+ "transformer.h.14.attn.c_attn.bias": "pytorch_model-00004-of-00010.bin",
56
+ "transformer.h.14.attn.c_attn.weight": "pytorch_model-00004-of-00010.bin",
57
+ "transformer.h.14.attn.c_proj.weight": "pytorch_model-00004-of-00010.bin",
58
+ "transformer.h.14.ln_1.weight": "pytorch_model-00004-of-00010.bin",
59
+ "transformer.h.14.ln_2.weight": "pytorch_model-00004-of-00010.bin",
60
+ "transformer.h.14.mlp.c_proj.weight": "pytorch_model-00004-of-00010.bin",
61
+ "transformer.h.14.mlp.w1.weight": "pytorch_model-00004-of-00010.bin",
62
+ "transformer.h.14.mlp.w2.weight": "pytorch_model-00004-of-00010.bin",
63
+ "transformer.h.15.attn.c_attn.bias": "pytorch_model-00004-of-00010.bin",
64
+ "transformer.h.15.attn.c_attn.weight": "pytorch_model-00004-of-00010.bin",
65
+ "transformer.h.15.attn.c_proj.weight": "pytorch_model-00004-of-00010.bin",
66
+ "transformer.h.15.ln_1.weight": "pytorch_model-00004-of-00010.bin",
67
+ "transformer.h.15.ln_2.weight": "pytorch_model-00004-of-00010.bin",
68
+ "transformer.h.15.mlp.c_proj.weight": "pytorch_model-00004-of-00010.bin",
69
+ "transformer.h.15.mlp.w1.weight": "pytorch_model-00004-of-00010.bin",
70
+ "transformer.h.15.mlp.w2.weight": "pytorch_model-00004-of-00010.bin",
71
+ "transformer.h.16.attn.c_attn.bias": "pytorch_model-00004-of-00010.bin",
72
+ "transformer.h.16.attn.c_attn.weight": "pytorch_model-00004-of-00010.bin",
73
+ "transformer.h.16.attn.c_proj.weight": "pytorch_model-00005-of-00010.bin",
74
+ "transformer.h.16.ln_1.weight": "pytorch_model-00004-of-00010.bin",
75
+ "transformer.h.16.ln_2.weight": "pytorch_model-00005-of-00010.bin",
76
+ "transformer.h.16.mlp.c_proj.weight": "pytorch_model-00005-of-00010.bin",
77
+ "transformer.h.16.mlp.w1.weight": "pytorch_model-00005-of-00010.bin",
78
+ "transformer.h.16.mlp.w2.weight": "pytorch_model-00005-of-00010.bin",
79
+ "transformer.h.17.attn.c_attn.bias": "pytorch_model-00005-of-00010.bin",
80
+ "transformer.h.17.attn.c_attn.weight": "pytorch_model-00005-of-00010.bin",
81
+ "transformer.h.17.attn.c_proj.weight": "pytorch_model-00005-of-00010.bin",
82
+ "transformer.h.17.ln_1.weight": "pytorch_model-00005-of-00010.bin",
83
+ "transformer.h.17.ln_2.weight": "pytorch_model-00005-of-00010.bin",
84
+ "transformer.h.17.mlp.c_proj.weight": "pytorch_model-00005-of-00010.bin",
85
+ "transformer.h.17.mlp.w1.weight": "pytorch_model-00005-of-00010.bin",
86
+ "transformer.h.17.mlp.w2.weight": "pytorch_model-00005-of-00010.bin",
87
+ "transformer.h.18.attn.c_attn.bias": "pytorch_model-00005-of-00010.bin",
88
+ "transformer.h.18.attn.c_attn.weight": "pytorch_model-00005-of-00010.bin",
89
+ "transformer.h.18.attn.c_proj.weight": "pytorch_model-00005-of-00010.bin",
90
+ "transformer.h.18.ln_1.weight": "pytorch_model-00005-of-00010.bin",
91
+ "transformer.h.18.ln_2.weight": "pytorch_model-00005-of-00010.bin",
92
+ "transformer.h.18.mlp.c_proj.weight": "pytorch_model-00005-of-00010.bin",
93
+ "transformer.h.18.mlp.w1.weight": "pytorch_model-00005-of-00010.bin",
94
+ "transformer.h.18.mlp.w2.weight": "pytorch_model-00005-of-00010.bin",
95
+ "transformer.h.19.attn.c_attn.bias": "pytorch_model-00005-of-00010.bin",
96
+ "transformer.h.19.attn.c_attn.weight": "pytorch_model-00005-of-00010.bin",
97
+ "transformer.h.19.attn.c_proj.weight": "pytorch_model-00005-of-00010.bin",
98
+ "transformer.h.19.ln_1.weight": "pytorch_model-00005-of-00010.bin",
99
+ "transformer.h.19.ln_2.weight": "pytorch_model-00005-of-00010.bin",
100
+ "transformer.h.19.mlp.c_proj.weight": "pytorch_model-00005-of-00010.bin",
101
+ "transformer.h.19.mlp.w1.weight": "pytorch_model-00005-of-00010.bin",
102
+ "transformer.h.19.mlp.w2.weight": "pytorch_model-00005-of-00010.bin",
103
+ "transformer.h.2.attn.c_attn.bias": "pytorch_model-00002-of-00010.bin",
104
+ "transformer.h.2.attn.c_attn.weight": "pytorch_model-00002-of-00010.bin",
105
+ "transformer.h.2.attn.c_proj.weight": "pytorch_model-00002-of-00010.bin",
106
+ "transformer.h.2.ln_1.weight": "pytorch_model-00002-of-00010.bin",
107
+ "transformer.h.2.ln_2.weight": "pytorch_model-00002-of-00010.bin",
108
+ "transformer.h.2.mlp.c_proj.weight": "pytorch_model-00002-of-00010.bin",
109
+ "transformer.h.2.mlp.w1.weight": "pytorch_model-00002-of-00010.bin",
110
+ "transformer.h.2.mlp.w2.weight": "pytorch_model-00002-of-00010.bin",
111
+ "transformer.h.20.attn.c_attn.bias": "pytorch_model-00005-of-00010.bin",
112
+ "transformer.h.20.attn.c_attn.weight": "pytorch_model-00005-of-00010.bin",
113
+ "transformer.h.20.attn.c_proj.weight": "pytorch_model-00005-of-00010.bin",
114
+ "transformer.h.20.ln_1.weight": "pytorch_model-00005-of-00010.bin",
115
+ "transformer.h.20.ln_2.weight": "pytorch_model-00005-of-00010.bin",
116
+ "transformer.h.20.mlp.c_proj.weight": "pytorch_model-00005-of-00010.bin",
117
+ "transformer.h.20.mlp.w1.weight": "pytorch_model-00005-of-00010.bin",
118
+ "transformer.h.20.mlp.w2.weight": "pytorch_model-00005-of-00010.bin",
119
+ "transformer.h.21.attn.c_attn.bias": "pytorch_model-00006-of-00010.bin",
120
+ "transformer.h.21.attn.c_attn.weight": "pytorch_model-00006-of-00010.bin",
121
+ "transformer.h.21.attn.c_proj.weight": "pytorch_model-00006-of-00010.bin",
122
+ "transformer.h.21.ln_1.weight": "pytorch_model-00005-of-00010.bin",
123
+ "transformer.h.21.ln_2.weight": "pytorch_model-00006-of-00010.bin",
124
+ "transformer.h.21.mlp.c_proj.weight": "pytorch_model-00006-of-00010.bin",
125
+ "transformer.h.21.mlp.w1.weight": "pytorch_model-00006-of-00010.bin",
126
+ "transformer.h.21.mlp.w2.weight": "pytorch_model-00006-of-00010.bin",
127
+ "transformer.h.22.attn.c_attn.bias": "pytorch_model-00006-of-00010.bin",
128
+ "transformer.h.22.attn.c_attn.weight": "pytorch_model-00006-of-00010.bin",
129
+ "transformer.h.22.attn.c_proj.weight": "pytorch_model-00006-of-00010.bin",
130
+ "transformer.h.22.ln_1.weight": "pytorch_model-00006-of-00010.bin",
131
+ "transformer.h.22.ln_2.weight": "pytorch_model-00006-of-00010.bin",
132
+ "transformer.h.22.mlp.c_proj.weight": "pytorch_model-00006-of-00010.bin",
133
+ "transformer.h.22.mlp.w1.weight": "pytorch_model-00006-of-00010.bin",
134
+ "transformer.h.22.mlp.w2.weight": "pytorch_model-00006-of-00010.bin",
135
+ "transformer.h.23.attn.c_attn.bias": "pytorch_model-00006-of-00010.bin",
136
+ "transformer.h.23.attn.c_attn.weight": "pytorch_model-00006-of-00010.bin",
137
+ "transformer.h.23.attn.c_proj.weight": "pytorch_model-00006-of-00010.bin",
138
+ "transformer.h.23.ln_1.weight": "pytorch_model-00006-of-00010.bin",
139
+ "transformer.h.23.ln_2.weight": "pytorch_model-00006-of-00010.bin",
140
+ "transformer.h.23.mlp.c_proj.weight": "pytorch_model-00006-of-00010.bin",
141
+ "transformer.h.23.mlp.w1.weight": "pytorch_model-00006-of-00010.bin",
142
+ "transformer.h.23.mlp.w2.weight": "pytorch_model-00006-of-00010.bin",
143
+ "transformer.h.24.attn.c_attn.bias": "pytorch_model-00006-of-00010.bin",
144
+ "transformer.h.24.attn.c_attn.weight": "pytorch_model-00006-of-00010.bin",
145
+ "transformer.h.24.attn.c_proj.weight": "pytorch_model-00006-of-00010.bin",
146
+ "transformer.h.24.ln_1.weight": "pytorch_model-00006-of-00010.bin",
147
+ "transformer.h.24.ln_2.weight": "pytorch_model-00006-of-00010.bin",
148
+ "transformer.h.24.mlp.c_proj.weight": "pytorch_model-00006-of-00010.bin",
149
+ "transformer.h.24.mlp.w1.weight": "pytorch_model-00006-of-00010.bin",
150
+ "transformer.h.24.mlp.w2.weight": "pytorch_model-00006-of-00010.bin",
151
+ "transformer.h.25.attn.c_attn.bias": "pytorch_model-00006-of-00010.bin",
152
+ "transformer.h.25.attn.c_attn.weight": "pytorch_model-00006-of-00010.bin",
153
+ "transformer.h.25.attn.c_proj.weight": "pytorch_model-00006-of-00010.bin",
154
+ "transformer.h.25.ln_1.weight": "pytorch_model-00006-of-00010.bin",
155
+ "transformer.h.25.ln_2.weight": "pytorch_model-00006-of-00010.bin",
156
+ "transformer.h.25.mlp.c_proj.weight": "pytorch_model-00007-of-00010.bin",
157
+ "transformer.h.25.mlp.w1.weight": "pytorch_model-00006-of-00010.bin",
158
+ "transformer.h.25.mlp.w2.weight": "pytorch_model-00006-of-00010.bin",
159
+ "transformer.h.26.attn.c_attn.bias": "pytorch_model-00007-of-00010.bin",
160
+ "transformer.h.26.attn.c_attn.weight": "pytorch_model-00007-of-00010.bin",
161
+ "transformer.h.26.attn.c_proj.weight": "pytorch_model-00007-of-00010.bin",
162
+ "transformer.h.26.ln_1.weight": "pytorch_model-00007-of-00010.bin",
163
+ "transformer.h.26.ln_2.weight": "pytorch_model-00007-of-00010.bin",
164
+ "transformer.h.26.mlp.c_proj.weight": "pytorch_model-00007-of-00010.bin",
165
+ "transformer.h.26.mlp.w1.weight": "pytorch_model-00007-of-00010.bin",
166
+ "transformer.h.26.mlp.w2.weight": "pytorch_model-00007-of-00010.bin",
167
+ "transformer.h.27.attn.c_attn.bias": "pytorch_model-00007-of-00010.bin",
168
+ "transformer.h.27.attn.c_attn.weight": "pytorch_model-00007-of-00010.bin",
169
+ "transformer.h.27.attn.c_proj.weight": "pytorch_model-00007-of-00010.bin",
170
+ "transformer.h.27.ln_1.weight": "pytorch_model-00007-of-00010.bin",
171
+ "transformer.h.27.ln_2.weight": "pytorch_model-00007-of-00010.bin",
172
+ "transformer.h.27.mlp.c_proj.weight": "pytorch_model-00007-of-00010.bin",
173
+ "transformer.h.27.mlp.w1.weight": "pytorch_model-00007-of-00010.bin",
174
+ "transformer.h.27.mlp.w2.weight": "pytorch_model-00007-of-00010.bin",
175
+ "transformer.h.28.attn.c_attn.bias": "pytorch_model-00007-of-00010.bin",
176
+ "transformer.h.28.attn.c_attn.weight": "pytorch_model-00007-of-00010.bin",
177
+ "transformer.h.28.attn.c_proj.weight": "pytorch_model-00007-of-00010.bin",
178
+ "transformer.h.28.ln_1.weight": "pytorch_model-00007-of-00010.bin",
179
+ "transformer.h.28.ln_2.weight": "pytorch_model-00007-of-00010.bin",
180
+ "transformer.h.28.mlp.c_proj.weight": "pytorch_model-00007-of-00010.bin",
181
+ "transformer.h.28.mlp.w1.weight": "pytorch_model-00007-of-00010.bin",
182
+ "transformer.h.28.mlp.w2.weight": "pytorch_model-00007-of-00010.bin",
183
+ "transformer.h.29.attn.c_attn.bias": "pytorch_model-00007-of-00010.bin",
184
+ "transformer.h.29.attn.c_attn.weight": "pytorch_model-00007-of-00010.bin",
185
+ "transformer.h.29.attn.c_proj.weight": "pytorch_model-00007-of-00010.bin",
186
+ "transformer.h.29.ln_1.weight": "pytorch_model-00007-of-00010.bin",
187
+ "transformer.h.29.ln_2.weight": "pytorch_model-00007-of-00010.bin",
188
+ "transformer.h.29.mlp.c_proj.weight": "pytorch_model-00007-of-00010.bin",
189
+ "transformer.h.29.mlp.w1.weight": "pytorch_model-00007-of-00010.bin",
190
+ "transformer.h.29.mlp.w2.weight": "pytorch_model-00007-of-00010.bin",
191
+ "transformer.h.3.attn.c_attn.bias": "pytorch_model-00002-of-00010.bin",
192
+ "transformer.h.3.attn.c_attn.weight": "pytorch_model-00002-of-00010.bin",
193
+ "transformer.h.3.attn.c_proj.weight": "pytorch_model-00002-of-00010.bin",
194
+ "transformer.h.3.ln_1.weight": "pytorch_model-00002-of-00010.bin",
195
+ "transformer.h.3.ln_2.weight": "pytorch_model-00002-of-00010.bin",
196
+ "transformer.h.3.mlp.c_proj.weight": "pytorch_model-00002-of-00010.bin",
197
+ "transformer.h.3.mlp.w1.weight": "pytorch_model-00002-of-00010.bin",
198
+ "transformer.h.3.mlp.w2.weight": "pytorch_model-00002-of-00010.bin",
199
+ "transformer.h.30.attn.c_attn.bias": "pytorch_model-00007-of-00010.bin",
200
+ "transformer.h.30.attn.c_attn.weight": "pytorch_model-00007-of-00010.bin",
201
+ "transformer.h.30.attn.c_proj.weight": "pytorch_model-00007-of-00010.bin",
202
+ "transformer.h.30.ln_1.weight": "pytorch_model-00007-of-00010.bin",
203
+ "transformer.h.30.ln_2.weight": "pytorch_model-00007-of-00010.bin",
204
+ "transformer.h.30.mlp.c_proj.weight": "pytorch_model-00008-of-00010.bin",
205
+ "transformer.h.30.mlp.w1.weight": "pytorch_model-00007-of-00010.bin",
206
+ "transformer.h.30.mlp.w2.weight": "pytorch_model-00008-of-00010.bin",
207
+ "transformer.h.31.attn.c_attn.bias": "pytorch_model-00008-of-00010.bin",
208
+ "transformer.h.31.attn.c_attn.weight": "pytorch_model-00008-of-00010.bin",
209
+ "transformer.h.31.attn.c_proj.weight": "pytorch_model-00008-of-00010.bin",
210
+ "transformer.h.31.ln_1.weight": "pytorch_model-00008-of-00010.bin",
211
+ "transformer.h.31.ln_2.weight": "pytorch_model-00008-of-00010.bin",
212
+ "transformer.h.31.mlp.c_proj.weight": "pytorch_model-00008-of-00010.bin",
213
+ "transformer.h.31.mlp.w1.weight": "pytorch_model-00008-of-00010.bin",
214
+ "transformer.h.31.mlp.w2.weight": "pytorch_model-00008-of-00010.bin",
215
+ "transformer.h.4.attn.c_attn.bias": "pytorch_model-00002-of-00010.bin",
216
+ "transformer.h.4.attn.c_attn.weight": "pytorch_model-00002-of-00010.bin",
217
+ "transformer.h.4.attn.c_proj.weight": "pytorch_model-00002-of-00010.bin",
218
+ "transformer.h.4.ln_1.weight": "pytorch_model-00002-of-00010.bin",
219
+ "transformer.h.4.ln_2.weight": "pytorch_model-00002-of-00010.bin",
220
+ "transformer.h.4.mlp.c_proj.weight": "pytorch_model-00002-of-00010.bin",
221
+ "transformer.h.4.mlp.w1.weight": "pytorch_model-00002-of-00010.bin",
222
+ "transformer.h.4.mlp.w2.weight": "pytorch_model-00002-of-00010.bin",
223
+ "transformer.h.5.attn.c_attn.bias": "pytorch_model-00002-of-00010.bin",
224
+ "transformer.h.5.attn.c_attn.weight": "pytorch_model-00002-of-00010.bin",
225
+ "transformer.h.5.attn.c_proj.weight": "pytorch_model-00002-of-00010.bin",
226
+ "transformer.h.5.ln_1.weight": "pytorch_model-00002-of-00010.bin",
227
+ "transformer.h.5.ln_2.weight": "pytorch_model-00002-of-00010.bin",
228
+ "transformer.h.5.mlp.c_proj.weight": "pytorch_model-00002-of-00010.bin",
229
+ "transformer.h.5.mlp.w1.weight": "pytorch_model-00002-of-00010.bin",
230
+ "transformer.h.5.mlp.w2.weight": "pytorch_model-00002-of-00010.bin",
231
+ "transformer.h.6.attn.c_attn.bias": "pytorch_model-00002-of-00010.bin",
232
+ "transformer.h.6.attn.c_attn.weight": "pytorch_model-00002-of-00010.bin",
233
+ "transformer.h.6.attn.c_proj.weight": "pytorch_model-00002-of-00010.bin",
234
+ "transformer.h.6.ln_1.weight": "pytorch_model-00002-of-00010.bin",
235
+ "transformer.h.6.ln_2.weight": "pytorch_model-00002-of-00010.bin",
236
+ "transformer.h.6.mlp.c_proj.weight": "pytorch_model-00003-of-00010.bin",
237
+ "transformer.h.6.mlp.w1.weight": "pytorch_model-00002-of-00010.bin",
238
+ "transformer.h.6.mlp.w2.weight": "pytorch_model-00003-of-00010.bin",
239
+ "transformer.h.7.attn.c_attn.bias": "pytorch_model-00003-of-00010.bin",
240
+ "transformer.h.7.attn.c_attn.weight": "pytorch_model-00003-of-00010.bin",
241
+ "transformer.h.7.attn.c_proj.weight": "pytorch_model-00003-of-00010.bin",
242
+ "transformer.h.7.ln_1.weight": "pytorch_model-00003-of-00010.bin",
243
+ "transformer.h.7.ln_2.weight": "pytorch_model-00003-of-00010.bin",
244
+ "transformer.h.7.mlp.c_proj.weight": "pytorch_model-00003-of-00010.bin",
245
+ "transformer.h.7.mlp.w1.weight": "pytorch_model-00003-of-00010.bin",
246
+ "transformer.h.7.mlp.w2.weight": "pytorch_model-00003-of-00010.bin",
247
+ "transformer.h.8.attn.c_attn.bias": "pytorch_model-00003-of-00010.bin",
248
+ "transformer.h.8.attn.c_attn.weight": "pytorch_model-00003-of-00010.bin",
249
+ "transformer.h.8.attn.c_proj.weight": "pytorch_model-00003-of-00010.bin",
250
+ "transformer.h.8.ln_1.weight": "pytorch_model-00003-of-00010.bin",
251
+ "transformer.h.8.ln_2.weight": "pytorch_model-00003-of-00010.bin",
252
+ "transformer.h.8.mlp.c_proj.weight": "pytorch_model-00003-of-00010.bin",
253
+ "transformer.h.8.mlp.w1.weight": "pytorch_model-00003-of-00010.bin",
254
+ "transformer.h.8.mlp.w2.weight": "pytorch_model-00003-of-00010.bin",
255
+ "transformer.h.9.attn.c_attn.bias": "pytorch_model-00003-of-00010.bin",
256
+ "transformer.h.9.attn.c_attn.weight": "pytorch_model-00003-of-00010.bin",
257
+ "transformer.h.9.attn.c_proj.weight": "pytorch_model-00003-of-00010.bin",
258
+ "transformer.h.9.ln_1.weight": "pytorch_model-00003-of-00010.bin",
259
+ "transformer.h.9.ln_2.weight": "pytorch_model-00003-of-00010.bin",
260
+ "transformer.h.9.mlp.c_proj.weight": "pytorch_model-00003-of-00010.bin",
261
+ "transformer.h.9.mlp.w1.weight": "pytorch_model-00003-of-00010.bin",
262
+ "transformer.h.9.mlp.w2.weight": "pytorch_model-00003-of-00010.bin",
263
+ "transformer.ln_f.weight": "pytorch_model-00008-of-00010.bin",
264
+ "transformer.visual.attn_pool.attn.in_proj_bias": "pytorch_model-00010-of-00010.bin",
265
+ "transformer.visual.attn_pool.attn.in_proj_weight": "pytorch_model-00010-of-00010.bin",
266
+ "transformer.visual.attn_pool.attn.out_proj.bias": "pytorch_model-00010-of-00010.bin",
267
+ "transformer.visual.attn_pool.attn.out_proj.weight": "pytorch_model-00010-of-00010.bin",
268
+ "transformer.visual.attn_pool.kv_proj.weight": "pytorch_model-00010-of-00010.bin",
269
+ "transformer.visual.attn_pool.ln_kv.bias": "pytorch_model-00010-of-00010.bin",
270
+ "transformer.visual.attn_pool.ln_kv.weight": "pytorch_model-00010-of-00010.bin",
271
+ "transformer.visual.attn_pool.ln_q.bias": "pytorch_model-00010-of-00010.bin",
272
+ "transformer.visual.attn_pool.ln_q.weight": "pytorch_model-00010-of-00010.bin",
273
+ "transformer.visual.attn_pool.pos_embed": "pytorch_model-00010-of-00010.bin",
274
+ "transformer.visual.attn_pool.query": "pytorch_model-00010-of-00010.bin",
275
+ "transformer.visual.conv1.weight": "pytorch_model-00008-of-00010.bin",
276
+ "transformer.visual.ln_post.bias": "pytorch_model-00010-of-00010.bin",
277
+ "transformer.visual.ln_post.weight": "pytorch_model-00010-of-00010.bin",
278
+ "transformer.visual.ln_pre.bias": "pytorch_model-00008-of-00010.bin",
279
+ "transformer.visual.ln_pre.weight": "pytorch_model-00008-of-00010.bin",
280
+ "transformer.visual.positional_embedding": "pytorch_model-00008-of-00010.bin",
281
+ "transformer.visual.proj": "pytorch_model-00008-of-00010.bin",
282
+ "transformer.visual.transformer.resblocks.0.attn.in_proj.bias": "pytorch_model-00008-of-00010.bin",
283
+ "transformer.visual.transformer.resblocks.0.attn.in_proj.weight": "pytorch_model-00008-of-00010.bin",
284
+ "transformer.visual.transformer.resblocks.0.attn.out_proj.bias": "pytorch_model-00008-of-00010.bin",
285
+ "transformer.visual.transformer.resblocks.0.attn.out_proj.weight": "pytorch_model-00008-of-00010.bin",
286
+ "transformer.visual.transformer.resblocks.0.ln_1.bias": "pytorch_model-00008-of-00010.bin",
287
+ "transformer.visual.transformer.resblocks.0.ln_1.weight": "pytorch_model-00008-of-00010.bin",
288
+ "transformer.visual.transformer.resblocks.0.ln_2.bias": "pytorch_model-00008-of-00010.bin",
289
+ "transformer.visual.transformer.resblocks.0.ln_2.weight": "pytorch_model-00008-of-00010.bin",
290
+ "transformer.visual.transformer.resblocks.0.mlp.c_fc.bias": "pytorch_model-00008-of-00010.bin",
291
+ "transformer.visual.transformer.resblocks.0.mlp.c_fc.weight": "pytorch_model-00008-of-00010.bin",
292
+ "transformer.visual.transformer.resblocks.0.mlp.c_proj.bias": "pytorch_model-00008-of-00010.bin",
293
+ "transformer.visual.transformer.resblocks.0.mlp.c_proj.weight": "pytorch_model-00008-of-00010.bin",
294
+ "transformer.visual.transformer.resblocks.1.attn.in_proj.bias": "pytorch_model-00008-of-00010.bin",
295
+ "transformer.visual.transformer.resblocks.1.attn.in_proj.weight": "pytorch_model-00008-of-00010.bin",
296
+ "transformer.visual.transformer.resblocks.1.attn.out_proj.bias": "pytorch_model-00008-of-00010.bin",
297
+ "transformer.visual.transformer.resblocks.1.attn.out_proj.weight": "pytorch_model-00008-of-00010.bin",
298
+ "transformer.visual.transformer.resblocks.1.ln_1.bias": "pytorch_model-00008-of-00010.bin",
299
+ "transformer.visual.transformer.resblocks.1.ln_1.weight": "pytorch_model-00008-of-00010.bin",
300
+ "transformer.visual.transformer.resblocks.1.ln_2.bias": "pytorch_model-00008-of-00010.bin",
301
+ "transformer.visual.transformer.resblocks.1.ln_2.weight": "pytorch_model-00008-of-00010.bin",
302
+ "transformer.visual.transformer.resblocks.1.mlp.c_fc.bias": "pytorch_model-00008-of-00010.bin",
303
+ "transformer.visual.transformer.resblocks.1.mlp.c_fc.weight": "pytorch_model-00008-of-00010.bin",
304
+ "transformer.visual.transformer.resblocks.1.mlp.c_proj.bias": "pytorch_model-00008-of-00010.bin",
305
+ "transformer.visual.transformer.resblocks.1.mlp.c_proj.weight": "pytorch_model-00008-of-00010.bin",
306
+ "transformer.visual.transformer.resblocks.10.attn.in_proj.bias": "pytorch_model-00008-of-00010.bin",
307
+ "transformer.visual.transformer.resblocks.10.attn.in_proj.weight": "pytorch_model-00008-of-00010.bin",
308
+ "transformer.visual.transformer.resblocks.10.attn.out_proj.bias": "pytorch_model-00008-of-00010.bin",
309
+ "transformer.visual.transformer.resblocks.10.attn.out_proj.weight": "pytorch_model-00008-of-00010.bin",
310
+ "transformer.visual.transformer.resblocks.10.ln_1.bias": "pytorch_model-00008-of-00010.bin",
311
+ "transformer.visual.transformer.resblocks.10.ln_1.weight": "pytorch_model-00008-of-00010.bin",
312
+ "transformer.visual.transformer.resblocks.10.ln_2.bias": "pytorch_model-00008-of-00010.bin",
313
+ "transformer.visual.transformer.resblocks.10.ln_2.weight": "pytorch_model-00008-of-00010.bin",
314
+ "transformer.visual.transformer.resblocks.10.mlp.c_fc.bias": "pytorch_model-00008-of-00010.bin",
315
+ "transformer.visual.transformer.resblocks.10.mlp.c_fc.weight": "pytorch_model-00008-of-00010.bin",
316
+ "transformer.visual.transformer.resblocks.10.mlp.c_proj.bias": "pytorch_model-00008-of-00010.bin",
317
+ "transformer.visual.transformer.resblocks.10.mlp.c_proj.weight": "pytorch_model-00008-of-00010.bin",
318
+ "transformer.visual.transformer.resblocks.11.attn.in_proj.bias": "pytorch_model-00008-of-00010.bin",
319
+ "transformer.visual.transformer.resblocks.11.attn.in_proj.weight": "pytorch_model-00008-of-00010.bin",
320
+ "transformer.visual.transformer.resblocks.11.attn.out_proj.bias": "pytorch_model-00008-of-00010.bin",
321
+ "transformer.visual.transformer.resblocks.11.attn.out_proj.weight": "pytorch_model-00008-of-00010.bin",
322
+ "transformer.visual.transformer.resblocks.11.ln_1.bias": "pytorch_model-00008-of-00010.bin",
323
+ "transformer.visual.transformer.resblocks.11.ln_1.weight": "pytorch_model-00008-of-00010.bin",
324
+ "transformer.visual.transformer.resblocks.11.ln_2.bias": "pytorch_model-00008-of-00010.bin",
325
+ "transformer.visual.transformer.resblocks.11.ln_2.weight": "pytorch_model-00008-of-00010.bin",
326
+ "transformer.visual.transformer.resblocks.11.mlp.c_fc.bias": "pytorch_model-00008-of-00010.bin",
327
+ "transformer.visual.transformer.resblocks.11.mlp.c_fc.weight": "pytorch_model-00008-of-00010.bin",
328
+ "transformer.visual.transformer.resblocks.11.mlp.c_proj.bias": "pytorch_model-00008-of-00010.bin",
329
+ "transformer.visual.transformer.resblocks.11.mlp.c_proj.weight": "pytorch_model-00008-of-00010.bin",
330
+ "transformer.visual.transformer.resblocks.12.attn.in_proj.bias": "pytorch_model-00008-of-00010.bin",
331
+ "transformer.visual.transformer.resblocks.12.attn.in_proj.weight": "pytorch_model-00008-of-00010.bin",
332
+ "transformer.visual.transformer.resblocks.12.attn.out_proj.bias": "pytorch_model-00008-of-00010.bin",
333
+ "transformer.visual.transformer.resblocks.12.attn.out_proj.weight": "pytorch_model-00008-of-00010.bin",
334
+ "transformer.visual.transformer.resblocks.12.ln_1.bias": "pytorch_model-00008-of-00010.bin",
335
+ "transformer.visual.transformer.resblocks.12.ln_1.weight": "pytorch_model-00008-of-00010.bin",
336
+ "transformer.visual.transformer.resblocks.12.ln_2.bias": "pytorch_model-00008-of-00010.bin",
337
+ "transformer.visual.transformer.resblocks.12.ln_2.weight": "pytorch_model-00008-of-00010.bin",
338
+ "transformer.visual.transformer.resblocks.12.mlp.c_fc.bias": "pytorch_model-00008-of-00010.bin",
339
+ "transformer.visual.transformer.resblocks.12.mlp.c_fc.weight": "pytorch_model-00008-of-00010.bin",
340
+ "transformer.visual.transformer.resblocks.12.mlp.c_proj.bias": "pytorch_model-00008-of-00010.bin",
341
+ "transformer.visual.transformer.resblocks.12.mlp.c_proj.weight": "pytorch_model-00008-of-00010.bin",
342
+ "transformer.visual.transformer.resblocks.13.attn.in_proj.bias": "pytorch_model-00008-of-00010.bin",
343
+ "transformer.visual.transformer.resblocks.13.attn.in_proj.weight": "pytorch_model-00008-of-00010.bin",
344
+ "transformer.visual.transformer.resblocks.13.attn.out_proj.bias": "pytorch_model-00008-of-00010.bin",
345
+ "transformer.visual.transformer.resblocks.13.attn.out_proj.weight": "pytorch_model-00008-of-00010.bin",
346
+ "transformer.visual.transformer.resblocks.13.ln_1.bias": "pytorch_model-00008-of-00010.bin",
347
+ "transformer.visual.transformer.resblocks.13.ln_1.weight": "pytorch_model-00008-of-00010.bin",
348
+ "transformer.visual.transformer.resblocks.13.ln_2.bias": "pytorch_model-00008-of-00010.bin",
349
+ "transformer.visual.transformer.resblocks.13.ln_2.weight": "pytorch_model-00008-of-00010.bin",
350
+ "transformer.visual.transformer.resblocks.13.mlp.c_fc.bias": "pytorch_model-00008-of-00010.bin",
351
+ "transformer.visual.transformer.resblocks.13.mlp.c_fc.weight": "pytorch_model-00008-of-00010.bin",
352
+ "transformer.visual.transformer.resblocks.13.mlp.c_proj.bias": "pytorch_model-00008-of-00010.bin",
353
+ "transformer.visual.transformer.resblocks.13.mlp.c_proj.weight": "pytorch_model-00008-of-00010.bin",
354
+ "transformer.visual.transformer.resblocks.14.attn.in_proj.bias": "pytorch_model-00008-of-00010.bin",
355
+ "transformer.visual.transformer.resblocks.14.attn.in_proj.weight": "pytorch_model-00008-of-00010.bin",
356
+ "transformer.visual.transformer.resblocks.14.attn.out_proj.bias": "pytorch_model-00008-of-00010.bin",
357
+ "transformer.visual.transformer.resblocks.14.attn.out_proj.weight": "pytorch_model-00008-of-00010.bin",
358
+ "transformer.visual.transformer.resblocks.14.ln_1.bias": "pytorch_model-00008-of-00010.bin",
359
+ "transformer.visual.transformer.resblocks.14.ln_1.weight": "pytorch_model-00008-of-00010.bin",
360
+ "transformer.visual.transformer.resblocks.14.ln_2.bias": "pytorch_model-00008-of-00010.bin",
361
+ "transformer.visual.transformer.resblocks.14.ln_2.weight": "pytorch_model-00008-of-00010.bin",
362
+ "transformer.visual.transformer.resblocks.14.mlp.c_fc.bias": "pytorch_model-00008-of-00010.bin",
363
+ "transformer.visual.transformer.resblocks.14.mlp.c_fc.weight": "pytorch_model-00008-of-00010.bin",
364
+ "transformer.visual.transformer.resblocks.14.mlp.c_proj.bias": "pytorch_model-00008-of-00010.bin",
365
+ "transformer.visual.transformer.resblocks.14.mlp.c_proj.weight": "pytorch_model-00008-of-00010.bin",
366
+ "transformer.visual.transformer.resblocks.15.attn.in_proj.bias": "pytorch_model-00008-of-00010.bin",
367
+ "transformer.visual.transformer.resblocks.15.attn.in_proj.weight": "pytorch_model-00008-of-00010.bin",
368
+ "transformer.visual.transformer.resblocks.15.attn.out_proj.bias": "pytorch_model-00008-of-00010.bin",
369
+ "transformer.visual.transformer.resblocks.15.attn.out_proj.weight": "pytorch_model-00008-of-00010.bin",
370
+ "transformer.visual.transformer.resblocks.15.ln_1.bias": "pytorch_model-00008-of-00010.bin",
371
+ "transformer.visual.transformer.resblocks.15.ln_1.weight": "pytorch_model-00008-of-00010.bin",
372
+ "transformer.visual.transformer.resblocks.15.ln_2.bias": "pytorch_model-00008-of-00010.bin",
373
+ "transformer.visual.transformer.resblocks.15.ln_2.weight": "pytorch_model-00008-of-00010.bin",
374
+ "transformer.visual.transformer.resblocks.15.mlp.c_fc.bias": "pytorch_model-00008-of-00010.bin",
375
+ "transformer.visual.transformer.resblocks.15.mlp.c_fc.weight": "pytorch_model-00008-of-00010.bin",
376
+ "transformer.visual.transformer.resblocks.15.mlp.c_proj.bias": "pytorch_model-00008-of-00010.bin",
377
+ "transformer.visual.transformer.resblocks.15.mlp.c_proj.weight": "pytorch_model-00008-of-00010.bin",
378
+ "transformer.visual.transformer.resblocks.16.attn.in_proj.bias": "pytorch_model-00008-of-00010.bin",
379
+ "transformer.visual.transformer.resblocks.16.attn.in_proj.weight": "pytorch_model-00008-of-00010.bin",
380
+ "transformer.visual.transformer.resblocks.16.attn.out_proj.bias": "pytorch_model-00008-of-00010.bin",
381
+ "transformer.visual.transformer.resblocks.16.attn.out_proj.weight": "pytorch_model-00008-of-00010.bin",
382
+ "transformer.visual.transformer.resblocks.16.ln_1.bias": "pytorch_model-00008-of-00010.bin",
383
+ "transformer.visual.transformer.resblocks.16.ln_1.weight": "pytorch_model-00008-of-00010.bin",
384
+ "transformer.visual.transformer.resblocks.16.ln_2.bias": "pytorch_model-00008-of-00010.bin",
385
+ "transformer.visual.transformer.resblocks.16.ln_2.weight": "pytorch_model-00008-of-00010.bin",
386
+ "transformer.visual.transformer.resblocks.16.mlp.c_fc.bias": "pytorch_model-00008-of-00010.bin",
387
+ "transformer.visual.transformer.resblocks.16.mlp.c_fc.weight": "pytorch_model-00008-of-00010.bin",
388
+ "transformer.visual.transformer.resblocks.16.mlp.c_proj.bias": "pytorch_model-00008-of-00010.bin",
389
+ "transformer.visual.transformer.resblocks.16.mlp.c_proj.weight": "pytorch_model-00008-of-00010.bin",
390
+ "transformer.visual.transformer.resblocks.17.attn.in_proj.bias": "pytorch_model-00008-of-00010.bin",
391
+ "transformer.visual.transformer.resblocks.17.attn.in_proj.weight": "pytorch_model-00008-of-00010.bin",
392
+ "transformer.visual.transformer.resblocks.17.attn.out_proj.bias": "pytorch_model-00008-of-00010.bin",
393
+ "transformer.visual.transformer.resblocks.17.attn.out_proj.weight": "pytorch_model-00008-of-00010.bin",
394
+ "transformer.visual.transformer.resblocks.17.ln_1.bias": "pytorch_model-00008-of-00010.bin",
395
+ "transformer.visual.transformer.resblocks.17.ln_1.weight": "pytorch_model-00008-of-00010.bin",
396
+ "transformer.visual.transformer.resblocks.17.ln_2.bias": "pytorch_model-00008-of-00010.bin",
397
+ "transformer.visual.transformer.resblocks.17.ln_2.weight": "pytorch_model-00008-of-00010.bin",
398
+ "transformer.visual.transformer.resblocks.17.mlp.c_fc.bias": "pytorch_model-00008-of-00010.bin",
399
+ "transformer.visual.transformer.resblocks.17.mlp.c_fc.weight": "pytorch_model-00008-of-00010.bin",
400
+ "transformer.visual.transformer.resblocks.17.mlp.c_proj.bias": "pytorch_model-00009-of-00010.bin",
401
+ "transformer.visual.transformer.resblocks.17.mlp.c_proj.weight": "pytorch_model-00009-of-00010.bin",
402
+ "transformer.visual.transformer.resblocks.18.attn.in_proj.bias": "pytorch_model-00009-of-00010.bin",
403
+ "transformer.visual.transformer.resblocks.18.attn.in_proj.weight": "pytorch_model-00009-of-00010.bin",
404
+ "transformer.visual.transformer.resblocks.18.attn.out_proj.bias": "pytorch_model-00009-of-00010.bin",
405
+ "transformer.visual.transformer.resblocks.18.attn.out_proj.weight": "pytorch_model-00009-of-00010.bin",
406
+ "transformer.visual.transformer.resblocks.18.ln_1.bias": "pytorch_model-00009-of-00010.bin",
407
+ "transformer.visual.transformer.resblocks.18.ln_1.weight": "pytorch_model-00009-of-00010.bin",
408
+ "transformer.visual.transformer.resblocks.18.ln_2.bias": "pytorch_model-00009-of-00010.bin",
409
+ "transformer.visual.transformer.resblocks.18.ln_2.weight": "pytorch_model-00009-of-00010.bin",
410
+ "transformer.visual.transformer.resblocks.18.mlp.c_fc.bias": "pytorch_model-00009-of-00010.bin",
411
+ "transformer.visual.transformer.resblocks.18.mlp.c_fc.weight": "pytorch_model-00009-of-00010.bin",
412
+ "transformer.visual.transformer.resblocks.18.mlp.c_proj.bias": "pytorch_model-00009-of-00010.bin",
413
+ "transformer.visual.transformer.resblocks.18.mlp.c_proj.weight": "pytorch_model-00009-of-00010.bin",
414
+ "transformer.visual.transformer.resblocks.19.attn.in_proj.bias": "pytorch_model-00009-of-00010.bin",
415
+ "transformer.visual.transformer.resblocks.19.attn.in_proj.weight": "pytorch_model-00009-of-00010.bin",
416
+ "transformer.visual.transformer.resblocks.19.attn.out_proj.bias": "pytorch_model-00009-of-00010.bin",
417
+ "transformer.visual.transformer.resblocks.19.attn.out_proj.weight": "pytorch_model-00009-of-00010.bin",
418
+ "transformer.visual.transformer.resblocks.19.ln_1.bias": "pytorch_model-00009-of-00010.bin",
419
+ "transformer.visual.transformer.resblocks.19.ln_1.weight": "pytorch_model-00009-of-00010.bin",
420
+ "transformer.visual.transformer.resblocks.19.ln_2.bias": "pytorch_model-00009-of-00010.bin",
421
+ "transformer.visual.transformer.resblocks.19.ln_2.weight": "pytorch_model-00009-of-00010.bin",
422
+ "transformer.visual.transformer.resblocks.19.mlp.c_fc.bias": "pytorch_model-00009-of-00010.bin",
423
+ "transformer.visual.transformer.resblocks.19.mlp.c_fc.weight": "pytorch_model-00009-of-00010.bin",
424
+ "transformer.visual.transformer.resblocks.19.mlp.c_proj.bias": "pytorch_model-00009-of-00010.bin",
425
+ "transformer.visual.transformer.resblocks.19.mlp.c_proj.weight": "pytorch_model-00009-of-00010.bin",
426
+ "transformer.visual.transformer.resblocks.2.attn.in_proj.bias": "pytorch_model-00008-of-00010.bin",
427
+ "transformer.visual.transformer.resblocks.2.attn.in_proj.weight": "pytorch_model-00008-of-00010.bin",
428
+ "transformer.visual.transformer.resblocks.2.attn.out_proj.bias": "pytorch_model-00008-of-00010.bin",
429
+ "transformer.visual.transformer.resblocks.2.attn.out_proj.weight": "pytorch_model-00008-of-00010.bin",
430
+ "transformer.visual.transformer.resblocks.2.ln_1.bias": "pytorch_model-00008-of-00010.bin",
431
+ "transformer.visual.transformer.resblocks.2.ln_1.weight": "pytorch_model-00008-of-00010.bin",
432
+ "transformer.visual.transformer.resblocks.2.ln_2.bias": "pytorch_model-00008-of-00010.bin",
433
+ "transformer.visual.transformer.resblocks.2.ln_2.weight": "pytorch_model-00008-of-00010.bin",
434
+ "transformer.visual.transformer.resblocks.2.mlp.c_fc.bias": "pytorch_model-00008-of-00010.bin",
435
+ "transformer.visual.transformer.resblocks.2.mlp.c_fc.weight": "pytorch_model-00008-of-00010.bin",
436
+ "transformer.visual.transformer.resblocks.2.mlp.c_proj.bias": "pytorch_model-00008-of-00010.bin",
437
+ "transformer.visual.transformer.resblocks.2.mlp.c_proj.weight": "pytorch_model-00008-of-00010.bin",
438
+ "transformer.visual.transformer.resblocks.20.attn.in_proj.bias": "pytorch_model-00009-of-00010.bin",
439
+ "transformer.visual.transformer.resblocks.20.attn.in_proj.weight": "pytorch_model-00009-of-00010.bin",
440
+ "transformer.visual.transformer.resblocks.20.attn.out_proj.bias": "pytorch_model-00009-of-00010.bin",
441
+ "transformer.visual.transformer.resblocks.20.attn.out_proj.weight": "pytorch_model-00009-of-00010.bin",
442
+ "transformer.visual.transformer.resblocks.20.ln_1.bias": "pytorch_model-00009-of-00010.bin",
443
+ "transformer.visual.transformer.resblocks.20.ln_1.weight": "pytorch_model-00009-of-00010.bin",
444
+ "transformer.visual.transformer.resblocks.20.ln_2.bias": "pytorch_model-00009-of-00010.bin",
445
+ "transformer.visual.transformer.resblocks.20.ln_2.weight": "pytorch_model-00009-of-00010.bin",
446
+ "transformer.visual.transformer.resblocks.20.mlp.c_fc.bias": "pytorch_model-00009-of-00010.bin",
447
+ "transformer.visual.transformer.resblocks.20.mlp.c_fc.weight": "pytorch_model-00009-of-00010.bin",
448
+ "transformer.visual.transformer.resblocks.20.mlp.c_proj.bias": "pytorch_model-00009-of-00010.bin",
449
+ "transformer.visual.transformer.resblocks.20.mlp.c_proj.weight": "pytorch_model-00009-of-00010.bin",
450
+ "transformer.visual.transformer.resblocks.21.attn.in_proj.bias": "pytorch_model-00009-of-00010.bin",
451
+ "transformer.visual.transformer.resblocks.21.attn.in_proj.weight": "pytorch_model-00009-of-00010.bin",
452
+ "transformer.visual.transformer.resblocks.21.attn.out_proj.bias": "pytorch_model-00009-of-00010.bin",
453
+ "transformer.visual.transformer.resblocks.21.attn.out_proj.weight": "pytorch_model-00009-of-00010.bin",
454
+ "transformer.visual.transformer.resblocks.21.ln_1.bias": "pytorch_model-00009-of-00010.bin",
455
+ "transformer.visual.transformer.resblocks.21.ln_1.weight": "pytorch_model-00009-of-00010.bin",
456
+ "transformer.visual.transformer.resblocks.21.ln_2.bias": "pytorch_model-00009-of-00010.bin",
457
+ "transformer.visual.transformer.resblocks.21.ln_2.weight": "pytorch_model-00009-of-00010.bin",
458
+ "transformer.visual.transformer.resblocks.21.mlp.c_fc.bias": "pytorch_model-00009-of-00010.bin",
459
+ "transformer.visual.transformer.resblocks.21.mlp.c_fc.weight": "pytorch_model-00009-of-00010.bin",
460
+ "transformer.visual.transformer.resblocks.21.mlp.c_proj.bias": "pytorch_model-00009-of-00010.bin",
461
+ "transformer.visual.transformer.resblocks.21.mlp.c_proj.weight": "pytorch_model-00009-of-00010.bin",
462
+ "transformer.visual.transformer.resblocks.22.attn.in_proj.bias": "pytorch_model-00009-of-00010.bin",
463
+ "transformer.visual.transformer.resblocks.22.attn.in_proj.weight": "pytorch_model-00009-of-00010.bin",
464
+ "transformer.visual.transformer.resblocks.22.attn.out_proj.bias": "pytorch_model-00009-of-00010.bin",
465
+ "transformer.visual.transformer.resblocks.22.attn.out_proj.weight": "pytorch_model-00009-of-00010.bin",
466
+ "transformer.visual.transformer.resblocks.22.ln_1.bias": "pytorch_model-00009-of-00010.bin",
467
+ "transformer.visual.transformer.resblocks.22.ln_1.weight": "pytorch_model-00009-of-00010.bin",
468
+ "transformer.visual.transformer.resblocks.22.ln_2.bias": "pytorch_model-00009-of-00010.bin",
469
+ "transformer.visual.transformer.resblocks.22.ln_2.weight": "pytorch_model-00009-of-00010.bin",
470
+ "transformer.visual.transformer.resblocks.22.mlp.c_fc.bias": "pytorch_model-00009-of-00010.bin",
471
+ "transformer.visual.transformer.resblocks.22.mlp.c_fc.weight": "pytorch_model-00009-of-00010.bin",
472
+ "transformer.visual.transformer.resblocks.22.mlp.c_proj.bias": "pytorch_model-00009-of-00010.bin",
473
+ "transformer.visual.transformer.resblocks.22.mlp.c_proj.weight": "pytorch_model-00009-of-00010.bin",
474
+ "transformer.visual.transformer.resblocks.23.attn.in_proj.bias": "pytorch_model-00009-of-00010.bin",
475
+ "transformer.visual.transformer.resblocks.23.attn.in_proj.weight": "pytorch_model-00009-of-00010.bin",
476
+ "transformer.visual.transformer.resblocks.23.attn.out_proj.bias": "pytorch_model-00009-of-00010.bin",
477
+ "transformer.visual.transformer.resblocks.23.attn.out_proj.weight": "pytorch_model-00009-of-00010.bin",
478
+ "transformer.visual.transformer.resblocks.23.ln_1.bias": "pytorch_model-00009-of-00010.bin",
479
+ "transformer.visual.transformer.resblocks.23.ln_1.weight": "pytorch_model-00009-of-00010.bin",
480
+ "transformer.visual.transformer.resblocks.23.ln_2.bias": "pytorch_model-00009-of-00010.bin",
481
+ "transformer.visual.transformer.resblocks.23.ln_2.weight": "pytorch_model-00009-of-00010.bin",
482
+ "transformer.visual.transformer.resblocks.23.mlp.c_fc.bias": "pytorch_model-00009-of-00010.bin",
483
+ "transformer.visual.transformer.resblocks.23.mlp.c_fc.weight": "pytorch_model-00009-of-00010.bin",
484
+ "transformer.visual.transformer.resblocks.23.mlp.c_proj.bias": "pytorch_model-00009-of-00010.bin",
485
+ "transformer.visual.transformer.resblocks.23.mlp.c_proj.weight": "pytorch_model-00009-of-00010.bin",
486
+ "transformer.visual.transformer.resblocks.24.attn.in_proj.bias": "pytorch_model-00009-of-00010.bin",
487
+ "transformer.visual.transformer.resblocks.24.attn.in_proj.weight": "pytorch_model-00009-of-00010.bin",
488
+ "transformer.visual.transformer.resblocks.24.attn.out_proj.bias": "pytorch_model-00009-of-00010.bin",
489
+ "transformer.visual.transformer.resblocks.24.attn.out_proj.weight": "pytorch_model-00009-of-00010.bin",
490
+ "transformer.visual.transformer.resblocks.24.ln_1.bias": "pytorch_model-00009-of-00010.bin",
491
+ "transformer.visual.transformer.resblocks.24.ln_1.weight": "pytorch_model-00009-of-00010.bin",
492
+ "transformer.visual.transformer.resblocks.24.ln_2.bias": "pytorch_model-00009-of-00010.bin",
493
+ "transformer.visual.transformer.resblocks.24.ln_2.weight": "pytorch_model-00009-of-00010.bin",
494
+ "transformer.visual.transformer.resblocks.24.mlp.c_fc.bias": "pytorch_model-00009-of-00010.bin",
495
+ "transformer.visual.transformer.resblocks.24.mlp.c_fc.weight": "pytorch_model-00009-of-00010.bin",
496
+ "transformer.visual.transformer.resblocks.24.mlp.c_proj.bias": "pytorch_model-00009-of-00010.bin",
497
+ "transformer.visual.transformer.resblocks.24.mlp.c_proj.weight": "pytorch_model-00009-of-00010.bin",
498
+ "transformer.visual.transformer.resblocks.25.attn.in_proj.bias": "pytorch_model-00009-of-00010.bin",
499
+ "transformer.visual.transformer.resblocks.25.attn.in_proj.weight": "pytorch_model-00009-of-00010.bin",
500
+ "transformer.visual.transformer.resblocks.25.attn.out_proj.bias": "pytorch_model-00009-of-00010.bin",
501
+ "transformer.visual.transformer.resblocks.25.attn.out_proj.weight": "pytorch_model-00009-of-00010.bin",
502
+ "transformer.visual.transformer.resblocks.25.ln_1.bias": "pytorch_model-00009-of-00010.bin",
503
+ "transformer.visual.transformer.resblocks.25.ln_1.weight": "pytorch_model-00009-of-00010.bin",
504
+ "transformer.visual.transformer.resblocks.25.ln_2.bias": "pytorch_model-00009-of-00010.bin",
505
+ "transformer.visual.transformer.resblocks.25.ln_2.weight": "pytorch_model-00009-of-00010.bin",
506
+ "transformer.visual.transformer.resblocks.25.mlp.c_fc.bias": "pytorch_model-00009-of-00010.bin",
507
+ "transformer.visual.transformer.resblocks.25.mlp.c_fc.weight": "pytorch_model-00009-of-00010.bin",
508
+ "transformer.visual.transformer.resblocks.25.mlp.c_proj.bias": "pytorch_model-00009-of-00010.bin",
509
+ "transformer.visual.transformer.resblocks.25.mlp.c_proj.weight": "pytorch_model-00009-of-00010.bin",
510
+ "transformer.visual.transformer.resblocks.26.attn.in_proj.bias": "pytorch_model-00009-of-00010.bin",
511
+ "transformer.visual.transformer.resblocks.26.attn.in_proj.weight": "pytorch_model-00009-of-00010.bin",
512
+ "transformer.visual.transformer.resblocks.26.attn.out_proj.bias": "pytorch_model-00009-of-00010.bin",
513
+ "transformer.visual.transformer.resblocks.26.attn.out_proj.weight": "pytorch_model-00009-of-00010.bin",
514
+ "transformer.visual.transformer.resblocks.26.ln_1.bias": "pytorch_model-00009-of-00010.bin",
515
+ "transformer.visual.transformer.resblocks.26.ln_1.weight": "pytorch_model-00009-of-00010.bin",
516
+ "transformer.visual.transformer.resblocks.26.ln_2.bias": "pytorch_model-00009-of-00010.bin",
517
+ "transformer.visual.transformer.resblocks.26.ln_2.weight": "pytorch_model-00009-of-00010.bin",
518
+ "transformer.visual.transformer.resblocks.26.mlp.c_fc.bias": "pytorch_model-00009-of-00010.bin",
519
+ "transformer.visual.transformer.resblocks.26.mlp.c_fc.weight": "pytorch_model-00009-of-00010.bin",
520
+ "transformer.visual.transformer.resblocks.26.mlp.c_proj.bias": "pytorch_model-00009-of-00010.bin",
521
+ "transformer.visual.transformer.resblocks.26.mlp.c_proj.weight": "pytorch_model-00009-of-00010.bin",
522
+ "transformer.visual.transformer.resblocks.27.attn.in_proj.bias": "pytorch_model-00009-of-00010.bin",
523
+ "transformer.visual.transformer.resblocks.27.attn.in_proj.weight": "pytorch_model-00009-of-00010.bin",
524
+ "transformer.visual.transformer.resblocks.27.attn.out_proj.bias": "pytorch_model-00009-of-00010.bin",
525
+ "transformer.visual.transformer.resblocks.27.attn.out_proj.weight": "pytorch_model-00009-of-00010.bin",
526
+ "transformer.visual.transformer.resblocks.27.ln_1.bias": "pytorch_model-00009-of-00010.bin",
527
+ "transformer.visual.transformer.resblocks.27.ln_1.weight": "pytorch_model-00009-of-00010.bin",
528
+ "transformer.visual.transformer.resblocks.27.ln_2.bias": "pytorch_model-00009-of-00010.bin",
529
+ "transformer.visual.transformer.resblocks.27.ln_2.weight": "pytorch_model-00009-of-00010.bin",
530
+ "transformer.visual.transformer.resblocks.27.mlp.c_fc.bias": "pytorch_model-00009-of-00010.bin",
531
+ "transformer.visual.transformer.resblocks.27.mlp.c_fc.weight": "pytorch_model-00009-of-00010.bin",
532
+ "transformer.visual.transformer.resblocks.27.mlp.c_proj.bias": "pytorch_model-00009-of-00010.bin",
533
+ "transformer.visual.transformer.resblocks.27.mlp.c_proj.weight": "pytorch_model-00009-of-00010.bin",
534
+ "transformer.visual.transformer.resblocks.28.attn.in_proj.bias": "pytorch_model-00009-of-00010.bin",
535
+ "transformer.visual.transformer.resblocks.28.attn.in_proj.weight": "pytorch_model-00009-of-00010.bin",
536
+ "transformer.visual.transformer.resblocks.28.attn.out_proj.bias": "pytorch_model-00009-of-00010.bin",
537
+ "transformer.visual.transformer.resblocks.28.attn.out_proj.weight": "pytorch_model-00009-of-00010.bin",
538
+ "transformer.visual.transformer.resblocks.28.ln_1.bias": "pytorch_model-00009-of-00010.bin",
539
+ "transformer.visual.transformer.resblocks.28.ln_1.weight": "pytorch_model-00009-of-00010.bin",
540
+ "transformer.visual.transformer.resblocks.28.ln_2.bias": "pytorch_model-00009-of-00010.bin",
541
+ "transformer.visual.transformer.resblocks.28.ln_2.weight": "pytorch_model-00009-of-00010.bin",
542
+ "transformer.visual.transformer.resblocks.28.mlp.c_fc.bias": "pytorch_model-00009-of-00010.bin",
543
+ "transformer.visual.transformer.resblocks.28.mlp.c_fc.weight": "pytorch_model-00009-of-00010.bin",
544
+ "transformer.visual.transformer.resblocks.28.mlp.c_proj.bias": "pytorch_model-00009-of-00010.bin",
545
+ "transformer.visual.transformer.resblocks.28.mlp.c_proj.weight": "pytorch_model-00009-of-00010.bin",
546
+ "transformer.visual.transformer.resblocks.29.attn.in_proj.bias": "pytorch_model-00009-of-00010.bin",
547
+ "transformer.visual.transformer.resblocks.29.attn.in_proj.weight": "pytorch_model-00009-of-00010.bin",
548
+ "transformer.visual.transformer.resblocks.29.attn.out_proj.bias": "pytorch_model-00009-of-00010.bin",
549
+ "transformer.visual.transformer.resblocks.29.attn.out_proj.weight": "pytorch_model-00009-of-00010.bin",
550
+ "transformer.visual.transformer.resblocks.29.ln_1.bias": "pytorch_model-00009-of-00010.bin",
551
+ "transformer.visual.transformer.resblocks.29.ln_1.weight": "pytorch_model-00009-of-00010.bin",
552
+ "transformer.visual.transformer.resblocks.29.ln_2.bias": "pytorch_model-00009-of-00010.bin",
553
+ "transformer.visual.transformer.resblocks.29.ln_2.weight": "pytorch_model-00009-of-00010.bin",
554
+ "transformer.visual.transformer.resblocks.29.mlp.c_fc.bias": "pytorch_model-00009-of-00010.bin",
555
+ "transformer.visual.transformer.resblocks.29.mlp.c_fc.weight": "pytorch_model-00009-of-00010.bin",
556
+ "transformer.visual.transformer.resblocks.29.mlp.c_proj.bias": "pytorch_model-00009-of-00010.bin",
557
+ "transformer.visual.transformer.resblocks.29.mlp.c_proj.weight": "pytorch_model-00009-of-00010.bin",
558
+ "transformer.visual.transformer.resblocks.3.attn.in_proj.bias": "pytorch_model-00008-of-00010.bin",
559
+ "transformer.visual.transformer.resblocks.3.attn.in_proj.weight": "pytorch_model-00008-of-00010.bin",
560
+ "transformer.visual.transformer.resblocks.3.attn.out_proj.bias": "pytorch_model-00008-of-00010.bin",
561
+ "transformer.visual.transformer.resblocks.3.attn.out_proj.weight": "pytorch_model-00008-of-00010.bin",
562
+ "transformer.visual.transformer.resblocks.3.ln_1.bias": "pytorch_model-00008-of-00010.bin",
563
+ "transformer.visual.transformer.resblocks.3.ln_1.weight": "pytorch_model-00008-of-00010.bin",
564
+ "transformer.visual.transformer.resblocks.3.ln_2.bias": "pytorch_model-00008-of-00010.bin",
565
+ "transformer.visual.transformer.resblocks.3.ln_2.weight": "pytorch_model-00008-of-00010.bin",
566
+ "transformer.visual.transformer.resblocks.3.mlp.c_fc.bias": "pytorch_model-00008-of-00010.bin",
567
+ "transformer.visual.transformer.resblocks.3.mlp.c_fc.weight": "pytorch_model-00008-of-00010.bin",
568
+ "transformer.visual.transformer.resblocks.3.mlp.c_proj.bias": "pytorch_model-00008-of-00010.bin",
569
+ "transformer.visual.transformer.resblocks.3.mlp.c_proj.weight": "pytorch_model-00008-of-00010.bin",
570
+ "transformer.visual.transformer.resblocks.30.attn.in_proj.bias": "pytorch_model-00009-of-00010.bin",
571
+ "transformer.visual.transformer.resblocks.30.attn.in_proj.weight": "pytorch_model-00009-of-00010.bin",
572
+ "transformer.visual.transformer.resblocks.30.attn.out_proj.bias": "pytorch_model-00009-of-00010.bin",
573
+ "transformer.visual.transformer.resblocks.30.attn.out_proj.weight": "pytorch_model-00009-of-00010.bin",
574
+ "transformer.visual.transformer.resblocks.30.ln_1.bias": "pytorch_model-00009-of-00010.bin",
575
+ "transformer.visual.transformer.resblocks.30.ln_1.weight": "pytorch_model-00009-of-00010.bin",
576
+ "transformer.visual.transformer.resblocks.30.ln_2.bias": "pytorch_model-00009-of-00010.bin",
577
+ "transformer.visual.transformer.resblocks.30.ln_2.weight": "pytorch_model-00009-of-00010.bin",
578
+ "transformer.visual.transformer.resblocks.30.mlp.c_fc.bias": "pytorch_model-00009-of-00010.bin",
579
+ "transformer.visual.transformer.resblocks.30.mlp.c_fc.weight": "pytorch_model-00009-of-00010.bin",
580
+ "transformer.visual.transformer.resblocks.30.mlp.c_proj.bias": "pytorch_model-00009-of-00010.bin",
581
+ "transformer.visual.transformer.resblocks.30.mlp.c_proj.weight": "pytorch_model-00009-of-00010.bin",
582
+ "transformer.visual.transformer.resblocks.31.attn.in_proj.bias": "pytorch_model-00009-of-00010.bin",
583
+ "transformer.visual.transformer.resblocks.31.attn.in_proj.weight": "pytorch_model-00009-of-00010.bin",
584
+ "transformer.visual.transformer.resblocks.31.attn.out_proj.bias": "pytorch_model-00009-of-00010.bin",
585
+ "transformer.visual.transformer.resblocks.31.attn.out_proj.weight": "pytorch_model-00009-of-00010.bin",
586
+ "transformer.visual.transformer.resblocks.31.ln_1.bias": "pytorch_model-00009-of-00010.bin",
587
+ "transformer.visual.transformer.resblocks.31.ln_1.weight": "pytorch_model-00009-of-00010.bin",
588
+ "transformer.visual.transformer.resblocks.31.ln_2.bias": "pytorch_model-00009-of-00010.bin",
589
+ "transformer.visual.transformer.resblocks.31.ln_2.weight": "pytorch_model-00009-of-00010.bin",
590
+ "transformer.visual.transformer.resblocks.31.mlp.c_fc.bias": "pytorch_model-00009-of-00010.bin",
591
+ "transformer.visual.transformer.resblocks.31.mlp.c_fc.weight": "pytorch_model-00009-of-00010.bin",
592
+ "transformer.visual.transformer.resblocks.31.mlp.c_proj.bias": "pytorch_model-00009-of-00010.bin",
593
+ "transformer.visual.transformer.resblocks.31.mlp.c_proj.weight": "pytorch_model-00009-of-00010.bin",
594
+ "transformer.visual.transformer.resblocks.32.attn.in_proj.bias": "pytorch_model-00009-of-00010.bin",
595
+ "transformer.visual.transformer.resblocks.32.attn.in_proj.weight": "pytorch_model-00009-of-00010.bin",
596
+ "transformer.visual.transformer.resblocks.32.attn.out_proj.bias": "pytorch_model-00009-of-00010.bin",
597
+ "transformer.visual.transformer.resblocks.32.attn.out_proj.weight": "pytorch_model-00009-of-00010.bin",
598
+ "transformer.visual.transformer.resblocks.32.ln_1.bias": "pytorch_model-00009-of-00010.bin",
599
+ "transformer.visual.transformer.resblocks.32.ln_1.weight": "pytorch_model-00009-of-00010.bin",
600
+ "transformer.visual.transformer.resblocks.32.ln_2.bias": "pytorch_model-00009-of-00010.bin",
601
+ "transformer.visual.transformer.resblocks.32.ln_2.weight": "pytorch_model-00009-of-00010.bin",
602
+ "transformer.visual.transformer.resblocks.32.mlp.c_fc.bias": "pytorch_model-00009-of-00010.bin",
603
+ "transformer.visual.transformer.resblocks.32.mlp.c_fc.weight": "pytorch_model-00009-of-00010.bin",
604
+ "transformer.visual.transformer.resblocks.32.mlp.c_proj.bias": "pytorch_model-00009-of-00010.bin",
605
+ "transformer.visual.transformer.resblocks.32.mlp.c_proj.weight": "pytorch_model-00009-of-00010.bin",
606
+ "transformer.visual.transformer.resblocks.33.attn.in_proj.bias": "pytorch_model-00009-of-00010.bin",
607
+ "transformer.visual.transformer.resblocks.33.attn.in_proj.weight": "pytorch_model-00009-of-00010.bin",
608
+ "transformer.visual.transformer.resblocks.33.attn.out_proj.bias": "pytorch_model-00009-of-00010.bin",
609
+ "transformer.visual.transformer.resblocks.33.attn.out_proj.weight": "pytorch_model-00009-of-00010.bin",
610
+ "transformer.visual.transformer.resblocks.33.ln_1.bias": "pytorch_model-00009-of-00010.bin",
611
+ "transformer.visual.transformer.resblocks.33.ln_1.weight": "pytorch_model-00009-of-00010.bin",
612
+ "transformer.visual.transformer.resblocks.33.ln_2.bias": "pytorch_model-00009-of-00010.bin",
613
+ "transformer.visual.transformer.resblocks.33.ln_2.weight": "pytorch_model-00009-of-00010.bin",
614
+ "transformer.visual.transformer.resblocks.33.mlp.c_fc.bias": "pytorch_model-00009-of-00010.bin",
615
+ "transformer.visual.transformer.resblocks.33.mlp.c_fc.weight": "pytorch_model-00009-of-00010.bin",
616
+ "transformer.visual.transformer.resblocks.33.mlp.c_proj.bias": "pytorch_model-00009-of-00010.bin",
617
+ "transformer.visual.transformer.resblocks.33.mlp.c_proj.weight": "pytorch_model-00009-of-00010.bin",
618
+ "transformer.visual.transformer.resblocks.34.attn.in_proj.bias": "pytorch_model-00009-of-00010.bin",
619
+ "transformer.visual.transformer.resblocks.34.attn.in_proj.weight": "pytorch_model-00009-of-00010.bin",
620
+ "transformer.visual.transformer.resblocks.34.attn.out_proj.bias": "pytorch_model-00009-of-00010.bin",
621
+ "transformer.visual.transformer.resblocks.34.attn.out_proj.weight": "pytorch_model-00009-of-00010.bin",
622
+ "transformer.visual.transformer.resblocks.34.ln_1.bias": "pytorch_model-00009-of-00010.bin",
623
+ "transformer.visual.transformer.resblocks.34.ln_1.weight": "pytorch_model-00009-of-00010.bin",
624
+ "transformer.visual.transformer.resblocks.34.ln_2.bias": "pytorch_model-00009-of-00010.bin",
625
+ "transformer.visual.transformer.resblocks.34.ln_2.weight": "pytorch_model-00009-of-00010.bin",
626
+ "transformer.visual.transformer.resblocks.34.mlp.c_fc.bias": "pytorch_model-00009-of-00010.bin",
627
+ "transformer.visual.transformer.resblocks.34.mlp.c_fc.weight": "pytorch_model-00009-of-00010.bin",
628
+ "transformer.visual.transformer.resblocks.34.mlp.c_proj.bias": "pytorch_model-00009-of-00010.bin",
629
+ "transformer.visual.transformer.resblocks.34.mlp.c_proj.weight": "pytorch_model-00009-of-00010.bin",
630
+ "transformer.visual.transformer.resblocks.35.attn.in_proj.bias": "pytorch_model-00009-of-00010.bin",
631
+ "transformer.visual.transformer.resblocks.35.attn.in_proj.weight": "pytorch_model-00009-of-00010.bin",
632
+ "transformer.visual.transformer.resblocks.35.attn.out_proj.bias": "pytorch_model-00009-of-00010.bin",
633
+ "transformer.visual.transformer.resblocks.35.attn.out_proj.weight": "pytorch_model-00009-of-00010.bin",
634
+ "transformer.visual.transformer.resblocks.35.ln_1.bias": "pytorch_model-00009-of-00010.bin",
635
+ "transformer.visual.transformer.resblocks.35.ln_1.weight": "pytorch_model-00009-of-00010.bin",
636
+ "transformer.visual.transformer.resblocks.35.ln_2.bias": "pytorch_model-00009-of-00010.bin",
637
+ "transformer.visual.transformer.resblocks.35.ln_2.weight": "pytorch_model-00009-of-00010.bin",
638
+ "transformer.visual.transformer.resblocks.35.mlp.c_fc.bias": "pytorch_model-00009-of-00010.bin",
639
+ "transformer.visual.transformer.resblocks.35.mlp.c_fc.weight": "pytorch_model-00009-of-00010.bin",
640
+ "transformer.visual.transformer.resblocks.35.mlp.c_proj.bias": "pytorch_model-00009-of-00010.bin",
641
+ "transformer.visual.transformer.resblocks.35.mlp.c_proj.weight": "pytorch_model-00009-of-00010.bin",
642
+ "transformer.visual.transformer.resblocks.36.attn.in_proj.bias": "pytorch_model-00009-of-00010.bin",
643
+ "transformer.visual.transformer.resblocks.36.attn.in_proj.weight": "pytorch_model-00009-of-00010.bin",
644
+ "transformer.visual.transformer.resblocks.36.attn.out_proj.bias": "pytorch_model-00009-of-00010.bin",
645
+ "transformer.visual.transformer.resblocks.36.attn.out_proj.weight": "pytorch_model-00009-of-00010.bin",
646
+ "transformer.visual.transformer.resblocks.36.ln_1.bias": "pytorch_model-00009-of-00010.bin",
647
+ "transformer.visual.transformer.resblocks.36.ln_1.weight": "pytorch_model-00009-of-00010.bin",
648
+ "transformer.visual.transformer.resblocks.36.ln_2.bias": "pytorch_model-00009-of-00010.bin",
649
+ "transformer.visual.transformer.resblocks.36.ln_2.weight": "pytorch_model-00009-of-00010.bin",
650
+ "transformer.visual.transformer.resblocks.36.mlp.c_fc.bias": "pytorch_model-00009-of-00010.bin",
651
+ "transformer.visual.transformer.resblocks.36.mlp.c_fc.weight": "pytorch_model-00009-of-00010.bin",
652
+ "transformer.visual.transformer.resblocks.36.mlp.c_proj.bias": "pytorch_model-00009-of-00010.bin",
653
+ "transformer.visual.transformer.resblocks.36.mlp.c_proj.weight": "pytorch_model-00009-of-00010.bin",
654
+ "transformer.visual.transformer.resblocks.37.attn.in_proj.bias": "pytorch_model-00009-of-00010.bin",
655
+ "transformer.visual.transformer.resblocks.37.attn.in_proj.weight": "pytorch_model-00009-of-00010.bin",
656
+ "transformer.visual.transformer.resblocks.37.attn.out_proj.bias": "pytorch_model-00009-of-00010.bin",
657
+ "transformer.visual.transformer.resblocks.37.attn.out_proj.weight": "pytorch_model-00009-of-00010.bin",
658
+ "transformer.visual.transformer.resblocks.37.ln_1.bias": "pytorch_model-00009-of-00010.bin",
659
+ "transformer.visual.transformer.resblocks.37.ln_1.weight": "pytorch_model-00009-of-00010.bin",
660
+ "transformer.visual.transformer.resblocks.37.ln_2.bias": "pytorch_model-00009-of-00010.bin",
661
+ "transformer.visual.transformer.resblocks.37.ln_2.weight": "pytorch_model-00009-of-00010.bin",
662
+ "transformer.visual.transformer.resblocks.37.mlp.c_fc.bias": "pytorch_model-00009-of-00010.bin",
663
+ "transformer.visual.transformer.resblocks.37.mlp.c_fc.weight": "pytorch_model-00009-of-00010.bin",
664
+ "transformer.visual.transformer.resblocks.37.mlp.c_proj.bias": "pytorch_model-00009-of-00010.bin",
665
+ "transformer.visual.transformer.resblocks.37.mlp.c_proj.weight": "pytorch_model-00009-of-00010.bin",
666
+ "transformer.visual.transformer.resblocks.38.attn.in_proj.bias": "pytorch_model-00009-of-00010.bin",
667
+ "transformer.visual.transformer.resblocks.38.attn.in_proj.weight": "pytorch_model-00009-of-00010.bin",
668
+ "transformer.visual.transformer.resblocks.38.attn.out_proj.bias": "pytorch_model-00009-of-00010.bin",
669
+ "transformer.visual.transformer.resblocks.38.attn.out_proj.weight": "pytorch_model-00009-of-00010.bin",
670
+ "transformer.visual.transformer.resblocks.38.ln_1.bias": "pytorch_model-00009-of-00010.bin",
671
+ "transformer.visual.transformer.resblocks.38.ln_1.weight": "pytorch_model-00009-of-00010.bin",
672
+ "transformer.visual.transformer.resblocks.38.ln_2.bias": "pytorch_model-00009-of-00010.bin",
673
+ "transformer.visual.transformer.resblocks.38.ln_2.weight": "pytorch_model-00009-of-00010.bin",
674
+ "transformer.visual.transformer.resblocks.38.mlp.c_fc.bias": "pytorch_model-00009-of-00010.bin",
675
+ "transformer.visual.transformer.resblocks.38.mlp.c_fc.weight": "pytorch_model-00009-of-00010.bin",
676
+ "transformer.visual.transformer.resblocks.38.mlp.c_proj.bias": "pytorch_model-00009-of-00010.bin",
677
+ "transformer.visual.transformer.resblocks.38.mlp.c_proj.weight": "pytorch_model-00009-of-00010.bin",
678
+ "transformer.visual.transformer.resblocks.39.attn.in_proj.bias": "pytorch_model-00009-of-00010.bin",
679
+ "transformer.visual.transformer.resblocks.39.attn.in_proj.weight": "pytorch_model-00009-of-00010.bin",
680
+ "transformer.visual.transformer.resblocks.39.attn.out_proj.bias": "pytorch_model-00009-of-00010.bin",
681
+ "transformer.visual.transformer.resblocks.39.attn.out_proj.weight": "pytorch_model-00009-of-00010.bin",
682
+ "transformer.visual.transformer.resblocks.39.ln_1.bias": "pytorch_model-00009-of-00010.bin",
683
+ "transformer.visual.transformer.resblocks.39.ln_1.weight": "pytorch_model-00009-of-00010.bin",
684
+ "transformer.visual.transformer.resblocks.39.ln_2.bias": "pytorch_model-00009-of-00010.bin",
685
+ "transformer.visual.transformer.resblocks.39.ln_2.weight": "pytorch_model-00009-of-00010.bin",
686
+ "transformer.visual.transformer.resblocks.39.mlp.c_fc.bias": "pytorch_model-00009-of-00010.bin",
687
+ "transformer.visual.transformer.resblocks.39.mlp.c_fc.weight": "pytorch_model-00009-of-00010.bin",
688
+ "transformer.visual.transformer.resblocks.39.mlp.c_proj.bias": "pytorch_model-00009-of-00010.bin",
689
+ "transformer.visual.transformer.resblocks.39.mlp.c_proj.weight": "pytorch_model-00009-of-00010.bin",
690
+ "transformer.visual.transformer.resblocks.4.attn.in_proj.bias": "pytorch_model-00008-of-00010.bin",
691
+ "transformer.visual.transformer.resblocks.4.attn.in_proj.weight": "pytorch_model-00008-of-00010.bin",
692
+ "transformer.visual.transformer.resblocks.4.attn.out_proj.bias": "pytorch_model-00008-of-00010.bin",
693
+ "transformer.visual.transformer.resblocks.4.attn.out_proj.weight": "pytorch_model-00008-of-00010.bin",
694
+ "transformer.visual.transformer.resblocks.4.ln_1.bias": "pytorch_model-00008-of-00010.bin",
695
+ "transformer.visual.transformer.resblocks.4.ln_1.weight": "pytorch_model-00008-of-00010.bin",
696
+ "transformer.visual.transformer.resblocks.4.ln_2.bias": "pytorch_model-00008-of-00010.bin",
697
+ "transformer.visual.transformer.resblocks.4.ln_2.weight": "pytorch_model-00008-of-00010.bin",
698
+ "transformer.visual.transformer.resblocks.4.mlp.c_fc.bias": "pytorch_model-00008-of-00010.bin",
699
+ "transformer.visual.transformer.resblocks.4.mlp.c_fc.weight": "pytorch_model-00008-of-00010.bin",
700
+ "transformer.visual.transformer.resblocks.4.mlp.c_proj.bias": "pytorch_model-00008-of-00010.bin",
701
+ "transformer.visual.transformer.resblocks.4.mlp.c_proj.weight": "pytorch_model-00008-of-00010.bin",
702
+ "transformer.visual.transformer.resblocks.40.attn.in_proj.bias": "pytorch_model-00009-of-00010.bin",
703
+ "transformer.visual.transformer.resblocks.40.attn.in_proj.weight": "pytorch_model-00009-of-00010.bin",
704
+ "transformer.visual.transformer.resblocks.40.attn.out_proj.bias": "pytorch_model-00009-of-00010.bin",
705
+ "transformer.visual.transformer.resblocks.40.attn.out_proj.weight": "pytorch_model-00009-of-00010.bin",
706
+ "transformer.visual.transformer.resblocks.40.ln_1.bias": "pytorch_model-00009-of-00010.bin",
707
+ "transformer.visual.transformer.resblocks.40.ln_1.weight": "pytorch_model-00009-of-00010.bin",
708
+ "transformer.visual.transformer.resblocks.40.ln_2.bias": "pytorch_model-00009-of-00010.bin",
709
+ "transformer.visual.transformer.resblocks.40.ln_2.weight": "pytorch_model-00009-of-00010.bin",
710
+ "transformer.visual.transformer.resblocks.40.mlp.c_fc.bias": "pytorch_model-00009-of-00010.bin",
711
+ "transformer.visual.transformer.resblocks.40.mlp.c_fc.weight": "pytorch_model-00009-of-00010.bin",
712
+ "transformer.visual.transformer.resblocks.40.mlp.c_proj.bias": "pytorch_model-00009-of-00010.bin",
713
+ "transformer.visual.transformer.resblocks.40.mlp.c_proj.weight": "pytorch_model-00009-of-00010.bin",
714
+ "transformer.visual.transformer.resblocks.41.attn.in_proj.bias": "pytorch_model-00009-of-00010.bin",
715
+ "transformer.visual.transformer.resblocks.41.attn.in_proj.weight": "pytorch_model-00009-of-00010.bin",
716
+ "transformer.visual.transformer.resblocks.41.attn.out_proj.bias": "pytorch_model-00009-of-00010.bin",
717
+ "transformer.visual.transformer.resblocks.41.attn.out_proj.weight": "pytorch_model-00009-of-00010.bin",
718
+ "transformer.visual.transformer.resblocks.41.ln_1.bias": "pytorch_model-00009-of-00010.bin",
719
+ "transformer.visual.transformer.resblocks.41.ln_1.weight": "pytorch_model-00009-of-00010.bin",
720
+ "transformer.visual.transformer.resblocks.41.ln_2.bias": "pytorch_model-00009-of-00010.bin",
721
+ "transformer.visual.transformer.resblocks.41.ln_2.weight": "pytorch_model-00009-of-00010.bin",
722
+ "transformer.visual.transformer.resblocks.41.mlp.c_fc.bias": "pytorch_model-00009-of-00010.bin",
723
+ "transformer.visual.transformer.resblocks.41.mlp.c_fc.weight": "pytorch_model-00009-of-00010.bin",
724
+ "transformer.visual.transformer.resblocks.41.mlp.c_proj.bias": "pytorch_model-00009-of-00010.bin",
725
+ "transformer.visual.transformer.resblocks.41.mlp.c_proj.weight": "pytorch_model-00009-of-00010.bin",
726
+ "transformer.visual.transformer.resblocks.42.attn.in_proj.bias": "pytorch_model-00009-of-00010.bin",
727
+ "transformer.visual.transformer.resblocks.42.attn.in_proj.weight": "pytorch_model-00009-of-00010.bin",
728
+ "transformer.visual.transformer.resblocks.42.attn.out_proj.bias": "pytorch_model-00009-of-00010.bin",
729
+ "transformer.visual.transformer.resblocks.42.attn.out_proj.weight": "pytorch_model-00009-of-00010.bin",
730
+ "transformer.visual.transformer.resblocks.42.ln_1.bias": "pytorch_model-00009-of-00010.bin",
731
+ "transformer.visual.transformer.resblocks.42.ln_1.weight": "pytorch_model-00009-of-00010.bin",
732
+ "transformer.visual.transformer.resblocks.42.ln_2.bias": "pytorch_model-00009-of-00010.bin",
733
+ "transformer.visual.transformer.resblocks.42.ln_2.weight": "pytorch_model-00009-of-00010.bin",
734
+ "transformer.visual.transformer.resblocks.42.mlp.c_fc.bias": "pytorch_model-00009-of-00010.bin",
735
+ "transformer.visual.transformer.resblocks.42.mlp.c_fc.weight": "pytorch_model-00009-of-00010.bin",
736
+ "transformer.visual.transformer.resblocks.42.mlp.c_proj.bias": "pytorch_model-00009-of-00010.bin",
737
+ "transformer.visual.transformer.resblocks.42.mlp.c_proj.weight": "pytorch_model-00009-of-00010.bin",
738
+ "transformer.visual.transformer.resblocks.43.attn.in_proj.bias": "pytorch_model-00009-of-00010.bin",
739
+ "transformer.visual.transformer.resblocks.43.attn.in_proj.weight": "pytorch_model-00009-of-00010.bin",
740
+ "transformer.visual.transformer.resblocks.43.attn.out_proj.bias": "pytorch_model-00009-of-00010.bin",
741
+ "transformer.visual.transformer.resblocks.43.attn.out_proj.weight": "pytorch_model-00009-of-00010.bin",
742
+ "transformer.visual.transformer.resblocks.43.ln_1.bias": "pytorch_model-00009-of-00010.bin",
743
+ "transformer.visual.transformer.resblocks.43.ln_1.weight": "pytorch_model-00009-of-00010.bin",
744
+ "transformer.visual.transformer.resblocks.43.ln_2.bias": "pytorch_model-00009-of-00010.bin",
745
+ "transformer.visual.transformer.resblocks.43.ln_2.weight": "pytorch_model-00009-of-00010.bin",
746
+ "transformer.visual.transformer.resblocks.43.mlp.c_fc.bias": "pytorch_model-00009-of-00010.bin",
747
+ "transformer.visual.transformer.resblocks.43.mlp.c_fc.weight": "pytorch_model-00009-of-00010.bin",
748
+ "transformer.visual.transformer.resblocks.43.mlp.c_proj.bias": "pytorch_model-00010-of-00010.bin",
749
+ "transformer.visual.transformer.resblocks.43.mlp.c_proj.weight": "pytorch_model-00010-of-00010.bin",
750
+ "transformer.visual.transformer.resblocks.44.attn.in_proj.bias": "pytorch_model-00010-of-00010.bin",
751
+ "transformer.visual.transformer.resblocks.44.attn.in_proj.weight": "pytorch_model-00010-of-00010.bin",
752
+ "transformer.visual.transformer.resblocks.44.attn.out_proj.bias": "pytorch_model-00010-of-00010.bin",
753
+ "transformer.visual.transformer.resblocks.44.attn.out_proj.weight": "pytorch_model-00010-of-00010.bin",
754
+ "transformer.visual.transformer.resblocks.44.ln_1.bias": "pytorch_model-00010-of-00010.bin",
755
+ "transformer.visual.transformer.resblocks.44.ln_1.weight": "pytorch_model-00010-of-00010.bin",
756
+ "transformer.visual.transformer.resblocks.44.ln_2.bias": "pytorch_model-00010-of-00010.bin",
757
+ "transformer.visual.transformer.resblocks.44.ln_2.weight": "pytorch_model-00010-of-00010.bin",
758
+ "transformer.visual.transformer.resblocks.44.mlp.c_fc.bias": "pytorch_model-00010-of-00010.bin",
759
+ "transformer.visual.transformer.resblocks.44.mlp.c_fc.weight": "pytorch_model-00010-of-00010.bin",
760
+ "transformer.visual.transformer.resblocks.44.mlp.c_proj.bias": "pytorch_model-00010-of-00010.bin",
761
+ "transformer.visual.transformer.resblocks.44.mlp.c_proj.weight": "pytorch_model-00010-of-00010.bin",
762
+ "transformer.visual.transformer.resblocks.45.attn.in_proj.bias": "pytorch_model-00010-of-00010.bin",
763
+ "transformer.visual.transformer.resblocks.45.attn.in_proj.weight": "pytorch_model-00010-of-00010.bin",
764
+ "transformer.visual.transformer.resblocks.45.attn.out_proj.bias": "pytorch_model-00010-of-00010.bin",
765
+ "transformer.visual.transformer.resblocks.45.attn.out_proj.weight": "pytorch_model-00010-of-00010.bin",
766
+ "transformer.visual.transformer.resblocks.45.ln_1.bias": "pytorch_model-00010-of-00010.bin",
767
+ "transformer.visual.transformer.resblocks.45.ln_1.weight": "pytorch_model-00010-of-00010.bin",
768
+ "transformer.visual.transformer.resblocks.45.ln_2.bias": "pytorch_model-00010-of-00010.bin",
769
+ "transformer.visual.transformer.resblocks.45.ln_2.weight": "pytorch_model-00010-of-00010.bin",
770
+ "transformer.visual.transformer.resblocks.45.mlp.c_fc.bias": "pytorch_model-00010-of-00010.bin",
771
+ "transformer.visual.transformer.resblocks.45.mlp.c_fc.weight": "pytorch_model-00010-of-00010.bin",
772
+ "transformer.visual.transformer.resblocks.45.mlp.c_proj.bias": "pytorch_model-00010-of-00010.bin",
773
+ "transformer.visual.transformer.resblocks.45.mlp.c_proj.weight": "pytorch_model-00010-of-00010.bin",
774
+ "transformer.visual.transformer.resblocks.46.attn.in_proj.bias": "pytorch_model-00010-of-00010.bin",
775
+ "transformer.visual.transformer.resblocks.46.attn.in_proj.weight": "pytorch_model-00010-of-00010.bin",
776
+ "transformer.visual.transformer.resblocks.46.attn.out_proj.bias": "pytorch_model-00010-of-00010.bin",
777
+ "transformer.visual.transformer.resblocks.46.attn.out_proj.weight": "pytorch_model-00010-of-00010.bin",
778
+ "transformer.visual.transformer.resblocks.46.ln_1.bias": "pytorch_model-00010-of-00010.bin",
779
+ "transformer.visual.transformer.resblocks.46.ln_1.weight": "pytorch_model-00010-of-00010.bin",
780
+ "transformer.visual.transformer.resblocks.46.ln_2.bias": "pytorch_model-00010-of-00010.bin",
781
+ "transformer.visual.transformer.resblocks.46.ln_2.weight": "pytorch_model-00010-of-00010.bin",
782
+ "transformer.visual.transformer.resblocks.46.mlp.c_fc.bias": "pytorch_model-00010-of-00010.bin",
783
+ "transformer.visual.transformer.resblocks.46.mlp.c_fc.weight": "pytorch_model-00010-of-00010.bin",
784
+ "transformer.visual.transformer.resblocks.46.mlp.c_proj.bias": "pytorch_model-00010-of-00010.bin",
785
+ "transformer.visual.transformer.resblocks.46.mlp.c_proj.weight": "pytorch_model-00010-of-00010.bin",
786
+ "transformer.visual.transformer.resblocks.47.attn.in_proj.bias": "pytorch_model-00010-of-00010.bin",
787
+ "transformer.visual.transformer.resblocks.47.attn.in_proj.weight": "pytorch_model-00010-of-00010.bin",
788
+ "transformer.visual.transformer.resblocks.47.attn.out_proj.bias": "pytorch_model-00010-of-00010.bin",
789
+ "transformer.visual.transformer.resblocks.47.attn.out_proj.weight": "pytorch_model-00010-of-00010.bin",
790
+ "transformer.visual.transformer.resblocks.47.ln_1.bias": "pytorch_model-00010-of-00010.bin",
791
+ "transformer.visual.transformer.resblocks.47.ln_1.weight": "pytorch_model-00010-of-00010.bin",
792
+ "transformer.visual.transformer.resblocks.47.ln_2.bias": "pytorch_model-00010-of-00010.bin",
793
+ "transformer.visual.transformer.resblocks.47.ln_2.weight": "pytorch_model-00010-of-00010.bin",
794
+ "transformer.visual.transformer.resblocks.47.mlp.c_fc.bias": "pytorch_model-00010-of-00010.bin",
795
+ "transformer.visual.transformer.resblocks.47.mlp.c_fc.weight": "pytorch_model-00010-of-00010.bin",
796
+ "transformer.visual.transformer.resblocks.47.mlp.c_proj.bias": "pytorch_model-00010-of-00010.bin",
797
+ "transformer.visual.transformer.resblocks.47.mlp.c_proj.weight": "pytorch_model-00010-of-00010.bin",
798
+ "transformer.visual.transformer.resblocks.5.attn.in_proj.bias": "pytorch_model-00008-of-00010.bin",
799
+ "transformer.visual.transformer.resblocks.5.attn.in_proj.weight": "pytorch_model-00008-of-00010.bin",
800
+ "transformer.visual.transformer.resblocks.5.attn.out_proj.bias": "pytorch_model-00008-of-00010.bin",
801
+ "transformer.visual.transformer.resblocks.5.attn.out_proj.weight": "pytorch_model-00008-of-00010.bin",
802
+ "transformer.visual.transformer.resblocks.5.ln_1.bias": "pytorch_model-00008-of-00010.bin",
803
+ "transformer.visual.transformer.resblocks.5.ln_1.weight": "pytorch_model-00008-of-00010.bin",
804
+ "transformer.visual.transformer.resblocks.5.ln_2.bias": "pytorch_model-00008-of-00010.bin",
805
+ "transformer.visual.transformer.resblocks.5.ln_2.weight": "pytorch_model-00008-of-00010.bin",
806
+ "transformer.visual.transformer.resblocks.5.mlp.c_fc.bias": "pytorch_model-00008-of-00010.bin",
807
+ "transformer.visual.transformer.resblocks.5.mlp.c_fc.weight": "pytorch_model-00008-of-00010.bin",
808
+ "transformer.visual.transformer.resblocks.5.mlp.c_proj.bias": "pytorch_model-00008-of-00010.bin",
809
+ "transformer.visual.transformer.resblocks.5.mlp.c_proj.weight": "pytorch_model-00008-of-00010.bin",
810
+ "transformer.visual.transformer.resblocks.6.attn.in_proj.bias": "pytorch_model-00008-of-00010.bin",
811
+ "transformer.visual.transformer.resblocks.6.attn.in_proj.weight": "pytorch_model-00008-of-00010.bin",
812
+ "transformer.visual.transformer.resblocks.6.attn.out_proj.bias": "pytorch_model-00008-of-00010.bin",
813
+ "transformer.visual.transformer.resblocks.6.attn.out_proj.weight": "pytorch_model-00008-of-00010.bin",
814
+ "transformer.visual.transformer.resblocks.6.ln_1.bias": "pytorch_model-00008-of-00010.bin",
815
+ "transformer.visual.transformer.resblocks.6.ln_1.weight": "pytorch_model-00008-of-00010.bin",
816
+ "transformer.visual.transformer.resblocks.6.ln_2.bias": "pytorch_model-00008-of-00010.bin",
817
+ "transformer.visual.transformer.resblocks.6.ln_2.weight": "pytorch_model-00008-of-00010.bin",
818
+ "transformer.visual.transformer.resblocks.6.mlp.c_fc.bias": "pytorch_model-00008-of-00010.bin",
819
+ "transformer.visual.transformer.resblocks.6.mlp.c_fc.weight": "pytorch_model-00008-of-00010.bin",
820
+ "transformer.visual.transformer.resblocks.6.mlp.c_proj.bias": "pytorch_model-00008-of-00010.bin",
821
+ "transformer.visual.transformer.resblocks.6.mlp.c_proj.weight": "pytorch_model-00008-of-00010.bin",
822
+ "transformer.visual.transformer.resblocks.7.attn.in_proj.bias": "pytorch_model-00008-of-00010.bin",
823
+ "transformer.visual.transformer.resblocks.7.attn.in_proj.weight": "pytorch_model-00008-of-00010.bin",
824
+ "transformer.visual.transformer.resblocks.7.attn.out_proj.bias": "pytorch_model-00008-of-00010.bin",
825
+ "transformer.visual.transformer.resblocks.7.attn.out_proj.weight": "pytorch_model-00008-of-00010.bin",
826
+ "transformer.visual.transformer.resblocks.7.ln_1.bias": "pytorch_model-00008-of-00010.bin",
827
+ "transformer.visual.transformer.resblocks.7.ln_1.weight": "pytorch_model-00008-of-00010.bin",
828
+ "transformer.visual.transformer.resblocks.7.ln_2.bias": "pytorch_model-00008-of-00010.bin",
829
+ "transformer.visual.transformer.resblocks.7.ln_2.weight": "pytorch_model-00008-of-00010.bin",
830
+ "transformer.visual.transformer.resblocks.7.mlp.c_fc.bias": "pytorch_model-00008-of-00010.bin",
831
+ "transformer.visual.transformer.resblocks.7.mlp.c_fc.weight": "pytorch_model-00008-of-00010.bin",
832
+ "transformer.visual.transformer.resblocks.7.mlp.c_proj.bias": "pytorch_model-00008-of-00010.bin",
833
+ "transformer.visual.transformer.resblocks.7.mlp.c_proj.weight": "pytorch_model-00008-of-00010.bin",
834
+ "transformer.visual.transformer.resblocks.8.attn.in_proj.bias": "pytorch_model-00008-of-00010.bin",
835
+ "transformer.visual.transformer.resblocks.8.attn.in_proj.weight": "pytorch_model-00008-of-00010.bin",
836
+ "transformer.visual.transformer.resblocks.8.attn.out_proj.bias": "pytorch_model-00008-of-00010.bin",
837
+ "transformer.visual.transformer.resblocks.8.attn.out_proj.weight": "pytorch_model-00008-of-00010.bin",
838
+ "transformer.visual.transformer.resblocks.8.ln_1.bias": "pytorch_model-00008-of-00010.bin",
839
+ "transformer.visual.transformer.resblocks.8.ln_1.weight": "pytorch_model-00008-of-00010.bin",
840
+ "transformer.visual.transformer.resblocks.8.ln_2.bias": "pytorch_model-00008-of-00010.bin",
841
+ "transformer.visual.transformer.resblocks.8.ln_2.weight": "pytorch_model-00008-of-00010.bin",
842
+ "transformer.visual.transformer.resblocks.8.mlp.c_fc.bias": "pytorch_model-00008-of-00010.bin",
843
+ "transformer.visual.transformer.resblocks.8.mlp.c_fc.weight": "pytorch_model-00008-of-00010.bin",
844
+ "transformer.visual.transformer.resblocks.8.mlp.c_proj.bias": "pytorch_model-00008-of-00010.bin",
845
+ "transformer.visual.transformer.resblocks.8.mlp.c_proj.weight": "pytorch_model-00008-of-00010.bin",
846
+ "transformer.visual.transformer.resblocks.9.attn.in_proj.bias": "pytorch_model-00008-of-00010.bin",
847
+ "transformer.visual.transformer.resblocks.9.attn.in_proj.weight": "pytorch_model-00008-of-00010.bin",
848
+ "transformer.visual.transformer.resblocks.9.attn.out_proj.bias": "pytorch_model-00008-of-00010.bin",
849
+ "transformer.visual.transformer.resblocks.9.attn.out_proj.weight": "pytorch_model-00008-of-00010.bin",
850
+ "transformer.visual.transformer.resblocks.9.ln_1.bias": "pytorch_model-00008-of-00010.bin",
851
+ "transformer.visual.transformer.resblocks.9.ln_1.weight": "pytorch_model-00008-of-00010.bin",
852
+ "transformer.visual.transformer.resblocks.9.ln_2.bias": "pytorch_model-00008-of-00010.bin",
853
+ "transformer.visual.transformer.resblocks.9.ln_2.weight": "pytorch_model-00008-of-00010.bin",
854
+ "transformer.visual.transformer.resblocks.9.mlp.c_fc.bias": "pytorch_model-00008-of-00010.bin",
855
+ "transformer.visual.transformer.resblocks.9.mlp.c_fc.weight": "pytorch_model-00008-of-00010.bin",
856
+ "transformer.visual.transformer.resblocks.9.mlp.c_proj.bias": "pytorch_model-00008-of-00010.bin",
857
+ "transformer.visual.transformer.resblocks.9.mlp.c_proj.weight": "pytorch_model-00008-of-00010.bin",
858
+ "transformer.wte.weight": "pytorch_model-00001-of-00010.bin"
859
+ }
860
+ }
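Note: the map above assigns every parameter name to one of the ten checkpoint shards. As a rough illustration only (not part of this commit; the paths and helper name below are made up for the example), an index of this shape can be used to pull a single tensor out of the shard that owns it without loading the other nine:

```python
# Illustrative sketch only (not part of this commit). Shows how a sharded
# checkpoint index such as pytorch_model.bin.index.json can be used to read
# one tensor from the shard that contains it; paths and the helper name are
# assumptions made up for this example.
import json
import os

import torch


def load_single_tensor(checkpoint_dir: str, tensor_name: str) -> torch.Tensor:
    with open(os.path.join(checkpoint_dir, "pytorch_model.bin.index.json")) as f:
        index = json.load(f)
    # The index maps each parameter name to its shard file name,
    # e.g. "pytorch_model-00009-of-00010.bin".
    shard_file = index["weight_map"][tensor_name]
    shard = torch.load(os.path.join(checkpoint_dir, shard_file), map_location="cpu")
    return shard[tensor_name]


# Hypothetical usage:
# w = load_single_tensor(".", "transformer.visual.transformer.resblocks.44.mlp.c_fc.weight")
# print(w.shape)
```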
qwen.tiktoken ADDED
The diff for this file is too large to render. See raw diff
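qwen.tiktoken is the tokenizer's BPE rank table, which the diff viewer cannot render. Assuming it follows the common tiktoken plain-text layout (one base64-encoded token and its integer rank per line) -- an assumption, since the file contents are not shown here -- it could be inspected with a sketch like this:

```python
# Illustrative sketch only (not part of this commit). Assumes qwen.tiktoken
# uses the common tiktoken BPE-ranks layout: one "<base64 token> <rank>"
# pair per line.
import base64


def load_tiktoken_ranks(path: str) -> dict:
    ranks = {}
    with open(path, "rb") as f:
        for line in f.read().splitlines():
            if not line:
                continue
            token_b64, rank = line.split()
            ranks[base64.b64decode(token_b64)] = int(rank)
    return ranks


# Hypothetical usage:
# ranks = load_tiktoken_ranks("qwen.tiktoken")
# print(len(ranks))  # size of the base BPE vocabulary
```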
 
qwen_generation_utils.py ADDED
@@ -0,0 +1,420 @@
1
+ # Copyright (c) Alibaba Cloud.
2
+ #
3
+ # This source code is licensed under the license found in the
4
+ # LICENSE file in the root directory of this source tree.
5
+
6
+ """Generation support."""
7
+
8
+ from typing import Tuple, List, Union, Iterable
9
+
10
+ import numpy as np
11
+ import torch
12
+ import torch.nn.functional as F
13
+ from transformers import PreTrainedTokenizer
14
+ from transformers import logging
15
+ from transformers.generation import LogitsProcessor
16
+
17
+ logger = logging.get_logger(__name__)
18
+
19
+ # Types.
20
+ HistoryType = List[Tuple[str, str]]
21
+ TokensType = List[int]
22
+ BatchTokensType = List[List[int]]
23
+
24
+
25
+ def pad_batch(batch: BatchTokensType, pad_id: int, seq_length: int) -> BatchTokensType:
26
+ for tokens in batch:
27
+ context_length = len(tokens)
28
+ if context_length < seq_length:
29
+ tokens.extend([pad_id] * (seq_length - context_length))
30
+ return batch
31
+
32
+
33
+ def get_ltor_masks_and_position_ids(
34
+ data,
35
+ eod_token,
36
+ reset_position_ids,
37
+ reset_attention_mask,
38
+ eod_mask_loss,
39
+ ):
40
+ """Build masks and position ids for a left-to-right model."""
41
+
42
+ # Extract batch size and sequence length.
43
+ micro_batch_size, seq_length = data.size()
44
+
45
+ # Attention mask (lower triangular).
46
+ if reset_attention_mask:
47
+ att_mask_batch = micro_batch_size
48
+ else:
49
+ att_mask_batch = 1
50
+ attention_mask = torch.tril(
51
+ torch.ones((att_mask_batch, seq_length, seq_length), device=data.device)
52
+ ).view(att_mask_batch, 1, seq_length, seq_length)
53
+
54
+ # Loss mask.
55
+ loss_mask = torch.ones(data.size(), dtype=torch.float, device=data.device)
56
+ if eod_mask_loss:
57
+ loss_mask[data == eod_token] = 0.0
58
+
59
+ # Position ids.
60
+ position_ids = torch.arange(seq_length, dtype=torch.long, device=data.device)
61
+ position_ids = position_ids.unsqueeze(0).expand_as(data)
62
+ # We need to clone as the ids will be modified based on batch index.
63
+ if reset_position_ids:
64
+ position_ids = position_ids.clone()
65
+
66
+ if reset_position_ids or reset_attention_mask:
67
+ # Loop through the batches:
68
+ for b in range(micro_batch_size):
69
+
70
+ # Find indices where the EOD token is.
71
+ eod_index = position_ids[b, data[b] == eod_token]
72
+ # Detach indices from positions if we are going to modify positions.
73
+ if reset_position_ids:
74
+ eod_index = eod_index.clone()
75
+
76
+ # Loop through EOD indices:
77
+ prev_index = 0
78
+ for j in range(eod_index.size()[0]):
79
+ i = eod_index[j]
80
+ # Mask attention loss.
81
+ if reset_attention_mask:
82
+ attention_mask[b, 0, (i + 1) :, : (i + 1)] = 0
83
+ # Reset positions.
84
+ if reset_position_ids:
85
+ position_ids[b, (i + 1) :] -= i + 1 - prev_index
86
+ prev_index = i + 1
87
+
88
+ # Convert attention mask to binary:
89
+ attention_mask = attention_mask < 0.5
90
+
91
+ return attention_mask, loss_mask, position_ids
92
+
93
+
94
+ def get_batch(context_tokens: torch.LongTensor, eod_id: int):
95
+ """Generate batch from context tokens."""
96
+ # Keep tokens contiguous on their current device.
97
+ tokens = context_tokens.contiguous().to(context_tokens.device)
98
+ # Get the attention mask and position ids.
99
+ attention_mask, _, position_ids = get_ltor_masks_and_position_ids(
100
+ tokens,
101
+ eod_id,
102
+ reset_position_ids=False,
103
+ reset_attention_mask=False,
104
+ eod_mask_loss=False,
105
+ )
106
+ return tokens, attention_mask, position_ids
107
+
108
+
109
+ def get_stop_words_ids(chat_format, tokenizer):
110
+ if chat_format == "raw":
111
+ stop_words_ids = [tokenizer.encode("Human:"), [tokenizer.eod_id]]
112
+ elif chat_format == "chatml":
113
+ stop_words_ids = [[tokenizer.im_end_id], [tokenizer.im_start_id]]
114
+ else:
115
+ raise NotImplementedError(f"Unknown chat format {chat_format!r}")
116
+ return stop_words_ids
117
+
118
+
119
+ def make_context(
120
+ tokenizer: PreTrainedTokenizer,
121
+ query: str,
122
+ history: List[Tuple[str, str]] = None,
123
+ system: str = "",
124
+ max_window_size: int = 6144,
125
+ chat_format: str = "chatml",
126
+ ):
127
+ if history is None:
128
+ history = []
129
+
130
+ if chat_format == "chatml":
131
+ im_start, im_end = "<|im_start|>", "<|im_end|>"
132
+ im_start_tokens = [tokenizer.im_start_id]
133
+ im_end_tokens = [tokenizer.im_end_id]
134
+ nl_tokens = tokenizer.encode("\n")
135
+
136
+ def _tokenize_str(role, content):
137
+ return f"{role}\n{content}", tokenizer.encode(
138
+ role, allowed_special=set(tokenizer.IMAGE_ST)
139
+ ) + nl_tokens + tokenizer.encode(content, allowed_special=set(tokenizer.IMAGE_ST))
140
+
141
+ system_text, system_tokens_part = _tokenize_str("system", system)
142
+ system_tokens = im_start_tokens + system_tokens_part + im_end_tokens
143
+
144
+ raw_text = ""
145
+ context_tokens = []
146
+
147
+ for turn_query, turn_response in reversed(history):
148
+ query_text, query_tokens_part = _tokenize_str("user", turn_query)
149
+ query_tokens = im_start_tokens + query_tokens_part + im_end_tokens
150
+ if turn_response is not None:
151
+ response_text, response_tokens_part = _tokenize_str(
152
+ "assistant", turn_response
153
+ )
154
+ response_tokens = im_start_tokens + response_tokens_part + im_end_tokens
155
+
156
+ next_context_tokens = nl_tokens + query_tokens + nl_tokens + response_tokens
157
+ prev_chat = (
158
+ f"\n{im_start}{query_text}{im_end}\n{im_start}{response_text}{im_end}"
159
+ )
160
+ else:
161
+ next_context_tokens = nl_tokens + query_tokens + nl_tokens
162
+ prev_chat = f"\n{im_start}{query_text}{im_end}\n"
163
+
164
+ current_context_size = (
165
+ len(system_tokens) + len(next_context_tokens) + len(context_tokens)
166
+ )
167
+ if current_context_size < max_window_size:
168
+ context_tokens = next_context_tokens + context_tokens
169
+ raw_text = prev_chat + raw_text
170
+ else:
171
+ break
172
+
173
+ context_tokens = system_tokens + context_tokens
174
+ raw_text = f"{im_start}{system_text}{im_end}" + raw_text
175
+ context_tokens += (
176
+ nl_tokens
177
+ + im_start_tokens
178
+ + _tokenize_str("user", query)[1]
179
+ + im_end_tokens
180
+ + nl_tokens
181
+ + im_start_tokens
182
+ + tokenizer.encode("assistant")
183
+ + nl_tokens
184
+ )
185
+ raw_text += f"\n{im_start}user\n{query}{im_end}\n{im_start}assistant\n"
186
+
187
+ elif chat_format == "raw":
188
+ raw_text = query
189
+ context_tokens = tokenizer.encode(raw_text)
190
+ else:
191
+ raise NotImplementedError(f"Unknown chat format {chat_format!r}")
192
+
193
+ return raw_text, context_tokens
194
+
195
+
196
+ def _decode_default(
197
+ tokens: List[int],
198
+ *,
199
+ stop_words: List[str],
200
+ eod_words: List[str],
201
+ tokenizer: PreTrainedTokenizer,
202
+ raw_text_len: int,
203
+ verbose: bool = False,
204
+ return_end_reason: bool = False,
205
+ errors: str='replace',
206
+ ):
207
+ trim_decode_tokens = tokenizer.decode(tokens, errors=errors)[raw_text_len:]
208
+ if verbose:
209
+ print("\nRaw Generate: ", trim_decode_tokens)
210
+
211
+ end_reason = f"Gen length {len(tokens)}"
212
+ for stop_word in stop_words:
213
+ trim_decode_tokens = trim_decode_tokens.replace(stop_word, "").strip()
214
+ for eod_word in eod_words:
215
+ if eod_word in trim_decode_tokens:
216
+ end_reason = f"Gen {eod_word!r}"
217
+ trim_decode_tokens = trim_decode_tokens.split(eod_word)[0]
218
+ trim_decode_tokens = trim_decode_tokens.strip()
219
+ if verbose:
220
+ print("\nEnd Reason:", end_reason)
221
+ print("\nGenerate: ", trim_decode_tokens)
222
+
223
+ if return_end_reason:
224
+ return trim_decode_tokens, end_reason
225
+ else:
226
+ return trim_decode_tokens
227
+
228
+
229
+ def _decode_chatml(
230
+ tokens: List[int],
231
+ *,
232
+ stop_words: List[str],
233
+ eod_token_ids: List[int],
234
+ tokenizer: PreTrainedTokenizer,
235
+ raw_text_len: int,
236
+ context_length: int,
237
+ verbose: bool = False,
238
+ return_end_reason: bool = False,
239
+ errors: str='replace'
240
+ ):
241
+ end_reason = f"Gen length {len(tokens)}"
242
+ eod_token_idx = context_length
243
+ for eod_token_idx in range(context_length, len(tokens)):
244
+ if tokens[eod_token_idx] in eod_token_ids:
245
+ end_reason = f"Gen {tokenizer.decode([tokens[eod_token_idx]])!r}"
246
+ break
247
+
248
+ trim_decode_tokens = tokenizer.decode(tokens[:eod_token_idx], errors=errors)[raw_text_len:]
249
+ if verbose:
250
+ print("\nRaw Generate w/o EOD:", tokenizer.decode(tokens, errors=errors)[raw_text_len:])
251
+ print("\nRaw Generate:", trim_decode_tokens)
252
+ print("\nEnd Reason:", end_reason)
253
+ for stop_word in stop_words:
254
+ trim_decode_tokens = trim_decode_tokens.replace(stop_word, "").strip()
255
+ trim_decode_tokens = trim_decode_tokens.strip()
256
+ if verbose:
257
+ print("\nGenerate:", trim_decode_tokens)
258
+
259
+ if return_end_reason:
260
+ return trim_decode_tokens, end_reason
261
+ else:
262
+ return trim_decode_tokens
263
+
264
+
265
+ def decode_tokens(
266
+ tokens: Union[torch.LongTensor, TokensType],
267
+ tokenizer: PreTrainedTokenizer,
268
+ raw_text_len: int,
269
+ context_length: int,
270
+ chat_format: str,
271
+ verbose: bool = False,
272
+ return_end_reason: bool = False,
273
+ errors: str="replace",
274
+ ) -> str:
275
+ if torch.is_tensor(tokens):
276
+ tokens = tokens.cpu().numpy().tolist()
277
+
278
+ if chat_format == "chatml":
279
+ return _decode_chatml(
280
+ tokens,
281
+ stop_words=[],
282
+ eod_token_ids=[tokenizer.im_start_id, tokenizer.im_end_id],
283
+ tokenizer=tokenizer,
284
+ raw_text_len=raw_text_len,
285
+ context_length=context_length,
286
+ verbose=verbose,
287
+ return_end_reason=return_end_reason,
288
+ errors=errors,
289
+ )
290
+ elif chat_format == "raw":
291
+ return _decode_default(
292
+ tokens,
293
+ stop_words=["<|endoftext|>"],
294
+ eod_words=["<|endoftext|>"],
295
+ tokenizer=tokenizer,
296
+ raw_text_len=raw_text_len,
297
+ verbose=verbose,
298
+ return_end_reason=return_end_reason,
299
+ errors=errors,
300
+ )
301
+ else:
302
+ raise NotImplementedError(f"Unknown chat format {chat_format!r}")
303
+
304
+
305
+ class StopWordsLogitsProcessor(LogitsProcessor):
306
+ """
307
+ :class:`transformers.LogitsProcessor` that forces generation to stop once any of the specified stop sequences has been generated.
308
+
309
+ Args:
310
+ stop_words_ids (:obj:`List[List[int]]`):
311
+ List of token-id sequences at which generation should stop. To get the token ids of a
313
+ stop word as it appears in generated text, use :obj:`tokenizer(stop_word,
314
+ add_prefix_space=True).input_ids`.
314
+ eos_token_id (:obj:`int`):
315
+ The id of the `end-of-sequence` token.
316
+ """
317
+
318
+ def __init__(self, stop_words_ids: Iterable[Iterable[int]], eos_token_id: int):
319
+
320
+ if not isinstance(stop_words_ids, List) or len(stop_words_ids) == 0:
321
+ raise ValueError(
322
+ f"`stop_words_ids` has to be a non-emtpy list, but is {stop_words_ids}."
323
+ )
324
+ if any(not isinstance(bad_word_ids, list) for bad_word_ids in stop_words_ids):
325
+ raise ValueError(
326
+ f"`stop_words_ids` has to be a list of lists, but is {stop_words_ids}."
327
+ )
328
+ if any(
329
+ any(
330
+ (not isinstance(token_id, (int, np.integer)) or token_id < 0)
331
+ for token_id in stop_word_ids
332
+ )
333
+ for stop_word_ids in stop_words_ids
334
+ ):
335
+ raise ValueError(
336
+ f"Each list in `stop_words_ids` has to be a list of positive integers, but is {stop_words_ids}."
337
+ )
338
+
339
+ self.stop_words_ids = list(
340
+ filter(
341
+ lambda bad_token_seq: bad_token_seq != [eos_token_id], stop_words_ids
342
+ )
343
+ )
344
+ self.eos_token_id = eos_token_id
345
+ for stop_token_seq in self.stop_words_ids:
346
+ assert (
347
+ len(stop_token_seq) > 0
348
+ ), "Stop words token sequences {} cannot have an empty list".format(
349
+ stop_words_ids
350
+ )
351
+
352
+ def __call__(
353
+ self, input_ids: torch.LongTensor, scores: torch.FloatTensor
354
+ ) -> torch.FloatTensor:
355
+ stopped_samples = self._calc_stopped_samples(input_ids)
356
+ for i, should_stop in enumerate(stopped_samples):
357
+ if should_stop:
358
+ scores[i, self.eos_token_id] = float(2**15)
359
+ return scores
360
+
361
+ def _tokens_match(self, prev_tokens: torch.LongTensor, tokens: List[int]) -> bool:
362
+ if len(tokens) == 0:
363
+ # an empty stop-word sequence matches trivially
364
+ return True
365
+ elif len(tokens) > len(prev_tokens):
366
+ # if the stop-word sequence is longer than prev_tokens, it cannot have been generated yet
367
+ return False
368
+ elif prev_tokens[-len(tokens) :].tolist() == tokens:
369
+ # if tokens match
370
+ return True
371
+ else:
372
+ return False
373
+
374
+ def _calc_stopped_samples(self, prev_input_ids: Iterable[int]) -> Iterable[int]:
375
+ stopped_samples = []
376
+ for prev_input_ids_slice in prev_input_ids:
377
+ match = False
378
+ for stop_token_seq in self.stop_words_ids:
379
+ if self._tokens_match(prev_input_ids_slice, stop_token_seq):
380
+ # a stop-word sequence matched; mark this sample as stopped
381
+ match = True
382
+ break
383
+ stopped_samples.append(match)
384
+
385
+ return stopped_samples
386
+
387
+
388
+ def top_k_logits(logits, top_k=0, top_p=0.0, filter_value=-float("Inf")):
389
+ """This function has been mostly taken from huggingface conversational
390
+ ai code at
391
+ https://medium.com/huggingface/how-to-build-a-state-of-the-art-
392
+ conversational-ai-with-transfer-learning-2d818ac26313"""
393
+
394
+ if top_k > 0:
395
+ # Remove all tokens with a probability less than the
396
+ # last token of the top-k
397
+ indices_to_remove = logits < torch.topk(logits, top_k)[0][..., -1, None]
398
+ logits[indices_to_remove] = filter_value
399
+
400
+ if top_p > 0.0:
401
+ # Sort logits so cumulative probabilities can be computed for nucleus (top-p) filtering
402
+ sorted_logits, sorted_indices = torch.sort(logits, descending=True, dim=-1)
403
+ cumulative_probs = torch.cumsum(F.softmax(sorted_logits, dim=-1), dim=-1)
404
+
405
+ # Remove tokens with cumulative probability above the threshold
406
+ sorted_indices_to_remove = cumulative_probs > top_p
407
+ # Shift the indices to the right to keep also the first token
408
+ # above the threshold
409
+ sorted_indices_to_remove[..., 1:] = sorted_indices_to_remove[..., :-1].clone()
410
+ sorted_indices_to_remove[..., 0] = 0
411
+ for i in range(sorted_indices.size(0)):
412
+ indices_to_remove = sorted_indices[i][sorted_indices_to_remove[i]]
413
+ logits[i][indices_to_remove] = filter_value
414
+
415
+ return logits
416
+
417
+
418
+ def switch(val1, val2, boolean):
419
+ boolean = boolean.type_as(val1)
420
+ return (1 - boolean) * val1 + boolean * val2
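
Taken together, the generation helpers above implement the ChatML path end to end: `make_context` assembles the `<|im_start|>`/`<|im_end|>` prompt and its token ids, `StopWordsLogitsProcessor` forces an end-of-sequence token once a stop sequence has been generated, and `decode_tokens` strips the prompt and stop words from the model output. The following is only a minimal usage sketch, not a definitive recipe: it assumes a Qwen-VL-Chat `model` and `tokenizer` have already been loaded with `trust_remote_code=True`, and the query text is made up for illustration.

    import torch
    from transformers import LogitsProcessorList

    # Build the ChatML prompt string and its token ids (make_context is defined above).
    raw_text, context_tokens = make_context(
        tokenizer,
        query="Describe this picture.",          # illustrative query
        history=[],
        system="You are a helpful assistant.",
        chat_format="chatml",
    )

    # Stop as soon as an <|im_end|> or <|im_start|> token is produced.
    stop_words_ids = [[tokenizer.im_end_id], [tokenizer.im_start_id]]
    logits_processor = LogitsProcessorList(
        [StopWordsLogitsProcessor(stop_words_ids=stop_words_ids, eos_token_id=tokenizer.eod_id)]
    )

    input_ids = torch.tensor([context_tokens]).to(model.device)
    outputs = model.generate(input_ids, logits_processor=logits_processor)

    # Trim the prompt and stop words, leaving only the assistant's reply.
    response = decode_tokens(
        outputs[0],
        tokenizer,
        raw_text_len=len(raw_text),
        context_length=len(context_tokens),
        chat_format="chatml",
    )
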
tokenization_qwen.py ADDED
@@ -0,0 +1,590 @@
1
+ # Copyright (c) Alibaba Cloud.
2
+ #
3
+ # This source code is licensed under the license found in the
4
+ # LICENSE file in the root directory of this source tree.
5
+
6
+ """Tokenization classes for QWen."""
7
+
8
+ import base64
9
+ import logging
10
+ import os
11
+ import requests
12
+ import unicodedata
13
+ from typing import Collection, Dict, List, Set, Tuple, Union, Any, Callable, Optional
14
+
15
+ import tiktoken
16
+ import numpy as np
17
+ from PIL import Image
18
+ from PIL import ImageFont
19
+ from PIL import ImageDraw
20
+ from transformers import PreTrainedTokenizer, AddedToken
21
+ from transformers.utils import try_to_load_from_cache
22
+
23
+ import matplotlib.colors as mcolors
24
+ from matplotlib.font_manager import FontProperties
25
+
26
+ logger = logging.getLogger(__name__)
27
+
28
+
29
+ VOCAB_FILES_NAMES = {"vocab_file": "qwen.tiktoken", "ttf": "SimSun.ttf"}
30
+ FONT_PATH = try_to_load_from_cache("Qwen/Qwen-VL-Chat", "SimSun.ttf")
31
+ if FONT_PATH is None:
32
+ if not os.path.exists("SimSun.ttf"):
33
+ ttf = requests.get("https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen-VL/assets/SimSun.ttf")
34
+ open("SimSun.ttf", "wb").write(ttf.content)
35
+ FONT_PATH = "SimSun.ttf"
36
+
37
+ PAT_STR = r"""(?i:'s|'t|'re|'ve|'m|'ll|'d)|[^\r\n\p{L}\p{N}]?\p{L}+|\p{N}| ?[^\s\p{L}\p{N}]+[\r\n]*|\s*[\r\n]+|\s+(?!\S)|\s+"""
38
+ ENDOFTEXT = "<|endoftext|>"
39
+ IMSTART = "<|im_start|>"
40
+ IMEND = "<|im_end|>"
41
+ # as the default behavior is changed to allow special tokens in
42
+ # regular texts, the surface forms of special tokens need to be
43
+ # as different as possible to minimize the impact
44
+ EXTRAS = tuple((f"<|extra_{i}|>" for i in range(205)))
45
+ SPECIAL_TOKENS = (
46
+ ENDOFTEXT,
47
+ IMSTART,
48
+ IMEND,
49
+ ) + EXTRAS
50
+ IMG_TOKEN_SPAN = 256
51
+
52
+
53
+ def _load_tiktoken_bpe(tiktoken_bpe_file: str) -> Dict[bytes, int]:
54
+ with open(tiktoken_bpe_file, "rb") as f:
55
+ contents = f.read()
56
+ return {
57
+ base64.b64decode(token): int(rank)
58
+ for token, rank in (line.split() for line in contents.splitlines() if line)
59
+ }
60
+
61
+ def _list_find(
62
+ input_list: List[Any],
63
+ candidates: Tuple[Any],
64
+ start: int = 0,
65
+ ):
66
+ for i in range(start, len(input_list)):
67
+ if input_list[i] in candidates:
68
+ return i
69
+ return -1
70
+
71
+ def _replace_closed_tag(
72
+ input_tokens: List[Any],
73
+ start_tags: Union[Any, Tuple[Any]],
74
+ end_tags: Union[Any, Tuple[Any]],
75
+ inclusive_replace_func: Callable,
76
+ exclusive_replace_func: Callable = lambda x: x,
77
+ ):
78
+ if isinstance(start_tags, (str, int)):
79
+ start_tags = (start_tags,)
80
+ if isinstance(end_tags, (str, int)):
81
+ end_tags = (end_tags,)
82
+ assert len(start_tags) == len(end_tags)
83
+
84
+ output_tokens = []
85
+ end = 0
86
+ while True:
87
+ start = _list_find(input_tokens, start_tags, end)
88
+ if start == -1:
89
+ break
90
+ output_tokens.extend(exclusive_replace_func(input_tokens[end : start]))
91
+ tag_idx = start_tags.index(input_tokens[start])
92
+ end = _list_find(input_tokens, (end_tags[tag_idx],), start)
93
+ if end == -1:
94
+ raise ValueError("Unclosed image token")
95
+ output_tokens.extend(inclusive_replace_func(input_tokens[start : end + 1]))
96
+ end += 1
97
+ output_tokens.extend(exclusive_replace_func(input_tokens[end : ]))
98
+ return output_tokens
99
+
100
+ class QWenTokenizer(PreTrainedTokenizer):
101
+ """QWen tokenizer."""
102
+
103
+ vocab_files_names = VOCAB_FILES_NAMES
104
+
105
+ def __init__(
106
+ self,
107
+ vocab_file,
108
+ errors="replace",
109
+ image_start_tag='<img>',
110
+ image_end_tag='</img>',
111
+ image_pad_tag='<imgpad>',
112
+ ref_start_tag='<ref>',
113
+ ref_end_tag='</ref>',
114
+ box_start_tag='<box>',
115
+ box_end_tag='</box>',
116
+ quad_start_tag='<quad>',
117
+ quad_end_tag='</quad>',
118
+ **kwargs,
119
+ ):
120
+ super().__init__(**kwargs)
121
+ self.image_start_tag = image_start_tag
122
+ self.image_end_tag = image_end_tag
123
+ self.image_pad_tag = image_pad_tag
124
+ self.ref_start_tag = ref_start_tag
125
+ self.ref_end_tag = ref_end_tag
126
+ self.box_start_tag = box_start_tag
127
+ self.box_end_tag = box_end_tag
128
+ self.quad_start_tag = quad_start_tag
129
+ self.quad_end_tag = quad_end_tag
130
+ self.IMAGE_ST = (
131
+ ref_start_tag, ref_end_tag,
132
+ box_start_tag, box_end_tag,
133
+ quad_start_tag, quad_end_tag,
134
+ image_start_tag, image_end_tag,
135
+ image_pad_tag
136
+ )
137
+
138
+ self.errors = errors # how to handle errors in decoding
139
+
140
+ self.mergeable_ranks = _load_tiktoken_bpe(vocab_file) # type: dict[bytes, int]
141
+ self.special_tokens = {
142
+ token: index
143
+ for index, token in enumerate(
144
+ SPECIAL_TOKENS + self.IMAGE_ST, start=len(self.mergeable_ranks)
145
+ )
146
+ }
147
+ self.img_start_id = self.special_tokens[self.image_start_tag]
148
+ self.img_end_id = self.special_tokens[self.image_end_tag]
149
+ self.img_pad_id = self.special_tokens[self.image_pad_tag]
150
+ self.ref_start_id = self.special_tokens[self.ref_start_tag]
151
+ self.ref_end_id = self.special_tokens[self.ref_end_tag]
152
+ self.box_start_id = self.special_tokens[self.box_start_tag]
153
+ self.box_end_id = self.special_tokens[self.box_end_tag]
154
+ self.quad_start_id = self.special_tokens[self.quad_start_tag]
155
+ self.quad_end_id = self.special_tokens[self.quad_end_tag]
156
+
157
+ enc = tiktoken.Encoding(
158
+ "Qwen",
159
+ pat_str=PAT_STR,
160
+ mergeable_ranks=self.mergeable_ranks,
161
+ special_tokens=self.special_tokens,
162
+ )
163
+ assert (
164
+ len(self.mergeable_ranks) + len(self.special_tokens) == enc.n_vocab
165
+ ), f"{len(self.mergeable_ranks) + len(self.special_tokens)} != {enc.n_vocab} in encoding"
166
+
167
+ self.decoder = {
168
+ v: k for k, v in self.mergeable_ranks.items()
169
+ } # type: dict[int, bytes|str]
170
+ self.decoder.update({v: k for k, v in self.special_tokens.items()})
171
+
172
+ self.tokenizer = enc # type: tiktoken.Encoding
173
+
174
+ self.eod_id = self.tokenizer.eot_token
175
+ self.im_start_id = self.special_tokens[IMSTART]
176
+ self.im_end_id = self.special_tokens[IMEND]
177
+
178
+ def __getstate__(self):
179
+ # support pickling: drop the tiktoken encoding, which cannot be pickled
180
+ state = self.__dict__.copy()
181
+ del state['tokenizer']
182
+ return state
183
+
184
+ def __setstate__(self, state):
185
+ # the tiktoken encoding is not picklable, so rebuild it from the saved state
186
+ self.__dict__.update(state)
187
+ enc = tiktoken.Encoding(
188
+ "Qwen",
189
+ pat_str=PAT_STR,
190
+ mergeable_ranks=self.mergeable_ranks,
191
+ special_tokens=self.special_tokens,
192
+ )
193
+ self.tokenizer = enc
194
+
195
+
196
+ def __len__(self) -> int:
197
+ return self.tokenizer.n_vocab
198
+
199
+ def get_vocab(self) -> Dict[bytes, int]:
200
+ return self.mergeable_ranks
201
+
202
+ def convert_tokens_to_ids(
203
+ self, tokens: Union[bytes, str, List[Union[bytes, str]]]
204
+ ) -> List[int]:
205
+ ids = []
206
+ if isinstance(tokens, (str, bytes)):
207
+ if tokens in self.special_tokens:
208
+ return self.special_tokens[tokens]
209
+ else:
210
+ return self.mergeable_ranks.get(tokens)
211
+ for token in tokens:
212
+ if token in self.special_tokens:
213
+ ids.append(self.special_tokens[token])
214
+ else:
215
+ ids.append(self.mergeable_ranks.get(token))
216
+ return ids
217
+
218
+ def _add_tokens(self, new_tokens: Union[List[str], List[AddedToken]], special_tokens: bool = False) -> int:
219
+ if not special_tokens and new_tokens:
220
+ raise ValueError('Adding regular tokens is not supported')
221
+ for token in new_tokens:
222
+ surface_form = token.content if isinstance(token, AddedToken) else token
223
+ if surface_form not in SPECIAL_TOKENS + self.IMAGE_ST:
224
+ raise ValueError('Adding unknown special tokens is not supported')
225
+ return 0
226
+
227
+ def save_vocabulary(self, save_directory: str, **kwargs) -> Tuple[str]:
228
+ """
229
+ Save only the vocabulary of the tokenizer (the tiktoken BPE ranks).
230
+
231
+ Returns:
232
+ `Tuple(str)`: Paths to the files saved.
233
+ """
234
+ file_path = os.path.join(save_directory, "qwen.tiktoken")
235
+ with open(file_path, "w", encoding="utf8") as w:
236
+ for k, v in self.mergeable_ranks.items():
237
+ line = base64.b64encode(k).decode("utf8") + " " + str(v) + "\n"
238
+ w.write(line)
239
+ return (file_path,)
240
+
241
+ def tokenize(
242
+ self,
243
+ text: str,
244
+ allowed_special: Union[Set, str] = "all",
245
+ disallowed_special: Union[Collection, str] = (),
246
+ **kwargs,
247
+ ) -> List[Union[bytes, str]]:
248
+ """
249
+ Converts a string into a sequence of tokens.
250
+
251
+ Args:
252
+ text (`str`):
253
+ The sequence to be encoded.
254
+ allowed_special (`Literal["all"]` or `set`):
255
+ The surface forms of the tokens to be encoded as special tokens in regular texts.
256
+ Default to "all".
257
+ disallowed_special (`Literal["all"]` or `Collection`):
258
+ The surface forms of the tokens that should not be in regular texts and trigger errors.
259
+ Defaults to an empty tuple.
260
+
261
+ kwargs (additional keyword arguments, *optional*):
262
+ Will be passed to the underlying model specific encode method.
263
+
264
+ Returns:
265
+ `List[bytes|str]`: The list of tokens.
266
+ """
267
+ tokens = []
268
+ text = unicodedata.normalize("NFC", text)
269
+
270
+ # this implementation takes a detour: text -> token id -> token surface forms
271
+ for t in self.tokenizer.encode(
272
+ text, allowed_special=allowed_special, disallowed_special=disallowed_special
273
+ ):
274
+ tokens.append(self.decoder[t])
275
+
276
+ def _encode_imgurl(img_tokens):
277
+ assert img_tokens[0] == self.image_start_tag and img_tokens[-1] == self.image_end_tag
278
+ img_tokens = img_tokens[1:-1]
279
+ img_url = b''.join(img_tokens)
280
+ out_img_tokens = list(map(self.decoder.get, img_url))
281
+ if len(out_img_tokens) > IMG_TOKEN_SPAN:
282
+ raise ValueError("The content in {}..{} is too long".format(
283
+ self.image_start_tag, self.image_end_tag))
284
+ out_img_tokens.extend([self.image_pad_tag] * (IMG_TOKEN_SPAN - len(out_img_tokens)))
285
+ out_img_tokens = [self.image_start_tag] + out_img_tokens + [self.image_end_tag]
286
+ return out_img_tokens
287
+
288
+ return _replace_closed_tag(tokens, self.image_start_tag, self.image_end_tag, _encode_imgurl)
289
+
290
+ def convert_tokens_to_string(self, tokens: List[Union[bytes, str]]) -> str:
291
+ """
292
+ Converts a sequence of tokens into a single string.
293
+ """
294
+ text = ""
295
+ temp = b""
296
+ for t in tokens:
297
+ if isinstance(t, str):
298
+ if temp:
299
+ text += temp.decode("utf-8", errors=self.errors)
300
+ temp = b""
301
+ text += t
302
+ elif isinstance(t, bytes):
303
+ temp += t
304
+ else:
305
+ raise TypeError("token should only be of type types or str")
306
+ if temp:
307
+ text += temp.decode("utf-8", errors=self.errors)
308
+ return text
309
+
310
+ @property
311
+ def vocab_size(self):
312
+ return self.tokenizer.n_vocab
313
+
314
+ def _convert_id_to_token(self, index: int) -> Union[bytes, str]:
315
+ """Converts an id to a token, special tokens included"""
316
+ if index in self.decoder:
317
+ return self.decoder[index]
318
+ raise ValueError("unknown ids")
319
+
320
+ def _convert_token_to_id(self, token: Union[bytes, str]) -> int:
321
+ """Converts a token to an id using the vocab, special tokens included"""
322
+ if token in self.special_tokens:
323
+ return self.special_tokens[token]
324
+ if token in self.mergeable_ranks:
325
+ return self.mergeable_ranks[token]
326
+ raise ValueError("unknown token")
327
+
328
+ def _tokenize(self, text: str, **kwargs):
329
+ """
330
+ Converts a string into a sequence of tokens (string), using the tokenizer. Splits into words for word-based
331
+ vocabularies or sub-words for sub-word-based vocabularies (BPE/SentencePiece/WordPiece).
332
+
333
+ Do NOT take care of added tokens.
334
+ """
335
+ raise NotImplementedError
336
+
337
+ def _decode(
338
+ self,
339
+ token_ids: Union[int, List[int]],
340
+ skip_special_tokens: bool = False,
341
+ errors: str = None,
342
+ **kwargs,
343
+ ) -> str:
344
+ if isinstance(token_ids, int):
345
+ token_ids = [token_ids]
346
+
347
+ def _decode_imgurl(img_token_ids):
348
+ assert img_token_ids[0] == self.img_start_id and img_token_ids[-1] == self.img_end_id
349
+ img_token_ids = img_token_ids[1:-1]
350
+ img_token_ids = img_token_ids[ : img_token_ids.index(self.img_pad_id)]
351
+ img_url = bytes(img_token_ids).decode('utf-8')
352
+ return [self.img_start_id] + self.tokenizer.encode(img_url) + [self.img_end_id]
353
+
354
+ token_ids = _replace_closed_tag(token_ids, self.img_start_id, self.img_end_id, _decode_imgurl)
355
+
356
+ if skip_special_tokens:
357
+ token_ids = [i for i in token_ids if i < self.eod_id]
358
+ return self.tokenizer.decode(token_ids, errors=errors or self.errors)
359
+
360
+ def to_list_format(self, text: str):
361
+ text = unicodedata.normalize("NFC", text)
362
+ token_ids = self.tokenizer.encode(
363
+ text, allowed_special=set(self.IMAGE_ST + (ENDOFTEXT,)))
364
+
365
+ def _encode_vl_info(tokens):
366
+ if len(tokens) == 0:
367
+ return []
368
+ if tokens[0] == self.img_start_id and tokens[-1] == self.img_end_id:
369
+ key = 'image'
370
+ elif tokens[0] == self.ref_start_id and tokens[-1] == self.ref_end_id:
371
+ key = 'ref'
372
+ elif tokens[0] == self.box_start_id and tokens[-1] == self.box_end_id:
373
+ key = 'box'
374
+ elif tokens[0] == self.quad_start_id and tokens[-1] == self.quad_end_id:
375
+ key = 'quad'
376
+ else:
377
+ _tobytes = lambda x: x.encode('utf-8') if isinstance(x, str) else x
378
+ return [{'text': b''.join(map(_tobytes, map(self.decoder.get, tokens))).decode('utf-8')}]
379
+ _tobytes = lambda x: x.encode('utf-8') if isinstance(x, str) else x
380
+ val = b''.join(map(_tobytes, map(self.decoder.get, tokens[1:-1]))).decode('utf-8')
381
+ return [{key: val}]
382
+
383
+ return _replace_closed_tag(
384
+ token_ids,
385
+ (self.img_start_id, self.ref_start_id, self.box_start_id, self.quad_start_id),
386
+ (self.img_end_id, self.ref_end_id, self.box_end_id, self.quad_end_id),
387
+ _encode_vl_info,
388
+ _encode_vl_info,
389
+ )
390
+
391
+ def from_list_format(self, list_format: List[Dict]):
392
+ text = ''
393
+ num_images = 0
394
+ for ele in list_format:
395
+ if 'image' in ele:
396
+ num_images += 1
397
+ text += f'Picture {num_images}:'
398
+ text += self.image_start_tag + ele['image'] + self.image_end_tag
399
+ text += '\n'
400
+ elif 'text' in ele:
401
+ text += ele['text']
402
+ elif 'box' in ele:
403
+ if 'ref' in ele:
404
+ text += self.ref_start_tag + ele['ref'] + self.ref_end_tag
405
+ for box in ele['box']:
406
+ text += self.box_start_tag + '(%d,%d),(%d,%d)' % (box[0], box[1], box[2], box[3]) + self.box_end_tag
407
+ else:
408
+ raise ValueError("Unsupport element: " + str(ele))
409
+ return text
410
+
411
+ def _fetch_latest_picture(self, response, history):
412
+ if history is None:
413
+ history = []
414
+ _history = history + [(response, None)]
415
+ for q, r in _history[::-1]:
416
+ for ele in self.to_list_format(q)[::-1]:
417
+ if 'image' in ele:
418
+ return ele['image']
419
+ return None
420
+
421
+ def _fetch_all_box_with_ref(self, text):
422
+ list_format = self.to_list_format(text)
423
+ output = []
424
+ for i, ele in enumerate(list_format):
425
+ if 'box' in ele:
426
+ bbox = tuple(map(int, ele['box'].replace('(', '').replace(')', '').split(',')))
427
+ assert len(bbox) == 4
428
+ output.append({'box': bbox})
429
+ if i > 0 and 'ref' in list_format[i-1]:
430
+ output[-1]['ref'] = list_format[i-1]['ref'].strip()
431
+ return output
432
+
433
+ def draw_bbox_on_latest_picture(
434
+ self,
435
+ response,
436
+ history=None,
437
+ ) -> Optional[Image.Image]:
438
+ image = self._fetch_latest_picture(response, history)
439
+ if image is None:
440
+ return None
441
+ if image.startswith("http://") or image.startswith("https://"):
442
+ image = Image.open(requests.get(image, stream=True).raw).convert("RGB")
443
+ h, w = image.height, image.width
444
+ else:
445
+ image = np.asarray(Image.open(image).convert("RGB"))
446
+ h, w = image.shape[0], image.shape[1]
447
+ visualizer = Visualizer(image)
448
+
449
+ boxes = self._fetch_all_box_with_ref(response)
450
+ if not boxes:
451
+ return None
452
+ color = random.choice(list(mcolors.TABLEAU_COLORS)) # init color
453
+ for box in boxes:
454
+ if 'ref' in box: # random new color for new refexps
455
+ color = random.choice(list(mcolors.TABLEAU_COLORS))
456
+ x1, y1, x2, y2 = box['box']
457
+ x1, y1, x2, y2 = (int(x1 / 1000 * w), int(y1 / 1000 * h), int(x2 / 1000 * w), int(y2 / 1000 * h))
458
+ visualizer.draw_box((x1, y1, x2, y2), alpha=1, edge_color=color)
459
+ if 'ref' in box:
460
+ visualizer.draw_text(box['ref'], (x1, y1), color=color, horizontal_alignment="left")
461
+ return visualizer.output
462
+
463
+
464
+ import colorsys
465
+ import logging
466
+ import math
467
+ import numpy as np
468
+ import matplotlib as mpl
469
+ import matplotlib.colors as mplc
470
+ import matplotlib.figure as mplfigure
471
+ import torch
472
+ from matplotlib.backends.backend_agg import FigureCanvasAgg
473
+ from PIL import Image
474
+ import random
475
+
476
+ logger = logging.getLogger(__name__)
477
+
478
+
479
+ class VisImage:
480
+ def __init__(self, img, scale=1.0):
481
+ self.img = img
482
+ self.scale = scale
483
+ self.width, self.height = img.shape[1], img.shape[0]
484
+ self._setup_figure(img)
485
+
486
+ def _setup_figure(self, img):
487
+ fig = mplfigure.Figure(frameon=False)
488
+ self.dpi = fig.get_dpi()
489
+ # add a small 1e-2 to avoid precision lost due to matplotlib's truncation
490
+ # (https://github.com/matplotlib/matplotlib/issues/15363)
491
+ fig.set_size_inches(
492
+ (self.width * self.scale + 1e-2) / self.dpi,
493
+ (self.height * self.scale + 1e-2) / self.dpi,
494
+ )
495
+ self.canvas = FigureCanvasAgg(fig)
496
+ # self.canvas = mpl.backends.backend_cairo.FigureCanvasCairo(fig)
497
+ ax = fig.add_axes([0.0, 0.0, 1.0, 1.0])
498
+ ax.axis("off")
499
+ self.fig = fig
500
+ self.ax = ax
501
+ self.reset_image(img)
502
+
503
+ def reset_image(self, img):
504
+ img = img.astype("uint8")
505
+ self.ax.imshow(img, extent=(0, self.width, self.height, 0), interpolation="nearest")
506
+
507
+ def save(self, filepath):
508
+ self.fig.savefig(filepath)
509
+
510
+ def get_image(self):
511
+ canvas = self.canvas
512
+ s, (width, height) = canvas.print_to_buffer()
513
+
514
+ buffer = np.frombuffer(s, dtype="uint8")
515
+
516
+ img_rgba = buffer.reshape(height, width, 4)
517
+ rgb, alpha = np.split(img_rgba, [3], axis=2)
518
+ return rgb.astype("uint8")
519
+
520
+
521
+ class Visualizer:
522
+ def __init__(self, img_rgb, metadata=None, scale=1.0):
523
+ self.img = np.asarray(img_rgb).clip(0, 255).astype(np.uint8)
524
+ self.font_path = FONT_PATH
525
+ self.output = VisImage(self.img, scale=scale)
526
+ self.cpu_device = torch.device("cpu")
527
+
528
+ # text that is too small is unreadable, so enforce a minimum font size
529
+ self._default_font_size = max(
530
+ np.sqrt(self.output.height * self.output.width) // 30, 15 // scale
531
+ )
532
+
533
+ def draw_text(
534
+ self,
535
+ text,
536
+ position,
537
+ *,
538
+ font_size=None,
539
+ color="g",
540
+ horizontal_alignment="center",
541
+ rotation=0,
542
+ ):
543
+ if not font_size:
544
+ font_size = self._default_font_size
545
+
546
+ # since the text background is dark, we don't want the text to be dark
547
+ color = np.maximum(list(mplc.to_rgb(color)), 0.2)
548
+ color[np.argmax(color)] = max(0.8, np.max(color))
549
+
550
+ x, y = position
551
+ self.output.ax.text(
552
+ x,
553
+ y,
554
+ text,
555
+ size=font_size * self.output.scale,
556
+ fontproperties=FontProperties(fname=self.font_path),
557
+ bbox={"facecolor": "black", "alpha": 0.8, "pad": 0.7, "edgecolor": "none"},
558
+ verticalalignment="top",
559
+ horizontalalignment=horizontal_alignment,
560
+ color=color,
561
+ zorder=10,
562
+ rotation=rotation,
563
+ )
564
+ return self.output
565
+
566
+ def draw_box(self, box_coord, alpha=0.5, edge_color="g", line_style="-"):
567
+
568
+ x0, y0, x1, y1 = box_coord
569
+ width = x1 - x0
570
+ height = y1 - y0
571
+
572
+ linewidth = max(self._default_font_size / 4, 1)
573
+
574
+ self.output.ax.add_patch(
575
+ mpl.patches.Rectangle(
576
+ (x0, y0),
577
+ width,
578
+ height,
579
+ fill=False,
580
+ edgecolor=edge_color,
581
+ linewidth=linewidth * self.output.scale,
582
+ alpha=alpha,
583
+ linestyle=line_style,
584
+ )
585
+ )
586
+ return self.output
587
+
588
+ def get_output(self):
589
+
590
+ return self.output
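
Beyond plain tokenization, QWenTokenizer also acts as the multimodal front end: `from_list_format` turns a list of image/text elements into the tagged prompt string, `to_list_format` parses a tagged string back into its elements, and `draw_bbox_on_latest_picture` renders any `<ref>`/`<box>` spans of a response onto the most recent image (box coordinates are normalized to a 0-1000 range). A hedged sketch assuming a loaded `tokenizer`; the file name and response string below are invented for illustration.

    # Compose a query containing one image and a text instruction.
    query = tokenizer.from_list_format([
        {"image": "my_picture.jpg"},                  # hypothetical local path or URL
        {"text": "Frame the dog in the picture."},
    ])
    # query == 'Picture 1:<img>my_picture.jpg</img>\nFrame the dog in the picture.'

    # Parse a tagged string back into text/image/ref/box elements.
    elements = tokenizer.to_list_format(query)

    # Suppose the model answered with a grounded box (coordinates in 0-1000 space).
    response = "<ref>the dog</ref><box>(120,180),(560,720)</box>"
    history = [(query, response)]
    image = tokenizer.draw_bbox_on_latest_picture(response, history)
    if image is not None:
        image.save("output_with_box.jpg")             # VisImage.save writes via matplotlib
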
tokenizer_config.json ADDED
@@ -0,0 +1,10 @@
1
+ {
2
+ "model_max_length": 8192,
3
+ "tokenizer_class": "QWenTokenizer",
4
+ "auto_map": {
5
+ "AutoTokenizer": [
6
+ "tokenization_qwen.QWenTokenizer",
7
+ null
8
+ ]
9
+ }
10
+ }
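
Because `auto_map` points `AutoTokenizer` at `tokenization_qwen.QWenTokenizer`, the custom tokenizer ships with the checkpoint and no library changes are needed; loading it only requires opting in to remote code. For example:

    from transformers import AutoTokenizer

    # trust_remote_code=True lets transformers import tokenization_qwen.py
    # from this repository instead of a built-in tokenizer class.
    tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen-VL-Chat", trust_remote_code=True)
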
visual.py ADDED
@@ -0,0 +1,426 @@
1
+ # Copyright (c) Alibaba Cloud.
2
+ #
3
+ # This source code is licensed under the license found in the
4
+ # LICENSE file in the root directory of this source tree.
5
+
6
+ from collections import OrderedDict
7
+ import math
8
+ import requests
9
+ from io import BytesIO
10
+ from functools import partial
11
+ from PIL import Image
12
+ from typing import Callable, Optional, Sequence, Tuple, List
13
+ import numpy as np
14
+
15
+ import torch
16
+ from torch import nn
17
+ from torch.nn import functional as F
18
+ from torch.nn.init import trunc_normal_
19
+ from torchvision import transforms
20
+ from torchvision.transforms import InterpolationMode
21
+
22
+
23
+ def get_abs_pos(abs_pos, tgt_size):
24
+ # abs_pos: L, C
25
+ # tgt_size: M
26
+ # return: M, C
27
+ src_size = int(math.sqrt(abs_pos.size(0)))
28
+ tgt_size = int(math.sqrt(tgt_size))
29
+ dtype = abs_pos.dtype
30
+
31
+ if src_size != tgt_size:
32
+ return F.interpolate(
33
+ abs_pos.float().reshape(1, src_size, src_size, -1).permute(0, 3, 1, 2),
34
+ size=(tgt_size, tgt_size),
35
+ mode="bicubic",
36
+ align_corners=False,
37
+ ).permute(0, 2, 3, 1).flatten(0, 2).to(dtype=dtype)
38
+ else:
39
+ return abs_pos
40
+
41
+ # https://github.com/facebookresearch/mae/blob/efb2a8062c206524e35e47d04501ed4f544c0ae8/util/pos_embed.py#L20
42
+ def get_2d_sincos_pos_embed(embed_dim, grid_size, cls_token=False):
43
+ """
44
+ grid_size: int of the grid height and width
45
+ return:
46
+ pos_embed: [grid_size*grid_size, embed_dim] or [1+grid_size*grid_size, embed_dim] (w/ or w/o cls_token)
47
+ """
48
+ grid_h = np.arange(grid_size, dtype=np.float32)
49
+ grid_w = np.arange(grid_size, dtype=np.float32)
50
+ grid = np.meshgrid(grid_w, grid_h) # here w goes first
51
+ grid = np.stack(grid, axis=0)
52
+
53
+ grid = grid.reshape([2, 1, grid_size, grid_size])
54
+ pos_embed = get_2d_sincos_pos_embed_from_grid(embed_dim, grid)
55
+ if cls_token:
56
+ pos_embed = np.concatenate([np.zeros([1, embed_dim]), pos_embed], axis=0)
57
+ return pos_embed
58
+
59
+
60
+ def get_2d_sincos_pos_embed_from_grid(embed_dim, grid):
61
+ assert embed_dim % 2 == 0
62
+
63
+ # use half of dimensions to encode grid_h
64
+ emb_h = get_1d_sincos_pos_embed_from_grid(embed_dim // 2, grid[0]) # (H*W, D/2)
65
+ emb_w = get_1d_sincos_pos_embed_from_grid(embed_dim // 2, grid[1]) # (H*W, D/2)
66
+
67
+ emb = np.concatenate([emb_h, emb_w], axis=1) # (H*W, D)
68
+ return emb
69
+
70
+
71
+ def get_1d_sincos_pos_embed_from_grid(embed_dim, pos):
72
+ """
73
+ embed_dim: output dimension for each position
74
+ pos: a list of positions to be encoded: size (M,)
75
+ out: (M, D)
76
+ """
77
+ assert embed_dim % 2 == 0
78
+ omega = np.arange(embed_dim // 2, dtype=np.float32)
79
+ omega /= embed_dim / 2.
80
+ omega = 1. / 10000**omega # (D/2,)
81
+
82
+ pos = pos.reshape(-1) # (M,)
83
+ out = np.einsum('m,d->md', pos, omega) # (M, D/2), outer product
84
+
85
+ emb_sin = np.sin(out) # (M, D/2)
86
+ emb_cos = np.cos(out) # (M, D/2)
87
+
88
+ emb = np.concatenate([emb_sin, emb_cos], axis=1) # (M, D)
89
+ return emb
90
+
91
+
92
+ class Resampler(nn.Module):
93
+ """
94
+ A 2D perceiver-resampler network: a single cross-attention layer attended by
95
+ (grid_size**2) learnable queries, with 2D sincos positional embeddings.
96
+ Outputs:
97
+ A tensor with the shape of (grid_size**2, embed_dim)
98
+ """
99
+ def __init__(
100
+ self,
101
+ grid_size,
102
+ embed_dim,
103
+ num_heads,
104
+ kv_dim=None,
105
+ norm_layer=nn.LayerNorm
106
+ ):
107
+ super().__init__()
108
+ self.num_queries = grid_size ** 2
109
+ self.embed_dim = embed_dim
110
+ self.num_heads = num_heads
111
+
112
+ self.pos_embed = nn.Parameter(
113
+ torch.from_numpy(get_2d_sincos_pos_embed(embed_dim, grid_size)).float()
114
+ ).requires_grad_(False)
115
+
116
+ self.query = nn.Parameter(torch.zeros(self.num_queries, embed_dim))
117
+ trunc_normal_(self.query, std=.02)
118
+
119
+ if kv_dim is not None and kv_dim != embed_dim:
120
+ self.kv_proj = nn.Linear(kv_dim, embed_dim, bias=False)
121
+ else:
122
+ self.kv_proj = nn.Identity()
123
+
124
+ self.attn = nn.MultiheadAttention(embed_dim, num_heads)
125
+ self.ln_q = norm_layer(embed_dim)
126
+ self.ln_kv = norm_layer(embed_dim)
127
+
128
+ self.apply(self._init_weights)
129
+
130
+ def _init_weights(self, m):
131
+ if isinstance(m, nn.Linear):
132
+ trunc_normal_(m.weight, std=.02)
133
+ if isinstance(m, nn.Linear) and m.bias is not None:
134
+ nn.init.constant_(m.bias, 0)
135
+ elif isinstance(m, nn.LayerNorm):
136
+ nn.init.constant_(m.bias, 0)
137
+ nn.init.constant_(m.weight, 1.0)
138
+
139
+ def forward(self, x, attn_mask=None):
140
+
141
+ pos_embed = get_abs_pos(self.pos_embed, x.size(1))
142
+
143
+ x = self.kv_proj(x)
144
+ x = self.ln_kv(x).permute(1, 0, 2)
145
+
146
+ N = x.shape[1]
147
+ q = self.ln_q(self.query)
148
+ out = self.attn(
149
+ self._repeat(q, N) + self.pos_embed.unsqueeze(1),
150
+ x + pos_embed.unsqueeze(1),
151
+ x,
152
+ attn_mask=attn_mask)[0]
153
+ return out.permute(1, 0, 2)
154
+
155
+ def _repeat(self, query, N: int):
156
+ return query.unsqueeze(1).repeat(1, N, 1)
157
+
158
+
159
+ class VisualAttention(nn.Module):
160
+ """self-attention layer class.
161
+
162
+ Self-attention layer takes input with size [s, b, h]
163
+ and returns output of the same size.
164
+ """
165
+
166
+ def __init__(self, embed_dim, num_heads,
167
+ bias=True, kdim=None, vdim=None):
168
+ super(VisualAttention, self).__init__()
169
+ self.embed_dim = embed_dim
170
+ self.kdim = kdim if kdim is not None else embed_dim
171
+ self.vdim = vdim if vdim is not None else embed_dim
172
+ self._qkv_same_embed_dim = self.kdim == embed_dim and self.vdim == embed_dim
173
+
174
+ self.num_heads = num_heads
175
+
176
+ # Per attention head and per partition values.
177
+ assert embed_dim % num_heads == 0
178
+ self.hidden_size_per_attention_head = embed_dim // num_heads
179
+ self.num_attention_heads_per_partition = num_heads
180
+ self.hidden_size_per_partition = embed_dim
181
+
182
+ # Strided linear layer.
183
+ assert self._qkv_same_embed_dim, 'Only self-attention is currently supported'
184
+ self.in_proj = nn.Linear(embed_dim, 3 * embed_dim)
185
+ self.out_proj = nn.Linear(embed_dim, embed_dim)
186
+ self.norm_factor = math.sqrt(self.hidden_size_per_attention_head)
187
+
188
+ def forward(self, query, key, value, attn_mask = None):
189
+ # query/key/value: [sq, b, h]
190
+ sq, b, _ = query.size()
191
+
192
+ assert query is key, 'Only self-attention is currently supported'
193
+ sk = sq
194
+ mixed_x_layer = self.in_proj(query)
195
+
196
+ # [sq, b, (np * 3 * hn)] --> [sq, b, np, 3 * hn]
197
+ new_tensor_shape = mixed_x_layer.size()[:-1] + \
198
+ (self.num_attention_heads_per_partition,
199
+ 3 * self.hidden_size_per_attention_head)
200
+ mixed_x_layer = mixed_x_layer.view(*new_tensor_shape)
201
+
202
+ # [sq, b, np, 3 * hn] --> 3 [sq, b, np, hn]
203
+ query_layer, key_layer, value_layer = mixed_x_layer.split(
204
+ self.hidden_size_per_attention_head, dim=-1)
205
+
206
+ # [sq, b, np, hn] -> [sq, b * np, hn]
207
+ query_layer = query_layer.view(sq,
208
+ b * self.num_attention_heads_per_partition,
209
+ self.hidden_size_per_attention_head).transpose(0, 1)
210
+ # [sk, b, np, hn] -> [sk, b * np, hn]
211
+ key_layer = key_layer.view(sk,
212
+ b * self.num_attention_heads_per_partition,
213
+ self.hidden_size_per_attention_head).transpose(0, 1)
214
+
215
+ q_scaled = query_layer / self.norm_factor
216
+ if attn_mask is not None:
217
+ attention_probs = torch.baddbmm(attn_mask, q_scaled, key_layer.transpose(-2, -1))
218
+ else:
219
+ attention_probs = torch.bmm(q_scaled, key_layer.transpose(-2, -1))
220
+ attention_probs = attention_probs.softmax(dim=-1)
221
+
222
+ value_layer = value_layer.view(sk,
223
+ b * self.num_attention_heads_per_partition,
224
+ self.hidden_size_per_attention_head).transpose(0, 1)
225
+
226
+ # matmul: [b * np, sq, hn]
227
+ context_layer = torch.bmm(attention_probs, value_layer)
228
+
229
+ # change view [b, np, sq, hn]
230
+ context_layer = context_layer.view(b,
231
+ self.num_attention_heads_per_partition,
232
+ sq, self.hidden_size_per_attention_head)
233
+
234
+ # [b, np, sq, hn] --> [sq, b, np, hn]
235
+ context_layer = context_layer.permute(2, 0, 1, 3).contiguous()
236
+
237
+ # [sq, b, np, hn] --> [sq, b, hp]
238
+ new_context_layer_shape = context_layer.size()[:-2] + \
239
+ (self.hidden_size_per_partition,)
240
+ context_layer = context_layer.view(*new_context_layer_shape)
241
+
242
+ output = self.out_proj(context_layer)
243
+
244
+ return output
245
+
246
+
247
+ class VisualAttentionBlock(nn.Module):
248
+ def __init__(
249
+ self,
250
+ d_model: int,
251
+ n_head: int,
252
+ mlp_ratio: float = 4.0,
253
+ act_layer: Callable = nn.GELU,
254
+ norm_layer: Callable = nn.LayerNorm,
255
+ is_cross_attention: bool = False,
256
+ ):
257
+ super().__init__()
258
+
259
+ self.ln_1 = norm_layer(d_model)
260
+ if is_cross_attention:
261
+ self.ln_1_kv = norm_layer(d_model)
262
+
263
+ self.ln_2 = norm_layer(d_model)
264
+ mlp_width = int(d_model * mlp_ratio)
265
+ self.attn = VisualAttention(d_model, n_head)
266
+ self.mlp = nn.Sequential(OrderedDict([
267
+ ("c_fc", nn.Linear(d_model, mlp_width)),
268
+ ("gelu", act_layer()),
269
+ ("c_proj", nn.Linear(mlp_width, d_model))
270
+ ]))
271
+
272
+ def attention(
273
+ self,
274
+ q_x: torch.Tensor,
275
+ k_x: Optional[torch.Tensor] = None,
276
+ v_x: Optional[torch.Tensor] = None,
277
+ attn_mask: Optional[torch.Tensor] = None,
278
+ ):
279
+ k_x = k_x if k_x is not None else q_x
280
+ v_x = v_x if v_x is not None else q_x
281
+
282
+ attn_mask = attn_mask.to(q_x.dtype) if attn_mask is not None else None
283
+ return self.attn(q_x, k_x, v_x, attn_mask=attn_mask)
284
+
285
+ def forward(
286
+ self,
287
+ q_x: torch.Tensor,
288
+ k_x: Optional[torch.Tensor] = None,
289
+ v_x: Optional[torch.Tensor] = None,
290
+ attn_mask: Optional[torch.Tensor] = None,
291
+ ):
292
+ k_x = self.ln_1_kv(k_x) if hasattr(self, "ln_1_kv") and k_x is not None else None
293
+ v_x = self.ln_1_kv(v_x) if hasattr(self, "ln_1_kv") and v_x is not None else None
294
+
295
+ x = q_x + self.attention(q_x=self.ln_1(q_x), k_x=k_x, v_x=v_x, attn_mask=attn_mask)
296
+ x = x + self.mlp(self.ln_2(x))
297
+ return x
298
+
299
+
300
+ class TransformerBlock(nn.Module):
301
+ def __init__(
302
+ self,
303
+ width: int,
304
+ layers: int,
305
+ heads: int,
306
+ mlp_ratio: float = 4.0,
307
+ act_layer: Callable = nn.GELU,
308
+ norm_layer: Callable = nn.LayerNorm,
309
+ ):
310
+ super().__init__()
311
+ self.width = width
312
+ self.layers = layers
313
+
314
+ self.resblocks = nn.ModuleList([
315
+ VisualAttentionBlock(
316
+ width, heads, mlp_ratio, act_layer=act_layer, norm_layer=norm_layer)
317
+ for _ in range(layers)
318
+ ])
319
+
320
+ def get_cast_dtype(self) -> torch.dtype:
321
+ return self.resblocks[0].mlp.c_fc.weight.dtype
322
+
323
+ def get_cast_device(self) -> torch.device:
324
+ return self.resblocks[0].mlp.c_fc.weight.device
325
+
326
+ def forward(self, x: torch.Tensor, attn_mask: Optional[torch.Tensor] = None):
327
+ for r in self.resblocks:
328
+ x = r(x, attn_mask=attn_mask)
329
+ return x
330
+
331
+
332
+ class VisionTransformer(nn.Module):
333
+
334
+ def __init__(
335
+ self,
336
+ image_size: int,
337
+ patch_size: int,
338
+ width: int,
339
+ layers: int,
340
+ heads: int,
341
+ mlp_ratio: float,
342
+ n_queries: int = 256,
343
+ output_dim: int = 512,
344
+ **kwargs
345
+ ):
346
+ super().__init__()
347
+ image_height, image_width = self.image_size = (image_size, image_size)
348
+ patch_height, patch_width = self.patch_size = (patch_size, patch_size)
349
+ self.grid_size = (image_height // patch_height, image_width // patch_width)
350
+ self.output_dim = output_dim
351
+
352
+ mean = (0.48145466, 0.4578275, 0.40821073)
353
+ std = (0.26862954, 0.26130258, 0.27577711)
354
+ self.image_transform = transforms.Compose([
355
+ transforms.Resize(
356
+ (image_size, image_size),
357
+ interpolation=InterpolationMode.BICUBIC
358
+ ),
359
+ transforms.ToTensor(),
360
+ transforms.Normalize(mean=mean, std=std),
361
+ ])
362
+
363
+ self.conv1 = nn.Conv2d(in_channels=3, out_channels=width, kernel_size=patch_size, stride=patch_size, bias=False)
364
+
365
+ # class embeddings and positional embeddings
366
+ scale = width ** -0.5
367
+ self.positional_embedding = nn.Parameter(scale * torch.randn(256, width))
368
+
369
+ norm_layer = partial(nn.LayerNorm, eps=1e-6)
370
+ act_layer = nn.GELU
371
+
372
+ self.ln_pre = norm_layer(width)
373
+ self.transformer = TransformerBlock(
374
+ width,
375
+ layers,
376
+ heads,
377
+ mlp_ratio,
378
+ act_layer=act_layer,
379
+ norm_layer=norm_layer,
380
+ )
381
+
382
+ self.attn_pool = Resampler(
383
+ grid_size=int(math.sqrt(n_queries)),
384
+ embed_dim=output_dim,
385
+ num_heads=output_dim // 128,
386
+ kv_dim=width,
387
+ norm_layer=norm_layer,
388
+ )
389
+ self.ln_post = norm_layer(output_dim)
390
+ self.proj = nn.Parameter((output_dim** -0.5) * torch.randn(output_dim, output_dim))
391
+
392
+ def forward(self, x: torch.Tensor):
393
+ x = x.to(
394
+ dtype=self.transformer.get_cast_dtype(),
395
+ device=self.transformer.get_cast_device(),
396
+ )
397
+ # to patches
398
+ x = self.conv1(x) # shape = [*, width, grid, grid]
399
+ x = x.reshape(x.shape[0], x.shape[1], -1) # shape = [*, width, grid ** 2]
400
+ x = x.permute(0, 2, 1) # shape = [*, grid ** 2, width]
401
+
402
+ x = x + get_abs_pos(self.positional_embedding, x.size(1))
403
+
404
+ x = self.ln_pre(x)
405
+
406
+ x = x.permute(1, 0, 2) # NLD -> LND
407
+ x = self.transformer(x)
408
+ x = x.permute(1, 0, 2) # LND -> NLD
409
+
410
+ x = self.attn_pool(x)
411
+ x = self.ln_post(x)
412
+ x = x @ self.proj
413
+
414
+ return x
415
+
416
+ def encode(self, image_paths: List[str]):
417
+ images = []
418
+ for image_path in image_paths:
419
+ if image_path.startswith("http://") or image_path.startswith("https://"):
420
+ image = Image.open(requests.get(image_path, stream=True).raw)
421
+ else:
422
+ image = Image.open(image_path)
423
+ image = image.convert("RGB")
424
+ images.append(self.image_transform(image))
425
+ images = torch.stack(images, dim=0)
426
+ return self(images)
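
`VisionTransformer.encode` is the image entry point: it loads each path or URL, applies the CLIP-style resize/normalize transform, runs the patch embedding and transformer blocks, and pools the result through the `Resampler` into a fixed block of `n_queries` visual tokens per image. Below is a standalone shape-check sketch with deliberately small, made-up hyperparameters (the released checkpoint defines the real values in its config); `my_picture.jpg` is a hypothetical local file.

    from visual import VisionTransformer   # assumes visual.py is importable as a module

    vit = VisionTransformer(
        image_size=224,
        patch_size=14,     # 224 / 14 = 16, so 16 * 16 = 256 patches
        width=1024,
        layers=4,          # illustrative; far shallower than the released model
        heads=16,
        mlp_ratio=4.0,
        n_queries=256,
        output_dim=512,
    )
    features = vit.encode(["my_picture.jpg"])
    print(features.shape)  # torch.Size([1, 256, 512]): one (n_queries, output_dim) block per image
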