prince-canuma committed on
Commit
cfed5ed
1 Parent(s): 13513bf

Upload folder using huggingface_hub

README.md ADDED
@@ -0,0 +1,21 @@
+ ---
+ license: mit
+ license_link: https://huggingface.co/microsoft/Florence-2-base-ft/resolve/main/LICENSE
+ pipeline_tag: image-text-to-text
+ tags:
+ - vision
+ - mlx
+ ---
+
+ # mlx-community/Florence-2-base-ft-4bit
+ This model was converted to MLX format from [`prince-canuma/Florence-2-base-ft`](https://huggingface.co/prince-canuma/Florence-2-base-ft) using mlx-vlm version **0.1.0**.
+ Refer to the [original model card](https://huggingface.co/prince-canuma/Florence-2-base-ft) for more details on the model.
+ ## Use with mlx
+
+ ```bash
+ pip install -U mlx-vlm
+ ```
+
+ ```bash
+ python -m mlx_vlm.generate --model mlx-community/Florence-2-base-ft-4bit --max-tokens 100 --temp 0.0
+ ```
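
The README above only documents the CLI entry point. For completeness, here is a minimal Python sketch of the same call; it is not part of the uploaded files and assumes mlx-vlm's top-level `load`/`generate` helpers behave as in the mlx-vlm project README around version 0.1.x. Argument names and order have shifted between releases, so treat it as a starting point rather than a definitive recipe.

```python
# Hypothetical Python-API equivalent of the CLI example above (assumption:
# mlx-vlm 0.1.x exposes load() and generate() at the package top level).
from mlx_vlm import load, generate

model_path = "mlx-community/Florence-2-base-ft-4bit"
model, processor = load(model_path)  # downloads and loads the 4-bit MLX weights

# Florence-2 is prompted with task tokens rather than free-form chat,
# e.g. <CAPTION>, <DETAILED_CAPTION>, <OD>, <OCR>.
prompt = "<CAPTION>"
image = "http://images.cocodataset.org/val2017/000000039769.jpg"  # local path or URL

# Keyword arguments used defensively; check help(generate) for your installed
# mlx-vlm version, as the positional order of prompt/image has changed over time.
output = generate(model, processor, prompt=prompt, image=image, max_tokens=100, verbose=True)
print(output)
```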
added_tokens.json ADDED
@@ -0,0 +1,1026 @@
1
+ {
2
+ "</cap>": 51270,
3
+ "</dcap>": 51274,
4
+ "</grounding>": 51276,
5
+ "</ncap>": 51272,
6
+ "</ocr>": 50268,
7
+ "</od>": 50266,
8
+ "</poly>": 51287,
9
+ "</proposal>": 51285,
10
+ "</region_cap>": 51281,
11
+ "</region_to_desciption>": 51283,
12
+ "</seg>": 51278,
13
+ "<and>": 51288,
14
+ "<cap>": 51269,
15
+ "<dcap>": 51273,
16
+ "<grounding>": 51275,
17
+ "<loc_0>": 50269,
18
+ "<loc_100>": 50369,
19
+ "<loc_101>": 50370,
20
+ "<loc_102>": 50371,
21
+ "<loc_103>": 50372,
22
+ "<loc_104>": 50373,
23
+ "<loc_105>": 50374,
24
+ "<loc_106>": 50375,
25
+ "<loc_107>": 50376,
26
+ "<loc_108>": 50377,
27
+ "<loc_109>": 50378,
28
+ "<loc_10>": 50279,
29
+ "<loc_110>": 50379,
30
+ "<loc_111>": 50380,
31
+ "<loc_112>": 50381,
32
+ "<loc_113>": 50382,
33
+ "<loc_114>": 50383,
34
+ "<loc_115>": 50384,
35
+ "<loc_116>": 50385,
36
+ "<loc_117>": 50386,
37
+ "<loc_118>": 50387,
38
+ "<loc_119>": 50388,
39
+ "<loc_11>": 50280,
40
+ "<loc_120>": 50389,
41
+ "<loc_121>": 50390,
42
+ "<loc_122>": 50391,
43
+ "<loc_123>": 50392,
44
+ "<loc_124>": 50393,
45
+ "<loc_125>": 50394,
46
+ "<loc_126>": 50395,
47
+ "<loc_127>": 50396,
48
+ "<loc_128>": 50397,
49
+ "<loc_129>": 50398,
50
+ "<loc_12>": 50281,
51
+ "<loc_130>": 50399,
52
+ "<loc_131>": 50400,
53
+ "<loc_132>": 50401,
54
+ "<loc_133>": 50402,
55
+ "<loc_134>": 50403,
56
+ "<loc_135>": 50404,
57
+ "<loc_136>": 50405,
58
+ "<loc_137>": 50406,
59
+ "<loc_138>": 50407,
60
+ "<loc_139>": 50408,
61
+ "<loc_13>": 50282,
62
+ "<loc_140>": 50409,
63
+ "<loc_141>": 50410,
64
+ "<loc_142>": 50411,
65
+ "<loc_143>": 50412,
66
+ "<loc_144>": 50413,
67
+ "<loc_145>": 50414,
68
+ "<loc_146>": 50415,
69
+ "<loc_147>": 50416,
70
+ "<loc_148>": 50417,
71
+ "<loc_149>": 50418,
72
+ "<loc_14>": 50283,
73
+ "<loc_150>": 50419,
74
+ "<loc_151>": 50420,
75
+ "<loc_152>": 50421,
76
+ "<loc_153>": 50422,
77
+ "<loc_154>": 50423,
78
+ "<loc_155>": 50424,
79
+ "<loc_156>": 50425,
80
+ "<loc_157>": 50426,
81
+ "<loc_158>": 50427,
82
+ "<loc_159>": 50428,
83
+ "<loc_15>": 50284,
84
+ "<loc_160>": 50429,
85
+ "<loc_161>": 50430,
86
+ "<loc_162>": 50431,
87
+ "<loc_163>": 50432,
88
+ "<loc_164>": 50433,
89
+ "<loc_165>": 50434,
90
+ "<loc_166>": 50435,
91
+ "<loc_167>": 50436,
92
+ "<loc_168>": 50437,
93
+ "<loc_169>": 50438,
94
+ "<loc_16>": 50285,
95
+ "<loc_170>": 50439,
96
+ "<loc_171>": 50440,
97
+ "<loc_172>": 50441,
98
+ "<loc_173>": 50442,
99
+ "<loc_174>": 50443,
100
+ "<loc_175>": 50444,
101
+ "<loc_176>": 50445,
102
+ "<loc_177>": 50446,
103
+ "<loc_178>": 50447,
104
+ "<loc_179>": 50448,
105
+ "<loc_17>": 50286,
106
+ "<loc_180>": 50449,
107
+ "<loc_181>": 50450,
108
+ "<loc_182>": 50451,
109
+ "<loc_183>": 50452,
110
+ "<loc_184>": 50453,
111
+ "<loc_185>": 50454,
112
+ "<loc_186>": 50455,
113
+ "<loc_187>": 50456,
114
+ "<loc_188>": 50457,
115
+ "<loc_189>": 50458,
116
+ "<loc_18>": 50287,
117
+ "<loc_190>": 50459,
118
+ "<loc_191>": 50460,
119
+ "<loc_192>": 50461,
120
+ "<loc_193>": 50462,
121
+ "<loc_194>": 50463,
122
+ "<loc_195>": 50464,
123
+ "<loc_196>": 50465,
124
+ "<loc_197>": 50466,
125
+ "<loc_198>": 50467,
126
+ "<loc_199>": 50468,
127
+ "<loc_19>": 50288,
128
+ "<loc_1>": 50270,
129
+ "<loc_200>": 50469,
130
+ "<loc_201>": 50470,
131
+ "<loc_202>": 50471,
132
+ "<loc_203>": 50472,
133
+ "<loc_204>": 50473,
134
+ "<loc_205>": 50474,
135
+ "<loc_206>": 50475,
136
+ "<loc_207>": 50476,
137
+ "<loc_208>": 50477,
138
+ "<loc_209>": 50478,
139
+ "<loc_20>": 50289,
140
+ "<loc_210>": 50479,
141
+ "<loc_211>": 50480,
142
+ "<loc_212>": 50481,
143
+ "<loc_213>": 50482,
144
+ "<loc_214>": 50483,
145
+ "<loc_215>": 50484,
146
+ "<loc_216>": 50485,
147
+ "<loc_217>": 50486,
148
+ "<loc_218>": 50487,
149
+ "<loc_219>": 50488,
150
+ "<loc_21>": 50290,
151
+ "<loc_220>": 50489,
152
+ "<loc_221>": 50490,
153
+ "<loc_222>": 50491,
154
+ "<loc_223>": 50492,
155
+ "<loc_224>": 50493,
156
+ "<loc_225>": 50494,
157
+ "<loc_226>": 50495,
158
+ "<loc_227>": 50496,
159
+ "<loc_228>": 50497,
160
+ "<loc_229>": 50498,
161
+ "<loc_22>": 50291,
162
+ "<loc_230>": 50499,
163
+ "<loc_231>": 50500,
164
+ "<loc_232>": 50501,
165
+ "<loc_233>": 50502,
166
+ "<loc_234>": 50503,
167
+ "<loc_235>": 50504,
168
+ "<loc_236>": 50505,
169
+ "<loc_237>": 50506,
170
+ "<loc_238>": 50507,
171
+ "<loc_239>": 50508,
172
+ "<loc_23>": 50292,
173
+ "<loc_240>": 50509,
174
+ "<loc_241>": 50510,
175
+ "<loc_242>": 50511,
176
+ "<loc_243>": 50512,
177
+ "<loc_244>": 50513,
178
+ "<loc_245>": 50514,
179
+ "<loc_246>": 50515,
180
+ "<loc_247>": 50516,
181
+ "<loc_248>": 50517,
182
+ "<loc_249>": 50518,
183
+ "<loc_24>": 50293,
184
+ "<loc_250>": 50519,
185
+ "<loc_251>": 50520,
186
+ "<loc_252>": 50521,
187
+ "<loc_253>": 50522,
188
+ "<loc_254>": 50523,
189
+ "<loc_255>": 50524,
190
+ "<loc_256>": 50525,
191
+ "<loc_257>": 50526,
192
+ "<loc_258>": 50527,
193
+ "<loc_259>": 50528,
194
+ "<loc_25>": 50294,
195
+ "<loc_260>": 50529,
196
+ "<loc_261>": 50530,
197
+ "<loc_262>": 50531,
198
+ "<loc_263>": 50532,
199
+ "<loc_264>": 50533,
200
+ "<loc_265>": 50534,
201
+ "<loc_266>": 50535,
202
+ "<loc_267>": 50536,
203
+ "<loc_268>": 50537,
204
+ "<loc_269>": 50538,
205
+ "<loc_26>": 50295,
206
+ "<loc_270>": 50539,
207
+ "<loc_271>": 50540,
208
+ "<loc_272>": 50541,
209
+ "<loc_273>": 50542,
210
+ "<loc_274>": 50543,
211
+ "<loc_275>": 50544,
212
+ "<loc_276>": 50545,
213
+ "<loc_277>": 50546,
214
+ "<loc_278>": 50547,
215
+ "<loc_279>": 50548,
216
+ "<loc_27>": 50296,
217
+ "<loc_280>": 50549,
218
+ "<loc_281>": 50550,
219
+ "<loc_282>": 50551,
220
+ "<loc_283>": 50552,
221
+ "<loc_284>": 50553,
222
+ "<loc_285>": 50554,
223
+ "<loc_286>": 50555,
224
+ "<loc_287>": 50556,
225
+ "<loc_288>": 50557,
226
+ "<loc_289>": 50558,
227
+ "<loc_28>": 50297,
228
+ "<loc_290>": 50559,
229
+ "<loc_291>": 50560,
230
+ "<loc_292>": 50561,
231
+ "<loc_293>": 50562,
232
+ "<loc_294>": 50563,
233
+ "<loc_295>": 50564,
234
+ "<loc_296>": 50565,
235
+ "<loc_297>": 50566,
236
+ "<loc_298>": 50567,
237
+ "<loc_299>": 50568,
238
+ "<loc_29>": 50298,
239
+ "<loc_2>": 50271,
240
+ "<loc_300>": 50569,
241
+ "<loc_301>": 50570,
242
+ "<loc_302>": 50571,
243
+ "<loc_303>": 50572,
244
+ "<loc_304>": 50573,
245
+ "<loc_305>": 50574,
246
+ "<loc_306>": 50575,
247
+ "<loc_307>": 50576,
248
+ "<loc_308>": 50577,
249
+ "<loc_309>": 50578,
250
+ "<loc_30>": 50299,
251
+ "<loc_310>": 50579,
252
+ "<loc_311>": 50580,
253
+ "<loc_312>": 50581,
254
+ "<loc_313>": 50582,
255
+ "<loc_314>": 50583,
256
+ "<loc_315>": 50584,
257
+ "<loc_316>": 50585,
258
+ "<loc_317>": 50586,
259
+ "<loc_318>": 50587,
260
+ "<loc_319>": 50588,
261
+ "<loc_31>": 50300,
262
+ "<loc_320>": 50589,
263
+ "<loc_321>": 50590,
264
+ "<loc_322>": 50591,
265
+ "<loc_323>": 50592,
266
+ "<loc_324>": 50593,
267
+ "<loc_325>": 50594,
268
+ "<loc_326>": 50595,
269
+ "<loc_327>": 50596,
270
+ "<loc_328>": 50597,
271
+ "<loc_329>": 50598,
272
+ "<loc_32>": 50301,
273
+ "<loc_330>": 50599,
274
+ "<loc_331>": 50600,
275
+ "<loc_332>": 50601,
276
+ "<loc_333>": 50602,
277
+ "<loc_334>": 50603,
278
+ "<loc_335>": 50604,
279
+ "<loc_336>": 50605,
280
+ "<loc_337>": 50606,
281
+ "<loc_338>": 50607,
282
+ "<loc_339>": 50608,
283
+ "<loc_33>": 50302,
284
+ "<loc_340>": 50609,
285
+ "<loc_341>": 50610,
286
+ "<loc_342>": 50611,
287
+ "<loc_343>": 50612,
288
+ "<loc_344>": 50613,
289
+ "<loc_345>": 50614,
290
+ "<loc_346>": 50615,
291
+ "<loc_347>": 50616,
292
+ "<loc_348>": 50617,
293
+ "<loc_349>": 50618,
294
+ "<loc_34>": 50303,
295
+ "<loc_350>": 50619,
296
+ "<loc_351>": 50620,
297
+ "<loc_352>": 50621,
298
+ "<loc_353>": 50622,
299
+ "<loc_354>": 50623,
300
+ "<loc_355>": 50624,
301
+ "<loc_356>": 50625,
302
+ "<loc_357>": 50626,
303
+ "<loc_358>": 50627,
304
+ "<loc_359>": 50628,
305
+ "<loc_35>": 50304,
306
+ "<loc_360>": 50629,
307
+ "<loc_361>": 50630,
308
+ "<loc_362>": 50631,
309
+ "<loc_363>": 50632,
310
+ "<loc_364>": 50633,
311
+ "<loc_365>": 50634,
312
+ "<loc_366>": 50635,
313
+ "<loc_367>": 50636,
314
+ "<loc_368>": 50637,
315
+ "<loc_369>": 50638,
316
+ "<loc_36>": 50305,
317
+ "<loc_370>": 50639,
318
+ "<loc_371>": 50640,
319
+ "<loc_372>": 50641,
320
+ "<loc_373>": 50642,
321
+ "<loc_374>": 50643,
322
+ "<loc_375>": 50644,
323
+ "<loc_376>": 50645,
324
+ "<loc_377>": 50646,
325
+ "<loc_378>": 50647,
326
+ "<loc_379>": 50648,
327
+ "<loc_37>": 50306,
328
+ "<loc_380>": 50649,
329
+ "<loc_381>": 50650,
330
+ "<loc_382>": 50651,
331
+ "<loc_383>": 50652,
332
+ "<loc_384>": 50653,
333
+ "<loc_385>": 50654,
334
+ "<loc_386>": 50655,
335
+ "<loc_387>": 50656,
336
+ "<loc_388>": 50657,
337
+ "<loc_389>": 50658,
338
+ "<loc_38>": 50307,
339
+ "<loc_390>": 50659,
340
+ "<loc_391>": 50660,
341
+ "<loc_392>": 50661,
342
+ "<loc_393>": 50662,
343
+ "<loc_394>": 50663,
344
+ "<loc_395>": 50664,
345
+ "<loc_396>": 50665,
346
+ "<loc_397>": 50666,
347
+ "<loc_398>": 50667,
348
+ "<loc_399>": 50668,
349
+ "<loc_39>": 50308,
350
+ "<loc_3>": 50272,
351
+ "<loc_400>": 50669,
352
+ "<loc_401>": 50670,
353
+ "<loc_402>": 50671,
354
+ "<loc_403>": 50672,
355
+ "<loc_404>": 50673,
356
+ "<loc_405>": 50674,
357
+ "<loc_406>": 50675,
358
+ "<loc_407>": 50676,
359
+ "<loc_408>": 50677,
360
+ "<loc_409>": 50678,
361
+ "<loc_40>": 50309,
362
+ "<loc_410>": 50679,
363
+ "<loc_411>": 50680,
364
+ "<loc_412>": 50681,
365
+ "<loc_413>": 50682,
366
+ "<loc_414>": 50683,
367
+ "<loc_415>": 50684,
368
+ "<loc_416>": 50685,
369
+ "<loc_417>": 50686,
370
+ "<loc_418>": 50687,
371
+ "<loc_419>": 50688,
372
+ "<loc_41>": 50310,
373
+ "<loc_420>": 50689,
374
+ "<loc_421>": 50690,
375
+ "<loc_422>": 50691,
376
+ "<loc_423>": 50692,
377
+ "<loc_424>": 50693,
378
+ "<loc_425>": 50694,
379
+ "<loc_426>": 50695,
380
+ "<loc_427>": 50696,
381
+ "<loc_428>": 50697,
382
+ "<loc_429>": 50698,
383
+ "<loc_42>": 50311,
384
+ "<loc_430>": 50699,
385
+ "<loc_431>": 50700,
386
+ "<loc_432>": 50701,
387
+ "<loc_433>": 50702,
388
+ "<loc_434>": 50703,
389
+ "<loc_435>": 50704,
390
+ "<loc_436>": 50705,
391
+ "<loc_437>": 50706,
392
+ "<loc_438>": 50707,
393
+ "<loc_439>": 50708,
394
+ "<loc_43>": 50312,
395
+ "<loc_440>": 50709,
396
+ "<loc_441>": 50710,
397
+ "<loc_442>": 50711,
398
+ "<loc_443>": 50712,
399
+ "<loc_444>": 50713,
400
+ "<loc_445>": 50714,
401
+ "<loc_446>": 50715,
402
+ "<loc_447>": 50716,
403
+ "<loc_448>": 50717,
404
+ "<loc_449>": 50718,
405
+ "<loc_44>": 50313,
406
+ "<loc_450>": 50719,
407
+ "<loc_451>": 50720,
408
+ "<loc_452>": 50721,
409
+ "<loc_453>": 50722,
410
+ "<loc_454>": 50723,
411
+ "<loc_455>": 50724,
412
+ "<loc_456>": 50725,
413
+ "<loc_457>": 50726,
414
+ "<loc_458>": 50727,
415
+ "<loc_459>": 50728,
416
+ "<loc_45>": 50314,
417
+ "<loc_460>": 50729,
418
+ "<loc_461>": 50730,
419
+ "<loc_462>": 50731,
420
+ "<loc_463>": 50732,
421
+ "<loc_464>": 50733,
422
+ "<loc_465>": 50734,
423
+ "<loc_466>": 50735,
424
+ "<loc_467>": 50736,
425
+ "<loc_468>": 50737,
426
+ "<loc_469>": 50738,
427
+ "<loc_46>": 50315,
428
+ "<loc_470>": 50739,
429
+ "<loc_471>": 50740,
430
+ "<loc_472>": 50741,
431
+ "<loc_473>": 50742,
432
+ "<loc_474>": 50743,
433
+ "<loc_475>": 50744,
434
+ "<loc_476>": 50745,
435
+ "<loc_477>": 50746,
436
+ "<loc_478>": 50747,
437
+ "<loc_479>": 50748,
438
+ "<loc_47>": 50316,
439
+ "<loc_480>": 50749,
440
+ "<loc_481>": 50750,
441
+ "<loc_482>": 50751,
442
+ "<loc_483>": 50752,
443
+ "<loc_484>": 50753,
444
+ "<loc_485>": 50754,
445
+ "<loc_486>": 50755,
446
+ "<loc_487>": 50756,
447
+ "<loc_488>": 50757,
448
+ "<loc_489>": 50758,
449
+ "<loc_48>": 50317,
450
+ "<loc_490>": 50759,
451
+ "<loc_491>": 50760,
452
+ "<loc_492>": 50761,
453
+ "<loc_493>": 50762,
454
+ "<loc_494>": 50763,
455
+ "<loc_495>": 50764,
456
+ "<loc_496>": 50765,
457
+ "<loc_497>": 50766,
458
+ "<loc_498>": 50767,
459
+ "<loc_499>": 50768,
460
+ "<loc_49>": 50318,
461
+ "<loc_4>": 50273,
462
+ "<loc_500>": 50769,
463
+ "<loc_501>": 50770,
464
+ "<loc_502>": 50771,
465
+ "<loc_503>": 50772,
466
+ "<loc_504>": 50773,
467
+ "<loc_505>": 50774,
468
+ "<loc_506>": 50775,
469
+ "<loc_507>": 50776,
470
+ "<loc_508>": 50777,
471
+ "<loc_509>": 50778,
472
+ "<loc_50>": 50319,
473
+ "<loc_510>": 50779,
474
+ "<loc_511>": 50780,
475
+ "<loc_512>": 50781,
476
+ "<loc_513>": 50782,
477
+ "<loc_514>": 50783,
478
+ "<loc_515>": 50784,
479
+ "<loc_516>": 50785,
480
+ "<loc_517>": 50786,
481
+ "<loc_518>": 50787,
482
+ "<loc_519>": 50788,
483
+ "<loc_51>": 50320,
484
+ "<loc_520>": 50789,
485
+ "<loc_521>": 50790,
486
+ "<loc_522>": 50791,
487
+ "<loc_523>": 50792,
488
+ "<loc_524>": 50793,
489
+ "<loc_525>": 50794,
490
+ "<loc_526>": 50795,
491
+ "<loc_527>": 50796,
492
+ "<loc_528>": 50797,
493
+ "<loc_529>": 50798,
494
+ "<loc_52>": 50321,
495
+ "<loc_530>": 50799,
496
+ "<loc_531>": 50800,
497
+ "<loc_532>": 50801,
498
+ "<loc_533>": 50802,
499
+ "<loc_534>": 50803,
500
+ "<loc_535>": 50804,
501
+ "<loc_536>": 50805,
502
+ "<loc_537>": 50806,
503
+ "<loc_538>": 50807,
504
+ "<loc_539>": 50808,
505
+ "<loc_53>": 50322,
506
+ "<loc_540>": 50809,
507
+ "<loc_541>": 50810,
508
+ "<loc_542>": 50811,
509
+ "<loc_543>": 50812,
510
+ "<loc_544>": 50813,
511
+ "<loc_545>": 50814,
512
+ "<loc_546>": 50815,
513
+ "<loc_547>": 50816,
514
+ "<loc_548>": 50817,
515
+ "<loc_549>": 50818,
516
+ "<loc_54>": 50323,
517
+ "<loc_550>": 50819,
518
+ "<loc_551>": 50820,
519
+ "<loc_552>": 50821,
520
+ "<loc_553>": 50822,
521
+ "<loc_554>": 50823,
522
+ "<loc_555>": 50824,
523
+ "<loc_556>": 50825,
524
+ "<loc_557>": 50826,
525
+ "<loc_558>": 50827,
526
+ "<loc_559>": 50828,
527
+ "<loc_55>": 50324,
528
+ "<loc_560>": 50829,
529
+ "<loc_561>": 50830,
530
+ "<loc_562>": 50831,
531
+ "<loc_563>": 50832,
532
+ "<loc_564>": 50833,
533
+ "<loc_565>": 50834,
534
+ "<loc_566>": 50835,
535
+ "<loc_567>": 50836,
536
+ "<loc_568>": 50837,
537
+ "<loc_569>": 50838,
538
+ "<loc_56>": 50325,
539
+ "<loc_570>": 50839,
540
+ "<loc_571>": 50840,
541
+ "<loc_572>": 50841,
542
+ "<loc_573>": 50842,
543
+ "<loc_574>": 50843,
544
+ "<loc_575>": 50844,
545
+ "<loc_576>": 50845,
546
+ "<loc_577>": 50846,
547
+ "<loc_578>": 50847,
548
+ "<loc_579>": 50848,
549
+ "<loc_57>": 50326,
550
+ "<loc_580>": 50849,
551
+ "<loc_581>": 50850,
552
+ "<loc_582>": 50851,
553
+ "<loc_583>": 50852,
554
+ "<loc_584>": 50853,
555
+ "<loc_585>": 50854,
556
+ "<loc_586>": 50855,
557
+ "<loc_587>": 50856,
558
+ "<loc_588>": 50857,
559
+ "<loc_589>": 50858,
560
+ "<loc_58>": 50327,
561
+ "<loc_590>": 50859,
562
+ "<loc_591>": 50860,
563
+ "<loc_592>": 50861,
564
+ "<loc_593>": 50862,
565
+ "<loc_594>": 50863,
566
+ "<loc_595>": 50864,
567
+ "<loc_596>": 50865,
568
+ "<loc_597>": 50866,
569
+ "<loc_598>": 50867,
570
+ "<loc_599>": 50868,
571
+ "<loc_59>": 50328,
572
+ "<loc_5>": 50274,
573
+ "<loc_600>": 50869,
574
+ "<loc_601>": 50870,
575
+ "<loc_602>": 50871,
576
+ "<loc_603>": 50872,
577
+ "<loc_604>": 50873,
578
+ "<loc_605>": 50874,
579
+ "<loc_606>": 50875,
580
+ "<loc_607>": 50876,
581
+ "<loc_608>": 50877,
582
+ "<loc_609>": 50878,
583
+ "<loc_60>": 50329,
584
+ "<loc_610>": 50879,
585
+ "<loc_611>": 50880,
586
+ "<loc_612>": 50881,
587
+ "<loc_613>": 50882,
588
+ "<loc_614>": 50883,
589
+ "<loc_615>": 50884,
590
+ "<loc_616>": 50885,
591
+ "<loc_617>": 50886,
592
+ "<loc_618>": 50887,
593
+ "<loc_619>": 50888,
594
+ "<loc_61>": 50330,
595
+ "<loc_620>": 50889,
596
+ "<loc_621>": 50890,
597
+ "<loc_622>": 50891,
598
+ "<loc_623>": 50892,
599
+ "<loc_624>": 50893,
600
+ "<loc_625>": 50894,
601
+ "<loc_626>": 50895,
602
+ "<loc_627>": 50896,
603
+ "<loc_628>": 50897,
604
+ "<loc_629>": 50898,
605
+ "<loc_62>": 50331,
606
+ "<loc_630>": 50899,
607
+ "<loc_631>": 50900,
608
+ "<loc_632>": 50901,
609
+ "<loc_633>": 50902,
610
+ "<loc_634>": 50903,
611
+ "<loc_635>": 50904,
612
+ "<loc_636>": 50905,
613
+ "<loc_637>": 50906,
614
+ "<loc_638>": 50907,
615
+ "<loc_639>": 50908,
616
+ "<loc_63>": 50332,
617
+ "<loc_640>": 50909,
618
+ "<loc_641>": 50910,
619
+ "<loc_642>": 50911,
620
+ "<loc_643>": 50912,
621
+ "<loc_644>": 50913,
622
+ "<loc_645>": 50914,
623
+ "<loc_646>": 50915,
624
+ "<loc_647>": 50916,
625
+ "<loc_648>": 50917,
626
+ "<loc_649>": 50918,
627
+ "<loc_64>": 50333,
628
+ "<loc_650>": 50919,
629
+ "<loc_651>": 50920,
630
+ "<loc_652>": 50921,
631
+ "<loc_653>": 50922,
632
+ "<loc_654>": 50923,
633
+ "<loc_655>": 50924,
634
+ "<loc_656>": 50925,
635
+ "<loc_657>": 50926,
636
+ "<loc_658>": 50927,
637
+ "<loc_659>": 50928,
638
+ "<loc_65>": 50334,
639
+ "<loc_660>": 50929,
640
+ "<loc_661>": 50930,
641
+ "<loc_662>": 50931,
642
+ "<loc_663>": 50932,
643
+ "<loc_664>": 50933,
644
+ "<loc_665>": 50934,
645
+ "<loc_666>": 50935,
646
+ "<loc_667>": 50936,
647
+ "<loc_668>": 50937,
648
+ "<loc_669>": 50938,
649
+ "<loc_66>": 50335,
650
+ "<loc_670>": 50939,
651
+ "<loc_671>": 50940,
652
+ "<loc_672>": 50941,
653
+ "<loc_673>": 50942,
654
+ "<loc_674>": 50943,
655
+ "<loc_675>": 50944,
656
+ "<loc_676>": 50945,
657
+ "<loc_677>": 50946,
658
+ "<loc_678>": 50947,
659
+ "<loc_679>": 50948,
660
+ "<loc_67>": 50336,
661
+ "<loc_680>": 50949,
662
+ "<loc_681>": 50950,
663
+ "<loc_682>": 50951,
664
+ "<loc_683>": 50952,
665
+ "<loc_684>": 50953,
666
+ "<loc_685>": 50954,
667
+ "<loc_686>": 50955,
668
+ "<loc_687>": 50956,
669
+ "<loc_688>": 50957,
670
+ "<loc_689>": 50958,
671
+ "<loc_68>": 50337,
672
+ "<loc_690>": 50959,
673
+ "<loc_691>": 50960,
674
+ "<loc_692>": 50961,
675
+ "<loc_693>": 50962,
676
+ "<loc_694>": 50963,
677
+ "<loc_695>": 50964,
678
+ "<loc_696>": 50965,
679
+ "<loc_697>": 50966,
680
+ "<loc_698>": 50967,
681
+ "<loc_699>": 50968,
682
+ "<loc_69>": 50338,
683
+ "<loc_6>": 50275,
684
+ "<loc_700>": 50969,
685
+ "<loc_701>": 50970,
686
+ "<loc_702>": 50971,
687
+ "<loc_703>": 50972,
688
+ "<loc_704>": 50973,
689
+ "<loc_705>": 50974,
690
+ "<loc_706>": 50975,
691
+ "<loc_707>": 50976,
692
+ "<loc_708>": 50977,
693
+ "<loc_709>": 50978,
694
+ "<loc_70>": 50339,
695
+ "<loc_710>": 50979,
696
+ "<loc_711>": 50980,
697
+ "<loc_712>": 50981,
698
+ "<loc_713>": 50982,
699
+ "<loc_714>": 50983,
700
+ "<loc_715>": 50984,
701
+ "<loc_716>": 50985,
702
+ "<loc_717>": 50986,
703
+ "<loc_718>": 50987,
704
+ "<loc_719>": 50988,
705
+ "<loc_71>": 50340,
706
+ "<loc_720>": 50989,
707
+ "<loc_721>": 50990,
708
+ "<loc_722>": 50991,
709
+ "<loc_723>": 50992,
710
+ "<loc_724>": 50993,
711
+ "<loc_725>": 50994,
712
+ "<loc_726>": 50995,
713
+ "<loc_727>": 50996,
714
+ "<loc_728>": 50997,
715
+ "<loc_729>": 50998,
716
+ "<loc_72>": 50341,
717
+ "<loc_730>": 50999,
718
+ "<loc_731>": 51000,
719
+ "<loc_732>": 51001,
720
+ "<loc_733>": 51002,
721
+ "<loc_734>": 51003,
722
+ "<loc_735>": 51004,
723
+ "<loc_736>": 51005,
724
+ "<loc_737>": 51006,
725
+ "<loc_738>": 51007,
726
+ "<loc_739>": 51008,
727
+ "<loc_73>": 50342,
728
+ "<loc_740>": 51009,
729
+ "<loc_741>": 51010,
730
+ "<loc_742>": 51011,
731
+ "<loc_743>": 51012,
732
+ "<loc_744>": 51013,
733
+ "<loc_745>": 51014,
734
+ "<loc_746>": 51015,
735
+ "<loc_747>": 51016,
736
+ "<loc_748>": 51017,
737
+ "<loc_749>": 51018,
738
+ "<loc_74>": 50343,
739
+ "<loc_750>": 51019,
740
+ "<loc_751>": 51020,
741
+ "<loc_752>": 51021,
742
+ "<loc_753>": 51022,
743
+ "<loc_754>": 51023,
744
+ "<loc_755>": 51024,
745
+ "<loc_756>": 51025,
746
+ "<loc_757>": 51026,
747
+ "<loc_758>": 51027,
748
+ "<loc_759>": 51028,
749
+ "<loc_75>": 50344,
750
+ "<loc_760>": 51029,
751
+ "<loc_761>": 51030,
752
+ "<loc_762>": 51031,
753
+ "<loc_763>": 51032,
754
+ "<loc_764>": 51033,
755
+ "<loc_765>": 51034,
756
+ "<loc_766>": 51035,
757
+ "<loc_767>": 51036,
758
+ "<loc_768>": 51037,
759
+ "<loc_769>": 51038,
760
+ "<loc_76>": 50345,
761
+ "<loc_770>": 51039,
762
+ "<loc_771>": 51040,
763
+ "<loc_772>": 51041,
764
+ "<loc_773>": 51042,
765
+ "<loc_774>": 51043,
766
+ "<loc_775>": 51044,
767
+ "<loc_776>": 51045,
768
+ "<loc_777>": 51046,
769
+ "<loc_778>": 51047,
770
+ "<loc_779>": 51048,
771
+ "<loc_77>": 50346,
772
+ "<loc_780>": 51049,
773
+ "<loc_781>": 51050,
774
+ "<loc_782>": 51051,
775
+ "<loc_783>": 51052,
776
+ "<loc_784>": 51053,
777
+ "<loc_785>": 51054,
778
+ "<loc_786>": 51055,
779
+ "<loc_787>": 51056,
780
+ "<loc_788>": 51057,
781
+ "<loc_789>": 51058,
782
+ "<loc_78>": 50347,
783
+ "<loc_790>": 51059,
784
+ "<loc_791>": 51060,
785
+ "<loc_792>": 51061,
786
+ "<loc_793>": 51062,
787
+ "<loc_794>": 51063,
788
+ "<loc_795>": 51064,
789
+ "<loc_796>": 51065,
790
+ "<loc_797>": 51066,
791
+ "<loc_798>": 51067,
792
+ "<loc_799>": 51068,
793
+ "<loc_79>": 50348,
794
+ "<loc_7>": 50276,
795
+ "<loc_800>": 51069,
796
+ "<loc_801>": 51070,
797
+ "<loc_802>": 51071,
798
+ "<loc_803>": 51072,
799
+ "<loc_804>": 51073,
800
+ "<loc_805>": 51074,
801
+ "<loc_806>": 51075,
802
+ "<loc_807>": 51076,
803
+ "<loc_808>": 51077,
804
+ "<loc_809>": 51078,
805
+ "<loc_80>": 50349,
806
+ "<loc_810>": 51079,
807
+ "<loc_811>": 51080,
808
+ "<loc_812>": 51081,
809
+ "<loc_813>": 51082,
810
+ "<loc_814>": 51083,
811
+ "<loc_815>": 51084,
812
+ "<loc_816>": 51085,
813
+ "<loc_817>": 51086,
814
+ "<loc_818>": 51087,
815
+ "<loc_819>": 51088,
816
+ "<loc_81>": 50350,
817
+ "<loc_820>": 51089,
818
+ "<loc_821>": 51090,
819
+ "<loc_822>": 51091,
820
+ "<loc_823>": 51092,
821
+ "<loc_824>": 51093,
822
+ "<loc_825>": 51094,
823
+ "<loc_826>": 51095,
824
+ "<loc_827>": 51096,
825
+ "<loc_828>": 51097,
826
+ "<loc_829>": 51098,
827
+ "<loc_82>": 50351,
828
+ "<loc_830>": 51099,
829
+ "<loc_831>": 51100,
830
+ "<loc_832>": 51101,
831
+ "<loc_833>": 51102,
832
+ "<loc_834>": 51103,
833
+ "<loc_835>": 51104,
834
+ "<loc_836>": 51105,
835
+ "<loc_837>": 51106,
836
+ "<loc_838>": 51107,
837
+ "<loc_839>": 51108,
838
+ "<loc_83>": 50352,
839
+ "<loc_840>": 51109,
840
+ "<loc_841>": 51110,
841
+ "<loc_842>": 51111,
842
+ "<loc_843>": 51112,
843
+ "<loc_844>": 51113,
844
+ "<loc_845>": 51114,
845
+ "<loc_846>": 51115,
846
+ "<loc_847>": 51116,
847
+ "<loc_848>": 51117,
848
+ "<loc_849>": 51118,
849
+ "<loc_84>": 50353,
850
+ "<loc_850>": 51119,
851
+ "<loc_851>": 51120,
852
+ "<loc_852>": 51121,
853
+ "<loc_853>": 51122,
854
+ "<loc_854>": 51123,
855
+ "<loc_855>": 51124,
856
+ "<loc_856>": 51125,
857
+ "<loc_857>": 51126,
858
+ "<loc_858>": 51127,
859
+ "<loc_859>": 51128,
860
+ "<loc_85>": 50354,
861
+ "<loc_860>": 51129,
862
+ "<loc_861>": 51130,
863
+ "<loc_862>": 51131,
864
+ "<loc_863>": 51132,
865
+ "<loc_864>": 51133,
866
+ "<loc_865>": 51134,
867
+ "<loc_866>": 51135,
868
+ "<loc_867>": 51136,
869
+ "<loc_868>": 51137,
870
+ "<loc_869>": 51138,
871
+ "<loc_86>": 50355,
872
+ "<loc_870>": 51139,
873
+ "<loc_871>": 51140,
874
+ "<loc_872>": 51141,
875
+ "<loc_873>": 51142,
876
+ "<loc_874>": 51143,
877
+ "<loc_875>": 51144,
878
+ "<loc_876>": 51145,
879
+ "<loc_877>": 51146,
880
+ "<loc_878>": 51147,
881
+ "<loc_879>": 51148,
882
+ "<loc_87>": 50356,
883
+ "<loc_880>": 51149,
884
+ "<loc_881>": 51150,
885
+ "<loc_882>": 51151,
886
+ "<loc_883>": 51152,
887
+ "<loc_884>": 51153,
888
+ "<loc_885>": 51154,
889
+ "<loc_886>": 51155,
890
+ "<loc_887>": 51156,
891
+ "<loc_888>": 51157,
892
+ "<loc_889>": 51158,
893
+ "<loc_88>": 50357,
894
+ "<loc_890>": 51159,
895
+ "<loc_891>": 51160,
896
+ "<loc_892>": 51161,
897
+ "<loc_893>": 51162,
898
+ "<loc_894>": 51163,
899
+ "<loc_895>": 51164,
900
+ "<loc_896>": 51165,
901
+ "<loc_897>": 51166,
902
+ "<loc_898>": 51167,
903
+ "<loc_899>": 51168,
904
+ "<loc_89>": 50358,
905
+ "<loc_8>": 50277,
906
+ "<loc_900>": 51169,
907
+ "<loc_901>": 51170,
908
+ "<loc_902>": 51171,
909
+ "<loc_903>": 51172,
910
+ "<loc_904>": 51173,
911
+ "<loc_905>": 51174,
912
+ "<loc_906>": 51175,
913
+ "<loc_907>": 51176,
914
+ "<loc_908>": 51177,
915
+ "<loc_909>": 51178,
916
+ "<loc_90>": 50359,
917
+ "<loc_910>": 51179,
918
+ "<loc_911>": 51180,
919
+ "<loc_912>": 51181,
920
+ "<loc_913>": 51182,
921
+ "<loc_914>": 51183,
922
+ "<loc_915>": 51184,
923
+ "<loc_916>": 51185,
924
+ "<loc_917>": 51186,
925
+ "<loc_918>": 51187,
926
+ "<loc_919>": 51188,
927
+ "<loc_91>": 50360,
928
+ "<loc_920>": 51189,
929
+ "<loc_921>": 51190,
930
+ "<loc_922>": 51191,
931
+ "<loc_923>": 51192,
932
+ "<loc_924>": 51193,
933
+ "<loc_925>": 51194,
934
+ "<loc_926>": 51195,
935
+ "<loc_927>": 51196,
936
+ "<loc_928>": 51197,
937
+ "<loc_929>": 51198,
938
+ "<loc_92>": 50361,
939
+ "<loc_930>": 51199,
940
+ "<loc_931>": 51200,
941
+ "<loc_932>": 51201,
942
+ "<loc_933>": 51202,
943
+ "<loc_934>": 51203,
944
+ "<loc_935>": 51204,
945
+ "<loc_936>": 51205,
946
+ "<loc_937>": 51206,
947
+ "<loc_938>": 51207,
948
+ "<loc_939>": 51208,
949
+ "<loc_93>": 50362,
950
+ "<loc_940>": 51209,
951
+ "<loc_941>": 51210,
952
+ "<loc_942>": 51211,
953
+ "<loc_943>": 51212,
954
+ "<loc_944>": 51213,
955
+ "<loc_945>": 51214,
956
+ "<loc_946>": 51215,
957
+ "<loc_947>": 51216,
958
+ "<loc_948>": 51217,
959
+ "<loc_949>": 51218,
960
+ "<loc_94>": 50363,
961
+ "<loc_950>": 51219,
962
+ "<loc_951>": 51220,
963
+ "<loc_952>": 51221,
964
+ "<loc_953>": 51222,
965
+ "<loc_954>": 51223,
966
+ "<loc_955>": 51224,
967
+ "<loc_956>": 51225,
968
+ "<loc_957>": 51226,
969
+ "<loc_958>": 51227,
970
+ "<loc_959>": 51228,
971
+ "<loc_95>": 50364,
972
+ "<loc_960>": 51229,
973
+ "<loc_961>": 51230,
974
+ "<loc_962>": 51231,
975
+ "<loc_963>": 51232,
976
+ "<loc_964>": 51233,
977
+ "<loc_965>": 51234,
978
+ "<loc_966>": 51235,
979
+ "<loc_967>": 51236,
980
+ "<loc_968>": 51237,
981
+ "<loc_969>": 51238,
982
+ "<loc_96>": 50365,
983
+ "<loc_970>": 51239,
984
+ "<loc_971>": 51240,
985
+ "<loc_972>": 51241,
986
+ "<loc_973>": 51242,
987
+ "<loc_974>": 51243,
988
+ "<loc_975>": 51244,
989
+ "<loc_976>": 51245,
990
+ "<loc_977>": 51246,
991
+ "<loc_978>": 51247,
992
+ "<loc_979>": 51248,
993
+ "<loc_97>": 50366,
994
+ "<loc_980>": 51249,
995
+ "<loc_981>": 51250,
996
+ "<loc_982>": 51251,
997
+ "<loc_983>": 51252,
998
+ "<loc_984>": 51253,
999
+ "<loc_985>": 51254,
1000
+ "<loc_986>": 51255,
1001
+ "<loc_987>": 51256,
1002
+ "<loc_988>": 51257,
1003
+ "<loc_989>": 51258,
1004
+ "<loc_98>": 50367,
1005
+ "<loc_990>": 51259,
1006
+ "<loc_991>": 51260,
1007
+ "<loc_992>": 51261,
1008
+ "<loc_993>": 51262,
1009
+ "<loc_994>": 51263,
1010
+ "<loc_995>": 51264,
1011
+ "<loc_996>": 51265,
1012
+ "<loc_997>": 51266,
1013
+ "<loc_998>": 51267,
1014
+ "<loc_999>": 51268,
1015
+ "<loc_99>": 50368,
1016
+ "<loc_9>": 50278,
1017
+ "<ncap>": 51271,
1018
+ "<ocr>": 50267,
1019
+ "<od>": 50265,
1020
+ "<poly>": 51286,
1021
+ "<proposal>": 51284,
1022
+ "<region_cap>": 51280,
1023
+ "<region_to_desciption>": 51282,
1024
+ "<seg>": 51277,
1025
+ "<sep>": 51279
1026
+ }
config.json ADDED
@@ -0,0 +1,241 @@
1
+ {
2
+ "architectures": [
3
+ "Florence2ForConditionalGeneration"
4
+ ],
5
+ "auto_map": {
6
+ "AutoConfig": "microsoft/Florence-2-base-ft--configuration_florence2.Florence2Config",
7
+ "AutoModelForCausalLM": "microsoft/Florence-2-base-ft--modeling_florence2.Florence2ForConditionalGeneration"
8
+ },
9
+ "bos_token_id": 0,
10
+ "eos_token_id": 2,
11
+ "ignore_index": -100,
12
+ "is_encoder_decoder": true,
13
+ "model_type": "florence2",
14
+ "pad_token_id": 1,
15
+ "projection_dim": 768,
16
+ "quantization": {
17
+ "group_size": 64,
18
+ "bits": 4
19
+ },
20
+ "text_config": {
21
+ "_name_or_path": "",
22
+ "activation_dropout": 0.1,
23
+ "activation_function": "gelu",
24
+ "add_bias_logits": false,
25
+ "add_cross_attention": false,
26
+ "add_final_layer_norm": false,
27
+ "architectures": null,
28
+ "attention_dropout": 0.1,
29
+ "bad_words_ids": null,
30
+ "begin_suppress_tokens": null,
31
+ "bos_token_id": 0,
32
+ "chunk_size_feed_forward": 0,
33
+ "classif_dropout": 0.1,
34
+ "classifier_dropout": 0.0,
35
+ "cross_attention_hidden_size": null,
36
+ "d_model": 768,
37
+ "decoder_attention_heads": 12,
38
+ "decoder_ffn_dim": 3072,
39
+ "decoder_layerdrop": 0.0,
40
+ "decoder_layers": 6,
41
+ "decoder_start_token_id": 2,
42
+ "diversity_penalty": 0.0,
43
+ "do_sample": false,
44
+ "dropout": 0.1,
45
+ "early_stopping": true,
46
+ "encoder_attention_heads": 12,
47
+ "encoder_ffn_dim": 3072,
48
+ "encoder_layerdrop": 0.0,
49
+ "encoder_layers": 6,
50
+ "encoder_no_repeat_ngram_size": 0,
51
+ "eos_token_id": 2,
52
+ "exponential_decay_length_penalty": null,
53
+ "finetuning_task": null,
54
+ "forced_bos_token_id": 0,
55
+ "forced_eos_token_id": 2,
56
+ "gradient_checkpointing": false,
57
+ "id2label": {
58
+ "0": "LABEL_0",
59
+ "1": "LABEL_1",
60
+ "2": "LABEL_2"
61
+ },
62
+ "init_std": 0.02,
63
+ "is_decoder": false,
64
+ "is_encoder_decoder": true,
65
+ "label2id": {
66
+ "LABEL_0": 0,
67
+ "LABEL_1": 1,
68
+ "LABEL_2": 2
69
+ },
70
+ "length_penalty": 1.0,
71
+ "max_length": 20,
72
+ "max_position_embeddings": 1024,
73
+ "min_length": 0,
74
+ "model_type": "florence2_language",
75
+ "no_repeat_ngram_size": 3,
76
+ "normalize_before": false,
77
+ "num_beam_groups": 1,
78
+ "num_beams": 3,
79
+ "num_hidden_layers": 6,
80
+ "num_return_sequences": 1,
81
+ "output_attentions": false,
82
+ "output_hidden_states": false,
83
+ "output_scores": false,
84
+ "pad_token_id": 1,
85
+ "prefix": null,
86
+ "problem_type": null,
87
+ "pruned_heads": {},
88
+ "remove_invalid_values": false,
89
+ "repetition_penalty": 1.0,
90
+ "return_dict": true,
91
+ "return_dict_in_generate": false,
92
+ "scale_embedding": false,
93
+ "sep_token_id": null,
94
+ "suppress_tokens": null,
95
+ "task_specific_params": null,
96
+ "temperature": 1.0,
97
+ "tf_legacy_loss": false,
98
+ "tie_encoder_decoder": false,
99
+ "tie_word_embeddings": true,
100
+ "tokenizer_class": null,
101
+ "top_k": 50,
102
+ "top_p": 1.0,
103
+ "torch_dtype": null,
104
+ "torchscript": false,
105
+ "typical_p": 1.0,
106
+ "use_bfloat16": false,
107
+ "use_cache": true,
108
+ "vocab_size": 51289
109
+ },
110
+ "torch_dtype": "float32",
111
+ "transformers_version": "4.45.2",
112
+ "vision_config": {
113
+ "_name_or_path": "",
114
+ "add_cross_attention": false,
115
+ "architectures": null,
116
+ "bad_words_ids": null,
117
+ "begin_suppress_tokens": null,
118
+ "bos_token_id": null,
119
+ "chunk_size_feed_forward": 0,
120
+ "cross_attention_hidden_size": null,
121
+ "decoder_start_token_id": null,
122
+ "depths": [
123
+ 1,
124
+ 1,
125
+ 9,
126
+ 1
127
+ ],
128
+ "dim_embed": [
129
+ 128,
130
+ 256,
131
+ 512,
132
+ 1024
133
+ ],
134
+ "diversity_penalty": 0.0,
135
+ "do_sample": false,
136
+ "drop_path_rate": 0.1,
137
+ "early_stopping": false,
138
+ "enable_checkpoint": false,
139
+ "encoder_no_repeat_ngram_size": 0,
140
+ "eos_token_id": null,
141
+ "exponential_decay_length_penalty": null,
142
+ "finetuning_task": null,
143
+ "forced_bos_token_id": null,
144
+ "forced_eos_token_id": null,
145
+ "id2label": {
146
+ "0": "LABEL_0",
147
+ "1": "LABEL_1"
148
+ },
149
+ "image_feature_source": [
150
+ "spatial_avg_pool",
151
+ "temporal_avg_pool"
152
+ ],
153
+ "image_pos_embed": {
154
+ "max_pos_embeddings": 50,
155
+ "type": "learned_abs_2d"
156
+ },
157
+ "is_decoder": false,
158
+ "is_encoder_decoder": false,
159
+ "label2id": {
160
+ "LABEL_0": 0,
161
+ "LABEL_1": 1
162
+ },
163
+ "length_penalty": 1.0,
164
+ "max_length": 20,
165
+ "min_length": 0,
166
+ "model_type": "",
167
+ "no_repeat_ngram_size": 0,
168
+ "num_beam_groups": 1,
169
+ "num_beams": 1,
170
+ "num_groups": [
171
+ 4,
172
+ 8,
173
+ 16,
174
+ 32
175
+ ],
176
+ "num_heads": [
177
+ 4,
178
+ 8,
179
+ 16,
180
+ 32
181
+ ],
182
+ "num_return_sequences": 1,
183
+ "output_attentions": false,
184
+ "output_hidden_states": false,
185
+ "output_scores": false,
186
+ "pad_token_id": null,
187
+ "patch_padding": [
188
+ 3,
189
+ 1,
190
+ 1,
191
+ 1
192
+ ],
193
+ "patch_prenorm": [
194
+ false,
195
+ true,
196
+ true,
197
+ true
198
+ ],
199
+ "patch_size": [
200
+ 7,
201
+ 3,
202
+ 3,
203
+ 3
204
+ ],
205
+ "patch_stride": [
206
+ 4,
207
+ 2,
208
+ 2,
209
+ 2
210
+ ],
211
+ "prefix": null,
212
+ "problem_type": null,
213
+ "projection_dim": 768,
214
+ "pruned_heads": {},
215
+ "remove_invalid_values": false,
216
+ "repetition_penalty": 1.0,
217
+ "return_dict": true,
218
+ "return_dict_in_generate": false,
219
+ "sep_token_id": null,
220
+ "suppress_tokens": null,
221
+ "task_specific_params": null,
222
+ "temperature": 1.0,
223
+ "tf_legacy_loss": false,
224
+ "tie_encoder_decoder": false,
225
+ "tie_word_embeddings": true,
226
+ "tokenizer_class": null,
227
+ "top_k": 50,
228
+ "top_p": 1.0,
229
+ "torch_dtype": null,
230
+ "torchscript": false,
231
+ "typical_p": 1.0,
232
+ "use_bfloat16": false,
233
+ "visual_temporal_embedding": {
234
+ "max_temporal_embeddings": 100,
235
+ "type": "COSINE"
236
+ },
237
+ "window_size": 12,
238
+ "hidden_size": 768
239
+ },
240
+ "vocab_size": 51289
241
+ }
merges.txt ADDED
The diff for this file is too large to render. See raw diff
 
model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:904321eedba8cf7bb1d7258e08d4efb6b8b2f62dc154d530b7ff663f240df533
+ size 163524711
model.safetensors.index.json ADDED
@@ -0,0 +1,1069 @@
1
+ {
2
+ "metadata": {
3
+ "total_size": 163387296
4
+ },
5
+ "weight_map": {
6
+ "image_pos_embed.column_embeddings.biases": "model.safetensors",
7
+ "image_pos_embed.column_embeddings.scales": "model.safetensors",
8
+ "image_pos_embed.column_embeddings.weight": "model.safetensors",
9
+ "image_pos_embed.row_embeddings.biases": "model.safetensors",
10
+ "image_pos_embed.row_embeddings.scales": "model.safetensors",
11
+ "image_pos_embed.row_embeddings.weight": "model.safetensors",
12
+ "image_proj_norm.bias": "model.safetensors",
13
+ "image_proj_norm.weight": "model.safetensors",
14
+ "image_projection": "model.safetensors",
15
+ "language_model.lm_head.biases": "model.safetensors",
16
+ "language_model.lm_head.scales": "model.safetensors",
17
+ "language_model.lm_head.weight": "model.safetensors",
18
+ "language_model.model.decoder.embed_positions.biases": "model.safetensors",
19
+ "language_model.model.decoder.embed_positions.scales": "model.safetensors",
20
+ "language_model.model.decoder.embed_positions.weight": "model.safetensors",
21
+ "language_model.model.decoder.layernorm_embedding.bias": "model.safetensors",
22
+ "language_model.model.decoder.layernorm_embedding.weight": "model.safetensors",
23
+ "language_model.model.decoder.layers.0.encoder_attn.k_proj.bias": "model.safetensors",
24
+ "language_model.model.decoder.layers.0.encoder_attn.k_proj.biases": "model.safetensors",
25
+ "language_model.model.decoder.layers.0.encoder_attn.k_proj.scales": "model.safetensors",
26
+ "language_model.model.decoder.layers.0.encoder_attn.k_proj.weight": "model.safetensors",
27
+ "language_model.model.decoder.layers.0.encoder_attn.out_proj.bias": "model.safetensors",
28
+ "language_model.model.decoder.layers.0.encoder_attn.out_proj.biases": "model.safetensors",
29
+ "language_model.model.decoder.layers.0.encoder_attn.out_proj.scales": "model.safetensors",
30
+ "language_model.model.decoder.layers.0.encoder_attn.out_proj.weight": "model.safetensors",
31
+ "language_model.model.decoder.layers.0.encoder_attn.q_proj.bias": "model.safetensors",
32
+ "language_model.model.decoder.layers.0.encoder_attn.q_proj.biases": "model.safetensors",
33
+ "language_model.model.decoder.layers.0.encoder_attn.q_proj.scales": "model.safetensors",
34
+ "language_model.model.decoder.layers.0.encoder_attn.q_proj.weight": "model.safetensors",
35
+ "language_model.model.decoder.layers.0.encoder_attn.v_proj.bias": "model.safetensors",
36
+ "language_model.model.decoder.layers.0.encoder_attn.v_proj.biases": "model.safetensors",
37
+ "language_model.model.decoder.layers.0.encoder_attn.v_proj.scales": "model.safetensors",
38
+ "language_model.model.decoder.layers.0.encoder_attn.v_proj.weight": "model.safetensors",
39
+ "language_model.model.decoder.layers.0.encoder_attn_layer_norm.bias": "model.safetensors",
40
+ "language_model.model.decoder.layers.0.encoder_attn_layer_norm.weight": "model.safetensors",
41
+ "language_model.model.decoder.layers.0.fc1.bias": "model.safetensors",
42
+ "language_model.model.decoder.layers.0.fc1.biases": "model.safetensors",
43
+ "language_model.model.decoder.layers.0.fc1.scales": "model.safetensors",
44
+ "language_model.model.decoder.layers.0.fc1.weight": "model.safetensors",
45
+ "language_model.model.decoder.layers.0.fc2.bias": "model.safetensors",
46
+ "language_model.model.decoder.layers.0.fc2.biases": "model.safetensors",
47
+ "language_model.model.decoder.layers.0.fc2.scales": "model.safetensors",
48
+ "language_model.model.decoder.layers.0.fc2.weight": "model.safetensors",
49
+ "language_model.model.decoder.layers.0.final_layer_norm.bias": "model.safetensors",
50
+ "language_model.model.decoder.layers.0.final_layer_norm.weight": "model.safetensors",
51
+ "language_model.model.decoder.layers.0.self_attn.k_proj.bias": "model.safetensors",
52
+ "language_model.model.decoder.layers.0.self_attn.k_proj.biases": "model.safetensors",
53
+ "language_model.model.decoder.layers.0.self_attn.k_proj.scales": "model.safetensors",
54
+ "language_model.model.decoder.layers.0.self_attn.k_proj.weight": "model.safetensors",
55
+ "language_model.model.decoder.layers.0.self_attn.out_proj.bias": "model.safetensors",
56
+ "language_model.model.decoder.layers.0.self_attn.out_proj.biases": "model.safetensors",
57
+ "language_model.model.decoder.layers.0.self_attn.out_proj.scales": "model.safetensors",
58
+ "language_model.model.decoder.layers.0.self_attn.out_proj.weight": "model.safetensors",
59
+ "language_model.model.decoder.layers.0.self_attn.q_proj.bias": "model.safetensors",
60
+ "language_model.model.decoder.layers.0.self_attn.q_proj.biases": "model.safetensors",
61
+ "language_model.model.decoder.layers.0.self_attn.q_proj.scales": "model.safetensors",
62
+ "language_model.model.decoder.layers.0.self_attn.q_proj.weight": "model.safetensors",
63
+ "language_model.model.decoder.layers.0.self_attn.v_proj.bias": "model.safetensors",
64
+ "language_model.model.decoder.layers.0.self_attn.v_proj.biases": "model.safetensors",
65
+ "language_model.model.decoder.layers.0.self_attn.v_proj.scales": "model.safetensors",
66
+ "language_model.model.decoder.layers.0.self_attn.v_proj.weight": "model.safetensors",
67
+ "language_model.model.decoder.layers.0.self_attn_layer_norm.bias": "model.safetensors",
68
+ "language_model.model.decoder.layers.0.self_attn_layer_norm.weight": "model.safetensors",
69
+ "language_model.model.decoder.layers.1.encoder_attn.k_proj.bias": "model.safetensors",
70
+ "language_model.model.decoder.layers.1.encoder_attn.k_proj.biases": "model.safetensors",
71
+ "language_model.model.decoder.layers.1.encoder_attn.k_proj.scales": "model.safetensors",
72
+ "language_model.model.decoder.layers.1.encoder_attn.k_proj.weight": "model.safetensors",
73
+ "language_model.model.decoder.layers.1.encoder_attn.out_proj.bias": "model.safetensors",
74
+ "language_model.model.decoder.layers.1.encoder_attn.out_proj.biases": "model.safetensors",
75
+ "language_model.model.decoder.layers.1.encoder_attn.out_proj.scales": "model.safetensors",
76
+ "language_model.model.decoder.layers.1.encoder_attn.out_proj.weight": "model.safetensors",
77
+ "language_model.model.decoder.layers.1.encoder_attn.q_proj.bias": "model.safetensors",
78
+ "language_model.model.decoder.layers.1.encoder_attn.q_proj.biases": "model.safetensors",
79
+ "language_model.model.decoder.layers.1.encoder_attn.q_proj.scales": "model.safetensors",
80
+ "language_model.model.decoder.layers.1.encoder_attn.q_proj.weight": "model.safetensors",
81
+ "language_model.model.decoder.layers.1.encoder_attn.v_proj.bias": "model.safetensors",
82
+ "language_model.model.decoder.layers.1.encoder_attn.v_proj.biases": "model.safetensors",
83
+ "language_model.model.decoder.layers.1.encoder_attn.v_proj.scales": "model.safetensors",
84
+ "language_model.model.decoder.layers.1.encoder_attn.v_proj.weight": "model.safetensors",
85
+ "language_model.model.decoder.layers.1.encoder_attn_layer_norm.bias": "model.safetensors",
86
+ "language_model.model.decoder.layers.1.encoder_attn_layer_norm.weight": "model.safetensors",
87
+ "language_model.model.decoder.layers.1.fc1.bias": "model.safetensors",
88
+ "language_model.model.decoder.layers.1.fc1.biases": "model.safetensors",
89
+ "language_model.model.decoder.layers.1.fc1.scales": "model.safetensors",
90
+ "language_model.model.decoder.layers.1.fc1.weight": "model.safetensors",
91
+ "language_model.model.decoder.layers.1.fc2.bias": "model.safetensors",
92
+ "language_model.model.decoder.layers.1.fc2.biases": "model.safetensors",
93
+ "language_model.model.decoder.layers.1.fc2.scales": "model.safetensors",
94
+ "language_model.model.decoder.layers.1.fc2.weight": "model.safetensors",
95
+ "language_model.model.decoder.layers.1.final_layer_norm.bias": "model.safetensors",
96
+ "language_model.model.decoder.layers.1.final_layer_norm.weight": "model.safetensors",
97
+ "language_model.model.decoder.layers.1.self_attn.k_proj.bias": "model.safetensors",
98
+ "language_model.model.decoder.layers.1.self_attn.k_proj.biases": "model.safetensors",
99
+ "language_model.model.decoder.layers.1.self_attn.k_proj.scales": "model.safetensors",
100
+ "language_model.model.decoder.layers.1.self_attn.k_proj.weight": "model.safetensors",
101
+ "language_model.model.decoder.layers.1.self_attn.out_proj.bias": "model.safetensors",
102
+ "language_model.model.decoder.layers.1.self_attn.out_proj.biases": "model.safetensors",
103
+ "language_model.model.decoder.layers.1.self_attn.out_proj.scales": "model.safetensors",
104
+ "language_model.model.decoder.layers.1.self_attn.out_proj.weight": "model.safetensors",
105
+ "language_model.model.decoder.layers.1.self_attn.q_proj.bias": "model.safetensors",
106
+ "language_model.model.decoder.layers.1.self_attn.q_proj.biases": "model.safetensors",
107
+ "language_model.model.decoder.layers.1.self_attn.q_proj.scales": "model.safetensors",
108
+ "language_model.model.decoder.layers.1.self_attn.q_proj.weight": "model.safetensors",
109
+ "language_model.model.decoder.layers.1.self_attn.v_proj.bias": "model.safetensors",
110
+ "language_model.model.decoder.layers.1.self_attn.v_proj.biases": "model.safetensors",
111
+ "language_model.model.decoder.layers.1.self_attn.v_proj.scales": "model.safetensors",
112
+ "language_model.model.decoder.layers.1.self_attn.v_proj.weight": "model.safetensors",
113
+ "language_model.model.decoder.layers.1.self_attn_layer_norm.bias": "model.safetensors",
114
+ "language_model.model.decoder.layers.1.self_attn_layer_norm.weight": "model.safetensors",
115
+ "language_model.model.decoder.layers.2.encoder_attn.k_proj.bias": "model.safetensors",
116
+ "language_model.model.decoder.layers.2.encoder_attn.k_proj.biases": "model.safetensors",
117
+ "language_model.model.decoder.layers.2.encoder_attn.k_proj.scales": "model.safetensors",
118
+ "language_model.model.decoder.layers.2.encoder_attn.k_proj.weight": "model.safetensors",
119
+ "language_model.model.decoder.layers.2.encoder_attn.out_proj.bias": "model.safetensors",
120
+ "language_model.model.decoder.layers.2.encoder_attn.out_proj.biases": "model.safetensors",
121
+ "language_model.model.decoder.layers.2.encoder_attn.out_proj.scales": "model.safetensors",
122
+ "language_model.model.decoder.layers.2.encoder_attn.out_proj.weight": "model.safetensors",
123
+ "language_model.model.decoder.layers.2.encoder_attn.q_proj.bias": "model.safetensors",
124
+ "language_model.model.decoder.layers.2.encoder_attn.q_proj.biases": "model.safetensors",
125
+ "language_model.model.decoder.layers.2.encoder_attn.q_proj.scales": "model.safetensors",
126
+ "language_model.model.decoder.layers.2.encoder_attn.q_proj.weight": "model.safetensors",
127
+ "language_model.model.decoder.layers.2.encoder_attn.v_proj.bias": "model.safetensors",
128
+ "language_model.model.decoder.layers.2.encoder_attn.v_proj.biases": "model.safetensors",
129
+ "language_model.model.decoder.layers.2.encoder_attn.v_proj.scales": "model.safetensors",
130
+ "language_model.model.decoder.layers.2.encoder_attn.v_proj.weight": "model.safetensors",
131
+ "language_model.model.decoder.layers.2.encoder_attn_layer_norm.bias": "model.safetensors",
132
+ "language_model.model.decoder.layers.2.encoder_attn_layer_norm.weight": "model.safetensors",
133
+ "language_model.model.decoder.layers.2.fc1.bias": "model.safetensors",
134
+ "language_model.model.decoder.layers.2.fc1.biases": "model.safetensors",
135
+ "language_model.model.decoder.layers.2.fc1.scales": "model.safetensors",
136
+ "language_model.model.decoder.layers.2.fc1.weight": "model.safetensors",
137
+ "language_model.model.decoder.layers.2.fc2.bias": "model.safetensors",
138
+ "language_model.model.decoder.layers.2.fc2.biases": "model.safetensors",
139
+ "language_model.model.decoder.layers.2.fc2.scales": "model.safetensors",
140
+ "language_model.model.decoder.layers.2.fc2.weight": "model.safetensors",
141
+ "language_model.model.decoder.layers.2.final_layer_norm.bias": "model.safetensors",
142
+ "language_model.model.decoder.layers.2.final_layer_norm.weight": "model.safetensors",
143
+ "language_model.model.decoder.layers.2.self_attn.k_proj.bias": "model.safetensors",
144
+ "language_model.model.decoder.layers.2.self_attn.k_proj.biases": "model.safetensors",
145
+ "language_model.model.decoder.layers.2.self_attn.k_proj.scales": "model.safetensors",
146
+ "language_model.model.decoder.layers.2.self_attn.k_proj.weight": "model.safetensors",
147
+ "language_model.model.decoder.layers.2.self_attn.out_proj.bias": "model.safetensors",
148
+ "language_model.model.decoder.layers.2.self_attn.out_proj.biases": "model.safetensors",
149
+ "language_model.model.decoder.layers.2.self_attn.out_proj.scales": "model.safetensors",
150
+ "language_model.model.decoder.layers.2.self_attn.out_proj.weight": "model.safetensors",
151
+ "language_model.model.decoder.layers.2.self_attn.q_proj.bias": "model.safetensors",
152
+ "language_model.model.decoder.layers.2.self_attn.q_proj.biases": "model.safetensors",
153
+ "language_model.model.decoder.layers.2.self_attn.q_proj.scales": "model.safetensors",
154
+ "language_model.model.decoder.layers.2.self_attn.q_proj.weight": "model.safetensors",
155
+ "language_model.model.decoder.layers.2.self_attn.v_proj.bias": "model.safetensors",
156
+ "language_model.model.decoder.layers.2.self_attn.v_proj.biases": "model.safetensors",
157
+ "language_model.model.decoder.layers.2.self_attn.v_proj.scales": "model.safetensors",
158
+ "language_model.model.decoder.layers.2.self_attn.v_proj.weight": "model.safetensors",
159
+ "language_model.model.decoder.layers.2.self_attn_layer_norm.bias": "model.safetensors",
160
+ "language_model.model.decoder.layers.2.self_attn_layer_norm.weight": "model.safetensors",
161
+ "language_model.model.decoder.layers.3.encoder_attn.k_proj.bias": "model.safetensors",
162
+ "language_model.model.decoder.layers.3.encoder_attn.k_proj.biases": "model.safetensors",
163
+ "language_model.model.decoder.layers.3.encoder_attn.k_proj.scales": "model.safetensors",
164
+ "language_model.model.decoder.layers.3.encoder_attn.k_proj.weight": "model.safetensors",
165
+ "language_model.model.decoder.layers.3.encoder_attn.out_proj.bias": "model.safetensors",
166
+ "language_model.model.decoder.layers.3.encoder_attn.out_proj.biases": "model.safetensors",
167
+ "language_model.model.decoder.layers.3.encoder_attn.out_proj.scales": "model.safetensors",
168
+ "language_model.model.decoder.layers.3.encoder_attn.out_proj.weight": "model.safetensors",
169
+ "language_model.model.decoder.layers.3.encoder_attn.q_proj.bias": "model.safetensors",
170
+ "language_model.model.decoder.layers.3.encoder_attn.q_proj.biases": "model.safetensors",
171
+ "language_model.model.decoder.layers.3.encoder_attn.q_proj.scales": "model.safetensors",
172
+ "language_model.model.decoder.layers.3.encoder_attn.q_proj.weight": "model.safetensors",
173
+ "language_model.model.decoder.layers.3.encoder_attn.v_proj.bias": "model.safetensors",
174
+ "language_model.model.decoder.layers.3.encoder_attn.v_proj.biases": "model.safetensors",
175
+ "language_model.model.decoder.layers.3.encoder_attn.v_proj.scales": "model.safetensors",
176
+ "language_model.model.decoder.layers.3.encoder_attn.v_proj.weight": "model.safetensors",
177
+ "language_model.model.decoder.layers.3.encoder_attn_layer_norm.bias": "model.safetensors",
178
+ "language_model.model.decoder.layers.3.encoder_attn_layer_norm.weight": "model.safetensors",
179
+ "language_model.model.decoder.layers.3.fc1.bias": "model.safetensors",
180
+ "language_model.model.decoder.layers.3.fc1.biases": "model.safetensors",
181
+ "language_model.model.decoder.layers.3.fc1.scales": "model.safetensors",
182
+ "language_model.model.decoder.layers.3.fc1.weight": "model.safetensors",
183
+ "language_model.model.decoder.layers.3.fc2.bias": "model.safetensors",
184
+ "language_model.model.decoder.layers.3.fc2.biases": "model.safetensors",
185
+ "language_model.model.decoder.layers.3.fc2.scales": "model.safetensors",
186
+ "language_model.model.decoder.layers.3.fc2.weight": "model.safetensors",
187
+ "language_model.model.decoder.layers.3.final_layer_norm.bias": "model.safetensors",
188
+ "language_model.model.decoder.layers.3.final_layer_norm.weight": "model.safetensors",
189
+ "language_model.model.decoder.layers.3.self_attn.k_proj.bias": "model.safetensors",
190
+ "language_model.model.decoder.layers.3.self_attn.k_proj.biases": "model.safetensors",
191
+ "language_model.model.decoder.layers.3.self_attn.k_proj.scales": "model.safetensors",
192
+ "language_model.model.decoder.layers.3.self_attn.k_proj.weight": "model.safetensors",
193
+ "language_model.model.decoder.layers.3.self_attn.out_proj.bias": "model.safetensors",
194
+ "language_model.model.decoder.layers.3.self_attn.out_proj.biases": "model.safetensors",
195
+ "language_model.model.decoder.layers.3.self_attn.out_proj.scales": "model.safetensors",
196
+ "language_model.model.decoder.layers.3.self_attn.out_proj.weight": "model.safetensors",
197
+ "language_model.model.decoder.layers.3.self_attn.q_proj.bias": "model.safetensors",
198
+ "language_model.model.decoder.layers.3.self_attn.q_proj.biases": "model.safetensors",
199
+ "language_model.model.decoder.layers.3.self_attn.q_proj.scales": "model.safetensors",
200
+ "language_model.model.decoder.layers.3.self_attn.q_proj.weight": "model.safetensors",
201
+ "language_model.model.decoder.layers.3.self_attn.v_proj.bias": "model.safetensors",
202
+ "language_model.model.decoder.layers.3.self_attn.v_proj.biases": "model.safetensors",
203
+ "language_model.model.decoder.layers.3.self_attn.v_proj.scales": "model.safetensors",
204
+ "language_model.model.decoder.layers.3.self_attn.v_proj.weight": "model.safetensors",
205
+ "language_model.model.decoder.layers.3.self_attn_layer_norm.bias": "model.safetensors",
206
+ "language_model.model.decoder.layers.3.self_attn_layer_norm.weight": "model.safetensors",
207
+ "language_model.model.decoder.layers.4.encoder_attn.k_proj.bias": "model.safetensors",
208
+ "language_model.model.decoder.layers.4.encoder_attn.k_proj.biases": "model.safetensors",
209
+ "language_model.model.decoder.layers.4.encoder_attn.k_proj.scales": "model.safetensors",
210
+ "language_model.model.decoder.layers.4.encoder_attn.k_proj.weight": "model.safetensors",
211
+ "language_model.model.decoder.layers.4.encoder_attn.out_proj.bias": "model.safetensors",
212
+ "language_model.model.decoder.layers.4.encoder_attn.out_proj.biases": "model.safetensors",
213
+ "language_model.model.decoder.layers.4.encoder_attn.out_proj.scales": "model.safetensors",
214
+ "language_model.model.decoder.layers.4.encoder_attn.out_proj.weight": "model.safetensors",
215
+ "language_model.model.decoder.layers.4.encoder_attn.q_proj.bias": "model.safetensors",
216
+ "language_model.model.decoder.layers.4.encoder_attn.q_proj.biases": "model.safetensors",
217
+ "language_model.model.decoder.layers.4.encoder_attn.q_proj.scales": "model.safetensors",
218
+ "language_model.model.decoder.layers.4.encoder_attn.q_proj.weight": "model.safetensors",
219
+ "language_model.model.decoder.layers.4.encoder_attn.v_proj.bias": "model.safetensors",
220
+ "language_model.model.decoder.layers.4.encoder_attn.v_proj.biases": "model.safetensors",
221
+ "language_model.model.decoder.layers.4.encoder_attn.v_proj.scales": "model.safetensors",
222
+ "language_model.model.decoder.layers.4.encoder_attn.v_proj.weight": "model.safetensors",
223
+ "language_model.model.decoder.layers.4.encoder_attn_layer_norm.bias": "model.safetensors",
224
+ "language_model.model.decoder.layers.4.encoder_attn_layer_norm.weight": "model.safetensors",
225
+ "language_model.model.decoder.layers.4.fc1.bias": "model.safetensors",
226
+ "language_model.model.decoder.layers.4.fc1.biases": "model.safetensors",
227
+ "language_model.model.decoder.layers.4.fc1.scales": "model.safetensors",
228
+ "language_model.model.decoder.layers.4.fc1.weight": "model.safetensors",
229
+ "language_model.model.decoder.layers.4.fc2.bias": "model.safetensors",
230
+ "language_model.model.decoder.layers.4.fc2.biases": "model.safetensors",
231
+ "language_model.model.decoder.layers.4.fc2.scales": "model.safetensors",
232
+ "language_model.model.decoder.layers.4.fc2.weight": "model.safetensors",
233
+ "language_model.model.decoder.layers.4.final_layer_norm.bias": "model.safetensors",
234
+ "language_model.model.decoder.layers.4.final_layer_norm.weight": "model.safetensors",
235
+ "language_model.model.decoder.layers.4.self_attn.k_proj.bias": "model.safetensors",
236
+ "language_model.model.decoder.layers.4.self_attn.k_proj.biases": "model.safetensors",
237
+ "language_model.model.decoder.layers.4.self_attn.k_proj.scales": "model.safetensors",
238
+ "language_model.model.decoder.layers.4.self_attn.k_proj.weight": "model.safetensors",
239
+ "language_model.model.decoder.layers.4.self_attn.out_proj.bias": "model.safetensors",
240
+ "language_model.model.decoder.layers.4.self_attn.out_proj.biases": "model.safetensors",
241
+ "language_model.model.decoder.layers.4.self_attn.out_proj.scales": "model.safetensors",
242
+ "language_model.model.decoder.layers.4.self_attn.out_proj.weight": "model.safetensors",
243
+ "language_model.model.decoder.layers.4.self_attn.q_proj.bias": "model.safetensors",
244
+ "language_model.model.decoder.layers.4.self_attn.q_proj.biases": "model.safetensors",
245
+ "language_model.model.decoder.layers.4.self_attn.q_proj.scales": "model.safetensors",
246
+ "language_model.model.decoder.layers.4.self_attn.q_proj.weight": "model.safetensors",
247
+ "language_model.model.decoder.layers.4.self_attn.v_proj.bias": "model.safetensors",
248
+ "language_model.model.decoder.layers.4.self_attn.v_proj.biases": "model.safetensors",
249
+ "language_model.model.decoder.layers.4.self_attn.v_proj.scales": "model.safetensors",
250
+ "language_model.model.decoder.layers.4.self_attn.v_proj.weight": "model.safetensors",
251
+ "language_model.model.decoder.layers.4.self_attn_layer_norm.bias": "model.safetensors",
252
+ "language_model.model.decoder.layers.4.self_attn_layer_norm.weight": "model.safetensors",
253
+ "language_model.model.decoder.layers.5.encoder_attn.k_proj.bias": "model.safetensors",
254
+ "language_model.model.decoder.layers.5.encoder_attn.k_proj.biases": "model.safetensors",
255
+ "language_model.model.decoder.layers.5.encoder_attn.k_proj.scales": "model.safetensors",
256
+ "language_model.model.decoder.layers.5.encoder_attn.k_proj.weight": "model.safetensors",
257
+ "language_model.model.decoder.layers.5.encoder_attn.out_proj.bias": "model.safetensors",
258
+ "language_model.model.decoder.layers.5.encoder_attn.out_proj.biases": "model.safetensors",
259
+ "language_model.model.decoder.layers.5.encoder_attn.out_proj.scales": "model.safetensors",
260
+ "language_model.model.decoder.layers.5.encoder_attn.out_proj.weight": "model.safetensors",
261
+ "language_model.model.decoder.layers.5.encoder_attn.q_proj.bias": "model.safetensors",
262
+ "language_model.model.decoder.layers.5.encoder_attn.q_proj.biases": "model.safetensors",
263
+ "language_model.model.decoder.layers.5.encoder_attn.q_proj.scales": "model.safetensors",
264
+ "language_model.model.decoder.layers.5.encoder_attn.q_proj.weight": "model.safetensors",
265
+ "language_model.model.decoder.layers.5.encoder_attn.v_proj.bias": "model.safetensors",
266
+ "language_model.model.decoder.layers.5.encoder_attn.v_proj.biases": "model.safetensors",
267
+ "language_model.model.decoder.layers.5.encoder_attn.v_proj.scales": "model.safetensors",
268
+ "language_model.model.decoder.layers.5.encoder_attn.v_proj.weight": "model.safetensors",
269
+ "language_model.model.decoder.layers.5.encoder_attn_layer_norm.bias": "model.safetensors",
270
+ "language_model.model.decoder.layers.5.encoder_attn_layer_norm.weight": "model.safetensors",
271
+ "language_model.model.decoder.layers.5.fc1.bias": "model.safetensors",
272
+ "language_model.model.decoder.layers.5.fc1.biases": "model.safetensors",
273
+ "language_model.model.decoder.layers.5.fc1.scales": "model.safetensors",
274
+ "language_model.model.decoder.layers.5.fc1.weight": "model.safetensors",
275
+ "language_model.model.decoder.layers.5.fc2.bias": "model.safetensors",
276
+ "language_model.model.decoder.layers.5.fc2.biases": "model.safetensors",
277
+ "language_model.model.decoder.layers.5.fc2.scales": "model.safetensors",
278
+ "language_model.model.decoder.layers.5.fc2.weight": "model.safetensors",
279
+ "language_model.model.decoder.layers.5.final_layer_norm.bias": "model.safetensors",
280
+ "language_model.model.decoder.layers.5.final_layer_norm.weight": "model.safetensors",
281
+ "language_model.model.decoder.layers.5.self_attn.k_proj.bias": "model.safetensors",
282
+ "language_model.model.decoder.layers.5.self_attn.k_proj.biases": "model.safetensors",
283
+ "language_model.model.decoder.layers.5.self_attn.k_proj.scales": "model.safetensors",
284
+ "language_model.model.decoder.layers.5.self_attn.k_proj.weight": "model.safetensors",
285
+ "language_model.model.decoder.layers.5.self_attn.out_proj.bias": "model.safetensors",
286
+ "language_model.model.decoder.layers.5.self_attn.out_proj.biases": "model.safetensors",
287
+ "language_model.model.decoder.layers.5.self_attn.out_proj.scales": "model.safetensors",
288
+ "language_model.model.decoder.layers.5.self_attn.out_proj.weight": "model.safetensors",
289
+ "language_model.model.decoder.layers.5.self_attn.q_proj.bias": "model.safetensors",
290
+ "language_model.model.decoder.layers.5.self_attn.q_proj.biases": "model.safetensors",
291
+ "language_model.model.decoder.layers.5.self_attn.q_proj.scales": "model.safetensors",
292
+ "language_model.model.decoder.layers.5.self_attn.q_proj.weight": "model.safetensors",
293
+ "language_model.model.decoder.layers.5.self_attn.v_proj.bias": "model.safetensors",
294
+ "language_model.model.decoder.layers.5.self_attn.v_proj.biases": "model.safetensors",
295
+ "language_model.model.decoder.layers.5.self_attn.v_proj.scales": "model.safetensors",
296
+ "language_model.model.decoder.layers.5.self_attn.v_proj.weight": "model.safetensors",
297
+ "language_model.model.decoder.layers.5.self_attn_layer_norm.bias": "model.safetensors",
298
+ "language_model.model.decoder.layers.5.self_attn_layer_norm.weight": "model.safetensors",
299
+ "language_model.model.encoder.embed_positions.biases": "model.safetensors",
300
+ "language_model.model.encoder.embed_positions.scales": "model.safetensors",
301
+ "language_model.model.encoder.embed_positions.weight": "model.safetensors",
302
+ "language_model.model.encoder.layernorm_embedding.bias": "model.safetensors",
303
+ "language_model.model.encoder.layernorm_embedding.weight": "model.safetensors",
304
+ "language_model.model.encoder.layers.0.fc1.bias": "model.safetensors",
305
+ "language_model.model.encoder.layers.0.fc1.biases": "model.safetensors",
306
+ "language_model.model.encoder.layers.0.fc1.scales": "model.safetensors",
307
+ "language_model.model.encoder.layers.0.fc1.weight": "model.safetensors",
308
+ "language_model.model.encoder.layers.0.fc2.bias": "model.safetensors",
309
+ "language_model.model.encoder.layers.0.fc2.biases": "model.safetensors",
310
+ "language_model.model.encoder.layers.0.fc2.scales": "model.safetensors",
311
+ "language_model.model.encoder.layers.0.fc2.weight": "model.safetensors",
312
+ "language_model.model.encoder.layers.0.final_layer_norm.bias": "model.safetensors",
313
+ "language_model.model.encoder.layers.0.final_layer_norm.weight": "model.safetensors",
314
+ "language_model.model.encoder.layers.0.self_attn.k_proj.bias": "model.safetensors",
315
+ "language_model.model.encoder.layers.0.self_attn.k_proj.biases": "model.safetensors",
316
+ "language_model.model.encoder.layers.0.self_attn.k_proj.scales": "model.safetensors",
317
+ "language_model.model.encoder.layers.0.self_attn.k_proj.weight": "model.safetensors",
318
+ "language_model.model.encoder.layers.0.self_attn.out_proj.bias": "model.safetensors",
319
+ "language_model.model.encoder.layers.0.self_attn.out_proj.biases": "model.safetensors",
320
+ "language_model.model.encoder.layers.0.self_attn.out_proj.scales": "model.safetensors",
321
+ "language_model.model.encoder.layers.0.self_attn.out_proj.weight": "model.safetensors",
322
+ "language_model.model.encoder.layers.0.self_attn.q_proj.bias": "model.safetensors",
323
+ "language_model.model.encoder.layers.0.self_attn.q_proj.biases": "model.safetensors",
324
+ "language_model.model.encoder.layers.0.self_attn.q_proj.scales": "model.safetensors",
325
+ "language_model.model.encoder.layers.0.self_attn.q_proj.weight": "model.safetensors",
326
+ "language_model.model.encoder.layers.0.self_attn.v_proj.bias": "model.safetensors",
327
+ "language_model.model.encoder.layers.0.self_attn.v_proj.biases": "model.safetensors",
328
+ "language_model.model.encoder.layers.0.self_attn.v_proj.scales": "model.safetensors",
329
+ "language_model.model.encoder.layers.0.self_attn.v_proj.weight": "model.safetensors",
330
+ "language_model.model.encoder.layers.0.self_attn_layer_norm.bias": "model.safetensors",
331
+ "language_model.model.encoder.layers.0.self_attn_layer_norm.weight": "model.safetensors",
332
+ "language_model.model.encoder.layers.1.fc1.bias": "model.safetensors",
333
+ "language_model.model.encoder.layers.1.fc1.biases": "model.safetensors",
334
+ "language_model.model.encoder.layers.1.fc1.scales": "model.safetensors",
335
+ "language_model.model.encoder.layers.1.fc1.weight": "model.safetensors",
336
+ "language_model.model.encoder.layers.1.fc2.bias": "model.safetensors",
337
+ "language_model.model.encoder.layers.1.fc2.biases": "model.safetensors",
338
+ "language_model.model.encoder.layers.1.fc2.scales": "model.safetensors",
339
+ "language_model.model.encoder.layers.1.fc2.weight": "model.safetensors",
340
+ "language_model.model.encoder.layers.1.final_layer_norm.bias": "model.safetensors",
341
+ "language_model.model.encoder.layers.1.final_layer_norm.weight": "model.safetensors",
342
+ "language_model.model.encoder.layers.1.self_attn.k_proj.bias": "model.safetensors",
343
+ "language_model.model.encoder.layers.1.self_attn.k_proj.biases": "model.safetensors",
344
+ "language_model.model.encoder.layers.1.self_attn.k_proj.scales": "model.safetensors",
345
+ "language_model.model.encoder.layers.1.self_attn.k_proj.weight": "model.safetensors",
346
+ "language_model.model.encoder.layers.1.self_attn.out_proj.bias": "model.safetensors",
347
+ "language_model.model.encoder.layers.1.self_attn.out_proj.biases": "model.safetensors",
348
+ "language_model.model.encoder.layers.1.self_attn.out_proj.scales": "model.safetensors",
349
+ "language_model.model.encoder.layers.1.self_attn.out_proj.weight": "model.safetensors",
350
+ "language_model.model.encoder.layers.1.self_attn.q_proj.bias": "model.safetensors",
351
+ "language_model.model.encoder.layers.1.self_attn.q_proj.biases": "model.safetensors",
352
+ "language_model.model.encoder.layers.1.self_attn.q_proj.scales": "model.safetensors",
353
+ "language_model.model.encoder.layers.1.self_attn.q_proj.weight": "model.safetensors",
354
+ "language_model.model.encoder.layers.1.self_attn.v_proj.bias": "model.safetensors",
355
+ "language_model.model.encoder.layers.1.self_attn.v_proj.biases": "model.safetensors",
356
+ "language_model.model.encoder.layers.1.self_attn.v_proj.scales": "model.safetensors",
357
+ "language_model.model.encoder.layers.1.self_attn.v_proj.weight": "model.safetensors",
358
+ "language_model.model.encoder.layers.1.self_attn_layer_norm.bias": "model.safetensors",
359
+ "language_model.model.encoder.layers.1.self_attn_layer_norm.weight": "model.safetensors",
360
+ "language_model.model.encoder.layers.2.fc1.bias": "model.safetensors",
361
+ "language_model.model.encoder.layers.2.fc1.biases": "model.safetensors",
362
+ "language_model.model.encoder.layers.2.fc1.scales": "model.safetensors",
363
+ "language_model.model.encoder.layers.2.fc1.weight": "model.safetensors",
364
+ "language_model.model.encoder.layers.2.fc2.bias": "model.safetensors",
365
+ "language_model.model.encoder.layers.2.fc2.biases": "model.safetensors",
366
+ "language_model.model.encoder.layers.2.fc2.scales": "model.safetensors",
367
+ "language_model.model.encoder.layers.2.fc2.weight": "model.safetensors",
368
+ "language_model.model.encoder.layers.2.final_layer_norm.bias": "model.safetensors",
369
+ "language_model.model.encoder.layers.2.final_layer_norm.weight": "model.safetensors",
370
+ "language_model.model.encoder.layers.2.self_attn.k_proj.bias": "model.safetensors",
371
+ "language_model.model.encoder.layers.2.self_attn.k_proj.biases": "model.safetensors",
372
+ "language_model.model.encoder.layers.2.self_attn.k_proj.scales": "model.safetensors",
373
+ "language_model.model.encoder.layers.2.self_attn.k_proj.weight": "model.safetensors",
374
+ "language_model.model.encoder.layers.2.self_attn.out_proj.bias": "model.safetensors",
375
+ "language_model.model.encoder.layers.2.self_attn.out_proj.biases": "model.safetensors",
376
+ "language_model.model.encoder.layers.2.self_attn.out_proj.scales": "model.safetensors",
377
+ "language_model.model.encoder.layers.2.self_attn.out_proj.weight": "model.safetensors",
378
+ "language_model.model.encoder.layers.2.self_attn.q_proj.bias": "model.safetensors",
379
+ "language_model.model.encoder.layers.2.self_attn.q_proj.biases": "model.safetensors",
380
+ "language_model.model.encoder.layers.2.self_attn.q_proj.scales": "model.safetensors",
381
+ "language_model.model.encoder.layers.2.self_attn.q_proj.weight": "model.safetensors",
382
+ "language_model.model.encoder.layers.2.self_attn.v_proj.bias": "model.safetensors",
383
+ "language_model.model.encoder.layers.2.self_attn.v_proj.biases": "model.safetensors",
384
+ "language_model.model.encoder.layers.2.self_attn.v_proj.scales": "model.safetensors",
385
+ "language_model.model.encoder.layers.2.self_attn.v_proj.weight": "model.safetensors",
386
+ "language_model.model.encoder.layers.2.self_attn_layer_norm.bias": "model.safetensors",
387
+ "language_model.model.encoder.layers.2.self_attn_layer_norm.weight": "model.safetensors",
388
+ "language_model.model.encoder.layers.3.fc1.bias": "model.safetensors",
389
+ "language_model.model.encoder.layers.3.fc1.biases": "model.safetensors",
390
+ "language_model.model.encoder.layers.3.fc1.scales": "model.safetensors",
391
+ "language_model.model.encoder.layers.3.fc1.weight": "model.safetensors",
392
+ "language_model.model.encoder.layers.3.fc2.bias": "model.safetensors",
393
+ "language_model.model.encoder.layers.3.fc2.biases": "model.safetensors",
394
+ "language_model.model.encoder.layers.3.fc2.scales": "model.safetensors",
395
+ "language_model.model.encoder.layers.3.fc2.weight": "model.safetensors",
396
+ "language_model.model.encoder.layers.3.final_layer_norm.bias": "model.safetensors",
397
+ "language_model.model.encoder.layers.3.final_layer_norm.weight": "model.safetensors",
398
+ "language_model.model.encoder.layers.3.self_attn.k_proj.bias": "model.safetensors",
399
+ "language_model.model.encoder.layers.3.self_attn.k_proj.biases": "model.safetensors",
400
+ "language_model.model.encoder.layers.3.self_attn.k_proj.scales": "model.safetensors",
401
+ "language_model.model.encoder.layers.3.self_attn.k_proj.weight": "model.safetensors",
402
+ "language_model.model.encoder.layers.3.self_attn.out_proj.bias": "model.safetensors",
403
+ "language_model.model.encoder.layers.3.self_attn.out_proj.biases": "model.safetensors",
404
+ "language_model.model.encoder.layers.3.self_attn.out_proj.scales": "model.safetensors",
405
+ "language_model.model.encoder.layers.3.self_attn.out_proj.weight": "model.safetensors",
406
+ "language_model.model.encoder.layers.3.self_attn.q_proj.bias": "model.safetensors",
407
+ "language_model.model.encoder.layers.3.self_attn.q_proj.biases": "model.safetensors",
408
+ "language_model.model.encoder.layers.3.self_attn.q_proj.scales": "model.safetensors",
409
+ "language_model.model.encoder.layers.3.self_attn.q_proj.weight": "model.safetensors",
410
+ "language_model.model.encoder.layers.3.self_attn.v_proj.bias": "model.safetensors",
411
+ "language_model.model.encoder.layers.3.self_attn.v_proj.biases": "model.safetensors",
412
+ "language_model.model.encoder.layers.3.self_attn.v_proj.scales": "model.safetensors",
413
+ "language_model.model.encoder.layers.3.self_attn.v_proj.weight": "model.safetensors",
414
+ "language_model.model.encoder.layers.3.self_attn_layer_norm.bias": "model.safetensors",
415
+ "language_model.model.encoder.layers.3.self_attn_layer_norm.weight": "model.safetensors",
416
+ "language_model.model.encoder.layers.4.fc1.bias": "model.safetensors",
417
+ "language_model.model.encoder.layers.4.fc1.biases": "model.safetensors",
418
+ "language_model.model.encoder.layers.4.fc1.scales": "model.safetensors",
419
+ "language_model.model.encoder.layers.4.fc1.weight": "model.safetensors",
420
+ "language_model.model.encoder.layers.4.fc2.bias": "model.safetensors",
421
+ "language_model.model.encoder.layers.4.fc2.biases": "model.safetensors",
422
+ "language_model.model.encoder.layers.4.fc2.scales": "model.safetensors",
423
+ "language_model.model.encoder.layers.4.fc2.weight": "model.safetensors",
424
+ "language_model.model.encoder.layers.4.final_layer_norm.bias": "model.safetensors",
425
+ "language_model.model.encoder.layers.4.final_layer_norm.weight": "model.safetensors",
426
+ "language_model.model.encoder.layers.4.self_attn.k_proj.bias": "model.safetensors",
427
+ "language_model.model.encoder.layers.4.self_attn.k_proj.biases": "model.safetensors",
428
+ "language_model.model.encoder.layers.4.self_attn.k_proj.scales": "model.safetensors",
429
+ "language_model.model.encoder.layers.4.self_attn.k_proj.weight": "model.safetensors",
430
+ "language_model.model.encoder.layers.4.self_attn.out_proj.bias": "model.safetensors",
431
+ "language_model.model.encoder.layers.4.self_attn.out_proj.biases": "model.safetensors",
432
+ "language_model.model.encoder.layers.4.self_attn.out_proj.scales": "model.safetensors",
433
+ "language_model.model.encoder.layers.4.self_attn.out_proj.weight": "model.safetensors",
434
+ "language_model.model.encoder.layers.4.self_attn.q_proj.bias": "model.safetensors",
435
+ "language_model.model.encoder.layers.4.self_attn.q_proj.biases": "model.safetensors",
436
+ "language_model.model.encoder.layers.4.self_attn.q_proj.scales": "model.safetensors",
437
+ "language_model.model.encoder.layers.4.self_attn.q_proj.weight": "model.safetensors",
438
+ "language_model.model.encoder.layers.4.self_attn.v_proj.bias": "model.safetensors",
439
+ "language_model.model.encoder.layers.4.self_attn.v_proj.biases": "model.safetensors",
440
+ "language_model.model.encoder.layers.4.self_attn.v_proj.scales": "model.safetensors",
441
+ "language_model.model.encoder.layers.4.self_attn.v_proj.weight": "model.safetensors",
442
+ "language_model.model.encoder.layers.4.self_attn_layer_norm.bias": "model.safetensors",
443
+ "language_model.model.encoder.layers.4.self_attn_layer_norm.weight": "model.safetensors",
444
+ "language_model.model.encoder.layers.5.fc1.bias": "model.safetensors",
445
+ "language_model.model.encoder.layers.5.fc1.biases": "model.safetensors",
446
+ "language_model.model.encoder.layers.5.fc1.scales": "model.safetensors",
447
+ "language_model.model.encoder.layers.5.fc1.weight": "model.safetensors",
448
+ "language_model.model.encoder.layers.5.fc2.bias": "model.safetensors",
449
+ "language_model.model.encoder.layers.5.fc2.biases": "model.safetensors",
450
+ "language_model.model.encoder.layers.5.fc2.scales": "model.safetensors",
451
+ "language_model.model.encoder.layers.5.fc2.weight": "model.safetensors",
452
+ "language_model.model.encoder.layers.5.final_layer_norm.bias": "model.safetensors",
453
+ "language_model.model.encoder.layers.5.final_layer_norm.weight": "model.safetensors",
454
+ "language_model.model.encoder.layers.5.self_attn.k_proj.bias": "model.safetensors",
455
+ "language_model.model.encoder.layers.5.self_attn.k_proj.biases": "model.safetensors",
456
+ "language_model.model.encoder.layers.5.self_attn.k_proj.scales": "model.safetensors",
457
+ "language_model.model.encoder.layers.5.self_attn.k_proj.weight": "model.safetensors",
458
+ "language_model.model.encoder.layers.5.self_attn.out_proj.bias": "model.safetensors",
459
+ "language_model.model.encoder.layers.5.self_attn.out_proj.biases": "model.safetensors",
460
+ "language_model.model.encoder.layers.5.self_attn.out_proj.scales": "model.safetensors",
461
+ "language_model.model.encoder.layers.5.self_attn.out_proj.weight": "model.safetensors",
462
+ "language_model.model.encoder.layers.5.self_attn.q_proj.bias": "model.safetensors",
463
+ "language_model.model.encoder.layers.5.self_attn.q_proj.biases": "model.safetensors",
464
+ "language_model.model.encoder.layers.5.self_attn.q_proj.scales": "model.safetensors",
465
+ "language_model.model.encoder.layers.5.self_attn.q_proj.weight": "model.safetensors",
466
+ "language_model.model.encoder.layers.5.self_attn.v_proj.bias": "model.safetensors",
467
+ "language_model.model.encoder.layers.5.self_attn.v_proj.biases": "model.safetensors",
468
+ "language_model.model.encoder.layers.5.self_attn.v_proj.scales": "model.safetensors",
469
+ "language_model.model.encoder.layers.5.self_attn.v_proj.weight": "model.safetensors",
470
+ "language_model.model.encoder.layers.5.self_attn_layer_norm.bias": "model.safetensors",
471
+ "language_model.model.encoder.layers.5.self_attn_layer_norm.weight": "model.safetensors",
472
+ "language_model.model.shared.biases": "model.safetensors",
473
+ "language_model.model.shared.scales": "model.safetensors",
474
+ "language_model.model.shared.weight": "model.safetensors",
475
+ "vision_tower.blocks.0.0.channel_block.channel_attn.fn.proj.bias": "model.safetensors",
476
+ "vision_tower.blocks.0.0.channel_block.channel_attn.fn.proj.biases": "model.safetensors",
477
+ "vision_tower.blocks.0.0.channel_block.channel_attn.fn.proj.scales": "model.safetensors",
478
+ "vision_tower.blocks.0.0.channel_block.channel_attn.fn.proj.weight": "model.safetensors",
479
+ "vision_tower.blocks.0.0.channel_block.channel_attn.fn.qkv.bias": "model.safetensors",
480
+ "vision_tower.blocks.0.0.channel_block.channel_attn.fn.qkv.biases": "model.safetensors",
481
+ "vision_tower.blocks.0.0.channel_block.channel_attn.fn.qkv.scales": "model.safetensors",
482
+ "vision_tower.blocks.0.0.channel_block.channel_attn.fn.qkv.weight": "model.safetensors",
483
+ "vision_tower.blocks.0.0.channel_block.channel_attn.norm.bias": "model.safetensors",
484
+ "vision_tower.blocks.0.0.channel_block.channel_attn.norm.weight": "model.safetensors",
485
+ "vision_tower.blocks.0.0.channel_block.conv1.fn.dw.bias": "model.safetensors",
486
+ "vision_tower.blocks.0.0.channel_block.conv1.fn.dw.weight": "model.safetensors",
487
+ "vision_tower.blocks.0.0.channel_block.conv2.fn.dw.bias": "model.safetensors",
488
+ "vision_tower.blocks.0.0.channel_block.conv2.fn.dw.weight": "model.safetensors",
489
+ "vision_tower.blocks.0.0.channel_block.ffn.fn.net.fc1.bias": "model.safetensors",
490
+ "vision_tower.blocks.0.0.channel_block.ffn.fn.net.fc1.biases": "model.safetensors",
491
+ "vision_tower.blocks.0.0.channel_block.ffn.fn.net.fc1.scales": "model.safetensors",
492
+ "vision_tower.blocks.0.0.channel_block.ffn.fn.net.fc1.weight": "model.safetensors",
493
+ "vision_tower.blocks.0.0.channel_block.ffn.fn.net.fc2.bias": "model.safetensors",
494
+ "vision_tower.blocks.0.0.channel_block.ffn.fn.net.fc2.biases": "model.safetensors",
495
+ "vision_tower.blocks.0.0.channel_block.ffn.fn.net.fc2.scales": "model.safetensors",
496
+ "vision_tower.blocks.0.0.channel_block.ffn.fn.net.fc2.weight": "model.safetensors",
497
+ "vision_tower.blocks.0.0.channel_block.ffn.norm.bias": "model.safetensors",
498
+ "vision_tower.blocks.0.0.channel_block.ffn.norm.weight": "model.safetensors",
499
+ "vision_tower.blocks.0.0.spatial_block.conv1.fn.dw.bias": "model.safetensors",
500
+ "vision_tower.blocks.0.0.spatial_block.conv1.fn.dw.weight": "model.safetensors",
501
+ "vision_tower.blocks.0.0.spatial_block.conv2.fn.dw.bias": "model.safetensors",
502
+ "vision_tower.blocks.0.0.spatial_block.conv2.fn.dw.weight": "model.safetensors",
503
+ "vision_tower.blocks.0.0.spatial_block.ffn.fn.net.fc1.bias": "model.safetensors",
504
+ "vision_tower.blocks.0.0.spatial_block.ffn.fn.net.fc1.biases": "model.safetensors",
505
+ "vision_tower.blocks.0.0.spatial_block.ffn.fn.net.fc1.scales": "model.safetensors",
506
+ "vision_tower.blocks.0.0.spatial_block.ffn.fn.net.fc1.weight": "model.safetensors",
507
+ "vision_tower.blocks.0.0.spatial_block.ffn.fn.net.fc2.bias": "model.safetensors",
508
+ "vision_tower.blocks.0.0.spatial_block.ffn.fn.net.fc2.biases": "model.safetensors",
509
+ "vision_tower.blocks.0.0.spatial_block.ffn.fn.net.fc2.scales": "model.safetensors",
510
+ "vision_tower.blocks.0.0.spatial_block.ffn.fn.net.fc2.weight": "model.safetensors",
511
+ "vision_tower.blocks.0.0.spatial_block.ffn.norm.bias": "model.safetensors",
512
+ "vision_tower.blocks.0.0.spatial_block.ffn.norm.weight": "model.safetensors",
513
+ "vision_tower.blocks.0.0.spatial_block.window_attn.fn.proj.bias": "model.safetensors",
514
+ "vision_tower.blocks.0.0.spatial_block.window_attn.fn.proj.biases": "model.safetensors",
515
+ "vision_tower.blocks.0.0.spatial_block.window_attn.fn.proj.scales": "model.safetensors",
516
+ "vision_tower.blocks.0.0.spatial_block.window_attn.fn.proj.weight": "model.safetensors",
517
+ "vision_tower.blocks.0.0.spatial_block.window_attn.fn.qkv.bias": "model.safetensors",
518
+ "vision_tower.blocks.0.0.spatial_block.window_attn.fn.qkv.biases": "model.safetensors",
519
+ "vision_tower.blocks.0.0.spatial_block.window_attn.fn.qkv.scales": "model.safetensors",
520
+ "vision_tower.blocks.0.0.spatial_block.window_attn.fn.qkv.weight": "model.safetensors",
521
+ "vision_tower.blocks.0.0.spatial_block.window_attn.norm.bias": "model.safetensors",
522
+ "vision_tower.blocks.0.0.spatial_block.window_attn.norm.weight": "model.safetensors",
523
+ "vision_tower.blocks.1.0.channel_block.channel_attn.fn.proj.bias": "model.safetensors",
524
+ "vision_tower.blocks.1.0.channel_block.channel_attn.fn.proj.biases": "model.safetensors",
525
+ "vision_tower.blocks.1.0.channel_block.channel_attn.fn.proj.scales": "model.safetensors",
526
+ "vision_tower.blocks.1.0.channel_block.channel_attn.fn.proj.weight": "model.safetensors",
527
+ "vision_tower.blocks.1.0.channel_block.channel_attn.fn.qkv.bias": "model.safetensors",
528
+ "vision_tower.blocks.1.0.channel_block.channel_attn.fn.qkv.biases": "model.safetensors",
529
+ "vision_tower.blocks.1.0.channel_block.channel_attn.fn.qkv.scales": "model.safetensors",
530
+ "vision_tower.blocks.1.0.channel_block.channel_attn.fn.qkv.weight": "model.safetensors",
531
+ "vision_tower.blocks.1.0.channel_block.channel_attn.norm.bias": "model.safetensors",
532
+ "vision_tower.blocks.1.0.channel_block.channel_attn.norm.weight": "model.safetensors",
533
+ "vision_tower.blocks.1.0.channel_block.conv1.fn.dw.bias": "model.safetensors",
534
+ "vision_tower.blocks.1.0.channel_block.conv1.fn.dw.weight": "model.safetensors",
535
+ "vision_tower.blocks.1.0.channel_block.conv2.fn.dw.bias": "model.safetensors",
536
+ "vision_tower.blocks.1.0.channel_block.conv2.fn.dw.weight": "model.safetensors",
537
+ "vision_tower.blocks.1.0.channel_block.ffn.fn.net.fc1.bias": "model.safetensors",
538
+ "vision_tower.blocks.1.0.channel_block.ffn.fn.net.fc1.biases": "model.safetensors",
539
+ "vision_tower.blocks.1.0.channel_block.ffn.fn.net.fc1.scales": "model.safetensors",
540
+ "vision_tower.blocks.1.0.channel_block.ffn.fn.net.fc1.weight": "model.safetensors",
541
+ "vision_tower.blocks.1.0.channel_block.ffn.fn.net.fc2.bias": "model.safetensors",
542
+ "vision_tower.blocks.1.0.channel_block.ffn.fn.net.fc2.biases": "model.safetensors",
543
+ "vision_tower.blocks.1.0.channel_block.ffn.fn.net.fc2.scales": "model.safetensors",
544
+ "vision_tower.blocks.1.0.channel_block.ffn.fn.net.fc2.weight": "model.safetensors",
545
+ "vision_tower.blocks.1.0.channel_block.ffn.norm.bias": "model.safetensors",
546
+ "vision_tower.blocks.1.0.channel_block.ffn.norm.weight": "model.safetensors",
547
+ "vision_tower.blocks.1.0.spatial_block.conv1.fn.dw.bias": "model.safetensors",
548
+ "vision_tower.blocks.1.0.spatial_block.conv1.fn.dw.weight": "model.safetensors",
549
+ "vision_tower.blocks.1.0.spatial_block.conv2.fn.dw.bias": "model.safetensors",
550
+ "vision_tower.blocks.1.0.spatial_block.conv2.fn.dw.weight": "model.safetensors",
551
+ "vision_tower.blocks.1.0.spatial_block.ffn.fn.net.fc1.bias": "model.safetensors",
552
+ "vision_tower.blocks.1.0.spatial_block.ffn.fn.net.fc1.biases": "model.safetensors",
553
+ "vision_tower.blocks.1.0.spatial_block.ffn.fn.net.fc1.scales": "model.safetensors",
554
+ "vision_tower.blocks.1.0.spatial_block.ffn.fn.net.fc1.weight": "model.safetensors",
555
+ "vision_tower.blocks.1.0.spatial_block.ffn.fn.net.fc2.bias": "model.safetensors",
556
+ "vision_tower.blocks.1.0.spatial_block.ffn.fn.net.fc2.biases": "model.safetensors",
557
+ "vision_tower.blocks.1.0.spatial_block.ffn.fn.net.fc2.scales": "model.safetensors",
558
+ "vision_tower.blocks.1.0.spatial_block.ffn.fn.net.fc2.weight": "model.safetensors",
559
+ "vision_tower.blocks.1.0.spatial_block.ffn.norm.bias": "model.safetensors",
560
+ "vision_tower.blocks.1.0.spatial_block.ffn.norm.weight": "model.safetensors",
561
+ "vision_tower.blocks.1.0.spatial_block.window_attn.fn.proj.bias": "model.safetensors",
562
+ "vision_tower.blocks.1.0.spatial_block.window_attn.fn.proj.biases": "model.safetensors",
563
+ "vision_tower.blocks.1.0.spatial_block.window_attn.fn.proj.scales": "model.safetensors",
564
+ "vision_tower.blocks.1.0.spatial_block.window_attn.fn.proj.weight": "model.safetensors",
565
+ "vision_tower.blocks.1.0.spatial_block.window_attn.fn.qkv.bias": "model.safetensors",
566
+ "vision_tower.blocks.1.0.spatial_block.window_attn.fn.qkv.biases": "model.safetensors",
567
+ "vision_tower.blocks.1.0.spatial_block.window_attn.fn.qkv.scales": "model.safetensors",
568
+ "vision_tower.blocks.1.0.spatial_block.window_attn.fn.qkv.weight": "model.safetensors",
569
+ "vision_tower.blocks.1.0.spatial_block.window_attn.norm.bias": "model.safetensors",
570
+ "vision_tower.blocks.1.0.spatial_block.window_attn.norm.weight": "model.safetensors",
571
+ "vision_tower.blocks.2.0.channel_block.channel_attn.fn.proj.bias": "model.safetensors",
572
+ "vision_tower.blocks.2.0.channel_block.channel_attn.fn.proj.biases": "model.safetensors",
573
+ "vision_tower.blocks.2.0.channel_block.channel_attn.fn.proj.scales": "model.safetensors",
574
+ "vision_tower.blocks.2.0.channel_block.channel_attn.fn.proj.weight": "model.safetensors",
575
+ "vision_tower.blocks.2.0.channel_block.channel_attn.fn.qkv.bias": "model.safetensors",
576
+ "vision_tower.blocks.2.0.channel_block.channel_attn.fn.qkv.biases": "model.safetensors",
577
+ "vision_tower.blocks.2.0.channel_block.channel_attn.fn.qkv.scales": "model.safetensors",
578
+ "vision_tower.blocks.2.0.channel_block.channel_attn.fn.qkv.weight": "model.safetensors",
579
+ "vision_tower.blocks.2.0.channel_block.channel_attn.norm.bias": "model.safetensors",
580
+ "vision_tower.blocks.2.0.channel_block.channel_attn.norm.weight": "model.safetensors",
581
+ "vision_tower.blocks.2.0.channel_block.conv1.fn.dw.bias": "model.safetensors",
582
+ "vision_tower.blocks.2.0.channel_block.conv1.fn.dw.weight": "model.safetensors",
583
+ "vision_tower.blocks.2.0.channel_block.conv2.fn.dw.bias": "model.safetensors",
584
+ "vision_tower.blocks.2.0.channel_block.conv2.fn.dw.weight": "model.safetensors",
585
+ "vision_tower.blocks.2.0.channel_block.ffn.fn.net.fc1.bias": "model.safetensors",
586
+ "vision_tower.blocks.2.0.channel_block.ffn.fn.net.fc1.biases": "model.safetensors",
587
+ "vision_tower.blocks.2.0.channel_block.ffn.fn.net.fc1.scales": "model.safetensors",
588
+ "vision_tower.blocks.2.0.channel_block.ffn.fn.net.fc1.weight": "model.safetensors",
589
+ "vision_tower.blocks.2.0.channel_block.ffn.fn.net.fc2.bias": "model.safetensors",
590
+ "vision_tower.blocks.2.0.channel_block.ffn.fn.net.fc2.biases": "model.safetensors",
591
+ "vision_tower.blocks.2.0.channel_block.ffn.fn.net.fc2.scales": "model.safetensors",
592
+ "vision_tower.blocks.2.0.channel_block.ffn.fn.net.fc2.weight": "model.safetensors",
593
+ "vision_tower.blocks.2.0.channel_block.ffn.norm.bias": "model.safetensors",
594
+ "vision_tower.blocks.2.0.channel_block.ffn.norm.weight": "model.safetensors",
595
+ "vision_tower.blocks.2.0.spatial_block.conv1.fn.dw.bias": "model.safetensors",
596
+ "vision_tower.blocks.2.0.spatial_block.conv1.fn.dw.weight": "model.safetensors",
597
+ "vision_tower.blocks.2.0.spatial_block.conv2.fn.dw.bias": "model.safetensors",
598
+ "vision_tower.blocks.2.0.spatial_block.conv2.fn.dw.weight": "model.safetensors",
599
+ "vision_tower.blocks.2.0.spatial_block.ffn.fn.net.fc1.bias": "model.safetensors",
600
+ "vision_tower.blocks.2.0.spatial_block.ffn.fn.net.fc1.biases": "model.safetensors",
601
+ "vision_tower.blocks.2.0.spatial_block.ffn.fn.net.fc1.scales": "model.safetensors",
602
+ "vision_tower.blocks.2.0.spatial_block.ffn.fn.net.fc1.weight": "model.safetensors",
603
+ "vision_tower.blocks.2.0.spatial_block.ffn.fn.net.fc2.bias": "model.safetensors",
604
+ "vision_tower.blocks.2.0.spatial_block.ffn.fn.net.fc2.biases": "model.safetensors",
605
+ "vision_tower.blocks.2.0.spatial_block.ffn.fn.net.fc2.scales": "model.safetensors",
606
+ "vision_tower.blocks.2.0.spatial_block.ffn.fn.net.fc2.weight": "model.safetensors",
607
+ "vision_tower.blocks.2.0.spatial_block.ffn.norm.bias": "model.safetensors",
608
+ "vision_tower.blocks.2.0.spatial_block.ffn.norm.weight": "model.safetensors",
609
+ "vision_tower.blocks.2.0.spatial_block.window_attn.fn.proj.bias": "model.safetensors",
610
+ "vision_tower.blocks.2.0.spatial_block.window_attn.fn.proj.biases": "model.safetensors",
611
+ "vision_tower.blocks.2.0.spatial_block.window_attn.fn.proj.scales": "model.safetensors",
612
+ "vision_tower.blocks.2.0.spatial_block.window_attn.fn.proj.weight": "model.safetensors",
613
+ "vision_tower.blocks.2.0.spatial_block.window_attn.fn.qkv.bias": "model.safetensors",
614
+ "vision_tower.blocks.2.0.spatial_block.window_attn.fn.qkv.biases": "model.safetensors",
615
+ "vision_tower.blocks.2.0.spatial_block.window_attn.fn.qkv.scales": "model.safetensors",
616
+ "vision_tower.blocks.2.0.spatial_block.window_attn.fn.qkv.weight": "model.safetensors",
617
+ "vision_tower.blocks.2.0.spatial_block.window_attn.norm.bias": "model.safetensors",
618
+ "vision_tower.blocks.2.0.spatial_block.window_attn.norm.weight": "model.safetensors",
619
+ "vision_tower.blocks.2.1.channel_block.channel_attn.fn.proj.bias": "model.safetensors",
620
+ "vision_tower.blocks.2.1.channel_block.channel_attn.fn.proj.biases": "model.safetensors",
621
+ "vision_tower.blocks.2.1.channel_block.channel_attn.fn.proj.scales": "model.safetensors",
622
+ "vision_tower.blocks.2.1.channel_block.channel_attn.fn.proj.weight": "model.safetensors",
623
+ "vision_tower.blocks.2.1.channel_block.channel_attn.fn.qkv.bias": "model.safetensors",
624
+ "vision_tower.blocks.2.1.channel_block.channel_attn.fn.qkv.biases": "model.safetensors",
625
+ "vision_tower.blocks.2.1.channel_block.channel_attn.fn.qkv.scales": "model.safetensors",
626
+ "vision_tower.blocks.2.1.channel_block.channel_attn.fn.qkv.weight": "model.safetensors",
627
+ "vision_tower.blocks.2.1.channel_block.channel_attn.norm.bias": "model.safetensors",
628
+ "vision_tower.blocks.2.1.channel_block.channel_attn.norm.weight": "model.safetensors",
629
+ "vision_tower.blocks.2.1.channel_block.conv1.fn.dw.bias": "model.safetensors",
630
+ "vision_tower.blocks.2.1.channel_block.conv1.fn.dw.weight": "model.safetensors",
631
+ "vision_tower.blocks.2.1.channel_block.conv2.fn.dw.bias": "model.safetensors",
632
+ "vision_tower.blocks.2.1.channel_block.conv2.fn.dw.weight": "model.safetensors",
633
+ "vision_tower.blocks.2.1.channel_block.ffn.fn.net.fc1.bias": "model.safetensors",
634
+ "vision_tower.blocks.2.1.channel_block.ffn.fn.net.fc1.biases": "model.safetensors",
635
+ "vision_tower.blocks.2.1.channel_block.ffn.fn.net.fc1.scales": "model.safetensors",
636
+ "vision_tower.blocks.2.1.channel_block.ffn.fn.net.fc1.weight": "model.safetensors",
637
+ "vision_tower.blocks.2.1.channel_block.ffn.fn.net.fc2.bias": "model.safetensors",
638
+ "vision_tower.blocks.2.1.channel_block.ffn.fn.net.fc2.biases": "model.safetensors",
639
+ "vision_tower.blocks.2.1.channel_block.ffn.fn.net.fc2.scales": "model.safetensors",
640
+ "vision_tower.blocks.2.1.channel_block.ffn.fn.net.fc2.weight": "model.safetensors",
641
+ "vision_tower.blocks.2.1.channel_block.ffn.norm.bias": "model.safetensors",
642
+ "vision_tower.blocks.2.1.channel_block.ffn.norm.weight": "model.safetensors",
643
+ "vision_tower.blocks.2.1.spatial_block.conv1.fn.dw.bias": "model.safetensors",
644
+ "vision_tower.blocks.2.1.spatial_block.conv1.fn.dw.weight": "model.safetensors",
645
+ "vision_tower.blocks.2.1.spatial_block.conv2.fn.dw.bias": "model.safetensors",
646
+ "vision_tower.blocks.2.1.spatial_block.conv2.fn.dw.weight": "model.safetensors",
647
+ "vision_tower.blocks.2.1.spatial_block.ffn.fn.net.fc1.bias": "model.safetensors",
648
+ "vision_tower.blocks.2.1.spatial_block.ffn.fn.net.fc1.biases": "model.safetensors",
649
+ "vision_tower.blocks.2.1.spatial_block.ffn.fn.net.fc1.scales": "model.safetensors",
650
+ "vision_tower.blocks.2.1.spatial_block.ffn.fn.net.fc1.weight": "model.safetensors",
651
+ "vision_tower.blocks.2.1.spatial_block.ffn.fn.net.fc2.bias": "model.safetensors",
652
+ "vision_tower.blocks.2.1.spatial_block.ffn.fn.net.fc2.biases": "model.safetensors",
653
+ "vision_tower.blocks.2.1.spatial_block.ffn.fn.net.fc2.scales": "model.safetensors",
654
+ "vision_tower.blocks.2.1.spatial_block.ffn.fn.net.fc2.weight": "model.safetensors",
655
+ "vision_tower.blocks.2.1.spatial_block.ffn.norm.bias": "model.safetensors",
656
+ "vision_tower.blocks.2.1.spatial_block.ffn.norm.weight": "model.safetensors",
657
+ "vision_tower.blocks.2.1.spatial_block.window_attn.fn.proj.bias": "model.safetensors",
658
+ "vision_tower.blocks.2.1.spatial_block.window_attn.fn.proj.biases": "model.safetensors",
659
+ "vision_tower.blocks.2.1.spatial_block.window_attn.fn.proj.scales": "model.safetensors",
660
+ "vision_tower.blocks.2.1.spatial_block.window_attn.fn.proj.weight": "model.safetensors",
661
+ "vision_tower.blocks.2.1.spatial_block.window_attn.fn.qkv.bias": "model.safetensors",
662
+ "vision_tower.blocks.2.1.spatial_block.window_attn.fn.qkv.biases": "model.safetensors",
663
+ "vision_tower.blocks.2.1.spatial_block.window_attn.fn.qkv.scales": "model.safetensors",
664
+ "vision_tower.blocks.2.1.spatial_block.window_attn.fn.qkv.weight": "model.safetensors",
665
+ "vision_tower.blocks.2.1.spatial_block.window_attn.norm.bias": "model.safetensors",
666
+ "vision_tower.blocks.2.1.spatial_block.window_attn.norm.weight": "model.safetensors",
667
+ "vision_tower.blocks.2.2.channel_block.channel_attn.fn.proj.bias": "model.safetensors",
668
+ "vision_tower.blocks.2.2.channel_block.channel_attn.fn.proj.biases": "model.safetensors",
669
+ "vision_tower.blocks.2.2.channel_block.channel_attn.fn.proj.scales": "model.safetensors",
670
+ "vision_tower.blocks.2.2.channel_block.channel_attn.fn.proj.weight": "model.safetensors",
671
+ "vision_tower.blocks.2.2.channel_block.channel_attn.fn.qkv.bias": "model.safetensors",
672
+ "vision_tower.blocks.2.2.channel_block.channel_attn.fn.qkv.biases": "model.safetensors",
673
+ "vision_tower.blocks.2.2.channel_block.channel_attn.fn.qkv.scales": "model.safetensors",
674
+ "vision_tower.blocks.2.2.channel_block.channel_attn.fn.qkv.weight": "model.safetensors",
675
+ "vision_tower.blocks.2.2.channel_block.channel_attn.norm.bias": "model.safetensors",
676
+ "vision_tower.blocks.2.2.channel_block.channel_attn.norm.weight": "model.safetensors",
677
+ "vision_tower.blocks.2.2.channel_block.conv1.fn.dw.bias": "model.safetensors",
678
+ "vision_tower.blocks.2.2.channel_block.conv1.fn.dw.weight": "model.safetensors",
679
+ "vision_tower.blocks.2.2.channel_block.conv2.fn.dw.bias": "model.safetensors",
680
+ "vision_tower.blocks.2.2.channel_block.conv2.fn.dw.weight": "model.safetensors",
681
+ "vision_tower.blocks.2.2.channel_block.ffn.fn.net.fc1.bias": "model.safetensors",
682
+ "vision_tower.blocks.2.2.channel_block.ffn.fn.net.fc1.biases": "model.safetensors",
683
+ "vision_tower.blocks.2.2.channel_block.ffn.fn.net.fc1.scales": "model.safetensors",
684
+ "vision_tower.blocks.2.2.channel_block.ffn.fn.net.fc1.weight": "model.safetensors",
685
+ "vision_tower.blocks.2.2.channel_block.ffn.fn.net.fc2.bias": "model.safetensors",
686
+ "vision_tower.blocks.2.2.channel_block.ffn.fn.net.fc2.biases": "model.safetensors",
687
+ "vision_tower.blocks.2.2.channel_block.ffn.fn.net.fc2.scales": "model.safetensors",
688
+ "vision_tower.blocks.2.2.channel_block.ffn.fn.net.fc2.weight": "model.safetensors",
689
+ "vision_tower.blocks.2.2.channel_block.ffn.norm.bias": "model.safetensors",
690
+ "vision_tower.blocks.2.2.channel_block.ffn.norm.weight": "model.safetensors",
691
+ "vision_tower.blocks.2.2.spatial_block.conv1.fn.dw.bias": "model.safetensors",
692
+ "vision_tower.blocks.2.2.spatial_block.conv1.fn.dw.weight": "model.safetensors",
693
+ "vision_tower.blocks.2.2.spatial_block.conv2.fn.dw.bias": "model.safetensors",
694
+ "vision_tower.blocks.2.2.spatial_block.conv2.fn.dw.weight": "model.safetensors",
695
+ "vision_tower.blocks.2.2.spatial_block.ffn.fn.net.fc1.bias": "model.safetensors",
696
+ "vision_tower.blocks.2.2.spatial_block.ffn.fn.net.fc1.biases": "model.safetensors",
697
+ "vision_tower.blocks.2.2.spatial_block.ffn.fn.net.fc1.scales": "model.safetensors",
698
+ "vision_tower.blocks.2.2.spatial_block.ffn.fn.net.fc1.weight": "model.safetensors",
699
+ "vision_tower.blocks.2.2.spatial_block.ffn.fn.net.fc2.bias": "model.safetensors",
700
+ "vision_tower.blocks.2.2.spatial_block.ffn.fn.net.fc2.biases": "model.safetensors",
701
+ "vision_tower.blocks.2.2.spatial_block.ffn.fn.net.fc2.scales": "model.safetensors",
702
+ "vision_tower.blocks.2.2.spatial_block.ffn.fn.net.fc2.weight": "model.safetensors",
703
+ "vision_tower.blocks.2.2.spatial_block.ffn.norm.bias": "model.safetensors",
704
+ "vision_tower.blocks.2.2.spatial_block.ffn.norm.weight": "model.safetensors",
705
+ "vision_tower.blocks.2.2.spatial_block.window_attn.fn.proj.bias": "model.safetensors",
706
+ "vision_tower.blocks.2.2.spatial_block.window_attn.fn.proj.biases": "model.safetensors",
707
+ "vision_tower.blocks.2.2.spatial_block.window_attn.fn.proj.scales": "model.safetensors",
708
+ "vision_tower.blocks.2.2.spatial_block.window_attn.fn.proj.weight": "model.safetensors",
709
+ "vision_tower.blocks.2.2.spatial_block.window_attn.fn.qkv.bias": "model.safetensors",
710
+ "vision_tower.blocks.2.2.spatial_block.window_attn.fn.qkv.biases": "model.safetensors",
711
+ "vision_tower.blocks.2.2.spatial_block.window_attn.fn.qkv.scales": "model.safetensors",
712
+ "vision_tower.blocks.2.2.spatial_block.window_attn.fn.qkv.weight": "model.safetensors",
713
+ "vision_tower.blocks.2.2.spatial_block.window_attn.norm.bias": "model.safetensors",
714
+ "vision_tower.blocks.2.2.spatial_block.window_attn.norm.weight": "model.safetensors",
715
+ "vision_tower.blocks.2.3.channel_block.channel_attn.fn.proj.bias": "model.safetensors",
716
+ "vision_tower.blocks.2.3.channel_block.channel_attn.fn.proj.biases": "model.safetensors",
717
+ "vision_tower.blocks.2.3.channel_block.channel_attn.fn.proj.scales": "model.safetensors",
718
+ "vision_tower.blocks.2.3.channel_block.channel_attn.fn.proj.weight": "model.safetensors",
719
+ "vision_tower.blocks.2.3.channel_block.channel_attn.fn.qkv.bias": "model.safetensors",
720
+ "vision_tower.blocks.2.3.channel_block.channel_attn.fn.qkv.biases": "model.safetensors",
721
+ "vision_tower.blocks.2.3.channel_block.channel_attn.fn.qkv.scales": "model.safetensors",
722
+ "vision_tower.blocks.2.3.channel_block.channel_attn.fn.qkv.weight": "model.safetensors",
723
+ "vision_tower.blocks.2.3.channel_block.channel_attn.norm.bias": "model.safetensors",
724
+ "vision_tower.blocks.2.3.channel_block.channel_attn.norm.weight": "model.safetensors",
725
+ "vision_tower.blocks.2.3.channel_block.conv1.fn.dw.bias": "model.safetensors",
726
+ "vision_tower.blocks.2.3.channel_block.conv1.fn.dw.weight": "model.safetensors",
727
+ "vision_tower.blocks.2.3.channel_block.conv2.fn.dw.bias": "model.safetensors",
728
+ "vision_tower.blocks.2.3.channel_block.conv2.fn.dw.weight": "model.safetensors",
729
+ "vision_tower.blocks.2.3.channel_block.ffn.fn.net.fc1.bias": "model.safetensors",
730
+ "vision_tower.blocks.2.3.channel_block.ffn.fn.net.fc1.biases": "model.safetensors",
731
+ "vision_tower.blocks.2.3.channel_block.ffn.fn.net.fc1.scales": "model.safetensors",
732
+ "vision_tower.blocks.2.3.channel_block.ffn.fn.net.fc1.weight": "model.safetensors",
733
+ "vision_tower.blocks.2.3.channel_block.ffn.fn.net.fc2.bias": "model.safetensors",
734
+ "vision_tower.blocks.2.3.channel_block.ffn.fn.net.fc2.biases": "model.safetensors",
735
+ "vision_tower.blocks.2.3.channel_block.ffn.fn.net.fc2.scales": "model.safetensors",
736
+ "vision_tower.blocks.2.3.channel_block.ffn.fn.net.fc2.weight": "model.safetensors",
737
+ "vision_tower.blocks.2.3.channel_block.ffn.norm.bias": "model.safetensors",
738
+ "vision_tower.blocks.2.3.channel_block.ffn.norm.weight": "model.safetensors",
739
+ "vision_tower.blocks.2.3.spatial_block.conv1.fn.dw.bias": "model.safetensors",
740
+ "vision_tower.blocks.2.3.spatial_block.conv1.fn.dw.weight": "model.safetensors",
741
+ "vision_tower.blocks.2.3.spatial_block.conv2.fn.dw.bias": "model.safetensors",
742
+ "vision_tower.blocks.2.3.spatial_block.conv2.fn.dw.weight": "model.safetensors",
743
+ "vision_tower.blocks.2.3.spatial_block.ffn.fn.net.fc1.bias": "model.safetensors",
744
+ "vision_tower.blocks.2.3.spatial_block.ffn.fn.net.fc1.biases": "model.safetensors",
745
+ "vision_tower.blocks.2.3.spatial_block.ffn.fn.net.fc1.scales": "model.safetensors",
746
+ "vision_tower.blocks.2.3.spatial_block.ffn.fn.net.fc1.weight": "model.safetensors",
747
+ "vision_tower.blocks.2.3.spatial_block.ffn.fn.net.fc2.bias": "model.safetensors",
748
+ "vision_tower.blocks.2.3.spatial_block.ffn.fn.net.fc2.biases": "model.safetensors",
749
+ "vision_tower.blocks.2.3.spatial_block.ffn.fn.net.fc2.scales": "model.safetensors",
750
+ "vision_tower.blocks.2.3.spatial_block.ffn.fn.net.fc2.weight": "model.safetensors",
751
+ "vision_tower.blocks.2.3.spatial_block.ffn.norm.bias": "model.safetensors",
752
+ "vision_tower.blocks.2.3.spatial_block.ffn.norm.weight": "model.safetensors",
753
+ "vision_tower.blocks.2.3.spatial_block.window_attn.fn.proj.bias": "model.safetensors",
754
+ "vision_tower.blocks.2.3.spatial_block.window_attn.fn.proj.biases": "model.safetensors",
755
+ "vision_tower.blocks.2.3.spatial_block.window_attn.fn.proj.scales": "model.safetensors",
756
+ "vision_tower.blocks.2.3.spatial_block.window_attn.fn.proj.weight": "model.safetensors",
757
+ "vision_tower.blocks.2.3.spatial_block.window_attn.fn.qkv.bias": "model.safetensors",
758
+ "vision_tower.blocks.2.3.spatial_block.window_attn.fn.qkv.biases": "model.safetensors",
759
+ "vision_tower.blocks.2.3.spatial_block.window_attn.fn.qkv.scales": "model.safetensors",
760
+ "vision_tower.blocks.2.3.spatial_block.window_attn.fn.qkv.weight": "model.safetensors",
761
+ "vision_tower.blocks.2.3.spatial_block.window_attn.norm.bias": "model.safetensors",
762
+ "vision_tower.blocks.2.3.spatial_block.window_attn.norm.weight": "model.safetensors",
763
+ "vision_tower.blocks.2.4.channel_block.channel_attn.fn.proj.bias": "model.safetensors",
764
+ "vision_tower.blocks.2.4.channel_block.channel_attn.fn.proj.biases": "model.safetensors",
765
+ "vision_tower.blocks.2.4.channel_block.channel_attn.fn.proj.scales": "model.safetensors",
766
+ "vision_tower.blocks.2.4.channel_block.channel_attn.fn.proj.weight": "model.safetensors",
767
+ "vision_tower.blocks.2.4.channel_block.channel_attn.fn.qkv.bias": "model.safetensors",
768
+ "vision_tower.blocks.2.4.channel_block.channel_attn.fn.qkv.biases": "model.safetensors",
769
+ "vision_tower.blocks.2.4.channel_block.channel_attn.fn.qkv.scales": "model.safetensors",
770
+ "vision_tower.blocks.2.4.channel_block.channel_attn.fn.qkv.weight": "model.safetensors",
771
+ "vision_tower.blocks.2.4.channel_block.channel_attn.norm.bias": "model.safetensors",
772
+ "vision_tower.blocks.2.4.channel_block.channel_attn.norm.weight": "model.safetensors",
773
+ "vision_tower.blocks.2.4.channel_block.conv1.fn.dw.bias": "model.safetensors",
774
+ "vision_tower.blocks.2.4.channel_block.conv1.fn.dw.weight": "model.safetensors",
775
+ "vision_tower.blocks.2.4.channel_block.conv2.fn.dw.bias": "model.safetensors",
776
+ "vision_tower.blocks.2.4.channel_block.conv2.fn.dw.weight": "model.safetensors",
777
+ "vision_tower.blocks.2.4.channel_block.ffn.fn.net.fc1.bias": "model.safetensors",
778
+ "vision_tower.blocks.2.4.channel_block.ffn.fn.net.fc1.biases": "model.safetensors",
779
+ "vision_tower.blocks.2.4.channel_block.ffn.fn.net.fc1.scales": "model.safetensors",
780
+ "vision_tower.blocks.2.4.channel_block.ffn.fn.net.fc1.weight": "model.safetensors",
781
+ "vision_tower.blocks.2.4.channel_block.ffn.fn.net.fc2.bias": "model.safetensors",
782
+ "vision_tower.blocks.2.4.channel_block.ffn.fn.net.fc2.biases": "model.safetensors",
783
+ "vision_tower.blocks.2.4.channel_block.ffn.fn.net.fc2.scales": "model.safetensors",
784
+ "vision_tower.blocks.2.4.channel_block.ffn.fn.net.fc2.weight": "model.safetensors",
785
+ "vision_tower.blocks.2.4.channel_block.ffn.norm.bias": "model.safetensors",
786
+ "vision_tower.blocks.2.4.channel_block.ffn.norm.weight": "model.safetensors",
787
+ "vision_tower.blocks.2.4.spatial_block.conv1.fn.dw.bias": "model.safetensors",
788
+ "vision_tower.blocks.2.4.spatial_block.conv1.fn.dw.weight": "model.safetensors",
789
+ "vision_tower.blocks.2.4.spatial_block.conv2.fn.dw.bias": "model.safetensors",
790
+ "vision_tower.blocks.2.4.spatial_block.conv2.fn.dw.weight": "model.safetensors",
791
+ "vision_tower.blocks.2.4.spatial_block.ffn.fn.net.fc1.bias": "model.safetensors",
792
+ "vision_tower.blocks.2.4.spatial_block.ffn.fn.net.fc1.biases": "model.safetensors",
793
+ "vision_tower.blocks.2.4.spatial_block.ffn.fn.net.fc1.scales": "model.safetensors",
794
+ "vision_tower.blocks.2.4.spatial_block.ffn.fn.net.fc1.weight": "model.safetensors",
795
+ "vision_tower.blocks.2.4.spatial_block.ffn.fn.net.fc2.bias": "model.safetensors",
796
+ "vision_tower.blocks.2.4.spatial_block.ffn.fn.net.fc2.biases": "model.safetensors",
797
+ "vision_tower.blocks.2.4.spatial_block.ffn.fn.net.fc2.scales": "model.safetensors",
798
+ "vision_tower.blocks.2.4.spatial_block.ffn.fn.net.fc2.weight": "model.safetensors",
799
+ "vision_tower.blocks.2.4.spatial_block.ffn.norm.bias": "model.safetensors",
800
+ "vision_tower.blocks.2.4.spatial_block.ffn.norm.weight": "model.safetensors",
801
+ "vision_tower.blocks.2.4.spatial_block.window_attn.fn.proj.bias": "model.safetensors",
802
+ "vision_tower.blocks.2.4.spatial_block.window_attn.fn.proj.biases": "model.safetensors",
803
+ "vision_tower.blocks.2.4.spatial_block.window_attn.fn.proj.scales": "model.safetensors",
804
+ "vision_tower.blocks.2.4.spatial_block.window_attn.fn.proj.weight": "model.safetensors",
805
+ "vision_tower.blocks.2.4.spatial_block.window_attn.fn.qkv.bias": "model.safetensors",
806
+ "vision_tower.blocks.2.4.spatial_block.window_attn.fn.qkv.biases": "model.safetensors",
807
+ "vision_tower.blocks.2.4.spatial_block.window_attn.fn.qkv.scales": "model.safetensors",
808
+ "vision_tower.blocks.2.4.spatial_block.window_attn.fn.qkv.weight": "model.safetensors",
809
+ "vision_tower.blocks.2.4.spatial_block.window_attn.norm.bias": "model.safetensors",
810
+ "vision_tower.blocks.2.4.spatial_block.window_attn.norm.weight": "model.safetensors",
811
+ "vision_tower.blocks.2.5.channel_block.channel_attn.fn.proj.bias": "model.safetensors",
812
+ "vision_tower.blocks.2.5.channel_block.channel_attn.fn.proj.biases": "model.safetensors",
813
+ "vision_tower.blocks.2.5.channel_block.channel_attn.fn.proj.scales": "model.safetensors",
814
+ "vision_tower.blocks.2.5.channel_block.channel_attn.fn.proj.weight": "model.safetensors",
815
+ "vision_tower.blocks.2.5.channel_block.channel_attn.fn.qkv.bias": "model.safetensors",
816
+ "vision_tower.blocks.2.5.channel_block.channel_attn.fn.qkv.biases": "model.safetensors",
817
+ "vision_tower.blocks.2.5.channel_block.channel_attn.fn.qkv.scales": "model.safetensors",
818
+ "vision_tower.blocks.2.5.channel_block.channel_attn.fn.qkv.weight": "model.safetensors",
819
+ "vision_tower.blocks.2.5.channel_block.channel_attn.norm.bias": "model.safetensors",
820
+ "vision_tower.blocks.2.5.channel_block.channel_attn.norm.weight": "model.safetensors",
821
+ "vision_tower.blocks.2.5.channel_block.conv1.fn.dw.bias": "model.safetensors",
822
+ "vision_tower.blocks.2.5.channel_block.conv1.fn.dw.weight": "model.safetensors",
823
+ "vision_tower.blocks.2.5.channel_block.conv2.fn.dw.bias": "model.safetensors",
824
+ "vision_tower.blocks.2.5.channel_block.conv2.fn.dw.weight": "model.safetensors",
825
+ "vision_tower.blocks.2.5.channel_block.ffn.fn.net.fc1.bias": "model.safetensors",
826
+ "vision_tower.blocks.2.5.channel_block.ffn.fn.net.fc1.biases": "model.safetensors",
827
+ "vision_tower.blocks.2.5.channel_block.ffn.fn.net.fc1.scales": "model.safetensors",
828
+ "vision_tower.blocks.2.5.channel_block.ffn.fn.net.fc1.weight": "model.safetensors",
829
+ "vision_tower.blocks.2.5.channel_block.ffn.fn.net.fc2.bias": "model.safetensors",
830
+ "vision_tower.blocks.2.5.channel_block.ffn.fn.net.fc2.biases": "model.safetensors",
831
+ "vision_tower.blocks.2.5.channel_block.ffn.fn.net.fc2.scales": "model.safetensors",
832
+ "vision_tower.blocks.2.5.channel_block.ffn.fn.net.fc2.weight": "model.safetensors",
833
+ "vision_tower.blocks.2.5.channel_block.ffn.norm.bias": "model.safetensors",
834
+ "vision_tower.blocks.2.5.channel_block.ffn.norm.weight": "model.safetensors",
835
+ "vision_tower.blocks.2.5.spatial_block.conv1.fn.dw.bias": "model.safetensors",
836
+ "vision_tower.blocks.2.5.spatial_block.conv1.fn.dw.weight": "model.safetensors",
837
+ "vision_tower.blocks.2.5.spatial_block.conv2.fn.dw.bias": "model.safetensors",
838
+ "vision_tower.blocks.2.5.spatial_block.conv2.fn.dw.weight": "model.safetensors",
839
+ "vision_tower.blocks.2.5.spatial_block.ffn.fn.net.fc1.bias": "model.safetensors",
840
+ "vision_tower.blocks.2.5.spatial_block.ffn.fn.net.fc1.biases": "model.safetensors",
841
+ "vision_tower.blocks.2.5.spatial_block.ffn.fn.net.fc1.scales": "model.safetensors",
842
+ "vision_tower.blocks.2.5.spatial_block.ffn.fn.net.fc1.weight": "model.safetensors",
843
+ "vision_tower.blocks.2.5.spatial_block.ffn.fn.net.fc2.bias": "model.safetensors",
844
+ "vision_tower.blocks.2.5.spatial_block.ffn.fn.net.fc2.biases": "model.safetensors",
845
+ "vision_tower.blocks.2.5.spatial_block.ffn.fn.net.fc2.scales": "model.safetensors",
846
+ "vision_tower.blocks.2.5.spatial_block.ffn.fn.net.fc2.weight": "model.safetensors",
847
+ "vision_tower.blocks.2.5.spatial_block.ffn.norm.bias": "model.safetensors",
848
+ "vision_tower.blocks.2.5.spatial_block.ffn.norm.weight": "model.safetensors",
849
+ "vision_tower.blocks.2.5.spatial_block.window_attn.fn.proj.bias": "model.safetensors",
850
+ "vision_tower.blocks.2.5.spatial_block.window_attn.fn.proj.biases": "model.safetensors",
851
+ "vision_tower.blocks.2.5.spatial_block.window_attn.fn.proj.scales": "model.safetensors",
852
+ "vision_tower.blocks.2.5.spatial_block.window_attn.fn.proj.weight": "model.safetensors",
853
+ "vision_tower.blocks.2.5.spatial_block.window_attn.fn.qkv.bias": "model.safetensors",
854
+ "vision_tower.blocks.2.5.spatial_block.window_attn.fn.qkv.biases": "model.safetensors",
855
+ "vision_tower.blocks.2.5.spatial_block.window_attn.fn.qkv.scales": "model.safetensors",
856
+ "vision_tower.blocks.2.5.spatial_block.window_attn.fn.qkv.weight": "model.safetensors",
857
+ "vision_tower.blocks.2.5.spatial_block.window_attn.norm.bias": "model.safetensors",
858
+ "vision_tower.blocks.2.5.spatial_block.window_attn.norm.weight": "model.safetensors",
859
+ "vision_tower.blocks.2.6.channel_block.channel_attn.fn.proj.bias": "model.safetensors",
860
+ "vision_tower.blocks.2.6.channel_block.channel_attn.fn.proj.biases": "model.safetensors",
861
+ "vision_tower.blocks.2.6.channel_block.channel_attn.fn.proj.scales": "model.safetensors",
862
+ "vision_tower.blocks.2.6.channel_block.channel_attn.fn.proj.weight": "model.safetensors",
863
+ "vision_tower.blocks.2.6.channel_block.channel_attn.fn.qkv.bias": "model.safetensors",
864
+ "vision_tower.blocks.2.6.channel_block.channel_attn.fn.qkv.biases": "model.safetensors",
865
+ "vision_tower.blocks.2.6.channel_block.channel_attn.fn.qkv.scales": "model.safetensors",
866
+ "vision_tower.blocks.2.6.channel_block.channel_attn.fn.qkv.weight": "model.safetensors",
867
+ "vision_tower.blocks.2.6.channel_block.channel_attn.norm.bias": "model.safetensors",
868
+ "vision_tower.blocks.2.6.channel_block.channel_attn.norm.weight": "model.safetensors",
869
+ "vision_tower.blocks.2.6.channel_block.conv1.fn.dw.bias": "model.safetensors",
870
+ "vision_tower.blocks.2.6.channel_block.conv1.fn.dw.weight": "model.safetensors",
871
+ "vision_tower.blocks.2.6.channel_block.conv2.fn.dw.bias": "model.safetensors",
872
+ "vision_tower.blocks.2.6.channel_block.conv2.fn.dw.weight": "model.safetensors",
873
+ "vision_tower.blocks.2.6.channel_block.ffn.fn.net.fc1.bias": "model.safetensors",
874
+ "vision_tower.blocks.2.6.channel_block.ffn.fn.net.fc1.biases": "model.safetensors",
875
+ "vision_tower.blocks.2.6.channel_block.ffn.fn.net.fc1.scales": "model.safetensors",
876
+ "vision_tower.blocks.2.6.channel_block.ffn.fn.net.fc1.weight": "model.safetensors",
877
+ "vision_tower.blocks.2.6.channel_block.ffn.fn.net.fc2.bias": "model.safetensors",
878
+ "vision_tower.blocks.2.6.channel_block.ffn.fn.net.fc2.biases": "model.safetensors",
879
+ "vision_tower.blocks.2.6.channel_block.ffn.fn.net.fc2.scales": "model.safetensors",
880
+ "vision_tower.blocks.2.6.channel_block.ffn.fn.net.fc2.weight": "model.safetensors",
881
+ "vision_tower.blocks.2.6.channel_block.ffn.norm.bias": "model.safetensors",
882
+ "vision_tower.blocks.2.6.channel_block.ffn.norm.weight": "model.safetensors",
883
+ "vision_tower.blocks.2.6.spatial_block.conv1.fn.dw.bias": "model.safetensors",
884
+ "vision_tower.blocks.2.6.spatial_block.conv1.fn.dw.weight": "model.safetensors",
885
+ "vision_tower.blocks.2.6.spatial_block.conv2.fn.dw.bias": "model.safetensors",
886
+ "vision_tower.blocks.2.6.spatial_block.conv2.fn.dw.weight": "model.safetensors",
887
+ "vision_tower.blocks.2.6.spatial_block.ffn.fn.net.fc1.bias": "model.safetensors",
888
+ "vision_tower.blocks.2.6.spatial_block.ffn.fn.net.fc1.biases": "model.safetensors",
889
+ "vision_tower.blocks.2.6.spatial_block.ffn.fn.net.fc1.scales": "model.safetensors",
890
+ "vision_tower.blocks.2.6.spatial_block.ffn.fn.net.fc1.weight": "model.safetensors",
891
+ "vision_tower.blocks.2.6.spatial_block.ffn.fn.net.fc2.bias": "model.safetensors",
892
+ "vision_tower.blocks.2.6.spatial_block.ffn.fn.net.fc2.biases": "model.safetensors",
893
+ "vision_tower.blocks.2.6.spatial_block.ffn.fn.net.fc2.scales": "model.safetensors",
894
+ "vision_tower.blocks.2.6.spatial_block.ffn.fn.net.fc2.weight": "model.safetensors",
895
+ "vision_tower.blocks.2.6.spatial_block.ffn.norm.bias": "model.safetensors",
896
+ "vision_tower.blocks.2.6.spatial_block.ffn.norm.weight": "model.safetensors",
897
+ "vision_tower.blocks.2.6.spatial_block.window_attn.fn.proj.bias": "model.safetensors",
898
+ "vision_tower.blocks.2.6.spatial_block.window_attn.fn.proj.biases": "model.safetensors",
899
+ "vision_tower.blocks.2.6.spatial_block.window_attn.fn.proj.scales": "model.safetensors",
900
+ "vision_tower.blocks.2.6.spatial_block.window_attn.fn.proj.weight": "model.safetensors",
901
+ "vision_tower.blocks.2.6.spatial_block.window_attn.fn.qkv.bias": "model.safetensors",
902
+ "vision_tower.blocks.2.6.spatial_block.window_attn.fn.qkv.biases": "model.safetensors",
903
+ "vision_tower.blocks.2.6.spatial_block.window_attn.fn.qkv.scales": "model.safetensors",
904
+ "vision_tower.blocks.2.6.spatial_block.window_attn.fn.qkv.weight": "model.safetensors",
905
+ "vision_tower.blocks.2.6.spatial_block.window_attn.norm.bias": "model.safetensors",
906
+ "vision_tower.blocks.2.6.spatial_block.window_attn.norm.weight": "model.safetensors",
907
+ "vision_tower.blocks.2.7.channel_block.channel_attn.fn.proj.bias": "model.safetensors",
908
+ "vision_tower.blocks.2.7.channel_block.channel_attn.fn.proj.biases": "model.safetensors",
909
+ "vision_tower.blocks.2.7.channel_block.channel_attn.fn.proj.scales": "model.safetensors",
910
+ "vision_tower.blocks.2.7.channel_block.channel_attn.fn.proj.weight": "model.safetensors",
911
+ "vision_tower.blocks.2.7.channel_block.channel_attn.fn.qkv.bias": "model.safetensors",
912
+ "vision_tower.blocks.2.7.channel_block.channel_attn.fn.qkv.biases": "model.safetensors",
913
+ "vision_tower.blocks.2.7.channel_block.channel_attn.fn.qkv.scales": "model.safetensors",
914
+ "vision_tower.blocks.2.7.channel_block.channel_attn.fn.qkv.weight": "model.safetensors",
915
+ "vision_tower.blocks.2.7.channel_block.channel_attn.norm.bias": "model.safetensors",
916
+ "vision_tower.blocks.2.7.channel_block.channel_attn.norm.weight": "model.safetensors",
917
+ "vision_tower.blocks.2.7.channel_block.conv1.fn.dw.bias": "model.safetensors",
918
+ "vision_tower.blocks.2.7.channel_block.conv1.fn.dw.weight": "model.safetensors",
919
+ "vision_tower.blocks.2.7.channel_block.conv2.fn.dw.bias": "model.safetensors",
920
+ "vision_tower.blocks.2.7.channel_block.conv2.fn.dw.weight": "model.safetensors",
921
+ "vision_tower.blocks.2.7.channel_block.ffn.fn.net.fc1.bias": "model.safetensors",
922
+ "vision_tower.blocks.2.7.channel_block.ffn.fn.net.fc1.biases": "model.safetensors",
923
+ "vision_tower.blocks.2.7.channel_block.ffn.fn.net.fc1.scales": "model.safetensors",
924
+ "vision_tower.blocks.2.7.channel_block.ffn.fn.net.fc1.weight": "model.safetensors",
925
+ "vision_tower.blocks.2.7.channel_block.ffn.fn.net.fc2.bias": "model.safetensors",
926
+ "vision_tower.blocks.2.7.channel_block.ffn.fn.net.fc2.biases": "model.safetensors",
927
+ "vision_tower.blocks.2.7.channel_block.ffn.fn.net.fc2.scales": "model.safetensors",
928
+ "vision_tower.blocks.2.7.channel_block.ffn.fn.net.fc2.weight": "model.safetensors",
929
+ "vision_tower.blocks.2.7.channel_block.ffn.norm.bias": "model.safetensors",
930
+ "vision_tower.blocks.2.7.channel_block.ffn.norm.weight": "model.safetensors",
931
+ "vision_tower.blocks.2.7.spatial_block.conv1.fn.dw.bias": "model.safetensors",
932
+ "vision_tower.blocks.2.7.spatial_block.conv1.fn.dw.weight": "model.safetensors",
933
+ "vision_tower.blocks.2.7.spatial_block.conv2.fn.dw.bias": "model.safetensors",
934
+ "vision_tower.blocks.2.7.spatial_block.conv2.fn.dw.weight": "model.safetensors",
935
+ "vision_tower.blocks.2.7.spatial_block.ffn.fn.net.fc1.bias": "model.safetensors",
936
+ "vision_tower.blocks.2.7.spatial_block.ffn.fn.net.fc1.biases": "model.safetensors",
937
+ "vision_tower.blocks.2.7.spatial_block.ffn.fn.net.fc1.scales": "model.safetensors",
938
+ "vision_tower.blocks.2.7.spatial_block.ffn.fn.net.fc1.weight": "model.safetensors",
939
+ "vision_tower.blocks.2.7.spatial_block.ffn.fn.net.fc2.bias": "model.safetensors",
940
+ "vision_tower.blocks.2.7.spatial_block.ffn.fn.net.fc2.biases": "model.safetensors",
941
+ "vision_tower.blocks.2.7.spatial_block.ffn.fn.net.fc2.scales": "model.safetensors",
942
+ "vision_tower.blocks.2.7.spatial_block.ffn.fn.net.fc2.weight": "model.safetensors",
943
+ "vision_tower.blocks.2.7.spatial_block.ffn.norm.bias": "model.safetensors",
944
+ "vision_tower.blocks.2.7.spatial_block.ffn.norm.weight": "model.safetensors",
945
+ "vision_tower.blocks.2.7.spatial_block.window_attn.fn.proj.bias": "model.safetensors",
946
+ "vision_tower.blocks.2.7.spatial_block.window_attn.fn.proj.biases": "model.safetensors",
947
+ "vision_tower.blocks.2.7.spatial_block.window_attn.fn.proj.scales": "model.safetensors",
948
+ "vision_tower.blocks.2.7.spatial_block.window_attn.fn.proj.weight": "model.safetensors",
949
+ "vision_tower.blocks.2.7.spatial_block.window_attn.fn.qkv.bias": "model.safetensors",
950
+ "vision_tower.blocks.2.7.spatial_block.window_attn.fn.qkv.biases": "model.safetensors",
951
+ "vision_tower.blocks.2.7.spatial_block.window_attn.fn.qkv.scales": "model.safetensors",
952
+ "vision_tower.blocks.2.7.spatial_block.window_attn.fn.qkv.weight": "model.safetensors",
953
+ "vision_tower.blocks.2.7.spatial_block.window_attn.norm.bias": "model.safetensors",
954
+ "vision_tower.blocks.2.7.spatial_block.window_attn.norm.weight": "model.safetensors",
955
+ "vision_tower.blocks.2.8.channel_block.channel_attn.fn.proj.bias": "model.safetensors",
956
+ "vision_tower.blocks.2.8.channel_block.channel_attn.fn.proj.biases": "model.safetensors",
957
+ "vision_tower.blocks.2.8.channel_block.channel_attn.fn.proj.scales": "model.safetensors",
958
+ "vision_tower.blocks.2.8.channel_block.channel_attn.fn.proj.weight": "model.safetensors",
959
+ "vision_tower.blocks.2.8.channel_block.channel_attn.fn.qkv.bias": "model.safetensors",
960
+ "vision_tower.blocks.2.8.channel_block.channel_attn.fn.qkv.biases": "model.safetensors",
961
+ "vision_tower.blocks.2.8.channel_block.channel_attn.fn.qkv.scales": "model.safetensors",
962
+ "vision_tower.blocks.2.8.channel_block.channel_attn.fn.qkv.weight": "model.safetensors",
963
+ "vision_tower.blocks.2.8.channel_block.channel_attn.norm.bias": "model.safetensors",
964
+ "vision_tower.blocks.2.8.channel_block.channel_attn.norm.weight": "model.safetensors",
965
+ "vision_tower.blocks.2.8.channel_block.conv1.fn.dw.bias": "model.safetensors",
966
+ "vision_tower.blocks.2.8.channel_block.conv1.fn.dw.weight": "model.safetensors",
967
+ "vision_tower.blocks.2.8.channel_block.conv2.fn.dw.bias": "model.safetensors",
968
+ "vision_tower.blocks.2.8.channel_block.conv2.fn.dw.weight": "model.safetensors",
969
+ "vision_tower.blocks.2.8.channel_block.ffn.fn.net.fc1.bias": "model.safetensors",
970
+ "vision_tower.blocks.2.8.channel_block.ffn.fn.net.fc1.biases": "model.safetensors",
971
+ "vision_tower.blocks.2.8.channel_block.ffn.fn.net.fc1.scales": "model.safetensors",
972
+ "vision_tower.blocks.2.8.channel_block.ffn.fn.net.fc1.weight": "model.safetensors",
973
+ "vision_tower.blocks.2.8.channel_block.ffn.fn.net.fc2.bias": "model.safetensors",
974
+ "vision_tower.blocks.2.8.channel_block.ffn.fn.net.fc2.biases": "model.safetensors",
975
+ "vision_tower.blocks.2.8.channel_block.ffn.fn.net.fc2.scales": "model.safetensors",
976
+ "vision_tower.blocks.2.8.channel_block.ffn.fn.net.fc2.weight": "model.safetensors",
977
+ "vision_tower.blocks.2.8.channel_block.ffn.norm.bias": "model.safetensors",
978
+ "vision_tower.blocks.2.8.channel_block.ffn.norm.weight": "model.safetensors",
979
+ "vision_tower.blocks.2.8.spatial_block.conv1.fn.dw.bias": "model.safetensors",
980
+ "vision_tower.blocks.2.8.spatial_block.conv1.fn.dw.weight": "model.safetensors",
981
+ "vision_tower.blocks.2.8.spatial_block.conv2.fn.dw.bias": "model.safetensors",
982
+ "vision_tower.blocks.2.8.spatial_block.conv2.fn.dw.weight": "model.safetensors",
983
+ "vision_tower.blocks.2.8.spatial_block.ffn.fn.net.fc1.bias": "model.safetensors",
984
+ "vision_tower.blocks.2.8.spatial_block.ffn.fn.net.fc1.biases": "model.safetensors",
985
+ "vision_tower.blocks.2.8.spatial_block.ffn.fn.net.fc1.scales": "model.safetensors",
986
+ "vision_tower.blocks.2.8.spatial_block.ffn.fn.net.fc1.weight": "model.safetensors",
987
+ "vision_tower.blocks.2.8.spatial_block.ffn.fn.net.fc2.bias": "model.safetensors",
988
+ "vision_tower.blocks.2.8.spatial_block.ffn.fn.net.fc2.biases": "model.safetensors",
989
+ "vision_tower.blocks.2.8.spatial_block.ffn.fn.net.fc2.scales": "model.safetensors",
990
+ "vision_tower.blocks.2.8.spatial_block.ffn.fn.net.fc2.weight": "model.safetensors",
991
+ "vision_tower.blocks.2.8.spatial_block.ffn.norm.bias": "model.safetensors",
992
+ "vision_tower.blocks.2.8.spatial_block.ffn.norm.weight": "model.safetensors",
993
+ "vision_tower.blocks.2.8.spatial_block.window_attn.fn.proj.bias": "model.safetensors",
994
+ "vision_tower.blocks.2.8.spatial_block.window_attn.fn.proj.biases": "model.safetensors",
995
+ "vision_tower.blocks.2.8.spatial_block.window_attn.fn.proj.scales": "model.safetensors",
996
+ "vision_tower.blocks.2.8.spatial_block.window_attn.fn.proj.weight": "model.safetensors",
997
+ "vision_tower.blocks.2.8.spatial_block.window_attn.fn.qkv.bias": "model.safetensors",
998
+ "vision_tower.blocks.2.8.spatial_block.window_attn.fn.qkv.biases": "model.safetensors",
999
+ "vision_tower.blocks.2.8.spatial_block.window_attn.fn.qkv.scales": "model.safetensors",
1000
+ "vision_tower.blocks.2.8.spatial_block.window_attn.fn.qkv.weight": "model.safetensors",
1001
+ "vision_tower.blocks.2.8.spatial_block.window_attn.norm.bias": "model.safetensors",
1002
+ "vision_tower.blocks.2.8.spatial_block.window_attn.norm.weight": "model.safetensors",
1003
+ "vision_tower.blocks.3.0.channel_block.channel_attn.fn.proj.bias": "model.safetensors",
1004
+ "vision_tower.blocks.3.0.channel_block.channel_attn.fn.proj.biases": "model.safetensors",
1005
+ "vision_tower.blocks.3.0.channel_block.channel_attn.fn.proj.scales": "model.safetensors",
1006
+ "vision_tower.blocks.3.0.channel_block.channel_attn.fn.proj.weight": "model.safetensors",
1007
+ "vision_tower.blocks.3.0.channel_block.channel_attn.fn.qkv.bias": "model.safetensors",
1008
+ "vision_tower.blocks.3.0.channel_block.channel_attn.fn.qkv.biases": "model.safetensors",
1009
+ "vision_tower.blocks.3.0.channel_block.channel_attn.fn.qkv.scales": "model.safetensors",
1010
+ "vision_tower.blocks.3.0.channel_block.channel_attn.fn.qkv.weight": "model.safetensors",
1011
+ "vision_tower.blocks.3.0.channel_block.channel_attn.norm.bias": "model.safetensors",
1012
+ "vision_tower.blocks.3.0.channel_block.channel_attn.norm.weight": "model.safetensors",
1013
+ "vision_tower.blocks.3.0.channel_block.conv1.fn.dw.bias": "model.safetensors",
1014
+ "vision_tower.blocks.3.0.channel_block.conv1.fn.dw.weight": "model.safetensors",
1015
+ "vision_tower.blocks.3.0.channel_block.conv2.fn.dw.bias": "model.safetensors",
1016
+ "vision_tower.blocks.3.0.channel_block.conv2.fn.dw.weight": "model.safetensors",
1017
+ "vision_tower.blocks.3.0.channel_block.ffn.fn.net.fc1.bias": "model.safetensors",
1018
+ "vision_tower.blocks.3.0.channel_block.ffn.fn.net.fc1.biases": "model.safetensors",
1019
+ "vision_tower.blocks.3.0.channel_block.ffn.fn.net.fc1.scales": "model.safetensors",
1020
+ "vision_tower.blocks.3.0.channel_block.ffn.fn.net.fc1.weight": "model.safetensors",
1021
+ "vision_tower.blocks.3.0.channel_block.ffn.fn.net.fc2.bias": "model.safetensors",
1022
+ "vision_tower.blocks.3.0.channel_block.ffn.fn.net.fc2.biases": "model.safetensors",
1023
+ "vision_tower.blocks.3.0.channel_block.ffn.fn.net.fc2.scales": "model.safetensors",
1024
+ "vision_tower.blocks.3.0.channel_block.ffn.fn.net.fc2.weight": "model.safetensors",
1025
+ "vision_tower.blocks.3.0.channel_block.ffn.norm.bias": "model.safetensors",
1026
+ "vision_tower.blocks.3.0.channel_block.ffn.norm.weight": "model.safetensors",
1027
+ "vision_tower.blocks.3.0.spatial_block.conv1.fn.dw.bias": "model.safetensors",
1028
+ "vision_tower.blocks.3.0.spatial_block.conv1.fn.dw.weight": "model.safetensors",
1029
+ "vision_tower.blocks.3.0.spatial_block.conv2.fn.dw.bias": "model.safetensors",
1030
+ "vision_tower.blocks.3.0.spatial_block.conv2.fn.dw.weight": "model.safetensors",
1031
+ "vision_tower.blocks.3.0.spatial_block.ffn.fn.net.fc1.bias": "model.safetensors",
1032
+ "vision_tower.blocks.3.0.spatial_block.ffn.fn.net.fc1.biases": "model.safetensors",
1033
+ "vision_tower.blocks.3.0.spatial_block.ffn.fn.net.fc1.scales": "model.safetensors",
1034
+ "vision_tower.blocks.3.0.spatial_block.ffn.fn.net.fc1.weight": "model.safetensors",
1035
+ "vision_tower.blocks.3.0.spatial_block.ffn.fn.net.fc2.bias": "model.safetensors",
1036
+ "vision_tower.blocks.3.0.spatial_block.ffn.fn.net.fc2.biases": "model.safetensors",
1037
+ "vision_tower.blocks.3.0.spatial_block.ffn.fn.net.fc2.scales": "model.safetensors",
1038
+ "vision_tower.blocks.3.0.spatial_block.ffn.fn.net.fc2.weight": "model.safetensors",
1039
+ "vision_tower.blocks.3.0.spatial_block.ffn.norm.bias": "model.safetensors",
1040
+ "vision_tower.blocks.3.0.spatial_block.ffn.norm.weight": "model.safetensors",
1041
+ "vision_tower.blocks.3.0.spatial_block.window_attn.fn.proj.bias": "model.safetensors",
1042
+ "vision_tower.blocks.3.0.spatial_block.window_attn.fn.proj.biases": "model.safetensors",
1043
+ "vision_tower.blocks.3.0.spatial_block.window_attn.fn.proj.scales": "model.safetensors",
1044
+ "vision_tower.blocks.3.0.spatial_block.window_attn.fn.proj.weight": "model.safetensors",
1045
+ "vision_tower.blocks.3.0.spatial_block.window_attn.fn.qkv.bias": "model.safetensors",
1046
+ "vision_tower.blocks.3.0.spatial_block.window_attn.fn.qkv.biases": "model.safetensors",
1047
+ "vision_tower.blocks.3.0.spatial_block.window_attn.fn.qkv.scales": "model.safetensors",
1048
+ "vision_tower.blocks.3.0.spatial_block.window_attn.fn.qkv.weight": "model.safetensors",
1049
+ "vision_tower.blocks.3.0.spatial_block.window_attn.norm.bias": "model.safetensors",
1050
+ "vision_tower.blocks.3.0.spatial_block.window_attn.norm.weight": "model.safetensors",
1051
+ "vision_tower.convs.0.norm.bias": "model.safetensors",
1052
+ "vision_tower.convs.0.norm.weight": "model.safetensors",
1053
+ "vision_tower.convs.0.proj.bias": "model.safetensors",
1054
+ "vision_tower.convs.0.proj.weight": "model.safetensors",
1055
+ "vision_tower.convs.1.norm.bias": "model.safetensors",
1056
+ "vision_tower.convs.1.norm.weight": "model.safetensors",
1057
+ "vision_tower.convs.1.proj.bias": "model.safetensors",
1058
+ "vision_tower.convs.1.proj.weight": "model.safetensors",
1059
+ "vision_tower.convs.2.norm.bias": "model.safetensors",
1060
+ "vision_tower.convs.2.norm.weight": "model.safetensors",
1061
+ "vision_tower.convs.2.proj.bias": "model.safetensors",
1062
+ "vision_tower.convs.2.proj.weight": "model.safetensors",
1063
+ "vision_tower.convs.3.norm.bias": "model.safetensors",
1064
+ "vision_tower.convs.3.norm.weight": "model.safetensors",
1065
+ "vision_tower.convs.3.proj.bias": "model.safetensors",
1066
+ "vision_tower.convs.3.proj.weight": "model.safetensors",
1067
+ "visual_temporal_embed.pos_idx_to_embed": "model.safetensors"
1068
+ }
1069
+ }
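The map above assigns every vision-tower parameter, including the extra `.scales` and `.biases` tensors that MLX 4-bit quantization stores next to each `weight`, to the single `model.safetensors` shard. A minimal sketch for inspecting such an index file, assuming it has been downloaded locally as `model.safetensors.index.json` and follows the standard `weight_map` layout:

```python
import json
from collections import Counter

# Sketch: inspect a safetensors index file (assumed local path and standard layout).
with open("model.safetensors.index.json") as f:
    index = json.load(f)

weight_map = index["weight_map"]                       # parameter name -> shard file
print(len(weight_map), "parameters mapped")
print(Counter(weight_map.values()))                    # how many parameters per shard
quantized = [k for k in weight_map if k.endswith((".scales", ".biases"))]
print(len(quantized), "quantized scale/bias tensors")  # present because of 4-bit quantization
```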
preprocessor_config.json ADDED
@@ -0,0 +1,33 @@
1
+ {
2
+ "auto_map": {
3
+ "AutoProcessor": "processing_florence2.Florence2Processor"
4
+ },
5
+ "crop_size": {
6
+ "height": 768,
7
+ "width": 768
8
+ },
9
+ "do_center_crop": false,
10
+ "do_convert_rgb": null,
11
+ "do_normalize": true,
12
+ "do_rescale": true,
13
+ "do_resize": true,
14
+ "image_mean": [
15
+ 0.485,
16
+ 0.456,
17
+ 0.406
18
+ ],
19
+ "image_processor_type": "CLIPImageProcessor",
20
+ "image_seq_length": 577,
21
+ "image_std": [
22
+ 0.229,
23
+ 0.224,
24
+ 0.225
25
+ ],
26
+ "processor_class": "Florence2Processor",
27
+ "resample": 3,
28
+ "rescale_factor": 0.00392156862745098,
29
+ "size": {
30
+ "height": 768,
31
+ "width": 768
32
+ }
33
+ }
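The `preprocessor_config.json` above configures a `CLIPImageProcessor`: images are resized to 768×768, rescaled by 1/255, and normalized with the ImageNet mean and standard deviation. A small worked sketch of that per-channel arithmetic (the raw value 128 is just an example input):

```python
# Per-channel preprocessing arithmetic from preprocessor_config.json.
rescale_factor = 0.00392156862745098      # 1/255
image_mean = [0.485, 0.456, 0.406]
image_std = [0.229, 0.224, 0.225]

raw_red = 128                              # example 8-bit red channel value
rescaled = raw_red * rescale_factor        # ~0.502
normalized = (rescaled - image_mean[0]) / image_std[0]
print(round(normalized, 3))                # ~0.074
```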
processing_florence2.py ADDED
@@ -0,0 +1,1088 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ # coding=utf-8
2
+ # Copyright 2024 Microsoft and The HuggingFace Inc. team.
3
+ #
4
+ # Licensed under the Apache License, Version 2.0 (the "License");
5
+ # you may not use this file except in compliance with the License.
6
+ # You may obtain a copy of the License at
7
+ #
8
+ # http://www.apache.org/licenses/LICENSE-2.0
9
+ #
10
+ # Unless required by applicable law or agreed to in writing, software
11
+ # distributed under the License is distributed on an "AS IS" BASIS,
12
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13
+ # See the License for the specific language governing permissions and
14
+ # limitations under the License.
15
+ """
16
+ Processor class for Florence-2.
17
+ """
18
+
19
+ import re
20
+ import logging
21
+ from typing import List, Optional, Union
22
+ import numpy as np
23
+
24
+ import torch
25
+
26
+ from transformers.feature_extraction_utils import BatchFeature
27
+ from transformers.image_utils import ImageInput, is_valid_image
28
+ from transformers.processing_utils import ProcessorMixin
29
+ from transformers.tokenization_utils_base import (
30
+ PaddingStrategy,
31
+ PreTokenizedInput,
32
+ TextInput,
33
+ TruncationStrategy,
34
+ )
35
+ from transformers.utils import TensorType
+ from transformers import BartTokenizer, BartTokenizerFast, T5Tokenizer, T5TokenizerFast
36
+
37
+
38
+ logger = logging.getLogger(__name__)
39
+
40
+ # Copied from transformers.models.idefics2.processing_idefics2.is_url
41
+ def is_url(val) -> bool:
42
+ return isinstance(val, str) and val.startswith("http")
43
+
44
+ # Copied from transformers.models.idefics2.processing_idefics2.is_image_or_image_url
45
+ def is_image_or_image_url(elem):
46
+ return is_url(elem) or is_valid_image(elem)
47
+
48
+
49
+ def _is_str_or_image(elem):
50
+ return isinstance(elem, (str)) or is_image_or_image_url(elem)
51
+
52
+
53
+ class Florence2Processor(ProcessorMixin):
54
+ r"""
55
+ Constructs a Florence2 processor which wraps a Florence2 image processor and a Florence2 tokenizer into a single processor.
56
+
57
+ [`Florence2Processor`] offers all the functionalities of [`CLIPImageProcessor`] and [`BartTokenizerFast`]. See the
58
+ [`~Florence2Processor.__call__`] and [`~Florence2Processor.decode`] for more information.
59
+
60
+ Args:
61
+ image_processor ([`CLIPImageProcessor`], *optional*):
62
+ The image processor is a required input.
63
+ tokenizer ([`BartTokenizerFast`], *optional*):
64
+ The tokenizer is a required input.
65
+ """
66
+
67
+ attributes = ["image_processor", "tokenizer"]
68
+ image_processor_class = "CLIPImageProcessor"
69
+ tokenizer_class = ("BartTokenizer", "BartTokenizerFast")
70
+
71
+ def __init__(
72
+ self,
73
+ image_processor=None,
74
+ tokenizer=None,
75
+ ):
76
+ if image_processor is None:
77
+ raise ValueError("You need to specify an `image_processor`.")
78
+ if tokenizer is None:
79
+ raise ValueError("You need to specify a `tokenizer`.")
80
+ if not hasattr(image_processor, "image_seq_length"):
81
+ raise ValueError("Image processor is missing an `image_seq_length` attribute.")
82
+
83
+ self.image_seq_length = image_processor.image_seq_length
84
+
85
+ tokens_to_add = {
86
+ 'additional_special_tokens': \
87
+ tokenizer.additional_special_tokens + \
88
+ ['<od>', '</od>', '<ocr>', '</ocr>'] + \
89
+ [f'<loc_{x}>' for x in range(1000)] + \
90
+ ['<cap>', '</cap>', '<ncap>', '</ncap>','<dcap>', '</dcap>', '<grounding>', '</grounding>', '<seg>', '</seg>', '<sep>', '<region_cap>', '</region_cap>', '<region_to_desciption>', '</region_to_desciption>', '<proposal>', '</proposal>', '<poly>', '</poly>', '<and>']
91
+ }
92
+ tokenizer.add_special_tokens(tokens_to_add)
93
+
94
+ self.tasks_answer_post_processing_type = {
95
+ '<OCR>': 'pure_text',
96
+ '<OCR_WITH_REGION>': 'ocr',
97
+ '<CAPTION>': 'pure_text',
98
+ '<DETAILED_CAPTION>': 'pure_text',
99
+ '<MORE_DETAILED_CAPTION>': 'pure_text',
100
+ '<OD>': 'description_with_bboxes',
101
+ '<DENSE_REGION_CAPTION>': 'description_with_bboxes',
102
+ '<CAPTION_TO_PHRASE_GROUNDING>': "phrase_grounding",
103
+ '<REFERRING_EXPRESSION_SEGMENTATION>': 'polygons',
104
+ '<REGION_TO_SEGMENTATION>': 'polygons',
105
+ '<OPEN_VOCABULARY_DETECTION>': 'description_with_bboxes_or_polygons',
106
+ '<REGION_TO_CATEGORY>': 'pure_text',
107
+ '<REGION_TO_DESCRIPTION>': 'pure_text',
108
+ '<REGION_TO_OCR>': 'pure_text',
109
+ '<REGION_PROPOSAL>': 'bboxes'
110
+ }
111
+
112
+ self.task_prompts_without_inputs = {
113
+ '<OCR>': 'What is the text in the image?',
114
+ '<OCR_WITH_REGION>': 'What is the text in the image, with regions?',
115
+ '<CAPTION>': 'What does the image describe?',
116
+ '<DETAILED_CAPTION>': 'Describe in detail what is shown in the image.',
117
+ '<MORE_DETAILED_CAPTION>': 'Describe with a paragraph what is shown in the image.',
118
+ '<OD>': 'Locate the objects with category name in the image.',
119
+ '<DENSE_REGION_CAPTION>': 'Locate the objects in the image, with their descriptions.',
120
+ '<REGION_PROPOSAL>': 'Locate the region proposals in the image.'
121
+ }
122
+
123
+ self.task_prompts_with_input = {
124
+ '<CAPTION_TO_PHRASE_GROUNDING>': "Locate the phrases in the caption: {input}",
125
+ '<REFERRING_EXPRESSION_SEGMENTATION>': 'Locate {input} in the image with mask',
126
+ '<REGION_TO_SEGMENTATION>': 'What is the polygon mask of region {input}',
127
+ '<OPEN_VOCABULARY_DETECTION>': 'Locate {input} in the image.',
128
+ '<REGION_TO_CATEGORY>': 'What is the region {input}?',
129
+ '<REGION_TO_DESCRIPTION>': 'What does the region {input} describe?',
130
+ '<REGION_TO_OCR>': 'What text is in the region {input}?',
131
+ }
132
+
133
+ self.post_processor = Florence2PostProcesser(tokenizer=tokenizer)
134
+
135
+
136
+ super().__init__(image_processor, tokenizer)
137
+
138
+ def _construct_prompts(self, text):
139
+ # replace the task tokens with the task prompts if task token is in the text
140
+ prompts = []
141
+ for _text in text:
142
+ # 1. fixed task prompts without additional inputs
143
+ for task_token, task_prompt in self.task_prompts_without_inputs.items():
144
+ if task_token in _text:
145
+ assert _text == task_token, f"Task token {task_token} should be the only token in the text."
146
+ _text = task_prompt
147
+ break
148
+ # 2. task prompts with additional inputs
149
+ for task_token, task_prompt in self.task_prompts_with_input.items():
150
+ if task_token in _text:
151
+ _text = task_prompt.format(input=_text.replace(task_token, ''))
152
+ break
153
+ prompts.append(_text)
154
+ return prompts
155
+
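`_construct_prompts` swaps a task token for its canonical prompt: tokens without extra input must be the entire text, while tokens that take input have the remaining text substituted into a template. A standalone sketch of that mapping, with the dictionaries abbreviated from the ones defined above:

```python
# Sketch of the prompt construction performed by Florence2Processor._construct_prompts.
# The dictionaries here are abbreviated copies of the ones defined in __init__ above.
task_prompts_without_inputs = {"<CAPTION>": "What does the image describe?"}
task_prompts_with_input = {
    "<CAPTION_TO_PHRASE_GROUNDING>": "Locate the phrases in the caption: {input}",
}

def construct_prompt(text: str) -> str:
    for token, prompt in task_prompts_without_inputs.items():
        if token in text:
            assert text == token, f"{token} must be the only token in the text"
            return prompt
    for token, template in task_prompts_with_input.items():
        if token in text:
            return template.format(input=text.replace(token, ""))
    return text

print(construct_prompt("<CAPTION>"))
# -> "What does the image describe?"
print(construct_prompt("<CAPTION_TO_PHRASE_GROUNDING>a green car"))
# -> "Locate the phrases in the caption: a green car"
```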
156
+ def __call__(
157
+ self,
158
+ text: Union[TextInput, PreTokenizedInput, List[TextInput], List[PreTokenizedInput]] = None,
159
+ images: ImageInput = None,
160
+ tokenize_newline_separately: bool = True,
161
+ padding: Union[bool, str, PaddingStrategy] = False,
162
+ truncation: Union[bool, str, TruncationStrategy] = None,
163
+ max_length=None,
164
+ return_tensors: Optional[Union[str, TensorType]] = TensorType.PYTORCH,
165
+ do_resize: bool = None,
166
+ do_normalize: bool = None,
167
+ image_mean: Optional[Union[float, List[float]]] = None,
168
+ image_std: Optional[Union[float, List[float]]] = None,
169
+ data_format: Optional["ChannelDimension"] = "channels_first", # noqa: F821
170
+ input_data_format: Optional[
171
+ Union[str, "ChannelDimension"] # noqa: F821
172
+ ] = None,
173
+ resample: "PILImageResampling" = None, # noqa: F821
174
+ do_convert_rgb: bool = None,
175
+ do_thumbnail: bool = None,
176
+ do_align_long_axis: bool = None,
177
+ do_rescale: bool = None,
178
+ ) -> BatchFeature:
179
+ """
180
+ Main method to prepare one or several sequence(s) and image(s) for the model. This method forwards the `text`
181
+ and `kwargs` arguments to BartTokenizerFast's [`~BartTokenizerFast.__call__`] if `text` is not `None` to encode
182
+ the text. To prepare the image(s), this method forwards the `images` and `kwargs` arguments to
183
+ CLIPImageProcessor's [`~CLIPImageProcessor.__call__`] if `images` is not `None`. Please refer to the docstring
184
+ of the above two methods for more information.
185
+
186
+ Args:
187
+ text (`str`, `List[str]`, `List[List[str]]`):
188
+ The sequence or batch of sequences to be encoded. Each sequence can be a string or a list of strings
189
+ (pretokenized string). If the sequences are provided as list of strings (pretokenized), you must set
190
+ `is_split_into_words=True` (to lift the ambiguity with a batch of sequences).
191
+ images (`PIL.Image.Image`, `np.ndarray`, `torch.Tensor`, `List[PIL.Image.Image]`, `List[np.ndarray]`, `List[torch.Tensor]`):
192
+ The image or batch of images to be prepared. Each image can be a PIL image, NumPy array or PyTorch
193
+ tensor. In case of a NumPy array/PyTorch tensor, each image should be of shape (C, H, W), where C is a
194
+ number of channels, H and W are image height and width.
195
+ tokenize_newline_separately (`bool`, defaults to `True`):
196
+ Adds a separately tokenized '\n' at the end of the prompt.
197
+ padding (`bool`, `str` or [`~utils.PaddingStrategy`], *optional*, defaults to `False`):
198
+ Select a strategy to pad the returned sequences (according to the model's padding side and padding
199
+ index) among:
200
+ - `True` or `'longest'`: Pad to the longest sequence in the batch (or no padding if only a single
201
+ sequence is provided).
202
+ - `'max_length'`: Pad to a maximum length specified with the argument `max_length` or to the maximum
203
+ acceptable input length for the model if that argument is not provided.
204
+ - `False` or `'do_not_pad'` (default): No padding (i.e., can output a batch with sequences of different
205
+ lengths).
206
+ max_length (`int`, *optional*):
207
+ Maximum length of the returned list and optionally padding length (see above).
208
+ truncation (`bool`, *optional*):
209
+ Activates truncation to cut input sequences longer than `max_length` to `max_length`.
210
+ return_tensors (`str` or [`~utils.TensorType`], *optional*):
211
+ If set, will return tensors of a particular framework. Acceptable values are:
212
+
213
+ - `'tf'`: Return TensorFlow `tf.constant` objects.
214
+ - `'pt'`: Return PyTorch `torch.Tensor` objects.
215
+ - `'np'`: Return NumPy `np.ndarray` objects.
216
+ - `'jax'`: Return JAX `jnp.ndarray` objects.
217
+
218
+ Returns:
219
+ [`BatchFeature`]: A [`BatchFeature`] with the following fields:
220
+
221
+ - **input_ids** -- List of token ids to be fed to a model. Returned when `text` is not `None`. If `suffix`
222
+ is provided, the `input_ids` will also contain the suffix input ids.
223
+ - **attention_mask** -- List of indices specifying which tokens should be attended to by the model (when
224
+ `return_attention_mask=True` or if *"attention_mask"* is in `self.model_input_names` and if `text` is not
225
+ `None`).
226
+ - **pixel_values** -- Pixel values to be fed to a model. Returned when `images` is not `None`.
227
+ - **labels** -- Labels compatible with training if `suffix` is not None
228
+ """
229
+
230
+ return_token_type_ids = False
231
+
232
+ if images is None:
233
+ raise ValueError("`images` are expected as arguments to a `Florence2Processor` instance.")
234
+ if text is None:
235
+ logger.warning_once(
236
+ "You are using Florence-2 without a text prompt."
237
+ )
238
+ text = ""
239
+
240
+ if isinstance(text, List) and isinstance(images, List):
241
+ if len(images) < len(text):
242
+ raise ValueError(
243
+ f"Received {len(images)} images for {len(text)} prompts. Each prompt should be associated with an image."
244
+ )
245
+ if _is_str_or_image(text):
246
+ text = [text]
247
+ elif isinstance(text, list) and _is_str_or_image(text[0]):
248
+ pass
249
+
250
+ pixel_values = self.image_processor(
251
+ images,
252
+ do_resize=do_resize,
253
+ do_normalize=do_normalize,
254
+ return_tensors=return_tensors,
255
+ image_mean=image_mean,
256
+ image_std=image_std,
257
+ input_data_format=input_data_format,
258
+ data_format=data_format,
259
+ resample=resample,
260
+ do_convert_rgb=do_convert_rgb,
261
+ )["pixel_values"]
262
+
263
+ if max_length is not None:
264
+ max_length -= self.image_seq_length # max_length has to account for the image tokens
265
+
266
+ text = self._construct_prompts(text)
267
+
268
+ inputs = self.tokenizer(
269
+ text,
270
+ return_tensors=return_tensors,
271
+ padding=padding,
272
+ max_length=max_length,
273
+ truncation=truncation,
274
+ return_token_type_ids=return_token_type_ids,
275
+ )
276
+
277
+ return_data = {**inputs, "pixel_values": pixel_values}
278
+
279
+ if return_token_type_ids:
280
+ labels = inputs["input_ids"].masked_fill(inputs["token_type_ids"] == 0, -100)
281
+ return_data.update({"labels": labels})
282
+ return BatchFeature(data=return_data)
283
+
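With the `__call__` above, each prompt is paired with an image and the returned `BatchFeature` carries `input_ids`, `attention_mask`, and `pixel_values`. A hedged usage sketch with the `transformers` auto classes: the repo id is this upload's, `example.jpg` is a placeholder path, and `trust_remote_code=True` is assumed to be required because the processor ships as custom code via the `auto_map` entry above.

```python
from PIL import Image
from transformers import AutoProcessor

# Sketch only: loads the custom Florence2Processor defined in this file.
processor = AutoProcessor.from_pretrained(
    "mlx-community/Florence-2-base-ft-4bit", trust_remote_code=True
)
image = Image.open("example.jpg")          # placeholder image path

inputs = processor(text="<OD>", images=image, return_tensors="pt")
print(inputs["input_ids"].shape, inputs["pixel_values"].shape)
# "<OD>" is rewritten internally to "Locate the objects with category name in the image."
```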
284
+ # Copied from transformers.models.clip.processing_clip.CLIPProcessor.batch_decode with CLIP->Florence2
285
+ def batch_decode(self, *args, **kwargs):
286
+ """
287
+ This method forwards all its arguments to BartTokenizerFast's [`~PreTrainedTokenizer.batch_decode`]. Please
288
+ refer to the docstring of this method for more information.
289
+ """
290
+ return self.tokenizer.batch_decode(*args, **kwargs)
291
+
292
+ # Copied from transformers.models.clip.processing_clip.CLIPProcessor.decode with CLIP->Florence2
293
+ def decode(self, *args, **kwargs):
294
+ """
295
+ This method forwards all its arguments to BartTokenizerFast's [`~PreTrainedTokenizer.decode`]. Please refer to
296
+ the docstring of this method for more information.
297
+ """
298
+ return self.tokenizer.decode(*args, **kwargs)
299
+
300
+ @property
301
+ # Copied from transformers.models.clip.processing_clip.CLIPProcessor.model_input_names with CLIP->Florence2
302
+ def model_input_names(self):
303
+ tokenizer_input_names = self.tokenizer.model_input_names
304
+ image_processor_input_names = self.image_processor.model_input_names
305
+ return list(dict.fromkeys(tokenizer_input_names + image_processor_input_names))
306
+
307
+ def post_process_generation(self, text, task, image_size):
308
+ """
309
+ Post-process the output of the model to each of the task outputs.
310
+
311
+ Args:
312
+ text (`str`): The text to post-process.
313
+ task (`str`): The task to post-process the text for.
314
+ image_size (`Tuple[int, int]`): The size of the original image, as (width, height).
315
+ """
316
+
317
+ task_answer_post_processing_type = self.tasks_answer_post_processing_type.get(task, 'pure_text')
318
+ task_answer = self.post_processor(
319
+ text=text,
320
+ image_size=image_size,
321
+ parse_tasks=task_answer_post_processing_type,
322
+ )[task_answer_post_processing_type]
323
+
324
+ if task_answer_post_processing_type == 'pure_text':
325
+ final_answer = task_answer
326
+ # remove the special tokens
327
+ final_answer = final_answer.replace('<s>', '').replace('</s>', '')
328
+ elif task_answer_post_processing_type in ['od', 'description_with_bboxes', 'bboxes']:
329
+ od_instances = task_answer
330
+ bboxes_od = [_od_instance['bbox'] for _od_instance in od_instances]
331
+ labels_od = [str(_od_instance['cat_name']) for _od_instance in od_instances]
332
+ final_answer = {'bboxes': bboxes_od, 'labels': labels_od}
333
+ elif task_answer_post_processing_type in ['ocr']:
334
+ bboxes = [_od_instance['quad_box'] for _od_instance in task_answer]
335
+ labels = [str(_od_instance['text']) for _od_instance in task_answer]
336
+ final_answer = {'quad_boxes': bboxes, 'labels': labels}
337
+ elif task_answer_post_processing_type in ['phrase_grounding']:
338
+ bboxes = []
339
+ labels = []
340
+ for _grounded_phrase in task_answer:
341
+ for _bbox in _grounded_phrase['bbox']:
342
+ bboxes.append(_bbox)
343
+ labels.append(_grounded_phrase['cat_name'])
344
+ final_answer = {'bboxes': bboxes, 'labels': labels}
345
+ elif task_answer_post_processing_type in ['description_with_polygons', 'polygons']:
346
+ labels = []
347
+ polygons = []
348
+ for result in task_answer:
349
+ label = result['cat_name']
350
+ _polygons = result['polygons']
351
+ labels.append(label)
352
+ polygons.append(_polygons)
353
+ final_answer = {'polygons': polygons, 'labels': labels}
354
+ elif task_answer_post_processing_type in ['description_with_bboxes_or_polygons']:
355
+ bboxes = []
356
+ bboxes_labels = []
357
+ polygons = []
358
+ polygons_labels = []
359
+ for result in task_answer:
360
+ label = result['cat_name']
361
+ if 'polygons' in result:
362
+ _polygons = result['polygons']
363
+ polygons.append(_polygons)
364
+ polygons_labels.append(label)
365
+ else:
366
+ _bbox = result['bbox']
367
+ bboxes.append(_bbox)
368
+ bboxes_labels.append(label)
369
+ final_answer = {'bboxes': bboxes, 'bboxes_labels': bboxes_labels, 'polygons': polygons, 'polygons_labels': polygons_labels}
370
+ else:
371
+ raise ValueError('Unknown task answer post processing type: {}'.format(task_answer_post_processing_type))
372
+
373
+ final_answer = {
374
+ task: final_answer}
375
+ return final_answer
376
+
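For detection-style tasks the method above flattens the parsed instances into parallel `bboxes`/`labels` lists and keys the result by the task token. A minimal sketch of the reshaping done in the `description_with_bboxes` branch; the coordinates and labels are illustrative placeholders, not real model output:

```python
# Sketch of how post_process_generation reshapes "description_with_bboxes" results
# for a task such as "<OD>".
od_instances = [
    {"bbox": [34.2, 160.1, 597.9, 371.3], "cat_name": "car"},
    {"bbox": [272.3, 241.3, 303.7, 247.0], "cat_name": "door handle"},
]
final_answer = {
    "bboxes": [inst["bbox"] for inst in od_instances],
    "labels": [str(inst["cat_name"]) for inst in od_instances],
}
print({"<OD>": final_answer})
```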
377
+ class BoxQuantizer(object):
378
+ def __init__(self, mode, bins):
379
+ self.mode = mode
380
+ self.bins = bins
381
+
382
+ def quantize(self, boxes: torch.Tensor, size):
383
+ bins_w, bins_h = self.bins # Quantization bins.
384
+ size_w, size_h = size # Original image size.
385
+ size_per_bin_w = size_w / bins_w
386
+ size_per_bin_h = size_h / bins_h
387
+ xmin, ymin, xmax, ymax = boxes.split(1, dim=-1) # Shape: 4 * [N, 1].
388
+
389
+ if self.mode == 'floor':
390
+ quantized_xmin = (
391
+ xmin / size_per_bin_w).floor().clamp(0, bins_w - 1)
392
+ quantized_ymin = (
393
+ ymin / size_per_bin_h).floor().clamp(0, bins_h - 1)
394
+ quantized_xmax = (
395
+ xmax / size_per_bin_w).floor().clamp(0, bins_w - 1)
396
+ quantized_ymax = (
397
+ ymax / size_per_bin_h).floor().clamp(0, bins_h - 1)
398
+
399
+ elif self.mode == 'round':
400
+ raise NotImplementedError()
401
+
402
+ else:
403
+ raise ValueError('Incorrect quantization type.')
404
+
405
+ quantized_boxes = torch.cat(
406
+ (quantized_xmin, quantized_ymin, quantized_xmax, quantized_ymax), dim=-1
407
+ ).int()
408
+
409
+ return quantized_boxes
410
+
411
+ def dequantize(self, boxes: torch.Tensor, size):
412
+ bins_w, bins_h = self.bins # Quantization bins.
413
+ size_w, size_h = size # Original image size.
414
+ size_per_bin_w = size_w / bins_w
415
+ size_per_bin_h = size_h / bins_h
416
+ xmin, ymin, xmax, ymax = boxes.split(1, dim=-1) # Shape: 4 * [N, 1].
417
+
418
+ if self.mode == 'floor':
419
+ # Add 0.5 to use the center position of the bin as the coordinate.
420
+ dequantized_xmin = (xmin + 0.5) * size_per_bin_w
421
+ dequantized_ymin = (ymin + 0.5) * size_per_bin_h
422
+ dequantized_xmax = (xmax + 0.5) * size_per_bin_w
423
+ dequantized_ymax = (ymax + 0.5) * size_per_bin_h
424
+
425
+ elif self.mode == 'round':
426
+ raise NotImplementedError()
427
+
428
+ else:
429
+ raise ValueError('Incorrect quantization type.')
430
+
431
+ dequantized_boxes = torch.cat(
432
+ (dequantized_xmin, dequantized_ymin,
433
+ dequantized_xmax, dequantized_ymax), dim=-1
434
+ )
435
+
436
+ return dequantized_boxes
437
+
438
+
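The `BoxQuantizer` above and the `CoordinatesQuantizer` that follows map pixel coordinates to and from the 1000 `<loc_*>` bins used by the location tokens. A worked sketch of the default `floor` mode on a 768×768 image, showing that dequantization lands on the bin centre, within half a bin of the original value:

```python
import torch

# Worked sketch of 'floor'-mode quantization with 1000x1000 bins on a 768x768 image.
bins_w, bins_h = 1000, 1000
size_w, size_h = 768, 768
per_bin = torch.tensor([size_w / bins_w, size_h / bins_h,
                        size_w / bins_w, size_h / bins_h])   # 0.768 pixels per bin

box = torch.tensor([[100.0, 200.0, 300.0, 400.0]])           # xmin, ymin, xmax, ymax
quantized = (box / per_bin).floor().clamp(0, bins_w - 1).int()
print(quantized)          # [[130, 260, 390, 520]] -> emitted as <loc_130><loc_260>...

# Dequantization adds 0.5 to use the bin centre, so the round trip is within half a bin:
dequantized = (quantized.float() + 0.5) * per_bin
print(dequantized)        # [[100.224, 200.064, 299.904, 399.744]]
```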
439
+ class CoordinatesQuantizer(object):
440
+ """
441
+ Quantize coordinates (Nx2)
442
+ """
443
+
444
+ def __init__(self, mode, bins):
445
+ self.mode = mode
446
+ self.bins = bins
447
+
448
+ def quantize(self, coordinates: torch.Tensor, size):
449
+ bins_w, bins_h = self.bins # Quantization bins.
450
+ size_w, size_h = size # Original image size.
451
+ size_per_bin_w = size_w / bins_w
452
+ size_per_bin_h = size_h / bins_h
453
+ assert coordinates.shape[-1] == 2, 'coordinates should be shape (N, 2)'
454
+ x, y = coordinates.split(1, dim=-1) # Shape: 2 * [N, 1].
455
+
456
+ if self.mode == 'floor':
457
+ quantized_x = (x / size_per_bin_w).floor().clamp(0, bins_w - 1)
458
+ quantized_y = (y / size_per_bin_h).floor().clamp(0, bins_h - 1)
459
+
460
+ elif self.mode == 'round':
461
+ raise NotImplementedError()
462
+
463
+ else:
464
+ raise ValueError('Incorrect quantization type.')
465
+
466
+ quantized_coordinates = torch.cat(
467
+ (quantized_x, quantized_y), dim=-1
468
+ ).int()
469
+
470
+ return quantized_coordinates
471
+
472
+ def dequantize(self, coordinates: torch.Tensor, size):
473
+ bins_w, bins_h = self.bins # Quantization bins.
474
+ size_w, size_h = size # Original image size.
475
+ size_per_bin_w = size_w / bins_w
476
+ size_per_bin_h = size_h / bins_h
477
+ assert coordinates.shape[-1] == 2, 'coordinates should be shape (N, 2)'
478
+ x, y = coordinates.split(1, dim=-1) # Shape: 2 * [N, 1].
479
+
480
+ if self.mode == 'floor':
481
+ # Add 0.5 to use the center position of the bin as the coordinate.
482
+ dequantized_x = (x + 0.5) * size_per_bin_w
483
+ dequantized_y = (y + 0.5) * size_per_bin_h
484
+
485
+ elif self.mode == 'round':
486
+ raise NotImplementedError()
487
+
488
+ else:
489
+ raise ValueError('Incorrect quantization type.')
490
+
491
+ dequantized_coordinates = torch.cat(
492
+ (dequantized_x, dequantized_y), dim=-1
493
+ )
494
+
495
+ return dequantized_coordinates
496
+
497
+
498
+ class Florence2PostProcesser(object):
499
+ """
500
+ Florence-2 post processor for converting text predictions into the various task results.
501
+
502
+ Args:
503
+ config: A dict of configs.
504
+ tokenizer: A tokenizer for decoding text to spans.
505
+ sample config:
506
+ UNIFIED_POST_PROCESS:
507
+ # common configs
508
+ NUM_BBOX_HEIGHT_BINS: 1000
509
+ NUM_BBOX_WIDTH_BINS: 1000
510
+ COORDINATES_HEIGHT_BINS: 1000
511
+ COORDINATES_WIDTH_BINS: 1000
512
+ # task specific configs, override the common configs
513
+ PARSE_TASKS:
514
+ - TASK_NAME: 'video_dense_caption'
515
+ PATTERN: 'r<time_(\d+)><time_(\d+)>([a-zA-Z0-9 ]+)'
516
+ SCORE_MODE: 'avg_cat_name_scores'
517
+ NUM_BINS: 100
518
+ - TASK_NAME: 'od'
519
+ PATTERN: 'r<loc_(\d+)><loc_(\d+)><loc_(\d+)><loc_(\d+)>([a-zA-Z0-9 ]+)'
520
+ SCORE_MODE: 'avg_cat_name_scores'
521
+
522
+ Returns:
523
+ parsed_dict (dict): A dict of parsed results.
524
+ """
525
+ def __init__(
526
+ self,
527
+ tokenizer=None
528
+ ):
529
+ parse_tasks = []
530
+ parse_task_configs = {}
531
+ config = self._create_default_config()
532
+ for task in config['PARSE_TASKS']:
533
+ parse_tasks.append(task['TASK_NAME'])
534
+ parse_task_configs[task['TASK_NAME']] = task
535
+
536
+ self.config = config
537
+ self.parse_tasks = parse_tasks
538
+ self.parse_tasks_configs = parse_task_configs
539
+
540
+ self.tokenizer = tokenizer
541
+ if self.tokenizer is not None:
542
+ self.all_special_tokens = set(self.tokenizer.all_special_tokens)
543
+
544
+ self.init_quantizers()
545
+ self.black_list_of_phrase_grounding = self._create_black_list_of_phrase_grounding()
546
+
547
+ def _create_black_list_of_phrase_grounding(self):
548
+ black_list = {}
549
+
550
+ if 'phrase_grounding' in self.parse_tasks and self.parse_tasks_configs['phrase_grounding']['FILTER_BY_BLACK_LIST']:
551
+ black_list = set(
552
+ ['it', 'I', 'me', 'mine',
553
+ 'you', 'your', 'yours',
554
+ 'he', 'him', 'his',
555
+ 'she', 'her', 'hers',
556
+ 'they', 'them', 'their', 'theirs',
557
+ 'one', 'oneself',
558
+ 'we', 'us', 'our', 'ours',
559
+ 'you', 'your', 'yours',
560
+ 'they', 'them', 'their', 'theirs',
561
+ 'mine', 'yours', 'his', 'hers', 'its',
562
+ 'ours', 'yours', 'theirs',
563
+ 'myself', 'yourself', 'himself', 'herself', 'itself',
564
+ 'ourselves', 'yourselves', 'themselves',
565
+ 'this', 'that',
566
+ 'these', 'those',
567
+ 'who', 'whom', 'whose', 'which', 'what',
568
+ 'who', 'whom', 'whose', 'which', 'that',
569
+ 'all', 'another', 'any', 'anybody', 'anyone', 'anything',
570
+ 'each', 'everybody', 'everyone', 'everything',
571
+ 'few', 'many', 'nobody', 'none', 'one', 'several',
572
+ 'some', 'somebody', 'someone', 'something',
573
+ 'each other', 'one another',
574
+ 'myself', 'yourself', 'himself', 'herself', 'itself',
575
+ 'ourselves', 'yourselves', 'themselves',
576
+ 'the image', 'image', 'images', 'the', 'a', 'an', 'a group',
577
+ 'other objects', 'lots', 'a set',
578
+ ]
579
+ )
580
+
581
+ return black_list
582
+
583
+ def _create_default_config(self):
584
+ config = {
585
+ 'NUM_BBOX_HEIGHT_BINS': 1000,
586
+ 'NUM_BBOX_WIDTH_BINS': 1000,
587
+ 'BOX_QUANTIZATION_MODE': 'floor',
588
+ 'COORDINATES_HEIGHT_BINS': 1000,
589
+ 'COORDINATES_WIDTH_BINS': 1000,
590
+ 'COORDINATES_QUANTIZATION_MODE': 'floor',
591
+ 'PARSE_TASKS': [
592
+ {
593
+ 'TASK_NAME': 'od',
594
+ 'PATTERN': r'([a-zA-Z0-9 ]+)<loc_(\d+)><loc_(\d+)><loc_(\d+)><loc_(\d+)>'
595
+ },
596
+ {
597
+ 'TASK_NAME': 'ocr',
598
+ 'PATTERN': r'(.+?)<loc_(\d+)><loc_(\d+)><loc_(\d+)><loc_(\d+)><loc_(\d+)><loc_(\d+)><loc_(\d+)><loc_(\d+)>',
599
+ 'AREA_THRESHOLD': 0.00
600
+ },
601
+ {
602
+ 'TASK_NAME': 'phrase_grounding',
603
+ 'FILTER_BY_BLACK_LIST': True
604
+ },
605
+ {
606
+ 'TASK_NAME': 'pure_text',
607
+ },
608
+ {
609
+ 'TASK_NAME': 'description_with_bboxes',
610
+ },
611
+ {
612
+ 'TASK_NAME': 'description_with_polygons',
613
+ },
614
+ {
615
+ 'TASK_NAME': 'polygons',
616
+ },
617
+ {
618
+ 'TASK_NAME': 'bboxes',
619
+ },
620
+ {
621
+ 'TASK_NAME': 'description_with_bboxes_or_polygons',
622
+ }
623
+ ]
624
+ }
625
+
626
+ return config
627
+
628
+ def init_quantizers(self):
629
+ # we have box_quantizer (od, grounding) and coordinates_quantizer (ocr, referring_segmentation)
630
+ num_bbox_height_bins = self.config.get('NUM_BBOX_HEIGHT_BINS', 1000)
631
+ num_bbox_width_bins = self.config.get('NUM_BBOX_WIDTH_BINS', 1000)
632
+ box_quantization_mode = self.config.get('BOX_QUANTIZATION_MODE', 'floor')
633
+ self.box_quantizer = BoxQuantizer(
634
+ box_quantization_mode,
635
+ (num_bbox_width_bins, num_bbox_height_bins),
636
+ )
637
+
638
+ num_bbox_height_bins = self.config['COORDINATES_HEIGHT_BINS'] if 'COORDINATES_HEIGHT_BINS' in self.config else self.config.get('NUM_BBOX_HEIGHT_BINS', 1000)
639
+ num_bbox_width_bins = self.config['COORDINATES_WIDTH_BINS'] if 'COORDINATES_WIDTH_BINS' in self.config else self.config.get('NUM_BBOX_WIDTH_BINS', 1000)
640
+ box_quantization_mode = self.config.get('COORDINATES_QUANTIZATION_MODE') if 'COORDINATES_QUANTIZATION_MODE' in self.config else self.config.get('BOX_QUANTIZATION_MODE', 'floor')
641
+ self.coordinates_quantizer = CoordinatesQuantizer(
642
+ box_quantization_mode,
643
+ (num_bbox_width_bins, num_bbox_height_bins),
644
+ )
645
+
646
+ def decode_with_spans(self, tokenizer, token_ids):
647
+ filtered_tokens = tokenizer.convert_ids_to_tokens(
648
+ token_ids, skip_special_tokens=False)
649
+ assert len(filtered_tokens) == len(token_ids)
650
+
651
+ # To avoid mixing byte-level and unicode for byte-level BPE
652
+ # we need to build string separately for added tokens and byte-level tokens
653
+ # cf. https://github.com/huggingface/transformers/issues/1133
654
+ sub_texts = []
655
+ for token in filtered_tokens:
656
+ if token in self.all_special_tokens:
657
+ sub_texts.append(token)
658
+ else:
659
+ if isinstance(tokenizer, (BartTokenizer, BartTokenizerFast)):
660
+ sub_text = tokenizer.convert_tokens_to_string([token])
661
+ elif isinstance(tokenizer, (T5Tokenizer, T5TokenizerFast)):
662
+ # Ref: https://github.com/google/sentencepiece#whitespace-is-treated-as-a-basic-symbol
663
+ # Note: Do not strip sub_text as it may have functional whitespace
664
+ sub_text = token.replace('▁', ' ')
665
+ else:
666
+ raise ValueError(f'type {type(tokenizer)} not supported')
667
+ sub_texts.append(sub_text)
668
+
669
+ text = ''
670
+ spans = []
671
+ for sub_text in sub_texts:
672
+ span = (len(text), len(text) + len(sub_text)) # [start index, end index).
673
+ text += sub_text
674
+ spans.append(span)
675
+
676
+ # Text format:
677
+ # 1. T5Tokenizer/T5TokenizerFast:
678
+ # "<loc_1><loc_2><loc_3><loc_4> transplanting dog<loc_1><loc_2><loc_3><loc_4> cat</s>"
679
+ # Equivalent to t5_tokenizer.decode(input_ids, skip_special_tokens=False, clean_up_tokenization_spaces=False, spaces_between_special_tokens=False)
680
+ # 2. BartTokenizer (need to double check):
681
+ # "<s><loc_1><loc_2><loc_3><loc_4>transplanting dog<loc_1><loc_2><loc_3><loc_4>cat</s>"
682
+ # Equivalent to bart_tokenizer.decode(input_ids, skip_special_tokens=False, clean_up_tokenization_spaces=False, spaces_between_special_tokens=False)
683
+ return text, spans
684
+
685
+ def parse_od_from_text_and_spans(
686
+ self,
687
+ text,
688
+ pattern,
689
+ image_size,
690
+ phrase_centric=False
691
+ ):
692
+ parsed = list(re.finditer(pattern, text))
693
+
694
+ instances = []
695
+ for i in range(len(parsed)):
696
+ # Prepare instance.
697
+ instance = {}
698
+
699
+ if phrase_centric:
700
+ bbox_bins = [int(parsed[i].group(j)) for j in range(2, 6)]
701
+ else:
702
+ bbox_bins = [int(parsed[i].group(j)) for j in range(1, 5)]
703
+ instance['bbox'] = self.box_quantizer.dequantize(
704
+ boxes=torch.tensor(bbox_bins),
705
+ size=image_size
706
+ ).tolist()
707
+
708
+ if phrase_centric:
709
+ instance['cat_name'] = parsed[i].group(1).lower().strip()
710
+ else:
711
+ instance['cat_name'] = parsed[i].group(5).lower().strip()
712
+ instances.append(instance)
713
+
714
+ return instances
715
+
716
+ def parse_ocr_from_text_and_spans(self,
717
+ text,
718
+ pattern,
719
+ image_size,
720
+ area_threshold=-1.0,
721
+ ):
722
+ bboxes = []
723
+ labels = []
724
+ text = text.replace('<s>', '')
725
+ # ocr with regions
726
+ parsed = re.findall(pattern, text)
727
+ instances = []
728
+ image_width, image_height = image_size
729
+
730
+ for ocr_line in parsed:
731
+ ocr_content = ocr_line[0]
732
+ quad_box = ocr_line[1:]
733
+ quad_box = [int(i) for i in quad_box]
734
+ quad_box = self.coordinates_quantizer.dequantize(
735
+ torch.tensor(np.array(quad_box).reshape(-1, 2)),
736
+ size=image_size
737
+ ).reshape(-1).tolist()
738
+
739
+ if area_threshold > 0:
740
+ x_coords = [i for i in quad_box[0::2]]
741
+ y_coords = [i for i in quad_box[1::2]]
742
+
743
+ # apply the Shoelace formula
744
+ area = 0.5 * abs(sum(x_coords[i] * y_coords[i + 1] - x_coords[i + 1] * y_coords[i] for i in range(4 - 1)))
745
+
746
+ if area < (image_width * image_height) * area_threshold:
747
+ continue
748
+
749
+ bboxes.append(quad_box)
750
+ labels.append(ocr_content)
751
+ instances.append({
752
+ 'quad_box': quad_box,
753
+ 'text': ocr_content,
754
+ })
755
+ return instances
756
+
757
+ def parse_phrase_grounding_from_text_and_spans(self, text, pattern, image_size):
758
+ # ignore <s> </s> and <pad>
759
+ cur_span = 0
760
+ if text.startswith('<s>'):
761
+ cur_span += 3
762
+
763
+ text = text.replace('<s>', '')
764
+ text = text.replace('</s>', '')
765
+ text = text.replace('<pad>', '')
766
+
767
+ pattern = r"([^<]+(?:<loc_\d+>){4,})"
768
+ phrases = re.findall(pattern, text)
769
+
770
+ # pattern should be text pattern and od pattern
771
+ pattern = r'^\s*(.*?)(?=<od>|</od>|<box>|</box>|<bbox>|</bbox>|<loc_)'
772
+ box_pattern = r'<loc_(\d+)><loc_(\d+)><loc_(\d+)><loc_(\d+)>'
773
+
774
+ instances = []
775
+ for pharse_text in phrases:
776
+ phrase_text_strip = pharse_text.replace('<ground>', '', 1)
777
+ phrase_text_strip = phrase_text_strip.replace('<obj>', '', 1)
778
+
779
+ if phrase_text_strip == '':
780
+ cur_span += len(pharse_text)
781
+ continue
782
+
783
+ # Prepare instance.
784
+ instance = {}
785
+
786
+ # parse phrase, get string
787
+ phrase = re.search(pattern, phrase_text_strip)
788
+ if phrase is None:
789
+ cur_span += len(pharse_text)
790
+ continue
791
+
792
+ # parse bboxes by box_pattern
793
+ bboxes_parsed = list(re.finditer(box_pattern, pharse_text))
794
+ if len(bboxes_parsed) == 0:
795
+ cur_span += len(pharse_text)
796
+ continue
797
+
798
+ phrase = phrase.group()
799
+ # remove leading and trailing spaces
800
+ phrase = phrase.strip()
801
+
802
+ if phrase in self.black_list_of_phrase_grounding:
803
+ cur_span += len(pharse_text)
804
+ continue
805
+
806
+ # a list of list
807
+ bbox_bins = [[int(_bboxes_parsed.group(j)) for j in range(1, 5)] for _bboxes_parsed in bboxes_parsed]
808
+ instance['bbox'] = self.box_quantizer.dequantize(
809
+ boxes=torch.tensor(bbox_bins),
810
+ size=image_size
811
+ ).tolist()
812
+
813
+ # exclude non-ascii characters
814
+ phrase = phrase.encode('ascii',errors='ignore').decode('ascii')
815
+ instance['cat_name'] = phrase
816
+
817
+ instances.append(instance)
818
+
819
+ return instances
820
+
821
+ def parse_description_with_bboxes_from_text_and_spans(self, text, pattern, image_size, allow_empty_phrase=False):
822
+ # temporary parse solution, split by '.'
823
+ # ignore <s> </s> and <pad>
824
+
825
+ text = text.replace('<s>', '')
826
+ text = text.replace('</s>', '')
827
+ text = text.replace('<pad>', '')
828
+
829
+ if allow_empty_phrase:
830
+ pattern = rf"(?:(?:<loc_\d+>){{4,}})"
831
+ else:
832
+ pattern = r"([^<]+(?:<loc_\d+>){4,})"
833
+ phrases = re.findall(pattern, text)
834
+
835
+ # pattern should be text pattern and od pattern
836
+ pattern = r'^\s*(.*?)(?=<od>|</od>|<box>|</box>|<bbox>|</bbox>|<loc_)'
837
+ box_pattern = r'<loc_(\d+)><loc_(\d+)><loc_(\d+)><loc_(\d+)>'
838
+
839
+ instances = []
840
+ for pharse_text in phrases:
841
+ phrase_text_strip = pharse_text.replace('<ground>', '', 1)
842
+ phrase_text_strip = phrase_text_strip.replace('<obj>', '', 1)
843
+
844
+ if phrase_text_strip == '' and not allow_empty_phrase:
845
+ continue
846
+
847
+ # parse phrase, get string
848
+ phrase = re.search(pattern, phrase_text_strip)
849
+ if phrase is None:
850
+ continue
851
+
852
+ phrase = phrase.group()
853
+ # remove leading and trailing spaces
854
+ phrase = phrase.strip()
855
+
856
+ # parse bboxes by box_pattern
857
+ bboxes_parsed = list(re.finditer(box_pattern, pharse_text))
858
+ if len(bboxes_parsed) == 0:
859
+ continue
860
+
861
+ # a list of list
862
+ bbox_bins = [[int(_bboxes_parsed.group(j)) for j in range(1, 5)] for _bboxes_parsed in bboxes_parsed]
863
+
864
+ bboxes = self.box_quantizer.dequantize(
865
+ boxes=torch.tensor(bbox_bins),
866
+ size=image_size
867
+ ).tolist()
868
+
869
+ phrase = phrase.encode('ascii',errors='ignore').decode('ascii')
870
+ for _bboxes in bboxes:
871
+ # Prepare instance.
872
+ instance = {}
873
+ instance['bbox'] = _bboxes
874
+ # exclude non-ascii characters
875
+ instance['cat_name'] = phrase
876
+ instances.append(instance)
877
+
878
+ return instances
879
+
880
+ def parse_description_with_polygons_from_text_and_spans(self, text, pattern, image_size,
881
+ allow_empty_phrase=False,
882
+ polygon_sep_token='<sep>',
883
+ polygon_start_token='<poly>',
884
+ polygon_end_token='</poly>',
885
+ with_box_at_start=False,
886
+ ):
887
+
888
+ # ref_seg format: '<expression><x1><y1><x2><y2><><><sep><><><><>'
889
+ # ignore <s> </s> and <pad>
890
+
891
+ text = text.replace('<s>', '')
892
+ text = text.replace('</s>', '')
893
+ text = text.replace('<pad>', '')
894
+
895
+ if allow_empty_phrase:
896
+ pattern = rf"(?:(?:<loc_\d+>|{re.escape(polygon_sep_token)}|{re.escape(polygon_start_token)}|{re.escape(polygon_end_token)}){{4,}})"
897
+ else:
898
+ # [^<]+: This part matches one or more characters that are not the < symbol.
899
+ # The ^ inside the square brackets [] is a negation, meaning it matches anything except <.
900
+ #
901
+ pattern = rf"([^<]+(?:<loc_\d+>|{re.escape(polygon_sep_token)}|{re.escape(polygon_start_token)}|{re.escape(polygon_end_token)}){{4,}})"
902
+ phrases = re.findall(pattern, text)
903
+
904
+ phrase_string_pattern = r'^\s*(.*?)(?=<od>|</od>|<box>|</box>|<bbox>|</bbox>|<loc_|<poly>)'
905
+ box_pattern = rf'((?:<loc_\d+>)+)(?:{re.escape(polygon_sep_token)}|$)'
906
+
907
+ # one polygons instance is separated by polygon_start_token and polygon_end_token
908
+ polygons_instance_pattern = rf'{re.escape(polygon_start_token)}(.*?){re.escape(polygon_end_token)}'
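+ # Per matched phrase chunk: polygons_instance_pattern first splits out each
+ # <poly>...</poly> instance (when those tokens are present), and box_pattern then
+ # yields every run of <loc_*> tokens (one polygon ring) terminated by <sep> or by
+ # the end of the instance.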
909
+
910
+ instances = []
911
+ for phrase_text in phrases:
912
+
913
+ # strip any leading 'loc_\d+>' fragment left over from the previous match
914
+ # (token spans would be needed here to also recover a category score)
915
+ phrase_text_strip = re.sub(r'^loc_\d+>', '', phrase_text, count=1)
916
+
917
+ # phrase = phrase.replace('<poly>', '')
918
+ # phrase = phrase.replace('poly>', '')
919
+
920
+ if phrase_text_strip == '' and not allow_empty_phrase:
921
+ continue
922
+
923
+
924
+ # parse phrase, get string
925
+ phrase = re.search(phrase_string_pattern, phrase_text_strip)
926
+ if phrase is None:
927
+ continue
928
+ phrase = phrase.group()
929
+ # remove leading and trailing spaces
930
+ phrase = phrase.strip()
931
+
932
+ # parse bboxes by box_pattern
933
+
934
+ # split by polygon_start_token and polygon_end_token first using polygons_instance_pattern
935
+ if polygon_start_token in phrase_text and polygon_end_token in phrase_text:
936
+ polygons_instances_parsed = list(re.finditer(polygons_instance_pattern, phrase_text))
937
+ else:
938
+ polygons_instances_parsed = [phrase_text]
939
+
940
+ for _polygons_instances_parsed in polygons_instances_parsed:
941
+ # Prepare instance.
942
+ instance = {}
943
+
944
+ # polygons_parsed= list(re.finditer(box_pattern, phrase_text))
945
+ if isinstance(_polygons_instances_parsed, str):
946
+ polygons_parsed = list(re.finditer(box_pattern, _polygons_instances_parsed))
947
+ else:
948
+ polygons_parsed = list(re.finditer(box_pattern, _polygons_instances_parsed.group(1)))
949
+ if len(polygons_parsed) == 0:
950
+ continue
951
+
952
+ # a list of list (polygon)
953
+ bbox = []
954
+ polygons = []
955
+ for _polygon_parsed in polygons_parsed:
956
+ # group 1: the consecutive <loc_\d+> tokens that make up one polygon
957
+ _polygon = _polygon_parsed.group(1)
958
+ # parse into list of int
959
+ _polygon = [int(_loc_parsed.group(1)) for _loc_parsed in re.finditer(r'<loc_(\d+)>', _polygon)]
960
+ if with_box_at_start and len(bbox) == 0:
961
+ if len(_polygon) > 4:
962
+ # the first four location bins are the predicted bbox
963
+ bbox = _polygon[:4]
964
+ _polygon = _polygon[4:]
965
+ else:
966
+ bbox = [0, 0, 0, 0]  # no valid bbox prediction
967
+ # abandon last element if is not paired
968
+ if len(_polygon) % 2 == 1:
969
+ _polygon = _polygon[:-1]
970
+
971
+ # reshape into (n, 2)
972
+ _polygon = self.coordinates_quantizer.dequantize(
973
+ torch.tensor(np.array(_polygon).reshape(-1, 2)),
974
+ size=image_size
975
+ ).reshape(-1).tolist()
976
+ # reshape back
977
+ polygons.append(_polygon)
978
+
979
+ instance['cat_name'] = phrase
980
+ instance['polygons'] = polygons
981
+ if len(bbox) != 0:
982
+ instance['bbox'] = self.box_quantizer.dequantize(
983
+ boxes=torch.tensor([bbox]),
984
+ size=image_size
985
+ ).tolist()[0]
986
+
987
+ instances.append(instance)
988
+
989
+ return instances
990
+
991
+ def __call__(
992
+ self,
993
+ text=None,
994
+ image_size=None,
995
+ parse_tasks=None,
996
+ ):
997
+ """
998
+ Args:
999
+ text: model outputs
1000
+ image_size: (width, height)
1001
+ parse_tasks: a list of tasks to parse, if None, parse all tasks.
1002
+
1003
+ """
1004
+ if parse_tasks is not None:
1005
+ if isinstance(parse_tasks, str):
1006
+ parse_tasks = [parse_tasks]
1007
+ for _parse_task in parse_tasks:
1008
+ assert _parse_task in self.parse_tasks, f'parse task {_parse_task} not supported'
1009
+
1010
+ # text must be provided
1011
+ assert text is not None, 'text should be provided'
1012
+
1013
+ parsed_dict = {
1014
+ 'text': text
1015
+ }
1016
+
1017
+ for task in self.parse_tasks:
1018
+ if parse_tasks is not None and task not in parse_tasks:
1019
+ continue
1020
+
1021
+ pattern = self.parse_tasks_configs[task].get('PATTERN', None)
1022
+
1023
+ if task == 'ocr':
1024
+ instances = self.parse_ocr_from_text_and_spans(
1025
+ text,
1026
+ pattern=pattern,
1027
+ image_size=image_size,
1028
+ area_threshold=self.parse_tasks_configs[task].get('AREA_THRESHOLD', 0.0),
1029
+ )
1030
+ parsed_dict['ocr'] = instances
1031
+ elif task == 'phrase_grounding':
1032
+ instances = self.parse_phrase_grounding_from_text_and_spans(
1033
+ text,
1034
+ pattern=pattern,
1035
+ image_size=image_size,
1036
+ )
1037
+ parsed_dict['phrase_grounding'] = instances
1038
+ elif task == 'pure_text':
1039
+ parsed_dict['pure_text'] = text
1040
+ elif task == 'description_with_bboxes':
1041
+ instances = self.parse_description_with_bboxes_from_text_and_spans(
1042
+ text,
1043
+ pattern=pattern,
1044
+ image_size=image_size,
1045
+ )
1046
+ parsed_dict['description_with_bboxes'] = instances
1047
+ elif task == 'description_with_polygons':
1048
+ instances = self.parse_description_with_polygons_from_text_and_spans(
1049
+ text,
1050
+ pattern=pattern,
1051
+ image_size=image_size,
1052
+ )
1053
+ parsed_dict['description_with_polygons'] = instances
1054
+ elif task == 'polygons':
1055
+ instances = self.parse_description_with_polygons_from_text_and_spans(
1056
+ text,
1057
+ pattern=pattern,
1058
+ image_size=image_size,
1059
+ allow_empty_phrase=True,
1060
+ )
1061
+ parsed_dict['polygons'] = instances
1062
+ elif task == 'bboxes':
1063
+ instances = self.parse_description_with_bboxes_from_text_and_spans(
1064
+ text,
1065
+ pattern=pattern,
1066
+ image_size=image_size,
1067
+ allow_empty_phrase=True,
1068
+ )
1069
+ parsed_dict['bboxes'] = instances
1070
+ elif task == 'description_with_bboxes_or_polygons':
1071
+ if '<poly>' in text:
1072
+ # only support either polygons or bboxes, not both at the same time
1073
+ instances = self.parse_description_with_polygons_from_text_and_spans(
1074
+ text,
1075
+ pattern=pattern,
1076
+ image_size=image_size,
1077
+ )
1078
+ else:
1079
+ instances = self.parse_description_with_bboxes_from_text_and_spans(
1080
+ text,
1081
+ pattern=pattern,
1082
+ image_size=image_size,
1083
+ )
1084
+ parsed_dict['description_with_bboxes_or_polygons'] = instances
1085
+ else:
1086
+ raise ValueError("task {} is not supported".format(task))
1087
+
1088
+ return parsed_dict
processor_config.json ADDED
@@ -0,0 +1,6 @@
1
+ {
2
+ "auto_map": {
3
+ "AutoProcessor": "processing_florence2.Florence2Processor"
4
+ },
5
+ "processor_class": "Florence2Processor"
6
+ }
special_tokens_map.json ADDED
The diff for this file is too large to render. See raw diff
 
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
tokenizer_config.json ADDED
The diff for this file is too large to render. See raw diff
 
vocab.json ADDED
The diff for this file is too large to render. See raw diff