hf-transformers-bot committed
Commit 28b0579 · verified · 1 parent: 9254bb1

Upload 2026-04-15/runs/7289-24451435462/ci_results_run_models_gpu/model_results.json with huggingface_hub
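The uploader itself is not part of this commit; for context, here is a minimal sketch of how a file like this can be pushed with `huggingface_hub`. The `repo_id` and `repo_type` below are placeholders, not the bot's actual target, and token handling is left to the default login:

```python
# Minimal sketch of the kind of upload named in the commit message above.
# repo_id is a placeholder; HfApi picks up the token from HF_TOKEN or the local login.
from huggingface_hub import HfApi

api = HfApi()
api.upload_file(
    path_or_fileobj="ci_results_run_models_gpu/model_results.json",
    path_in_repo="2026-04-15/runs/7289-24451435462/ci_results_run_models_gpu/model_results.json",
    repo_id="some-org/ci-reports",  # placeholder, not the real destination repo
    repo_type="dataset",            # assumption; the actual repo type is not shown here
)
```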

2026-04-15/runs/7289-24451435462/ci_results_run_models_gpu/model_results.json ADDED
@@ -0,0 +1,1902 @@
{
  "models_auto": {
    "failed": {
      "PyTorch": {"unclassified": 0, "single": 0, "multi": 0},
      "Tokenizers": {"unclassified": 0, "single": 1, "multi": 1},
      "Pipelines": {"unclassified": 0, "single": 0, "multi": 0},
      "Trainer": {"unclassified": 0, "single": 0, "multi": 0},
      "ONNX": {"unclassified": 0, "single": 0, "multi": 0},
      "Auto": {"unclassified": 0, "single": 0, "multi": 0},
      "Quantization": {"unclassified": 0, "single": 0, "multi": 0},
      "Unclassified": {"unclassified": 0, "single": 0, "multi": 0}
    },
    "errors": 0,
    "success": 254,
    "skipped": 14,
    "time_spent": [85.93, 84.88],
    "error": false,
    "failures": {
      "single": [
        {
          "line": "tests/models/auto/test_tokenization_auto.py::AutoTokenizerTest::test_custom_tokenizer_from_hub",
          "trace": "(line 687) AssertionError: False is not true"
        }
      ],
      "multi": [
        {
          "line": "tests/models/auto/test_tokenization_auto.py::AutoTokenizerTest::test_custom_tokenizer_from_hub",
          "trace": "(line 687) AssertionError: False is not true"
        }
      ]
    },
    "job_link": {
      "single": "https://github.com/huggingface/transformers/actions/runs/24451435462/job/71441682535",
      "multi": "https://github.com/huggingface/transformers/actions/runs/24451435462/job/71441682533"
    },
    "captured_info": {}
  },
  "models_bert": {
    "failed": {
      "PyTorch": {"unclassified": 0, "single": 1, "multi": 1},
      "Tokenizers": {"unclassified": 0, "single": 0, "multi": 0},
      "Pipelines": {"unclassified": 0, "single": 0, "multi": 0},
      "Trainer": {"unclassified": 0, "single": 0, "multi": 0},
      "ONNX": {"unclassified": 0, "single": 0, "multi": 0},
      "Auto": {"unclassified": 0, "single": 0, "multi": 0},
      "Quantization": {"unclassified": 0, "single": 0, "multi": 0},
      "Unclassified": {"unclassified": 0, "single": 0, "multi": 0}
    },
    "errors": 0,
    "success": 417,
    "skipped": 193,
    "time_spent": [143.65, 142.44],
    "error": false,
    "failures": {
      "multi": [
        {
          "line": "tests/models/bert/test_modeling_bert.py::BertModelTest::test_flash_attn_2_inference_equivalence_right_padding",
          "trace": "(line 3404) AssertionError: Tensor-likes are not close!"
        }
      ],
      "single": [
        {
          "line": "tests/models/bert/test_modeling_bert.py::BertModelTest::test_flash_attn_2_inference_equivalence_right_padding",
          "trace": "(line 3404) AssertionError: Tensor-likes are not close!"
        }
      ]
    },
    "job_link": {
      "multi": "https://github.com/huggingface/transformers/actions/runs/24451435462/job/71441682540",
      "single": "https://github.com/huggingface/transformers/actions/runs/24451435462/job/71441682498"
    },
    "captured_info": {
      "multi": "https://github.com/huggingface/transformers/actions/runs/24451435462/job/71441682540#step:16:1",
      "single": "https://github.com/huggingface/transformers/actions/runs/24451435462/job/71441682498#step:16:1"
    }
  },
  "models_clip": {
    "failed": {
      "PyTorch": {"unclassified": 0, "single": 0, "multi": 0},
      "Tokenizers": {"unclassified": 0, "single": 0, "multi": 0},
      "Pipelines": {"unclassified": 0, "single": 0, "multi": 0},
      "Trainer": {"unclassified": 0, "single": 0, "multi": 0},
      "ONNX": {"unclassified": 0, "single": 0, "multi": 0},
      "Auto": {"unclassified": 0, "single": 0, "multi": 0},
      "Quantization": {"unclassified": 0, "single": 0, "multi": 0},
      "Unclassified": {"unclassified": 0, "single": 0, "multi": 0}
    },
    "errors": 0,
    "success": 1022,
    "skipped": 580,
    "time_spent": [158.17, 156.15],
    "error": false,
    "failures": {},
    "job_link": {
      "multi": "https://github.com/huggingface/transformers/actions/runs/24451435462/job/71441682616",
      "single": "https://github.com/huggingface/transformers/actions/runs/24451435462/job/71441682464"
    },
    "captured_info": {}
  },
  "models_csm": {
    "failed": {
      "PyTorch": {"unclassified": 0, "single": 0, "multi": 0},
      "Tokenizers": {"unclassified": 0, "single": 0, "multi": 0},
      "Pipelines": {"unclassified": 0, "single": 0, "multi": 0},
      "Trainer": {"unclassified": 0, "single": 0, "multi": 0},
      "ONNX": {"unclassified": 0, "single": 0, "multi": 0},
      "Auto": {"unclassified": 0, "single": 0, "multi": 0},
      "Quantization": {"unclassified": 0, "single": 0, "multi": 0},
      "Unclassified": {"unclassified": 0, "single": 0, "multi": 0}
    },
    "errors": 0,
    "success": 292,
    "skipped": 212,
    "time_spent": [171.42, 171.71],
    "error": false,
    "failures": {},
    "job_link": {
      "single": "https://github.com/huggingface/transformers/actions/runs/24451435462/job/71441682574",
      "multi": "https://github.com/huggingface/transformers/actions/runs/24451435462/job/71441682666"
    },
    "captured_info": {}
  },
  "models_detr": {
    "failed": {
      "PyTorch": {"unclassified": 0, "single": 0, "multi": 0},
      "Tokenizers": {"unclassified": 0, "single": 0, "multi": 0},
      "Pipelines": {"unclassified": 0, "single": 0, "multi": 0},
      "Trainer": {"unclassified": 0, "single": 0, "multi": 0},
      "ONNX": {"unclassified": 0, "single": 0, "multi": 0},
      "Auto": {"unclassified": 0, "single": 0, "multi": 0},
      "Quantization": {"unclassified": 0, "single": 0, "multi": 0},
      "Unclassified": {"unclassified": 0, "single": 0, "multi": 0}
    },
    "errors": 0,
    "success": 251,
    "skipped": 211,
    "time_spent": [91.27, 93.59],
    "error": false,
    "failures": {},
    "job_link": {
      "single": "https://github.com/huggingface/transformers/actions/runs/24451435462/job/71441682491",
      "multi": "https://github.com/huggingface/transformers/actions/runs/24451435462/job/71441682611"
    },
    "captured_info": {}
  },
  "models_gemma3": {
    "failed": {
      "PyTorch": {"unclassified": 0, "single": 8, "multi": 8},
      "Tokenizers": {"unclassified": 0, "single": 0, "multi": 0},
      "Pipelines": {"unclassified": 0, "single": 0, "multi": 0},
      "Trainer": {"unclassified": 0, "single": 0, "multi": 0},
      "ONNX": {"unclassified": 0, "single": 0, "multi": 0},
      "Auto": {"unclassified": 0, "single": 0, "multi": 0},
      "Quantization": {"unclassified": 0, "single": 0, "multi": 0},
      "Unclassified": {"unclassified": 0, "single": 0, "multi": 0}
    },
    "errors": 0,
    "success": 716,
    "skipped": 440,
    "time_spent": [516.45, 516.25],
    "error": false,
    "failures": {
      "multi": [
        {
          "line": "tests/models/gemma3/test_modeling_gemma3.py::Gemma3Vision2TextModelTest::test_torch_export",
          "trace": "(line 481) AssertionError: Current active mode <torch.fx.experimental.proxy_tensor.ProxyTorchDispatchMode object at 0x7fbea2db2bc0> not registered"
        },
        {
          "line": "tests/models/gemma3/test_modeling_gemma3.py::Gemma3IntegrationTest::test_dynamic_sliding_window_is_default",
          "trace": "(line 865) AssertionError: 'DynamicSlidingWindowLayer' unexpectedly found in 'DynamicCache(layers=[DynamicSlidingWindowLayer, DynamicSlidingWindowLayer, DynamicSlidingWindowLayer, DynamicSlidingWindowLayer, DynamicSlidingWindowLayer, DynamicLayer, DynamicSlidingWindowLayer, DynamicSlidingWindowLayer, DynamicSlidingWindowLayer, DynamicSlidingWindowLayer, DynamicSlidingWindowLayer, DynamicLayer, DynamicSlidingWindowLayer, DynamicSlidingWindowLayer, DynamicSlidingWindowLayer, DynamicSlidingWindowLayer, DynamicSlidingWindowLayer, DynamicLayer, DynamicSlidingWindowLayer, DynamicSlidingWindowLayer, DynamicSlidingWindowLayer, DynamicSlidingWindowLayer, DynamicSlidingWindowLayer, DynamicLayer, DynamicSlidingWindowLayer, DynamicSlidingWindowLayer])'"
        },
        {
          "line": "tests/models/gemma3/test_modeling_gemma3.py::Gemma3IntegrationTest::test_model_4b_batch",
          "trace": "(line 533) AssertionError: Lists differ: [\"user\\nYou are a helpful assistant.\\n\\n\\n[426 chars]ern'] != ['user\\nYou are a helpful assistant.\\n\\n\\n[389 chars]own\"]"
        },
        {
          "line": "tests/models/gemma3/test_modeling_gemma3.py::Gemma3IntegrationTest::test_model_4b_batch_crops",
          "trace": "(line 653) AssertionError: Lists differ: ['user\\nYou are a helpful assistant.\\n\\nHe[701 chars]te,'] != [\"user\\nYou are a helpful assistant.\\n\\nHe[674 chars]h a']"
        },
        {
          "line": "tests/models/gemma3/test_modeling_gemma3.py::Gemma3IntegrationTest::test_model_4b_bf16",
          "trace": "(line 464) AssertionError: Lists differ: [\"user\\nYou are a helpful assistant.\\n\\n\\n[110 chars]me?\"] != ['user\\nYou are a helpful assistant.\\n\\n\\n[178 chars]ike']"
        },
        {
          "line": "tests/models/gemma3/test_modeling_gemma3.py::Gemma3IntegrationTest::test_model_4b_crops",
          "trace": "(line 580) AssertionError: Lists differ: [\"user\\nYou are a helpful assistant.\\n\\nHe[174 chars]see\"] != ['user\\nYou are a helpful assistant.\\n\\nHe[268 chars]the']"
        },
        {
          "line": "tests/models/gemma3/test_modeling_gemma3.py::Gemma3IntegrationTest::test_model_4b_flash_attn",
          "trace": "(line 753) AssertionError: Lists differ: ['use[77 chars]l\\nThis image appears to be a distorted, almos[105 chars] of'] != ['use[77 chars]l\\nThe image shows a brown and white cow stand[104 chars]day']"
        },
        {
          "line": "tests/models/gemma3/test_modeling_gemma3.py::Gemma3IntegrationTest::test_model_4b_multiimage",
          "trace": "(line 696) AssertionError: Lists differ: [\"use[74 chars]kay, you've presented me with a wonderfully lo[82 chars]x…”\"] != [\"use[74 chars]kay, let's break down what I see in this image[58 chars]rch\"]"
        }
      ],
      "single": [
        {
          "line": "tests/models/gemma3/test_modeling_gemma3.py::Gemma3Vision2TextModelTest::test_torch_export",
          "trace": "(line 481) AssertionError: Current active mode <torch.fx.experimental.proxy_tensor.ProxyTorchDispatchMode object at 0x7fa0219b0f70> not registered"
        },
        {
          "line": "tests/models/gemma3/test_modeling_gemma3.py::Gemma3IntegrationTest::test_dynamic_sliding_window_is_default",
          "trace": "(line 865) AssertionError: 'DynamicSlidingWindowLayer' unexpectedly found in 'DynamicCache(layers=[DynamicSlidingWindowLayer, DynamicSlidingWindowLayer, DynamicSlidingWindowLayer, DynamicSlidingWindowLayer, DynamicSlidingWindowLayer, DynamicLayer, DynamicSlidingWindowLayer, DynamicSlidingWindowLayer, DynamicSlidingWindowLayer, DynamicSlidingWindowLayer, DynamicSlidingWindowLayer, DynamicLayer, DynamicSlidingWindowLayer, DynamicSlidingWindowLayer, DynamicSlidingWindowLayer, DynamicSlidingWindowLayer, DynamicSlidingWindowLayer, DynamicLayer, DynamicSlidingWindowLayer, DynamicSlidingWindowLayer, DynamicSlidingWindowLayer, DynamicSlidingWindowLayer, DynamicSlidingWindowLayer, DynamicLayer, DynamicSlidingWindowLayer, DynamicSlidingWindowLayer])'"
        },
        {
          "line": "tests/models/gemma3/test_modeling_gemma3.py::Gemma3IntegrationTest::test_model_4b_batch",
          "trace": "(line 533) AssertionError: Lists differ: [\"user\\nYou are a helpful assistant.\\n\\n\\n[426 chars]ern'] != ['user\\nYou are a helpful assistant.\\n\\n\\n[389 chars]own\"]"
        },
        {
          "line": "tests/models/gemma3/test_modeling_gemma3.py::Gemma3IntegrationTest::test_model_4b_batch_crops",
          "trace": "(line 653) AssertionError: Lists differ: ['user\\nYou are a helpful assistant.\\n\\nHe[701 chars]te,'] != [\"user\\nYou are a helpful assistant.\\n\\nHe[674 chars]h a']"
        },
        {
          "line": "tests/models/gemma3/test_modeling_gemma3.py::Gemma3IntegrationTest::test_model_4b_bf16",
          "trace": "(line 464) AssertionError: Lists differ: [\"user\\nYou are a helpful assistant.\\n\\n\\n[110 chars]me?\"] != ['user\\nYou are a helpful assistant.\\n\\n\\n[178 chars]ike']"
        },
        {
          "line": "tests/models/gemma3/test_modeling_gemma3.py::Gemma3IntegrationTest::test_model_4b_crops",
          "trace": "(line 580) AssertionError: Lists differ: [\"user\\nYou are a helpful assistant.\\n\\nHe[174 chars]see\"] != ['user\\nYou are a helpful assistant.\\n\\nHe[268 chars]the']"
        },
        {
          "line": "tests/models/gemma3/test_modeling_gemma3.py::Gemma3IntegrationTest::test_model_4b_flash_attn",
          "trace": "(line 753) AssertionError: Lists differ: ['use[77 chars]l\\nThis image appears to be a distorted, almos[105 chars] of'] != ['use[77 chars]l\\nThe image shows a brown and white cow stand[104 chars]day']"
        },
        {
          "line": "tests/models/gemma3/test_modeling_gemma3.py::Gemma3IntegrationTest::test_model_4b_multiimage",
          "trace": "(line 696) AssertionError: Lists differ: [\"use[74 chars]kay, you've presented me with a wonderfully lo[82 chars]x…”\"] != [\"use[74 chars]kay, let's break down what I see in this image[58 chars]rch\"]"
        }
      ]
    },
    "job_link": {
      "multi": "https://github.com/huggingface/transformers/actions/runs/24451435462/job/71441682638",
      "single": "https://github.com/huggingface/transformers/actions/runs/24451435462/job/71441682490"
    },
    "captured_info": {
      "multi": "https://github.com/huggingface/transformers/actions/runs/24451435462/job/71441682638#step:16:1",
      "single": "https://github.com/huggingface/transformers/actions/runs/24451435462/job/71441682490#step:16:1"
    }
  },
  "models_gemma3n": {
    "failed": {
      "PyTorch": {"unclassified": 0, "single": 7, "multi": 9},
      "Tokenizers": {"unclassified": 0, "single": 0, "multi": 0},
      "Pipelines": {"unclassified": 0, "single": 0, "multi": 0},
      "Trainer": {"unclassified": 0, "single": 0, "multi": 0},
      "ONNX": {"unclassified": 0, "single": 0, "multi": 0},
      "Auto": {"unclassified": 0, "single": 0, "multi": 0},
      "Quantization": {"unclassified": 0, "single": 0, "multi": 0},
      "Unclassified": {"unclassified": 0, "single": 0, "multi": 0}
    },
    "errors": 0,
    "success": 609,
    "skipped": 721,
    "time_spent": [619.86, 623.82],
    "error": false,
    "failures": {
      "single": [
        {
          "line": "tests/models/gemma3n/test_modeling_gemma3n.py::Gemma3nTextModelTest::test_flash_attn_2_equivalence",
          "trace": "(line 632) AssertionError: Tensor-likes are not close!"
        },
        {
          "line": "tests/models/gemma3n/test_modeling_gemma3n.py::Gemma3nTextModelTest::test_flash_attn_2_inference_equivalence",
          "trace": "(line 3400) AssertionError: Tensor-likes are not close!"
        },
        {
          "line": "tests/models/gemma3n/test_modeling_gemma3n.py::Gemma3nTextModelTest::test_flash_attn_2_inference_equivalence_right_padding",
          "trace": "(line 3400) AssertionError: Tensor-likes are not close!"
        },
        {
          "line": "tests/models/gemma3n/test_modeling_gemma3n.py::Gemma3nIntegrationTest::test_generation_beyond_sliding_window",
          "trace": "(line 1196) AssertionError: Lists differ: [\" and the food is delicious. I'm so glad I came her[83 chars]re'\"] != [\" and the people are so friendly. I'm so glad I cam[83 chars]re'\"]"
        },
        {
          "line": "tests/models/gemma3n/test_modeling_gemma3n.py::Gemma3nIntegrationTest::test_generation_beyond_sliding_window_with_generation_config",
          "trace": "(line 1228) AssertionError: Lists differ: [\" and I'm very happy to be here. This is a nice pla[87 chars]re'\"] != [\" and I'm glad I came here. This is a nice place. T[88 chars]re'\"]"
        },
        {
          "line": "tests/models/gemma3n/test_modeling_gemma3n.py::Gemma3nIntegrationTest::test_model_4b_bf16",
          "trace": "(line 998) AssertionError: Lists differ: ['use[149 chars]to a turquoise ocean. The cow is facing the vi[31 chars]ned'] != ['use[149 chars]to a clear blue ocean. The cow is facing the v[25 chars]tly']"
        },
        {
          "line": "tests/models/gemma3n/test_modeling_gemma3n.py::Gemma3nIntegrationTest::test_model_4b_image",
          "trace": "(line 1110) AssertionError: Lists differ: ['use[149 chars]to a turquoise ocean. The cow is facing the vi[31 chars]ned'] != ['use[149 chars]to a clear blue ocean. The cow is facing the v[25 chars]tly']"
        }
      ],
      "multi": [
        {
          "line": "tests/models/gemma3n/test_modeling_gemma3n.py::Gemma3nTextModelTest::test_flash_attn_2_equivalence",
          "trace": "(line 632) AssertionError: Tensor-likes are not close!"
        },
        {
          "line": "tests/models/gemma3n/test_modeling_gemma3n.py::Gemma3nTextModelTest::test_flash_attn_2_inference_equivalence",
          "trace": "(line 3400) AssertionError: Tensor-likes are not close!"
        },
        {
          "line": "tests/models/gemma3n/test_modeling_gemma3n.py::Gemma3nTextModelTest::test_flash_attn_2_inference_equivalence_right_padding",
          "trace": "(line 3404) AssertionError: Tensor-likes are not close!"
        },
        {
          "line": "tests/models/gemma3n/test_modeling_gemma3n.py::Gemma3nVision2TextModelTest::test_model_parallelism",
          "trace": "(line 1962) AttributeError: 'Gemma3nModel' object has no attribute 'hf_device_map'"
        },
        {
          "line": "tests/models/gemma3n/test_modeling_gemma3n.py::Gemma3nVision2TextModelTest::test_multi_gpu_data_parallel_forward",
          "trace": "(line 769) StopIteration: Caught StopIteration in replica 1 on device 1."
        },
        {
          "line": "tests/models/gemma3n/test_modeling_gemma3n.py::Gemma3nIntegrationTest::test_generation_beyond_sliding_window",
          "trace": "(line 1196) AssertionError: Lists differ: [\" and the food is delicious. I'm so glad I came her[83 chars]re'\"] != [\" and the people are so friendly. I'm so glad I cam[83 chars]re'\"]"
        },
        {
          "line": "tests/models/gemma3n/test_modeling_gemma3n.py::Gemma3nIntegrationTest::test_generation_beyond_sliding_window_with_generation_config",
          "trace": "(line 1228) AssertionError: Lists differ: [\" and I'm very happy to be here. This is a nice pla[87 chars]re'\"] != [\" and I'm glad I came here. This is a nice place. T[88 chars]re'\"]"
        },
        {
          "line": "tests/models/gemma3n/test_modeling_gemma3n.py::Gemma3nIntegrationTest::test_model_4b_bf16",
          "trace": "(line 998) AssertionError: Lists differ: ['use[149 chars]to a turquoise ocean. The cow is facing the vi[31 chars]ned'] != ['use[149 chars]to a clear blue ocean. The cow is facing the v[25 chars]tly']"
        },
        {
          "line": "tests/models/gemma3n/test_modeling_gemma3n.py::Gemma3nIntegrationTest::test_model_4b_image",
          "trace": "(line 1110) AssertionError: Lists differ: ['use[149 chars]to a turquoise ocean. The cow is facing the vi[31 chars]ned'] != ['use[149 chars]to a clear blue ocean. The cow is facing the v[25 chars]tly']"
        }
      ]
    },
    "job_link": {
      "single": "https://github.com/huggingface/transformers/actions/runs/24451435462/job/71441682483",
      "multi": "https://github.com/huggingface/transformers/actions/runs/24451435462/job/71441682603"
    },
    "captured_info": {
      "single": "https://github.com/huggingface/transformers/actions/runs/24451435462/job/71441682483#step:16:1",
      "multi": "https://github.com/huggingface/transformers/actions/runs/24451435462/job/71441682603#step:16:1"
    }
  },
  "models_got_ocr2": {
    "failed": {
      "PyTorch": {"unclassified": 0, "single": 1, "multi": 1},
      "Tokenizers": {"unclassified": 0, "single": 0, "multi": 0},
      "Pipelines": {"unclassified": 0, "single": 0, "multi": 0},
      "Trainer": {"unclassified": 0, "single": 0, "multi": 0},
      "ONNX": {"unclassified": 0, "single": 0, "multi": 0},
      "Auto": {"unclassified": 0, "single": 0, "multi": 0},
      "Quantization": {"unclassified": 0, "single": 0, "multi": 0},
      "Unclassified": {"unclassified": 0, "single": 0, "multi": 0}
    },
    "errors": 0,
    "success": 327,
    "skipped": 319,
    "time_spent": [186.15, 186.06],
    "error": false,
    "failures": {
      "multi": [
        {
          "line": "tests/models/got_ocr2/test_modeling_got_ocr2.py::GotOcr2IntegrationTest::test_small_model_integration_test_got_ocr_format",
          "trace": "(line 213) AssertionError: 'R\\\\&D' != '\\\\title{\\nR'"
        }
      ],
      "single": [
        {
          "line": "tests/models/got_ocr2/test_modeling_got_ocr2.py::GotOcr2IntegrationTest::test_small_model_integration_test_got_ocr_format",
          "trace": "(line 213) AssertionError: 'R\\\\&D' != '\\\\title{\\nR'"
        }
      ]
    },
    "job_link": {
      "multi": "https://github.com/huggingface/transformers/actions/runs/24451435462/job/71441683775",
      "single": "https://github.com/huggingface/transformers/actions/runs/24451435462/job/71441682544"
    },
    "captured_info": {
      "multi": "https://github.com/huggingface/transformers/actions/runs/24451435462/job/71441683775#step:16:1",
      "single": "https://github.com/huggingface/transformers/actions/runs/24451435462/job/71441682544#step:16:1"
    }
  },
  "models_gpt2": {
    "failed": {
      "PyTorch": {"unclassified": 0, "single": 0, "multi": 0},
      "Tokenizers": {"unclassified": 0, "single": 0, "multi": 0},
      "Pipelines": {"unclassified": 0, "single": 0, "multi": 0},
      "Trainer": {"unclassified": 0, "single": 0, "multi": 0},
      "ONNX": {"unclassified": 0, "single": 0, "multi": 0},
      "Auto": {"unclassified": 0, "single": 0, "multi": 0},
      "Quantization": {"unclassified": 0, "single": 0, "multi": 0},
      "Unclassified": {"unclassified": 0, "single": 0, "multi": 0}
    },
    "errors": 0,
    "success": 441,
    "skipped": 213,
    "time_spent": [148.27, 149.38],
    "error": false,
    "failures": {},
    "job_link": {
      "single": "https://github.com/huggingface/transformers/actions/runs/24451435462/job/71441682509",
      "multi": "https://github.com/huggingface/transformers/actions/runs/24451435462/job/71441682650"
    },
    "captured_info": {}
  },
  "models_internvl": {
    "failed": {
      "PyTorch": {"unclassified": 0, "single": 0, "multi": 1},
      "Tokenizers": {"unclassified": 0, "single": 0, "multi": 0},
      "Pipelines": {"unclassified": 0, "single": 0, "multi": 0},
      "Trainer": {"unclassified": 0, "single": 0, "multi": 0},
      "ONNX": {"unclassified": 0, "single": 0, "multi": 0},
      "Auto": {"unclassified": 0, "single": 0, "multi": 0},
      "Quantization": {"unclassified": 0, "single": 0, "multi": 0},
      "Unclassified": {"unclassified": 0, "single": 0, "multi": 0}
    },
    "errors": 0,
    "success": 446,
    "skipped": 213,
    "time_spent": [239.68, 241.85],
    "error": false,
    "failures": {
      "multi": [
        {
          "line": "tests/models/internvl/test_modeling_internvl.py::InternVLModelTest::test_multi_gpu_data_parallel_forward",
          "trace": "(line 769) StopIteration: Caught StopIteration in replica 1 on device 1."
        }
      ]
    },
    "job_link": {
      "multi": "https://github.com/huggingface/transformers/actions/runs/24451435462/job/71441682644",
      "single": "https://github.com/huggingface/transformers/actions/runs/24451435462/job/71441682499"
    },
    "captured_info": {}
  },
  "models_llama": {
    "failed": {
      "PyTorch": {"unclassified": 0, "single": 0, "multi": 0},
      "Tokenizers": {"unclassified": 0, "single": 0, "multi": 0},
      "Pipelines": {"unclassified": 0, "single": 0, "multi": 0},
      "Trainer": {"unclassified": 0, "single": 0, "multi": 0},
      "ONNX": {"unclassified": 0, "single": 0, "multi": 0},
      "Auto": {"unclassified": 0, "single": 0, "multi": 0},
      "Quantization": {"unclassified": 0, "single": 0, "multi": 0},
      "Unclassified": {"unclassified": 0, "single": 0, "multi": 0}
    },
    "errors": 0,
    "success": 457,
    "skipped": 179,
    "time_spent": [274.81, 272.41],
    "error": false,
    "failures": {},
    "job_link": {
      "multi": "https://github.com/huggingface/transformers/actions/runs/24451435462/job/71441682651",
      "single": "https://github.com/huggingface/transformers/actions/runs/24451435462/job/71441682539"
    },
    "captured_info": {
      "multi": "https://github.com/huggingface/transformers/actions/runs/24451435462/job/71441682651#step:16:1",
      "single": "https://github.com/huggingface/transformers/actions/runs/24451435462/job/71441682539#step:16:1"
    }
  },
  "models_llava": {
    "failed": {
      "PyTorch": {"unclassified": 0, "single": 10, "multi": 10},
      "Tokenizers": {"unclassified": 0, "single": 0, "multi": 0},
      "Pipelines": {"unclassified": 0, "single": 0, "multi": 0},
      "Trainer": {"unclassified": 0, "single": 0, "multi": 0},
      "ONNX": {"unclassified": 0, "single": 0, "multi": 0},
      "Auto": {"unclassified": 0, "single": 0, "multi": 0},
      "Quantization": {"unclassified": 0, "single": 0, "multi": 0},
      "Unclassified": {"unclassified": 0, "single": 0, "multi": 0}
    },
    "errors": 0,
    "success": 419,
    "skipped": 231,
    "time_spent": [248.39, 246.38],
    "error": false,
    "failures": {
      "multi": [
        {
          "line": "tests/models/llava/test_modeling_llava.py::LlavaForConditionalGenerationIntegrationTest::test_batched_generation",
          "trace": "(line 569) AssertionError: Lists differ: [\"\\n [51 chars]TANT:\", '\\nUSER: Describe the image.\\nASSISTAN[139 chars]man'] != [\"\\n [51 chars]TANT: In the two images, the primary differenc[294 chars]ama']"
        },
        {
          "line": "tests/models/llava/test_modeling_llava.py::LlavaForConditionalGenerationIntegrationTest::test_generation_siglip_backbone",
          "trace": "(line 628) AssertionError: 'user[29 chars]t These are two different types of animals: a [15 chars]key.' != 'user[29 chars]t The image shows two cats, one on the left an[80 chars] cat'"
        },
        {
          "line": "tests/models/llava/test_modeling_llava.py::LlavaForConditionalGenerationIntegrationTest::test_pixtral",
          "trace": "(line 4854) torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 21.10 GiB. GPU 0 has a total capacity of 22.30 GiB of which 20.14 GiB is free. Process 65434 has 2.15 GiB memory in use. Of the allocated memory 1.64 GiB is allocated by PyTorch, and 19.35 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)"
        },
        {
          "line": "tests/models/llava/test_modeling_llava.py::LlavaForConditionalGenerationIntegrationTest::test_pixtral_4bit",
          "trace": "(line 687) AssertionError: False is not true"
        },
        {
          "line": "tests/models/llava/test_modeling_llava.py::LlavaForConditionalGenerationIntegrationTest::test_pixtral_batched",
          "trace": "(line 727) AssertionError: Lists differ: ['Wha[97 chars]mage?A narrow dirt path is surrounded by grass[74 chars]ue.'] != ['Wha[97 chars]mage?The image depicts a narrow, winding dirt [175 chars]ere']"
        },
        {
          "line": "tests/models/llava/test_modeling_llava.py::LlavaForConditionalGenerationIntegrationTest::test_small_model_integration_test",
          "trace": "(line 415) AssertionError"
        },
        {
          "line": "tests/models/llava/test_modeling_llava.py::LlavaForConditionalGenerationIntegrationTest::test_small_model_integration_test_batch",
          "trace": "(line 415) AssertionError"
        },
        {
          "line": "tests/models/llava/test_modeling_llava.py::LlavaForConditionalGenerationIntegrationTest::test_small_model_integration_test_llama_batched",
          "trace": "(line 415) AssertionError"
        },
        {
          "line": "tests/models/llava/test_modeling_llava.py::LlavaForConditionalGenerationIntegrationTest::test_small_model_integration_test_llama_batched_regression",
          "trace": "(line 415) AssertionError"
        },
        {
          "line": "tests/models/llava/test_modeling_llava.py::LlavaForConditionalGenerationIntegrationTest::test_small_model_integration_test_llama_single",
          "trace": "(line 415) AssertionError"
        }
      ],
      "single": [
        {
          "line": "tests/models/llava/test_modeling_llava.py::LlavaForConditionalGenerationIntegrationTest::test_batched_generation",
          "trace": "(line 569) AssertionError: Lists differ: [\"\\n [51 chars]TANT:\", '\\nUSER: Describe the image.\\nASSISTAN[139 chars]man'] != [\"\\n [51 chars]TANT: In the two images, the primary differenc[294 chars]ama']"
        },
        {
          "line": "tests/models/llava/test_modeling_llava.py::LlavaForConditionalGenerationIntegrationTest::test_generation_siglip_backbone",
          "trace": "(line 628) AssertionError: 'user[29 chars]t These are two different types of animals: a [15 chars]key.' != 'user[29 chars]t The image shows two cats, one on the left an[80 chars] cat'"
        },
        {
          "line": "tests/models/llava/test_modeling_llava.py::LlavaForConditionalGenerationIntegrationTest::test_pixtral",
          "trace": "(line 4854) torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 21.10 GiB. GPU 0 has a total capacity of 22.30 GiB of which 20.28 GiB is free. Process 63128 has 2.01 GiB memory in use. Of the allocated memory 1.63 GiB is allocated by PyTorch, and 98.50 KiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)"
        },
        {
          "line": "tests/models/llava/test_modeling_llava.py::LlavaForConditionalGenerationIntegrationTest::test_pixtral_4bit",
          "trace": "(line 687) AssertionError: False is not true"
        },
        {
          "line": "tests/models/llava/test_modeling_llava.py::LlavaForConditionalGenerationIntegrationTest::test_pixtral_batched",
          "trace": "(line 727) AssertionError: Lists differ: ['Wha[97 chars]mage?A narrow dirt path is surrounded by grass[74 chars]ue.'] != ['Wha[97 chars]mage?The image depicts a narrow, winding dirt [175 chars]ere']"
        },
        {
          "line": "tests/models/llava/test_modeling_llava.py::LlavaForConditionalGenerationIntegrationTest::test_small_model_integration_test",
          "trace": "(line 415) AssertionError"
        },
        {
          "line": "tests/models/llava/test_modeling_llava.py::LlavaForConditionalGenerationIntegrationTest::test_small_model_integration_test_batch",
          "trace": "(line 415) AssertionError"
        },
        {
          "line": "tests/models/llava/test_modeling_llava.py::LlavaForConditionalGenerationIntegrationTest::test_small_model_integration_test_llama_batched",
          "trace": "(line 415) AssertionError"
        },
        {
          "line": "tests/models/llava/test_modeling_llava.py::LlavaForConditionalGenerationIntegrationTest::test_small_model_integration_test_llama_batched_regression",
          "trace": "(line 415) AssertionError"
        },
        {
          "line": "tests/models/llava/test_modeling_llava.py::LlavaForConditionalGenerationIntegrationTest::test_small_model_integration_test_llama_single",
          "trace": "(line 415) AssertionError"
        }
      ]
    },
    "job_link": {
      "multi": "https://github.com/huggingface/transformers/actions/runs/24451435462/job/71441682649",
      "single": "https://github.com/huggingface/transformers/actions/runs/24451435462/job/71441682494"
    },
    "captured_info": {
      "multi": "https://github.com/huggingface/transformers/actions/runs/24451435462/job/71441682649#step:16:1",
      "single": "https://github.com/huggingface/transformers/actions/runs/24451435462/job/71441682494#step:16:1"
    }
  },
  "models_mistral3": {
    "failed": {
      "PyTorch": {"unclassified": 0, "single": 2, "multi": 2},
      "Tokenizers": {"unclassified": 0, "single": 0, "multi": 0},
      "Pipelines": {"unclassified": 0, "single": 0, "multi": 0},
      "Trainer": {"unclassified": 0, "single": 0, "multi": 0},
      "ONNX": {"unclassified": 0, "single": 0, "multi": 0},
      "Auto": {"unclassified": 0, "single": 0, "multi": 0},
      "Quantization": {"unclassified": 0, "single": 0, "multi": 0},
      "Unclassified": {"unclassified": 0, "single": 0, "multi": 0}
    },
    "errors": 0,
    "success": 357,
    "skipped": 245,
    "time_spent": [650.32, 635.28],
    "error": false,
    "failures": {
      "single": [
        {
          "line": "tests/models/mistral3/test_modeling_mistral3.py::Mistral3IntegrationTest::test_mistral3_integration_batched_generate",
          "trace": "(line 365) AssertionError: ' to write a short story based on this ima[70 chars]e pl' != 'Calm waters reflect\\nWooden path to dista[26 chars]oods'"
        },
        {
          "line": "tests/models/mistral3/test_modeling_mistral3.py::Mistral3IntegrationTest::test_mistral3_integration_batched_generate_multi_image",
          "trace": "(line 441) AssertionError: ' to write a short story based on this im[81 chars]ched' != \"Calm waters reflect\\nWooden path to dist[29 chars]hold\""
        }
      ],
      "multi": [
        {
          "line": "tests/models/mistral3/test_modeling_mistral3.py::Mistral3IntegrationTest::test_mistral3_integration_batched_generate",
          "trace": "(line 365) AssertionError: ' to write a short story based on this ima[70 chars]e pl' != 'Calm waters reflect\\nWooden path to dista[26 chars]oods'"
        },
        {
          "line": "tests/models/mistral3/test_modeling_mistral3.py::Mistral3IntegrationTest::test_mistral3_integration_batched_generate_multi_image",
          "trace": "(line 441) AssertionError: ' to write a short story based on this im[81 chars]ched' != \"Calm waters reflect\\nWooden path to dist[29 chars]hold\""
        }
      ]
    },
    "job_link": {
      "single": "https://github.com/huggingface/transformers/actions/runs/24451435462/job/71441682436",
      "multi": "https://github.com/huggingface/transformers/actions/runs/24451435462/job/71441682671"
    },
    "captured_info": {
      "single": "https://github.com/huggingface/transformers/actions/runs/24451435462/job/71441682436#step:16:1",
      "multi": "https://github.com/huggingface/transformers/actions/runs/24451435462/job/71441682671#step:16:1"
    }
  },
  "models_modernbert": {
    "failed": {
      "PyTorch": {"unclassified": 0, "single": 1, "multi": 1},
      "Tokenizers": {"unclassified": 0, "single": 0, "multi": 0},
      "Pipelines": {"unclassified": 0, "single": 0, "multi": 0},
      "Trainer": {"unclassified": 0, "single": 0, "multi": 0},
      "ONNX": {"unclassified": 0, "single": 0, "multi": 0},
      "Auto": {"unclassified": 0, "single": 0, "multi": 0},
      "Quantization": {"unclassified": 0, "single": 0, "multi": 0},
      "Unclassified": {"unclassified": 0, "single": 0, "multi": 0}
    },
    "errors": 0,
    "success": 238,
    "skipped": 162,
    "time_spent": [103.14, 104.0],
    "error": false,
    "failures": {
      "multi": [
        {
          "line": "tests/models/modernbert/test_modeling_modernbert.py::ModernBertModelIntegrationTest::test_inference_masked_lm_flash_attention_2",
          "trace": "(line 437) AssertionError: Tensor-likes are not close!"
        }
      ],
      "single": [
        {
          "line": "tests/models/modernbert/test_modeling_modernbert.py::ModernBertModelIntegrationTest::test_inference_masked_lm_flash_attention_2",
          "trace": "(line 437) AssertionError: Tensor-likes are not close!"
        }
      ]
    },
    "job_link": {
      "multi": "https://github.com/huggingface/transformers/actions/runs/24451435462/job/71441682646",
      "single": "https://github.com/huggingface/transformers/actions/runs/24451435462/job/71441682474"
    },
    "captured_info": {
      "multi": "https://github.com/huggingface/transformers/actions/runs/24451435462/job/71441682646#step:16:1",
      "single": "https://github.com/huggingface/transformers/actions/runs/24451435462/job/71441682474#step:16:1"
    }
  },
  "models_pi0": {
    "failed": {
      "PyTorch": {"unclassified": 0, "single": 3, "multi": 3},
      "Tokenizers": {"unclassified": 0, "single": 0, "multi": 0},
      "Pipelines": {"unclassified": 0, "single": 0, "multi": 0},
      "Trainer": {"unclassified": 0, "single": 0, "multi": 0},
      "ONNX": {"unclassified": 0, "single": 0, "multi": 0},
      "Auto": {"unclassified": 0, "single": 0, "multi": 0},
      "Quantization": {"unclassified": 0, "single": 0, "multi": 0},
      "Unclassified": {"unclassified": 0, "single": 0, "multi": 0}
    },
    "errors": 0,
    "success": 204,
    "skipped": 202,
    "time_spent": [138.86, 132.16],
    "error": false,
    "failures": {
      "multi": [
        {
          "line": "tests/models/pi0/test_modeling_pi0.py::PI0ModelIntegrationTest::test_pi0_base_libero",
          "trace": "(line 899) AssertionError: 2.119363307952881 != 2.5087 within 3 places (0.3893366920471193 difference)"
        },
        {
          "line": "tests/models/pi0/test_modeling_pi0.py::PI0ModelIntegrationTest::test_pi0_base_reference_values",
          "trace": "(line 899) AssertionError: 0.01838020235300064 != 0.022478658705949783 within 3 places (0.0040984563529491425 difference)"
        },
        {
          "line": "tests/models/pi0/test_modeling_pi0.py::PI0ModelIntegrationTest::test_train_pi0_base_libero",
          "trace": "(line 769) torch.OutOfMemoryError: Caught OutOfMemoryError in replica 0 on device 0."
        }
      ],
      "single": [
        {
          "line": "tests/models/pi0/test_modeling_pi0.py::PI0ModelIntegrationTest::test_pi0_base_libero",
          "trace": "(line 899) AssertionError: 1.690470576286316 != 2.5087 within 3 places (0.8182294237136842 difference)"
        },
        {
          "line": "tests/models/pi0/test_modeling_pi0.py::PI0ModelIntegrationTest::test_pi0_base_reference_values",
          "trace": "(line 118) httpx.RemoteProtocolError: Server disconnected without sending a response."
        },
        {
          "line": "tests/models/pi0/test_modeling_pi0.py::PI0ModelIntegrationTest::test_train_pi0_base_libero",
          "trace": "(line 193) torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 18.00 MiB. GPU 0 has a total capacity of 22.30 GiB of which 8.69 MiB is free. Process 51884 has 22.29 GiB memory in use. Of the allocated memory 21.50 GiB is allocated by PyTorch, and 478.93 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)"
        }
      ]
    },
    "job_link": {
      "multi": "https://github.com/huggingface/transformers/actions/runs/24451435462/job/71441682685",
      "single": "https://github.com/huggingface/transformers/actions/runs/24451435462/job/71441682489"
    },
    "captured_info": {}
  },
  "models_qwen2": {
    "failed": {
      "PyTorch": {"unclassified": 0, "single": 1, "multi": 1},
      "Tokenizers": {"unclassified": 0, "single": 0, "multi": 0},
      "Pipelines": {"unclassified": 0, "single": 0, "multi": 0},
      "Trainer": {"unclassified": 0, "single": 0, "multi": 0},
      "ONNX": {"unclassified": 0, "single": 0, "multi": 0},
      "Auto": {"unclassified": 0, "single": 0, "multi": 0},
      "Quantization": {"unclassified": 0, "single": 0, "multi": 0},
      "Unclassified": {"unclassified": 0, "single": 0, "multi": 0}
    },
    "errors": 0,
    "success": 451,
    "skipped": 177,
    "time_spent": [234.61, 226.5],
    "error": false,
    "failures": {
      "multi": [
        {
          "line": "tests/models/qwen2/test_modeling_qwen2.py::Qwen2IntegrationTest::test_export_static_cache",
          "trace": "(line 287) AssertionError: Lists differ: ['My [35 chars], organic, gluten free, vegan, and free from preservatives. I'] != ['My [35 chars], organic, gluten free, vegan, and vegetarian. I love to use']"
        }
      ],
      "single": [
        {
          "line": "tests/models/qwen2/test_modeling_qwen2.py::Qwen2IntegrationTest::test_export_static_cache",
          "trace": "(line 287) AssertionError: Lists differ: ['My [35 chars], organic, gluten free, vegan, and free from preservatives. I'] != ['My [35 chars], organic, gluten free, vegan, and vegetarian. I love to use']"
        }
      ]
    },
    "job_link": {
      "multi": "https://github.com/huggingface/transformers/actions/runs/24451435462/job/71441682683",
      "single": "https://github.com/huggingface/transformers/actions/runs/24451435462/job/71441682459"
    },
    "captured_info": {
      "multi": "https://github.com/huggingface/transformers/actions/runs/24451435462/job/71441682683#step:16:1",
      "single": "https://github.com/huggingface/transformers/actions/runs/24451435462/job/71441682459#step:16:1"
    }
  },
  "models_qwen2_5_omni": {
    "failed": {
      "PyTorch": {"unclassified": 0, "single": 2, "multi": 3},
      "Tokenizers": {"unclassified": 0, "single": 0, "multi": 0},
      "Pipelines": {"unclassified": 0, "single": 0, "multi": 0},
      "Trainer": {"unclassified": 0, "single": 0, "multi": 0},
      "ONNX": {"unclassified": 0, "single": 0, "multi": 0},
      "Auto": {"unclassified": 0, "single": 0, "multi": 0},
      "Quantization": {"unclassified": 0, "single": 0, "multi": 0},
      "Unclassified": {"unclassified": 0, "single": 0, "multi": 0}
    },
    "errors": 0,
    "success": 360,
    "skipped": 235,
    "time_spent": [183.4, 222.05],
    "error": false,
    "failures": {
      "multi": [
        {
          "line": "tests/models/qwen2_5_omni/test_modeling_qwen2_5_omni.py::Qwen2_5OmniThinkerForConditionalGenerationModelTest::test_multi_gpu_data_parallel_forward",
          "trace": "(line 769) StopIteration: Caught StopIteration in replica 1 on device 1."
        },
        {
          "line": "tests/models/qwen2_5_omni/test_modeling_qwen2_5_omni.py::Qwen2_5OmniModelIntegrationTest::test_small_model_integration_test",
          "trace": "(line 692) AssertionError: \"syst[108 chars]d is glass shattering, and the dog is a Labrador Retriever.\" != \"syst[108 chars]d is a glass shattering. The dog in the pictur[22 chars]ver.\""
        },
        {
          "line": "tests/models/qwen2_5_omni/test_modeling_qwen2_5_omni.py::Qwen2_5OmniModelIntegrationTest::test_small_model_integration_test_batch",
          "trace": "(line 734) AssertionError: Lists differ: [\"sys[109 chars]d is glass shattering, and the dog is a Labrad[185 chars]er.\"] != [\"sys[109 chars]d is a glass shattering. The dog in the pictur[211 chars]er.\"]"
        }
      ],
      "single": [
        {
          "line": "tests/models/qwen2_5_omni/test_modeling_qwen2_5_omni.py::Qwen2_5OmniModelIntegrationTest::test_small_model_integration_test",
          "trace": "(line 692) AssertionError: \"syst[108 chars]d is glass shattering, and the dog is a Labrador Retriever.\" != \"syst[108 chars]d is a glass shattering. The dog in the pictur[22 chars]ver.\""
        },
        {
          "line": "tests/models/qwen2_5_omni/test_modeling_qwen2_5_omni.py::Qwen2_5OmniModelIntegrationTest::test_small_model_integration_test_batch",
          "trace": "(line 734) AssertionError: Lists differ: [\"sys[109 chars]d is glass shattering, and the dog is a Labrad[185 chars]er.\"] != [\"sys[109 chars]d is a glass shattering. The dog in the pictur[211 chars]er.\"]"
        }
      ]
    },
    "job_link": {
      "multi": "https://github.com/huggingface/transformers/actions/runs/24451435462/job/71441682633",
      "single": "https://github.com/huggingface/transformers/actions/runs/24451435462/job/71441682454"
    },
    "captured_info": {
      "multi": "https://github.com/huggingface/transformers/actions/runs/24451435462/job/71441682633#step:16:1",
      "single": "https://github.com/huggingface/transformers/actions/runs/24451435462/job/71441682454#step:16:1"
    }
  },
  "models_qwen2_5_vl": {
    "failed": {
      "PyTorch": {"unclassified": 0, "single": 1, "multi": 1},
      "Tokenizers": {"unclassified": 0, "single": 0, "multi": 0},
      "Pipelines": {"unclassified": 0, "single": 0, "multi": 0},
      "Trainer": {"unclassified": 0, "single": 0, "multi": 0},
      "ONNX": {"unclassified": 0, "single": 0, "multi": 0},
      "Auto": {"unclassified": 0, "single": 0, "multi": 0},
      "Quantization": {"unclassified": 0, "single": 0, "multi": 0},
      "Unclassified": {"unclassified": 0, "single": 0, "multi": 0}
    },
    "errors": 0,
    "success": 397,
    "skipped": 121,
    "time_spent": [224.58, 226.35],
    "error": false,
    "failures": {
      "multi": [
        {
          "line": "tests/models/qwen2_5_vl/test_modeling_qwen2_5_vl.py::Qwen2_5_VLIntegrationTest::test_small_model_integration_test_batch_wo_image_flashatt2",
          "trace": "(line 746) AssertionError: Lists differ: ['sys[216 chars]in', 'system\\nYou are a helpful assistant.\\nus[166 chars]and'] != ['sys[216 chars]in', \"system\\nYou are a helpful assistant.\\nus[162 chars]ing\"]"
        }
      ],
      "single": [
        {
          "line": "tests/models/qwen2_5_vl/test_modeling_qwen2_5_vl.py::Qwen2_5_VLIntegrationTest::test_small_model_integration_test_batch_wo_image_flashatt2",
          "trace": "(line 746) AssertionError: Lists differ: ['sys[216 chars]in', 'system\\nYou are a helpful assistant.\\nus[166 chars]and'] != ['sys[216 chars]in', \"system\\nYou are a helpful assistant.\\nus[162 chars]ing\"]"
        }
      ]
    },
    "job_link": {
      "multi": "https://github.com/huggingface/transformers/actions/runs/24451435462/job/71441682614",
      "single": "https://github.com/huggingface/transformers/actions/runs/24451435462/job/71441682484"
    },
    "captured_info": {
      "multi": "https://github.com/huggingface/transformers/actions/runs/24451435462/job/71441682614#step:16:1",
      "single": "https://github.com/huggingface/transformers/actions/runs/24451435462/job/71441682484#step:16:1"
    }
  },
  "models_qwen2_audio": {
    "failed": {
      "PyTorch": {"unclassified": 0, "single": 0, "multi": 1},
      "Tokenizers": {"unclassified": 0, "single": 0, "multi": 0},
      "Pipelines": {"unclassified": 0, "single": 0, "multi": 0},
      "Trainer": {"unclassified": 0, "single": 0, "multi": 0},
      "ONNX": {"unclassified": 0, "single": 0, "multi": 0},
      "Auto": {"unclassified": 0, "single": 0, "multi": 0},
      "Quantization": {"unclassified": 0, "single": 0, "multi": 0},
      "Unclassified": {"unclassified": 0, "single": 0, "multi": 0}
    },
    "errors": 0,
    "success": 320,
    "skipped": 275,
    "time_spent": [136.13, 131.94],
    "error": false,
    "failures": {
      "multi": [
        {
          "line": "tests/models/qwen2_audio/test_modeling_qwen2_audio.py::Qwen2AudioForConditionalGenerationModelTest::test_multi_gpu_data_parallel_forward",
          "trace": "(line 769) StopIteration: Caught StopIteration in replica 1 on device 1."
        }
      ]
    },
    "job_link": {
      "multi": "https://github.com/huggingface/transformers/actions/runs/24451435462/job/71441682688",
      "single": "https://github.com/huggingface/transformers/actions/runs/24451435462/job/71441682446"
    },
    "captured_info": {}
  },
  "models_smolvlm": {
    "failed": {
      "PyTorch": {"unclassified": 0, "single": 0, "multi": 2},
      "Tokenizers": {"unclassified": 0, "single": 0, "multi": 0},
      "Pipelines": {"unclassified": 0, "single": 0, "multi": 0},
      "Trainer": {"unclassified": 0, "single": 0, "multi": 0},
      "ONNX": {"unclassified": 0, "single": 0, "multi": 0},
      "Auto": {"unclassified": 0, "single": 0, "multi": 0},
      "Quantization": {"unclassified": 0, "single": 0, "multi": 0},
      "Unclassified": {"unclassified": 0, "single": 0, "multi": 0}
    },
    "errors": 0,
    "success": 667,
    "skipped": 309,
    "time_spent": [123.9, 124.95],
    "error": false,
    "failures": {
      "multi": [
        {
          "line": "tests/models/smolvlm/test_modeling_smolvlm.py::SmolVLMModelTest::test_multi_gpu_data_parallel_forward",
          "trace": "(line 769) StopIteration: Caught StopIteration in replica 1 on device 1."
        },
        {
          "line": "tests/models/smolvlm/test_modeling_smolvlm.py::SmolVLMForConditionalGenerationModelTest::test_multi_gpu_data_parallel_forward",
          "trace": "(line 769) StopIteration: Caught StopIteration in replica 1 on device 1."
        }
      ]
    },
    "job_link": {
      "single": "https://github.com/huggingface/transformers/actions/runs/24451435462/job/71441682467",
      "multi": "https://github.com/huggingface/transformers/actions/runs/24451435462/job/71441682621"
    },
    "captured_info": {}
  },
  "models_t5": {
    "failed": {
      "PyTorch": {"unclassified": 0, "single": 0, "multi": 0},
      "Tokenizers": {"unclassified": 0, "single": 0, "multi": 0},
      "Pipelines": {"unclassified": 0, "single": 0, "multi": 0},
      "Trainer": {"unclassified": 0, "single": 0, "multi": 0},
      "ONNX": {"unclassified": 0, "single": 0, "multi": 0},
      "Auto": {"unclassified": 0, "single": 0, "multi": 0},
      "Quantization": {"unclassified": 0, "single": 0, "multi": 0},
      "Unclassified": {"unclassified": 0, "single": 0, "multi": 0}
    },
    "errors": 0,
    "success": 521,
    "skipped": 507,
    "time_spent": [164.09, 162.79],
    "error": false,
    "failures": {},
    "job_link": {
      "single": "https://github.com/huggingface/transformers/actions/runs/24451435462/job/71441682452",
      "multi": "https://github.com/huggingface/transformers/actions/runs/24451435462/job/71441682690"
    },
    "captured_info": {}
  },
  "models_table_transformer": {
    "failed": {
      "PyTorch": {"unclassified": 0, "single": 1, "multi": 1},
      "Tokenizers": {"unclassified": 0, "single": 0, "multi": 0},
      "Pipelines": {"unclassified": 0, "single": 0, "multi": 0},
      "Trainer": {"unclassified": 0, "single": 0, "multi": 0},
      "ONNX": {"unclassified": 0, "single": 0, "multi": 0},
      "Auto": {"unclassified": 0, "single": 0, "multi": 0},
      "Quantization": {"unclassified": 0, "single": 0, "multi": 0},
      "Unclassified": {"unclassified": 0, "single": 0, "multi": 0}
    },
    "errors": 0,
    "success": 156,
    "skipped": 238,
    "time_spent": [50.28, 48.83],
    "error": false,
    "failures": {
      "multi": [
+ {
1708
+ "line": "tests/models/table_transformer/test_modeling_table_transformer.py::TableTransformerModelIntegrationTests::test_table_detection",
1709
+ "trace": "(line 554) AssertionError: Tensor-likes are not close!"
1710
+ }
1711
+ ],
1712
+ "single": [
1713
+ {
1714
+ "line": "tests/models/table_transformer/test_modeling_table_transformer.py::TableTransformerModelIntegrationTests::test_table_detection",
1715
+ "trace": "(line 554) AssertionError: Tensor-likes are not close!"
1716
+ }
1717
+ ]
1718
+ },
1719
+ "job_link": {
1720
+ "multi": "https://github.com/huggingface/transformers/actions/runs/24451435462/job/71441682610",
1721
+ "single": "https://github.com/huggingface/transformers/actions/runs/24451435462/job/71441682443"
1722
+ },
1723
+ "captured_info": {
1724
+ "multi": "https://github.com/huggingface/transformers/actions/runs/24451435462/job/71441682610#step:16:1",
1725
+ "single": "https://github.com/huggingface/transformers/actions/runs/24451435462/job/71441682443#step:16:1"
1726
+ }
1727
+ },
1728
+ "models_vit": {
1729
+ "failed": {
1730
+ "PyTorch": {
1731
+ "unclassified": 0,
1732
+ "single": 0,
1733
+ "multi": 0
1734
+ },
1735
+ "Tokenizers": {
1736
+ "unclassified": 0,
1737
+ "single": 0,
1738
+ "multi": 0
1739
+ },
1740
+ "Pipelines": {
1741
+ "unclassified": 0,
1742
+ "single": 0,
1743
+ "multi": 0
1744
+ },
1745
+ "Trainer": {
1746
+ "unclassified": 0,
1747
+ "single": 0,
1748
+ "multi": 0
1749
+ },
1750
+ "ONNX": {
1751
+ "unclassified": 0,
1752
+ "single": 0,
1753
+ "multi": 0
1754
+ },
1755
+ "Auto": {
1756
+ "unclassified": 0,
1757
+ "single": 0,
1758
+ "multi": 0
1759
+ },
1760
+ "Quantization": {
1761
+ "unclassified": 0,
1762
+ "single": 0,
1763
+ "multi": 0
1764
+ },
1765
+ "Unclassified": {
1766
+ "unclassified": 0,
1767
+ "single": 0,
1768
+ "multi": 0
1769
+ }
1770
+ },
1771
+ "errors": 0,
1772
+ "success": 259,
1773
+ "skipped": 175,
1774
+ "time_spent": [
1775
+ 52.15,
1776
+ 51.91
1777
+ ],
1778
+ "error": false,
1779
+ "failures": {},
1780
+ "job_link": {
1781
+ "single": "https://github.com/huggingface/transformers/actions/runs/24451435462/job/71441682486",
1782
+ "multi": "https://github.com/huggingface/transformers/actions/runs/24451435462/job/71441682628"
1783
+ },
1784
+ "captured_info": {}
1785
+ },
1786
+ "models_wav2vec2": {
1787
+ "failed": {
1788
+ "PyTorch": {
1789
+ "unclassified": 0,
1790
+ "single": 0,
1791
+ "multi": 0
1792
+ },
1793
+ "Tokenizers": {
1794
+ "unclassified": 0,
1795
+ "single": 0,
1796
+ "multi": 0
1797
+ },
1798
+ "Pipelines": {
1799
+ "unclassified": 0,
1800
+ "single": 0,
1801
+ "multi": 0
1802
+ },
1803
+ "Trainer": {
1804
+ "unclassified": 0,
1805
+ "single": 0,
1806
+ "multi": 0
1807
+ },
1808
+ "ONNX": {
1809
+ "unclassified": 0,
1810
+ "single": 0,
1811
+ "multi": 0
1812
+ },
1813
+ "Auto": {
1814
+ "unclassified": 0,
1815
+ "single": 0,
1816
+ "multi": 0
1817
+ },
1818
+ "Quantization": {
1819
+ "unclassified": 0,
1820
+ "single": 0,
1821
+ "multi": 0
1822
+ },
1823
+ "Unclassified": {
1824
+ "unclassified": 0,
1825
+ "single": 0,
1826
+ "multi": 0
1827
+ }
1828
+ },
1829
+ "errors": 0,
1830
+ "success": 0,
1831
+ "skipped": 0,
1832
+ "time_spent": [
1833
+ 5.62,
1834
+ 5.62
1835
+ ],
1836
+ "error": false,
1837
+ "failures": {},
1838
+ "job_link": {
1839
+ "multi": "https://github.com/huggingface/transformers/actions/runs/24451435462/job/71441682711",
1840
+ "single": "https://github.com/huggingface/transformers/actions/runs/24451435462/job/71441682504"
1841
+ },
1842
+ "captured_info": {}
1843
+ },
1844
+ "models_whisper": {
1845
+ "failed": {
1846
+ "PyTorch": {
1847
+ "unclassified": 0,
1848
+ "single": 0,
1849
+ "multi": 0
1850
+ },
1851
+ "Tokenizers": {
1852
+ "unclassified": 0,
1853
+ "single": 0,
1854
+ "multi": 0
1855
+ },
1856
+ "Pipelines": {
1857
+ "unclassified": 0,
1858
+ "single": 0,
1859
+ "multi": 0
1860
+ },
1861
+ "Trainer": {
1862
+ "unclassified": 0,
1863
+ "single": 0,
1864
+ "multi": 0
1865
+ },
1866
+ "ONNX": {
1867
+ "unclassified": 0,
1868
+ "single": 0,
1869
+ "multi": 0
1870
+ },
1871
+ "Auto": {
1872
+ "unclassified": 0,
1873
+ "single": 0,
1874
+ "multi": 0
1875
+ },
1876
+ "Quantization": {
1877
+ "unclassified": 0,
1878
+ "single": 0,
1879
+ "multi": 0
1880
+ },
1881
+ "Unclassified": {
1882
+ "unclassified": 0,
1883
+ "single": 0,
1884
+ "multi": 0
1885
+ }
1886
+ },
1887
+ "errors": 0,
1888
+ "success": 0,
1889
+ "skipped": 0,
1890
+ "time_spent": [
1891
+ 5.59,
1892
+ 5.74
1893
+ ],
1894
+ "error": false,
1895
+ "failures": {},
1896
+ "job_link": {
1897
+ "single": "https://github.com/huggingface/transformers/actions/runs/24451435462/job/71441682462",
1898
+ "multi": "https://github.com/huggingface/transformers/actions/runs/24451435462/job/71441682656"
1899
+ },
1900
+ "captured_info": {}
1901
+ }
1902
+ }
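
The report above is a single JSON object keyed by model suite; each entry carries per-framework failure counts under "failed" (split into "unclassified", "single", and "multi" GPU setups), aggregate "errors"/"success"/"skipped" totals, "time_spent" for the single- and multi-GPU jobs, a "failures" map from GPU setup to the failing test ("line") and its trace, and the corresponding "job_link"s. A minimal sketch of how such a report could be consumed, assuming the file follows exactly the shape shown above; the path and function name below are hypothetical, not part of the CI tooling:

import json

# Hypothetical local path to a downloaded copy of the report; adjust as needed.
REPORT_PATH = "model_results.json"

def summarize(report_path: str) -> None:
    """Print failing tests, their traces, and job links per model suite."""
    with open(report_path, encoding="utf-8") as f:
        results = json.load(f)

    for model, entry in results.items():
        # "failed" maps framework -> {"unclassified"/"single"/"multi": count}.
        total_failed = sum(
            count
            for framework in entry.get("failed", {}).values()
            for count in framework.values()
        )
        if total_failed == 0:
            continue
        print(f"{model}: {total_failed} failed, "
              f"{entry.get('success', 0)} passed, {entry.get('skipped', 0)} skipped")
        # "failures" maps a GPU setup ("single"/"multi") to its failing tests.
        for setup, failures in entry.get("failures", {}).items():
            for failure in failures:
                print(f"  [{setup}] {failure['line']}")
                print(f"          {failure['trace']}")
            link = entry.get("job_link", {}).get(setup)
            if link:
                print(f"  [{setup}] job: {link}")

if __name__ == "__main__":
    summarize(REPORT_PATH)

Against the entries shown above, only suites with nonzero "failed" counts (e.g. models_qwen2_audio, models_smolvlm, models_table_transformer) would be printed; fully green suites such as models_t5 and models_vit are skipped.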