hf-transformers-bot committed on
Commit ac5b8d3 · verified · 1 Parent(s): e8af79e

Upload 2025-08-26/runs/4875-17232703157/ci_results_run_models_gpu/model_results.json with huggingface_hub

2025-08-26/runs/4875-17232703157/ci_results_run_models_gpu/model_results.json ADDED
@@ -0,0 +1,1791 @@
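The uploaded `model_results.json` maps each `models_*` key to per-framework failure counts, totals, timings, and job links. As a minimal sketch (not part of the upload), the per-model failures can be tallied like this; the inline sample mirrors the schema shown below, with abbreviated test paths:

```python
import json

# Inline sample mirroring the model_results.json schema (abbreviated paths).
sample = json.loads("""
{
  "models_align": {
    "failed": {"PyTorch": {"unclassified": 0, "single": 0, "multi": 1}},
    "errors": 0, "success": 336, "skipped": 659,
    "failures": {"multi": [{"line": "tests/models/align/...", "trace": "RuntimeError: ..."}]}
  }
}
""")

def tally(results):
    # Sum unclassified/single/multi failure counts across frameworks per model.
    return {
        model: sum(
            counts[kind]
            for counts in payload["failed"].values()
            for kind in ("unclassified", "single", "multi")
        )
        for model, payload in results.items()
    }

print(tally(sample))  # {'models_align': 1}
```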
1
+ {
2
+ "models_aimv2": {
3
+ "failed": {
4
+ "PyTorch": {
5
+ "unclassified": 0,
6
+ "single": 0,
7
+ "multi": 0
8
+ },
9
+ "TensorFlow": {
10
+ "unclassified": 0,
11
+ "single": 0,
12
+ "multi": 0
13
+ },
14
+ "Flax": {
15
+ "unclassified": 0,
16
+ "single": 0,
17
+ "multi": 0
18
+ },
19
+ "Tokenizers": {
20
+ "unclassified": 0,
21
+ "single": 0,
22
+ "multi": 0
23
+ },
24
+ "Pipelines": {
25
+ "unclassified": 0,
26
+ "single": 0,
27
+ "multi": 0
28
+ },
29
+ "Trainer": {
30
+ "unclassified": 0,
31
+ "single": 0,
32
+ "multi": 0
33
+ },
34
+ "ONNX": {
35
+ "unclassified": 0,
36
+ "single": 0,
37
+ "multi": 0
38
+ },
39
+ "Auto": {
40
+ "unclassified": 0,
41
+ "single": 0,
42
+ "multi": 0
43
+ },
44
+ "Quantization": {
45
+ "unclassified": 0,
46
+ "single": 0,
47
+ "multi": 0
48
+ },
49
+ "Unclassified": {
50
+ "unclassified": 0,
51
+ "single": 0,
52
+ "multi": 0
53
+ }
54
+ },
55
+ "errors": 0,
56
+ "success": 499,
57
+ "skipped": 395,
58
+ "time_spent": [
59
+ 93.42,
60
+ 95.37
61
+ ],
62
+ "failures": {},
63
+ "job_link": {
64
+ "single": "https://github.com/huggingface/transformers/actions/runs/17232703157/job/48890483690",
65
+ "multi": "https://github.com/huggingface/transformers/actions/runs/17232703157/job/48890483242"
66
+ }
67
+ },
68
+ "models_albert": {
69
+ "failed": {
70
+ "PyTorch": {
71
+ "unclassified": 0,
72
+ "single": 0,
73
+ "multi": 0
74
+ },
75
+ "TensorFlow": {
76
+ "unclassified": 0,
77
+ "single": 0,
78
+ "multi": 0
79
+ },
80
+ "Flax": {
81
+ "unclassified": 0,
82
+ "single": 0,
83
+ "multi": 0
84
+ },
85
+ "Tokenizers": {
86
+ "unclassified": 0,
87
+ "single": 0,
88
+ "multi": 0
89
+ },
90
+ "Pipelines": {
91
+ "unclassified": 0,
92
+ "single": 0,
93
+ "multi": 0
94
+ },
95
+ "Trainer": {
96
+ "unclassified": 0,
97
+ "single": 0,
98
+ "multi": 0
99
+ },
100
+ "ONNX": {
101
+ "unclassified": 0,
102
+ "single": 0,
103
+ "multi": 0
104
+ },
105
+ "Auto": {
106
+ "unclassified": 0,
107
+ "single": 0,
108
+ "multi": 0
109
+ },
110
+ "Quantization": {
111
+ "unclassified": 0,
112
+ "single": 0,
113
+ "multi": 0
114
+ },
115
+ "Unclassified": {
116
+ "unclassified": 0,
117
+ "single": 0,
118
+ "multi": 0
119
+ }
120
+ },
121
+ "errors": 0,
122
+ "success": 434,
123
+ "skipped": 166,
124
+ "time_spent": [
125
+ 171.6,
126
+ 169.29
127
+ ],
128
+ "failures": {},
129
+ "job_link": {
130
+ "single": "https://github.com/huggingface/transformers/actions/runs/17232703157/job/48890483683",
131
+ "multi": "https://github.com/huggingface/transformers/actions/runs/17232703157/job/48890483243"
132
+ }
133
+ },
134
+ "models_align": {
135
+ "failed": {
136
+ "PyTorch": {
137
+ "unclassified": 0,
138
+ "single": 0,
139
+ "multi": 1
140
+ },
141
+ "TensorFlow": {
142
+ "unclassified": 0,
143
+ "single": 0,
144
+ "multi": 0
145
+ },
146
+ "Flax": {
147
+ "unclassified": 0,
148
+ "single": 0,
149
+ "multi": 0
150
+ },
151
+ "Tokenizers": {
152
+ "unclassified": 0,
153
+ "single": 0,
154
+ "multi": 0
155
+ },
156
+ "Pipelines": {
157
+ "unclassified": 0,
158
+ "single": 0,
159
+ "multi": 0
160
+ },
161
+ "Trainer": {
162
+ "unclassified": 0,
163
+ "single": 0,
164
+ "multi": 0
165
+ },
166
+ "ONNX": {
167
+ "unclassified": 0,
168
+ "single": 0,
169
+ "multi": 0
170
+ },
171
+ "Auto": {
172
+ "unclassified": 0,
173
+ "single": 0,
174
+ "multi": 0
175
+ },
176
+ "Quantization": {
177
+ "unclassified": 0,
178
+ "single": 0,
179
+ "multi": 0
180
+ },
181
+ "Unclassified": {
182
+ "unclassified": 0,
183
+ "single": 0,
184
+ "multi": 0
185
+ }
186
+ },
187
+ "errors": 0,
188
+ "success": 336,
189
+ "skipped": 659,
190
+ "time_spent": [
191
+ 74.79,
192
+ 72.98
193
+ ],
194
+ "failures": {
195
+ "multi": [
196
+ {
197
+ "line": "tests/models/align/test_modeling_align.py::AlignTextModelTest::test_model_parallelism",
198
+ "trace": "(line 596) RuntimeError: Expected all tensors to be on the same device, but got mat2 is on cuda:1, different from other tensors on cuda:0 (when checking argument in method wrapper_CUDA_bmm)"
199
+ }
200
+ ]
201
+ },
202
+ "job_link": {
203
+ "multi": "https://github.com/huggingface/transformers/actions/runs/17232703157/job/48890483267",
204
+ "single": "https://github.com/huggingface/transformers/actions/runs/17232703157/job/48890483704"
205
+ }
206
+ },
207
+ "models_altclip": {
208
+ "failed": {
209
+ "PyTorch": {
210
+ "unclassified": 0,
211
+ "single": 0,
212
+ "multi": 0
213
+ },
214
+ "TensorFlow": {
215
+ "unclassified": 0,
216
+ "single": 0,
217
+ "multi": 0
218
+ },
219
+ "Flax": {
220
+ "unclassified": 0,
221
+ "single": 0,
222
+ "multi": 0
223
+ },
224
+ "Tokenizers": {
225
+ "unclassified": 0,
226
+ "single": 0,
227
+ "multi": 0
228
+ },
229
+ "Pipelines": {
230
+ "unclassified": 0,
231
+ "single": 0,
232
+ "multi": 0
233
+ },
234
+ "Trainer": {
235
+ "unclassified": 0,
236
+ "single": 0,
237
+ "multi": 0
238
+ },
239
+ "ONNX": {
240
+ "unclassified": 0,
241
+ "single": 0,
242
+ "multi": 0
243
+ },
244
+ "Auto": {
245
+ "unclassified": 0,
246
+ "single": 0,
247
+ "multi": 0
248
+ },
249
+ "Quantization": {
250
+ "unclassified": 0,
251
+ "single": 0,
252
+ "multi": 0
253
+ },
254
+ "Unclassified": {
255
+ "unclassified": 0,
256
+ "single": 0,
257
+ "multi": 0
258
+ }
259
+ },
260
+ "errors": 0,
261
+ "success": 172,
262
+ "skipped": 320,
263
+ "time_spent": [
264
+ 12.16,
265
+ 148.76
266
+ ],
267
+ "failures": {},
268
+ "job_link": {
269
+ "single": "https://github.com/huggingface/transformers/actions/runs/17232703157/job/48890483652",
270
+ "multi": "https://github.com/huggingface/transformers/actions/runs/17232703157/job/48890483251"
271
+ }
272
+ },
273
+ "models_arcee": {
274
+ "failed": {
275
+ "PyTorch": {
276
+ "unclassified": 0,
277
+ "single": 0,
278
+ "multi": 0
279
+ },
280
+ "TensorFlow": {
281
+ "unclassified": 0,
282
+ "single": 0,
283
+ "multi": 0
284
+ },
285
+ "Flax": {
286
+ "unclassified": 0,
287
+ "single": 0,
288
+ "multi": 0
289
+ },
290
+ "Tokenizers": {
291
+ "unclassified": 0,
292
+ "single": 0,
293
+ "multi": 0
294
+ },
295
+ "Pipelines": {
296
+ "unclassified": 0,
297
+ "single": 0,
298
+ "multi": 0
299
+ },
300
+ "Trainer": {
301
+ "unclassified": 0,
302
+ "single": 0,
303
+ "multi": 0
304
+ },
305
+ "ONNX": {
306
+ "unclassified": 0,
307
+ "single": 0,
308
+ "multi": 0
309
+ },
310
+ "Auto": {
311
+ "unclassified": 0,
312
+ "single": 0,
313
+ "multi": 0
314
+ },
315
+ "Quantization": {
316
+ "unclassified": 0,
317
+ "single": 0,
318
+ "multi": 0
319
+ },
320
+ "Unclassified": {
321
+ "unclassified": 0,
322
+ "single": 0,
323
+ "multi": 0
324
+ }
325
+ },
326
+ "errors": 0,
327
+ "success": 281,
328
+ "skipped": 199,
329
+ "time_spent": [
330
+ 137.34,
331
+ 136.1
332
+ ],
333
+ "failures": {},
334
+ "job_link": {
335
+ "multi": "https://github.com/huggingface/transformers/actions/runs/17232703157/job/48890483271",
336
+ "single": "https://github.com/huggingface/transformers/actions/runs/17232703157/job/48890483668"
337
+ }
338
+ },
339
+ "models_aria": {
340
+ "failed": {
341
+ "PyTorch": {
342
+ "unclassified": 0,
343
+ "single": 0,
344
+ "multi": 0
345
+ },
346
+ "TensorFlow": {
347
+ "unclassified": 0,
348
+ "single": 0,
349
+ "multi": 0
350
+ },
351
+ "Flax": {
352
+ "unclassified": 0,
353
+ "single": 0,
354
+ "multi": 0
355
+ },
356
+ "Tokenizers": {
357
+ "unclassified": 0,
358
+ "single": 0,
359
+ "multi": 0
360
+ },
361
+ "Pipelines": {
362
+ "unclassified": 0,
363
+ "single": 0,
364
+ "multi": 0
365
+ },
366
+ "Trainer": {
367
+ "unclassified": 0,
368
+ "single": 0,
369
+ "multi": 0
370
+ },
371
+ "ONNX": {
372
+ "unclassified": 0,
373
+ "single": 0,
374
+ "multi": 0
375
+ },
376
+ "Auto": {
377
+ "unclassified": 0,
378
+ "single": 0,
379
+ "multi": 0
380
+ },
381
+ "Quantization": {
382
+ "unclassified": 0,
383
+ "single": 0,
384
+ "multi": 0
385
+ },
386
+ "Unclassified": {
387
+ "unclassified": 0,
388
+ "single": 0,
389
+ "multi": 0
390
+ }
391
+ },
392
+ "errors": 0,
393
+ "success": 303,
394
+ "skipped": 199,
395
+ "time_spent": [
396
+ 203.1,
397
+ 204.84
398
+ ],
399
+ "failures": {},
400
+ "job_link": {
401
+ "single": "https://github.com/huggingface/transformers/actions/runs/17232703157/job/48890483661",
402
+ "multi": "https://github.com/huggingface/transformers/actions/runs/17232703157/job/48890483272"
403
+ }
404
+ },
405
+ "models_audio_spectrogram_transformer": {
406
+ "failed": {
407
+ "PyTorch": {
408
+ "unclassified": 0,
409
+ "single": 0,
410
+ "multi": 0
411
+ },
412
+ "TensorFlow": {
413
+ "unclassified": 0,
414
+ "single": 0,
415
+ "multi": 0
416
+ },
417
+ "Flax": {
418
+ "unclassified": 0,
419
+ "single": 0,
420
+ "multi": 0
421
+ },
422
+ "Tokenizers": {
423
+ "unclassified": 0,
424
+ "single": 0,
425
+ "multi": 0
426
+ },
427
+ "Pipelines": {
428
+ "unclassified": 0,
429
+ "single": 0,
430
+ "multi": 0
431
+ },
432
+ "Trainer": {
433
+ "unclassified": 0,
434
+ "single": 0,
435
+ "multi": 0
436
+ },
437
+ "ONNX": {
438
+ "unclassified": 0,
439
+ "single": 0,
440
+ "multi": 0
441
+ },
442
+ "Auto": {
443
+ "unclassified": 0,
444
+ "single": 0,
445
+ "multi": 0
446
+ },
447
+ "Quantization": {
448
+ "unclassified": 0,
449
+ "single": 0,
450
+ "multi": 0
451
+ },
452
+ "Unclassified": {
453
+ "unclassified": 0,
454
+ "single": 0,
455
+ "multi": 0
456
+ }
457
+ },
458
+ "errors": 0,
459
+ "success": 250,
460
+ "skipped": 196,
461
+ "time_spent": [
462
+ 46.43,
463
+ 46.67
464
+ ],
465
+ "failures": {},
466
+ "job_link": {
467
+ "single": "https://github.com/huggingface/transformers/actions/runs/17232703157/job/48890483701",
468
+ "multi": "https://github.com/huggingface/transformers/actions/runs/17232703157/job/48890483265"
469
+ }
470
+ },
471
+ "models_auto": {
472
+ "failed": {
473
+ "PyTorch": {
474
+ "unclassified": 0,
475
+ "single": 0,
476
+ "multi": 0
477
+ },
478
+ "TensorFlow": {
479
+ "unclassified": 0,
480
+ "single": 0,
481
+ "multi": 0
482
+ },
483
+ "Flax": {
484
+ "unclassified": 0,
485
+ "single": 0,
486
+ "multi": 0
487
+ },
488
+ "Tokenizers": {
489
+ "unclassified": 0,
490
+ "single": 0,
491
+ "multi": 0
492
+ },
493
+ "Pipelines": {
494
+ "unclassified": 0,
495
+ "single": 0,
496
+ "multi": 0
497
+ },
498
+ "Trainer": {
499
+ "unclassified": 0,
500
+ "single": 0,
501
+ "multi": 0
502
+ },
503
+ "ONNX": {
504
+ "unclassified": 0,
505
+ "single": 0,
506
+ "multi": 0
507
+ },
508
+ "Auto": {
509
+ "unclassified": 0,
510
+ "single": 0,
511
+ "multi": 0
512
+ },
513
+ "Quantization": {
514
+ "unclassified": 0,
515
+ "single": 0,
516
+ "multi": 0
517
+ },
518
+ "Unclassified": {
519
+ "unclassified": 0,
520
+ "single": 0,
521
+ "multi": 0
522
+ }
523
+ },
524
+ "errors": 0,
525
+ "success": 226,
526
+ "skipped": 10,
527
+ "time_spent": [
528
+ 53.96,
529
+ 58.38
530
+ ],
531
+ "failures": {},
532
+ "job_link": {
533
+ "single": "https://github.com/huggingface/transformers/actions/runs/17232703157/job/48890483726",
534
+ "multi": "https://github.com/huggingface/transformers/actions/runs/17232703157/job/48890483250"
535
+ }
536
+ },
537
+ "models_autoformer": {
538
+ "failed": {
539
+ "PyTorch": {
540
+ "unclassified": 0,
541
+ "single": 0,
542
+ "multi": 0
543
+ },
544
+ "TensorFlow": {
545
+ "unclassified": 0,
546
+ "single": 0,
547
+ "multi": 0
548
+ },
549
+ "Flax": {
550
+ "unclassified": 0,
551
+ "single": 0,
552
+ "multi": 0
553
+ },
554
+ "Tokenizers": {
555
+ "unclassified": 0,
556
+ "single": 0,
557
+ "multi": 0
558
+ },
559
+ "Pipelines": {
560
+ "unclassified": 0,
561
+ "single": 0,
562
+ "multi": 0
563
+ },
564
+ "Trainer": {
565
+ "unclassified": 0,
566
+ "single": 0,
567
+ "multi": 0
568
+ },
569
+ "ONNX": {
570
+ "unclassified": 0,
571
+ "single": 0,
572
+ "multi": 0
573
+ },
574
+ "Auto": {
575
+ "unclassified": 0,
576
+ "single": 0,
577
+ "multi": 0
578
+ },
579
+ "Quantization": {
580
+ "unclassified": 0,
581
+ "single": 0,
582
+ "multi": 0
583
+ },
584
+ "Unclassified": {
585
+ "unclassified": 0,
586
+ "single": 0,
587
+ "multi": 0
588
+ }
589
+ },
590
+ "errors": 0,
591
+ "success": 106,
592
+ "skipped": 272,
593
+ "time_spent": [
594
+ 27.42,
595
+ 28.23
596
+ ],
597
+ "failures": {},
598
+ "job_link": {
599
+ "single": "https://github.com/huggingface/transformers/actions/runs/17232703157/job/48890483677",
600
+ "multi": "https://github.com/huggingface/transformers/actions/runs/17232703157/job/48890483275"
601
+ }
602
+ },
603
+ "models_aya_vision": {
604
+ "failed": {
605
+ "PyTorch": {
606
+ "unclassified": 0,
607
+ "single": 0,
608
+ "multi": 1
609
+ },
610
+ "TensorFlow": {
611
+ "unclassified": 0,
612
+ "single": 0,
613
+ "multi": 0
614
+ },
615
+ "Flax": {
616
+ "unclassified": 0,
617
+ "single": 0,
618
+ "multi": 0
619
+ },
620
+ "Tokenizers": {
621
+ "unclassified": 0,
622
+ "single": 0,
623
+ "multi": 0
624
+ },
625
+ "Pipelines": {
626
+ "unclassified": 0,
627
+ "single": 0,
628
+ "multi": 0
629
+ },
630
+ "Trainer": {
631
+ "unclassified": 0,
632
+ "single": 0,
633
+ "multi": 0
634
+ },
635
+ "ONNX": {
636
+ "unclassified": 0,
637
+ "single": 0,
638
+ "multi": 0
639
+ },
640
+ "Auto": {
641
+ "unclassified": 0,
642
+ "single": 0,
643
+ "multi": 0
644
+ },
645
+ "Quantization": {
646
+ "unclassified": 0,
647
+ "single": 0,
648
+ "multi": 0
649
+ },
650
+ "Unclassified": {
651
+ "unclassified": 0,
652
+ "single": 0,
653
+ "multi": 0
654
+ }
655
+ },
656
+ "errors": 0,
657
+ "success": 130,
658
+ "skipped": 148,
659
+ "time_spent": [
660
+ 161.27
661
+ ],
662
+ "failures": {
663
+ "multi": [
664
+ {
665
+ "line": "tests/models/aya_vision/test_modeling_aya_vision.py::AyaVisionModelTest::test_sdpa_can_dispatch_on_flash",
666
+ "trace": "(line 83) RuntimeError: No available kernel. Aborting execution."
667
+ }
668
+ ]
669
+ },
670
+ "job_link": {
671
+ "multi": "https://github.com/huggingface/transformers/actions/runs/17232703157/job/48890483274"
672
+ }
673
+ },
674
+ "models_bamba": {
675
+ "failed": {
676
+ "PyTorch": {
677
+ "unclassified": 0,
678
+ "single": 0,
679
+ "multi": 5
680
+ },
681
+ "TensorFlow": {
682
+ "unclassified": 0,
683
+ "single": 0,
684
+ "multi": 0
685
+ },
686
+ "Flax": {
687
+ "unclassified": 0,
688
+ "single": 0,
689
+ "multi": 0
690
+ },
691
+ "Tokenizers": {
692
+ "unclassified": 0,
693
+ "single": 0,
694
+ "multi": 0
695
+ },
696
+ "Pipelines": {
697
+ "unclassified": 0,
698
+ "single": 0,
699
+ "multi": 0
700
+ },
701
+ "Trainer": {
702
+ "unclassified": 0,
703
+ "single": 0,
704
+ "multi": 0
705
+ },
706
+ "ONNX": {
707
+ "unclassified": 0,
708
+ "single": 0,
709
+ "multi": 0
710
+ },
711
+ "Auto": {
712
+ "unclassified": 0,
713
+ "single": 0,
714
+ "multi": 0
715
+ },
716
+ "Quantization": {
717
+ "unclassified": 0,
718
+ "single": 0,
719
+ "multi": 0
720
+ },
721
+ "Unclassified": {
722
+ "unclassified": 0,
723
+ "single": 0,
724
+ "multi": 0
725
+ }
726
+ },
727
+ "errors": 0,
728
+ "success": 111,
729
+ "skipped": 116,
730
+ "time_spent": [
731
+ 109.23
732
+ ],
733
+ "failures": {
734
+ "multi": [
735
+ {
736
+ "line": "tests/models/bamba/test_modeling_bamba.py::BambaModelTest::test_flash_attention_2_padding_matches_padding_free_with_position_ids_seq_idx_and_fa_kwargs",
737
+ "trace": "(line 932) NotImplementedError: `seq_idx` support requires fast path support. Please install `mamba_ssm` and `causal_conv1d`"
738
+ },
739
+ {
740
+ "line": "tests/models/bamba/test_modeling_bamba.py::BambaModelTest::test_flash_attn_2_inference_equivalence",
741
+ "trace": "(line 181) ValueError: The dimensions for the Mamba head state do not match the model intermediate_size"
742
+ },
743
+ {
744
+ "line": "tests/models/bamba/test_modeling_bamba.py::BambaModelTest::test_flash_attn_2_inference_equivalence_right_padding",
745
+ "trace": "(line 181) ValueError: The dimensions for the Mamba head state do not match the model intermediate_size"
746
+ },
747
+ {
748
+ "line": "tests/models/bamba/test_modeling_bamba.py::BambaModelIntegrationTest::test_simple_batched_generate_with_padding",
749
+ "trace": "(line 861) torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 8.00 GiB. GPU 0 has a total capacity of 22.18 GiB of which 3.17 GiB is free. Process 22400 has 19.00 GiB memory in use. Of the allocated memory 18.54 GiB is allocated by PyTorch, and 4.28 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)"
750
+ },
751
+ {
752
+ "line": "tests/models/bamba/test_modeling_bamba.py::BambaModelIntegrationTest::test_simple_generate",
753
+ "trace": "(line 861) torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 4.00 GiB. GPU 0 has a total capacity of 22.18 GiB of which 3.03 GiB is free. Process 22400 has 19.15 GiB memory in use. Of the allocated memory 18.67 GiB is allocated by PyTorch, and 11.39 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)"
754
+ }
755
+ ]
756
+ },
757
+ "job_link": {
758
+ "multi": "https://github.com/huggingface/transformers/actions/runs/17232703157/job/48890483294"
759
+ }
760
+ },
761
+ "models_bartpho": {
762
+ "failed": {
763
+ "PyTorch": {
764
+ "unclassified": 0,
765
+ "single": 0,
766
+ "multi": 0
767
+ },
768
+ "TensorFlow": {
769
+ "unclassified": 0,
770
+ "single": 0,
771
+ "multi": 0
772
+ },
773
+ "Flax": {
774
+ "unclassified": 0,
775
+ "single": 0,
776
+ "multi": 0
777
+ },
778
+ "Tokenizers": {
779
+ "unclassified": 0,
780
+ "single": 0,
781
+ "multi": 0
782
+ },
783
+ "Pipelines": {
784
+ "unclassified": 0,
785
+ "single": 0,
786
+ "multi": 0
787
+ },
788
+ "Trainer": {
789
+ "unclassified": 0,
790
+ "single": 0,
791
+ "multi": 0
792
+ },
793
+ "ONNX": {
794
+ "unclassified": 0,
795
+ "single": 0,
796
+ "multi": 0
797
+ },
798
+ "Auto": {
799
+ "unclassified": 0,
800
+ "single": 0,
801
+ "multi": 0
802
+ },
803
+ "Quantization": {
804
+ "unclassified": 0,
805
+ "single": 0,
806
+ "multi": 0
807
+ },
808
+ "Unclassified": {
809
+ "unclassified": 0,
810
+ "single": 0,
811
+ "multi": 0
812
+ }
813
+ },
814
+ "errors": 0,
815
+ "success": 89,
816
+ "skipped": 13,
817
+ "time_spent": [
818
+ 18.81
819
+ ],
820
+ "failures": {},
821
+ "job_link": {
822
+ "multi": "https://github.com/huggingface/transformers/actions/runs/17232703157/job/48890483326"
823
+ }
824
+ },
825
+ "models_beit": {
826
+ "failed": {
827
+ "PyTorch": {
828
+ "unclassified": 0,
829
+ "single": 0,
830
+ "multi": 0
831
+ },
832
+ "TensorFlow": {
833
+ "unclassified": 0,
834
+ "single": 0,
835
+ "multi": 0
836
+ },
837
+ "Flax": {
838
+ "unclassified": 0,
839
+ "single": 0,
840
+ "multi": 0
841
+ },
842
+ "Tokenizers": {
843
+ "unclassified": 0,
844
+ "single": 0,
845
+ "multi": 0
846
+ },
847
+ "Pipelines": {
848
+ "unclassified": 0,
849
+ "single": 0,
850
+ "multi": 0
851
+ },
852
+ "Trainer": {
853
+ "unclassified": 0,
854
+ "single": 0,
855
+ "multi": 0
856
+ },
857
+ "ONNX": {
858
+ "unclassified": 0,
859
+ "single": 0,
860
+ "multi": 0
861
+ },
862
+ "Auto": {
863
+ "unclassified": 0,
864
+ "single": 0,
865
+ "multi": 0
866
+ },
867
+ "Quantization": {
868
+ "unclassified": 0,
869
+ "single": 0,
870
+ "multi": 0
871
+ },
872
+ "Unclassified": {
873
+ "unclassified": 0,
874
+ "single": 0,
875
+ "multi": 0
876
+ }
877
+ },
878
+ "errors": 0,
879
+ "success": 125,
880
+ "skipped": 102,
881
+ "time_spent": [
882
+ 80.45
883
+ ],
884
+ "failures": {},
885
+ "job_link": {
886
+ "multi": "https://github.com/huggingface/transformers/actions/runs/17232703157/job/48890483303"
887
+ }
888
+ },
889
+ "models_bert_japanese": {
890
+ "failed": {
891
+ "PyTorch": {
892
+ "unclassified": 0,
893
+ "single": 0,
894
+ "multi": 0
895
+ },
896
+ "TensorFlow": {
897
+ "unclassified": 0,
898
+ "single": 0,
899
+ "multi": 0
900
+ },
901
+ "Flax": {
902
+ "unclassified": 0,
903
+ "single": 0,
904
+ "multi": 0
905
+ },
906
+ "Tokenizers": {
907
+ "unclassified": 0,
908
+ "single": 0,
909
+ "multi": 0
910
+ },
911
+ "Pipelines": {
912
+ "unclassified": 0,
913
+ "single": 0,
914
+ "multi": 0
915
+ },
916
+ "Trainer": {
917
+ "unclassified": 0,
918
+ "single": 0,
919
+ "multi": 0
920
+ },
921
+ "ONNX": {
922
+ "unclassified": 0,
923
+ "single": 0,
924
+ "multi": 0
925
+ },
926
+ "Auto": {
927
+ "unclassified": 0,
928
+ "single": 0,
929
+ "multi": 0
930
+ },
931
+ "Quantization": {
932
+ "unclassified": 0,
933
+ "single": 0,
934
+ "multi": 0
935
+ },
936
+ "Unclassified": {
937
+ "unclassified": 0,
938
+ "single": 0,
939
+ "multi": 0
940
+ }
941
+ },
942
+ "errors": 0,
943
+ "success": 1,
944
+ "skipped": 236,
945
+ "time_spent": [
946
+ 1.78
947
+ ],
948
+ "failures": {},
949
+ "job_link": {
950
+ "multi": "https://github.com/huggingface/transformers/actions/runs/17232703157/job/48890483682"
951
+ }
952
+ },
953
+ "models_megatron_gpt2": {
954
+ "failed": {
955
+ "PyTorch": {
956
+ "unclassified": 0,
957
+ "single": 0,
958
+ "multi": 0
959
+ },
960
+ "TensorFlow": {
961
+ "unclassified": 0,
962
+ "single": 0,
963
+ "multi": 0
964
+ },
965
+ "Flax": {
966
+ "unclassified": 0,
967
+ "single": 0,
968
+ "multi": 0
969
+ },
970
+ "Tokenizers": {
971
+ "unclassified": 0,
972
+ "single": 0,
973
+ "multi": 0
974
+ },
975
+ "Pipelines": {
976
+ "unclassified": 0,
977
+ "single": 0,
978
+ "multi": 0
979
+ },
980
+ "Trainer": {
981
+ "unclassified": 0,
982
+ "single": 0,
983
+ "multi": 0
984
+ },
985
+ "ONNX": {
986
+ "unclassified": 0,
987
+ "single": 0,
988
+ "multi": 0
989
+ },
990
+ "Auto": {
991
+ "unclassified": 0,
992
+ "single": 0,
993
+ "multi": 0
994
+ },
995
+ "Quantization": {
996
+ "unclassified": 0,
997
+ "single": 0,
998
+ "multi": 0
999
+ },
1000
+ "Unclassified": {
1001
+ "unclassified": 0,
1002
+ "single": 0,
1003
+ "multi": 0
1004
+ }
1005
+ },
1006
+ "errors": 0,
1007
+ "success": 0,
1008
+ "skipped": 2,
1009
+ "time_spent": [
1010
+ 0.87,
1011
+ 0.88
1012
+ ],
1013
+ "failures": {},
1014
+ "job_link": {
1015
+ "single": "https://github.com/huggingface/transformers/actions/runs/17232703157/job/48890483311",
1016
+ "multi": "https://github.com/huggingface/transformers/actions/runs/17232703157/job/48890483443"
1017
+ }
1018
+ },
1019
+ "models_metaclip_2": {
1020
+ "failed": {
1021
+ "PyTorch": {
1022
+ "unclassified": 0,
1023
+ "single": 1,
1024
+ "multi": 0
1025
+ },
1026
+ "TensorFlow": {
1027
+ "unclassified": 0,
1028
+ "single": 0,
1029
+ "multi": 0
1030
+ },
1031
+ "Flax": {
1032
+ "unclassified": 0,
1033
+ "single": 0,
1034
+ "multi": 0
1035
+ },
1036
+ "Tokenizers": {
1037
+ "unclassified": 0,
1038
+ "single": 0,
1039
+ "multi": 0
1040
+ },
1041
+ "Pipelines": {
1042
+ "unclassified": 0,
1043
+ "single": 0,
1044
+ "multi": 0
1045
+ },
1046
+ "Trainer": {
1047
+ "unclassified": 0,
1048
+ "single": 0,
1049
+ "multi": 0
1050
+ },
1051
+ "ONNX": {
1052
+ "unclassified": 0,
1053
+ "single": 0,
1054
+ "multi": 0
1055
+ },
1056
+ "Auto": {
1057
+ "unclassified": 0,
1058
+ "single": 0,
1059
+ "multi": 0
1060
+ },
1061
+ "Quantization": {
1062
+ "unclassified": 0,
1063
+ "single": 0,
1064
+ "multi": 0
1065
+ },
1066
+ "Unclassified": {
1067
+ "unclassified": 0,
1068
+ "single": 0,
1069
+ "multi": 0
1070
+ }
1071
+ },
1072
+ "errors": 0,
1073
+ "success": 671,
1074
+ "skipped": 596,
1075
+ "time_spent": [
1076
+ 120.0,
1077
+ 120.63
1078
+ ],
1079
+ "failures": {
1080
+ "single": [
1081
+ {
1082
+ "line": "tests/models/metaclip_2/test_modeling_metaclip_2.py::MetaClip2ModelTest::test_eager_matches_sdpa_inference_21_bf16_pad_right",
1083
+ "trace": "(line 484) ValueError: mean relative difference for logits_per_text: 3.198e-02, torch atol = 0.01, torch rtol = 0.01"
1084
+ }
1085
+ ]
1086
+ },
1087
+ "job_link": {
1088
+ "single": "https://github.com/huggingface/transformers/actions/runs/17232703157/job/48890483325",
1089
+ "multi": "https://github.com/huggingface/transformers/actions/runs/17232703157/job/48890483424"
1090
+ }
1091
+ },
1092
+ "models_mgp_str": {
1093
+ "failed": {
1094
+ "PyTorch": {
1095
+ "unclassified": 0,
1096
+ "single": 0,
1097
+ "multi": 1
1098
+ },
1099
+ "TensorFlow": {
1100
+ "unclassified": 0,
1101
+ "single": 0,
1102
+ "multi": 0
1103
+ },
1104
+ "Flax": {
1105
+ "unclassified": 0,
1106
+ "single": 0,
1107
+ "multi": 0
1108
+ },
1109
+ "Tokenizers": {
1110
+ "unclassified": 0,
1111
+ "single": 0,
1112
+ "multi": 0
1113
+ },
1114
+ "Pipelines": {
1115
+ "unclassified": 0,
1116
+ "single": 0,
1117
+ "multi": 0
1118
+ },
1119
+ "Trainer": {
1120
+ "unclassified": 0,
1121
+ "single": 0,
1122
+ "multi": 0
1123
+ },
1124
+ "ONNX": {
1125
+ "unclassified": 0,
1126
+ "single": 0,
1127
+ "multi": 0
1128
+ },
1129
+ "Auto": {
1130
+ "unclassified": 0,
1131
+ "single": 0,
1132
+ "multi": 0
1133
+ },
1134
+ "Quantization": {
1135
+ "unclassified": 0,
1136
+ "single": 0,
1137
+ "multi": 0
1138
+ },
1139
+ "Unclassified": {
1140
+ "unclassified": 0,
1141
+ "single": 0,
1142
+ "multi": 0
1143
+ }
1144
+ },
1145
+ "errors": 0,
1146
+ "success": 267,
1147
+ "skipped": 318,
1148
+ "time_spent": [
1149
+ 32.67,
1150
+ 32.78
1151
+ ],
1152
+ "failures": {
1153
+ "multi": [
1154
+ {
1155
+ "line": "tests/models/mgp_str/test_modeling_mgp_str.py::MgpstrModelTest::test_model_parallelism",
1156
+ "trace": "(line 422) RuntimeError: Expected all tensors to be on the same device, but got mat2 is on cuda:1, different from other tensors on cuda:0 (when checking argument in method wrapper_CUDA_bmm)"
1157
+ }
1158
+ ]
1159
+ },
1160
+ "job_link": {
1161
+ "single": "https://github.com/huggingface/transformers/actions/runs/17232703157/job/48890483310",
1162
+ "multi": "https://github.com/huggingface/transformers/actions/runs/17232703157/job/48890483464"
1163
+ }
1164
+ },
1165
+ "models_mimi": {
1166
+ "failed": {
1167
+ "PyTorch": {
1168
+ "unclassified": 0,
1169
+ "single": 2,
1170
+ "multi": 3
1171
+ },
1172
+ "TensorFlow": {
1173
+ "unclassified": 0,
1174
+ "single": 0,
1175
+ "multi": 0
1176
+ },
1177
+ "Flax": {
1178
+ "unclassified": 0,
1179
+ "single": 0,
1180
+ "multi": 0
1181
+ },
1182
+ "Tokenizers": {
1183
+ "unclassified": 0,
+ "single": 0,
+ "multi": 0
+ },
+ "Pipelines": {
+ "unclassified": 0,
+ "single": 0,
+ "multi": 0
+ },
+ "Trainer": {
+ "unclassified": 0,
+ "single": 0,
+ "multi": 0
+ },
+ "ONNX": {
+ "unclassified": 0,
+ "single": 0,
+ "multi": 0
+ },
+ "Auto": {
+ "unclassified": 0,
+ "single": 0,
+ "multi": 0
+ },
+ "Quantization": {
+ "unclassified": 0,
+ "single": 0,
+ "multi": 0
+ },
+ "Unclassified": {
+ "unclassified": 0,
+ "single": 0,
+ "multi": 0
+ }
+ },
+ "errors": 0,
+ "success": 149,
+ "skipped": 112,
+ "time_spent": [
+ 30.11,
+ 28.03
+ ],
+ "failures": {
+ "multi": [
+ {
+ "line": "tests/models/mimi/test_modeling_mimi.py::MimiModelTest::test_model_parallelism",
+ "trace": "(line 982) RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cuda:1!"
+ },
+ {
+ "line": "tests/models/mimi/test_modeling_mimi.py::MimiModelTest::test_sdpa_can_dispatch_on_flash",
+ "trace": "(line 899) RuntimeError: No available kernel. Aborting execution."
+ },
+ {
+ "line": "tests/models/mimi/test_modeling_mimi.py::MimiIntegrationTest::test_integration",
+ "trace": "(line 687) AssertionError: False is not true"
+ }
+ ],
+ "single": [
+ {
+ "line": "tests/models/mimi/test_modeling_mimi.py::MimiModelTest::test_sdpa_can_dispatch_on_flash",
+ "trace": "(line 899) RuntimeError: No available kernel. Aborting execution."
+ },
+ {
+ "line": "tests/models/mimi/test_modeling_mimi.py::MimiIntegrationTest::test_integration",
+ "trace": "(line 687) AssertionError: False is not true"
+ }
+ ]
+ },
+ "job_link": {
+ "multi": "https://github.com/huggingface/transformers/actions/runs/17232703157/job/48890483425",
+ "single": "https://github.com/huggingface/transformers/actions/runs/17232703157/job/48890483348"
+ }
+ },
+ "models_minimax": {
+ "failed": {
+ "PyTorch": {
+ "unclassified": 0,
+ "single": 2,
+ "multi": 3
+ },
+ "TensorFlow": {
+ "unclassified": 0,
+ "single": 0,
+ "multi": 0
+ },
+ "Flax": {
+ "unclassified": 0,
+ "single": 0,
+ "multi": 0
+ },
+ "Tokenizers": {
+ "unclassified": 0,
+ "single": 0,
+ "multi": 0
+ },
+ "Pipelines": {
+ "unclassified": 0,
+ "single": 0,
+ "multi": 0
+ },
+ "Trainer": {
+ "unclassified": 0,
+ "single": 0,
+ "multi": 0
+ },
+ "ONNX": {
+ "unclassified": 0,
+ "single": 0,
+ "multi": 0
+ },
+ "Auto": {
+ "unclassified": 0,
+ "single": 0,
+ "multi": 0
+ },
+ "Quantization": {
+ "unclassified": 0,
+ "single": 0,
+ "multi": 0
+ },
+ "Unclassified": {
+ "unclassified": 0,
+ "single": 0,
+ "multi": 0
+ }
+ },
+ "errors": 0,
+ "success": 236,
+ "skipped": 237,
+ "time_spent": [
+ 100.2,
+ 98.05
+ ],
+ "failures": {
+ "multi": [
+ {
+ "line": "tests/models/minimax/test_modeling_minimax.py::MiniMaxModelTest::test_flash_attention_2_padding_matches_padding_free_with_position_ids",
+ "trace": "(line 4271) AssertionError: Tensor-likes are not close!"
+ },
+ {
+ "line": "tests/models/minimax/test_modeling_minimax.py::MiniMaxModelTest::test_flash_attention_2_padding_matches_padding_free_with_position_ids_and_fa_kwargs",
+ "trace": "(line 4271) AssertionError: Tensor-likes are not close!"
+ },
+ {
+ "line": "tests/models/minimax/test_modeling_minimax.py::MiniMaxModelTest::test_multi_gpu_data_parallel_forward",
+ "trace": "(line 129) TypeError: MiniMaxCache.__init__() takes 1 positional argument but 2 were given"
+ }
+ ],
+ "single": [
+ {
+ "line": "tests/models/minimax/test_modeling_minimax.py::MiniMaxModelTest::test_flash_attention_2_padding_matches_padding_free_with_position_ids",
+ "trace": "(line 4271) AssertionError: Tensor-likes are not close!"
+ },
+ {
+ "line": "tests/models/minimax/test_modeling_minimax.py::MiniMaxModelTest::test_flash_attention_2_padding_matches_padding_free_with_position_ids_and_fa_kwargs",
+ "trace": "(line 4271) AssertionError: Tensor-likes are not close!"
+ }
+ ]
+ },
+ "job_link": {
+ "multi": "https://github.com/huggingface/transformers/actions/runs/17232703157/job/48890483432",
+ "single": "https://github.com/huggingface/transformers/actions/runs/17232703157/job/48890483333"
+ }
+ },
+ "models_mixtral": {
+ "failed": {
+ "PyTorch": {
+ "unclassified": 0,
+ "single": 1,
+ "multi": 1
+ },
+ "TensorFlow": {
+ "unclassified": 0,
+ "single": 0,
+ "multi": 0
+ },
+ "Flax": {
+ "unclassified": 0,
+ "single": 0,
+ "multi": 0
+ },
+ "Tokenizers": {
+ "unclassified": 0,
+ "single": 0,
+ "multi": 0
+ },
+ "Pipelines": {
+ "unclassified": 0,
+ "single": 0,
+ "multi": 0
+ },
+ "Trainer": {
+ "unclassified": 0,
+ "single": 0,
+ "multi": 0
+ },
+ "ONNX": {
+ "unclassified": 0,
+ "single": 0,
+ "multi": 0
+ },
+ "Auto": {
+ "unclassified": 0,
+ "single": 0,
+ "multi": 0
+ },
+ "Quantization": {
+ "unclassified": 0,
+ "single": 0,
+ "multi": 0
+ },
+ "Unclassified": {
+ "unclassified": 0,
+ "single": 0,
+ "multi": 0
+ }
+ },
+ "errors": 0,
+ "success": 257,
+ "skipped": 219,
+ "time_spent": [
+ 84.41,
+ 87.58
+ ],
+ "failures": {
+ "single": [
+ {
+ "line": "tests/models/mixtral/test_modeling_mixtral.py::MistralModelTest::test_flash_attn_2_equivalence",
+ "trace": "(line 452) AssertionError: Tensor-likes are not close!"
+ }
+ ],
+ "multi": [
+ {
+ "line": "tests/models/mixtral/test_modeling_mixtral.py::MistralModelTest::test_flash_attn_2_equivalence",
+ "trace": "(line 452) AssertionError: Tensor-likes are not close!"
+ }
+ ]
+ },
+ "job_link": {
+ "single": "https://github.com/huggingface/transformers/actions/runs/17232703157/job/48890483422",
+ "multi": "https://github.com/huggingface/transformers/actions/runs/17232703157/job/48890483461"
+ }
+ },
+ "models_mlcd": {
+ "failed": {
+ "PyTorch": {
+ "unclassified": 0,
+ "single": 1,
+ "multi": 1
+ },
+ "TensorFlow": {
+ "unclassified": 0,
+ "single": 0,
+ "multi": 0
+ },
+ "Flax": {
+ "unclassified": 0,
+ "single": 0,
+ "multi": 0
+ },
+ "Tokenizers": {
+ "unclassified": 0,
+ "single": 0,
+ "multi": 0
+ },
+ "Pipelines": {
+ "unclassified": 0,
+ "single": 0,
+ "multi": 0
+ },
+ "Trainer": {
+ "unclassified": 0,
+ "single": 0,
+ "multi": 0
+ },
+ "ONNX": {
+ "unclassified": 0,
+ "single": 0,
+ "multi": 0
+ },
+ "Auto": {
+ "unclassified": 0,
+ "single": 0,
+ "multi": 0
+ },
+ "Quantization": {
+ "unclassified": 0,
+ "single": 0,
+ "multi": 0
+ },
+ "Unclassified": {
+ "unclassified": 0,
+ "single": 0,
+ "multi": 0
+ }
+ },
+ "errors": 0,
+ "success": 166,
+ "skipped": 88,
+ "time_spent": [
+ 36.91,
+ 36.49
+ ],
+ "failures": {
+ "single": [
+ {
+ "line": "tests/models/mlcd/test_modeling_mlcd.py::MLCDVisionModelIntegrationTest::test_inference",
+ "trace": "(line 189) AttributeError: 'NoneType' object has no attribute 'shape'"
+ }
+ ],
+ "multi": [
+ {
+ "line": "tests/models/mlcd/test_modeling_mlcd.py::MLCDVisionModelIntegrationTest::test_inference",
+ "trace": "(line 189) AttributeError: 'NoneType' object has no attribute 'shape'"
+ }
+ ]
+ },
+ "job_link": {
+ "single": "https://github.com/huggingface/transformers/actions/runs/17232703157/job/48890483343",
+ "multi": "https://github.com/huggingface/transformers/actions/runs/17232703157/job/48890483426"
+ }
+ },
+ "models_mluke": {
+ "failed": {
+ "PyTorch": {
+ "unclassified": 0,
+ "single": 0,
+ "multi": 0
+ },
+ "TensorFlow": {
+ "unclassified": 0,
+ "single": 0,
+ "multi": 0
+ },
+ "Flax": {
+ "unclassified": 0,
+ "single": 0,
+ "multi": 0
+ },
+ "Tokenizers": {
+ "unclassified": 0,
+ "single": 1,
+ "multi": 1
+ },
+ "Pipelines": {
+ "unclassified": 0,
+ "single": 0,
+ "multi": 0
+ },
+ "Trainer": {
+ "unclassified": 0,
+ "single": 0,
+ "multi": 0
+ },
+ "ONNX": {
+ "unclassified": 0,
+ "single": 0,
+ "multi": 0
+ },
+ "Auto": {
+ "unclassified": 0,
+ "single": 0,
+ "multi": 0
+ },
+ "Quantization": {
+ "unclassified": 0,
+ "single": 0,
+ "multi": 0
+ },
+ "Unclassified": {
+ "unclassified": 0,
+ "single": 0,
+ "multi": 0
+ }
+ },
+ "errors": 0,
+ "success": 200,
+ "skipped": 38,
+ "time_spent": [
+ 48.14,
+ 48.32
+ ],
+ "failures": {
+ "multi": [
+ {
+ "line": "tests/models/mluke/test_tokenization_mluke.py::MLukeTokenizerIntegrationTests::test_entity_span_classification_no_padding_or_truncation",
+ "trace": "(line 675) AssertionError: '<s> [33 chars]e spoken by about 128 million people, primarily in Japan .</s>' != '<s> [33 chars]e spoken by about 128 million people, primarily in Japan.</s>'"
+ }
+ ],
+ "single": [
+ {
+ "line": "tests/models/mluke/test_tokenization_mluke.py::MLukeTokenizerIntegrationTests::test_entity_span_classification_no_padding_or_truncation",
+ "trace": "(line 675) AssertionError: '<s> [33 chars]e spoken by about 128 million people, primarily in Japan .</s>' != '<s> [33 chars]e spoken by about 128 million people, primarily in Japan.</s>'"
+ }
+ ]
+ },
+ "job_link": {
+ "multi": "https://github.com/huggingface/transformers/actions/runs/17232703157/job/48890483493",
+ "single": "https://github.com/huggingface/transformers/actions/runs/17232703157/job/48890483344"
+ }
+ },
+ "models_mm_grounding_dino": {
+ "failed": {
+ "PyTorch": {
+ "unclassified": 0,
+ "single": 1,
+ "multi": 1
+ },
+ "TensorFlow": {
+ "unclassified": 0,
+ "single": 0,
+ "multi": 0
+ },
+ "Flax": {
+ "unclassified": 0,
+ "single": 0,
+ "multi": 0
+ },
+ "Tokenizers": {
+ "unclassified": 0,
+ "single": 0,
+ "multi": 0
+ },
+ "Pipelines": {
+ "unclassified": 0,
+ "single": 0,
+ "multi": 0
+ },
+ "Trainer": {
+ "unclassified": 0,
+ "single": 0,
+ "multi": 0
+ },
+ "ONNX": {
+ "unclassified": 0,
+ "single": 0,
+ "multi": 0
+ },
+ "Auto": {
+ "unclassified": 0,
+ "single": 0,
+ "multi": 0
+ },
+ "Quantization": {
+ "unclassified": 0,
+ "single": 0,
+ "multi": 0
+ },
+ "Unclassified": {
+ "unclassified": 0,
+ "single": 0,
+ "multi": 0
+ }
+ },
+ "errors": 0,
+ "success": 120,
+ "skipped": 266,
+ "time_spent": [
+ 65.33,
+ 65.42
+ ],
+ "failures": {
+ "multi": [
+ {
+ "line": "tests/models/mm_grounding_dino/test_modeling_mm_grounding_dino.py::MMGroundingDinoModelIntegrationTests::test_mm_grounding_dino_loss",
+ "trace": "(line 687) AssertionError: False is not true"
+ }
+ ],
+ "single": [
+ {
+ "line": "tests/models/mm_grounding_dino/test_modeling_mm_grounding_dino.py::MMGroundingDinoModelIntegrationTests::test_mm_grounding_dino_loss",
+ "trace": "(line 687) AssertionError: False is not true"
+ }
+ ]
+ },
+ "job_link": {
+ "multi": "https://github.com/huggingface/transformers/actions/runs/17232703157/job/48890483484",
+ "single": "https://github.com/huggingface/transformers/actions/runs/17232703157/job/48890483399"
+ }
+ },
+ "models_mobilebert": {
+ "failed": {
+ "PyTorch": {
+ "unclassified": 0,
+ "single": 0,
+ "multi": 0
+ },
+ "TensorFlow": {
+ "unclassified": 0,
+ "single": 0,
+ "multi": 0
+ },
+ "Flax": {
+ "unclassified": 0,
+ "single": 0,
+ "multi": 0
+ },
+ "Tokenizers": {
+ "unclassified": 0,
+ "single": 0,
+ "multi": 0
+ },
+ "Pipelines": {
+ "unclassified": 0,
+ "single": 0,
+ "multi": 0
+ },
+ "Trainer": {
+ "unclassified": 0,
+ "single": 0,
+ "multi": 0
+ },
+ "ONNX": {
+ "unclassified": 0,
+ "single": 0,
+ "multi": 0
+ },
+ "Auto": {
+ "unclassified": 0,
+ "single": 0,
+ "multi": 0
+ },
+ "Quantization": {
+ "unclassified": 0,
+ "single": 0,
+ "multi": 0
+ },
+ "Unclassified": {
+ "unclassified": 0,
+ "single": 0,
+ "multi": 0
+ }
+ },
+ "errors": 0,
+ "success": 197,
+ "skipped": 116,
+ "time_spent": [
+ 82.95
+ ],
+ "failures": {},
+ "job_link": {
+ "multi": "https://github.com/huggingface/transformers/actions/runs/17232703157/job/48890483458"
+ }
+ },
+ "models_mobilenet_v1": {
+ "failed": {
+ "PyTorch": {
+ "unclassified": 0,
+ "single": 0,
+ "multi": 0
+ },
+ "TensorFlow": {
+ "unclassified": 0,
+ "single": 0,
+ "multi": 0
+ },
+ "Flax": {
+ "unclassified": 0,
+ "single": 0,
+ "multi": 0
+ },
+ "Tokenizers": {
+ "unclassified": 0,
+ "single": 0,
+ "multi": 0
+ },
+ "Pipelines": {
+ "unclassified": 0,
+ "single": 0,
+ "multi": 0
+ },
+ "Trainer": {
+ "unclassified": 0,
+ "single": 0,
+ "multi": 0
+ },
+ "ONNX": {
+ "unclassified": 0,
+ "single": 0,
+ "multi": 0
+ },
+ "Auto": {
+ "unclassified": 0,
+ "single": 0,
+ "multi": 0
+ },
+ "Quantization": {
+ "unclassified": 0,
+ "single": 0,
+ "multi": 0
+ },
+ "Unclassified": {
+ "unclassified": 0,
+ "single": 0,
+ "multi": 0
+ }
+ },
+ "errors": 0,
+ "success": 74,
+ "skipped": 135,
+ "time_spent": [
+ 38.99
+ ],
+ "failures": {},
+ "job_link": {
+ "multi": "https://github.com/huggingface/transformers/actions/runs/17232703157/job/48890483481"
+ }
+ }
+ }