ror HF Staff committed on
Commit e83d737 · 1 Parent(s): d786867
Files changed (13)
  1. .gitignore +2 -1
  2. CLAUDE.md +91 -0
  3. README.md +37 -7
  4. app.py +178 -11
  5. data.py +227 -0
  6. grid_search_tab.py +0 -117
  7. requirements.txt +1 -3
  8. sample_amd.json +1839 -0
  9. sample_nvidia.json +1475 -0
  10. styles.css +669 -0
  11. summary_page.py +208 -0
  12. theme_config.py +0 -167
  13. utils.py +51 -0
.gitignore CHANGED
@@ -1 +1,2 @@
1
- **.pyc
 
 
1
+ __pycache__
2
+ __ignore*
CLAUDE.md ADDED
@@ -0,0 +1,91 @@
1
+ # CLAUDE.md
2
+
3
+ This file provides guidance to Claude Code (claude.ai/code) when working with code in this repository.
4
+
5
+ ## Project Overview
6
+
7
+ This is **TCID** (Transformer CI Dashboard) - a Gradio-based web dashboard that displays test results for Transformer models across AMD and NVIDIA hardware. The application fetches CI test data from HuggingFace datasets and presents it through interactive visualizations and detailed failure reports.
8
+
9
+ ## Architecture
10
+
11
+ ### Core Components
12
+
13
+ - **`app.py`** - Main Gradio application with UI components, plotting functions, and data visualization logic
14
+ - **`data.py`** - Data fetching module that retrieves test results from HuggingFace datasets for AMD and NVIDIA CI runs
15
+ - **`styles.css`** - Complete dark theme styling for the Gradio interface
16
+ - **`requirements.txt`** - Python dependencies (matplotlib only)
17
+
18
+ ### Data Flow
19
+
20
+ 1. **Data Loading**: `CIResults.load_data()` in `data.py` fetches the latest CI results from:
21
+ - AMD: `hf://datasets/optimum-amd/transformers_daily_ci`
22
+ - NVIDIA: `hf://datasets/hf-internal-testing/transformers_daily_ci`
23
+
24
+ 2. **Data Processing**: Results are joined and filtered to show only important models defined in the `IMPORTANT_MODELS` list
25
+
26
+ 3. **Visualization**: Two main views:
27
+ - **Summary Page**: Horizontal bar charts showing test results for all models
28
+ - **Detail View**: Pie charts for individual models with failure details
29
+
30
+ ### UI Architecture
31
+
32
+ - **Sidebar**: Model selection, refresh controls, CI job links
33
+ - **Main Content**: Dynamic display switching between summary and detail views
34
+ - **Auto-refresh**: Data reloads every 15 minutes via background threading
35
+
36
+ ## Running the Application
37
+
38
+ ### Development Commands
39
+
40
+ ```bash
41
+ # Install dependencies
42
+ pip install -r requirements.txt
43
+
44
+ # Run the application
45
+ python app.py
46
+ ```
47
+
48
+ ### HuggingFace Spaces Deployment
49
+
50
+ This application is configured for HuggingFace Spaces deployment:
51
+ - **Framework**: Gradio 5.38.0
52
+ - **App file**: `app.py`
53
+ - **Configuration**: See `README.md` header for Spaces metadata
54
+
55
+ ## Key Data Structures
56
+
57
+ ### Model Results DataFrame
58
+ The joined DataFrame contains these columns:
59
+ - `success_amd` / `success_nvidia` - Number of passing tests
60
+ - `failed_multi_no_amd` / `failed_multi_no_nvidia` - Multi-GPU failure counts
61
+ - `failed_single_no_amd` / `failed_single_no_nvidia` - Single-GPU failure counts
62
+ - `failures_amd` / `failures_nvidia` - Detailed failure information objects
63
+ - `job_link_amd` / `job_link_nvidia` - CI job URLs
64
+
65
+ ### Important Models List
66
+ Predefined list in `data.py` focusing on significant models:
67
+ - Classic models: bert, gpt2, t5, vit, clip, whisper
68
+ - Modern models: llama, gemma3, qwen2, mistral3
69
+ - Multimodal: qwen2_5_vl, llava, smolvlm, internvl
70
+
71
+ ## Styling and Theming
72
+
73
+ The application uses a comprehensive dark theme with:
74
+ - Fixed sidebar layout (300px width)
75
+ - Black background throughout (`#000000`)
76
+ - Custom scrollbars with dark styling
77
+ - Monospace fonts for technical aesthetics
78
+ - Gradient buttons and hover effects
79
+
80
+ ## Error Handling
81
+
82
+ - **Data Loading Failures**: Falls back to predefined model list for testing
83
+ - **Missing Model Data**: Shows "No data available" message in visualizations
84
+ - **Empty Results**: Gracefully handles cases with no test results
85
+
86
+ ## Performance Considerations
87
+
88
+ - **Memory Management**: Matplotlib configured to prevent memory warnings
89
+ - **Interactive Mode**: Disabled to prevent figure accumulation
90
+ - **Auto-reload**: Background threading with daemon timers
91
+ - **Data Caching**: Global variables store loaded data between UI updates
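To make the data-loading step described in CLAUDE.md above concrete, here is a hedged sketch of locating and reading the most recent daily report with `HfFileSystem` and pandas. The two dataset repos come from this commit; the per-run folder layout and the `model_results.json` file name are assumptions for illustration, not something the diff confirms.

```python
import pandas as pd
from huggingface_hub import HfFileSystem

fs = HfFileSystem()

def latest_report(dataset: str, filename: str = "model_results.json") -> str:
    """Return the path of the newest report file in a CI dataset.

    Assumes (hypothetically) that report paths contain a YYYY-MM-DD date,
    so the lexicographic maximum is the most recent one.
    """
    candidates = fs.glob(f"datasets/{dataset}/**/{filename}")
    if not candidates:
        raise FileNotFoundError(f"no {filename} found in {dataset}")
    return max(candidates)

# Dataset ids taken from the Data Flow section above.
amd_path = latest_report("optimum-amd/transformers_daily_ci")
df_amd = pd.read_json(f"hf://{amd_path}", orient="index")
```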
README.md CHANGED
@@ -1,13 +1,43 @@
1
  ---
2
- title: Performative Dashboard
3
- emoji: 📈
4
- colorFrom: yellow
5
- colorTo: green
6
  sdk: gradio
7
- sdk_version: 5.47.1
8
  app_file: app.py
9
  pinned: false
10
- short_description: damn i love matcha
11
  ---
12
 
13
- Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
1
  ---
2
+ title: Tcid
3
+ emoji: 👁
4
+ colorFrom: indigo
5
+ colorTo: pink
6
  sdk: gradio
7
+ sdk_version: 5.38.0
8
  app_file: app.py
9
  pinned: false
10
+ short_description: A dashboard
11
  ---
12
 
13
+ # TCID
14
+
15
+ This space displays the state of the `transformers` CI on two hardware platforms, for a subset of **important models**. The CI is run daily, on both AMD MI325 and Nvidia A10. The CI runs a different number of tests for each model. When a test finishes, it is assigned a status depending on its outcome:
16
+
17
+ - passed: the test finished and the expected output (or outputs) were retrieved;
18
+ - failed: the test either did not finish or the output was different from the expected output;
19
+ - skipped: the test was not run, but was not expected to run. More details on this at the end of the README;
20
+ - error: the test did not finish and python crashed;
21
+
22
+ The dashboard is divided into two main parts:
23
+
24
+ ## Summary page
25
+
26
+ On the summary page, you can see a snapshot of the mix of tests passed, failed and skipped for each model. The summary page also features an "Overall failures rate" for AMD and NVIDIA, which is computed this way:
27
+ ```overall_failure_rate = (failed + error) / (passed + failed + error)```
28
+
29
+ We do not account for skipped tests in this overall failure rate, because a skipped test can neither pass nor fail.
30
+ We only consider the tests for a **subset of models** out of all the models supported in `transformers`. This subset is named important models, and is mainly defined by model usage.
31
+
32
+ ## Models page
33
+
34
+ From the sidebar, you can access a detailed view of each model. In it, you will find the breakdown of test statuses and the names of the tests that failed for single- and multi-GPU runs.
35
+
36
+ ## Skipped tests
37
+
38
+ You can probably see many skipped tests in the `transformers` CI, which can be perplexing. When a test is skipped, it's usually for one of three reasons:
39
+ - the test requires a package that is not included in the default transformers docker that the CI uses, like flash attention 3 or deepspeed;
40
+ - the hardware is not the correct one, for instance there are a bunch of MPS (apple hardware) tests that are of course not run on AMD or Nvidia CI;
41
+ - the model is incompatible with what the test is for, say torch.fx or flash-attention, which are incompatible with some model architectures;
42
+
43
+ Skipping tests rather than not collecting them offers the advantage of having similar test counts across CIs that do not run on the same hardware. Thus, if the total test count differs between two CIs, one can immediately tell that one of the two only ran partially. This would not be the case if some skipped tests were not collected at all.
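A minimal sketch of the overall failure rate defined in the README above, written as a standalone helper. The aggregate totals in the usage example are hypothetical, not taken from a real run.

```python
def overall_failure_rate(passed: int, failed: int, error: int = 0) -> float:
    # Skipped tests are deliberately excluded: they can neither pass nor fail.
    total = passed + failed + error
    return (failed + error) / total if total else 0.0

# Example with hypothetical AMD totals summed over the tracked models.
amd_rate = overall_failure_rate(passed=2_500, failed=14)
print(f"{amd_rate:.2%}")  # -> 0.56%
```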
app.py CHANGED
@@ -1,17 +1,184 @@
 
 
 
1
  import gradio as gr
2
- from grid_search_tab import create_grid_search_tab
3
- from theme_config import DashboardTheme
4
 
5
- def create_dashboard():
6
- DashboardTheme.setup_matplotlib_style()
7
- with gr.Blocks(title="Performance Dashboard", theme=DashboardTheme.get_gradio_theme()) as demo:
8
- with gr.Tabs():
9
 
10
- with gr.TabItem("Grid search benchmark"):
11
- create_grid_search_tab()
12
 
13
- return demo
 
 
 
 
14
15
  if __name__ == "__main__":
16
- app = create_dashboard()
17
- app.launch(server_name="0.0.0.0", server_port=7860)
 
1
+ import matplotlib.pyplot as plt
2
+ import matplotlib
3
+ import pandas as pd
4
  import gradio as gr
 
 
5
 
6
+ from data import CIResults
7
+ from utils import logger
8
+ from summary_page import create_summary_page
 
9
 
 
 
10
 
11
+ # Configure matplotlib to prevent memory warnings and set dark background
12
+ matplotlib.rcParams['figure.facecolor'] = '#000000'
13
+ matplotlib.rcParams['axes.facecolor'] = '#000000'
14
+ matplotlib.rcParams['savefig.facecolor'] = '#000000'
15
+ plt.ioff() # Turn off interactive mode to prevent figure accumulation
16
 
17
+
18
+ # Load data once at startup
19
+ Ci_results = CIResults()
20
+ Ci_results.load_data()
21
+ # Start the auto-reload scheduler
22
+ Ci_results.schedule_data_reload()
23
+
24
+
25
+ # Function to get current description text
26
+ def get_description_text():
27
+ """Get description text with integrated last update time."""
28
+ msg = [
29
+ "Transformer CI Dashboard",
30
+ "-",
31
+ "AMD runs on MI325",
32
+ "NVIDIA runs on A10",
33
+ ]
34
+ msg = ["**" + x + "**" for x in msg] + [""]
35
+ if Ci_results.latest_update_msg:
36
+ msg.append(f"*This dashboard only tracks important models*<br>*({Ci_results.latest_update_msg})*")
37
+ else:
38
+ msg.append("*This dashboard only tracks important models*<br>*(loading...)*")
39
+ return "<br>".join(msg)
40
+
41
+ # Load CSS from external file
42
+ def load_css():
43
+ try:
44
+ with open("styles.css", "r") as f:
45
+ css_content = f.read()
46
+
47
+ return css_content
48
+ except FileNotFoundError:
49
+ logger.warning("styles.css not found, using minimal default styles")
50
+ return "body { background: #000; color: #fff; }"
51
+
52
+
53
+ # Create the Gradio interface with sidebar and dark theme
54
+ with gr.Blocks(title="Model Test Results Dashboard", css=load_css(), fill_height=True, fill_width=True) as demo:
55
+
56
+
57
+ with gr.Row():
58
+ # Sidebar for model selection
59
+ with gr.Column(scale=1, elem_classes=["sidebar"]):
60
+ gr.Markdown("# 🤖 TCID", elem_classes=["sidebar-title"])
61
+
62
+ # Description with integrated last update time
63
+ description_text = get_description_text()
64
+ description_display = gr.Markdown(description_text, elem_classes=["sidebar-description"])
65
+
66
+ # Summary button
67
+ summary_button = gr.Button(
68
+ "summary\n📊",
69
+ variant="primary",
70
+ size="lg",
71
+ elem_classes=["summary-button"]
72
+ )
73
+
74
+ # CI job links at bottom of sidebar
75
+ ci_links_display = gr.Markdown("🔗 **CI Jobs:** *Loading...*", elem_classes=["sidebar-links"])
76
+
77
+ # Main content area
78
+ with gr.Column(scale=4, elem_classes=["main-content"]):
79
+ # Summary display (default view)
80
+ summary_display = gr.ScatterPlot(
81
+ pd.DataFrame({
82
+ "x": [i for i in range(10)] + [100, -100],
83
+ "y": [i ** 2 for i in range(10)] + [100, -100],
84
+ }),
85
+ x = "x",
86
+ y = "y",
87
+ height="100vh",
88
+ container=False,
89
+ show_fullscreen_button=True,
90
+ elem_classes=["plot-container"],
91
+ )
92
+
93
+
94
+
95
+ # Summary button click handler
96
+ def show_summary_and_update_links():
97
+ """Show summary page and update CI links."""
98
+ return create_summary_page(Ci_results.df, Ci_results.available_models), get_description_text(), get_ci_links()
99
+
100
+ summary_button.click(
101
+ fn=show_summary_and_update_links,
102
+ outputs=[summary_display, description_display, ci_links_display]
103
+ )
104
+
105
+ # Function to get CI job links
106
+ def get_ci_links():
107
+ """Get CI job links from the most recent data."""
108
+ try:
109
+ # Check if df exists and is not empty
110
+ if Ci_results.df is None or Ci_results.df.empty:
111
+ return "🔗 **CI Jobs:** *Loading...*"
112
+
113
+ # Get links from any available model (they should be the same for all models in a run)
114
+ amd_multi_link = None
115
+ amd_single_link = None
116
+ nvidia_multi_link = None
117
+ nvidia_single_link = None
118
+
119
+ for model_name in Ci_results.df.index:
120
+ row = Ci_results.df.loc[model_name]
121
+
122
+ # Extract AMD links
123
+ if pd.notna(row.get('job_link_amd')) and (not amd_multi_link or not amd_single_link):
124
+ amd_link_raw = row.get('job_link_amd')
125
+ if isinstance(amd_link_raw, dict):
126
+ if 'multi' in amd_link_raw and not amd_multi_link:
127
+ amd_multi_link = amd_link_raw['multi']
128
+ if 'single' in amd_link_raw and not amd_single_link:
129
+ amd_single_link = amd_link_raw['single']
130
+
131
+ # Extract NVIDIA links
132
+ if pd.notna(row.get('job_link_nvidia')) and (not nvidia_multi_link or not nvidia_single_link):
133
+ nvidia_link_raw = row.get('job_link_nvidia')
134
+ if isinstance(nvidia_link_raw, dict):
135
+ if 'multi' in nvidia_link_raw and not nvidia_multi_link:
136
+ nvidia_multi_link = nvidia_link_raw['multi']
137
+ if 'single' in nvidia_link_raw and not nvidia_single_link:
138
+ nvidia_single_link = nvidia_link_raw['single']
139
+
140
+ # Break if we have all links
141
+ if amd_multi_link and amd_single_link and nvidia_multi_link and nvidia_single_link:
142
+ break
143
+
144
+
145
+ # Add FAQ link at the bottom
146
+ links_md = "❓ [**FAQ**](https://huggingface.co/spaces/transformers-community/transformers-ci-dashboard/blob/main/README.md)\n\n"
147
+ links_md += "🔗 **CI Jobs:**\n\n"
148
+
149
+ # AMD links
150
+ if amd_multi_link or amd_single_link:
151
+ links_md += "**AMD:**\n"
152
+ if amd_multi_link:
153
+ links_md += f"• [Multi GPU]({amd_multi_link})\n"
154
+ if amd_single_link:
155
+ links_md += f"• [Single GPU]({amd_single_link})\n"
156
+ links_md += "\n"
157
+
158
+ # NVIDIA links
159
+ if nvidia_multi_link or nvidia_single_link:
160
+ links_md += "**NVIDIA:**\n"
161
+ if nvidia_multi_link:
162
+ links_md += f"• [Multi GPU]({nvidia_multi_link})\n"
163
+ if nvidia_single_link:
164
+ links_md += f"• [Single GPU]({nvidia_single_link})\n"
165
+
166
+ if not (amd_multi_link or amd_single_link or nvidia_multi_link or nvidia_single_link):
167
+ links_md += "*No links available*"
168
+
169
+ return links_md
170
+ except Exception as e:
171
+ logger.error(f"getting CI links: {e}")
172
+ return "🔗 **CI Jobs:** *Error loading links*\n\n❓ **[FAQ](README.md)**"
173
+
174
+
175
+ # Auto-update CI links when the interface loads
176
+ demo.load(
177
+ fn=get_ci_links,
178
+ outputs=[ci_links_display]
179
+ )
180
+
181
+
182
+ # Gradio entrypoint
183
  if __name__ == "__main__":
184
+ demo.launch()
 
data.py ADDED
@@ -0,0 +1,227 @@
1
+ from huggingface_hub import HfFileSystem
2
+ import pandas as pd
3
+ from utils import logger
4
+ import threading
5
+ import traceback
6
+ import json
7
+ import re
8
+
9
+ # NOTE: if caching is an issue, try adding `use_listings_cache=False`
10
+ fs = HfFileSystem()
11
+
12
+ IMPORTANT_MODELS = [
13
+ "auto",
14
+ "bert", # old but dominant (encoder only)
15
+ "gpt2", # old (decoder)
16
+ "t5", # old (encoder-decoder)
17
+ "modernbert", # (encoder only)
18
+ "vit", # old (vision) - fixed comma
19
+ "clip", # old but dominant (vision)
20
+ "detr", # objection detection, segmentation (vision)
21
+ "table_transformer", # objection detection (visioin) - maybe just detr?
22
+ "got_ocr2", # ocr (vision)
23
+ "whisper", # old but dominant (audio)
24
+ "wav2vec2", # old (audio)
25
+ "qwen2_audio", # (audio)
26
+ "speech_t5", # (audio)
27
+ "csm", # (audio)
28
+ "llama", # new and dominant (meta)
29
+ "gemma3", # new (google)
30
+ "qwen2", # new (Alibaba)
31
+ "mistral3", # new (Mistral) - added missing comma
32
+ "qwen2_5_vl", # new (vision)
33
+ "llava", # many models from it (vision)
34
+ "smolvlm", # new (video)
35
+ "internvl", # new (video)
36
+ "gemma3n", # new (omnimodal models)
37
+ "qwen2_5_omni", # new (omnimodal models)
38
+ # "gpt_oss", # new (quite used)
39
+ "qwen2_5_omni", # new (omnimodal models)
40
+ ]
41
+
42
+ KEYS_TO_KEEP = [
43
+ "success_amd",
44
+ "success_nvidia",
45
+ "skipped_amd",
46
+ "skipped_nvidia",
47
+ "failed_multi_no_amd",
48
+ "failed_multi_no_nvidia",
49
+ "failed_single_no_amd",
50
+ "failed_single_no_nvidia",
51
+ "failures_amd",
52
+ "failures_nvidia",
53
+ "job_link_amd",
54
+ "job_link_nvidia",
55
+ ]
56
+
57
+
58
+ def log_dataframe_link(link: str) -> str:
59
+ """
60
+ Adds the link to the dataset in the logs, modifies it to get a clickable link and then returns the date of the
61
+ report.
62
+ """
63
+ logger.info(f"Reading df located at {link}")
64
+ # Make sure the link starts with an http address
65
+ if link.startswith("hf://"):
66
+ link = "https://huggingface.co/" + link.removeprefix("hf://")
67
+ # Pattern to match transformers_daily_ci followed by any path, then a date (YYYY-MM-DD format)
68
+ pattern = r'transformers_daily_ci(.*?)/(\d{4}-\d{2}-\d{2})'
69
+ match = re.search(pattern, link)
70
+ # Failure case:
71
+ if not match:
72
+ logger.error("Could not find transformers_daily_ci and.or date in the link")
73
+ return "9999-99-99"
74
+ # Replace the path between with blob/main
75
+ path_between = match.group(1)
76
+ link = link.replace("transformers_daily_ci" + path_between, "transformers_daily_ci/blob/main")
77
+ logger.info(f"Link to data source: {link}")
78
+ # Return the date
79
+ return match.group(2)
80
+
81
+ def infer_latest_update_msg(date_df_amd: str, date_df_nvidia: str) -> str:
82
+ # Early return if one of the dates is invalid
83
+ if date_df_amd.startswith("9999") and date_df_nvidia.startswith("9999"):
84
+ return "could not find last update time"
85
+ # Warn if dates are not the same
86
+ if date_df_amd != date_df_nvidia:
87
+ logger.warning(f"Different dates found: {date_df_amd} (AMD) vs {date_df_nvidia} (NVIDIA)")
88
+ # Take the latest date and format it
89
+ try:
90
+ latest_date = max(date_df_amd, date_df_nvidia)
91
+ yyyy, mm, dd = latest_date.split("-")
92
+ return f"last updated {mm}/{dd}/{yyyy}"
93
+ except Exception as e:
94
+ logger.error(f"When trying to infer latest date, got error {e}")
95
+ return "could not find last update time"
96
+
97
+ def read_one_dataframe(json_path: str, device_label: str) -> tuple[pd.DataFrame, str]:
98
+ df_upload_date = log_dataframe_link(json_path)
99
+ df = pd.read_json(json_path, orient="index")
100
+ df.index.name = "model_name"
101
+ df[f"failed_multi_no_{device_label}"] = df["failures"].apply(lambda x: len(x["multi"]) if "multi" in x else 0)
102
+ df[f"failed_single_no_{device_label}"] = df["failures"].apply(lambda x: len(x["single"]) if "single" in x else 0)
103
+ return df, df_upload_date
104
+
105
+ def get_first_working_df(file_list: list[str]) -> str:
106
+ for file in file_list:
107
+ job_links = file.rsplit('/', 1)[0] + "/job_links.json"
108
+ try:
109
+ links = pd.read_json(f"hf://{job_links}", typ="series")
110
+ has_one_working_link = any(links.values)
111
+ except Exception as e:
112
+ logger.error(f"Could not read job links from {job_links}: {e}")
113
+ has_one_working_link = False
114
+ if has_one_working_link:
115
+ return file
116
+ logger.warning(f"Skipping {file} as it has no working job links.")
117
+ raise RuntimeError("Could not find any working dataframe in the provided list.")
118
+
119
+
120
+ def get_sample_data() -> tuple[pd.DataFrame, str]:
121
+ # Retrieve sample dataframes
122
+ df_amd, _ = read_one_dataframe("sample_amd.json", "amd")
123
+ df_nvidia, _ = read_one_dataframe("sample_nvidia.json", "nvidia")
124
+ # Join both dataframes
125
+ joined = df_amd.join(df_nvidia, rsuffix="_nvidia", lsuffix="_amd", how="outer")
126
+ joined = joined[KEYS_TO_KEEP]
127
+ joined.index = joined.index.str.replace("^models_", "", regex=True)
128
+ # Filter out all but important models
129
+ important_models_lower = [model.lower() for model in IMPORTANT_MODELS]
130
+ filtered_joined = joined[joined.index.str.lower().isin(important_models_lower)]
131
+ # Prefix all model names with "sample_"
132
+ filtered_joined.index = "sample_" + filtered_joined.index
133
+ return filtered_joined, "sample data was loaded"
134
+
135
+ def safe_extract(row: pd.Series, key: str) -> int:
136
+ return int(row.get(key, 0)) if pd.notna(row.get(key, 0)) else 0
137
+
138
+ def extract_model_data(row: pd.Series) -> tuple[dict[str, int], dict[str, int], int, int, int, int]:
139
+ """Extract and process model data from DataFrame row."""
140
+ # Handle missing values and get counts directly from dataframe
141
+ success_nvidia = safe_extract(row, "success_nvidia")
142
+ success_amd = safe_extract(row, "success_amd")
143
+
144
+ skipped_nvidia = safe_extract(row, "skipped_nvidia")
145
+ skipped_amd = safe_extract(row, "skipped_amd")
146
+
147
+ failed_multi_amd = safe_extract(row, 'failed_multi_no_amd')
148
+ failed_multi_nvidia = safe_extract(row, 'failed_multi_no_nvidia')
149
+ failed_single_amd = safe_extract(row, 'failed_single_no_amd')
150
+ failed_single_nvidia = safe_extract(row, 'failed_single_no_nvidia')
151
+ # Calculate total failures
152
+ total_failed_amd = failed_multi_amd + failed_single_amd
153
+ total_failed_nvidia = failed_multi_nvidia + failed_single_nvidia
154
+ # Create stats dictionaries directly from dataframe values
155
+ amd_stats = {
156
+ 'passed': success_amd,
157
+ 'failed': total_failed_amd,
158
+ 'skipped': skipped_amd,
159
+ 'error': 0 # Not available in this dataset
160
+ }
161
+ nvidia_stats = {
162
+ 'passed': success_nvidia,
163
+ 'failed': total_failed_nvidia,
164
+ 'skipped': skipped_nvidia,
165
+ 'error': 0 # Not available in this dataset
166
+ }
167
+ return amd_stats, nvidia_stats, failed_multi_amd, failed_single_amd, failed_multi_nvidia, failed_single_nvidia
168
+
169
+
170
+
171
+ class CIResults:
172
+
173
+ def __init__(self):
174
+ self.df = pd.DataFrame()
175
+ self.available_models = []
176
+ self.latest_update_msg = ""
177
+
178
+ def load_data(self) -> None:
179
+ """Load data from the data source."""
180
+ # Try loading the distant data, and fall back on sample data for local tinkering
181
+
182
+ error_msg = [
183
+ "Loading data failed:",
184
+ "-" * 120,
185
+ traceback.format_exc(),
186
+ "-" * 120,
187
+ "Falling back on sample data."
188
+ ]
189
+ logger.error("\n".join(error_msg))
190
+ new_df, latest_update_msg = get_sample_data()
191
+ self.latest_update_msg = latest_update_msg
192
+
193
+ # Update attributes
194
+ self.df = new_df
195
+ self.available_models = new_df.index.tolist()
196
+ # Log and return distant load status
197
+ logger.info(f"Data loaded successfully: {len(self.available_models)} models")
198
+ logger.info(f"Models: {self.available_models[:5]}{'...' if len(self.available_models) > 5 else ''}")
199
+ logger.info(f"Latest update message: {self.latest_update_msg}")
200
+ # Log a preview of the df
201
+ msg = {}
202
+ for model in self.available_models[:3]:
203
+ msg[model] = {}
204
+ for col in self.df.columns:
205
+ value = self.df.loc[model, col]
206
+ if not isinstance(value, int):
207
+ value = str(value)
208
+ if len(value) > 10:
209
+ value = value[:10] + "..."
210
+ msg[model][col] = value
211
+ logger.info(json.dumps(msg, indent=4))
212
+
213
+ def schedule_data_reload(self):
214
+ """Schedule the next data reload."""
215
+ def reload_data():
216
+ self.load_data()
217
+ # Schedule the next reload in 15 minutes (900 seconds)
218
+ timer = threading.Timer(900.0, reload_data)
219
+ timer.daemon = True # Dies when main thread dies
220
+ timer.start()
221
+ logger.info("Next data reload scheduled in 15 minutes")
222
+
223
+ # Start the first reload timer
224
+ timer = threading.Timer(900.0, reload_data)
225
+ timer.daemon = True
226
+ timer.start()
227
+ logger.info("Data auto-reload scheduled every 15 minutes")
grid_search_tab.py DELETED
@@ -1,117 +0,0 @@
1
- import gradio as gr
2
- import matplotlib.pyplot as plt
3
- import numpy as np
4
- from datetime import datetime, timedelta
5
- from theme_config import DashboardTheme
6
-
7
- def generate_sales_chart(chart_type, time_period, product_filter, primary_color, secondary_color):
8
- colors = DashboardTheme.get_chart_colors()
9
- fig, ax = plt.subplots(figsize=(10, 6))
10
-
11
- # Generate dummy data based on time period
12
- if time_period == "Last 7 days":
13
- days = 7
14
- elif time_period == "Last 30 days":
15
- days = 30
16
- else: # Last 90 days
17
- days = 90
18
-
19
- dates = [(datetime.now() - timedelta(days=i)) for i in range(days, 0, -1)]
20
-
21
- if chart_type == "Revenue":
22
- # Generate dummy revenue data
23
- revenue = np.random.randint(1000, 5000, days) + np.sin(np.arange(days)) * 500
24
- ax.plot(dates, revenue, marker='o', linewidth=2.5, markersize=5, color=primary_color)
25
- ax.set_ylabel('Revenue ($)')
26
- ax.set_title(f'Sales Revenue - {product_filter} ({time_period})')
27
-
28
- elif chart_type == "Units Sold":
29
- # Generate dummy units data
30
- units = np.random.randint(50, 200, days) + np.cos(np.arange(days)) * 20
31
- ax.bar(dates, units, alpha=0.8, color=primary_color, edgecolor=secondary_color, linewidth=0.5)
32
- ax.set_ylabel('Units Sold')
33
- ax.set_title(f'Units Sold - {product_filter} ({time_period})')
34
-
35
- else: # Conversion Rate
36
- # Generate dummy conversion rate data
37
- conversion = np.random.uniform(2, 8, days) + np.sin(np.arange(days) * 0.3) * 1
38
- ax.plot(dates, conversion, marker='s', linewidth=2.5, markersize=5, color=primary_color)
39
- ax.set_ylabel('Conversion Rate (%)')
40
- ax.set_title(f'Conversion Rate - {product_filter} ({time_period})')
41
-
42
- ax.set_xlabel('Date')
43
- ax.grid(True, alpha=0.3)
44
- plt.xticks(rotation=45)
45
- plt.tight_layout()
46
-
47
- # Apply consistent styling
48
- ax.spines['top'].set_visible(False)
49
- ax.spines['right'].set_visible(False)
50
- ax.tick_params(colors=DashboardTheme.LIGHT_GREY)
51
- ax.set_facecolor(DashboardTheme.WHITE)
52
-
53
- return fig
54
-
55
- def create_grid_search_tab():
56
- with gr.Row():
57
- with gr.Column(scale=1):
58
- gr.Markdown("### Sales Dashboard Options")
59
-
60
- chart_type = gr.Radio(
61
- choices=["Revenue", "Units Sold", "Conversion Rate"],
62
- value="Revenue",
63
- label="Chart Type"
64
- )
65
-
66
- time_period = gr.Dropdown(
67
- choices=["Last 7 days", "Last 30 days", "Last 90 days"],
68
- value="Last 30 days",
69
- label="Time Period"
70
- )
71
-
72
- product_filter = gr.Dropdown(
73
- choices=["All Products", "Electronics", "Clothing", "Books", "Home & Garden"],
74
- value="All Products",
75
- label="Product Category"
76
- )
77
-
78
- region_filter = gr.CheckboxGroup(
79
- choices=["North America", "Europe", "Asia", "South America"],
80
- value=["North America", "Europe"],
81
- label="Regions"
82
- )
83
-
84
- gr.Markdown("#### Color Options")
85
- primary_color = gr.ColorPicker(
86
- value="#2C3E50",
87
- label="Primary Color"
88
- )
89
- secondary_color = gr.ColorPicker(
90
- value="#7F8C8D",
91
- label="Secondary Color"
92
- )
93
-
94
- update_btn = gr.Button("Update Chart", variant="primary")
95
-
96
- with gr.Column(scale=3):
97
- plot = gr.Plot(label="Sales Analytics")
98
-
99
- # Update plot when inputs change
100
- inputs = [chart_type, time_period, product_filter, primary_color, secondary_color]
101
-
102
- update_btn.click(
103
- fn=generate_sales_chart,
104
- inputs=inputs,
105
- outputs=plot
106
- )
107
-
108
- # Initialize with default plot
109
- chart_type.change(
110
- fn=generate_sales_chart,
111
- inputs=inputs,
112
- outputs=plot
113
- )
114
-
115
- # Set initial plot
116
- demo_plot = generate_sales_chart("Revenue", "Last 30 days", "All Products", "#2C3E50", "#7F8C8D")
117
- plot.value = demo_plot
requirements.txt CHANGED
@@ -1,3 +1 @@
1
- gradio>=4.0.0
2
- matplotlib>=3.5.0
3
- numpy>=1.21.0
 
1
+ matplotlib>=3.8
 
 
sample_amd.json ADDED
@@ -0,0 +1,1839 @@
1
+ {
2
+ "models_auto": {
3
+ "failed": {
4
+ "PyTorch": {
5
+ "unclassified": 0,
6
+ "single": 0,
7
+ "multi": 0
8
+ },
9
+ "TensorFlow": {
10
+ "unclassified": 0,
11
+ "single": 0,
12
+ "multi": 0
13
+ },
14
+ "Flax": {
15
+ "unclassified": 0,
16
+ "single": 0,
17
+ "multi": 0
18
+ },
19
+ "Tokenizers": {
20
+ "unclassified": 0,
21
+ "single": 0,
22
+ "multi": 0
23
+ },
24
+ "Pipelines": {
25
+ "unclassified": 0,
26
+ "single": 0,
27
+ "multi": 0
28
+ },
29
+ "Trainer": {
30
+ "unclassified": 0,
31
+ "single": 0,
32
+ "multi": 0
33
+ },
34
+ "ONNX": {
35
+ "unclassified": 0,
36
+ "single": 0,
37
+ "multi": 0
38
+ },
39
+ "Auto": {
40
+ "unclassified": 0,
41
+ "single": 0,
42
+ "multi": 0
43
+ },
44
+ "Quantization": {
45
+ "unclassified": 0,
46
+ "single": 0,
47
+ "multi": 0
48
+ },
49
+ "Unclassified": {
50
+ "unclassified": 0,
51
+ "single": 0,
52
+ "multi": 0
53
+ }
54
+ },
55
+ "errors": 0,
56
+ "success": 80,
57
+ "skipped": 2,
58
+ "time_spent": "0.99, 2.41, ",
59
+ "failures": {},
60
+ "job_link": {
61
+ "single": "https://github.com/huggingface/transformers/actions/runs/16712966867/job/47301329937",
62
+ "multi": "https://github.com/huggingface/transformers/actions/runs/16712966867/job/47301330183"
63
+ }
64
+ },
65
+ "models_bert": {
66
+ "failed": {
67
+ "PyTorch": {
68
+ "unclassified": 0,
69
+ "single": 0,
70
+ "multi": 0
71
+ },
72
+ "TensorFlow": {
73
+ "unclassified": 0,
74
+ "single": 0,
75
+ "multi": 0
76
+ },
77
+ "Flax": {
78
+ "unclassified": 0,
79
+ "single": 0,
80
+ "multi": 0
81
+ },
82
+ "Tokenizers": {
83
+ "unclassified": 0,
84
+ "single": 0,
85
+ "multi": 0
86
+ },
87
+ "Pipelines": {
88
+ "unclassified": 0,
89
+ "single": 0,
90
+ "multi": 0
91
+ },
92
+ "Trainer": {
93
+ "unclassified": 0,
94
+ "single": 0,
95
+ "multi": 0
96
+ },
97
+ "ONNX": {
98
+ "unclassified": 0,
99
+ "single": 0,
100
+ "multi": 0
101
+ },
102
+ "Auto": {
103
+ "unclassified": 0,
104
+ "single": 0,
105
+ "multi": 0
106
+ },
107
+ "Quantization": {
108
+ "unclassified": 0,
109
+ "single": 0,
110
+ "multi": 0
111
+ },
112
+ "Unclassified": {
113
+ "unclassified": 0,
114
+ "single": 0,
115
+ "multi": 0
116
+ }
117
+ },
118
+ "errors": 0,
119
+ "success": 239,
120
+ "skipped": 111,
121
+ "time_spent": "8.85, 0:01:00, ",
122
+ "failures": {},
123
+ "job_link": {
124
+ "single": "https://github.com/huggingface/transformers/actions/runs/16712966867/job/47301329946",
125
+ "multi": "https://github.com/huggingface/transformers/actions/runs/16712966867/job/47301330199"
126
+ }
127
+ },
128
+ "models_clip": {
129
+ "failed": {
130
+ "PyTorch": {
131
+ "unclassified": 0,
132
+ "single": 0,
133
+ "multi": 0
134
+ },
135
+ "TensorFlow": {
136
+ "unclassified": 0,
137
+ "single": 0,
138
+ "multi": 0
139
+ },
140
+ "Flax": {
141
+ "unclassified": 0,
142
+ "single": 0,
143
+ "multi": 0
144
+ },
145
+ "Tokenizers": {
146
+ "unclassified": 0,
147
+ "single": 0,
148
+ "multi": 0
149
+ },
150
+ "Pipelines": {
151
+ "unclassified": 0,
152
+ "single": 0,
153
+ "multi": 0
154
+ },
155
+ "Trainer": {
156
+ "unclassified": 0,
157
+ "single": 0,
158
+ "multi": 0
159
+ },
160
+ "ONNX": {
161
+ "unclassified": 0,
162
+ "single": 0,
163
+ "multi": 0
164
+ },
165
+ "Auto": {
166
+ "unclassified": 0,
167
+ "single": 0,
168
+ "multi": 0
169
+ },
170
+ "Quantization": {
171
+ "unclassified": 0,
172
+ "single": 0,
173
+ "multi": 0
174
+ },
175
+ "Unclassified": {
176
+ "unclassified": 0,
177
+ "single": 0,
178
+ "multi": 0
179
+ }
180
+ },
181
+ "errors": 0,
182
+ "success": 288,
183
+ "skipped": 590,
184
+ "time_spent": "0:01:55, 0:01:58, ",
185
+ "failures": {},
186
+ "job_link": {
187
+ "multi": "https://github.com/huggingface/transformers/actions/runs/16712966867/job/47301330217",
188
+ "single": "https://github.com/huggingface/transformers/actions/runs/16712966867/job/47301329991"
189
+ }
190
+ },
191
+ "models_detr": {
192
+ "failed": {
193
+ "PyTorch": {
194
+ "unclassified": 0,
195
+ "single": 0,
196
+ "multi": 0
197
+ },
198
+ "TensorFlow": {
199
+ "unclassified": 0,
200
+ "single": 0,
201
+ "multi": 0
202
+ },
203
+ "Flax": {
204
+ "unclassified": 0,
205
+ "single": 0,
206
+ "multi": 0
207
+ },
208
+ "Tokenizers": {
209
+ "unclassified": 0,
210
+ "single": 0,
211
+ "multi": 0
212
+ },
213
+ "Pipelines": {
214
+ "unclassified": 0,
215
+ "single": 0,
216
+ "multi": 0
217
+ },
218
+ "Trainer": {
219
+ "unclassified": 0,
220
+ "single": 0,
221
+ "multi": 0
222
+ },
223
+ "ONNX": {
224
+ "unclassified": 0,
225
+ "single": 0,
226
+ "multi": 0
227
+ },
228
+ "Auto": {
229
+ "unclassified": 0,
230
+ "single": 0,
231
+ "multi": 0
232
+ },
233
+ "Quantization": {
234
+ "unclassified": 0,
235
+ "single": 0,
236
+ "multi": 0
237
+ },
238
+ "Unclassified": {
239
+ "unclassified": 0,
240
+ "single": 0,
241
+ "multi": 0
242
+ }
243
+ },
244
+ "errors": 0,
245
+ "success": 77,
246
+ "skipped": 159,
247
+ "time_spent": "4.40, 6.77, ",
248
+ "failures": {},
249
+ "job_link": {
250
+ "single": "https://github.com/huggingface/transformers/actions/runs/16712966867/job/47301330035",
251
+ "multi": "https://github.com/huggingface/transformers/actions/runs/16712966867/job/47301330267"
252
+ }
253
+ },
254
+ "models_gemma3": {
255
+ "failed": {
256
+ "PyTorch": {
257
+ "unclassified": 0,
258
+ "single": 6,
259
+ "multi": 7
260
+ },
261
+ "TensorFlow": {
262
+ "unclassified": 0,
263
+ "single": 0,
264
+ "multi": 0
265
+ },
266
+ "Flax": {
267
+ "unclassified": 0,
268
+ "single": 0,
269
+ "multi": 0
270
+ },
271
+ "Tokenizers": {
272
+ "unclassified": 0,
273
+ "single": 0,
274
+ "multi": 0
275
+ },
276
+ "Pipelines": {
277
+ "unclassified": 0,
278
+ "single": 0,
279
+ "multi": 0
280
+ },
281
+ "Trainer": {
282
+ "unclassified": 0,
283
+ "single": 0,
284
+ "multi": 0
285
+ },
286
+ "ONNX": {
287
+ "unclassified": 0,
288
+ "single": 0,
289
+ "multi": 0
290
+ },
291
+ "Auto": {
292
+ "unclassified": 0,
293
+ "single": 0,
294
+ "multi": 0
295
+ },
296
+ "Quantization": {
297
+ "unclassified": 0,
298
+ "single": 0,
299
+ "multi": 0
300
+ },
301
+ "Unclassified": {
302
+ "unclassified": 0,
303
+ "single": 0,
304
+ "multi": 0
305
+ }
306
+ },
307
+ "errors": 0,
308
+ "success": 349,
309
+ "skipped": 260,
310
+ "time_spent": "0:11:14, 0:11:08, ",
311
+ "failures": {
312
+ "single": [
313
+ {
314
+ "line": "tests/models/gemma3/test_modeling_gemma3.py::Gemma3IntegrationTest::test_model_1b_text_only",
315
+ "trace": "(line 715) AssertionError: Lists differ: ['Wri[57 chars]s, a silent stream,\\nInto the neural net, a wa[42 chars],\\n'] != ['Wri[57 chars]s, a river deep,\\nWith patterns hidden, secret[46 chars]ing']"
316
+ },
317
+ {
318
+ "line": "tests/models/gemma3/test_modeling_gemma3.py::Gemma3IntegrationTest::test_model_4b_batch",
319
+ "trace": "(line 715) AssertionError: Lists differ: ['use[114 chars]rown cow standing on a sandy beach with clear [264 chars]cow\"] != ['use[114 chars]rown and white cow standing on a sandy beach n[272 chars]ach']"
320
+ },
321
+ {
322
+ "line": "tests/models/gemma3/test_modeling_gemma3.py::Gemma3IntegrationTest::test_model_4b_batch_crops",
323
+ "trace": "(line 715) AssertionError: Lists differ: [\"user\\nYou are a helpful assistant.\\n\\nHe[678 chars]h a'] != ['user\\nYou are a helpful assistant.\\n\\nHe[658 chars]h a']"
324
+ },
325
+ {
326
+ "line": "tests/models/gemma3/test_modeling_gemma3.py::Gemma3IntegrationTest::test_model_4b_bf16",
327
+ "trace": "(line 715) AssertionError: Lists differ: ['use[114 chars]rown cow standing on a sandy beach with clear [55 chars]ike'] != ['use[114 chars]rown and white cow standing on a sandy beach w[68 chars]oks']"
328
+ },
329
+ {
330
+ "line": "tests/models/gemma3/test_modeling_gemma3.py::Gemma3IntegrationTest::test_model_4b_crops",
331
+ "trace": "(line 715) AssertionError: Lists differ: [\"use[251 chars]. There's a blue sky with some white clouds in the background\"] != [\"use[251 chars]. There's a bright blue sky with some white clouds in the\"]"
332
+ },
333
+ {
334
+ "line": "tests/models/gemma3/test_modeling_gemma3.py::Gemma3IntegrationTest::test_model_4b_multiimage",
335
+ "trace": "(line 715) AssertionError: Lists differ: [\"use[122 chars]n\\n**Main Features:**\\n\\n* **Chinese Archway[19 chars]ent\"] != [\"use[122 chars]n\\n**Overall Scene:**\\n\\nIt looks like a stree[18 chars]nt,\"]"
336
+ }
337
+ ],
338
+ "multi": [
339
+ {
340
+ "line": "tests/models/gemma3/test_modeling_gemma3.py::Gemma3Vision2TextModelTest::test_model_parallelism",
341
+ "trace": "(line 925) RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:1 and cuda:0!"
342
+ },
343
+ {
344
+ "line": "tests/models/gemma3/test_modeling_gemma3.py::Gemma3IntegrationTest::test_model_1b_text_only",
345
+ "trace": "(line 715) AssertionError: Lists differ: ['Wri[57 chars]s, a silent stream,\\nInto the neural net, a wa[42 chars],\\n'] != ['Wri[57 chars]s, a river deep,\\nWith patterns hidden, secret[46 chars]ing']"
346
+ },
347
+ {
348
+ "line": "tests/models/gemma3/test_modeling_gemma3.py::Gemma3IntegrationTest::test_model_4b_batch",
349
+ "trace": "(line 715) AssertionError: Lists differ: ['use[114 chars]rown cow standing on a sandy beach with clear [264 chars]cow\"] != ['use[114 chars]rown and white cow standing on a sandy beach n[272 chars]ach']"
350
+ },
351
+ {
352
+ "line": "tests/models/gemma3/test_modeling_gemma3.py::Gemma3IntegrationTest::test_model_4b_batch_crops",
353
+ "trace": "(line 715) AssertionError: Lists differ: [\"user\\nYou are a helpful assistant.\\n\\nHe[678 chars]h a'] != ['user\\nYou are a helpful assistant.\\n\\nHe[658 chars]h a']"
354
+ },
355
+ {
356
+ "line": "tests/models/gemma3/test_modeling_gemma3.py::Gemma3IntegrationTest::test_model_4b_bf16",
357
+ "trace": "(line 715) AssertionError: Lists differ: ['use[114 chars]rown cow standing on a sandy beach with clear [55 chars]ike'] != ['use[114 chars]rown and white cow standing on a sandy beach w[68 chars]oks']"
358
+ },
359
+ {
360
+ "line": "tests/models/gemma3/test_modeling_gemma3.py::Gemma3IntegrationTest::test_model_4b_crops",
361
+ "trace": "(line 715) AssertionError: Lists differ: [\"use[251 chars]. There's a blue sky with some white clouds in the background\"] != [\"use[251 chars]. There's a bright blue sky with some white clouds in the\"]"
362
+ },
363
+ {
364
+ "line": "tests/models/gemma3/test_modeling_gemma3.py::Gemma3IntegrationTest::test_model_4b_multiimage",
365
+ "trace": "(line 715) AssertionError: Lists differ: [\"use[122 chars]n\\n**Main Features:**\\n\\n* **Chinese Archway[19 chars]ent\"] != [\"use[122 chars]n\\n**Overall Scene:**\\n\\nIt looks like a stree[18 chars]nt,\"]"
366
+ }
367
+ ]
368
+ },
369
+ "job_link": {
370
+ "single": "https://github.com/huggingface/transformers/actions/runs/16712966867/job/47301330061",
371
+ "multi": "https://github.com/huggingface/transformers/actions/runs/16712966867/job/47301330319"
372
+ }
373
+ },
374
+ "models_gemma3n": {
375
+ "failed": {
376
+ "PyTorch": {
377
+ "unclassified": 0,
378
+ "single": 0,
379
+ "multi": 0
380
+ },
381
+ "TensorFlow": {
382
+ "unclassified": 0,
383
+ "single": 0,
384
+ "multi": 0
385
+ },
386
+ "Flax": {
387
+ "unclassified": 0,
388
+ "single": 0,
389
+ "multi": 0
390
+ },
391
+ "Tokenizers": {
392
+ "unclassified": 0,
393
+ "single": 0,
394
+ "multi": 0
395
+ },
396
+ "Pipelines": {
397
+ "unclassified": 0,
398
+ "single": 0,
399
+ "multi": 0
400
+ },
401
+ "Trainer": {
402
+ "unclassified": 0,
403
+ "single": 0,
404
+ "multi": 0
405
+ },
406
+ "ONNX": {
407
+ "unclassified": 0,
408
+ "single": 0,
409
+ "multi": 0
410
+ },
411
+ "Auto": {
412
+ "unclassified": 0,
413
+ "single": 0,
414
+ "multi": 0
415
+ },
416
+ "Quantization": {
417
+ "unclassified": 0,
418
+ "single": 0,
419
+ "multi": 0
420
+ },
421
+ "Unclassified": {
422
+ "unclassified": 0,
423
+ "single": 0,
424
+ "multi": 0
425
+ }
426
+ },
427
+ "errors": 0,
428
+ "success": 197,
429
+ "skipped": 635,
430
+ "time_spent": "0:01:06, 0:01:08, ",
431
+ "failures": {},
432
+ "job_link": {
433
+ "multi": "https://github.com/huggingface/transformers/actions/runs/16712966867/job/47301330294",
434
+ "single": "https://github.com/huggingface/transformers/actions/runs/16712966867/job/47301330077"
435
+ }
436
+ },
437
+ "models_got_ocr2": {
438
+ "failed": {
439
+ "PyTorch": {
440
+ "unclassified": 0,
441
+ "single": 0,
442
+ "multi": 0
443
+ },
444
+ "TensorFlow": {
445
+ "unclassified": 0,
446
+ "single": 0,
447
+ "multi": 0
448
+ },
449
+ "Flax": {
450
+ "unclassified": 0,
451
+ "single": 0,
452
+ "multi": 0
453
+ },
454
+ "Tokenizers": {
455
+ "unclassified": 0,
456
+ "single": 0,
457
+ "multi": 0
458
+ },
459
+ "Pipelines": {
460
+ "unclassified": 0,
461
+ "single": 0,
462
+ "multi": 0
463
+ },
464
+ "Trainer": {
465
+ "unclassified": 0,
466
+ "single": 0,
467
+ "multi": 0
468
+ },
469
+ "ONNX": {
470
+ "unclassified": 0,
471
+ "single": 0,
472
+ "multi": 0
473
+ },
474
+ "Auto": {
475
+ "unclassified": 0,
476
+ "single": 0,
477
+ "multi": 0
478
+ },
479
+ "Quantization": {
480
+ "unclassified": 0,
481
+ "single": 0,
482
+ "multi": 0
483
+ },
484
+ "Unclassified": {
485
+ "unclassified": 0,
486
+ "single": 0,
487
+ "multi": 0
488
+ }
489
+ },
490
+ "errors": 0,
491
+ "success": 147,
492
+ "skipped": 163,
493
+ "time_spent": "0:01:03, 0:01:01, ",
494
+ "failures": {},
495
+ "job_link": {
496
+ "multi": "https://github.com/huggingface/transformers/actions/runs/16712966867/job/47301330314",
497
+ "single": "https://github.com/huggingface/transformers/actions/runs/16712966867/job/47301330094"
498
+ }
499
+ },
500
+ "models_gpt2": {
501
+ "failed": {
502
+ "PyTorch": {
503
+ "unclassified": 0,
504
+ "single": 0,
505
+ "multi": 0
506
+ },
507
+ "TensorFlow": {
508
+ "unclassified": 0,
509
+ "single": 0,
510
+ "multi": 0
511
+ },
512
+ "Flax": {
513
+ "unclassified": 0,
514
+ "single": 0,
515
+ "multi": 0
516
+ },
517
+ "Tokenizers": {
518
+ "unclassified": 0,
519
+ "single": 0,
520
+ "multi": 0
521
+ },
522
+ "Pipelines": {
523
+ "unclassified": 0,
524
+ "single": 0,
525
+ "multi": 0
526
+ },
527
+ "Trainer": {
528
+ "unclassified": 0,
529
+ "single": 0,
530
+ "multi": 0
531
+ },
532
+ "ONNX": {
533
+ "unclassified": 0,
534
+ "single": 0,
535
+ "multi": 0
536
+ },
537
+ "Auto": {
538
+ "unclassified": 0,
539
+ "single": 0,
540
+ "multi": 0
541
+ },
542
+ "Quantization": {
543
+ "unclassified": 0,
544
+ "single": 0,
545
+ "multi": 0
546
+ },
547
+ "Unclassified": {
548
+ "unclassified": 0,
549
+ "single": 0,
550
+ "multi": 0
551
+ }
552
+ },
553
+ "errors": 0,
554
+ "success": 249,
555
+ "skipped": 99,
556
+ "time_spent": "0:02:01, 0:01:46, ",
557
+ "failures": {},
558
+ "job_link": {
559
+ "multi": "https://github.com/huggingface/transformers/actions/runs/16712966867/job/47301330311",
560
+ "single": "https://github.com/huggingface/transformers/actions/runs/16712966867/job/47301330113"
561
+ }
562
+ },
563
+ "models_internvl": {
564
+ "failed": {
565
+ "PyTorch": {
566
+ "unclassified": 0,
567
+ "single": 1,
568
+ "multi": 1
569
+ },
570
+ "TensorFlow": {
571
+ "unclassified": 0,
572
+ "single": 0,
573
+ "multi": 0
574
+ },
575
+ "Flax": {
576
+ "unclassified": 0,
577
+ "single": 0,
578
+ "multi": 0
579
+ },
580
+ "Tokenizers": {
581
+ "unclassified": 0,
582
+ "single": 0,
583
+ "multi": 0
584
+ },
585
+ "Pipelines": {
586
+ "unclassified": 0,
587
+ "single": 0,
588
+ "multi": 0
589
+ },
590
+ "Trainer": {
591
+ "unclassified": 0,
592
+ "single": 0,
593
+ "multi": 0
594
+ },
595
+ "ONNX": {
596
+ "unclassified": 0,
597
+ "single": 0,
598
+ "multi": 0
599
+ },
600
+ "Auto": {
601
+ "unclassified": 0,
602
+ "single": 0,
603
+ "multi": 0
604
+ },
605
+ "Quantization": {
606
+ "unclassified": 0,
607
+ "single": 0,
608
+ "multi": 0
609
+ },
610
+ "Unclassified": {
611
+ "unclassified": 0,
612
+ "single": 0,
613
+ "multi": 0
614
+ }
615
+ },
616
+ "errors": 0,
617
+ "success": 253,
618
+ "skipped": 107,
619
+ "time_spent": "0:01:50, 0:02:00, ",
620
+ "failures": {
621
+ "multi": [
622
+ {
623
+ "line": "tests/models/internvl/test_modeling_internvl.py::InternVLLlamaIntegrationTest::test_llama_small_model_integration_forward",
624
+ "trace": "(line 727) AssertionError: False is not true : Actual logits: tensor([ -9.8750, -0.4885, 1.4668, -10.3359, -10.3359], dtype=torch.float16)"
625
+ }
626
+ ],
627
+ "single": [
628
+ {
629
+ "line": "tests/models/internvl/test_modeling_internvl.py::InternVLLlamaIntegrationTest::test_llama_small_model_integration_forward",
630
+ "trace": "(line 727) AssertionError: False is not true : Actual logits: tensor([ -9.8750, -0.4885, 1.4668, -10.3359, -10.3359], dtype=torch.float16)"
631
+ }
632
+ ]
633
+ },
634
+ "job_link": {
635
+ "multi": "https://github.com/huggingface/transformers/actions/runs/16712966867/job/47301330361",
636
+ "single": "https://github.com/huggingface/transformers/actions/runs/16712966867/job/47301330105"
637
+ }
638
+ },
639
+ "models_llama": {
640
+ "failed": {
641
+ "PyTorch": {
642
+ "unclassified": 0,
643
+ "single": 1,
644
+ "multi": 1
645
+ },
646
+ "TensorFlow": {
647
+ "unclassified": 0,
648
+ "single": 0,
649
+ "multi": 0
650
+ },
651
+ "Flax": {
652
+ "unclassified": 0,
653
+ "single": 0,
654
+ "multi": 0
655
+ },
656
+ "Tokenizers": {
657
+ "unclassified": 0,
658
+ "single": 0,
659
+ "multi": 0
660
+ },
661
+ "Pipelines": {
662
+ "unclassified": 0,
663
+ "single": 0,
664
+ "multi": 0
665
+ },
666
+ "Trainer": {
667
+ "unclassified": 0,
668
+ "single": 0,
669
+ "multi": 0
670
+ },
671
+ "ONNX": {
672
+ "unclassified": 0,
673
+ "single": 0,
674
+ "multi": 0
675
+ },
676
+ "Auto": {
677
+ "unclassified": 0,
678
+ "single": 0,
679
+ "multi": 0
680
+ },
681
+ "Quantization": {
682
+ "unclassified": 0,
683
+ "single": 0,
684
+ "multi": 0
685
+ },
686
+ "Unclassified": {
687
+ "unclassified": 0,
688
+ "single": 0,
689
+ "multi": 0
690
+ }
691
+ },
692
+ "errors": 0,
693
+ "success": 235,
694
+ "skipped": 101,
695
+ "time_spent": "0:03:15, 0:02:51, ",
696
+ "failures": {
697
+ "multi": [
698
+ {
699
+ "line": "tests/models/llama/test_modeling_llama.py::LlamaIntegrationTest::test_model_7b_logits_bf16",
700
+ "trace": "(line 727) AssertionError: False is not true"
701
+ }
702
+ ],
703
+ "single": [
704
+ {
705
+ "line": "tests/models/llama/test_modeling_llama.py::LlamaIntegrationTest::test_model_7b_logits_bf16",
706
+ "trace": "(line 727) AssertionError: False is not true"
707
+ }
708
+ ]
709
+ },
710
+ "job_link": {
711
+ "multi": "https://github.com/huggingface/transformers/actions/runs/16712966867/job/47301330531",
712
+ "single": "https://github.com/huggingface/transformers/actions/runs/16712966867/job/47301330138"
713
+ }
714
+ },
715
+ "models_llava": {
716
+ "failed": {
717
+ "PyTorch": {
718
+ "unclassified": 0,
719
+ "single": 1,
720
+ "multi": 1
721
+ },
722
+ "TensorFlow": {
723
+ "unclassified": 0,
724
+ "single": 0,
725
+ "multi": 0
726
+ },
727
+ "Flax": {
728
+ "unclassified": 0,
729
+ "single": 0,
730
+ "multi": 0
731
+ },
732
+ "Tokenizers": {
733
+ "unclassified": 0,
734
+ "single": 0,
735
+ "multi": 0
736
+ },
737
+ "Pipelines": {
738
+ "unclassified": 0,
739
+ "single": 0,
740
+ "multi": 0
741
+ },
742
+ "Trainer": {
743
+ "unclassified": 0,
744
+ "single": 0,
745
+ "multi": 0
746
+ },
747
+ "ONNX": {
748
+ "unclassified": 0,
749
+ "single": 0,
750
+ "multi": 0
751
+ },
752
+ "Auto": {
753
+ "unclassified": 0,
754
+ "single": 0,
755
+ "multi": 0
756
+ },
757
+ "Quantization": {
758
+ "unclassified": 0,
759
+ "single": 0,
760
+ "multi": 0
761
+ },
762
+ "Unclassified": {
763
+ "unclassified": 0,
764
+ "single": 0,
765
+ "multi": 0
766
+ }
767
+ },
768
+ "errors": 0,
769
+ "success": 206,
770
+ "skipped": 124,
771
+ "time_spent": "0:03:58, 0:04:34, ",
772
+ "failures": {
773
+ "multi": [
774
+ {
775
+ "line": "tests/models/llava/test_modeling_llava.py::LlavaForConditionalGenerationIntegrationTest::test_batched_generation",
776
+ "trace": "(line 399) importlib.metadata.PackageNotFoundError: No package metadata was found for bitsandbytes"
777
+ }
778
+ ],
779
+ "single": [
780
+ {
781
+ "line": "tests/models/llava/test_modeling_llava.py::LlavaForConditionalGenerationIntegrationTest::test_batched_generation",
782
+ "trace": "(line 399) importlib.metadata.PackageNotFoundError: No package metadata was found for bitsandbytes"
783
+ }
784
+ ]
785
+ },
786
+ "job_link": {
787
+ "multi": "https://github.com/huggingface/transformers/actions/runs/16712966867/job/47301330406",
788
+ "single": "https://github.com/huggingface/transformers/actions/runs/16712966867/job/47301330161"
789
+ }
790
+ },
791
+ "models_mistral3": {
792
+ "failed": {
793
+ "PyTorch": {
794
+ "unclassified": 0,
795
+ "single": 1,
796
+ "multi": 1
797
+ },
798
+ "TensorFlow": {
799
+ "unclassified": 0,
800
+ "single": 0,
801
+ "multi": 0
802
+ },
803
+ "Flax": {
804
+ "unclassified": 0,
805
+ "single": 0,
806
+ "multi": 0
807
+ },
808
+ "Tokenizers": {
809
+ "unclassified": 0,
810
+ "single": 0,
811
+ "multi": 0
812
+ },
813
+ "Pipelines": {
814
+ "unclassified": 0,
815
+ "single": 0,
816
+ "multi": 0
817
+ },
818
+ "Trainer": {
819
+ "unclassified": 0,
820
+ "single": 0,
821
+ "multi": 0
822
+ },
823
+ "ONNX": {
824
+ "unclassified": 0,
825
+ "single": 0,
826
+ "multi": 0
827
+ },
828
+ "Auto": {
829
+ "unclassified": 0,
830
+ "single": 0,
831
+ "multi": 0
832
+ },
833
+ "Quantization": {
834
+ "unclassified": 0,
835
+ "single": 0,
836
+ "multi": 0
837
+ },
838
+ "Unclassified": {
839
+ "unclassified": 0,
840
+ "single": 0,
841
+ "multi": 0
842
+ }
843
+ },
844
+ "errors": 0,
845
+ "success": 199,
846
+ "skipped": 105,
847
+ "time_spent": "0:04:34, 0:04:39, ",
848
+ "failures": {
849
+ "single": [
850
+ {
851
+ "line": "tests/models/mistral3/test_modeling_mistral3.py::Mistral3IntegrationTest::test_mistral3_integration_generate",
852
+ "trace": "(line 715) AssertionError: 'The [14 chars] two cats lying on a pink surface, which appea[21 chars] bed' != 'The [14 chars] two tabby cats lying on a pink surface, which[23 chars]n or'"
853
+ }
854
+ ],
855
+ "multi": [
856
+ {
857
+ "line": "tests/models/mistral3/test_modeling_mistral3.py::Mistral3IntegrationTest::test_mistral3_integration_generate",
858
+ "trace": "(line 715) AssertionError: 'The [14 chars] two cats lying on a pink surface, which appea[21 chars] bed' != 'The [14 chars] two tabby cats lying on a pink surface, which[23 chars]n or'"
859
+ }
860
+ ]
861
+ },
862
+ "job_link": {
863
+ "single": "https://github.com/huggingface/transformers/actions/runs/16712966867/job/47301330418",
864
+ "multi": "https://github.com/huggingface/transformers/actions/runs/16712966867/job/47301329678"
865
+ }
866
+ },
867
+ "models_modernbert": {
868
+ "failed": {
869
+ "PyTorch": {
870
+ "unclassified": 0,
871
+ "single": 0,
872
+ "multi": 0
873
+ },
874
+ "TensorFlow": {
875
+ "unclassified": 0,
876
+ "single": 0,
877
+ "multi": 0
878
+ },
879
+ "Flax": {
880
+ "unclassified": 0,
881
+ "single": 0,
882
+ "multi": 0
883
+ },
884
+ "Tokenizers": {
885
+ "unclassified": 0,
886
+ "single": 0,
887
+ "multi": 0
888
+ },
889
+ "Pipelines": {
890
+ "unclassified": 0,
891
+ "single": 0,
892
+ "multi": 0
893
+ },
894
+ "Trainer": {
895
+ "unclassified": 0,
896
+ "single": 0,
897
+ "multi": 0
898
+ },
899
+ "ONNX": {
900
+ "unclassified": 0,
901
+ "single": 0,
902
+ "multi": 0
903
+ },
904
+ "Auto": {
905
+ "unclassified": 0,
906
+ "single": 0,
907
+ "multi": 0
908
+ },
909
+ "Quantization": {
910
+ "unclassified": 0,
911
+ "single": 0,
912
+ "multi": 0
913
+ },
914
+ "Unclassified": {
915
+ "unclassified": 0,
916
+ "single": 0,
917
+ "multi": 0
918
+ }
919
+ },
920
+ "errors": 0,
921
+ "success": 142,
922
+ "skipped": 102,
923
+ "time_spent": "0:01:03, 9.02, ",
924
+ "failures": {},
925
+ "job_link": {
926
+ "multi": "https://github.com/huggingface/transformers/actions/runs/16712966867/job/47301329712",
927
+ "single": "https://github.com/huggingface/transformers/actions/runs/16712966867/job/47301330429"
928
+ }
929
+ },
930
+ "models_qwen2": {
931
+ "failed": {
932
+ "PyTorch": {
933
+ "unclassified": 0,
934
+ "single": 1,
935
+ "multi": 1
936
+ },
937
+ "TensorFlow": {
938
+ "unclassified": 0,
939
+ "single": 0,
940
+ "multi": 0
941
+ },
942
+ "Flax": {
943
+ "unclassified": 0,
944
+ "single": 0,
945
+ "multi": 0
946
+ },
947
+ "Tokenizers": {
948
+ "unclassified": 0,
949
+ "single": 0,
950
+ "multi": 0
951
+ },
952
+ "Pipelines": {
953
+ "unclassified": 0,
954
+ "single": 0,
955
+ "multi": 0
956
+ },
957
+ "Trainer": {
958
+ "unclassified": 0,
959
+ "single": 0,
960
+ "multi": 0
961
+ },
962
+ "ONNX": {
963
+ "unclassified": 0,
964
+ "single": 0,
965
+ "multi": 0
966
+ },
967
+ "Auto": {
968
+ "unclassified": 0,
969
+ "single": 0,
970
+ "multi": 0
971
+ },
972
+ "Quantization": {
973
+ "unclassified": 0,
974
+ "single": 0,
975
+ "multi": 0
976
+ },
977
+ "Unclassified": {
978
+ "unclassified": 0,
979
+ "single": 0,
980
+ "multi": 0
981
+ }
982
+ },
983
+ "errors": 0,
984
+ "success": 217,
985
+ "skipped": 113,
986
+ "time_spent": "0:01:08, 0:01:05, ",
987
+ "failures": {
988
+ "multi": [
989
+ {
990
+ "line": "tests/models/qwen2/test_modeling_qwen2.py::Qwen2IntegrationTest::test_export_static_cache",
991
+ "trace": "(line 715) AssertionError: Lists differ: ['My [35 chars], organic, gluten free, vegan, and vegetarian. I love to use'] != ['My [35 chars], organic, gluten free, vegan, and free from preservatives. I']"
992
+ }
993
+ ],
994
+ "single": [
995
+ {
996
+ "line": "tests/models/qwen2/test_modeling_qwen2.py::Qwen2IntegrationTest::test_export_static_cache",
997
+ "trace": "(line 715) AssertionError: Lists differ: ['My [35 chars], organic, gluten free, vegan, and vegetarian. I love to use'] != ['My [35 chars], organic, gluten free, vegan, and free from preservatives. I']"
998
+ }
999
+ ]
1000
+ },
1001
+ "job_link": {
1002
+ "multi": "https://github.com/huggingface/transformers/actions/runs/16712966867/job/47301329761",
1003
+ "single": "https://github.com/huggingface/transformers/actions/runs/16712966867/job/47301330508"
1004
+ }
1005
+ },
1006
+ "models_qwen2_5_omni": {
1007
+ "failed": {
1008
+ "PyTorch": {
1009
+ "unclassified": 0,
1010
+ "single": 2,
1011
+ "multi": 2
1012
+ },
1013
+ "TensorFlow": {
1014
+ "unclassified": 0,
1015
+ "single": 0,
1016
+ "multi": 0
1017
+ },
1018
+ "Flax": {
1019
+ "unclassified": 0,
1020
+ "single": 0,
1021
+ "multi": 0
1022
+ },
1023
+ "Tokenizers": {
1024
+ "unclassified": 0,
1025
+ "single": 0,
1026
+ "multi": 0
1027
+ },
1028
+ "Pipelines": {
1029
+ "unclassified": 0,
1030
+ "single": 0,
1031
+ "multi": 0
1032
+ },
1033
+ "Trainer": {
1034
+ "unclassified": 0,
1035
+ "single": 0,
1036
+ "multi": 0
1037
+ },
1038
+ "ONNX": {
1039
+ "unclassified": 0,
1040
+ "single": 0,
1041
+ "multi": 0
1042
+ },
1043
+ "Auto": {
1044
+ "unclassified": 0,
1045
+ "single": 0,
1046
+ "multi": 0
1047
+ },
1048
+ "Quantization": {
1049
+ "unclassified": 0,
1050
+ "single": 0,
1051
+ "multi": 0
1052
+ },
1053
+ "Unclassified": {
1054
+ "unclassified": 0,
1055
+ "single": 0,
1056
+ "multi": 0
1057
+ }
1058
+ },
1059
+ "errors": 0,
1060
+ "success": 167,
1061
+ "skipped": 141,
1062
+ "time_spent": "0:02:23, 0:01:53, ",
1063
+ "failures": {
1064
+ "multi": [
1065
+ {
1066
+ "line": "tests/models/qwen2_5_omni/test_modeling_qwen2_5_omni.py::Qwen2_5OmniThinkerForConditionalGenerationModelTest::test_model_parallelism",
1067
+ "trace": "(line 715) AssertionError: Items in the second set but not the first:"
1068
+ },
1069
+ {
1070
+ "line": "tests/models/qwen2_5_omni/test_modeling_qwen2_5_omni.py::Qwen2_5OmniModelIntegrationTest::test_small_model_integration_test_batch",
1071
+ "trace": "(line 715) AssertionError: Lists differ: [\"sys[293 chars]s shattering, and the dog appears to be a Labrador Retriever.\"] != [\"sys[293 chars]s shattering, and the dog is a Labrador Retriever.\"]"
1072
+ }
1073
+ ],
1074
+ "single": [
1075
+ {
1076
+ "line": "tests/models/qwen2_5_omni/test_modeling_qwen2_5_omni.py::Qwen2_5OmniModelIntegrationTest::test_small_model_integration_test",
1077
+ "trace": "(line 700) requests.exceptions.ConnectionError: HTTPSConnectionPool(host='qianwen-res.oss-accelerate-overseas.aliyuncs.com', port=443): Max retries exceeded with url: /Qwen2-VL/demo_small.jpg (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x7cb8c91d02f0>: Failed to establish a new connection: [Errno -2] Name or service not known'))"
1078
+ },
1079
+ {
1080
+ "line": "tests/models/qwen2_5_omni/test_modeling_qwen2_5_omni.py::Qwen2_5OmniModelIntegrationTest::test_small_model_integration_test_batch",
1081
+ "trace": "(line 715) AssertionError: Lists differ: [\"sys[109 chars]d is a glass shattering, and the dog is a Labr[187 chars]er.\"] != [\"sys[109 chars]d is glass shattering, and the dog is a Labrad[185 chars]er.\"]"
1082
+ }
1083
+ ]
1084
+ },
1085
+ "job_link": {
1086
+ "multi": "https://github.com/huggingface/transformers/actions/runs/16712966867/job/47301329806",
1087
+ "single": "https://github.com/huggingface/transformers/actions/runs/16712966867/job/47301330503"
1088
+ }
1089
+ },
1090
+ "models_qwen2_5_vl": {
1091
+ "failed": {
1092
+ "PyTorch": {
1093
+ "unclassified": 0,
1094
+ "single": 1,
1095
+ "multi": 1
1096
+ },
1097
+ "TensorFlow": {
1098
+ "unclassified": 0,
1099
+ "single": 0,
1100
+ "multi": 0
1101
+ },
1102
+ "Flax": {
1103
+ "unclassified": 0,
1104
+ "single": 0,
1105
+ "multi": 0
1106
+ },
1107
+ "Tokenizers": {
1108
+ "unclassified": 0,
1109
+ "single": 0,
1110
+ "multi": 0
1111
+ },
1112
+ "Pipelines": {
1113
+ "unclassified": 0,
1114
+ "single": 0,
1115
+ "multi": 0
1116
+ },
1117
+ "Trainer": {
1118
+ "unclassified": 0,
1119
+ "single": 0,
1120
+ "multi": 0
1121
+ },
1122
+ "ONNX": {
1123
+ "unclassified": 0,
1124
+ "single": 0,
1125
+ "multi": 0
1126
+ },
1127
+ "Auto": {
1128
+ "unclassified": 0,
1129
+ "single": 0,
1130
+ "multi": 0
1131
+ },
1132
+ "Quantization": {
1133
+ "unclassified": 0,
1134
+ "single": 0,
1135
+ "multi": 0
1136
+ },
1137
+ "Unclassified": {
1138
+ "unclassified": 0,
1139
+ "single": 0,
1140
+ "multi": 0
1141
+ }
1142
+ },
1143
+ "errors": 0,
1144
+ "success": 205,
1145
+ "skipped": 113,
1146
+ "time_spent": "0:02:32, 0:02:29, ",
1147
+ "failures": {
1148
+ "multi": [
1149
+ {
1150
+ "line": "tests/models/qwen2_5_vl/test_modeling_qwen2_5_vl.py::Qwen2_5_VLIntegrationTest::test_small_model_integration_test_batch_different_resolutions",
1151
+ "trace": "(line 715) AssertionError: Lists differ: ['sys[314 chars]ion\\n addCriterion\\n\\n addCriterion\\n\\n addCri[75 chars]n\\n'] != ['sys[314 chars]ion\\nThe dog in the picture appears to be a La[81 chars] is']"
1152
+ }
1153
+ ],
1154
+ "single": [
1155
+ {
1156
+ "line": "tests/models/qwen2_5_vl/test_modeling_qwen2_5_vl.py::Qwen2_5_VLIntegrationTest::test_small_model_integration_test_batch_different_resolutions",
1157
+ "trace": "(line 715) AssertionError: Lists differ: ['sys[314 chars]ion\\n addCriterion\\n\\n addCriterion\\n\\n addCri[75 chars]n\\n'] != ['sys[314 chars]ion\\nThe dog in the picture appears to be a La[81 chars] is']"
1158
+ }
1159
+ ]
1160
+ },
1161
+ "job_link": {
1162
+ "multi": "https://github.com/huggingface/transformers/actions/runs/16712966867/job/47301329760",
1163
+ "single": "https://github.com/huggingface/transformers/actions/runs/16712966867/job/47301330498"
1164
+ }
1165
+ },
1166
+ "models_smolvlm": {
1167
+ "failed": {
1168
+ "PyTorch": {
1169
+ "unclassified": 0,
1170
+ "single": 0,
1171
+ "multi": 0
1172
+ },
1173
+ "TensorFlow": {
1174
+ "unclassified": 0,
1175
+ "single": 0,
1176
+ "multi": 0
1177
+ },
1178
+ "Flax": {
1179
+ "unclassified": 0,
1180
+ "single": 0,
1181
+ "multi": 0
1182
+ },
1183
+ "Tokenizers": {
1184
+ "unclassified": 0,
1185
+ "single": 0,
1186
+ "multi": 0
1187
+ },
1188
+ "Pipelines": {
1189
+ "unclassified": 0,
1190
+ "single": 0,
1191
+ "multi": 0
1192
+ },
1193
+ "Trainer": {
1194
+ "unclassified": 0,
1195
+ "single": 0,
1196
+ "multi": 0
1197
+ },
1198
+ "ONNX": {
1199
+ "unclassified": 0,
1200
+ "single": 0,
1201
+ "multi": 0
1202
+ },
1203
+ "Auto": {
1204
+ "unclassified": 0,
1205
+ "single": 0,
1206
+ "multi": 0
1207
+ },
1208
+ "Quantization": {
1209
+ "unclassified": 0,
1210
+ "single": 0,
1211
+ "multi": 0
1212
+ },
1213
+ "Unclassified": {
1214
+ "unclassified": 0,
1215
+ "single": 0,
1216
+ "multi": 0
1217
+ }
1218
+ },
1219
+ "errors": 0,
1220
+ "success": 323,
1221
+ "skipped": 231,
1222
+ "time_spent": "0:01:08, 0:01:13, ",
1223
+ "failures": {},
1224
+ "job_link": {
1225
+ "single": "https://github.com/huggingface/transformers/actions/runs/16712966867/job/47301330553",
1226
+ "multi": "https://github.com/huggingface/transformers/actions/runs/16712966867/job/47301329835"
1227
+ }
1228
+ },
1229
+ "models_t5": {
1230
+ "failed": {
1231
+ "PyTorch": {
1232
+ "unclassified": 0,
1233
+ "single": 2,
1234
+ "multi": 3
1235
+ },
1236
+ "TensorFlow": {
1237
+ "unclassified": 0,
1238
+ "single": 0,
1239
+ "multi": 0
1240
+ },
1241
+ "Flax": {
1242
+ "unclassified": 0,
1243
+ "single": 0,
1244
+ "multi": 0
1245
+ },
1246
+ "Tokenizers": {
1247
+ "unclassified": 0,
1248
+ "single": 0,
1249
+ "multi": 0
1250
+ },
1251
+ "Pipelines": {
1252
+ "unclassified": 0,
1253
+ "single": 0,
1254
+ "multi": 0
1255
+ },
1256
+ "Trainer": {
1257
+ "unclassified": 0,
1258
+ "single": 0,
1259
+ "multi": 0
1260
+ },
1261
+ "ONNX": {
1262
+ "unclassified": 0,
1263
+ "single": 0,
1264
+ "multi": 0
1265
+ },
1266
+ "Auto": {
1267
+ "unclassified": 0,
1268
+ "single": 0,
1269
+ "multi": 0
1270
+ },
1271
+ "Quantization": {
1272
+ "unclassified": 0,
1273
+ "single": 0,
1274
+ "multi": 0
1275
+ },
1276
+ "Unclassified": {
1277
+ "unclassified": 0,
1278
+ "single": 0,
1279
+ "multi": 0
1280
+ }
1281
+ },
1282
+ "errors": 0,
1283
+ "success": 254,
1284
+ "skipped": 325,
1285
+ "time_spent": "0:01:50, 0:01:40, ",
1286
+ "failures": {
1287
+ "multi": [
1288
+ {
1289
+ "line": "tests/models/t5/test_modeling_t5.py::T5ModelTest::test_multi_gpu_data_parallel_forward",
1290
+ "trace": "(line 131) TypeError: EncoderDecoderCache.__init__() missing 1 required positional argument: 'cross_attention_cache'"
1291
+ },
1292
+ {
1293
+ "line": "tests/models/t5/test_modeling_t5.py::T5ModelIntegrationTests::test_export_t5_summarization",
1294
+ "trace": "(line 687) AttributeError: 'dict' object has no attribute 'batch_size'"
1295
+ },
1296
+ {
1297
+ "line": "tests/models/t5/test_modeling_t5.py::T5ModelIntegrationTests::test_small_integration_test",
1298
+ "trace": "(line 727) AssertionError: False is not true"
1299
+ }
1300
+ ],
1301
+ "single": [
1302
+ {
1303
+ "line": "tests/models/t5/test_modeling_t5.py::T5ModelIntegrationTests::test_export_t5_summarization",
1304
+ "trace": "(line 687) AttributeError: 'dict' object has no attribute 'batch_size'"
1305
+ },
1306
+ {
1307
+ "line": "tests/models/t5/test_modeling_t5.py::T5ModelIntegrationTests::test_small_integration_test",
1308
+ "trace": "(line 727) AssertionError: False is not true"
1309
+ }
1310
+ ]
1311
+ },
1312
+ "job_link": {
1313
+ "multi": "https://github.com/huggingface/transformers/actions/runs/16712966867/job/47301329815",
1314
+ "single": "https://github.com/huggingface/transformers/actions/runs/16712966867/job/47301330559"
1315
+ }
1316
+ },
1317
+ "models_vit": {
1318
+ "failed": {
1319
+ "PyTorch": {
1320
+ "unclassified": 0,
1321
+ "single": 0,
1322
+ "multi": 0
1323
+ },
1324
+ "TensorFlow": {
1325
+ "unclassified": 0,
1326
+ "single": 0,
1327
+ "multi": 0
1328
+ },
1329
+ "Flax": {
1330
+ "unclassified": 0,
1331
+ "single": 0,
1332
+ "multi": 0
1333
+ },
1334
+ "Tokenizers": {
1335
+ "unclassified": 0,
1336
+ "single": 0,
1337
+ "multi": 0
1338
+ },
1339
+ "Pipelines": {
1340
+ "unclassified": 0,
1341
+ "single": 0,
1342
+ "multi": 0
1343
+ },
1344
+ "Trainer": {
1345
+ "unclassified": 0,
1346
+ "single": 0,
1347
+ "multi": 0
1348
+ },
1349
+ "ONNX": {
1350
+ "unclassified": 0,
1351
+ "single": 0,
1352
+ "multi": 0
1353
+ },
1354
+ "Auto": {
1355
+ "unclassified": 0,
1356
+ "single": 0,
1357
+ "multi": 0
1358
+ },
1359
+ "Quantization": {
1360
+ "unclassified": 0,
1361
+ "single": 0,
1362
+ "multi": 0
1363
+ },
1364
+ "Unclassified": {
1365
+ "unclassified": 0,
1366
+ "single": 0,
1367
+ "multi": 0
1368
+ }
1369
+ },
1370
+ "errors": 0,
1371
+ "success": 135,
1372
+ "skipped": 93,
1373
+ "time_spent": "9.85, 7.74, ",
1374
+ "failures": {},
1375
+ "job_link": {
1376
+ "multi": "https://github.com/huggingface/transformers/actions/runs/16712966867/job/47301329875",
1377
+ "single": "https://github.com/huggingface/transformers/actions/runs/16712966867/job/47301330596"
1378
+ }
1379
+ },
1380
+ "models_wav2vec2": {
1381
+ "failed": {
1382
+ "PyTorch": {
1383
+ "unclassified": 0,
1384
+ "single": 0,
1385
+ "multi": 0
1386
+ },
1387
+ "TensorFlow": {
1388
+ "unclassified": 0,
1389
+ "single": 0,
1390
+ "multi": 0
1391
+ },
1392
+ "Flax": {
1393
+ "unclassified": 0,
1394
+ "single": 0,
1395
+ "multi": 0
1396
+ },
1397
+ "Tokenizers": {
1398
+ "unclassified": 0,
1399
+ "single": 0,
1400
+ "multi": 0
1401
+ },
1402
+ "Pipelines": {
1403
+ "unclassified": 0,
1404
+ "single": 0,
1405
+ "multi": 0
1406
+ },
1407
+ "Trainer": {
1408
+ "unclassified": 0,
1409
+ "single": 0,
1410
+ "multi": 0
1411
+ },
1412
+ "ONNX": {
1413
+ "unclassified": 0,
1414
+ "single": 0,
1415
+ "multi": 0
1416
+ },
1417
+ "Auto": {
1418
+ "unclassified": 0,
1419
+ "single": 0,
1420
+ "multi": 0
1421
+ },
1422
+ "Quantization": {
1423
+ "unclassified": 0,
1424
+ "single": 0,
1425
+ "multi": 0
1426
+ },
1427
+ "Unclassified": {
1428
+ "unclassified": 0,
1429
+ "single": 0,
1430
+ "multi": 0
1431
+ }
1432
+ },
1433
+ "errors": 0,
1434
+ "success": 292,
1435
+ "skipped": 246,
1436
+ "time_spent": "0:01:56, 0:01:54, ",
1437
+ "failures": {},
1438
+ "job_link": {
1439
+ "multi": "https://github.com/huggingface/transformers/actions/runs/16712966867/job/47301329877",
1440
+ "single": "https://github.com/huggingface/transformers/actions/runs/16712966867/job/47301330632"
1441
+ }
1442
+ },
1443
+ "models_whisper": {
1444
+ "failed": {
1445
+ "PyTorch": {
1446
+ "unclassified": 0,
1447
+ "single": 40,
1448
+ "multi": 42
1449
+ },
1450
+ "TensorFlow": {
1451
+ "unclassified": 0,
1452
+ "single": 0,
1453
+ "multi": 0
1454
+ },
1455
+ "Flax": {
1456
+ "unclassified": 0,
1457
+ "single": 0,
1458
+ "multi": 0
1459
+ },
1460
+ "Tokenizers": {
1461
+ "unclassified": 0,
1462
+ "single": 0,
1463
+ "multi": 0
1464
+ },
1465
+ "Pipelines": {
1466
+ "unclassified": 0,
1467
+ "single": 0,
1468
+ "multi": 0
1469
+ },
1470
+ "Trainer": {
1471
+ "unclassified": 0,
1472
+ "single": 0,
1473
+ "multi": 0
1474
+ },
1475
+ "ONNX": {
1476
+ "unclassified": 0,
1477
+ "single": 0,
1478
+ "multi": 0
1479
+ },
1480
+ "Auto": {
1481
+ "unclassified": 0,
1482
+ "single": 0,
1483
+ "multi": 0
1484
+ },
1485
+ "Quantization": {
1486
+ "unclassified": 0,
1487
+ "single": 0,
1488
+ "multi": 0
1489
+ },
1490
+ "Unclassified": {
1491
+ "unclassified": 0,
1492
+ "single": 0,
1493
+ "multi": 0
1494
+ }
1495
+ },
1496
+ "errors": 0,
1497
+ "success": 537,
1498
+ "skipped": 337,
1499
+ "time_spent": "0:03:23, 0:03:02, ",
1500
+ "failures": {
1501
+ "single": [
1502
+ {
1503
+ "line": "tests/models/whisper/test_modeling_whisper.py::WhisperModelIntegrationTests::test_distil_token_timestamp_generation",
1504
+ "trace": "(line 2938) Failed: (subprocess)"
1505
+ },
1506
+ {
1507
+ "line": "tests/models/whisper/test_modeling_whisper.py::WhisperModelIntegrationTests::test_generate_with_forced_decoder_ids",
1508
+ "trace": "(line 2938) Failed: (subprocess)"
1509
+ },
1510
+ {
1511
+ "line": "tests/models/whisper/test_modeling_whisper.py::WhisperModelIntegrationTests::test_generate_with_prompt_ids",
1512
+ "trace": "(line 2938) Failed: (subprocess)"
1513
+ },
1514
+ {
1515
+ "line": "tests/models/whisper/test_modeling_whisper.py::WhisperModelIntegrationTests::test_generate_with_prompt_ids_task_language",
1516
+ "trace": "(line 172) ImportError: To support decoding audio data, please install 'torchcodec'."
1517
+ },
1518
+ {
1519
+ "line": "tests/models/whisper/test_modeling_whisper.py::WhisperModelIntegrationTests::test_language_detection",
1520
+ "trace": "(line 172) ImportError: To support decoding audio data, please install 'torchcodec'."
1521
+ },
1522
+ {
1523
+ "line": "tests/models/whisper/test_modeling_whisper.py::WhisperModelIntegrationTests::test_large_batched_generation",
1524
+ "trace": "(line 172) ImportError: To support decoding audio data, please install 'torchcodec'."
1525
+ },
1526
+ {
1527
+ "line": "tests/models/whisper/test_modeling_whisper.py::WhisperModelIntegrationTests::test_large_batched_generation_multilingual",
1528
+ "trace": "(line 172) ImportError: To support decoding audio data, please install 'torchcodec'."
1529
+ },
1530
+ {
1531
+ "line": "tests/models/whisper/test_modeling_whisper.py::WhisperModelIntegrationTests::test_large_generation",
1532
+ "trace": "(line 172) ImportError: To support decoding audio data, please install 'torchcodec'."
1533
+ },
1534
+ {
1535
+ "line": "tests/models/whisper/test_modeling_whisper.py::WhisperModelIntegrationTests::test_large_generation_multilingual",
1536
+ "trace": "(line 172) ImportError: To support decoding audio data, please install 'torchcodec'."
1537
+ },
1538
+ {
1539
+ "line": "tests/models/whisper/test_modeling_whisper.py::WhisperModelIntegrationTests::test_large_logits_librispeech",
1540
+ "trace": "(line 172) ImportError: To support decoding audio data, please install 'torchcodec'."
1541
+ },
1542
+ {
1543
+ "line": "tests/models/whisper/test_modeling_whisper.py::WhisperModelIntegrationTests::test_large_timestamp_generation",
1544
+ "trace": "(line 172) ImportError: To support decoding audio data, please install 'torchcodec'."
1545
+ },
1546
+ {
1547
+ "line": "tests/models/whisper/test_modeling_whisper.py::WhisperModelIntegrationTests::test_small_en_logits_librispeech",
1548
+ "trace": "(line 172) ImportError: To support decoding audio data, please install 'torchcodec'."
1549
+ },
1550
+ {
1551
+ "line": "tests/models/whisper/test_modeling_whisper.py::WhisperModelIntegrationTests::test_small_longform_timestamps_generation",
1552
+ "trace": "(line 172) ImportError: To support decoding audio data, please install 'torchcodec'."
1553
+ },
1554
+ {
1555
+ "line": "tests/models/whisper/test_modeling_whisper.py::WhisperModelIntegrationTests::test_small_token_timestamp_generation",
1556
+ "trace": "(line 172) ImportError: To support decoding audio data, please install 'torchcodec'."
1557
+ },
1558
+ {
1559
+ "line": "tests/models/whisper/test_modeling_whisper.py::WhisperModelIntegrationTests::test_speculative_decoding_distil",
1560
+ "trace": "(line 172) ImportError: To support decoding audio data, please install 'torchcodec'."
1561
+ },
1562
+ {
1563
+ "line": "tests/models/whisper/test_modeling_whisper.py::WhisperModelIntegrationTests::test_speculative_decoding_non_distil",
1564
+ "trace": "(line 172) ImportError: To support decoding audio data, please install 'torchcodec'."
1565
+ },
1566
+ {
1567
+ "line": "tests/models/whisper/test_modeling_whisper.py::WhisperModelIntegrationTests::test_tiny_en_batched_generation",
1568
+ "trace": "(line 172) ImportError: To support decoding audio data, please install 'torchcodec'."
1569
+ },
1570
+ {
1571
+ "line": "tests/models/whisper/test_modeling_whisper.py::WhisperModelIntegrationTests::test_tiny_en_generation",
1572
+ "trace": "(line 172) ImportError: To support decoding audio data, please install 'torchcodec'."
1573
+ },
1574
+ {
1575
+ "line": "tests/models/whisper/test_modeling_whisper.py::WhisperModelIntegrationTests::test_tiny_generation",
1576
+ "trace": "(line 172) ImportError: To support decoding audio data, please install 'torchcodec'."
1577
+ },
1578
+ {
1579
+ "line": "tests/models/whisper/test_modeling_whisper.py::WhisperModelIntegrationTests::test_tiny_logits_librispeech",
1580
+ "trace": "(line 172) ImportError: To support decoding audio data, please install 'torchcodec'."
1581
+ },
1582
+ {
1583
+ "line": "tests/models/whisper/test_modeling_whisper.py::WhisperModelIntegrationTests::test_tiny_longform_timestamps_generation",
1584
+ "trace": "(line 172) ImportError: To support decoding audio data, please install 'torchcodec'."
1585
+ },
1586
+ {
1587
+ "line": "tests/models/whisper/test_modeling_whisper.py::WhisperModelIntegrationTests::test_tiny_specaugment_librispeech",
1588
+ "trace": "(line 172) ImportError: To support decoding audio data, please install 'torchcodec'."
1589
+ },
1590
+ {
1591
+ "line": "tests/models/whisper/test_modeling_whisper.py::WhisperModelIntegrationTests::test_tiny_static_generation",
1592
+ "trace": "(line 172) ImportError: To support decoding audio data, please install 'torchcodec'."
1593
+ },
1594
+ {
1595
+ "line": "tests/models/whisper/test_modeling_whisper.py::WhisperModelIntegrationTests::test_tiny_static_generation_long_form",
1596
+ "trace": "(line 172) ImportError: To support decoding audio data, please install 'torchcodec'."
1597
+ },
1598
+ {
1599
+ "line": "tests/models/whisper/test_modeling_whisper.py::WhisperModelIntegrationTests::test_tiny_timestamp_generation",
1600
+ "trace": "(line 172) ImportError: To support decoding audio data, please install 'torchcodec'."
1601
+ },
1602
+ {
1603
+ "line": "tests/models/whisper/test_modeling_whisper.py::WhisperModelIntegrationTests::test_tiny_token_timestamp_batch_generation",
1604
+ "trace": "(line 172) ImportError: To support decoding audio data, please install 'torchcodec'."
1605
+ },
1606
+ {
1607
+ "line": "tests/models/whisper/test_modeling_whisper.py::WhisperModelIntegrationTests::test_tiny_token_timestamp_generation",
1608
+ "trace": "(line 172) ImportError: To support decoding audio data, please install 'torchcodec'."
1609
+ },
1610
+ {
1611
+ "line": "tests/models/whisper/test_modeling_whisper.py::WhisperModelIntegrationTests::test_tiny_token_timestamp_generation_longform",
1612
+ "trace": "(line 172) ImportError: To support decoding audio data, please install 'torchcodec'."
1613
+ },
1614
+ {
1615
+ "line": "tests/models/whisper/test_modeling_whisper.py::WhisperModelIntegrationTests::test_whisper_empty_longform",
1616
+ "trace": "(line 172) ImportError: To support decoding audio data, please install 'torchcodec'."
1617
+ },
1618
+ {
1619
+ "line": "tests/models/whisper/test_modeling_whisper.py::WhisperModelIntegrationTests::test_whisper_longform_multi_batch",
1620
+ "trace": "(line 172) ImportError: To support decoding audio data, please install 'torchcodec'."
1621
+ },
1622
+ {
1623
+ "line": "tests/models/whisper/test_modeling_whisper.py::WhisperModelIntegrationTests::test_whisper_longform_multi_batch_hard",
1624
+ "trace": "(line 172) ImportError: To support decoding audio data, please install 'torchcodec'."
1625
+ },
1626
+ {
1627
+ "line": "tests/models/whisper/test_modeling_whisper.py::WhisperModelIntegrationTests::test_whisper_longform_multi_batch_hard_prev_cond",
1628
+ "trace": "(line 172) ImportError: To support decoding audio data, please install 'torchcodec'."
1629
+ },
1630
+ {
1631
+ "line": "tests/models/whisper/test_modeling_whisper.py::WhisperModelIntegrationTests::test_whisper_longform_multi_batch_prev_cond",
1632
+ "trace": "(line 172) ImportError: To support decoding audio data, please install 'torchcodec'."
1633
+ },
1634
+ {
1635
+ "line": "tests/models/whisper/test_modeling_whisper.py::WhisperModelIntegrationTests::test_whisper_longform_no_speech_detection",
1636
+ "trace": "(line 172) ImportError: To support decoding audio data, please install 'torchcodec'."
1637
+ },
1638
+ {
1639
+ "line": "tests/models/whisper/test_modeling_whisper.py::WhisperModelIntegrationTests::test_whisper_longform_prompt_ids",
1640
+ "trace": "(line 172) ImportError: To support decoding audio data, please install 'torchcodec'."
1641
+ },
1642
+ {
1643
+ "line": "tests/models/whisper/test_modeling_whisper.py::WhisperModelIntegrationTests::test_whisper_longform_single_batch",
1644
+ "trace": "(line 172) ImportError: To support decoding audio data, please install 'torchcodec'."
1645
+ },
1646
+ {
1647
+ "line": "tests/models/whisper/test_modeling_whisper.py::WhisperModelIntegrationTests::test_whisper_longform_single_batch_beam",
1648
+ "trace": "(line 172) ImportError: To support decoding audio data, please install 'torchcodec'."
1649
+ },
1650
+ {
1651
+ "line": "tests/models/whisper/test_modeling_whisper.py::WhisperModelIntegrationTests::test_whisper_longform_single_batch_prev_cond",
1652
+ "trace": "(line 172) ImportError: To support decoding audio data, please install 'torchcodec'."
1653
+ },
1654
+ {
1655
+ "line": "tests/models/whisper/test_modeling_whisper.py::WhisperModelIntegrationTests::test_whisper_shortform_multi_batch_hard_prev_cond",
1656
+ "trace": "(line 172) ImportError: To support decoding audio data, please install 'torchcodec'."
1657
+ },
1658
+ {
1659
+ "line": "tests/models/whisper/test_modeling_whisper.py::WhisperModelIntegrationTests::test_whisper_shortform_single_batch_prev_cond",
1660
+ "trace": "(line 172) ImportError: To support decoding audio data, please install 'torchcodec'."
1661
+ }
1662
+ ],
1663
+ "multi": [
1664
+ {
1665
+ "line": "tests/models/whisper/test_modeling_whisper.py::WhisperModelTest::test_multi_gpu_data_parallel_forward",
1666
+ "trace": "(line 2938) Failed: (subprocess)"
1667
+ },
1668
+ {
1669
+ "line": "tests/models/whisper/test_modeling_whisper.py::WhisperModelIntegrationTests::test_distil_token_timestamp_generation",
1670
+ "trace": "(line 2938) Failed: (subprocess)"
1671
+ },
1672
+ {
1673
+ "line": "tests/models/whisper/test_modeling_whisper.py::WhisperModelIntegrationTests::test_generate_with_forced_decoder_ids",
1674
+ "trace": "(line 2938) Failed: (subprocess)"
1675
+ },
1676
+ {
1677
+ "line": "tests/models/whisper/test_modeling_whisper.py::WhisperModelIntegrationTests::test_generate_with_prompt_ids",
1678
+ "trace": "(line 131) TypeError: EncoderDecoderCache.__init__() missing 1 required positional argument: 'cross_attention_cache'"
1679
+ },
1680
+ {
1681
+ "line": "tests/models/whisper/test_modeling_whisper.py::WhisperModelIntegrationTests::test_generate_with_prompt_ids_task_language",
1682
+ "trace": "(line 172) ImportError: To support decoding audio data, please install 'torchcodec'."
1683
+ },
1684
+ {
1685
+ "line": "tests/models/whisper/test_modeling_whisper.py::WhisperModelIntegrationTests::test_language_detection",
1686
+ "trace": "(line 172) ImportError: To support decoding audio data, please install 'torchcodec'."
1687
+ },
1688
+ {
1689
+ "line": "tests/models/whisper/test_modeling_whisper.py::WhisperModelIntegrationTests::test_large_batched_generation",
1690
+ "trace": "(line 172) ImportError: To support decoding audio data, please install 'torchcodec'."
1691
+ },
1692
+ {
1693
+ "line": "tests/models/whisper/test_modeling_whisper.py::WhisperModelIntegrationTests::test_large_batched_generation_multilingual",
1694
+ "trace": "(line 172) ImportError: To support decoding audio data, please install 'torchcodec'."
1695
+ },
1696
+ {
1697
+ "line": "tests/models/whisper/test_modeling_whisper.py::WhisperModelIntegrationTests::test_large_generation",
1698
+ "trace": "(line 172) ImportError: To support decoding audio data, please install 'torchcodec'."
1699
+ },
1700
+ {
1701
+ "line": "tests/models/whisper/test_modeling_whisper.py::WhisperModelIntegrationTests::test_large_generation_multilingual",
1702
+ "trace": "(line 172) ImportError: To support decoding audio data, please install 'torchcodec'."
1703
+ },
1704
+ {
1705
+ "line": "tests/models/whisper/test_modeling_whisper.py::WhisperModelIntegrationTests::test_large_logits_librispeech",
1706
+ "trace": "(line 172) ImportError: To support decoding audio data, please install 'torchcodec'."
1707
+ },
1708
+ {
1709
+ "line": "tests/models/whisper/test_modeling_whisper.py::WhisperModelIntegrationTests::test_large_timestamp_generation",
1710
+ "trace": "(line 172) ImportError: To support decoding audio data, please install 'torchcodec'."
1711
+ },
1712
+ {
1713
+ "line": "tests/models/whisper/test_modeling_whisper.py::WhisperModelIntegrationTests::test_small_en_logits_librispeech",
1714
+ "trace": "(line 172) ImportError: To support decoding audio data, please install 'torchcodec'."
1715
+ },
1716
+ {
1717
+ "line": "tests/models/whisper/test_modeling_whisper.py::WhisperModelIntegrationTests::test_small_longform_timestamps_generation",
1718
+ "trace": "(line 172) ImportError: To support decoding audio data, please install 'torchcodec'."
1719
+ },
1720
+ {
1721
+ "line": "tests/models/whisper/test_modeling_whisper.py::WhisperModelIntegrationTests::test_small_token_timestamp_generation",
1722
+ "trace": "(line 172) ImportError: To support decoding audio data, please install 'torchcodec'."
1723
+ },
1724
+ {
1725
+ "line": "tests/models/whisper/test_modeling_whisper.py::WhisperModelIntegrationTests::test_speculative_decoding_distil",
1726
+ "trace": "(line 172) ImportError: To support decoding audio data, please install 'torchcodec'."
1727
+ },
1728
+ {
1729
+ "line": "tests/models/whisper/test_modeling_whisper.py::WhisperModelIntegrationTests::test_speculative_decoding_non_distil",
1730
+ "trace": "(line 172) ImportError: To support decoding audio data, please install 'torchcodec'."
1731
+ },
1732
+ {
1733
+ "line": "tests/models/whisper/test_modeling_whisper.py::WhisperModelIntegrationTests::test_tiny_en_batched_generation",
1734
+ "trace": "(line 172) ImportError: To support decoding audio data, please install 'torchcodec'."
1735
+ },
1736
+ {
1737
+ "line": "tests/models/whisper/test_modeling_whisper.py::WhisperModelIntegrationTests::test_tiny_en_generation",
1738
+ "trace": "(line 172) ImportError: To support decoding audio data, please install 'torchcodec'."
1739
+ },
1740
+ {
1741
+ "line": "tests/models/whisper/test_modeling_whisper.py::WhisperModelIntegrationTests::test_tiny_generation",
1742
+ "trace": "(line 172) ImportError: To support decoding audio data, please install 'torchcodec'."
1743
+ },
1744
+ {
1745
+ "line": "tests/models/whisper/test_modeling_whisper.py::WhisperModelIntegrationTests::test_tiny_logits_librispeech",
1746
+ "trace": "(line 172) ImportError: To support decoding audio data, please install 'torchcodec'."
1747
+ },
1748
+ {
1749
+ "line": "tests/models/whisper/test_modeling_whisper.py::WhisperModelIntegrationTests::test_tiny_longform_timestamps_generation",
1750
+ "trace": "(line 172) ImportError: To support decoding audio data, please install 'torchcodec'."
1751
+ },
1752
+ {
1753
+ "line": "tests/models/whisper/test_modeling_whisper.py::WhisperModelIntegrationTests::test_tiny_specaugment_librispeech",
1754
+ "trace": "(line 172) ImportError: To support decoding audio data, please install 'torchcodec'."
1755
+ },
1756
+ {
1757
+ "line": "tests/models/whisper/test_modeling_whisper.py::WhisperModelIntegrationTests::test_tiny_static_generation",
1758
+ "trace": "(line 172) ImportError: To support decoding audio data, please install 'torchcodec'."
1759
+ },
1760
+ {
1761
+ "line": "tests/models/whisper/test_modeling_whisper.py::WhisperModelIntegrationTests::test_tiny_static_generation_long_form",
1762
+ "trace": "(line 172) ImportError: To support decoding audio data, please install 'torchcodec'."
1763
+ },
1764
+ {
1765
+ "line": "tests/models/whisper/test_modeling_whisper.py::WhisperModelIntegrationTests::test_tiny_timestamp_generation",
1766
+ "trace": "(line 172) ImportError: To support decoding audio data, please install 'torchcodec'."
1767
+ },
1768
+ {
1769
+ "line": "tests/models/whisper/test_modeling_whisper.py::WhisperModelIntegrationTests::test_tiny_token_timestamp_batch_generation",
1770
+ "trace": "(line 172) ImportError: To support decoding audio data, please install 'torchcodec'."
1771
+ },
1772
+ {
1773
+ "line": "tests/models/whisper/test_modeling_whisper.py::WhisperModelIntegrationTests::test_tiny_token_timestamp_generation",
1774
+ "trace": "(line 172) ImportError: To support decoding audio data, please install 'torchcodec'."
1775
+ },
1776
+ {
1777
+ "line": "tests/models/whisper/test_modeling_whisper.py::WhisperModelIntegrationTests::test_tiny_token_timestamp_generation_longform",
1778
+ "trace": "(line 172) ImportError: To support decoding audio data, please install 'torchcodec'."
1779
+ },
1780
+ {
1781
+ "line": "tests/models/whisper/test_modeling_whisper.py::WhisperModelIntegrationTests::test_whisper_empty_longform",
1782
+ "trace": "(line 172) ImportError: To support decoding audio data, please install 'torchcodec'."
1783
+ },
1784
+ {
1785
+ "line": "tests/models/whisper/test_modeling_whisper.py::WhisperModelIntegrationTests::test_whisper_empty_longform_multi_gpu",
1786
+ "trace": "(line 172) ImportError: To support decoding audio data, please install 'torchcodec'."
1787
+ },
1788
+ {
1789
+ "line": "tests/models/whisper/test_modeling_whisper.py::WhisperModelIntegrationTests::test_whisper_longform_multi_batch",
1790
+ "trace": "(line 172) ImportError: To support decoding audio data, please install 'torchcodec'."
1791
+ },
1792
+ {
1793
+ "line": "tests/models/whisper/test_modeling_whisper.py::WhisperModelIntegrationTests::test_whisper_longform_multi_batch_hard",
1794
+ "trace": "(line 172) ImportError: To support decoding audio data, please install 'torchcodec'."
1795
+ },
1796
+ {
1797
+ "line": "tests/models/whisper/test_modeling_whisper.py::WhisperModelIntegrationTests::test_whisper_longform_multi_batch_hard_prev_cond",
1798
+ "trace": "(line 172) ImportError: To support decoding audio data, please install 'torchcodec'."
1799
+ },
1800
+ {
1801
+ "line": "tests/models/whisper/test_modeling_whisper.py::WhisperModelIntegrationTests::test_whisper_longform_multi_batch_prev_cond",
1802
+ "trace": "(line 172) ImportError: To support decoding audio data, please install 'torchcodec'."
1803
+ },
1804
+ {
1805
+ "line": "tests/models/whisper/test_modeling_whisper.py::WhisperModelIntegrationTests::test_whisper_longform_no_speech_detection",
1806
+ "trace": "(line 172) ImportError: To support decoding audio data, please install 'torchcodec'."
1807
+ },
1808
+ {
1809
+ "line": "tests/models/whisper/test_modeling_whisper.py::WhisperModelIntegrationTests::test_whisper_longform_prompt_ids",
1810
+ "trace": "(line 172) ImportError: To support decoding audio data, please install 'torchcodec'."
1811
+ },
1812
+ {
1813
+ "line": "tests/models/whisper/test_modeling_whisper.py::WhisperModelIntegrationTests::test_whisper_longform_single_batch",
1814
+ "trace": "(line 172) ImportError: To support decoding audio data, please install 'torchcodec'."
1815
+ },
1816
+ {
1817
+ "line": "tests/models/whisper/test_modeling_whisper.py::WhisperModelIntegrationTests::test_whisper_longform_single_batch_beam",
1818
+ "trace": "(line 172) ImportError: To support decoding audio data, please install 'torchcodec'."
1819
+ },
1820
+ {
1821
+ "line": "tests/models/whisper/test_modeling_whisper.py::WhisperModelIntegrationTests::test_whisper_longform_single_batch_prev_cond",
1822
+ "trace": "(line 172) ImportError: To support decoding audio data, please install 'torchcodec'."
1823
+ },
1824
+ {
1825
+ "line": "tests/models/whisper/test_modeling_whisper.py::WhisperModelIntegrationTests::test_whisper_shortform_multi_batch_hard_prev_cond",
1826
+ "trace": "(line 172) ImportError: To support decoding audio data, please install 'torchcodec'."
1827
+ },
1828
+ {
1829
+ "line": "tests/models/whisper/test_modeling_whisper.py::WhisperModelIntegrationTests::test_whisper_shortform_single_batch_prev_cond",
1830
+ "trace": "(line 172) ImportError: To support decoding audio data, please install 'torchcodec'."
1831
+ }
1832
+ ]
1833
+ },
1834
+ "job_link": {
1835
+ "single": "https://github.com/huggingface/transformers/actions/runs/16712966867/job/47301330636",
1836
+ "multi": "https://github.com/huggingface/transformers/actions/runs/16712966867/job/47301329883"
1837
+ }
1838
+ }
1839
+ }
sample_nvidia.json ADDED
@@ -0,0 +1,1475 @@
1
+ {
2
+ "models_auto": {
3
+ "failed": {
4
+ "PyTorch": {
5
+ "unclassified": 0,
6
+ "single": 0,
7
+ "multi": 0
8
+ },
9
+ "TensorFlow": {
10
+ "unclassified": 0,
11
+ "single": 0,
12
+ "multi": 0
13
+ },
14
+ "Flax": {
15
+ "unclassified": 0,
16
+ "single": 0,
17
+ "multi": 0
18
+ },
19
+ "Tokenizers": {
20
+ "unclassified": 0,
21
+ "single": 0,
22
+ "multi": 0
23
+ },
24
+ "Pipelines": {
25
+ "unclassified": 0,
26
+ "single": 0,
27
+ "multi": 0
28
+ },
29
+ "Trainer": {
30
+ "unclassified": 0,
31
+ "single": 0,
32
+ "multi": 0
33
+ },
34
+ "ONNX": {
35
+ "unclassified": 0,
36
+ "single": 0,
37
+ "multi": 0
38
+ },
39
+ "Auto": {
40
+ "unclassified": 0,
41
+ "single": 0,
42
+ "multi": 0
43
+ },
44
+ "Quantization": {
45
+ "unclassified": 0,
46
+ "single": 0,
47
+ "multi": 0
48
+ },
49
+ "Unclassified": {
50
+ "unclassified": 0,
51
+ "single": 0,
52
+ "multi": 0
53
+ }
54
+ },
55
+ "errors": 0,
56
+ "success": 226,
57
+ "skipped": 10,
58
+ "time_spent": "3.79, 5.93, ",
59
+ "failures": {},
60
+ "job_link": {
61
+ "single": "https://github.com/huggingface/transformers/actions/runs/16712955100/job/47301215208",
62
+ "multi": "https://github.com/huggingface/transformers/actions/runs/16712955100/job/47301215147"
63
+ }
64
+ },
65
+ "models_bert": {
66
+ "failed": {
67
+ "PyTorch": {
68
+ "unclassified": 0,
69
+ "single": 0,
70
+ "multi": 0
71
+ },
72
+ "TensorFlow": {
73
+ "unclassified": 0,
74
+ "single": 0,
75
+ "multi": 0
76
+ },
77
+ "Flax": {
78
+ "unclassified": 0,
79
+ "single": 0,
80
+ "multi": 0
81
+ },
82
+ "Tokenizers": {
83
+ "unclassified": 0,
84
+ "single": 0,
85
+ "multi": 0
86
+ },
87
+ "Pipelines": {
88
+ "unclassified": 0,
89
+ "single": 0,
90
+ "multi": 0
91
+ },
92
+ "Trainer": {
93
+ "unclassified": 0,
94
+ "single": 0,
95
+ "multi": 0
96
+ },
97
+ "ONNX": {
98
+ "unclassified": 0,
99
+ "single": 0,
100
+ "multi": 0
101
+ },
102
+ "Auto": {
103
+ "unclassified": 0,
104
+ "single": 0,
105
+ "multi": 0
106
+ },
107
+ "Quantization": {
108
+ "unclassified": 0,
109
+ "single": 0,
110
+ "multi": 0
111
+ },
112
+ "Unclassified": {
113
+ "unclassified": 0,
114
+ "single": 0,
115
+ "multi": 0
116
+ }
117
+ },
118
+ "errors": 0,
119
+ "success": 527,
120
+ "skipped": 211,
121
+ "time_spent": "0:01:47, 0:01:50, ",
122
+ "failures": {},
123
+ "job_link": {
124
+ "single": "https://github.com/huggingface/transformers/actions/runs/16712955100/job/47301215196",
125
+ "multi": "https://github.com/huggingface/transformers/actions/runs/16712955100/job/47301215175"
126
+ }
127
+ },
128
+ "models_clip": {
129
+ "failed": {
130
+ "PyTorch": {
131
+ "unclassified": 0,
132
+ "single": 0,
133
+ "multi": 0
134
+ },
135
+ "TensorFlow": {
136
+ "unclassified": 0,
137
+ "single": 0,
138
+ "multi": 0
139
+ },
140
+ "Flax": {
141
+ "unclassified": 0,
142
+ "single": 0,
143
+ "multi": 0
144
+ },
145
+ "Tokenizers": {
146
+ "unclassified": 0,
147
+ "single": 0,
148
+ "multi": 0
149
+ },
150
+ "Pipelines": {
151
+ "unclassified": 0,
152
+ "single": 0,
153
+ "multi": 0
154
+ },
155
+ "Trainer": {
156
+ "unclassified": 0,
157
+ "single": 0,
158
+ "multi": 0
159
+ },
160
+ "ONNX": {
161
+ "unclassified": 0,
162
+ "single": 0,
163
+ "multi": 0
164
+ },
165
+ "Auto": {
166
+ "unclassified": 0,
167
+ "single": 0,
168
+ "multi": 0
169
+ },
170
+ "Quantization": {
171
+ "unclassified": 0,
172
+ "single": 0,
173
+ "multi": 0
174
+ },
175
+ "Unclassified": {
176
+ "unclassified": 0,
177
+ "single": 0,
178
+ "multi": 0
179
+ }
180
+ },
181
+ "errors": 0,
182
+ "success": 660,
183
+ "skipped": 934,
184
+ "time_spent": "0:02:15, 0:02:11, ",
185
+ "failures": {},
186
+ "job_link": {
187
+ "multi": "https://github.com/huggingface/transformers/actions/runs/16712955100/job/47301215674",
188
+ "single": "https://github.com/huggingface/transformers/actions/runs/16712955100/job/47301215699"
189
+ }
190
+ },
191
+ "models_detr": {
192
+ "failed": {
193
+ "PyTorch": {
194
+ "unclassified": 0,
195
+ "single": 0,
196
+ "multi": 0
197
+ },
198
+ "TensorFlow": {
199
+ "unclassified": 0,
200
+ "single": 0,
201
+ "multi": 0
202
+ },
203
+ "Flax": {
204
+ "unclassified": 0,
205
+ "single": 0,
206
+ "multi": 0
207
+ },
208
+ "Tokenizers": {
209
+ "unclassified": 0,
210
+ "single": 0,
211
+ "multi": 0
212
+ },
213
+ "Pipelines": {
214
+ "unclassified": 0,
215
+ "single": 0,
216
+ "multi": 0
217
+ },
218
+ "Trainer": {
219
+ "unclassified": 0,
220
+ "single": 0,
221
+ "multi": 0
222
+ },
223
+ "ONNX": {
224
+ "unclassified": 0,
225
+ "single": 0,
226
+ "multi": 0
227
+ },
228
+ "Auto": {
229
+ "unclassified": 0,
230
+ "single": 0,
231
+ "multi": 0
232
+ },
233
+ "Quantization": {
234
+ "unclassified": 0,
235
+ "single": 0,
236
+ "multi": 0
237
+ },
238
+ "Unclassified": {
239
+ "unclassified": 0,
240
+ "single": 0,
241
+ "multi": 0
242
+ }
243
+ },
244
+ "errors": 0,
245
+ "success": 177,
246
+ "skipped": 271,
247
+ "time_spent": "0:01:07, 0:01:11, ",
248
+ "failures": {},
249
+ "job_link": {
250
+ "single": "https://github.com/huggingface/transformers/actions/runs/16712955100/job/47301216030",
251
+ "multi": "https://github.com/huggingface/transformers/actions/runs/16712955100/job/47301216008"
252
+ }
253
+ },
254
+ "models_gemma3": {
255
+ "failed": {
256
+ "PyTorch": {
257
+ "unclassified": 0,
258
+ "single": 0,
259
+ "multi": 1
260
+ },
261
+ "TensorFlow": {
262
+ "unclassified": 0,
263
+ "single": 0,
264
+ "multi": 0
265
+ },
266
+ "Flax": {
267
+ "unclassified": 0,
268
+ "single": 0,
269
+ "multi": 0
270
+ },
271
+ "Tokenizers": {
272
+ "unclassified": 0,
273
+ "single": 0,
274
+ "multi": 0
275
+ },
276
+ "Pipelines": {
277
+ "unclassified": 0,
278
+ "single": 0,
279
+ "multi": 0
280
+ },
281
+ "Trainer": {
282
+ "unclassified": 0,
283
+ "single": 0,
284
+ "multi": 0
285
+ },
286
+ "ONNX": {
287
+ "unclassified": 0,
288
+ "single": 0,
289
+ "multi": 0
290
+ },
291
+ "Auto": {
292
+ "unclassified": 0,
293
+ "single": 0,
294
+ "multi": 0
295
+ },
296
+ "Quantization": {
297
+ "unclassified": 0,
298
+ "single": 0,
299
+ "multi": 0
300
+ },
301
+ "Unclassified": {
302
+ "unclassified": 0,
303
+ "single": 0,
304
+ "multi": 0
305
+ }
306
+ },
307
+ "errors": 0,
308
+ "success": 507,
309
+ "skipped": 320,
310
+ "time_spent": "0:09:30, 0:09:28, ",
311
+ "failures": {
312
+ "multi": [
313
+ {
314
+ "line": "tests/models/gemma3/test_modeling_gemma3.py::Gemma3Vision2TextModelTest::test_model_parallelism",
315
+ "trace": "(line 925) RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:1 and cuda:0!"
316
+ }
317
+ ]
318
+ },
319
+ "job_link": {
320
+ "single": "https://github.com/huggingface/transformers/actions/runs/16712955100/job/47301216642",
321
+ "multi": "https://github.com/huggingface/transformers/actions/runs/16712955100/job/47301216593"
322
+ }
323
+ },
324
+ "models_gemma3n": {
325
+ "failed": {
326
+ "PyTorch": {
327
+ "unclassified": 0,
328
+ "single": 1,
329
+ "multi": 0
330
+ },
331
+ "TensorFlow": {
332
+ "unclassified": 0,
333
+ "single": 0,
334
+ "multi": 0
335
+ },
336
+ "Flax": {
337
+ "unclassified": 0,
338
+ "single": 0,
339
+ "multi": 0
340
+ },
341
+ "Tokenizers": {
342
+ "unclassified": 0,
343
+ "single": 0,
344
+ "multi": 0
345
+ },
346
+ "Pipelines": {
347
+ "unclassified": 0,
348
+ "single": 0,
349
+ "multi": 0
350
+ },
351
+ "Trainer": {
352
+ "unclassified": 0,
353
+ "single": 0,
354
+ "multi": 0
355
+ },
356
+ "ONNX": {
357
+ "unclassified": 0,
358
+ "single": 0,
359
+ "multi": 0
360
+ },
361
+ "Auto": {
362
+ "unclassified": 0,
363
+ "single": 0,
364
+ "multi": 0
365
+ },
366
+ "Quantization": {
367
+ "unclassified": 0,
368
+ "single": 0,
369
+ "multi": 0
370
+ },
371
+ "Unclassified": {
372
+ "unclassified": 0,
373
+ "single": 0,
374
+ "multi": 0
375
+ }
376
+ },
377
+ "errors": 0,
378
+ "success": 288,
379
+ "skipped": 703,
380
+ "time_spent": "0:02:15, 0:02:15, ",
381
+ "failures": {
382
+ "single": [
383
+ {
384
+ "line": "tests/models/gemma3n/test_modeling_gemma3n.py::Gemma3nTextModelTest::test_sdpa_padding_matches_padding_free_with_position_ids",
385
+ "trace": "(line 4243) AssertionError: Tensor-likes are not close!"
386
+ }
387
+ ]
388
+ },
389
+ "job_link": {
390
+ "multi": "https://github.com/huggingface/transformers/actions/runs/16712955100/job/47301216605",
391
+ "single": "https://github.com/huggingface/transformers/actions/runs/16712955100/job/47301216660"
392
+ }
393
+ },
394
+ "models_got_ocr2": {
395
+ "failed": {
396
+ "PyTorch": {
397
+ "unclassified": 0,
398
+ "single": 0,
399
+ "multi": 0
400
+ },
401
+ "TensorFlow": {
402
+ "unclassified": 0,
403
+ "single": 0,
404
+ "multi": 0
405
+ },
406
+ "Flax": {
407
+ "unclassified": 0,
408
+ "single": 0,
409
+ "multi": 0
410
+ },
411
+ "Tokenizers": {
412
+ "unclassified": 0,
413
+ "single": 0,
414
+ "multi": 0
415
+ },
416
+ "Pipelines": {
417
+ "unclassified": 0,
418
+ "single": 0,
419
+ "multi": 0
420
+ },
421
+ "Trainer": {
422
+ "unclassified": 0,
423
+ "single": 0,
424
+ "multi": 0
425
+ },
426
+ "ONNX": {
427
+ "unclassified": 0,
428
+ "single": 0,
429
+ "multi": 0
430
+ },
431
+ "Auto": {
432
+ "unclassified": 0,
433
+ "single": 0,
434
+ "multi": 0
435
+ },
436
+ "Quantization": {
437
+ "unclassified": 0,
438
+ "single": 0,
439
+ "multi": 0
440
+ },
441
+ "Unclassified": {
442
+ "unclassified": 0,
443
+ "single": 0,
444
+ "multi": 0
445
+ }
446
+ },
447
+ "errors": 0,
448
+ "success": 257,
449
+ "skipped": 333,
450
+ "time_spent": "0:01:49, 0:01:49, ",
451
+ "failures": {},
452
+ "job_link": {
453
+ "multi": "https://github.com/huggingface/transformers/actions/runs/16712955100/job/47301216911",
454
+ "single": "https://github.com/huggingface/transformers/actions/runs/16712955100/job/47301216742"
455
+ }
456
+ },
457
+ "models_gpt2": {
458
+ "failed": {
459
+ "PyTorch": {
460
+ "unclassified": 0,
461
+ "single": 0,
462
+ "multi": 0
463
+ },
464
+ "TensorFlow": {
465
+ "unclassified": 0,
466
+ "single": 0,
467
+ "multi": 0
468
+ },
469
+ "Flax": {
470
+ "unclassified": 0,
471
+ "single": 0,
472
+ "multi": 0
473
+ },
474
+ "Tokenizers": {
475
+ "unclassified": 0,
476
+ "single": 0,
477
+ "multi": 0
478
+ },
479
+ "Pipelines": {
480
+ "unclassified": 0,
481
+ "single": 0,
482
+ "multi": 0
483
+ },
484
+ "Trainer": {
485
+ "unclassified": 0,
486
+ "single": 0,
487
+ "multi": 0
488
+ },
489
+ "ONNX": {
490
+ "unclassified": 0,
491
+ "single": 0,
492
+ "multi": 0
493
+ },
494
+ "Auto": {
495
+ "unclassified": 0,
496
+ "single": 0,
497
+ "multi": 0
498
+ },
499
+ "Quantization": {
500
+ "unclassified": 0,
501
+ "single": 0,
502
+ "multi": 0
503
+ },
504
+ "Unclassified": {
505
+ "unclassified": 0,
506
+ "single": 0,
507
+ "multi": 0
508
+ }
509
+ },
510
+ "errors": 0,
511
+ "success": 487,
512
+ "skipped": 229,
513
+ "time_spent": "0:02:11, 0:02:01, ",
514
+ "failures": {},
515
+ "job_link": {
516
+ "multi": "https://github.com/huggingface/transformers/actions/runs/16712955100/job/47301216717",
517
+ "single": "https://github.com/huggingface/transformers/actions/runs/16712955100/job/47301216759"
518
+ }
519
+ },
520
+ "models_internvl": {
521
+ "failed": {
522
+ "PyTorch": {
523
+ "unclassified": 0,
524
+ "single": 1,
525
+ "multi": 1
526
+ },
527
+ "TensorFlow": {
528
+ "unclassified": 0,
529
+ "single": 0,
530
+ "multi": 0
531
+ },
532
+ "Flax": {
533
+ "unclassified": 0,
534
+ "single": 0,
535
+ "multi": 0
536
+ },
537
+ "Tokenizers": {
538
+ "unclassified": 0,
539
+ "single": 0,
540
+ "multi": 0
541
+ },
542
+ "Pipelines": {
543
+ "unclassified": 0,
544
+ "single": 0,
545
+ "multi": 0
546
+ },
547
+ "Trainer": {
548
+ "unclassified": 0,
549
+ "single": 0,
550
+ "multi": 0
551
+ },
552
+ "ONNX": {
553
+ "unclassified": 0,
554
+ "single": 0,
555
+ "multi": 0
556
+ },
557
+ "Auto": {
558
+ "unclassified": 0,
559
+ "single": 0,
560
+ "multi": 0
561
+ },
562
+ "Quantization": {
563
+ "unclassified": 0,
564
+ "single": 0,
565
+ "multi": 0
566
+ },
567
+ "Unclassified": {
568
+ "unclassified": 0,
569
+ "single": 0,
570
+ "multi": 0
571
+ }
572
+ },
573
+ "errors": 0,
574
+ "success": 355,
575
+ "skipped": 241,
576
+ "time_spent": "0:04:33, 0:04:31, ",
577
+ "failures": {
578
+ "multi": [
579
+ {
580
+ "line": "tests/models/internvl/test_modeling_internvl.py::InternVLModelTest::test_flex_attention_with_grads",
581
+ "trace": "(line 439) torch._inductor.exc.InductorError: RuntimeError: No valid triton configs. OutOfResources: out of resource: shared memory, Required: 106496, Hardware limit: 101376. Reducing block sizes or `num_stages` may help."
582
+ }
583
+ ],
584
+ "single": [
585
+ {
586
+ "line": "tests/models/internvl/test_modeling_internvl.py::InternVLModelTest::test_flex_attention_with_grads",
587
+ "trace": "(line 439) torch._inductor.exc.InductorError: RuntimeError: No valid triton configs. OutOfResources: out of resource: shared memory, Required: 106496, Hardware limit: 101376. Reducing block sizes or `num_stages` may help."
588
+ }
589
+ ]
590
+ },
591
+ "job_link": {
592
+ "multi": "https://github.com/huggingface/transformers/actions/runs/16712955100/job/47301217017",
593
+ "single": "https://github.com/huggingface/transformers/actions/runs/16712955100/job/47301217056"
594
+ }
595
+ },
596
+ "models_llama": {
597
+ "failed": {
598
+ "PyTorch": {
599
+ "unclassified": 0,
600
+ "single": 0,
601
+ "multi": 0
602
+ },
603
+ "TensorFlow": {
604
+ "unclassified": 0,
605
+ "single": 0,
606
+ "multi": 0
607
+ },
608
+ "Flax": {
609
+ "unclassified": 0,
610
+ "single": 0,
611
+ "multi": 0
612
+ },
613
+ "Tokenizers": {
614
+ "unclassified": 0,
615
+ "single": 0,
616
+ "multi": 0
617
+ },
618
+ "Pipelines": {
619
+ "unclassified": 0,
620
+ "single": 0,
621
+ "multi": 0
622
+ },
623
+ "Trainer": {
624
+ "unclassified": 0,
625
+ "single": 0,
626
+ "multi": 0
627
+ },
628
+ "ONNX": {
629
+ "unclassified": 0,
630
+ "single": 0,
631
+ "multi": 0
632
+ },
633
+ "Auto": {
634
+ "unclassified": 0,
635
+ "single": 0,
636
+ "multi": 0
637
+ },
638
+ "Quantization": {
639
+ "unclassified": 0,
640
+ "single": 0,
641
+ "multi": 0
642
+ },
643
+ "Unclassified": {
644
+ "unclassified": 0,
645
+ "single": 0,
646
+ "multi": 0
647
+ }
648
+ },
649
+ "errors": 0,
650
+ "success": 481,
651
+ "skipped": 253,
652
+ "time_spent": "0:03:43, 0:03:37, ",
653
+ "failures": {},
654
+ "job_link": {
655
+ "multi": "https://github.com/huggingface/transformers/actions/runs/16712955100/job/47301217239",
656
+ "single": "https://github.com/huggingface/transformers/actions/runs/16712955100/job/47301217242"
657
+ }
658
+ },
659
+ "models_llava": {
660
+ "failed": {
661
+ "PyTorch": {
662
+ "unclassified": 0,
663
+ "single": 0,
664
+ "multi": 0
665
+ },
666
+ "TensorFlow": {
667
+ "unclassified": 0,
668
+ "single": 0,
669
+ "multi": 0
670
+ },
671
+ "Flax": {
672
+ "unclassified": 0,
673
+ "single": 0,
674
+ "multi": 0
675
+ },
676
+ "Tokenizers": {
677
+ "unclassified": 0,
678
+ "single": 0,
679
+ "multi": 0
680
+ },
681
+ "Pipelines": {
682
+ "unclassified": 0,
683
+ "single": 0,
684
+ "multi": 0
685
+ },
686
+ "Trainer": {
687
+ "unclassified": 0,
688
+ "single": 0,
689
+ "multi": 0
690
+ },
691
+ "ONNX": {
692
+ "unclassified": 0,
693
+ "single": 0,
694
+ "multi": 0
695
+ },
696
+ "Auto": {
697
+ "unclassified": 0,
698
+ "single": 0,
699
+ "multi": 0
700
+ },
701
+ "Quantization": {
702
+ "unclassified": 0,
703
+ "single": 0,
704
+ "multi": 0
705
+ },
706
+ "Unclassified": {
707
+ "unclassified": 0,
708
+ "single": 0,
709
+ "multi": 0
710
+ }
711
+ },
712
+ "errors": 0,
713
+ "success": 349,
714
+ "skipped": 159,
715
+ "time_spent": "0:08:59, 0:09:11, ",
716
+ "failures": {},
717
+ "job_link": {
718
+ "multi": "https://github.com/huggingface/transformers/actions/runs/16712955100/job/47301217250",
719
+ "single": "https://github.com/huggingface/transformers/actions/runs/16712955100/job/47301217263"
720
+ }
721
+ },
722
+ "models_mistral3": {
723
+ "failed": {
724
+ "PyTorch": {
725
+ "unclassified": 0,
726
+ "single": 0,
727
+ "multi": 0
728
+ },
729
+ "TensorFlow": {
730
+ "unclassified": 0,
731
+ "single": 0,
732
+ "multi": 0
733
+ },
734
+ "Flax": {
735
+ "unclassified": 0,
736
+ "single": 0,
737
+ "multi": 0
738
+ },
739
+ "Tokenizers": {
740
+ "unclassified": 0,
741
+ "single": 0,
742
+ "multi": 0
743
+ },
744
+ "Pipelines": {
745
+ "unclassified": 0,
746
+ "single": 0,
747
+ "multi": 0
748
+ },
749
+ "Trainer": {
750
+ "unclassified": 0,
751
+ "single": 0,
752
+ "multi": 0
753
+ },
754
+ "ONNX": {
755
+ "unclassified": 0,
756
+ "single": 0,
757
+ "multi": 0
758
+ },
759
+ "Auto": {
760
+ "unclassified": 0,
761
+ "single": 0,
762
+ "multi": 0
763
+ },
764
+ "Quantization": {
765
+ "unclassified": 0,
766
+ "single": 0,
767
+ "multi": 0
768
+ },
769
+ "Unclassified": {
770
+ "unclassified": 0,
771
+ "single": 0,
772
+ "multi": 0
773
+ }
774
+ },
775
+ "errors": 0,
776
+ "success": 283,
777
+ "skipped": 267,
778
+ "time_spent": "0:09:53, 0:09:40, ",
779
+ "failures": {},
780
+ "job_link": {
781
+ "single": "https://github.com/huggingface/transformers/actions/runs/16712955100/job/47301215108",
782
+ "multi": "https://github.com/huggingface/transformers/actions/runs/16712955100/job/47301215124"
783
+ }
784
+ },
785
+ "models_modernbert": {
786
+ "failed": {
787
+ "PyTorch": {
788
+ "unclassified": 0,
789
+ "single": 0,
790
+ "multi": 0
791
+ },
792
+ "TensorFlow": {
793
+ "unclassified": 0,
794
+ "single": 0,
795
+ "multi": 0
796
+ },
797
+ "Flax": {
798
+ "unclassified": 0,
799
+ "single": 0,
800
+ "multi": 0
801
+ },
802
+ "Tokenizers": {
803
+ "unclassified": 0,
804
+ "single": 0,
805
+ "multi": 0
806
+ },
807
+ "Pipelines": {
808
+ "unclassified": 0,
809
+ "single": 0,
810
+ "multi": 0
811
+ },
812
+ "Trainer": {
813
+ "unclassified": 0,
814
+ "single": 0,
815
+ "multi": 0
816
+ },
817
+ "ONNX": {
818
+ "unclassified": 0,
819
+ "single": 0,
820
+ "multi": 0
821
+ },
822
+ "Auto": {
823
+ "unclassified": 0,
824
+ "single": 0,
825
+ "multi": 0
826
+ },
827
+ "Quantization": {
828
+ "unclassified": 0,
829
+ "single": 0,
830
+ "multi": 0
831
+ },
832
+ "Unclassified": {
833
+ "unclassified": 0,
834
+ "single": 0,
835
+ "multi": 0
836
+ }
837
+ },
838
+ "errors": 0,
839
+ "success": 174,
840
+ "skipped": 218,
841
+ "time_spent": "0:01:27, 0:01:24, ",
842
+ "failures": {},
843
+ "job_link": {
844
+ "multi": "https://github.com/huggingface/transformers/actions/runs/16712955100/job/47301215158",
845
+ "single": "https://github.com/huggingface/transformers/actions/runs/16712955100/job/47301215123"
846
+ }
847
+ },
848
+ "models_qwen2": {
849
+ "failed": {
850
+ "PyTorch": {
851
+ "unclassified": 0,
852
+ "single": 0,
853
+ "multi": 0
854
+ },
855
+ "TensorFlow": {
856
+ "unclassified": 0,
857
+ "single": 0,
858
+ "multi": 0
859
+ },
860
+ "Flax": {
861
+ "unclassified": 0,
862
+ "single": 0,
863
+ "multi": 0
864
+ },
865
+ "Tokenizers": {
866
+ "unclassified": 0,
867
+ "single": 0,
868
+ "multi": 0
869
+ },
870
+ "Pipelines": {
871
+ "unclassified": 0,
872
+ "single": 0,
873
+ "multi": 0
874
+ },
875
+ "Trainer": {
876
+ "unclassified": 0,
877
+ "single": 0,
878
+ "multi": 0
879
+ },
880
+ "ONNX": {
881
+ "unclassified": 0,
882
+ "single": 0,
883
+ "multi": 0
884
+ },
885
+ "Auto": {
886
+ "unclassified": 0,
887
+ "single": 0,
888
+ "multi": 0
889
+ },
890
+ "Quantization": {
891
+ "unclassified": 0,
892
+ "single": 0,
893
+ "multi": 0
894
+ },
895
+ "Unclassified": {
896
+ "unclassified": 0,
897
+ "single": 0,
898
+ "multi": 0
899
+ }
900
+ },
901
+ "errors": 0,
902
+ "success": 443,
903
+ "skipped": 251,
904
+ "time_spent": "0:02:16, 0:02:16, ",
905
+ "failures": {},
906
+ "job_link": {
907
+ "multi": "https://github.com/huggingface/transformers/actions/runs/16712955100/job/47301215909",
908
+ "single": "https://github.com/huggingface/transformers/actions/runs/16712955100/job/47301215891"
909
+ }
910
+ },
911
+ "models_qwen2_5_omni": {
912
+ "failed": {
913
+ "PyTorch": {
914
+ "unclassified": 0,
915
+ "single": 0,
916
+ "multi": 1
917
+ },
918
+ "TensorFlow": {
919
+ "unclassified": 0,
920
+ "single": 0,
921
+ "multi": 0
922
+ },
923
+ "Flax": {
924
+ "unclassified": 0,
925
+ "single": 0,
926
+ "multi": 0
927
+ },
928
+ "Tokenizers": {
929
+ "unclassified": 0,
930
+ "single": 0,
931
+ "multi": 0
932
+ },
933
+ "Pipelines": {
934
+ "unclassified": 0,
935
+ "single": 0,
936
+ "multi": 0
937
+ },
938
+ "Trainer": {
939
+ "unclassified": 0,
940
+ "single": 0,
941
+ "multi": 0
942
+ },
943
+ "ONNX": {
944
+ "unclassified": 0,
945
+ "single": 0,
946
+ "multi": 0
947
+ },
948
+ "Auto": {
949
+ "unclassified": 0,
950
+ "single": 0,
951
+ "multi": 0
952
+ },
953
+ "Quantization": {
954
+ "unclassified": 0,
955
+ "single": 0,
956
+ "multi": 0
957
+ },
958
+ "Unclassified": {
959
+ "unclassified": 0,
960
+ "single": 0,
961
+ "multi": 0
962
+ }
963
+ },
964
+ "errors": 0,
965
+ "success": 278,
966
+ "skipped": 159,
967
+ "time_spent": "0:02:55, 0:03:00, ",
968
+ "failures": {
969
+ "multi": [
970
+ {
971
+ "line": "tests/models/qwen2_5_omni/test_modeling_qwen2_5_omni.py::Qwen2_5OmniThinkerForConditionalGenerationModelTest::test_model_parallelism",
972
+ "trace": "(line 675) AssertionError: Items in the second set but not the first:"
973
+ }
974
+ ]
975
+ },
976
+ "job_link": {
977
+ "multi": "https://github.com/huggingface/transformers/actions/runs/16712955100/job/47301215907",
978
+ "single": "https://github.com/huggingface/transformers/actions/runs/16712955100/job/47301215896"
979
+ }
980
+ },
981
+ "models_qwen2_5_vl": {
982
+ "failed": {
983
+ "PyTorch": {
984
+ "unclassified": 0,
985
+ "single": 1,
986
+ "multi": 1
987
+ },
988
+ "TensorFlow": {
989
+ "unclassified": 0,
990
+ "single": 0,
991
+ "multi": 0
992
+ },
993
+ "Flax": {
994
+ "unclassified": 0,
995
+ "single": 0,
996
+ "multi": 0
997
+ },
998
+ "Tokenizers": {
999
+ "unclassified": 0,
1000
+ "single": 0,
1001
+ "multi": 0
1002
+ },
1003
+ "Pipelines": {
1004
+ "unclassified": 0,
1005
+ "single": 0,
1006
+ "multi": 0
1007
+ },
1008
+ "Trainer": {
1009
+ "unclassified": 0,
1010
+ "single": 0,
1011
+ "multi": 0
1012
+ },
1013
+ "ONNX": {
1014
+ "unclassified": 0,
1015
+ "single": 0,
1016
+ "multi": 0
1017
+ },
1018
+ "Auto": {
1019
+ "unclassified": 0,
1020
+ "single": 0,
1021
+ "multi": 0
1022
+ },
1023
+ "Quantization": {
1024
+ "unclassified": 0,
1025
+ "single": 0,
1026
+ "multi": 0
1027
+ },
1028
+ "Unclassified": {
1029
+ "unclassified": 0,
1030
+ "single": 0,
1031
+ "multi": 0
1032
+ }
1033
+ },
1034
+ "errors": 0,
1035
+ "success": 309,
1036
+ "skipped": 141,
1037
+ "time_spent": "0:03:13, 0:03:14, ",
1038
+ "failures": {
1039
+ "multi": [
1040
+ {
1041
+ "line": "tests/models/qwen2_5_vl/test_modeling_qwen2_5_vl.py::Qwen2_5_VLIntegrationTest::test_small_model_integration_test_batch_different_resolutions",
1042
+ "trace": "(line 675) AssertionError: Lists differ: ['sys[314 chars]ion\\n addCriterion\\n\\n addCriterion\\n\\n addCri[75 chars]n\\n'] != ['sys[314 chars]ion\\nThe dog in the picture appears to be a La[81 chars] is']"
1043
+ }
1044
+ ],
1045
+ "single": [
1046
+ {
1047
+ "line": "tests/models/qwen2_5_vl/test_modeling_qwen2_5_vl.py::Qwen2_5_VLIntegrationTest::test_small_model_integration_test_batch_different_resolutions",
1048
+ "trace": "(line 675) AssertionError: Lists differ: ['sys[314 chars]ion\\n addCriterion\\n\\n addCriterion\\n\\n addCri[75 chars]n\\n'] != ['sys[314 chars]ion\\nThe dog in the picture appears to be a La[81 chars] is']"
1049
+ }
1050
+ ]
1051
+ },
1052
+ "job_link": {
1053
+ "multi": "https://github.com/huggingface/transformers/actions/runs/16712955100/job/47301215945",
1054
+ "single": "https://github.com/huggingface/transformers/actions/runs/16712955100/job/47301215911"
1055
+ }
1056
+ },
1057
+ "models_smolvlm": {
1058
+ "failed": {
1059
+ "PyTorch": {
1060
+ "unclassified": 0,
1061
+ "single": 0,
1062
+ "multi": 0
1063
+ },
1064
+ "TensorFlow": {
1065
+ "unclassified": 0,
1066
+ "single": 0,
1067
+ "multi": 0
1068
+ },
1069
+ "Flax": {
1070
+ "unclassified": 0,
1071
+ "single": 0,
1072
+ "multi": 0
1073
+ },
1074
+ "Tokenizers": {
1075
+ "unclassified": 0,
1076
+ "single": 0,
1077
+ "multi": 0
1078
+ },
1079
+ "Pipelines": {
1080
+ "unclassified": 0,
1081
+ "single": 0,
1082
+ "multi": 0
1083
+ },
1084
+ "Trainer": {
1085
+ "unclassified": 0,
1086
+ "single": 0,
1087
+ "multi": 0
1088
+ },
1089
+ "ONNX": {
1090
+ "unclassified": 0,
1091
+ "single": 0,
1092
+ "multi": 0
1093
+ },
1094
+ "Auto": {
1095
+ "unclassified": 0,
1096
+ "single": 0,
1097
+ "multi": 0
1098
+ },
1099
+ "Quantization": {
1100
+ "unclassified": 0,
1101
+ "single": 0,
1102
+ "multi": 0
1103
+ },
1104
+ "Unclassified": {
1105
+ "unclassified": 0,
1106
+ "single": 0,
1107
+ "multi": 0
1108
+ }
1109
+ },
1110
+ "errors": 0,
1111
+ "success": 497,
1112
+ "skipped": 269,
1113
+ "time_spent": "0:01:33, 0:01:36, ",
1114
+ "failures": {},
1115
+ "job_link": {
1116
+ "single": "https://github.com/huggingface/transformers/actions/runs/16712955100/job/47301216282",
1117
+ "multi": "https://github.com/huggingface/transformers/actions/runs/16712955100/job/47301216321"
1118
+ }
1119
+ },
1120
+ "models_t5": {
1121
+ "failed": {
1122
+ "PyTorch": {
1123
+ "unclassified": 0,
1124
+ "single": 1,
1125
+ "multi": 2
1126
+ },
1127
+ "TensorFlow": {
1128
+ "unclassified": 0,
1129
+ "single": 0,
1130
+ "multi": 0
1131
+ },
1132
+ "Flax": {
1133
+ "unclassified": 0,
1134
+ "single": 0,
1135
+ "multi": 0
1136
+ },
1137
+ "Tokenizers": {
1138
+ "unclassified": 0,
1139
+ "single": 0,
1140
+ "multi": 0
1141
+ },
1142
+ "Pipelines": {
1143
+ "unclassified": 0,
1144
+ "single": 0,
1145
+ "multi": 0
1146
+ },
1147
+ "Trainer": {
1148
+ "unclassified": 0,
1149
+ "single": 0,
1150
+ "multi": 0
1151
+ },
1152
+ "ONNX": {
1153
+ "unclassified": 0,
1154
+ "single": 0,
1155
+ "multi": 0
1156
+ },
1157
+ "Auto": {
1158
+ "unclassified": 0,
1159
+ "single": 0,
1160
+ "multi": 0
1161
+ },
1162
+ "Quantization": {
1163
+ "unclassified": 0,
1164
+ "single": 0,
1165
+ "multi": 0
1166
+ },
1167
+ "Unclassified": {
1168
+ "unclassified": 0,
1169
+ "single": 0,
1170
+ "multi": 0
1171
+ }
1172
+ },
1173
+ "errors": 0,
1174
+ "success": 592,
1175
+ "skipped": 535,
1176
+ "time_spent": "0:03:13, 0:02:52, ",
1177
+ "failures": {
1178
+ "multi": [
1179
+ {
1180
+ "line": "tests/models/t5/test_modeling_t5.py::T5ModelTest::test_multi_gpu_data_parallel_forward",
1181
+ "trace": "(line 131) TypeError: EncoderDecoderCache.__init__() missing 1 required positional argument: 'cross_attention_cache'"
1182
+ },
1183
+ {
1184
+ "line": "tests/models/t5/test_modeling_t5.py::T5ModelIntegrationTests::test_export_t5_summarization",
1185
+ "trace": "(line 687) AttributeError: 'dict' object has no attribute 'batch_size'"
1186
+ }
1187
+ ],
1188
+ "single": [
1189
+ {
1190
+ "line": "tests/models/t5/test_modeling_t5.py::T5ModelIntegrationTests::test_export_t5_summarization",
1191
+ "trace": "(line 687) AttributeError: 'dict' object has no attribute 'batch_size'"
1192
+ }
1193
+ ]
1194
+ },
1195
+ "job_link": {
1196
+ "multi": "https://github.com/huggingface/transformers/actions/runs/16712955100/job/47301216565",
1197
+ "single": "https://github.com/huggingface/transformers/actions/runs/16712955100/job/47301216464"
1198
+ }
1199
+ },
1200
+ "models_vit": {
1201
+ "failed": {
1202
+ "PyTorch": {
1203
+ "unclassified": 0,
1204
+ "single": 0,
1205
+ "multi": 0
1206
+ },
1207
+ "TensorFlow": {
1208
+ "unclassified": 0,
1209
+ "single": 0,
1210
+ "multi": 0
1211
+ },
1212
+ "Flax": {
1213
+ "unclassified": 0,
1214
+ "single": 0,
1215
+ "multi": 0
1216
+ },
1217
+ "Tokenizers": {
1218
+ "unclassified": 0,
1219
+ "single": 0,
1220
+ "multi": 0
1221
+ },
1222
+ "Pipelines": {
1223
+ "unclassified": 0,
1224
+ "single": 0,
1225
+ "multi": 0
1226
+ },
1227
+ "Trainer": {
1228
+ "unclassified": 0,
1229
+ "single": 0,
1230
+ "multi": 0
1231
+ },
1232
+ "ONNX": {
1233
+ "unclassified": 0,
1234
+ "single": 0,
1235
+ "multi": 0
1236
+ },
1237
+ "Auto": {
1238
+ "unclassified": 0,
1239
+ "single": 0,
1240
+ "multi": 0
1241
+ },
1242
+ "Quantization": {
1243
+ "unclassified": 0,
1244
+ "single": 0,
1245
+ "multi": 0
1246
+ },
1247
+ "Unclassified": {
1248
+ "unclassified": 0,
1249
+ "single": 0,
1250
+ "multi": 0
1251
+ }
1252
+ },
1253
+ "errors": 0,
1254
+ "success": 217,
1255
+ "skipped": 199,
1256
+ "time_spent": "2.03, 1.28, ",
1257
+ "failures": {},
1258
+ "job_link": {
1259
+ "multi": "https://github.com/huggingface/transformers/actions/runs/16712955100/job/47301216869",
1260
+ "single": "https://github.com/huggingface/transformers/actions/runs/16712955100/job/47301216833"
1261
+ }
1262
+ },
1263
+ "models_wav2vec2": {
1264
+ "failed": {
1265
+ "PyTorch": {
1266
+ "unclassified": 0,
1267
+ "single": 4,
1268
+ "multi": 4
1269
+ },
1270
+ "TensorFlow": {
1271
+ "unclassified": 0,
1272
+ "single": 0,
1273
+ "multi": 0
1274
+ },
1275
+ "Flax": {
1276
+ "unclassified": 0,
1277
+ "single": 0,
1278
+ "multi": 0
1279
+ },
1280
+ "Tokenizers": {
1281
+ "unclassified": 0,
1282
+ "single": 0,
1283
+ "multi": 0
1284
+ },
1285
+ "Pipelines": {
1286
+ "unclassified": 0,
1287
+ "single": 0,
1288
+ "multi": 0
1289
+ },
1290
+ "Trainer": {
1291
+ "unclassified": 0,
1292
+ "single": 0,
1293
+ "multi": 0
1294
+ },
1295
+ "ONNX": {
1296
+ "unclassified": 0,
1297
+ "single": 0,
1298
+ "multi": 0
1299
+ },
1300
+ "Auto": {
1301
+ "unclassified": 0,
1302
+ "single": 0,
1303
+ "multi": 0
1304
+ },
1305
+ "Quantization": {
1306
+ "unclassified": 0,
1307
+ "single": 0,
1308
+ "multi": 0
1309
+ },
1310
+ "Unclassified": {
1311
+ "unclassified": 0,
1312
+ "single": 0,
1313
+ "multi": 0
1314
+ }
1315
+ },
1316
+ "errors": 0,
1317
+ "success": 672,
1318
+ "skipped": 438,
1319
+ "time_spent": "0:03:37, 0:03:36, ",
1320
+ "failures": {
1321
+ "multi": [
1322
+ {
1323
+ "line": "tests/models/wav2vec2/test_modeling_wav2vec2.py::Wav2Vec2ModelIntegrationTest::test_inference_mms_1b_all",
1324
+ "trace": "(line 989) RuntimeError: Dataset scripts are no longer supported, but found common_voice_11_0.py"
1325
+ },
1326
+ {
1327
+ "line": "tests/models/wav2vec2/test_modeling_wav2vec2.py::Wav2Vec2ModelIntegrationTest::test_wav2vec2_with_lm",
1328
+ "trace": "(line 989) RuntimeError: Dataset scripts are no longer supported, but found common_voice_11_0.py"
1329
+ },
1330
+ {
1331
+ "line": "tests/models/wav2vec2/test_modeling_wav2vec2.py::Wav2Vec2ModelIntegrationTest::test_wav2vec2_with_lm_invalid_pool",
1332
+ "trace": "(line 675) AssertionError: Traceback (most recent call last):"
1333
+ },
1334
+ {
1335
+ "line": "tests/models/wav2vec2/test_modeling_wav2vec2.py::Wav2Vec2ModelIntegrationTest::test_wav2vec2_with_lm_pool",
1336
+ "trace": "(line 989) RuntimeError: Dataset scripts are no longer supported, but found common_voice_11_0.py"
1337
+ }
1338
+ ],
1339
+ "single": [
1340
+ {
1341
+ "line": "tests/models/wav2vec2/test_modeling_wav2vec2.py::Wav2Vec2ModelIntegrationTest::test_inference_mms_1b_all",
1342
+ "trace": "(line 989) RuntimeError: Dataset scripts are no longer supported, but found common_voice_11_0.py"
1343
+ },
1344
+ {
1345
+ "line": "tests/models/wav2vec2/test_modeling_wav2vec2.py::Wav2Vec2ModelIntegrationTest::test_wav2vec2_with_lm",
1346
+ "trace": "(line 989) RuntimeError: Dataset scripts are no longer supported, but found common_voice_11_0.py"
1347
+ },
1348
+ {
1349
+ "line": "tests/models/wav2vec2/test_modeling_wav2vec2.py::Wav2Vec2ModelIntegrationTest::test_wav2vec2_with_lm_invalid_pool",
1350
+ "trace": "(line 675) AssertionError: Traceback (most recent call last):"
1351
+ },
1352
+ {
1353
+ "line": "tests/models/wav2vec2/test_modeling_wav2vec2.py::Wav2Vec2ModelIntegrationTest::test_wav2vec2_with_lm_pool",
1354
+ "trace": "(line 989) RuntimeError: Dataset scripts are no longer supported, but found common_voice_11_0.py"
1355
+ }
1356
+ ]
1357
+ },
1358
+ "job_link": {
1359
+ "multi": "https://github.com/huggingface/transformers/actions/runs/16712955100/job/47301216956",
1360
+ "single": "https://github.com/huggingface/transformers/actions/runs/16712955100/job/47301216929"
1361
+ }
1362
+ },
1363
+ "models_whisper": {
1364
+ "failed": {
1365
+ "PyTorch": {
1366
+ "unclassified": 0,
1367
+ "single": 5,
1368
+ "multi": 6
1369
+ },
1370
+ "TensorFlow": {
1371
+ "unclassified": 0,
1372
+ "single": 0,
1373
+ "multi": 0
1374
+ },
1375
+ "Flax": {
1376
+ "unclassified": 0,
1377
+ "single": 0,
1378
+ "multi": 0
1379
+ },
1380
+ "Tokenizers": {
1381
+ "unclassified": 0,
1382
+ "single": 0,
1383
+ "multi": 0
1384
+ },
1385
+ "Pipelines": {
1386
+ "unclassified": 0,
1387
+ "single": 0,
1388
+ "multi": 0
1389
+ },
1390
+ "Trainer": {
1391
+ "unclassified": 0,
1392
+ "single": 0,
1393
+ "multi": 0
1394
+ },
1395
+ "ONNX": {
1396
+ "unclassified": 0,
1397
+ "single": 0,
1398
+ "multi": 0
1399
+ },
1400
+ "Auto": {
1401
+ "unclassified": 0,
1402
+ "single": 0,
1403
+ "multi": 0
1404
+ },
1405
+ "Quantization": {
1406
+ "unclassified": 0,
1407
+ "single": 0,
1408
+ "multi": 0
1409
+ },
1410
+ "Unclassified": {
1411
+ "unclassified": 0,
1412
+ "single": 0,
1413
+ "multi": 0
1414
+ }
1415
+ },
1416
+ "errors": 0,
1417
+ "success": 1014,
1418
+ "skipped": 475,
1419
+ "time_spent": "0:11:09, 0:11:47, ",
1420
+ "failures": {
1421
+ "single": [
1422
+ {
1423
+ "line": "tests/models/whisper/test_modeling_whisper.py::WhisperModelIntegrationTests::test_large_batched_generation_multilingual",
1424
+ "trace": "(line 756) RuntimeError: The frame has 0 channels, expected 1. If you are hitting this, it may be because you are using a buggy FFmpeg version. FFmpeg4 is known to fail here in some valid scenarios. Try to upgrade FFmpeg?"
1425
+ },
1426
+ {
1427
+ "line": "tests/models/whisper/test_modeling_whisper.py::WhisperModelIntegrationTests::test_small_longform_timestamps_generation",
1428
+ "trace": "(line 756) RuntimeError: The frame has 0 channels, expected 1. If you are hitting this, it may be because you are using a buggy FFmpeg version. FFmpeg4 is known to fail here in some valid scenarios. Try to upgrade FFmpeg?"
1429
+ },
1430
+ {
1431
+ "line": "tests/models/whisper/test_modeling_whisper.py::WhisperModelIntegrationTests::test_tiny_longform_timestamps_generation",
1432
+ "trace": "(line 756) RuntimeError: The frame has 0 channels, expected 1. If you are hitting this, it may be because you are using a buggy FFmpeg version. FFmpeg4 is known to fail here in some valid scenarios. Try to upgrade FFmpeg?"
1433
+ },
1434
+ {
1435
+ "line": "tests/models/whisper/test_modeling_whisper.py::WhisperModelIntegrationTests::test_whisper_longform_multi_batch_hard",
1436
+ "trace": "(line 675) AssertionError: Lists differ: [\" Fo[272 chars]ting of classics, Sicilian, nade door variatio[8147 chars]le!'] != [\" Fo[272 chars]ting a classic Sicilian, nade door variation o[8150 chars]le!']"
1437
+ },
1438
+ {
1439
+ "line": "tests/models/whisper/test_modeling_whisper.py::WhisperModelIntegrationTests::test_whisper_shortform_single_batch_prev_cond",
1440
+ "trace": "(line 675) AssertionError: Lists differ: [\" Fo[268 chars]ating, so soft, it would make JD power and her[196 chars]ke.\"] != [\" Fo[268 chars]ating so soft, it would make JD power and her [195 chars]ke.\"]"
1441
+ }
1442
+ ],
1443
+ "multi": [
1444
+ {
1445
+ "line": "tests/models/whisper/test_modeling_whisper.py::WhisperModelTest::test_multi_gpu_data_parallel_forward",
1446
+ "trace": "(line 131) TypeError: EncoderDecoderCache.__init__() missing 1 required positional argument: 'cross_attention_cache'"
1447
+ },
1448
+ {
1449
+ "line": "tests/models/whisper/test_modeling_whisper.py::WhisperModelIntegrationTests::test_large_batched_generation_multilingual",
1450
+ "trace": "(line 756) RuntimeError: The frame has 0 channels, expected 1. If you are hitting this, it may be because you are using a buggy FFmpeg version. FFmpeg4 is known to fail here in some valid scenarios. Try to upgrade FFmpeg?"
1451
+ },
1452
+ {
1453
+ "line": "tests/models/whisper/test_modeling_whisper.py::WhisperModelIntegrationTests::test_small_longform_timestamps_generation",
1454
+ "trace": "(line 756) RuntimeError: The frame has 0 channels, expected 1. If you are hitting this, it may be because you are using a buggy FFmpeg version. FFmpeg4 is known to fail here in some valid scenarios. Try to upgrade FFmpeg?"
1455
+ },
1456
+ {
1457
+ "line": "tests/models/whisper/test_modeling_whisper.py::WhisperModelIntegrationTests::test_tiny_longform_timestamps_generation",
1458
+ "trace": "(line 756) RuntimeError: The frame has 0 channels, expected 1. If you are hitting this, it may be because you are using a buggy FFmpeg version. FFmpeg4 is known to fail here in some valid scenarios. Try to upgrade FFmpeg?"
1459
+ },
1460
+ {
1461
+ "line": "tests/models/whisper/test_modeling_whisper.py::WhisperModelIntegrationTests::test_whisper_longform_multi_batch_hard",
1462
+ "trace": "(line 675) AssertionError: Lists differ: [\" Fo[272 chars]ting of classics, Sicilian, nade door variatio[8147 chars]le!'] != [\" Fo[272 chars]ting a classic Sicilian, nade door variation o[8150 chars]le!']"
1463
+ },
1464
+ {
1465
+ "line": "tests/models/whisper/test_modeling_whisper.py::WhisperModelIntegrationTests::test_whisper_shortform_single_batch_prev_cond",
1466
+ "trace": "(line 675) AssertionError: Lists differ: [\" Fo[268 chars]ating, so soft, it would make JD power and her[196 chars]ke.\"] != [\" Fo[268 chars]ating so soft, it would make JD power and her [195 chars]ke.\"]"
1467
+ }
1468
+ ]
1469
+ },
1470
+ "job_link": {
1471
+ "single": "https://github.com/huggingface/transformers/actions/runs/16712955100/job/47301216943",
1472
+ "multi": "https://github.com/huggingface/transformers/actions/runs/16712955100/job/47301217012"
1473
+ }
1474
+ }
1475
+ }
styles.css ADDED
@@ -0,0 +1,669 @@
1
+ /* Global dark theme with configurable bottom margin */
2
+ :root {
3
+ --main-content-bottom-margin: 10px; /* Configurable bottom margin for main content */
4
+ }
5
+
6
+ .gradio-container {
7
+ background-color: #000000 !important;
8
+ color: white !important;
9
+ height: 100vh !important;
10
+ max-height: 100vh !important;
11
+ overflow: hidden !important;
12
+ }
13
+
14
+ /* Remove borders from all components */
15
+ .gr-box, .gr-form, .gr-panel {
16
+ border: none !important;
17
+ background-color: #000000 !important;
18
+ }
19
+
20
+ /* Simplified sidebar styling */
21
+ .sidebar {
22
+ background: linear-gradient(145deg, #111111, #1a1a1a) !important;
23
+ border: none !important;
24
+ padding: 15px !important;
25
+ margin: 0 !important;
26
+ height: 100vh !important;
27
+ position: fixed !important;
28
+ left: 0 !important;
29
+ top: 0 !important;
30
+ width: 300px !important;
31
+ box-sizing: border-box !important;
32
+ overflow-y: auto !important;
33
+ overflow-x: hidden !important;
34
+ }
35
+
36
+ /* Target the actual Gradio column containing sidebar */
37
+ div[data-testid="column"]:has(.sidebar) {
38
+ height: 100vh !important;
39
+ overflow-y: auto !important;
40
+ overflow-x: hidden !important;
41
+ }
42
+
43
+ /* Individual sidebar elements */
44
+ .sidebar-title {
45
+ margin-bottom: 10px !important;
46
+ }
47
+
48
+ .sidebar-description {
49
+ margin-bottom: 15px !important;
50
+ }
51
+
52
+ /* Summary button styling - distinct from model buttons */
53
+ .summary-button {
54
+ background: linear-gradient(135deg, #4a4a4a, #3e3e3e) !important;
55
+ color: white !important;
56
+ border: 2px solid #555555 !important;
57
+ margin: 0 0 15px 0 !important;
58
+ border-radius: 5px !important;
59
+ padding: 12px 10px !important;
60
+ transition: all 0.4s cubic-bezier(0.4, 0, 0.2, 1) !important;
61
+ position: relative !important;
62
+ overflow: hidden !important;
63
+ box-shadow:
64
+ 0 4px 15px rgba(0, 0, 0, 0.3),
65
+ inset 0 1px 0 rgba(255, 255, 255, 0.2) !important;
66
+ font-weight: 600 !important;
67
+ font-size: 14px !important;
68
+ text-transform: uppercase !important;
69
+ letter-spacing: 0.3px !important;
70
+ font-family: monospace !important;
71
+ height: 60px !important;
72
+ display: flex !important;
73
+ flex-direction: column !important;
74
+ justify-content: center !important;
75
+ align-items: center !important;
76
+ line-height: 1.2 !important;
77
+ width: 100% !important;
78
+ max-width: 100% !important;
79
+ min-width: 0 !important;
80
+ box-sizing: border-box !important;
81
+ }
82
+
83
+ .model-header {
84
+ margin-bottom: 10px !important;
85
+ background: linear-gradient(135deg, #2a2a2a, #1e1e1e) !important;
86
+ color: white !important;
87
+ border: 1px solid #333 !important;
88
+ border-radius: 5px !important;
89
+ font-weight: 600 !important;
90
+ font-size: 14px !important;
91
+ font-family: monospace !important;
92
+ text-align: left !important;
93
+ width: 100% !important;
94
+ }
95
+
96
+ .model-header:hover {
97
+ background: linear-gradient(135deg, #3a3a3a, #2e2e2e) !important;
98
+ }
99
+
100
+ .sidebar-links {
101
+ margin-top: 15px !important;
102
+ }
103
+
104
+ /* Hide scrollbar for model container */
105
+ .model-container::-webkit-scrollbar {
106
+ display: none !important;
107
+ }
108
+
109
+ /* Ensure all sidebar content fits within width */
110
+ .sidebar * {
111
+ max-width: 100% !important;
112
+ word-wrap: break-word !important;
113
+ overflow-wrap: break-word !important;
114
+ }
115
+
116
+ /* Specific control for markdown content */
117
+ .sidebar .markdown,
118
+ .sidebar h1,
119
+ .sidebar h2,
120
+ .sidebar h3,
121
+ .sidebar p {
122
+ max-width: 100% !important;
123
+ word-wrap: break-word !important;
124
+ overflow: hidden !important;
125
+ }
126
+
127
+ /* Sidebar scrollbar styling */
128
+ .sidebar::-webkit-scrollbar {
129
+ width: 8px !important;
130
+ background: #111111 !important;
131
+ }
132
+
133
+ .sidebar::-webkit-scrollbar-track {
134
+ background: #111111 !important;
135
+ }
136
+
137
+ .sidebar::-webkit-scrollbar-thumb {
138
+ background-color: #333333 !important;
139
+ border-radius: 4px !important;
140
+ }
141
+
142
+ .sidebar::-webkit-scrollbar-thumb:hover {
143
+ background-color: #555555 !important;
144
+ }
145
+
146
+ /* Force button containers to single column in model list */
147
+ .model-list .gr-button,
148
+ .model-list button {
149
+ display: block !important;
150
+ width: 100% !important;
151
+ max-width: 100% !important;
152
+ margin: 4px 0 !important;
153
+ flex: none !important;
154
+ }
155
+
156
+ /* Simple unfolding menu with invisible scrollbar */
157
+ .model-list-visible {
158
+ max-height: 200px !important;
159
+ overflow-y: auto !important;
160
+ transition: max-height 0.3s ease !important;
161
+ scrollbar-width: none !important;
162
+ -ms-overflow-style: none !important;
163
+ }
164
+
165
+ .model-list-visible::-webkit-scrollbar {
166
+ width: 0px !important;
167
+ background: transparent !important;
168
+ }
169
+
170
+ .model-list-hidden {
171
+ max-height: 0 !important;
172
+ overflow: hidden !important;
173
+ transition: max-height 0.3s ease !important;
174
+ }
175
+
176
+
177
+ /* Model button styling */
178
+ .model-button {
179
+ background: linear-gradient(135deg, #2a2a2a, #1e1e1e) !important;
180
+ color: white !important;
181
+ margin: 3px 0 !important;
182
+ padding: 8px 12px !important;
183
+ font-weight: 600 !important;
184
+ font-size: 14px !important;
185
+ text-transform: uppercase !important;
186
+ letter-spacing: 0.3px !important;
187
+ font-family: monospace !important;
188
+ width: 100% !important;
189
+ max-width: 100% !important;
190
+ white-space: nowrap !important;
191
+ text-overflow: ellipsis !important;
192
+ display: block !important;
193
+ cursor: pointer !important;
194
+ transition: all 0.3s ease !important;
195
+ border: 1px solid #333 !important;
196
+ border-radius: 5px !important;
197
+ }
198
+
199
+ .model-button:hover {
200
+ background: linear-gradient(135deg, #3a3a3a, #2e2e2e) !important;
201
+ border-color: #74b9ff !important;
202
+ color: #74b9ff !important;
203
+ transform: translateY(-1px) !important;
204
+ box-shadow: 0 2px 8px rgba(116, 185, 255, 0.2) !important;
205
+ }
206
+
207
+ /* Model buttons with failures - fuzzy red border with inner glow */
208
+ .model-button-failed {
209
+ border: 1px solid #712626 !important;
210
+ box-shadow: inset 0 0 8px rgba(204, 68, 68, 0.4) !important;
211
+ }
212
+
213
+ .model-button-failed:hover {
214
+ border-color: #712626 !important;
215
+ box-shadow: 0 0 12px rgba(255, 107, 107, 0.5) !important;
216
+ }
217
+
218
+ /*
219
+ .model-button:active {
220
+ background: linear-gradient(135deg, #2a2a2a, #1e1e1e) !important;
221
+ color: #5a9bd4 !important;
222
+ }
223
+ */
224
+
225
+ /* Model stats badge */
226
+ .model-stats {
227
+ display: flex !important;
228
+ justify-content: space-between !important;
229
+ align-items: center !important;
230
+ margin-top: 8px !important;
231
+ font-size: 12px !important;
232
+ opacity: 0.8 !important;
233
+ }
234
+
235
+ .stats-badge {
236
+ background: rgba(116, 185, 255, 0.2) !important;
237
+ padding: 4px 8px !important;
238
+ border-radius: 10px !important;
239
+ font-weight: 500 !important;
240
+ font-size: 11px !important;
241
+ color: #74b9ff !important;
242
+ }
243
+
244
+ .success-indicator {
245
+ width: 8px !important;
246
+ height: 8px !important;
247
+ border-radius: 50% !important;
248
+ display: inline-block !important;
249
+ margin-right: 6px !important;
250
+ }
251
+
252
+ .success-high { background-color: #4CAF50 !important; }
253
+ .success-medium { background-color: #FF9800 !important; }
254
+ .success-low { background-color: #F44336 !important; }
255
+
256
+ /* Refresh button styling */
257
+ .refresh-button {
258
+ background: linear-gradient(135deg, #2d5aa0, #1e3f73) !important;
259
+ color: white !important;
260
+ border: 1px solid #3a6bc7 !important;
261
+ margin: 0 0 10px 0 !important;
262
+ border-radius: 5px !important;
263
+ padding: 6px 8px !important;
264
+ transition: all 0.3s ease !important;
265
+ font-weight: 500 !important;
266
+ font-size: 11px !important;
267
+ text-transform: lowercase !important;
268
+ letter-spacing: 0.1px !important;
269
+ font-family: monospace !important;
270
+ width: 100% !important;
271
+ max-width: 100% !important;
272
+ min-width: 0 !important;
273
+ box-sizing: border-box !important;
274
+ white-space: nowrap !important;
275
+ overflow: hidden !important;
276
+ text-overflow: ellipsis !important;
277
+ }
278
+
279
+ .refresh-button:hover {
280
+ background: linear-gradient(135deg, #3a6bc7, #2d5aa0) !important;
281
+ border-color: #4a7bd9 !important;
282
+ }
283
+
284
+ /* Summary button styling - distinct from model buttons */
285
+ .summary-button {
286
+ background: linear-gradient(135deg, #4a4a4a, #3e3e3e) !important;
287
+ color: white !important;
288
+ border: 2px solid #555555 !important;
289
+ margin: 0 0 15px 0 !important;
290
+ border-radius: 5px !important;
291
+ padding: 12px 10px !important;
292
+ transition: all 0.4s cubic-bezier(0.4, 0, 0.2, 1) !important;
293
+ position: relative !important;
294
+ overflow: hidden !important;
295
+ box-shadow:
296
+ 0 4px 15px rgba(0, 0, 0, 0.3),
297
+ inset 0 1px 0 rgba(255, 255, 255, 0.2) !important;
298
+ font-weight: 600 !important;
299
+ font-size: 14px !important;
300
+ text-transform: uppercase !important;
301
+ letter-spacing: 0.3px !important;
302
+ font-family: monospace !important;
303
+ height: 60px !important;
304
+ display: flex !important;
305
+ flex-direction: column !important;
306
+ justify-content: center !important;
307
+ align-items: center !important;
308
+ line-height: 1.2 !important;
309
+ width: 100% !important;
310
+ max-width: 100% !important;
311
+ min-width: 0 !important;
312
+ box-sizing: border-box !important;
313
+ }
314
+
315
+ /* Simplified Gradio layout control */
316
+ .sidebar .gr-column,
317
+ .sidebar .gradio-column {
318
+ width: 100% !important;
319
+ }
320
+
321
+ /* Simplified Gradio targeting */
322
+ div[data-testid="column"]:has(.sidebar) {
323
+ width: 300px !important;
324
+ min-width: 300px !important;
325
+ }
326
+
327
+ /* Button container with fixed height - DISABLED */
328
+ /*
329
+ .button-container {
330
+ height: 50vh !important;
331
+ max-height: 50vh !important;
332
+ overflow-y: auto !important;
333
+ overflow-x: hidden !important;
334
+ scrollbar-width: thin !important;
335
+ scrollbar-color: #333333 #111111 !important;
336
+ width: 100% !important;
337
+ max-width: 100% !important;
338
+ box-sizing: border-box !important;
339
+ padding: 5px 0 !important;
340
+ margin-top: 10px !important;
341
+ }
342
+ */
343
+
344
+ /* Removed simple scroll CSS - was hiding buttons */
345
+
346
+ .summary-button:hover {
347
+ background: linear-gradient(135deg, #5a5a5a, #4e4e4e) !important;
348
+ color: #74b9ff !important;
349
+ border-color: #666666 !important;
350
+ }
351
+
352
+ .summary-button:active {
353
+ background: linear-gradient(135deg, #4a4a4a, #3e3e3e) !important;
354
+ color: #5a9bd4 !important;
355
+ }
356
+
357
+ /* Regular button styling for non-model buttons */
358
+ .gr-button:not(.model-button):not(.summary-button) {
359
+ background-color: #222222 !important;
360
+ color: white !important;
361
+ border: 1px solid #444444 !important;
362
+ margin: 5px 0 !important;
363
+ border-radius: 8px !important;
364
+ transition: all 0.3s ease !important;
365
+ }
366
+
367
+ .gr-button:not(.model-button):not(.summary-button):hover {
368
+ background-color: #333333 !important;
369
+ border-color: #666666 !important;
370
+ }
371
+
372
+ /* Plot container with smooth transitions and controlled scrolling */
373
+ .plot-container {
374
+ background-color: #000000 !important;
375
+ border: none !important;
376
+ transition: opacity 0.6s ease-in-out !important;
377
+ flex: 1 1 auto !important;
378
+ min-height: 0 !important;
379
+ overflow-y: auto !important;
380
+ scrollbar-width: thin !important;
381
+ scrollbar-color: #333333 #000000 !important;
382
+ }
383
+
384
+ /* Custom scrollbar for plot container */
385
+ .plot-container::-webkit-scrollbar {
386
+ width: 8px !important;
387
+ background: #000000 !important;
388
+ }
389
+
390
+ .plot-container::-webkit-scrollbar-track {
391
+ background: #000000 !important;
392
+ }
393
+
394
+ .plot-container::-webkit-scrollbar-thumb {
395
+ background-color: #333333 !important;
396
+ border-radius: 4px !important;
397
+ }
398
+
399
+ .plot-container::-webkit-scrollbar-thumb:hover {
400
+ background-color: #555555 !important;
401
+ }
402
+
403
+ /* Gradio plot component styling */
404
+ .gr-plot {
405
+ background-color: #000000 !important;
406
+ transition: opacity 0.6s ease-in-out !important;
407
+ }
408
+
409
+ .gr-plot .gradio-plot {
410
+ background-color: #000000 !important;
411
+ transition: opacity 0.6s ease-in-out !important;
412
+ }
413
+
414
+ .gr-plot img {
415
+ transition: opacity 0.6s ease-in-out !important;
416
+ }
417
+
418
+ /* Target the plot wrapper */
419
+ div[data-testid="plot"] {
420
+ background-color: #000000 !important;
421
+ }
422
+
423
+ /* Target all possible plot containers */
424
+ .plot-container img,
425
+ .gr-plot img,
426
+ .gradio-plot img {
427
+ background-color: #000000 !important;
428
+ }
429
+
430
+ /* Ensure plot area background */
431
+ .gr-plot > div,
432
+ .plot-container > div {
433
+ background-color: #000000 !important;
434
+ }
435
+
436
+ /* Prevent white flash during plot updates */
437
+ .plot-container::before {
438
+ content: "";
439
+ position: absolute;
440
+ top: 0;
441
+ left: 0;
442
+ right: 0;
443
+ bottom: 0;
444
+ background-color: #000000;
445
+ z-index: -1;
446
+ }
447
+
448
+ /* Force all plot elements to have black background */
449
+ .plot-container *,
450
+ .gr-plot *,
451
+ div[data-testid="plot"] * {
452
+ background-color: #000000 !important;
453
+ }
454
+
455
+ /* Override any white backgrounds in matplotlib */
456
+ .plot-container canvas,
457
+ .gr-plot canvas {
458
+ background-color: #000000 !important;
459
+ }
460
+
461
+ /* Text elements */
462
+ h1, h2, h3, p, .markdown {
463
+ color: white !important;
464
+ }
465
+
466
+ /* Sidebar header enhancement */
467
+ .sidebar h1 {
468
+ background: linear-gradient(45deg, #74b9ff, #a29bfe) !important;
469
+ -webkit-background-clip: text !important;
470
+ -webkit-text-fill-color: transparent !important;
471
+ background-clip: text !important;
472
+ text-align: center !important;
473
+ margin-bottom: 15px !important;
474
+ font-size: 28px !important;
475
+ font-weight: 700 !important;
476
+ font-family: monospace !important;
477
+ }
478
+
479
+ /* Sidebar description text */
480
+ .sidebar p {
481
+ text-align: center !important;
482
+ margin-bottom: 20px !important;
483
+ line-height: 1.5 !important;
484
+ font-size: 14px !important;
485
+ font-family: monospace !important;
486
+ }
487
+
488
+ /* CI Links styling */
489
+ .sidebar a {
490
+ color: #74b9ff !important;
491
+ text-decoration: none !important;
492
+ font-weight: 500 !important;
493
+ font-family: monospace !important;
494
+ transition: color 0.3s ease !important;
495
+ }
496
+
497
+ .sidebar a:hover {
498
+ color: #a29bfe !important;
499
+ text-decoration: underline !important;
500
+ }
501
+
502
+ .sidebar strong {
503
+ color: #74b9ff !important;
504
+ font-weight: 600 !important;
505
+ font-family: monospace !important;
506
+ }
507
+
508
+ .sidebar em {
509
+ color: #a29bfe !important;
510
+ font-style: normal !important;
511
+ opacity: 0.9 !important;
512
+ font-family: monospace !important;
513
+ }
514
+
515
+ /* Remove all borders globally */
516
+ * {
517
+ border-color: transparent !important;
518
+ }
519
+
520
+ /* Main content area */
521
+ .main-content {
522
+ background-color: #000000 !important;
523
+ padding: 0px 20px var(--main-content-bottom-margin, 10px) 20px !important;
524
+ margin-left: 300px !important;
525
+ height: 100vh !important;
526
+ overflow-y: auto !important;
527
+ box-sizing: border-box !important;
528
+ display: flex !important;
529
+ flex-direction: column !important;
530
+ }
531
+
532
+ /* Custom scrollbar for main content */
533
+ .main-content {
534
+ scrollbar-width: thin !important;
535
+ scrollbar-color: #333333 #000000 !important;
536
+ }
537
+
538
+ .main-content::-webkit-scrollbar {
539
+ width: 8px !important;
540
+ background: #000000 !important;
541
+ }
542
+
543
+ .main-content::-webkit-scrollbar-track {
544
+ background: #000000 !important;
545
+ }
546
+
547
+ .main-content::-webkit-scrollbar-thumb {
548
+ background-color: #333333 !important;
549
+ border-radius: 4px !important;
550
+ }
551
+
552
+ .main-content::-webkit-scrollbar-thumb:hover {
553
+ background-color: #555555 !important;
554
+ }
555
+
556
+ /* Failed tests display - seamless appearance with constrained height */
557
+ .failed-tests textarea {
558
+ background-color: #000000 !important;
559
+ color: #FFFFFF !important;
560
+ font-family: monospace !important;
561
+ font-size: 14px !important;
562
+ border: none !important;
563
+ padding: 10px !important;
564
+ outline: none !important;
565
+ line-height: 1.4 !important;
566
+ height: 180px !important;
567
+ max-height: 180px !important;
568
+ min-height: 180px !important;
569
+ overflow-y: auto !important;
570
+ resize: none !important;
571
+ scrollbar-width: thin !important;
572
+ scrollbar-color: #333333 #000000 !important;
573
+ scroll-behavior: auto !important;
574
+ transition: opacity 0.5s ease-in-out !important;
575
+ scroll-padding-top: 0 !important;
576
+ }
577
+
578
+ /* WebKit scrollbar styling for failed tests */
579
+ .failed-tests textarea::-webkit-scrollbar {
580
+ width: 8px !important;
581
+ }
582
+
583
+ .failed-tests textarea::-webkit-scrollbar-track {
584
+ background: #000000 !important;
585
+ }
586
+
587
+ .failed-tests textarea::-webkit-scrollbar-thumb {
588
+ background-color: #333333 !important;
589
+ border-radius: 4px !important;
590
+ }
591
+
592
+ .failed-tests textarea::-webkit-scrollbar-thumb:hover {
593
+ background-color: #555555 !important;
594
+ }
595
+
596
+ /* Prevent white flash in text boxes during updates */
597
+ .failed-tests::before {
598
+ content: "";
599
+ position: absolute;
600
+ top: 0;
601
+ left: 0;
602
+ right: 0;
603
+ bottom: 0;
604
+ background-color: #000000;
605
+ z-index: -1;
606
+ }
607
+
608
+ .failed-tests {
609
+ background-color: #000000 !important;
610
+ height: 200px !important;
611
+ max-height: 200px !important;
612
+ min-height: 200px !important;
613
+ position: relative;
614
+ transition: opacity 0.5s ease-in-out !important;
615
+ flex-shrink: 0 !important;
616
+ }
617
+
618
+ .failed-tests .gr-textbox {
619
+ background-color: #000000 !important;
620
+ border: none !important;
621
+ height: 180px !important;
622
+ max-height: 180px !important;
623
+ min-height: 180px !important;
624
+ transition: opacity 0.5s ease-in-out !important;
625
+ }
626
+
627
+ /* Force all textbox elements to have black background */
628
+ .failed-tests *,
629
+ .failed-tests .gr-textbox *,
630
+ .failed-tests textarea * {
631
+ background-color: #000000 !important;
632
+ }
633
+
634
+ /* Summary display styling */
635
+ .summary-display textarea {
636
+ background-color: #000000 !important;
637
+ color: #FFFFFF !important;
638
+ font-family: monospace !important;
639
+ font-size: 24px !important;
640
+ border: none !important;
641
+ padding: 20px !important;
642
+ outline: none !important;
643
+ line-height: 2 !important;
644
+ text-align: right !important;
645
+ resize: none !important;
646
+ }
647
+
648
+ .summary-display {
649
+ background-color: #000000 !important;
650
+ }
651
+
652
+ /* Detail view layout */
653
+ .detail-view {
654
+ display: flex !important;
655
+ flex-direction: column !important;
656
+ height: 100% !important;
657
+ min-height: 0 !important;
658
+ }
659
+
660
+ /* Scroll-reset hook: class toggled from JavaScript to reset scroll position */
661
+ .scroll-reset {
662
+ animation: resetScroll 0.1s ease;
663
+ }
664
+
665
+ @keyframes resetScroll {
666
+ 0% { scroll-behavior: auto; }
667
+ 100% { scroll-behavior: auto; }
668
+ }
669
+
summary_page.py ADDED
@@ -0,0 +1,208 @@
1
+ import pandas as pd
2
+ from data import extract_model_data, get_overall_stats  # get_overall_stats is called below; assumed to be provided by data.py
3
+ import matplotlib.pyplot as plt
4
+
5
+ # Layout parameters
6
+ COLUMNS = 3
7
+
8
+ # Derived constants
9
+ COLUMN_WIDTH = 100 / COLUMNS # Each column takes an equal 100/COLUMNS % share of the width
10
+ BAR_WIDTH = COLUMN_WIDTH * 0.8 # 80% of column width for bars
11
+ BAR_MARGIN = COLUMN_WIDTH * 0.1 # 10% margin on each side
12
+
13
+ # Figure dimensions
14
+ FIGURE_WIDTH = 22 # Wider to accommodate columns and legend
15
+ MAX_HEIGHT = 14 # Maximum height in inches
16
+ MIN_HEIGHT_PER_ROW = 2.8
17
+ FIGURE_PADDING = 1
18
+
19
+ # Bar styling
20
+ BAR_HEIGHT_RATIO = 0.22 # Bar height as ratio of vertical spacing
21
+ VERTICAL_SPACING_RATIO = 0.2 # Base vertical position ratio
22
+ AMD_BAR_OFFSET = 0.25 # AMD bar offset ratio
23
+ NVIDIA_BAR_OFFSET = 0.54 # NVIDIA bar offset ratio
24
+
25
+ # Colors
26
+ COLORS = {
27
+ 'passed': '#4CAF50',
28
+ 'failed': '#E53E3E',
29
+ 'skipped': '#FFD54F',
30
+ 'error': '#8B0000',
31
+ 'empty': "#5B5B5B"
32
+ }
33
+
34
+ # Font styling
35
+ MODEL_NAME_FONT_SIZE = 16
36
+ LABEL_FONT_SIZE = 14
37
+ LABEL_OFFSET = 1 # Distance of label from bar
38
+ FAILURE_RATE_FONT_SIZE = 28
39
+
40
+
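+ # Layout note (inferred from the drawing code below): the x-axis spans a
+ # 0-100 "percent" range split into COLUMNS equal columns; each visible model
+ # gets a cell with its name centered above two stacked horizontal bars (AMD,
+ # then NVIDIA), each subdivided by passed/failed/skipped/error counts.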
41
+ def draw_text_and_bar(
42
+ label: str,
43
+ stats: dict[str, int],
44
+ y_bar: float,
45
+ column_left_position: float,
46
+ bar_height: float,
47
+ ax,
48
+ ) -> None:
49
+ """Draw a horizontal bar chart for given stats and its label on the left."""
50
+ # Text
51
+ label_x = column_left_position - LABEL_OFFSET
52
+ failures_present = any(stats[category] > 0 for category in ['failed', 'error'])
53
+ if failures_present:
54
+ props = dict(boxstyle='round', facecolor=COLORS['failed'], alpha=0.35)
55
+ else:
56
+ props = dict(alpha=0)
57
+ ax.text(
58
+ label_x, y_bar, label, ha='right', va='center', color='#CCCCCC', fontsize=LABEL_FONT_SIZE,
59
+ fontfamily='monospace', fontweight='normal', bbox=props
60
+ )
61
+ # Bar
62
+ total = sum(stats.values())
63
+ if total > 0:
64
+ left = column_left_position
65
+ for category in ['passed', 'failed', 'skipped', 'error']:
66
+ if stats[category] > 0:
67
+ width = stats[category] / total * BAR_WIDTH
68
+ ax.barh(y_bar, width, left=left, height=bar_height, color=COLORS[category], alpha=0.9)
69
+ left += width
70
+ else:
71
+ ax.barh(y_bar, BAR_WIDTH, left=column_left_position, height=bar_height, color=COLORS['empty'], alpha=0.9)
72
+
73
+ def create_summary_page(df: pd.DataFrame, available_models: list[str]):
74
+ """Create a summary page with model names and both AMD/NVIDIA test stats bars."""
76
+
77
+ # Calculate overall failure rates
78
+ amd_counts, nvidia_counts = get_overall_stats(df, available_models)
79
+
80
+ amd_failure_rate = (amd_counts[1] / sum(amd_counts)) if sum(amd_counts) > 0 else 0.0
81
+ amd_failure_rate *= 100
82
+ nvidia_failure_rate = (nvidia_counts[1] / sum(nvidia_counts)) if sum(nvidia_counts) > 0 else 0.0
83
+ nvidia_failure_rate *= 100
84
+
85
+ # Calculate dimensions for N-column layout
86
+ model_count = len(available_models)
87
+ rows = (model_count + COLUMNS - 1) // COLUMNS # Ceiling division
88
+
89
+ # Figure dimensions - wider for columns, height based on rows
90
+ height_per_row = min(MIN_HEIGHT_PER_ROW, MAX_HEIGHT / max(rows, 1))
91
+ figure_height = min(MAX_HEIGHT, rows * height_per_row + FIGURE_PADDING)
92
+
93
+ fig = plt.figure(figsize=(FIGURE_WIDTH, figure_height), facecolor='#000000')
94
+ ax = fig.add_subplot(111)
95
+ ax.set_facecolor('#000000')
96
+
97
+ # Add overall failure rates at the top as a proper title
98
+ failure_text = f"Overall Failure Rates: AMD {amd_failure_rate:.1f}% | NVIDIA {nvidia_failure_rate:.1f}%"
99
+ ax.text(50, -1.25, failure_text, ha='center', va='top',
100
+ color='#FFFFFF', fontsize=FAILURE_RATE_FONT_SIZE,
101
+ fontfamily='monospace', fontweight='bold')
102
+
103
+ visible_model_count = 0
104
+ max_y = 0
105
+
106
+ for i, model_name in enumerate(available_models):
107
+ if model_name not in df.index:
108
+ continue
109
+
110
+ row = df.loc[model_name]
111
+
112
+ # Extract and process model data
113
+ amd_stats, nvidia_stats = extract_model_data(row)[:2]
114
+
115
+ # Calculate position in the COLUMNS-wide grid
116
+ col = visible_model_count % COLUMNS
117
+ row = visible_model_count // COLUMNS
118
+
119
+ # Calculate horizontal position for this column
120
+ col_left = col * COLUMN_WIDTH + BAR_MARGIN
121
+ col_center = col * COLUMN_WIDTH + COLUMN_WIDTH / 2
122
+
123
+ # Calculate vertical position for this row - start from top
124
+ vertical_spacing = height_per_row
125
+ y_base = (VERTICAL_SPACING_RATIO + row) * vertical_spacing
126
+ y_model_name = y_base # Model name above AMD bar
127
+ y_amd_bar = y_base + vertical_spacing * AMD_BAR_OFFSET # AMD bar
128
+ y_nvidia_bar = y_base + vertical_spacing * NVIDIA_BAR_OFFSET # NVIDIA bar
129
+ max_y = max(max_y, y_nvidia_bar + vertical_spacing * 0.3)
130
+
131
+ # Model name centered above the bars in this column
132
+ ax.text(col_center, y_model_name, model_name.lower(),
133
+ ha='center', va='center', color='#FFFFFF',
134
+ fontsize=MODEL_NAME_FONT_SIZE, fontfamily='monospace', fontweight='bold')
135
+
136
+ # AMD label and bar in this column
137
+ bar_height = min(0.4, vertical_spacing * BAR_HEIGHT_RATIO)
138
+ # Draw AMD bar
139
+ draw_text_and_bar("amd", amd_stats, y_amd_bar, col_left, bar_height, ax)
140
+ # Draw NVIDIA bar
141
+ draw_text_and_bar("nvidia", nvidia_stats, y_nvidia_bar, col_left, bar_height, ax)
142
+
143
+ # Increment counter for next visible model
144
+ visible_model_count += 1
145
+
146
+
147
+ # Add AMD and NVIDIA test totals in the bottom left
148
+ # Calculate line spacing to align middle with legend
149
+ line_height = 0.4 # Height between lines
150
+ legend_y = max_y + 1
151
+
152
+ # Position the two lines so their middle aligns with legend_y
153
+ amd_y = legend_y - line_height / 2
154
+ nvidia_y = legend_y + line_height / 2
155
+
156
+ amd_totals_text = f"AMD Tests - Passed: {amd_counts[0]}, Failed: {amd_counts[1]}, Skipped: {amd_counts[2]}"
157
+ nvidia_totals_text = f"NVIDIA Tests - Passed: {nvidia_counts[0]}, Failed: {nvidia_counts[1]}, Skipped: {nvidia_counts[2]}"
158
+
159
+ ax.text(0, amd_y, amd_totals_text,
160
+ ha='left', va='bottom', color='#CCCCCC',
161
+ fontsize=14, fontfamily='monospace')
162
+
163
+ ax.text(0, nvidia_y, nvidia_totals_text,
164
+ ha='left', va='bottom', color='#CCCCCC',
165
+ fontsize=14, fontfamily='monospace')
166
+
167
+ # Add legend horizontally in bottom right corner
168
+ patch_height = 0.3
169
+ patch_width = 3
170
+
171
+ legend_start_x = 68.7
172
+ legend_y = max_y + 1
173
+ legend_spacing = 10
174
+ legend_font_size = 15
175
+
176
+ # Legend entries
177
+ legend_items = [
178
+ ('passed', 'Passed'),
179
+ ('failed', 'Failed'),
180
+ ('skipped', 'Skipped'),
181
+ ]
182
+
183
+ for i, (status, label) in enumerate(legend_items):
184
+ x_pos = legend_start_x + i * legend_spacing
185
+ # Small colored square
186
+ ax.add_patch(plt.Rectangle((x_pos - 0.6, legend_y), patch_width, -patch_height,
187
+ facecolor=COLORS[status], alpha=0.9))
188
+ # Status label
189
+ ax.text(x_pos + patch_width, legend_y, label,
190
+ ha='left', va='bottom', color='#CCCCCC',
191
+ fontsize=legend_font_size, fontfamily='monospace')
192
+
193
+ # Style the axes to be completely invisible and span full width
194
+ ax.set_xlim(-5, 105) # Slightly wider to accommodate labels
195
+ ax.set_ylim(0, max_y + 1) # Add some padding at the top for title
196
+ ax.set_xlabel('')
197
+ ax.set_ylabel('')
198
+ ax.spines['bottom'].set_visible(False)
199
+ ax.spines['left'].set_visible(False)
200
+ ax.spines['top'].set_visible(False)
201
+ ax.spines['right'].set_visible(False)
202
+ ax.set_xticks([])
203
+ ax.set_yticks([])
204
+ ax.yaxis.set_inverted(True)
205
+
206
+ # Remove all margins to make figure stick to top
207
+ plt.tight_layout()
208
+ return fig
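+
+ # Expected usage (a sketch, not necessarily the exact call in app.py):
+ #     fig = create_summary_page(df, available_models)
+ #     # app.py would then hand `fig` to a Gradio Plot component for display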
theme_config.py DELETED
@@ -1,167 +0,0 @@
1
- import gradio as gr
2
- import matplotlib.pyplot as plt
3
- import matplotlib as mpl
4
-
5
- class DashboardTheme:
6
- # Color palette - Grey, minimalistic, sleek
7
- PRIMARY_GREY = "#2C3E50" # Dark slate grey
8
- SECONDARY_GREY = "#34495E" # Medium slate grey
9
- LIGHT_GREY = "#7F8C8D" # Light grey
10
- BACKGROUND_GREY = "#ECF0F1" # Very light grey background
11
- WHITE = "#FFFFFF"
12
- ACCENT_BLUE = "#3498DB" # Clean blue accent
13
- SUCCESS_GREEN = "#27AE60" # Clean green
14
- WARNING_ORANGE = "#F39C12" # Clean orange
15
- ERROR_RED = "#E74C3C" # Clean red
16
-
17
- # Chart colors - Professional grey scale with subtle accents
18
- CHART_COLORS = [
19
- "#34495E", # Dark grey
20
- "#5D6D7E", # Medium grey
21
- "#85929E", # Light grey
22
- "#3498DB", # Accent blue
23
- "#27AE60", # Accent green
24
- "#F39C12", # Accent orange
25
- "#9B59B6", # Accent purple
26
- "#E74C3C" # Accent red
27
- ]
28
-
29
- # Typography
30
- FONT_FAMILY = "'Inter', 'SF Pro Display', 'Helvetica Neue', Arial, sans-serif"
31
-
32
- @staticmethod
33
- def get_gradio_theme():
34
- """Create a custom Gradio theme with grey minimalistic design"""
35
- theme = gr.themes.Soft(
36
- primary_hue=gr.themes.colors.slate,
37
- secondary_hue=gr.themes.colors.gray,
38
- neutral_hue=gr.themes.colors.slate,
39
- font=gr.themes.GoogleFont("Inter")
40
- ).set(
41
- # Overall styling
42
- body_background_fill=DashboardTheme.BACKGROUND_GREY,
43
- background_fill_primary=DashboardTheme.WHITE,
44
- background_fill_secondary=DashboardTheme.BACKGROUND_GREY,
45
-
46
- # Border colors
47
- border_color_primary=DashboardTheme.LIGHT_GREY,
48
- border_color_accent=DashboardTheme.ACCENT_BLUE,
49
-
50
- # Button styling
51
- button_primary_background_fill=DashboardTheme.PRIMARY_GREY,
52
- button_primary_background_fill_hover=DashboardTheme.SECONDARY_GREY,
53
- button_primary_text_color=DashboardTheme.WHITE,
54
- button_secondary_background_fill=DashboardTheme.WHITE,
55
- button_secondary_border_color=DashboardTheme.LIGHT_GREY,
56
- button_secondary_text_color=DashboardTheme.PRIMARY_GREY,
57
-
58
- # Input styling
59
- input_background_fill=DashboardTheme.WHITE,
60
- input_border_color=DashboardTheme.LIGHT_GREY,
61
-
62
- # Text colors
63
- body_text_color=DashboardTheme.PRIMARY_GREY,
64
- body_text_color_subdued=DashboardTheme.LIGHT_GREY,
65
-
66
- # Panel styling
67
- panel_background_fill=DashboardTheme.WHITE,
68
- panel_border_color=DashboardTheme.LIGHT_GREY,
69
-
70
- # Checkbox/Radio styling
71
- checkbox_background_color=DashboardTheme.WHITE,
72
- checkbox_background_color_selected=DashboardTheme.ACCENT_BLUE,
73
- checkbox_border_color=DashboardTheme.LIGHT_GREY,
74
-
75
- # Tab styling
76
- block_title_text_color=DashboardTheme.PRIMARY_GREY,
77
- block_label_text_color=DashboardTheme.SECONDARY_GREY,
78
- )
79
-
80
- return theme
81
-
82
- @staticmethod
83
- def setup_matplotlib_style():
84
- """Configure matplotlib with the dashboard theme"""
85
- # Set the style parameters
86
- plt.style.use('default') # Reset to default first
87
-
88
- # Configure matplotlib parameters
89
- mpl.rcParams.update({
90
- 'figure.facecolor': DashboardTheme.WHITE,
91
- 'axes.facecolor': DashboardTheme.WHITE,
92
- 'axes.edgecolor': DashboardTheme.LIGHT_GREY,
93
- 'axes.labelcolor': DashboardTheme.PRIMARY_GREY,
94
- 'axes.axisbelow': True,
95
- 'axes.grid': True,
96
- 'axes.spines.left': True,
97
- 'axes.spines.bottom': True,
98
- 'axes.spines.top': False,
99
- 'axes.spines.right': False,
100
- 'axes.linewidth': 1.0,
101
-
102
- # Grid styling
103
- 'grid.color': DashboardTheme.LIGHT_GREY,
104
- 'grid.linestyle': '-',
105
- 'grid.linewidth': 0.5,
106
- 'grid.alpha': 0.3,
107
-
108
- # Text styling
109
- 'text.color': DashboardTheme.PRIMARY_GREY,
110
- 'font.family': 'sans-serif',
111
- 'font.size': 10,
112
- 'axes.titlesize': 12,
113
- 'axes.labelsize': 10,
114
- 'xtick.labelsize': 9,
115
- 'ytick.labelsize': 9,
116
- 'legend.fontsize': 9,
117
- 'axes.titleweight': 'bold',
118
-
119
- # Tick styling
120
- 'xtick.color': DashboardTheme.LIGHT_GREY,
121
- 'ytick.color': DashboardTheme.LIGHT_GREY,
122
- 'xtick.direction': 'out',
123
- 'ytick.direction': 'out',
124
-
125
- # Legend styling
126
- 'legend.frameon': True,
127
- 'legend.facecolor': DashboardTheme.WHITE,
128
- 'legend.edgecolor': DashboardTheme.LIGHT_GREY,
129
- 'legend.shadow': False,
130
- 'legend.framealpha': 0.9,
131
- })
132
-
133
- @staticmethod
134
- def get_chart_colors():
135
- """Get the standardized chart color palette"""
136
- return DashboardTheme.CHART_COLORS.copy()
137
-
138
- @staticmethod
139
- def get_status_colors():
140
- """Get colors for different status indicators"""
141
- return {
142
- 'success': DashboardTheme.SUCCESS_GREEN,
143
- 'warning': DashboardTheme.WARNING_ORANGE,
144
- 'error': DashboardTheme.ERROR_RED,
145
- 'info': DashboardTheme.ACCENT_BLUE,
146
- 'neutral': DashboardTheme.LIGHT_GREY
147
- }
148
-
149
- # Common styling utilities
150
- def get_section_title_style():
151
- """Get consistent styling for section titles"""
152
- return {
153
- 'color': DashboardTheme.PRIMARY_GREY,
154
- 'font-weight': 'bold',
155
- 'font-size': '1.1em',
156
- 'margin-bottom': '0.5em'
157
- }
158
-
159
- def get_metric_card_style():
160
- """Get consistent styling for metric cards"""
161
- return {
162
- 'background': DashboardTheme.WHITE,
163
- 'border': f'1px solid {DashboardTheme.LIGHT_GREY}',
164
- 'border-radius': '8px',
165
- 'padding': '1em',
166
- 'box-shadow': '0 2px 4px rgba(0,0,0,0.05)'
167
- }
utils.py ADDED
@@ -0,0 +1,51 @@
1
+ import logging
2
+ import sys
3
+ from datetime import datetime
4
+
5
+
6
+ class TimestampFormatter(logging.Formatter):
7
+ """Custom formatter that matches the existing timestamp format used in print statements."""
8
+
9
+ def format(self, record):
10
+ # Create timestamp in the same format as existing print statements
11
+ timestamp = datetime.now().strftime('%Y-%m-%d %H:%M:%S')
12
+
13
+ # Format the message with timestamp prefix
14
+ if record.levelno == logging.WARNING:
15
+ return f"WARNING: {record.getMessage()}"
16
+ elif record.levelno == logging.ERROR:
17
+ return f"Error {record.getMessage()}"
18
+ else:
19
+ return f"[{timestamp}] {record.getMessage()}"
20
+
21
+
22
+ def setup_logger(name="tcid", level=logging.INFO):
23
+ """Set up logger with custom timestamp formatting to match existing print format."""
24
+ logger = logging.getLogger(name)
25
+
26
+ # Avoid adding multiple handlers if logger already exists
27
+ if logger.handlers:
28
+ return logger
29
+
30
+ logger.setLevel(level)
31
+
32
+ # Create console handler
33
+ handler = logging.StreamHandler(sys.stdout)
34
+ handler.setLevel(level)
35
+
36
+ # Set custom formatter
37
+ formatter = TimestampFormatter()
38
+ handler.setFormatter(formatter)
39
+
40
+ logger.addHandler(handler)
41
+
42
+ return logger
43
+
44
+
45
+ # Create default logger instance
46
+ logger = setup_logger()
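+ # Other modules can reuse this shared instance, e.g. `from utils import logger`.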
47
+
48
+
49
+
50
+ def generate_underlined_line(text: str) -> str:
51
+ return text + "\n" + "─" * len(text)
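+
+ # Example output (illustrative):
+ #     generate_underlined_line("Failed tests")
+ #     -> "Failed tests\n────────────"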