---
title: TraceMind AI
emoji: 🧠
colorFrom: indigo
colorTo: purple
sdk: gradio
sdk_version: 5.49.1
app_file: app.py
short_description: AI agent evaluation with MCP-powered intelligence
license: agpl-3.0
pinned: true
tags:
  - mcp-in-action-track-enterprise
  - agent-evaluation
  - mcp-client
  - leaderboard
  - gradio
---

# 🧠 TraceMind-AI

<p align="center">
  <img src="https://raw.githubusercontent.com/Mandark-droid/TraceMind-AI/assets/TraceVerse_Logo.png" alt="TraceVerse Ecosystem" width="400"/>
  <br/>
  <br/>
  <img src="https://raw.githubusercontent.com/Mandark-droid/TraceMind-AI/assets/Logo.png" alt="TraceMind-AI Logo" width="200"/>
</p>

**Agent Evaluation Platform with MCP-Powered Intelligence**

[![MCP's 1st Birthday Hackathon](https://img.shields.io/badge/MCP%27s%201st%20Birthday-Hackathon-blue)](https://github.com/modelcontextprotocol)
[![Track](https://img.shields.io/badge/Track-MCP%20in%20Action%20(Enterprise)-purple)](https://github.com/modelcontextprotocol/hackathon)
[![Powered by Gradio](https://img.shields.io/badge/Powered%20by-Gradio-orange)](https://gradio.app/)

> **🎯 Track 2 Submission**: MCP in Action (Enterprise)
> **📅 MCP's 1st Birthday Hackathon**: November 14-30, 2025

## Overview

TraceMind-AI is a comprehensive platform for evaluating AI agent performance across different models, providers, and configurations. It provides real-time insights, cost analysis, and detailed trace visualization powered by the Model Context Protocol (MCP).

### πŸ—οΈ **Built on Open Source Foundation**

This platform is part of a complete agent evaluation ecosystem built on two foundational open-source projects:

**🔭 TraceVerde (genai_otel_instrument)** - Automatic OpenTelemetry Instrumentation
- **What**: Zero-code OTEL instrumentation for LLM frameworks (LiteLLM, Transformers, LangChain, etc.)
- **Why**: Captures every LLM call, tool usage, and agent step automatically
- **Links**: [GitHub](https://github.com/Mandark-droid/genai_otel_instrument) | [PyPI](https://pypi.org/project/genai-otel-instrument)

**📊 SMOLTRACE** - Agent Evaluation Engine
- **What**: Lightweight, production-ready evaluation framework with OTEL tracing built-in
- **Why**: Generates structured datasets (leaderboard, results, traces, metrics) displayed in this UI
- **Links**: [GitHub](https://github.com/Mandark-droid/SMOLTRACE) | [PyPI](https://pypi.org/project/smoltrace/)

**The Flow**: `TraceVerde` instruments your agents → `SMOLTRACE` evaluates them → `TraceMind-AI` visualizes results with MCP-powered intelligence

---

## Features

- **📊 Real-time Leaderboard**: Live evaluation data from HuggingFace datasets
- **🤖 Autonomous Agent Chat**: Interactive agent powered by smolagents with MCP tools (Track 2)
- **💬 MCP Integration**: AI-powered analysis using remote MCP servers
- **☁️ Multi-Cloud Evaluation**: Submit jobs to HuggingFace Jobs or Modal (H200, A100, A10 GPUs)
- **💰 Smart Cost Estimation**: Auto-select hardware and predict costs before running evaluations
- **🔍 Trace Visualization**: Detailed OpenTelemetry trace analysis with GPU metrics
- **📈 Performance Metrics**: GPU utilization, CO2 emissions, token usage tracking

## MCP Integration

TraceMind demonstrates enterprise MCP client usage by connecting to [TraceMind-mcp-server](https://huggingface.co/spaces/MCP-1st-Birthday/TraceMind-mcp-server) via the Model Context Protocol.

**MCP Tools Used:**
- `analyze_leaderboard` - AI-generated insights about evaluation trends
- `estimate_cost` - Cost estimation with hardware recommendations
- `debug_trace` - Interactive trace analysis and debugging
- `compare_runs` - Side-by-side run comparison
- `analyze_results` - Test case analysis with optimization recommendations

## Quick Start

### Prerequisites

**For Viewing Leaderboard & Analysis:**
- Python 3.10+
- HuggingFace account (for authentication)

**For Submitting Evaluation Jobs:**
- ⚠️ **HuggingFace Pro account** ($9/month) with credit card
- HuggingFace token with **Read + Write + Run Jobs** permissions
- API keys for model providers (OpenAI, Anthropic, etc.)

> **Note**: Job submission requires a paid HuggingFace Pro account to access compute infrastructure. Viewing existing results is free.

### Installation

1. Clone the repository:
```bash
git clone https://github.com/Mandark-droid/TraceMind-AI.git
cd TraceMind-AI
```

2. Install dependencies:
```bash
pip install -r requirements.txt
```

3. Configure environment:
```bash
cp .env.example .env
# Edit .env with your configuration
```

4. Run the application:
```bash
python app.py
```

Visit http://localhost:7860

## 🎯 For Hackathon Judges & Visitors

### Using Your Own API Keys (Recommended)

TraceMind-AI integrates with the TraceMind MCP Server to provide AI-powered analysis. To **prevent credit issues during evaluation**, we recommend configuring your own API keys:

#### Step-by-Step Configuration

**Step 1: Configure MCP Server** (Required for MCP tool features)

1. **Open MCP Server**: https://huggingface.co/spaces/MCP-1st-Birthday/TraceMind-mcp-server
2. Go to **⚙️ Settings** tab
3. Enter your **Gemini API Key** and **HuggingFace Token**
4. Click **"Save & Override Keys"**

**Step 2: Configure TraceMind-AI** (Optional, for additional features)

1. **Open TraceMind-AI**: https://huggingface.co/spaces/MCP-1st-Birthday/TraceMind
2. Go to **⚙️ Settings** tab
3. Enter your **Gemini API Key** and **HuggingFace Token**
4. Click **"Save API Keys"**

### Why Configure Both?

- **MCP Server**: Provides AI-powered tools (leaderboard analysis, trace debugging, cost estimation)
- **TraceMind-AI**: Main UI that calls the MCP server for intelligent analysis
- They run in **separate sessions** → need separate configuration
- Configuring both ensures your keys are used for the complete evaluation flow

### Getting Free API Keys

Both APIs have generous free tiers:

**Google Gemini API Key**:
- Visit: https://ai.google.dev/
- Click "Get API Key" β†’ Create project β†’ Generate key
- **Free tier**: 1,500 requests/day (sufficient for evaluation)

**HuggingFace Token** (for viewing):
- Visit: https://huggingface.co/settings/tokens
- Click "New token" β†’ Name it (e.g., "TraceMind Viewer")
- **Permissions**:
  - Select "Read" for viewing datasets (sufficient for browsing leaderboard)
- **Free tier**: No rate limits for public dataset access

### Default Configuration (Without Your Keys)

If you don't configure your own keys:
- The apps fall back to our pre-configured keys from HuggingFace Spaces Secrets
- This is fine for brief testing, but may hit rate limits during high traffic
- For a full evaluation, we recommend configuring your own keys

### Security Notes

✅ **Session-only storage**: Keys stored only in browser memory
✅ **No server persistence**: Keys never saved to disk
✅ **Not exposed via API**: Settings forms use `api_name=False`
✅ **HTTPS encryption**: All API calls over secure connections
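The session-only pattern above can be sketched as follows. This is an illustrative model, not the actual TraceMind-AI implementation: keys live in an in-memory mapping keyed by session ID and are never written to disk, so they vanish when the session ends.

```python
# Illustrative sketch of session-only key storage (hypothetical class,
# not the real TraceMind-AI code). Keys exist only in process memory.
class SessionKeyStore:
    """Holds API keys in memory only, scoped per session."""

    def __init__(self):
        self._keys = {}  # session_id -> {key_name: value}

    def save(self, session_id, key_name, value):
        self._keys.setdefault(session_id, {})[key_name] = value

    def get(self, session_id, key_name, default=None):
        return self._keys.get(session_id, {}).get(key_name, default)

    def clear(self, session_id):
        # Called when the session ends; nothing persists afterwards.
        self._keys.pop(session_id, None)


store = SessionKeyStore()
store.save("session-1", "GEMINI_API_KEY", "example-key")
print(store.get("session-1", "GEMINI_API_KEY"))
```

Because nothing is serialized, restarting the app (or closing the session) discards every key, which is the behavior the security notes describe.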

## 🚀 Submitting Evaluation Jobs

TraceMind-AI allows you to submit evaluation jobs to **two cloud platforms**:
- **HuggingFace Jobs**: Managed compute with H200, A100, A10, T4 GPUs
- **Modal**: Serverless GPU compute with pay-per-second pricing

### ⚠️ Requirements for Job Submission

**For HuggingFace Jobs:**

1. **HuggingFace Pro Account** ($9/month)
   - Sign up at: https://huggingface.co/pricing
   - **Credit card required** to pay for compute usage
   - Free accounts cannot submit jobs

2. **HuggingFace Token with Enhanced Permissions**
   - Visit: https://huggingface.co/settings/tokens
   - Create token with these permissions:
     - ✅ **Read** (view datasets)
     - ✅ **Write** (upload results)
     - ✅ **Run Jobs** (submit evaluation jobs)
   - ⚠️ Read-only tokens will NOT work

**For Modal (Optional Alternative):**

1. **Modal Account** (Free tier available)
   - Sign up at: https://modal.com
   - Generate API token at: https://modal.com/settings/tokens
   - Pay-per-second billing (no monthly subscription)

2. **Configure Modal Credentials in Settings**
   - MODAL_TOKEN_ID (starts with `ak-`)
   - MODAL_TOKEN_SECRET (starts with `as-`)

**Both Platforms Require:**

3. **Model Provider API Keys**
   - OpenAI, Anthropic, Google, etc.
   - Configure in Settings → LLM Provider API Keys
   - Passed securely as job secrets

### Hardware Options & Pricing

TraceMind **auto-selects optimal hardware** based on your model size and provider:

**HuggingFace Jobs:**
- **cpu-basic**: API models (OpenAI, Anthropic) - ~$0.05/hr
- **t4-small**: Small models (4B-8B parameters) - ~$0.60/hr
- **a10g-small**: Medium models (7B-13B) - ~$1.10/hr
- **a100-large**: Large models (70B+) - ~$3.00/hr
- Pricing: https://huggingface.co/pricing#spaces-pricing

**Modal:**
- **CPU**: API models - ~$0.0001/sec
- **A10G**: Small-medium models (7B-13B) - ~$0.0006/sec
- **A100-80GB**: Large models (70B+) - ~$0.0030/sec
- **H200**: Fastest inference - ~$0.0050/sec
- Pricing: https://modal.com/pricing
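As a rough illustration of how the hourly rates above translate into a job estimate (illustrative only: these numbers are approximate, will drift, and actual billing depends on real runtime and current pricing):

```python
# Illustrative cost estimate using the approximate HF Jobs rates listed
# above (USD per hour). Check the pricing pages for current numbers.
HF_JOBS_RATES = {
    "cpu-basic": 0.05,
    "t4-small": 0.60,
    "a10g-small": 1.10,
    "a100-large": 3.00,
}


def estimate_hf_job_cost(hardware: str, hours: float) -> float:
    """Return a rough USD cost for a job of the given duration."""
    return round(HF_JOBS_RATES[hardware] * hours, 4)


# A ~30-minute GPU evaluation on t4-small costs roughly $0.30:
print(estimate_hf_job_cost("t4-small", 0.5))
```

In the app itself, the "💰 Estimate Cost" button performs this kind of calculation for you via the MCP `estimate_cost` tool, factoring in model size and expected duration.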

### How to Submit a Job

1. **Configure API Keys** (Settings tab):
   - Add HF Token (with Run Jobs permission) - **required for both platforms**
   - Add Modal credentials (MODAL_TOKEN_ID + MODAL_TOKEN_SECRET) - **for Modal only**
   - Add LLM provider keys (OpenAI, Anthropic, etc.)

2. **Create Evaluation** (New Evaluation tab):
   - **Select infrastructure**: HuggingFace Jobs or Modal
   - Choose model and agent type
   - Configure hardware (or use **"auto"** for smart selection)
   - Set timeout (default: 1h)
   - Click "πŸ’° Estimate Cost" to preview cost/duration
   - Click "Submit Evaluation"

3. **Monitor Job**:
   - View job ID and status in confirmation screen
   - **HF Jobs**: Track at https://huggingface.co/jobs or use Job Monitoring tab
   - **Modal**: Track at https://modal.com/apps
   - Results automatically appear in leaderboard when complete

### What Happens During a Job

1. Job starts on selected infrastructure (HF Jobs or Modal)
2. Docker container built with required dependencies
3. SMOLTRACE evaluates your model with OpenTelemetry tracing
4. Results uploaded to 4 HuggingFace datasets:
   - Leaderboard entry (summary stats)
   - Results dataset (test case details)
   - Traces dataset (OTEL spans)
   - Metrics dataset (GPU metrics, CO2 emissions)
5. Results appear in TraceMind leaderboard automatically

**Expected Duration:**
- CPU jobs (API models): 2-5 minutes
- GPU jobs (local models): 15-30 minutes (includes model download)
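Once a job finishes, its leaderboard rows can be inspected like any other HuggingFace dataset (e.g. via `datasets.load_dataset`). The sketch below shows the kind of post-processing the leaderboard performs; the field names (`model`, `score`, `cost_usd`) are hypothetical stand-ins, not the actual SMOLTRACE schema.

```python
# Illustrative post-processing of leaderboard rows. Field names here are
# hypothetical placeholders, not the real SMOLTRACE dataset schema.
def top_runs(rows, n=3):
    """Return the n highest-scoring leaderboard rows."""
    return sorted(rows, key=lambda r: r["score"], reverse=True)[:n]


rows = [
    {"model": "model-a", "score": 0.91, "cost_usd": 0.42},
    {"model": "model-b", "score": 0.87, "cost_usd": 0.12},
    {"model": "model-c", "score": 0.95, "cost_usd": 1.10},
]
print([r["model"] for r in top_runs(rows, n=2)])
```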

## Configuration

Create a `.env` file with the following variables:

```env
# HuggingFace Configuration
HF_TOKEN=your_token_here

# Agent Model Configuration (for Chat Screen - Track 2)
# Options: "hfapi" (default), "inference_client", "litellm"
AGENT_MODEL_TYPE=hfapi

# API Keys for different model types
# Required if AGENT_MODEL_TYPE=litellm
GEMINI_API_KEY=your_gemini_api_key_here

# MCP Server URL (note: /sse endpoint for smolagents integration)
MCP_SERVER_URL=https://mcp-1st-birthday-tracemind-mcp-server.hf.space/gradio_api/mcp/sse

# Dataset Configuration
LEADERBOARD_REPO=kshitijthakkar/smoltrace-leaderboard

# Development Mode (optional - disables OAuth for local testing)
DISABLE_OAUTH=true
```

### Agent Model Options

The Agent Chat screen supports three model configurations:

1. **`hfapi` (Default)**: Uses HuggingFace Inference API
   - Model: `Qwen/Qwen2.5-Coder-32B-Instruct`
   - Requires: `HF_TOKEN`
   - Best for: General use, free tier available

2. **`inference_client`**: Uses Nebius provider
   - Model: `deepseek-ai/DeepSeek-V3-0324`
   - Requires: `HF_TOKEN`
   - Best for: Advanced reasoning, faster inference

3. **`litellm`**: Uses Google Gemini
   - Model: `gemini/gemini-2.5-flash`
   - Requires: `GEMINI_API_KEY`
   - Best for: Gemini-specific features
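The three options above amount to a simple switch on `AGENT_MODEL_TYPE`. A minimal sketch (not the actual `app.py` code) mapping each option to the model named in this README and the env var it requires:

```python
import os

# Illustrative sketch of the AGENT_MODEL_TYPE switch described above
# (hypothetical helper, not the real app.py implementation).
MODEL_OPTIONS = {
    "hfapi": ("Qwen/Qwen2.5-Coder-32B-Instruct", "HF_TOKEN"),
    "inference_client": ("deepseek-ai/DeepSeek-V3-0324", "HF_TOKEN"),
    "litellm": ("gemini/gemini-2.5-flash", "GEMINI_API_KEY"),
}


def resolve_agent_model():
    """Pick the model ID and required credential for the configured type."""
    model_type = os.getenv("AGENT_MODEL_TYPE", "hfapi")  # README default
    if model_type not in MODEL_OPTIONS:
        raise ValueError(f"Unknown AGENT_MODEL_TYPE: {model_type}")
    return MODEL_OPTIONS[model_type]


model_id, required_key = resolve_agent_model()
print(model_id, required_key)
```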

## Data Sources

TraceMind-AI loads evaluation data from HuggingFace datasets:

- **Leaderboard**: Aggregate statistics for all evaluation runs
- **Results**: Individual test case results
- **Traces**: OpenTelemetry trace data
- **Metrics**: GPU metrics and performance data

## Architecture

### Project Structure

```
TraceMind-AI/
├── app.py                 # Main Gradio application
├── data_loader.py         # HuggingFace dataset integration
├── mcp_client/            # MCP client implementation
│   ├── client.py          # Async MCP client
│   └── sync_wrapper.py    # Synchronous wrapper
├── utils/                 # Utilities
│   ├── auth.py            # HuggingFace OAuth
│   └── navigation.py      # Screen navigation
├── screens/               # UI screens
├── components/            # Reusable components
└── styles/                # Custom CSS
```

### MCP Client Integration

TraceMind-AI uses the MCP Python SDK to connect to remote MCP servers:

```python
from mcp_client.sync_wrapper import get_sync_mcp_client

# Initialize MCP client
mcp_client = get_sync_mcp_client()
mcp_client.initialize()

# Call MCP tools
insights = mcp_client.analyze_leaderboard(
    metric_focus="overall",
    time_range="last_week",
    top_n=5
)
```

## Usage

### Viewing the Leaderboard

1. Log in with your HuggingFace account
2. Navigate to the "Leaderboard" tab
3. Click "Load Leaderboard" to fetch the latest data
4. View AI-powered insights generated by the MCP server

### Estimating Costs

1. Navigate to the "Cost Estimator" tab
2. Enter the model name (e.g., `openai/gpt-4`)
3. Select agent type and number of tests
4. Click "Estimate Cost" for AI-powered analysis

### Viewing Trace Details

1. Select an evaluation run from the leaderboard
2. Click on a specific test case
3. View detailed OpenTelemetry trace visualization
4. Ask questions about the trace using MCP-powered analysis

### Using the Agent Chat (Track 2)

1. Navigate to the "πŸ€– Agent Chat" tab
2. The autonomous agent will initialize with MCP tools from TraceMind MCP Server
3. Ask questions about agent evaluations:
   - "What are the top 3 performing models and their costs?"
   - "Estimate the cost of running 500 tests with DeepSeek-V3 on H200"
   - "Load the leaderboard and show me the last 5 run IDs"
4. Watch the agent plan, execute tools, and provide detailed answers
5. Enable "Show Agent Reasoning" to see step-by-step tool execution
6. Use Quick Action buttons for common queries

**Example Questions:**
- Analysis: "Analyze the current leaderboard and show me the top performing models with their costs"
- Cost Comparison: "Compare the costs of the top 3 models - which one offers the best value?"
- Recommendations: "Based on the leaderboard data, which model would you recommend for a production system?"

## Technology Stack

- **UI Framework**: Gradio 5.49.1
- **Agent Framework**: smolagents 1.22.0+ (Track 2)
- **MCP Protocol**: MCP integration via Gradio & smolagents MCPClient
- **Data**: HuggingFace Datasets API
- **Authentication**: HuggingFace OAuth
- **AI Models**:
  - Default: Qwen/Qwen2.5-Coder-32B-Instruct (HF Inference API)
  - Optional: DeepSeek-V3 (Nebius), Gemini 2.5 Flash
  - MCP Server: Google Gemini 2.5 Pro

## Development

### Running Locally

```bash
# Install dependencies
pip install -r requirements.txt

# Set development mode (optional - disables OAuth)
export DISABLE_OAUTH=true

# Run the app
python app.py
```

### Running on HuggingFace Spaces

This application is configured for deployment on HuggingFace Spaces using the Gradio SDK. The `app.py` file serves as the entry point.

## Documentation

For detailed implementation documentation, see:
- [Data Loader API](data_loader.py) - Dataset loading and caching
- [MCP Client API](mcp_client/client.py) - MCP protocol integration
- [Authentication](utils/auth.py) - HuggingFace OAuth integration

## Demo Video

[Link to demo video showing the application in action]

## Social Media

[Link to social media post about this project]

## License

AGPL-3.0 License

This project is licensed under the GNU Affero General Public License v3.0. See the LICENSE file for details.

## Contributing

Contributions are welcome! Please open an issue or submit a pull request.

## Built By

**Track**: MCP in Action (Enterprise)
**Author**: Kshitij Thakkar
**Powered by**: MCP Servers (TraceMind-mcp-server) + Gradio
**Built with**: Gradio 5.49.1 (MCP client integration)

---

## Acknowledgments

- **MCP Team** - For the Model Context Protocol specification
- **Gradio Team** - For Gradio's built-in MCP integration
- **HuggingFace** - For Spaces hosting and dataset infrastructure
- **Google** - For Gemini API access
- **[Eliseu Silva](https://huggingface.co/elismasilva)** - For the [gradio_htmlplus](https://huggingface.co/spaces/elismasilva/gradio_htmlplus) custom component that powers our interactive leaderboard table. Eliseu's timely help and collaboration during the hackathon were invaluable!

## Links

- **Live Demo**: https://huggingface.co/spaces/MCP-1st-Birthday/TraceMind
- **MCP Server**: https://huggingface.co/spaces/MCP-1st-Birthday/TraceMind-mcp-server
- **GitHub**: https://github.com/Mandark-droid/TraceMind-AI
- **MCP Specification**: https://modelcontextprotocol.io

---

**MCP's 1st Birthday Hackathon Submission**
*Track: MCP in Action - Enterprise*