avaliev committed on
Commit 0491e54 · verified · 1 Parent(s): 648d828

Initial commit

Initial commit of FocusFlow AI productivity helper

.env.example ADDED
@@ -0,0 +1,60 @@
+ # FocusFlow Environment Configuration
+ # Copy this file to .env and fill in your values
+
+ # Launch Mode
+ # - demo: Uses text area for workspace monitoring (ideal for HuggingFace Spaces)
+ # - local: Monitors actual file system changes
+ LAUNCH_MODE=demo
+
+ # AI Provider
+ # Options: openai, anthropic, gemini, vllm, mock
+ AI_PROVIDER=anthropic
+
+ # Monitoring Settings
+ MONITOR_INTERVAL=30 # Seconds between automatic focus checks
+
+ # MCP Server
+ ENABLE_MCP=true # Enable Model Context Protocol server
+
+ # ===== AI Provider API Keys =====
+
+ # OpenAI (GPT-4)
+ # Get your key from: https://platform.openai.com/api-keys
+ OPENAI_API_KEY=
+
+ # Anthropic (Claude)
+ # Get your key from: https://console.anthropic.com/
+ ANTHROPIC_API_KEY=
+
+ # Google Gemini
+ # Get your key from: https://makersuite.google.com/app/apikey
+ GEMINI_API_KEY=
+
+ # vLLM (Local Inference)
+ VLLM_BASE_URL=http://localhost:8000/v1
+ VLLM_MODEL=ibm-granite/granite-4.0-h-1b
+ VLLM_API_KEY=EMPTY
+
+ # ===== Demo API Keys (For Hackathon Organizers) =====
+ # These are checked FIRST before user keys
+ # Set these on HuggingFace Spaces to enable AI for judges/testers
+ # Falls back to Mock AI if keys are invalid or out of credits
+
+ DEMO_ANTHROPIC_API_KEY=
+ DEMO_OPENAI_API_KEY=
+ DEMO_GEMINI_API_KEY=
+
+ # ===== ElevenLabs Voice Integration (Optional) =====
+ # Get your key from: https://elevenlabs.io/
+ # Voice feedback is OPTIONAL - app works perfectly without it
+
+ ELEVEN_API_KEY=
+ DEMO_ELEVEN_API_KEY=
+
+ # Voice settings (optional)
+ VOICE_ENABLED=true # Set to false to disable voice globally
+
+ # ===== Notes =====
+ # - Leave API keys empty to use Mock AI (demo mode)
+ # - Demo keys are perfect for hackathon deployments
+ # - App gracefully degrades to Mock AI on any errors
README.md CHANGED
@@ -1,17 +1,730 @@
  ---
- title: FocusFlowAI
- emoji: 💬
- colorFrom: yellow
- colorTo: purple
- sdk: gradio
- sdk_version: 5.42.0
- app_file: app.py
- pinned: false
- hf_oauth: true
- hf_oauth_scopes:
- - inference-api
- license: mit
- short_description: FocusFlow, an AI productivity companion + task tracker
- ---
 
- An example chatbot using [Gradio](https://gradio.app), [`huggingface_hub`](https://huggingface.co/docs/huggingface_hub/v0.22.2/en/index), and the [Hugging Face Inference API](https://huggingface.co/docs/api-inference/index).
+ # 🦉 FocusFlow - AI Productivity Accountability Agent
+
+ **Your Duolingo-style AI buddy that keeps you focused while coding**
+
+ [![Gradio](https://img.shields.io/badge/Built%20with-Gradio-orange)](https://gradio.app)
+ [![MCP](https://img.shields.io/badge/MCP-Enabled-blue)](https://modelcontextprotocol.io)
+ [![Python](https://img.shields.io/badge/Python-3.11+-green)](https://python.org)
+
+ ## 🎯 The Problem
+
+ Developers with ADHD and procrastination tendencies struggle with:
+ - **Task paralysis**: Difficulty breaking projects into manageable pieces
+ - **Context switching**: Getting distracted by unrelated files/tasks
+ - **Accountability**: No one keeping them on track during work sessions
+ - **Progress tracking**: Not knowing if they're making real progress
+
+ ## ✨ The Solution
+
+ FocusFlow is an AI-powered productivity partner that:
+ 1. **Breaks down projects** into 5-8 micro-tasks (15-30 min each)
+ 2. **Monitors your workspace** in real-time (file changes or text input)
+ 3. **Provides Duolingo-style nudges** when you get distracted or idle
+ 4. **Tracks focus metrics** with streaks, scores, and visualizations
+ 5. **Integrates with LLMs** via MCP (Model Context Protocol) for natural language task management
+
+ ## 🚀 Key Features
+
+ ### 🎯 AI-Powered Project Onboarding
+ - Describe your project in plain English
+ - Get actionable micro-tasks instantly
+ - Smart task generation based on project type (web, API, etc.)
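The project-type heuristic above can be sketched as a keyword lookup, similar in spirit to the Mock AI fallback; the template contents and function name here are illustrative, not the app's actual implementation:

```python
# Hypothetical sketch of keyword-based micro-task generation.
TASK_TEMPLATES = {
    "api": ["Define endpoints", "Set up routing", "Add auth middleware",
            "Write request validation", "Add integration tests"],
    "web": ["Scaffold layout", "Build core components", "Wire up state",
            "Style the UI", "Add end-to-end tests"],
}

def generate_micro_tasks(description: str, duration_min: int = 25) -> list[dict]:
    """Pick a template by project type and return micro-tasks in the 15-30 min range."""
    desc = description.lower()
    for keyword, steps in TASK_TEMPLATES.items():
        if keyword in desc:
            break
    else:
        # No recognized project type: fall back to a generic breakdown.
        steps = ["Clarify requirements", "Sketch architecture",
                 "Implement core feature", "Test and refine"]
    return [{"title": s, "duration": duration_min} for s in steps]

tasks = generate_micro_tasks("Build a REST API with authentication")
print(len(tasks), tasks[0]["title"])  # → 5 Define endpoints
```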
+
+ ### 👁️ Real-Time Focus Monitoring
+ - **Local Mode**: Watches your project directory for file changes
+ - **Demo Mode**: Simulates workspace with text area (perfect for HuggingFace Spaces)
+ - Content-aware analysis (reads code changes, not just filenames)
+
+ ### 🦉 Duolingo-Style Personality
+ - **On Track**: "Great work! You're making solid progress! 🎯"
+ - **Distracted**: "Wait, what are you working on? That doesn't look like the task! 🤨"
+ - **Idle**: "Files won't write themselves. *Hoot hoot.* 🦉"
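The personality boils down to a verdict-to-nudge mapping; a minimal sketch using the messages above (the function name and the default message are hypothetical):

```python
# Map focus-check verdicts to Duolingo-style nudges (messages from the list above).
NUDGES = {
    "on_track": "Great work! You're making solid progress! 🎯",
    "distracted": "Wait, what are you working on? That doesn't look like the task! 🤨",
    "idle": "Files won't write themselves. *Hoot hoot.* 🦉",
}

def nudge_for(state: str) -> str:
    # Unknown states get a gentle default rather than an error.
    return NUDGES.get(state, "Keep going! 🦉")

print(nudge_for("idle"))  # → Files won't write themselves. *Hoot hoot.* 🦉
```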
+
+ ### 📊 Productivity Dashboard
+ - **Focus Score**: 0-100 rating based on on-track percentage
+ - **Streaks**: Consecutive "On Track" checks 🔥
+ - **Weekly Trends**: Visualize your focus patterns
+ - **State Distribution**: See where your time goes
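The two headline metrics can be computed directly from the check history: score as the on-track percentage, streak as the trailing run of on-track checks. A sketch under those definitions (function names are illustrative):

```python
# Focus score: percentage of checks that were "on_track", rounded to 0-100.
def focus_score(checks: list[str]) -> int:
    if not checks:
        return 0
    return round(100 * checks.count("on_track") / len(checks))

# Streak: how many of the most recent checks in a row were "on_track".
def current_streak(checks: list[str]) -> int:
    streak = 0
    for state in reversed(checks):
        if state != "on_track":
            break
        streak += 1
    return streak

history = ["on_track", "distracted", "on_track", "on_track"]
print(focus_score(history), current_streak(history))  # → 75 2
```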
+
+ ### 🍅 Built-in Pomodoro Timer
+ - 25-minute work sessions
+ - 5-minute break reminders
+ - Audio alerts + browser notifications
+ - Auto-switching between work and break modes
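The auto-switching behavior is a small two-state machine; a sketch of the cycle (the class and method names are illustrative, not the app's code):

```python
# Minimal Pomodoro cycle: 25-minute work phases alternate with 5-minute breaks.
WORK, BREAK = 25 * 60, 5 * 60  # durations in seconds

class Pomodoro:
    def __init__(self):
        self.mode, self.remaining = "work", WORK

    def tick(self, seconds: int = 1) -> str:
        """Advance the timer; switch modes when the current phase runs out."""
        self.remaining -= seconds
        if self.remaining <= 0:
            self.mode = "break" if self.mode == "work" else "work"
            self.remaining = BREAK if self.mode == "break" else WORK
        return self.mode

timer = Pomodoro()
timer.tick(25 * 60)  # the whole work phase elapses
print(timer.mode)    # → break
```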
+
+ ### 🔗 MCP Integration (Game Changer!)
+ Connect FocusFlow to Claude Desktop, Cursor, or any MCP-compatible client!
+
+ **Available MCP Tools:**
+ - `add_task(title, description, duration)` - Create tasks via conversation
+ - `get_current_task()` - Check what you should be working on
+ - `start_task(task_id)` - Begin a focus session
+ - `mark_task_done(task_id)` - Complete tasks
+ - `get_all_tasks()` - List all tasks
+ - `get_productivity_stats()` - View your metrics
+
+ **MCP Resources:**
+ - `focusflow://tasks/all` - Full task list
+ - `focusflow://tasks/active` - Current active task
+ - `focusflow://stats` - Productivity statistics
+
+ ## 📦 Installation
+
+ ### Quick Start (Demo Mode)
+
+ ```bash
+ # Clone the repository
+ git clone https://github.com/Rebell-Leader/FocusFlow.git
+ cd FocusFlow
+
+ # Install dependencies
+ pip install -r requirements.txt
+
+ # Run in demo mode (no API keys needed!)
+ python app.py
+ ```
+
+ Open `http://localhost:5000` in your browser.
+
+ ### With AI Provider (Optional)
+
+ FocusFlow supports multiple AI providers:
+
+ ```bash
+ # Option 1: OpenAI
+ export AI_PROVIDER=openai
+ export OPENAI_API_KEY=your_key_here
+
+ # Option 2: Anthropic Claude
+ export AI_PROVIDER=anthropic
+ export ANTHROPIC_API_KEY=your_key_here
+
+ # Option 3: Google Gemini
+ export AI_PROVIDER=gemini
+ export GEMINI_API_KEY=your_key_here
+
+ # Option 4: vLLM (local inference)
+ export AI_PROVIDER=vllm
+ export VLLM_BASE_URL=http://localhost:8000/v1
+ export VLLM_MODEL=ibm-granite/granite-4.0-h-1b
+
+ # Then run
+ python app.py
+ ```
+
+ **No API keys? No problem!** FocusFlow automatically uses a **Mock AI** agent with predefined responses for testing.
+
+ ### For Hackathon Organizers (HuggingFace Spaces)
+
+ To enable AI features on demo deployments, set **demo API keys** as environment variables:
+
+ ```bash
+ DEMO_ANTHROPIC_API_KEY=sk-ant-xxx # Checked first, falls back to user keys
+ DEMO_OPENAI_API_KEY=sk-xxx # Same fallback logic
+ DEMO_GEMINI_API_KEY=xxx # Same fallback logic
+ ```
+
+ If demo keys run out of credits, FocusFlow gracefully falls back to Mock AI mode automatically.
+
+ ## 🔌 Connecting to Claude Desktop (MCP)
+
+ ### Step 1: Start FocusFlow
+
+ ```bash
+ python app.py
+ ```
+
+ You'll see:
+ ```
+ 🔗 MCP Server enabled! Connect via Claude Desktop or other MCP clients.
+ * Streamable HTTP URL: http://localhost:5000/gradio_api/mcp/
+ ```
+
+ ### 🔌 Connecting via MCP (Claude Desktop / Windows)
+
+ FocusFlow runs an MCP server that you can connect to from external tools like Claude Desktop or LM Studio.
+
+ **If running on WSL and connecting from Windows:**
+
+ 1. **Ensure the app is running**: `python app.py` (it listens on `0.0.0.0` by default).
+ 2. **Find your WSL IP address**: Run `wsl hostname -I` in your terminal.
+ 3. **Configure your MCP client**:
+    * **Type**: SSE (Server-Sent Events)
+    * **URL**: `http://<YOUR_WSL_IP>:5000/gradio_api/mcp/sse`
+    * *(Or try `http://localhost:5000/gradio_api/mcp/sse` if localhost forwarding is working)*
+
+ **Available MCP Tools:**
+ * `get_active_task`: Get the currently active task.
+ * `add_task`: Create a new task.
+ * `update_task`: Update task status or details.
+ * `get_productivity_stats`: Get focus scores and metrics.
+
+ **Configuration (mcp.json / claude_desktop_config.json):**
+
+ ```json
+ {
+   "mcpServers": {
+     "focusflow": {
+       "url": "http://<YOUR_WSL_IP>:5000/gradio_api/mcp/sse"
+     }
+   }
+ }
+ ```
+ *Replace `<YOUR_WSL_IP>` with the IP address from step 2 (e.g., `172.x.x.x`).*
+
+ ### Step 2: Configure Claude Desktop
+
+ #### macOS
+ Edit `~/Library/Application Support/Claude/claude_desktop_config.json`:
+
+ ```json
+ {
+   "mcpServers": {
+     "focusflow": {
+       "command": "python",
+       "args": ["/absolute/path/to/your/focusflow/app.py"]
+     }
+   }
+ }
+ ```
+
+ #### Windows
+ Edit `%APPDATA%\Claude\claude_desktop_config.json` with the same JSON format.
+
+ ### Step 3: Restart Claude Desktop
+
+ Close and reopen Claude Desktop. You should see FocusFlow tools available!
+
+ ### Step 4: Test It Out
+
+ In Claude Desktop, try:
+ ```
+ "Add a task to build a REST API with authentication"
+ "What task should I work on now?"
+ "Mark task 1 as done"
+ "Show me my productivity stats"
+ ```
+
+ ## 🆕 Recent Updates
+
+ ### Version 1.0 (Hackathon Release)
+ - ✅ **MCP Integration** - Full Model Context Protocol support with 8 tools and 3 resources
+ - ✅ **Voice Feedback** - ElevenLabs integration for Duolingo-style audio nudges
+ - ✅ **Demo API Keys** - Support for `DEMO_ANTHROPIC_API_KEY`, `DEMO_OPENAI_API_KEY`, and `DEMO_ELEVEN_API_KEY` for hackathon deployments
+ - ✅ **Productivity Dashboard** - Focus scores, streaks, weekly trends, and state distribution charts
+ - ✅ **Mock AI Mode** - Works without API keys for testing and demos
+ - ✅ **Graceful Degradation** - Automatically falls back to Mock AI if API keys are invalid or out of credits
+ - ✅ **Dual Launch Modes** - Demo mode (text area) and Local mode (file monitoring)
+ - ✅ **Comprehensive Testing** - Full testing checklist in `TESTING_CHECKLIST.md`
+ - ✅ **Error Handling** - Invalid task IDs return helpful error messages in MCP tools
+ - ✅ **Metrics Integration** - MCP `get_productivity_stats()` includes focus scores and streaks
+
+ ## ⚙️ Configuration & Environment Variables
+
+ ### Core Settings
+
+ | Variable | Default | Options | Description |
+ |----------|---------|---------|-------------|
+ | `LAUNCH_MODE` | `demo` | `demo`, `local` | Workspace monitoring mode (see Launch Modes below) |
+ | `AI_PROVIDER` | `anthropic` | `openai`, `anthropic`, `gemini`, `vllm`, `mock` | AI provider to use |
+ | `MONITOR_INTERVAL` | `30` | Any positive integer | Seconds between automatic focus checks |
+ | `ENABLE_MCP` | `true` | `true`, `false` | Enable/disable MCP server |
+
+ ### AI Provider API Keys
+
+ **Priority Order:** Demo keys are checked first, then user keys; if neither works, the app falls back to Mock AI.
+
+ #### User API Keys
+ | Variable | Description | Get Key From |
+ |----------|-------------|--------------|
+ | `OPENAI_API_KEY` | Your personal OpenAI API key | https://platform.openai.com/api-keys |
+ | `ANTHROPIC_API_KEY` | Your personal Anthropic API key | https://console.anthropic.com/ |
+ | `GEMINI_API_KEY` | Your personal Google Gemini API key | https://makersuite.google.com/app/apikey |
+ | `ELEVEN_API_KEY` | Your personal ElevenLabs API key (optional) | https://elevenlabs.io/api |
+ | `LINEAR_API_KEY` | Your personal Linear API key (optional) | https://linear.app/settings/api |
+
+ #### Demo API Keys (For Hackathon Organizers)
+ | Variable | Description | Use Case |
+ |----------|-------------|----------|
+ | `DEMO_ANTHROPIC_API_KEY` | Shared Anthropic key for demos | Set on HuggingFace Spaces for judges/testers |
+ | `DEMO_OPENAI_API_KEY` | Shared OpenAI key for demos | Set on HuggingFace Spaces for judges/testers |
+ | `DEMO_ELEVEN_API_KEY` | Shared ElevenLabs key for voice | Set on HuggingFace Spaces for voice feedback |
+
+ **How It Works:**
+ ```python
+ # Priority chain (Anthropic example):
+ # 1. Check DEMO_ANTHROPIC_API_KEY (hackathon demo key)
+ # 2. If not found, check ANTHROPIC_API_KEY (user's personal key)
+ # 3. If not found or invalid, fall back to Mock AI (no errors!)
+
+ # Voice integration (optional):
+ # 1. Check DEMO_ELEVEN_API_KEY (hackathon demo key)
+ # 2. If not found, check ELEVEN_API_KEY (user's personal key)
+ # 3. If not found, gracefully disable voice (text-only mode)
+ ```
+
+ #### vLLM Settings (Local Inference)
+ | Variable | Default | Description |
+ |----------|---------|-------------|
+ | `VLLM_BASE_URL` | `http://localhost:8000/v1` | vLLM server endpoint |
+ | `VLLM_MODEL` | `ibm-granite/granite-4.0-h-1b` | Model name |
+ | `VLLM_API_KEY` | `EMPTY` | API key (usually not needed for local) |
+
+ ### API Key Management Best Practices
+
+ **For Local Development:**
+ ```bash
+ # Copy the example file
+ cp .env.example .env
+
+ # Edit .env and add your personal keys
+ nano .env # or your preferred editor
+ ```
+
+ **For HuggingFace Spaces Deployment:**
+ ```bash
+ # In Space Settings > Variables, add:
+ LAUNCH_MODE=demo
+ AI_PROVIDER=anthropic
+ DEMO_ANTHROPIC_API_KEY=sk-ant-your-hackathon-key
+ ```
+
+ **For Testing Without API Keys:**
+ ```bash
+ # Just run - Mock AI activates automatically!
+ python app.py
+ # Status: "ℹ️ Running in DEMO MODE with Mock AI (no API keys needed). Perfect for testing! 🎭"
+ ```
+
+ ### Graceful Degradation
+
+ FocusFlow **never crashes** due to missing or invalid API keys:
+
+ | Scenario | Behavior | User Experience |
+ |----------|----------|-----------------|
+ | No API keys set | Uses Mock AI | ✅ Full demo functionality |
+ | Invalid API key | Falls back to Mock AI | ✅ App continues working |
+ | API out of credits | Falls back to Mock AI | ✅ Seamless transition |
+ | API rate limited | Retries, then Mock AI | ✅ No interruption |
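The never-crash behavior amounts to wrapping every provider call in a fallback; a minimal sketch (the wrapper name and mock reply are illustrative):

```python
# Any provider error (invalid key, exhausted credits, rate limit) falls
# back to a canned Mock AI response instead of surfacing an exception.
MOCK_REPLY = "On Track! Great work! 🎯 (Mock AI)"

def safe_focus_check(provider_call, workspace_text: str) -> str:
    try:
        return provider_call(workspace_text)
    except Exception:
        return MOCK_REPLY  # seamless transition, no error page

def broken_provider(_text):
    raise RuntimeError("401: invalid API key")

print(safe_focus_check(broken_provider, "working on auth"))  # → On Track! Great work! 🎯 (Mock AI)
```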
+
+ **Status Messages:**
+ - `✅ Anthropic Claude initialized successfully (demo key)` - Demo key working
+ - `✅ OpenAI GPT-4 initialized successfully (user key)` - User key working
+ - `ℹ️ Running in DEMO MODE with Mock AI (no API keys needed)` - Fallback active
+
+ ## 🎮 Launch Modes Explained
+
+ FocusFlow supports two workspace monitoring modes:
+
+ ### Demo Mode (`LAUNCH_MODE=demo`)
+
+ **Best for:**
+ - HuggingFace Spaces deployments
+ - Replit deployments
+ - Testing without file system access
+ - Hackathon demos for judges
+
+ **How it works:**
+ - Provides a text area for simulating workspace activity
+ - Users type what they're working on
+ - AI analyzes text content for focus checks
+ - No file system permissions needed
+
+ **Example:**
+ ```bash
+ export LAUNCH_MODE=demo
+ python app.py
+ # Monitor tab shows: "Demo Workspace" text area
+ ```
+
+ **User Workflow:**
+ 1. Type: "Working on authentication API, creating login endpoint"
+ 2. Click "Check Focus Now"
+ 3. Result: "On Track! Great work! 🎯"
+
+ ### Local Mode (`LAUNCH_MODE=local`)
+
+ **Best for:**
+ - Local development environments
+ - Real-time file monitoring
+ - Production use cases
+ - Personal productivity tracking
+
+ **How it works:**
+ - Uses the `watchdog` library to monitor file system changes
+ - Automatically detects file modifications in the project directory
+ - Reads actual file diffs for intelligent analysis
+ - Triggers focus checks automatically when files change
+
+ **Example:**
+ ```bash
+ export LAUNCH_MODE=local
+ python app.py
+ # Monitor tab shows: "Watching directory: /your/project/path"
+ ```
+
+ **User Workflow:**
+ 1. Start a task in Task Manager
+ 2. Edit files in your project
+ 3. FocusFlow automatically detects changes and runs focus checks
+ 4. Receive real-time feedback
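The app itself uses the `watchdog` library for event-driven monitoring; the same idea can be approximated with the standard library by polling modification times, which is what this sketch does (helper names are illustrative):

```python
# Stdlib approximation of file monitoring: snapshot mtimes, diff snapshots.
import os

def snapshot(root: str) -> dict[str, float]:
    """Map each file under root to its last-modified time."""
    times = {}
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            times[path] = os.stat(path).st_mtime
    return times

def changed_files(before: dict, after: dict) -> list[str]:
    """Files whose mtime changed (or that are new) between snapshots."""
    return [p for p, t in after.items() if before.get(p) != t]

# Usage sketch: take a snapshot, let the user edit files, snapshot again:
# before = snapshot(".")
# ... edits happen ...
# print(changed_files(before, snapshot(".")))
```

This polling approach is simpler but less responsive than watchdog's OS-level events, which is why a real-time monitor prefers the latter.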
+
+ ### Choosing the Right Mode
+
+ | Use Case | Recommended Mode | Reason |
+ |----------|------------------|--------|
+ | HuggingFace Spaces | `demo` | No file system access in web deployments |
+ | Hackathon demo | `demo` | Easy for judges to test without setup |
+ | Local development | `local` | Real-time file monitoring is more natural |
+ | Replit | `demo` | Simpler, no file permission issues |
+ | Personal productivity | `local` | Authentic workspace monitoring |
+
+ ## 📁 Project Structure
+
+ ```
+ focusflow/
+ ├── app.py              # Main Gradio application
+ ├── agent.py            # AI focus agent (OpenAI/Anthropic/Mock)
+ ├── storage.py          # Task manager with SQLite
+ ├── monitor.py          # File monitoring with watchdog
+ ├── metrics.py          # Productivity tracking
+ ├── mcp_tools.py        # MCP tools and resources
+ ├── requirements.txt    # Python dependencies
+ ├── .env.example        # Environment template
+ └── README.md           # This file
+ ```
+
+ ## 🎥 Demo & Screenshots
+
+ ### Home Screen
+ Clean interface with feature overview and configuration status.
+
+ ### Onboarding
+ AI generates micro-tasks from project descriptions.
+
+ ### Task Manager
+ Kanban-style task board with drag-and-drop (coming soon).
+
+ ### Dashboard
+ Visualize focus scores, streaks, and productivity trends.
+
+ ### Monitor
+ Real-time focus checks with Duolingo-style feedback.
+
+ ## 🏆 Why This is Perfect for the Gradio MCP Hackathon
+
+ 1. **Novel MCP Use Case**: First MCP-powered productivity/accountability tool
+ 2. **Deep Integration**: Natural language task management through Claude Desktop
+ 3. **Real Problem**: Solves actual pain points for developers with ADHD
+ 4. **Gradio Showcase**: Uses tabs, plots, timers, state management, and custom JS
+ 5. **Demo-Friendly**: Works without API keys, deployable to HF Spaces
+ 6. **Production-Ready**: SQLite persistence, metrics tracking, error handling
+
+ ## 🧪 Testing
+
+ ### Quick Feature Test (5 minutes)
+
+ Use the comprehensive **`TESTING_CHECKLIST.md`** file for detailed testing instructions. Here's a quick verification:
+
+ ```bash
+ # 1. Start the app
+ python app.py
+
+ # 2. Open browser
+ open http://localhost:5000
+
+ # 3. Test each tab:
+ # ✅ Home: Check status message shows AI provider
+ # ✅ Onboarding: Generate tasks from project description
+ # ✅ Task Manager: Create/edit/delete/start tasks
+ # ✅ Monitor: Perform focus checks (demo workspace or file changes)
+ # ✅ Dashboard: View metrics after focus checks
+ # ✅ Pomodoro: Start/pause/reset timer
+ ```
+
+ ### Test Scenarios
+
+ **Scenario 1: Demo Mode (No API Keys)**
+ ```bash
+ # No .env file needed
+ python app.py
+ # Expected: "ℹ️ Running in DEMO MODE with Mock AI"
+ # Test: All features work with predefined responses
+ ```
+
+ **Scenario 2: With Anthropic API Key**
+ ```bash
+ export AI_PROVIDER=anthropic
+ export ANTHROPIC_API_KEY=sk-ant-your-key
+ python app.py
+ # Expected: "✅ Anthropic Claude initialized successfully (user key)"
+ # Test: Intelligent task generation and focus analysis
+ ```
+
+ **Scenario 3: Demo Key (Hackathon Deployment)**
+ ```bash
+ export AI_PROVIDER=anthropic
+ export DEMO_ANTHROPIC_API_KEY=sk-ant-demo-key
+ python app.py
+ # Expected: "✅ Anthropic Claude initialized successfully (demo key)"
+ # Test: Uses demo key, falls back to Mock if exhausted
+ ```
+
+ **Scenario 4: MCP Integration**
+ ```bash
+ # 1. Configure Claude Desktop (see MCP section above)
+ # 2. Start FocusFlow
+ python app.py
+ # 3. In Claude Desktop, test tools:
+ #    - "Add a task to implement OAuth2"
+ #    - "What's my current task?"
+ #    - "Show my productivity stats"
+ ```
+
+ ### Full Test Pass
+
+ Run through the full checklist:
+ ```bash
+ # Follow TESTING_CHECKLIST.md step-by-step
+ # Expected: All features pass without errors
+ # Time: ~15 minutes for a comprehensive test
+ ```
+
+ ### Common Issues & Solutions
+
+ | Issue | Solution |
+ |-------|----------|
+ | "No API key" error | Set `AI_PROVIDER=mock` or leave keys empty - Mock AI activates automatically |
+ | Charts show "Infinite extent" warning | Normal on first load with no data - warnings disappear after focus checks |
+ | MCP tools not visible in Claude | Restart Claude Desktop after config changes |
+ | File monitoring not working | Check `LAUNCH_MODE=local` and file permissions |
+ | Tasks not persisting | Check that `focusflow.db` exists and is writable |
+
+ ## 🚀 Deployment
+
+ ### Deployment Option 1: HuggingFace Spaces (Recommended for Hackathon)
+
+ **Step 1: Create Space**
+ 1. Go to https://huggingface.co/spaces
+ 2. Click "Create new Space"
+ 3. Select "Gradio" as the SDK
+ 4. Choose a name (e.g., `focusflow-demo`)
+
+ **Step 2: Upload Files**
+ Upload these files to your Space:
+ - `app.py`
+ - `agent.py`
+ - `storage.py`
+ - `monitor.py`
+ - `metrics.py`
+ - `mcp_tools.py`
+ - `requirements.txt`
+ - `README.md`
+ - `.env.example` (optional, for documentation)
+
+ **Step 3: Configure Environment Variables**
+
+ In Space Settings → Variables, add:
+
+ **Option A: With Demo AI (Recommended for Judges)**
+ ```bash
+ LAUNCH_MODE=demo
+ AI_PROVIDER=anthropic
+ DEMO_ANTHROPIC_API_KEY=sk-ant-your-hackathon-shared-key
+ MONITOR_INTERVAL=30
+ ENABLE_MCP=true
+ ```
+
+ **Option B: Mock AI Only (No API Keys Needed)**
+ ```bash
+ LAUNCH_MODE=demo
+ AI_PROVIDER=mock
+ MONITOR_INTERVAL=30
+ ENABLE_MCP=true
+ ```
+
+ **Step 4: Deploy**
+ - Click "Save" - the Space will automatically rebuild and deploy
+ - Your app will be live at: `https://huggingface.co/spaces/yourusername/focusflow-demo`
+ - MCP server endpoint: `https://yourusername-focusflow-demo.hf.space/gradio_api/mcp/`
+
+ **Step 5: Test Deployment**
+ 1. Open the Space URL
+ 2. Test onboarding → Generate tasks
+ 3. Test task manager → CRUD operations
+ 4. Test monitor → Focus checks
+ 5. Test dashboard → View metrics
+ 6. Test MCP (optional) → Connect from Claude Desktop
+
+ ### Deployment Option 2: Replit
+
+ **Step 1: Import Repository**
+ 1. Go to https://replit.com
+ 2. Click "Create Repl" → "Import from GitHub"
+ 3. Paste your FocusFlow repository URL
+
+ **Step 2: Configure Secrets**
+ In Replit Secrets (Tools → Secrets):
+ ```bash
+ AI_PROVIDER=anthropic
+ ANTHROPIC_API_KEY=your_key_here
+ LAUNCH_MODE=demo
+ ```
+
+ **Step 3: Run**
+ ```bash
+ python app.py
+ ```
+
+ **Step 4: Share**
+ - Click the "Share" button
+ - Copy the public URL
+ - MCP endpoint: `https://yourrepl.repl.co/gradio_api/mcp/`
+
+ ### Deployment Option 3: Local Development
+
+ **Step 1: Clone Repository**
+ ```bash
+ git clone https://github.com/yourusername/focusflow.git
+ cd focusflow
+ ```
+
+ **Step 2: Install Dependencies**
+ ```bash
+ pip install -r requirements.txt
+ ```
+
+ **Step 3: Configure Environment**
+ ```bash
+ # Copy example
+ cp .env.example .env
+
+ # Edit .env with your settings
+ nano .env # or code .env, vim .env, etc.
+ ```
+
+ **Step 4: Run**
+ ```bash
+ # For local file monitoring
+ export LAUNCH_MODE=local
+ python app.py
+
+ # For demo mode (text area)
+ export LAUNCH_MODE=demo
+ python app.py
+ ```
+
+ **Step 5: Access**
+ - Web UI: http://localhost:5000
+ - MCP endpoint: http://localhost:5000/gradio_api/mcp/
+
+ ### Deployment Option 4: Docker (Advanced)
+
+ Create a `Dockerfile`:
+ ```dockerfile
+ FROM python:3.11-slim
+
+ WORKDIR /app
+ COPY requirements.txt .
+ RUN pip install --no-cache-dir -r requirements.txt
+
+ COPY . .
+
+ ENV LAUNCH_MODE=demo
+ ENV AI_PROVIDER=mock
+ ENV ENABLE_MCP=true
+
+ EXPOSE 5000
+
+ CMD ["python", "app.py"]
+ ```
+
+ Build and run (override `AI_PROVIDER` at run time, since the image defaults to Mock AI):
+ ```bash
+ docker build -t focusflow .
+ docker run -p 5000:5000 -e AI_PROVIDER=anthropic -e ANTHROPIC_API_KEY=your_key focusflow
+ ```
+
+ ### Post-Deployment Checklist
+
+ After deploying to any platform:
+
+ - [ ] App loads without errors
+ - [ ] Home tab shows correct AI provider status
+ - [ ] Onboarding generates tasks
+ - [ ] Task Manager CRUD operations work
+ - [ ] Monitor tab performs focus checks
+ - [ ] Dashboard displays metrics (after checks)
+ - [ ] Pomodoro timer functions
+ - [ ] MCP endpoint accessible (optional)
+ - [ ] No sensitive data exposed in logs
+ - [ ] Database file (`focusflow.db`) created successfully
+
+ ### Environment Variables Reference (Deployment)
+
+ **Required:**
+ - `LAUNCH_MODE` - Always set to `demo` for web deployments
+
+ **Optional:**
+ - `AI_PROVIDER` - `anthropic`, `openai`, `gemini`, `vllm`, or `mock` (default: `anthropic`)
+ - `DEMO_ANTHROPIC_API_KEY` - For hackathon/shared deployments
+ - `DEMO_OPENAI_API_KEY` - Alternative demo provider
+ - `ANTHROPIC_API_KEY` - User's personal key
+ - `OPENAI_API_KEY` - User's personal key
+ - `MONITOR_INTERVAL` - Seconds between checks (default: 30)
+ - `ENABLE_MCP` - Enable MCP server (default: true)
+
+ ### Deployment Troubleshooting
+
+ | Issue | Platform | Solution |
+ |-------|----------|----------|
+ | Import errors | HF Spaces | Check `requirements.txt` includes all dependencies |
+ | "Port already in use" | Local | Change the port in `app.py` or kill the process using port 5000 |
+ | MCP not accessible | All | Ensure `ENABLE_MCP=true` and check firewall settings |
+ | Database errors | HF Spaces | Ensure the Space has write permissions (SQLite needs a writable filesystem) |
+ | Mock AI always active | All | Check that environment variables are set correctly |
+ | Slow performance | HF Spaces | The free tier has limited resources - consider upgrading |
+
+ ## 🛠️ Tech Stack
+
+ - **Frontend**: Gradio 5.0+ (Python UI framework)
+ - **Backend**: Python 3.11+
+ - **Database**: SQLite (zero-config persistence)
+ - **AI Providers**: OpenAI GPT-4, Anthropic Claude, Google Gemini, vLLM, or Mock
+ - **Voice**: ElevenLabs text-to-speech (optional)
+ - **File Monitoring**: Watchdog (real-time filesystem events)
+ - **MCP Integration**: Model Context Protocol for LLM interoperability
+ - **Charts**: Gradio native plots (pandas DataFrames)
+
+ ## 📈 Roadmap
+
+ - [ ] Mobile app (React Native + Gradio backend)
+ - [ ] GitHub integration (auto-detect tasks from issues)
+ - [ ] Slack/Discord notifications
+ - [ ] Team mode (shared accountability)
+ - [ ] Voice commands (Whisper integration)
+ - [ ] VS Code extension
+
+ ## 🤝 Contributing
+
+ Contributions welcome! Please:
+ 1. Fork the repository
+ 2. Create a feature branch
+ 3. Make your changes
+ 4. Submit a pull request
+
+ ## 📄 License
+
+ MIT License - feel free to use this for personal or commercial projects!
+
+ ## 🙏 Acknowledgments
+
+ - Built for the [Gradio MCP Hackathon](https://gradio.app) - competing for **Track 1: Building MCP**
+ - Voice integration powered by [ElevenLabs](https://elevenlabs.io) - competing for the **$2,000 Best Use of ElevenLabs** sponsor award
+ - Inspired by Duolingo's encouraging UX
+ - Uses [Model Context Protocol](https://modelcontextprotocol.io) for LLM integration
+
  ---
+
+ **Made with ❤️ for developers with ADHD who just need a little nudge to stay focused.**
+
+ *Hoot hoot!* 🦉
agent.py ADDED
@@ -0,0 +1,404 @@
+ """
+ AI Focus Agent with OpenAI/Claude integration and personality system.
+ """
+ import os
+ from typing import Dict, List, Optional, Union
+ from datetime import datetime
+ import json
+
+
+ class FocusAgent:
+     """AI agent that monitors focus and provides Duolingo-style nudges."""
+
+     def __init__(self, provider: str = "openai", api_key: Optional[str] = None,
+                  base_url: Optional[str] = None, model: Optional[str] = None):
+         """Initialize the focus agent with AI provider."""
+         self.provider = provider.lower()
+         self.last_verdict: Optional[str] = None
+         self.idle_count = 0
+         self.distracted_count = 0
+         self.connection_healthy = False
+
+         if self.provider == "openai":
+             from openai import OpenAI
+             self.api_key = api_key or os.getenv("OPENAI_API_KEY")
+             self.client = OpenAI(api_key=self.api_key) if self.api_key else None
+             self.model = model or "gpt-4o"
+             self.connection_healthy = bool(self.api_key)
+         elif self.provider == "anthropic":
+             from anthropic import Anthropic
+             self.api_key = api_key or os.getenv("ANTHROPIC_API_KEY")
+             self.client = Anthropic(api_key=self.api_key) if self.api_key else None
+             self.model = model or "claude-haiku-4-5-20251001"
+             self.connection_healthy = bool(self.api_key)
+         elif self.provider == "gemini":
+             import google.generativeai as genai
+             self.api_key = api_key or os.getenv("GEMINI_API_KEY")
+             if self.api_key:
+                 genai.configure(api_key=self.api_key)
+                 self.client = genai.GenerativeModel(model or "gemini-2.0-flash-exp")
+                 self.model = model or "gemini-2.0-flash-exp"
+                 self.connection_healthy = True
+             else:
+                 self.client = None
+                 self.connection_healthy = False
+         elif self.provider == "vllm":
+             from openai import OpenAI
+             import httpx
+             self.api_key = api_key or os.getenv("VLLM_API_KEY", "EMPTY")
+             self.base_url = base_url or os.getenv("VLLM_BASE_URL", "http://localhost:8000/v1")
+             self.model = model or os.getenv("VLLM_MODEL", "ibm-granite/granite-4.0-h-1b")
+
+             try:
+                 timeout = httpx.Timeout(5.0, connect=2.0)
+                 self.client = OpenAI(api_key=self.api_key, base_url=self.base_url, timeout=timeout)
+                 test_response = self.client.models.list()
+                 self.connection_healthy = True
+             except Exception as e:
58
+ print(f"⚠️ vLLM connection failed: {e}")
59
+ print(f" Make sure vLLM server is running at {self.base_url}")
60
+ self.client = None
61
+ self.connection_healthy = False
62
+ else:
63
+ raise ValueError(f"Unsupported provider: {provider}. Supported: openai, anthropic, gemini, vllm")
64
+
65
+ def _create_analysis_prompt(self, active_task: Dict, recent_activity: List[Dict]) -> str:
66
+ """Create the analysis prompt for the LLM."""
67
+ if not recent_activity:
68
+ return f"""You are FocusFlow, a Duolingo-style accountability buddy for developers.
69
+
70
+ **Current Task:**
71
+ - Title: {active_task.get('title', 'No task')}
72
+ - Description: {active_task.get('description', 'No description')}
73
+
74
+ **Recent Activity:** No file changes detected in the last 60 seconds.
75
+
76
+ **Your Job:** Analyze the situation and respond with ONE of these verdicts:
77
+ 1. "On Track" - If there's activity related to the task
78
+ 2. "Distracted" - If files unrelated to the task are being edited
79
+ 3. "Idle" - If there's no activity
80
+
81
+ Respond in JSON format:
82
+ {{
83
+ "verdict": "On Track" | "Distracted" | "Idle",
84
+ "message": "Your encouraging/sassy/nudging message (1-2 sentences, Duolingo style)",
85
+ "reasoning": "Brief explanation of your analysis"
86
+ }}"""
87
+
88
+ activity_summary = []
89
+ for event in recent_activity[-5:]:
90
+ activity_summary.append(
91
+ f"- {event['type'].upper()}: {event['filename']}\n Content: {event.get('content', 'N/A')[:200]}"
92
+ )
93
+
94
+ activity_text = "\n".join(activity_summary)
95
+
96
+ return f"""You are FocusFlow, a Duolingo-style accountability buddy for developers.
97
+
98
+ **Current Task:**
99
+ - Title: {active_task.get('title', 'No task')}
100
+ - Description: {active_task.get('description', 'No description')}
101
+
102
+ **Recent File Activity (last 60 seconds):**
103
+ {activity_text}
104
+
105
+ **Your Job:** Analyze if the file changes are related to the current task.
106
+
107
+ **Personality Guidelines:**
108
+ - "On Track": Be encouraging and specific (e.g., "Great job! I see you're working on the login form!")
109
+ - "Distracted": Be playfully sassy (e.g., "Wait, why are you editing random_file.py? We're building a Snake game! 🤨")
110
+ - "Idle": Be gently nudging (e.g., "Files won't write themselves. *Hoot hoot.* 🦉")
111
+
112
+ Respond in JSON format:
113
+ {{
114
+ "verdict": "On Track" | "Distracted" | "Idle",
115
+ "message": "Your message (1-2 sentences)",
116
+ "reasoning": "Brief explanation"
117
+ }}"""
118
+
119
+ def _call_llm(self, prompt: str) -> Dict:
120
+ """Call the LLM and parse the response."""
121
+ try:
122
+ if self.provider in ["openai", "vllm"]:
123
+ if not self.client:
124
+ return {"verdict": "On Track", "message": "API client not initialized", "reasoning": "No client"}
125
+ response = self.client.chat.completions.create(
126
+ model=self.model,
127
+ messages=[{"role": "user", "content": prompt}],
128
+ temperature=0.7,
129
+ max_tokens=300
130
+ )
131
+ content = response.choices[0].message.content
132
+ elif self.provider == "gemini":
133
+ if not self.client:
134
+ return {"verdict": "On Track", "message": "API client not initialized", "reasoning": "No client"}
135
+ response = self.client.generate_content(
136
+ prompt,
137
+ generation_config={
138
+ "temperature": 0.7,
139
+ "max_output_tokens": 300,
140
+ }
141
+ )
142
+ content = response.text
143
+ else: # anthropic
144
+ if not self.client:
145
+ return {"verdict": "On Track", "message": "API client not initialized", "reasoning": "No client"}
146
+ response = self.client.messages.create(
147
+ model=self.model,
148
+ max_tokens=300,
149
+ temperature=0.7,
150
+ messages=[{"role": "user", "content": prompt}]
151
+ )
152
+ content = response.content[0].text
153
+
154
+ if not content:
155
+ return {"verdict": "On Track", "message": "Empty response from API", "reasoning": "No content"}
156
+
157
+ # Try to parse JSON from the response
158
+ content = content.strip()
159
+ if "```json" in content:
160
+ content = content.split("```json")[1].split("```")[0].strip()
161
+ elif "```" in content:
162
+ content = content.split("```")[1].split("```")[0].strip()
163
+
164
+ result = json.loads(content)
165
+ return result
166
+
167
+ except json.JSONDecodeError:
168
+ # Fallback if JSON parsing fails
169
+ return {
170
+ "verdict": "On Track",
171
+ "message": content[:200],
172
+ "reasoning": "AI response parsing fallback"
173
+ }
174
+ except Exception as e:
175
+ return {
176
+ "verdict": "On Track",
177
+ "message": f"Error analyzing activity: {str(e)}",
178
+ "reasoning": "Error occurred"
179
+ }
180
+
181
+ def analyze(self, active_task: Optional[Dict], recent_activity: List[Dict]) -> Dict:
182
+ """Analyze current activity and return verdict."""
183
+ if not active_task:
184
+ return {
185
+ "verdict": "Idle",
186
+ "message": "No active task selected. Pick a task to get started! 🎯",
187
+ "reasoning": "No active task",
188
+ "timestamp": datetime.now().isoformat()
189
+ }
190
+
191
+ if not self.connection_healthy or not self.client:
192
+ provider_name = self.provider.upper()
193
+ if self.provider == "vllm":
194
+ msg = f"⚠️ vLLM server not reachable. Make sure it's running at {self.base_url}"
195
+ else:
196
+ msg = f"⚠️ {provider_name} API key not configured. Add your API key to enable AI monitoring."
197
+ return {
198
+ "verdict": "On Track",
199
+ "message": msg,
200
+ "reasoning": "No connection",
201
+ "timestamp": datetime.now().isoformat()
202
+ }
203
+
204
+ prompt = self._create_analysis_prompt(active_task, recent_activity)
205
+ result = self._call_llm(prompt)
206
+ result["timestamp"] = datetime.now().isoformat()
207
+
208
+ # Track consecutive idle/distracted states
209
+ verdict = result.get("verdict", "On Track")
210
+ if verdict == "Idle":
211
+ self.idle_count += 1
212
+ self.distracted_count = 0
213
+ elif verdict == "Distracted":
214
+ self.distracted_count += 1
215
+ self.idle_count = 0
216
+ else:
217
+ self.idle_count = 0
218
+ self.distracted_count = 0
219
+
220
+ result["should_alert"] = (self.idle_count >= 2 or self.distracted_count >= 2)
221
+ self.last_verdict = verdict
222
+
223
+ return result
224
+
225
+ def get_onboarding_tasks(self, project_description: str) -> List[Dict]:
226
+ """Generate micro-tasks from project description."""
227
+ if not self.connection_healthy or not self.client:
228
+ return []
229
+
230
+ prompt = f"""You are FocusFlow, an AI project planner.
231
+
232
+ The user wants to build: "{project_description}"
233
+
234
+ Break this down into 5-8 concrete, actionable micro-tasks. Each task should be:
235
+ - Specific and achievable in 15-30 minutes
236
+ - Ordered logically (setup → core features → polish)
237
+ - Clearly described
238
+
239
+ Respond in JSON format:
240
+ {{
241
+ "tasks": [
242
+ {{"title": "Task 1 title", "description": "Detailed description", "estimated_duration": "15 min"}},
243
+ {{"title": "Task 2 title", "description": "Detailed description", "estimated_duration": "20 min"}}
244
+ ]
245
+ }}"""
246
+
247
+ try:
248
+ if self.provider in ["openai", "vllm"]:
249
+ if not self.client:
250
+ return []
251
+ response = self.client.chat.completions.create(
252
+ model=self.model,
253
+ messages=[{"role": "user", "content": prompt}],
254
+ temperature=0.7,
255
+ max_tokens=800
256
+ )
257
+ content = response.choices[0].message.content
258
+ elif self.provider == "gemini":
259
+ if not self.client:
260
+ return []
261
+ response = self.client.generate_content(
262
+ prompt,
263
+ generation_config={
264
+ "temperature": 0.7,
265
+ "max_output_tokens": 800,
266
+ }
267
+ )
268
+ content = response.text
269
+ else: # anthropic
270
+ if not self.client:
271
+ return []
272
+ response = self.client.messages.create(
273
+ model=self.model,
274
+ max_tokens=800,
275
+ temperature=0.7,
276
+ messages=[{"role": "user", "content": prompt}]
277
+ )
278
+ content = response.content[0].text
279
+
280
+ if not content:
281
+ return []
282
+
283
+ # Parse JSON
284
+ content = content.strip()
285
+ if "```json" in content:
286
+ content = content.split("```json")[1].split("```")[0].strip()
287
+ elif "```" in content:
288
+ content = content.split("```")[1].split("```")[0].strip()
289
+
290
+ result = json.loads(content)
291
+ return result.get("tasks", [])
292
+
293
+ except Exception as e:
294
+ print(f"Error generating tasks: {e}")
295
+ return []
296
+
297
+
298
+ class MockFocusAgent(FocusAgent):
299
+ """Mock agent for demo mode without API keys. Returns predefined responses."""
300
+
301
+ def __init__(self):
302
+ """Initialize mock agent without any API dependencies."""
303
+ self.provider = "mock"
304
+ self.last_verdict = None
305
+ self.idle_count = 0
306
+ self.distracted_count = 0
307
+ self.connection_healthy = True
308
+ self.client = None
309
+ self.api_key = None
310
+ self.check_counter = 0
311
+
312
+ self.verdicts_cycle = ["On Track", "On Track", "Distracted", "On Track", "Idle"]
313
+ self.messages = {
314
+ "On Track": [
315
+ "Great work! You're making solid progress! 🎯",
316
+ "Keep it up! I see you're focused on the task. 💪",
317
+ "Looking good! You're on the right track! ✨",
318
+ "Nice! Your workflow is looking productive! 🚀"
319
+ ],
320
+ "Distracted": [
321
+ "Wait, what are you working on? That doesn't look like the task! 🤨",
322
+ "Hmm, spotted some wandering there. Let's refocus! 👀",
323
+ "Getting a bit sidetracked? Back to the task! 🎯",
324
+ "I see you there! Time to get back on track! 🦉"
325
+ ],
326
+ "Idle": [
327
+ "Files won't write themselves. *Hoot hoot.* 🦉",
328
+ "Hey! Time to make some progress! ⏰",
329
+ "No activity detected. Let's get moving! 💤",
330
+ "Your task is waiting! Let's code! 🔥"
331
+ ]
332
+ }
333
+
334
+ def analyze(self, active_task: Optional[Dict], recent_activity: List[Dict]) -> Dict:
335
+ """Return mock analysis results."""
336
+ if not active_task:
337
+ return {
338
+ "verdict": "Idle",
339
+ "message": "No active task selected. Pick a task to get started! 🎯",
340
+ "reasoning": "No active task (mock mode)",
341
+ "timestamp": datetime.now().isoformat()
342
+ }
343
+
344
+ # Cycle through verdicts
345
+ verdict = self.verdicts_cycle[self.check_counter % len(self.verdicts_cycle)]
346
+ self.check_counter += 1
347
+
348
+ # Get message for this verdict
349
+ import random
350
+ message = random.choice(self.messages[verdict])
351
+
352
+ # Track consecutive states
353
+ if verdict == "Idle":
354
+ self.idle_count += 1
355
+ self.distracted_count = 0
356
+ elif verdict == "Distracted":
357
+ self.distracted_count += 1
358
+ self.idle_count = 0
359
+ else:
360
+ self.idle_count = 0
361
+ self.distracted_count = 0
362
+
363
+ self.last_verdict = verdict
364
+
365
+ return {
366
+ "verdict": verdict,
367
+ "message": message,
368
+ "reasoning": f"Mock analysis for task: {active_task.get('title', 'Unknown')}",
369
+ "timestamp": datetime.now().isoformat(),
370
+ "should_alert": (self.idle_count >= 2 or self.distracted_count >= 2)
371
+ }
372
+
373
+ def get_onboarding_tasks(self, project_description: str) -> List[Dict]:
374
+ """Generate mock tasks based on project description."""
375
+ # Simple keyword-based task generation
376
+ description_lower = project_description.lower()
377
+
378
+ if any(word in description_lower for word in ["web", "website", "app", "frontend"]):
379
+ return [
380
+ {"title": "Set up project structure", "description": "Create folders and initial files", "estimated_duration": "15 min"},
381
+ {"title": "Design UI mockup", "description": "Sketch out the main interface", "estimated_duration": "20 min"},
382
+ {"title": "Build homepage", "description": "Create the landing page HTML/CSS", "estimated_duration": "30 min"},
383
+ {"title": "Add navigation", "description": "Implement menu and routing", "estimated_duration": "25 min"},
384
+ {"title": "Connect backend", "description": "Set up API integration", "estimated_duration": "30 min"},
385
+ {"title": "Test and debug", "description": "Fix bugs and test functionality", "estimated_duration": "20 min"}
386
+ ]
387
+ elif any(word in description_lower for word in ["api", "backend", "server"]):
388
+ return [
389
+ {"title": "Set up project structure", "description": "Initialize project and dependencies", "estimated_duration": "15 min"},
390
+ {"title": "Design database schema", "description": "Plan data models and relationships", "estimated_duration": "20 min"},
391
+ {"title": "Create API endpoints", "description": "Build REST routes", "estimated_duration": "30 min"},
392
+ {"title": "Add authentication", "description": "Implement user auth", "estimated_duration": "25 min"},
393
+ {"title": "Write tests", "description": "Create unit and integration tests", "estimated_duration": "30 min"}
394
+ ]
395
+ else:
396
+ # Generic tasks
397
+ return [
398
+ {"title": "Research and planning", "description": "Gather requirements and plan approach", "estimated_duration": "20 min"},
399
+ {"title": "Set up environment", "description": "Install dependencies and tools", "estimated_duration": "15 min"},
400
+ {"title": "Build core feature #1", "description": "Implement main functionality", "estimated_duration": "30 min"},
401
+ {"title": "Build core feature #2", "description": "Add secondary features", "estimated_duration": "25 min"},
402
+ {"title": "Testing and debugging", "description": "Test and fix issues", "estimated_duration": "20 min"},
403
+ {"title": "Documentation", "description": "Write README and comments", "estimated_duration": "15 min"}
404
+ ]
app.py CHANGED
@@ -1,70 +1,61 @@
- import gradio as gr
- from huggingface_hub import InferenceClient
-
-
- def respond(
-     message,
-     history: list[dict[str, str]],
-     system_message,
-     max_tokens,
-     temperature,
-     top_p,
-     hf_token: gr.OAuthToken,
- ):
-     """
-     For more information on `huggingface_hub` Inference API support, please check the docs: https://huggingface.co/docs/huggingface_hub/v0.22.2/en/guides/inference
-     """
-     client = InferenceClient(token=hf_token.token, model="openai/gpt-oss-20b")
-
-     messages = [{"role": "system", "content": system_message}]
-
-     messages.extend(history)
-
-     messages.append({"role": "user", "content": message})
-
-     response = ""
-
-     for message in client.chat_completion(
-         messages,
-         max_tokens=max_tokens,
-         stream=True,
-         temperature=temperature,
-         top_p=top_p,
-     ):
-         choices = message.choices
-         token = ""
-         if len(choices) and choices[0].delta.content:
-             token = choices[0].delta.content
-
-         response += token
-         yield response
-
-
  """
- For information on how to customize the ChatInterface, peruse the gradio docs: https://www.gradio.app/docs/chatinterface
+ FocusFlow: AI Accountability Agent with Gradio 5 Interface.
+ Configurable via environment variables for HuggingFace Spaces or local use.
  """
- chatbot = gr.ChatInterface(
-     respond,
-     type="messages",
-     additional_inputs=[
-         gr.Textbox(value="You are a friendly Chatbot.", label="System message"),
-         gr.Slider(minimum=1, maximum=2048, value=512, step=1, label="Max new tokens"),
-         gr.Slider(minimum=0.1, maximum=4.0, value=0.7, step=0.1, label="Temperature"),
-         gr.Slider(
-             minimum=0.1,
-             maximum=1.0,
-             value=0.95,
-             step=0.05,
-             label="Top-p (nucleus sampling)",
-         ),
-     ],
- )
-
- with gr.Blocks() as demo:
-     with gr.Sidebar():
-         gr.LoginButton()
-     chatbot.render()
-
+ import gradio as gr
+ import os
+ from dotenv import load_dotenv
+ from storage import TaskManager
+ from monitor import FileMonitor
+ from metrics import MetricsTracker
+ from voice import voice_generator
+ from linear_client import LinearClient
+ from core.pomodoro import PomodoroTimer
+ from core.focus_check import FocusMonitor
+ from ui.handlers import UIHandlers
+ from ui.layout import create_app
+
+ # Load environment variables
+ load_dotenv()
+
+ # Import MCP tools to register them with Gradio
+ try:
+     import mcp_tools
+     MCP_AVAILABLE = True
+ except Exception as e:
+     print(f"⚠️ MCP tools not available: {e}")
+     MCP_AVAILABLE = False
+
+ # Configuration from environment
+ LAUNCH_MODE = os.getenv("LAUNCH_MODE", "demo").lower()  # 'demo' or 'local'
+ AI_PROVIDER = os.getenv("AI_PROVIDER", "openai").lower()  # 'openai', 'anthropic', 'gemini', or 'vllm'
+ MONITOR_INTERVAL = int(os.getenv("MONITOR_INTERVAL", "30"))  # seconds
+
+ # Initialize Core Components
+ task_manager = TaskManager()
+ file_monitor = FileMonitor()
+ metrics_tracker = MetricsTracker()
+ linear_client = LinearClient()
+
+ # Initialize Logic Modules
+ focus_monitor = FocusMonitor(task_manager, file_monitor, metrics_tracker, voice_generator)
+ focus_monitor.set_launch_mode(LAUNCH_MODE)
+
+ pomodoro_timer = PomodoroTimer()
+
+ # Initialize UI Handlers
+ ui_handlers = UIHandlers(task_manager, file_monitor, metrics_tracker, focus_monitor, linear_client)
+
+ # Create the App
+ app = create_app(ui_handlers, pomodoro_timer, LAUNCH_MODE, AI_PROVIDER, MONITOR_INTERVAL)
 
  if __name__ == "__main__":
-     demo.launch()
+     # Enable the MCP server if available
+     mcp_enabled = os.getenv("ENABLE_MCP", "true").lower() == "true"
+
+     if MCP_AVAILABLE and mcp_enabled:
+         print("🔗 MCP Server enabled! Connect via Claude Desktop or other MCP clients.")
+         app.launch(server_name="0.0.0.0", server_port=5000, share=False, mcp_server=True)
+     else:
+         print("📱 Running without MCP integration")
+         app.launch(server_name="0.0.0.0", server_port=5000, share=False)
core/__init__.py ADDED
File without changes
core/__pycache__/__init__.cpython-310.pyc ADDED
Binary file (139 Bytes)
core/__pycache__/__init__.cpython-313.pyc ADDED
Binary file (143 Bytes)
core/__pycache__/focus_check.cpython-310.pyc ADDED
Binary file (4.02 kB)
core/__pycache__/focus_check.cpython-313.pyc ADDED
Binary file (5.96 kB)
core/__pycache__/pomodoro.cpython-310.pyc ADDED
Binary file (2.39 kB)
core/__pycache__/pomodoro.cpython-313.pyc ADDED
Binary file (3.57 kB)
core/focus_check.py ADDED
@@ -0,0 +1,143 @@
+ """
2
+ Focus Monitoring Logic.
3
+ """
4
+ import time
5
+ import json
6
+ from typing import List, Optional, Tuple, Any
7
+
8
+ class FocusMonitor:
9
+ def __init__(self, task_manager, file_monitor, metrics_tracker, voice_generator=None):
10
+ self.task_manager = task_manager
11
+ self.file_monitor = file_monitor
12
+ self.metrics_tracker = metrics_tracker
13
+ self.voice_generator = voice_generator
14
+
15
+ self.focus_agent = None
16
+ self.consecutive_distracted = 0
17
+ self.activity_log: List[str] = []
18
+ self.demo_text_content = ""
19
+ self.launch_mode = "demo" # Default
20
+
21
+ def set_agent(self, agent):
22
+ self.focus_agent = agent
23
+
24
+ def set_launch_mode(self, mode: str):
25
+ self.launch_mode = mode
26
+
27
+ def update_demo_text(self, text: str) -> str:
28
+ """Update demo text content (demo mode only)."""
29
+ self.demo_text_content = text
30
+ return f"✅ Text updated ({len(text)} characters)"
31
+
32
+ def get_activity_summary(self, monitoring_active: bool) -> str:
33
+ """Get recent activity summary."""
34
+ if self.launch_mode == "demo":
35
+ return f"📝 Demo text content: {len(self.demo_text_content)} characters"
36
+
37
+ if not monitoring_active:
38
+ return "⏸️ Monitoring is not active"
39
+
40
+ recent = self.file_monitor.get_recent_activity(5)
41
+ if not recent:
42
+ return "💤 No recent file activity"
43
+
44
+ summary = []
45
+ for event in recent:
46
+ summary.append(f"• {event['type'].upper()}: {event['filename']}")
47
+
48
+ return "\n".join(summary)
49
+
50
+ def run_check(self) -> Tuple[str, Optional[str], Optional[Any]]:
51
+ """
52
+ Run the focus check analysis with distraction escalation.
53
+ Returns:
54
+ Tuple[log_string, alert_js, voice_audio]
55
+ """
56
+ if not self.focus_agent:
57
+ return "⚠️ Agent not initialized. Check environment variables.", None, None
58
+
59
+ active_task = self.task_manager.get_active_task()
60
+
61
+ # Get recent activity based on mode
62
+ if self.launch_mode == "demo":
63
+ # In demo mode, create synthetic activity from text content
64
+ recent_activity = [{
65
+ 'type': 'text_edit',
66
+ 'filename': 'demo_workspace',
67
+ 'content': self.demo_text_content[-500:] if self.demo_text_content else "",
68
+ 'timestamp': time.time()
69
+ }] if self.demo_text_content else []
70
+ else:
71
+ recent_activity = self.file_monitor.get_recent_activity(10)
72
+
73
+ result = self.focus_agent.analyze(active_task, recent_activity)
74
+
75
+ verdict = result.get("verdict", "Unknown")
76
+ message = result.get("message", "No message")
77
+
78
+ # Handle distraction escalation logic
79
+ if verdict == "On Track":
80
+ # Reset counter when back on track
81
+ self.consecutive_distracted = 0
82
+ elif verdict == "Distracted":
83
+ # Increment distraction counter
84
+ self.consecutive_distracted += 1
85
+
86
+ # Log to metrics if we have an active task
87
+ if active_task:
88
+ self.metrics_tracker.log_focus_check(
89
+ active_task['id'],
90
+ active_task['title'],
91
+ verdict,
92
+ message
93
+ )
94
+
95
+ # Determine emoji
96
+ emoji = "✅" if verdict == "On Track" else "⚠️" if verdict == "Distracted" else "💤"
97
+
98
+ log_entry = f"{emoji} [{verdict}] {message}"
99
+ self.activity_log.append(log_entry)
100
+
101
+ # Keep only last 20 entries
102
+ if len(self.activity_log) > 20:
103
+ self.activity_log.pop(0)
104
+
105
+ # Generate voice feedback (optional, graceful if unavailable)
106
+ voice_audio = None
107
+ if self.voice_generator:
108
+ try:
109
+ voice_audio = self.voice_generator.get_focus_message_audio(verdict, message)
110
+ except Exception as e:
111
+ print(f"Voice generation error: {e}")
112
+
113
+ # Trigger browser alert and audio for distracted/idle status with escalation
114
+ alert_js = None
115
+ if verdict in ["Distracted", "Idle"]:
116
+ safe_message = json.dumps(message)
117
+
118
+ # Escalation logic:
119
+ # 1st distraction: play sound only
120
+ # 2nd distraction: play sound again
121
+ # 3rd+ distraction: add voice feedback
122
+ # play_voice = self.consecutive_distracted >= 3 # Logic handled by caller or voice_audio presence?
123
+ # Actually voice_audio is generated regardless, but maybe we only play it if distracted?
124
+ # The original code generated it always if available.
125
+
126
+ alert_js = f"""
127
+ () => {{
128
+ const audio = document.getElementById('nudge-alert');
129
+ if (audio) {{
130
+ audio.currentTime = 0;
131
+ audio.play().catch(e => console.log('Audio play failed:', e));
132
+ }}
133
+ if (Notification.permission === "granted") {{
134
+ new Notification("FocusFlow Alert 🦉", {{
135
+ body: {safe_message},
136
+ icon: "https://em-content.zobj.net/thumbs/160/apple/354/owl_1f989.png"
137
+ }});
138
+ }}
139
+ return null;
140
+ }}
141
+ """
142
+
143
+ return "\n".join(self.activity_log), alert_js, voice_audio
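`run_check` embeds the AI's free-form message into generated JavaScript via `json.dumps`, which doubles as a JS string-literal escaper. A small standalone sketch of that technique (the `notification_js` helper is our illustrative name, not part of the codebase):

```python
import json


def notification_js(message: str) -> str:
    # json.dumps produces a valid JavaScript string literal, escaping quotes
    # and newlines, so arbitrary AI text embeds safely in generated JS.
    safe_message = json.dumps(message)
    return f'new Notification("FocusFlow Alert", {{ body: {safe_message} }});'


# Quotes and newlines in the message are escaped, not injected
js = notification_js('He said "hoot"\nand left')
```

Without this escaping, a message containing a quote character could break out of the string literal and corrupt (or inject into) the emitted JavaScript.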
core/pomodoro.py ADDED
@@ -0,0 +1,70 @@
+ """
2
+ Pomodoro Timer Logic for FocusFlow.
3
+ """
4
+ from typing import Dict, Tuple
5
+
6
+ class PomodoroTimer:
7
+ def __init__(self):
8
+ self.state = {
9
+ "minutes": 25,
10
+ "seconds": 0,
11
+ "is_running": False,
12
+ "is_work_time": True,
13
+ "total_seconds": 25 * 60
14
+ }
15
+
16
+ def format_time(self, total_seconds: int) -> str:
17
+ """Format seconds to MM:SS format."""
18
+ mins = total_seconds // 60
19
+ secs = total_seconds % 60
20
+ return f"{mins:02d}:{secs:02d}"
21
+
22
+ def get_display(self) -> str:
23
+ """Get current Pomodoro display string."""
24
+ time_str = self.format_time(self.state["total_seconds"])
25
+ status_str = "Work Time ⏰" if self.state["is_work_time"] else "Break Time ☕"
26
+ running_indicator = " (Running)" if self.state["is_running"] else ""
27
+ return f"**{time_str}** {status_str}{running_indicator}"
28
+
29
+ def start(self) -> str:
30
+ """Start the Pomodoro timer."""
31
+ self.state["is_running"] = True
32
+ return f"▶️ Timer started! {self.get_display()}"
33
+
34
+ def pause(self) -> str:
35
+ """Pause the Pomodoro timer."""
36
+ self.state["is_running"] = False
37
+ return f"⏸️ Timer paused. {self.get_display()}"
38
+
39
+ def reset(self) -> str:
40
+ """Reset the Pomodoro timer."""
41
+ self.state["is_running"] = False
42
+ self.state["total_seconds"] = 25 * 60
43
+ self.state["minutes"] = 25
44
+ self.state["seconds"] = 0
45
+ self.state["is_work_time"] = True
46
+ return f"🔄 Timer reset. {self.get_display()}"
47
+
48
+ def tick(self) -> Tuple[str, bool]:
49
+ """
50
+ Tick the Pomodoro timer.
51
+ Returns:
52
+ Tuple[display_string, should_play_sound]
53
+ """
54
+ if not self.state["is_running"]:
55
+ return self.get_display(), False
56
+
57
+ # Decrement timer
58
+ self.state["total_seconds"] -= 1
59
+
60
+ should_play_sound = False
61
+
62
+ # Check if session complete
63
+ if self.state["total_seconds"] <= 0:
64
+ # Switch between work and break
65
+ self.state["is_work_time"] = not self.state["is_work_time"]
66
+ self.state["total_seconds"] = (25 * 60) if self.state["is_work_time"] else (5 * 60)
67
+ self.state["is_running"] = False # Auto-pause to get attention
68
+ should_play_sound = True
69
+
70
+ return self.get_display(), should_play_sound
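The work/break switch inside `tick()` can be sketched as a standalone function over the same state dict, assuming the same 25-minute work / 5-minute break durations (this is a simplified mirror for illustration, not the class itself):

```python
WORK_SECONDS = 25 * 60
BREAK_SECONDS = 5 * 60


def tick(state):
    """Advance the timer by one second; flip work/break at zero."""
    if not state["is_running"]:
        return state, False
    state["total_seconds"] -= 1
    if state["total_seconds"] <= 0:
        state["is_work_time"] = not state["is_work_time"]
        state["total_seconds"] = WORK_SECONDS if state["is_work_time"] else BREAK_SECONDS
        state["is_running"] = False  # auto-pause so the chime gets attention
        return state, True
    return state, False


state = {"total_seconds": 2, "is_running": True, "is_work_time": True}
state, chime = tick(state)  # 2 -> 1, no chime
state, chime = tick(state)  # 1 -> 0: switch to break, chime fires
```

Note the auto-pause on phase change: the caller must call `start()` (set `is_running` back to `True`) to begin the break, which is what makes the chime an attention checkpoint rather than a background event.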
linear_client.py ADDED
@@ -0,0 +1,141 @@
+ """
2
+ Linear Client for FocusFlow.
3
+ Handles integration with Linear API (or MCP server) for task synchronization.
4
+ Falls back to mock data if no API key is provided.
5
+ """
6
+ import os
7
+ import json
8
+ import requests
9
+ from typing import List, Dict, Optional
10
+ from datetime import datetime
11
+
12
+ class LinearClient:
13
+ """Client for interacting with Linear."""
14
+
15
+ def __init__(self, api_key: Optional[str] = None):
16
+ """Initialize Linear client."""
17
+ self.api_key = api_key or os.getenv("LINEAR_API_KEY")
18
+ self.api_url = "https://api.linear.app/graphql"
19
+ self.is_active = bool(self.api_key)
20
+
21
+ if not self.is_active:
22
+ print("ℹ️ Linear: No API key found. Using mock data.")
23
+
24
+ def _headers(self) -> Dict[str, str]:
25
+ """Get request headers."""
26
+ return {
27
+ "Content-Type": "application/json",
28
+ "Authorization": self.api_key
29
+ }
30
+
31
+ def _query(self, query: str, variables: Dict = None) -> Dict:
32
+ """Execute GraphQL query."""
33
+ if not self.is_active:
34
+ return {}
35
+
36
+ try:
37
+ response = requests.post(
38
+ self.api_url,
39
+ headers=self._headers(),
40
+ json={"query": query, "variables": variables or {}}
41
+ )
42
+ response.raise_for_status()
43
+ return response.json()
44
+ except Exception as e:
45
+ print(f"⚠️ Linear API error: {e}")
46
+ return {}
47
+
48
+ def get_user_projects(self) -> List[Dict]:
49
+ """Get projects for the current user."""
50
+ if not self.is_active:
51
+ return [
52
+ {"id": "mock-1", "name": "Website Redesign", "description": "Overhaul the company website"},
53
+             {"id": "mock-2", "name": "Mobile App", "description": "iOS and Android app development"},
+             {"id": "mock-3", "name": "API Migration", "description": "Migrate legacy API to GraphQL"}
+         ]
+
+         query = """
+         query {
+             viewer {
+                 projects(first: 10) {
+                     nodes {
+                         id
+                         name
+                         description
+                     }
+                 }
+             }
+         }
+         """
+         result = self._query(query)
+         try:
+             return result.get("data", {}).get("viewer", {}).get("projects", {}).get("nodes", [])
+         except Exception:
+             return []
+
+     def get_project_tasks(self, project_id: str) -> List[Dict]:
+         """Get tasks for a specific project."""
+         if not self.is_active:
+             # Return mock tasks based on project ID
+             if project_id == "mock-1":
+                 return [
+                     {"id": "L-101", "title": "Design Homepage", "description": "Create Figma mockups", "estimate": 60},
+                     {"id": "L-102", "title": "Implement Header", "description": "React component for header", "estimate": 30},
+                     {"id": "L-103", "title": "Fix CSS Bugs", "description": "Fix mobile layout issues", "estimate": 45}
+                 ]
+             return [
+                 {"id": "L-201", "title": "Setup Repo", "description": "Initialize git repository", "estimate": 15},
+                 {"id": "L-202", "title": "Basic Auth", "description": "Implement login flow", "estimate": 60}
+             ]
+
+         query = """
+         query($projectId: ID!) {
+             project(id: $projectId) {
+                 issues(first: 20, filter: { state: { name: { neq: "Done" } } }) {
+                     nodes {
+                         id
+                         title
+                         description
+                         estimate
+                     }
+                 }
+             }
+         }
+         """
+         result = self._query(query, {"projectId": project_id})
+         try:
+             return result.get("data", {}).get("project", {}).get("issues", {}).get("nodes", [])
+         except Exception:
+             return []
+
+     def create_task(self, title: str, description: str = "", team_id: Optional[str] = None) -> Optional[str]:
+         """Create a new task (issue) in Linear."""
+         if not self.is_active:
+             print(f"ℹ️ Linear (Mock): Created task '{title}'")
+             return "mock-new-id"
+
+         # Issue creation requires a team_id; as a simplification,
+         # fall back to the viewer's first team when none is given.
+         if not team_id:
+             team_query = """query { viewer { teams(first: 1) { nodes { id } } } }"""
+             team_res = self._query(team_query)
+             try:
+                 team_id = team_res["data"]["viewer"]["teams"]["nodes"][0]["id"]
+             except (KeyError, IndexError, TypeError):
+                 return None
+
+         mutation = """
+         mutation($title: String!, $description: String, $teamId: String!) {
+             issueCreate(input: { title: $title, description: $description, teamId: $teamId }) {
+                 issue {
+                     id
+                 }
+             }
+         }
+         """
+         result = self._query(mutation, {"title": title, "description": description, "teamId": team_id})
+         try:
+             return result["data"]["issueCreate"]["issue"]["id"]
+         except (KeyError, TypeError):
+             return None
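The chained `.get(..., {})` lookups above are the client's defensive way of unpacking GraphQL responses, which may carry an `errors` key instead of `data`. The same idea can be factored into a small helper; `dig` is a hypothetical name, not part of this codebase:

```python
def dig(data, *keys, default=None):
    """Walk a chain of dict keys safely; return `default` as soon as
    any level is missing or is not a dict (e.g. an error response)."""
    current = data
    for key in keys:
        if not isinstance(current, dict):
            return default
        current = current.get(key)
        if current is None:
            return default
    return current

# A successful response yields the node list...
ok = {"data": {"viewer": {"projects": {"nodes": [{"id": "p1"}]}}}}
print(dig(ok, "data", "viewer", "projects", "nodes", default=[]))
# ...while an error response falls back cleanly:
print(dig({"errors": ["rate limited"]}, "data", "viewer", "projects", "nodes", default=[]))
```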
mcp_tools.py ADDED
@@ -0,0 +1,261 @@
+ """
+ MCP (Model Context Protocol) tools and resources for FocusFlow.
+ Enables LLM assistants like Claude Desktop to interact with FocusFlow.
+ """
+ import gradio as gr
+ from typing import Optional
+ from storage import TaskManager
+ from metrics import MetricsTracker
+
+ # Initialize shared instances for MCP tools.
+ # Note: these are separate from app.py's instances but use the same database.
+ task_manager = TaskManager()
+ metrics_tracker = MetricsTracker()
+
+
+ @gr.mcp.tool()
+ def add_task(title: str, description: str = "", duration: int = 30) -> str:
+     """
+     Create a new task in FocusFlow.
+
+     Args:
+         title: Task title (required)
+         description: Detailed task description (optional)
+         duration: Estimated duration in minutes (default: 30)
+
+     Returns:
+         Success message with task ID
+     """
+     try:
+         duration_str = f"{duration} min"
+         task_id = task_manager.add_task(title, description, duration_str, status="Todo")
+         return f"✅ Task created successfully! ID: {task_id}, Title: '{title}', Duration: {duration} min"
+     except Exception as e:
+         return f"❌ Error creating task: {str(e)}"
+
+
+ @gr.mcp.tool()
+ def get_current_task() -> str:
+     """
+     Get the currently active task (marked as 'In Progress').
+
+     Returns:
+         Details of the active task, or a message if no task is active
+     """
+     try:
+         active_task = task_manager.get_active_task()
+         if not active_task:
+             return "ℹ️ No active task. Use start_task(task_id) to begin working on a task."
+
+         return f"""📋 Current Active Task:
+ - ID: {active_task['id']}
+ - Title: {active_task['title']}
+ - Description: {active_task.get('description', 'No description')}
+ - Duration: {active_task.get('estimated_duration', 'Not specified')}
+ - Status: {active_task['status']}"""
+     except Exception as e:
+         return f"❌ Error getting current task: {str(e)}"
+
+
+ @gr.mcp.tool()
+ def start_task(task_id: int) -> str:
+     """
+     Mark a task as 'In Progress' and set it as the active task.
+     Only one task can be active at a time.
+
+     Args:
+         task_id: ID of the task to start
+
+     Returns:
+         Success or error message
+     """
+     try:
+         # Check that the task exists first
+         task = task_manager.get_task(task_id)
+         if not task:
+             return f"❌ Task {task_id} not found. Use get_all_tasks() to see available tasks."
+
+         success = task_manager.set_active_task(task_id)
+         if success:
+             return f"✅ Task {task_id} started: '{task['title']}'. FocusFlow is now monitoring your progress!"
+         else:
+             return f"❌ Failed to start task {task_id}. Task is already marked as Done."
+     except Exception as e:
+         return f"❌ Error starting task: {str(e)}"
+
+
+ @gr.mcp.tool()
+ def mark_task_done(task_id: int) -> str:
+     """
+     Mark a task as completed ('Done').
+
+     Args:
+         task_id: ID of the task to complete
+
+     Returns:
+         Success or error message
+     """
+     try:
+         # Check that the task exists first
+         task = task_manager.get_task(task_id)
+         if not task:
+             return f"❌ Task {task_id} not found. Use get_all_tasks() to see available tasks."
+
+         task_manager.update_task(task_id, status="Done")
+         return f"🎉 Task {task_id} completed: '{task['title']}'! Great work!"
+     except Exception as e:
+         return f"❌ Error marking task done: {str(e)}"
+
+
+ @gr.mcp.tool()
+ def get_all_tasks() -> str:
+     """
+     Get a list of all tasks with their current status.
+
+     Returns:
+         Formatted list of all tasks
+     """
+     try:
+         tasks = task_manager.get_all_tasks()
+         if not tasks:
+             return "📝 No tasks yet. Use add_task() to create your first task!"
+
+         result = f"📋 All Tasks ({len(tasks)} total):\n\n"
+         for task in tasks:
+             status_emoji = "✅" if task['status'] == "Done" else "🔄" if task['status'] == "In Progress" else "⏳"
+             result += f"{status_emoji} [{task['id']}] {task['title']}\n"
+             if task.get('description'):
+                 result += f"    Description: {task['description']}\n"
+             result += f"    Status: {task['status']} | Duration: {task.get('estimated_duration', 'N/A')}\n\n"
+
+         return result.strip()
+     except Exception as e:
+         return f"❌ Error getting tasks: {str(e)}"
+
+
+ @gr.mcp.tool()
+ def delete_task(task_id: int) -> str:
+     """
+     Delete a task permanently.
+
+     Args:
+         task_id: ID of the task to delete
+
+     Returns:
+         Success or error message
+     """
+     try:
+         task = task_manager.get_task(task_id)
+         if not task:
+             return f"❌ Task {task_id} not found."
+
+         title = task['title']
+         task_manager.delete_task(task_id)
+         return f"🗑️ Task {task_id} deleted: '{title}'"
+     except Exception as e:
+         return f"❌ Error deleting task: {str(e)}"
+
+
+ @gr.mcp.tool()
+ def update_task(task_id: int, title: Optional[str] = None, description: Optional[str] = None,
+                 status: Optional[str] = None, duration: Optional[int] = None) -> str:
+     """
+     Update an existing task.
+
+     Args:
+         task_id: ID of the task to update
+         title: New title (optional)
+         description: New description (optional)
+         status: New status (Todo, In Progress, Done) (optional)
+         duration: New estimated duration in minutes (optional)
+
+     Returns:
+         Success or error message
+     """
+     try:
+         task = task_manager.get_task(task_id)
+         if not task:
+             return f"❌ Task {task_id} not found."
+
+         updates = {}
+         if title is not None:
+             updates['title'] = title
+         if description is not None:
+             updates['description'] = description
+         if status is not None:
+             updates['status'] = status
+         if duration is not None:
+             updates['estimated_duration'] = f"{duration} min"
+
+         if not updates:
+             return "ℹ️ No changes provided."
+
+         task_manager.update_task(task_id, **updates)
+         return f"✅ Task {task_id} updated successfully!"
+     except Exception as e:
+         return f"❌ Error updating task: {str(e)}"
+
+
+ @gr.mcp.tool()
+ def get_productivity_stats() -> str:
+     """
+     Get productivity statistics and insights, including focus metrics.
+
+     Returns:
+         Summary of task completion, progress, and focus scores
+     """
+     try:
+         # Task statistics
+         tasks = task_manager.get_all_tasks()
+         if not tasks:
+             return "📊 No tasks to analyze yet. Create some tasks to see your productivity stats!"
+
+         total = len(tasks)
+         completed = sum(1 for t in tasks if t['status'] == 'Done')
+         in_progress = sum(1 for t in tasks if t['status'] == 'In Progress')
+         todo = sum(1 for t in tasks if t['status'] == 'Todo')
+
+         completion_rate = (completed / total * 100) if total > 0 else 0
+
+         # Focus metrics
+         today_stats = metrics_tracker.get_today_stats()
+         current_streak = metrics_tracker.get_current_streak()
+
+         result = f"""📊 Productivity Statistics:
+
+ 📋 Task Progress:
+ ✅ Completed: {completed}/{total} tasks ({completion_rate:.1f}%)
+ 🔄 In Progress: {in_progress} task(s)
+ ⏳ To Do: {todo} tasks
+
+ 🎯 Focus Metrics (Today):
+ ⭐ Focus Score: {today_stats['focus_score']}/100
+ 🔥 Current Streak: {current_streak} consecutive "On Track" checks
+ 📊 Total Checks: {today_stats['total_checks']}
+   • On Track: {today_stats['on_track']}
+   • Distracted: {today_stats['distracted']}
+   • Idle: {today_stats['idle']}
+
+ Keep up the good work! 🎯"""
+         return result
+     except Exception as e:
+         return f"❌ Error getting stats: {str(e)}"
+
+
+ # MCP Resources
+ @gr.mcp.resource("focusflow://tasks/all")
+ def get_all_tasks_resource() -> str:
+     """Expose all tasks as an MCP resource."""
+     return get_all_tasks()
+
+
+ @gr.mcp.resource("focusflow://tasks/active")
+ def get_active_task_resource() -> str:
+     """Expose the active task as an MCP resource."""
+     return get_current_task()
+
+
+ @gr.mcp.resource("focusflow://stats")
+ def get_stats_resource() -> str:
+     """Expose productivity statistics as an MCP resource."""
+     return get_productivity_stats()
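The task-progress arithmetic in `get_productivity_stats` can be exercised on plain dicts, independent of the database. The sample task list below is illustrative, not repo data:

```python
# Sketch of the aggregation step in get_productivity_stats(), on sample data.
tasks = [
    {"id": 1, "status": "Done"},
    {"id": 2, "status": "In Progress"},
    {"id": 3, "status": "Todo"},
    {"id": 4, "status": "Done"},
]
total = len(tasks)
completed = sum(1 for t in tasks if t["status"] == "Done")
in_progress = sum(1 for t in tasks if t["status"] == "In Progress")
todo = sum(1 for t in tasks if t["status"] == "Todo")
# Guard against division by zero on an empty task list
completion_rate = (completed / total * 100) if total > 0 else 0
print(f"✅ Completed: {completed}/{total} tasks ({completion_rate:.1f}%)")
```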
metrics.py ADDED
@@ -0,0 +1,247 @@
+ """
+ Productivity metrics tracking for FocusFlow.
+ Tracks focus scores, completion rates, and streaks.
+ """
+ import sqlite3
+ from datetime import datetime, timedelta
+ from typing import Dict, List
+
+
+ class MetricsTracker:
+     """Tracks productivity metrics and focus history."""
+
+     def __init__(self, db_path: str = "focusflow.db"):
+         """Initialize metrics tracker with SQLite database."""
+         self.db_path = db_path
+         self._init_db()
+
+     def _init_db(self):
+         """Create metrics tables if they don't exist."""
+         conn = sqlite3.connect(self.db_path)
+         cursor = conn.cursor()
+
+         # Focus check history
+         cursor.execute("""
+             CREATE TABLE IF NOT EXISTS focus_history (
+                 id INTEGER PRIMARY KEY AUTOINCREMENT,
+                 task_id INTEGER,
+                 task_title TEXT,
+                 verdict TEXT,
+                 message TEXT,
+                 timestamp TIMESTAMP DEFAULT CURRENT_TIMESTAMP
+             )
+         """)
+
+         # Daily streaks
+         cursor.execute("""
+             CREATE TABLE IF NOT EXISTS streaks (
+                 id INTEGER PRIMARY KEY AUTOINCREMENT,
+                 date DATE UNIQUE,
+                 on_track_count INTEGER DEFAULT 0,
+                 distracted_count INTEGER DEFAULT 0,
+                 idle_count INTEGER DEFAULT 0,
+                 max_consecutive_on_track INTEGER DEFAULT 0,
+                 focus_score REAL DEFAULT 0
+             )
+         """)
+
+         conn.commit()
+         conn.close()
+
+     def log_focus_check(self, task_id: int, task_title: str, verdict: str, message: str):
+         """Log a focus check result."""
+         conn = sqlite3.connect(self.db_path)
+         cursor = conn.cursor()
+
+         cursor.execute("""
+             INSERT INTO focus_history (task_id, task_title, verdict, message, timestamp)
+             VALUES (?, ?, ?, ?, ?)
+         """, (task_id, task_title, verdict, message, datetime.now()))
+
+         # Update today's streak data
+         today = datetime.now().date()
+
+         # Get or create today's streak record
+         cursor.execute("SELECT * FROM streaks WHERE date = ?", (today,))
+         row = cursor.fetchone()
+
+         if row:
+             # Update existing record
+             on_track = row[2] + (1 if verdict == "On Track" else 0)
+             distracted = row[3] + (1 if verdict == "Distracted" else 0)
+             idle = row[4] + (1 if verdict == "Idle" else 0)
+
+             # Count the trailing run of consecutive "On Track" checks (newest first)
+             cursor.execute("""
+                 SELECT verdict FROM focus_history
+                 WHERE DATE(timestamp) = ?
+                 ORDER BY timestamp DESC LIMIT 20
+             """, (today,))
+             consecutive = 0
+             for (v,) in cursor.fetchall():
+                 if v == "On Track":
+                     consecutive += 1
+                 else:
+                     break
+
+             max_consecutive = max(row[5], consecutive)
+
+             # Calculate focus score (0-100)
+             total_checks = on_track + distracted + idle
+             focus_score = (on_track / total_checks * 100) if total_checks > 0 else 0
+
+             cursor.execute("""
+                 UPDATE streaks
+                 SET on_track_count = ?, distracted_count = ?, idle_count = ?,
+                     max_consecutive_on_track = ?, focus_score = ?
+                 WHERE date = ?
+             """, (on_track, distracted, idle, max_consecutive, focus_score, today))
+         else:
+             # Create new record
+             on_track = 1 if verdict == "On Track" else 0
+             distracted = 1 if verdict == "Distracted" else 0
+             idle = 1 if verdict == "Idle" else 0
+             focus_score = 100.0 if on_track else 0.0
+
+             cursor.execute("""
+                 INSERT INTO streaks (date, on_track_count, distracted_count, idle_count,
+                                      max_consecutive_on_track, focus_score)
+                 VALUES (?, ?, ?, ?, ?, ?)
+             """, (today, on_track, distracted, idle, on_track, focus_score))
+
+         conn.commit()
+         conn.close()
+
+     def get_today_stats(self) -> Dict:
+         """Get today's productivity statistics."""
+         conn = sqlite3.connect(self.db_path)
+         cursor = conn.cursor()
+
+         today = datetime.now().date()
+         cursor.execute("SELECT * FROM streaks WHERE date = ?", (today,))
+         row = cursor.fetchone()
+
+         conn.close()
+
+         if not row:
+             return {
+                 "on_track": 0,
+                 "distracted": 0,
+                 "idle": 0,
+                 "max_streak": 0,
+                 "focus_score": 0,
+                 "total_checks": 0
+             }
+
+         return {
+             "on_track": row[2],
+             "distracted": row[3],
+             "idle": row[4],
+             "max_streak": row[5],
+             "focus_score": round(row[6], 1),
+             "total_checks": row[2] + row[3] + row[4]
+         }
+
+     def get_weekly_stats(self) -> List[Dict]:
+         """Get the last 7 days of statistics."""
+         conn = sqlite3.connect(self.db_path)
+         conn.row_factory = sqlite3.Row
+         cursor = conn.cursor()
+
+         seven_days_ago = datetime.now().date() - timedelta(days=6)
+
+         cursor.execute("""
+             SELECT date, on_track_count, distracted_count, idle_count, focus_score
+             FROM streaks
+             WHERE date >= ?
+             ORDER BY date DESC
+         """, (seven_days_ago,))
+
+         rows = cursor.fetchall()
+         conn.close()
+
+         return [dict(row) for row in rows]
+
+     def get_focus_history(self, limit: int = 20) -> List[Dict]:
+         """Get recent focus check history."""
+         conn = sqlite3.connect(self.db_path)
+         conn.row_factory = sqlite3.Row
+         cursor = conn.cursor()
+
+         cursor.execute("""
+             SELECT task_title, verdict, message, timestamp
+             FROM focus_history
+             ORDER BY timestamp DESC
+             LIMIT ?
+         """, (limit,))
+
+         rows = cursor.fetchall()
+         conn.close()
+
+         return [dict(row) for row in rows]
+
+     def get_current_streak(self) -> int:
+         """Get the current consecutive 'On Track' streak."""
+         conn = sqlite3.connect(self.db_path)
+         cursor = conn.cursor()
+
+         today = datetime.now().date()
+         cursor.execute("""
+             SELECT verdict FROM focus_history
+             WHERE DATE(timestamp) = ?
+             ORDER BY timestamp DESC LIMIT 50
+         """, (today,))
+
+         verdicts = [r[0] for r in cursor.fetchall()]
+         conn.close()
+
+         streak = 0
+         for verdict in verdicts:
+             if verdict == "On Track":
+                 streak += 1
+             else:
+                 break
+
+         return streak
+
+     def get_chart_data(self) -> Dict:
+         """Get data formatted for charts."""
+         weekly = self.get_weekly_stats()
+
+         # Prepare data for charts
+         dates = []
+         focus_scores = []
+         on_track_counts = []
+         distracted_counts = []
+         idle_counts = []
+
+         # Fill in missing days with zeros
+         for i in range(7):
+             date = datetime.now().date() - timedelta(days=6 - i)
+             dates.append(date.strftime("%m/%d"))
+
+             # Find matching data
+             day_data = next((d for d in weekly if str(d['date']) == str(date)), None)
+
+             if day_data:
+                 focus_scores.append(day_data['focus_score'])
+                 on_track_counts.append(day_data['on_track_count'])
+                 distracted_counts.append(day_data['distracted_count'])
+                 idle_counts.append(day_data['idle_count'])
+             else:
+                 focus_scores.append(0)
+                 on_track_counts.append(0)
+                 distracted_counts.append(0)
+                 idle_counts.append(0)
+
+         return {
+             "dates": dates,
+             "focus_scores": focus_scores,
+             "on_track": on_track_counts,
+             "distracted": distracted_counts,
+             "idle": idle_counts
+         }
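The two core formulas in `metrics.py` — focus score as the percentage of "On Track" checks, and the streak as the trailing run of "On Track" verdicts — can be checked on an in-memory verdict list without touching SQLite. `summarize` is a hypothetical helper written for illustration:

```python
from collections import Counter

def summarize(verdicts):
    """Mirror MetricsTracker's daily math on a verdict list (oldest first):
    focus score = % of 'On Track' checks, streak = trailing 'On Track' run."""
    counts = Counter(verdicts)
    total = sum(counts.values())
    focus_score = counts["On Track"] / total * 100 if total else 0
    streak = 0
    for v in reversed(verdicts):  # walk backwards from the newest check
        if v != "On Track":
            break
        streak += 1
    return round(focus_score, 1), streak

print(summarize(["On Track", "Distracted", "On Track", "On Track"]))  # (75.0, 2)
```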
monitor.py ADDED
@@ -0,0 +1,171 @@
+ """
+ File monitoring using watchdog with content-aware diff detection.
+ """
+ import os
+ import time
+ from pathlib import Path
+ from typing import List, Dict, Optional, Callable
+ from watchdog.observers import Observer
+ from watchdog.events import FileSystemEventHandler, FileSystemEvent
+ from datetime import datetime
+
+
+ class ContentAwareHandler(FileSystemEventHandler):
+     """Handler that tracks file changes with content awareness."""
+
+     IGNORED_PATTERNS = [
+         '.git', '__pycache__', '.env', 'node_modules',
+         '.venv', 'venv', '.idea', '.vscode',
+         '.pyc', '.pyo', '.pyd', '.so', '.dll', '.dylib'
+     ]
+
+     TEXT_EXTENSIONS = [
+         '.py', '.js', '.jsx', '.ts', '.tsx', '.html', '.css',
+         '.json', '.md', '.txt', '.yaml', '.yml', '.toml',
+         '.c', '.cpp', '.h', '.java', '.go', '.rs', '.rb'
+     ]
+
+     def __init__(self, callback: Optional[Callable] = None):
+         """Initialize the handler with an optional callback."""
+         super().__init__()
+         self.events: List[Dict] = []
+         self.callback = callback
+         self.last_event_time = {}
+         self.debounce_seconds = 1.0
+
+     def _should_ignore(self, path: str) -> bool:
+         """Check if a path should be ignored."""
+         path_parts = Path(path).parts
+         for pattern in self.IGNORED_PATTERNS:
+             if pattern in path_parts or path.endswith(pattern):
+                 return True
+         return False
+
+     def _is_text_file(self, path: str) -> bool:
+         """Check if the file is a text file we should read."""
+         return any(path.endswith(ext) for ext in self.TEXT_EXTENSIONS)
+
+     def _read_file_content(self, path: str, max_chars: int = 500) -> str:
+         """Read the last max_chars characters of a text file."""
+         try:
+             if not os.path.exists(path) or not os.path.isfile(path):
+                 return ""
+
+             if not self._is_text_file(path):
+                 return "[Binary file]"
+
+             with open(path, 'r', encoding='utf-8', errors='ignore') as f:
+                 content = f.read()
+             if len(content) > max_chars:
+                 return f"...{content[-max_chars:]}"
+             return content
+         except Exception as e:
+             return f"[Error reading file: {str(e)}]"
+
+     def _debounce_event(self, path: str) -> bool:
+         """Check if the event should be debounced (too soon after the last event)."""
+         now = time.time()
+         last_time = self.last_event_time.get(path, 0)
+
+         if now - last_time < self.debounce_seconds:
+             return True
+
+         self.last_event_time[path] = now
+         return False
+
+     def _create_event(self, event_type: str, path: str):
+         """Create and store an event."""
+         if self._should_ignore(path):
+             return
+
+         if self._debounce_event(path):
+             return
+
+         event_data = {
+             'type': event_type,
+             'path': path,
+             'filename': os.path.basename(path),
+             'timestamp': datetime.now().isoformat(),
+             'content': self._read_file_content(path) if event_type == 'modified' else ""
+         }
+
+         self.events.append(event_data)
+
+         # Keep only the last 50 events
+         if len(self.events) > 50:
+             self.events = self.events[-50:]
+
+         if self.callback:
+             self.callback(event_data)
+
+     def on_modified(self, event: FileSystemEvent):
+         """Handle file modification."""
+         if not event.is_directory:
+             self._create_event('modified', str(event.src_path))
+
+     def on_created(self, event: FileSystemEvent):
+         """Handle file creation."""
+         if not event.is_directory:
+             self._create_event('created', str(event.src_path))
+
+     def on_deleted(self, event: FileSystemEvent):
+         """Handle file deletion."""
+         if not event.is_directory:
+             self._create_event('deleted', str(event.src_path))
+
+     def get_recent_events(self, limit: int = 10) -> List[Dict]:
+         """Get the most recent events."""
+         return self.events[-limit:]
+
+     def clear_events(self):
+         """Clear all stored events."""
+         self.events = []
+
+
+ class FileMonitor:
+     """File monitor using watchdog."""
+
+     def __init__(self):
+         """Initialize the file monitor."""
+         self.observer = None
+         self.handler = None
+         self.watching_path = None
+
+     def start(self, path: str, callback: Optional[Callable] = None):
+         """Start monitoring a directory."""
+         if self.observer and self.observer.is_alive():
+             self.stop()
+
+         if not os.path.exists(path):
+             raise ValueError(f"Path does not exist: {path}")
+
+         self.watching_path = path
+         self.handler = ContentAwareHandler(callback)
+         self.observer = Observer()
+         self.observer.schedule(self.handler, path, recursive=True)
+         self.observer.start()
+
+     def stop(self):
+         """Stop monitoring."""
+         if self.observer and self.observer.is_alive():
+             self.observer.stop()
+             self.observer.join(timeout=2)
+         self.observer = None
+         self.handler = None
+         self.watching_path = None
+
+     def get_recent_activity(self, limit: int = 10) -> List[Dict]:
+         """Get recent file activity."""
+         if self.handler:
+             return self.handler.get_recent_events(limit)
+         return []
+
+     def clear_activity(self):
+         """Clear the activity log."""
+         if self.handler:
+             self.handler.clear_events()
+
+     def is_running(self) -> bool:
+         """Check if the monitor is running."""
+         return self.observer is not None and self.observer.is_alive()
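The per-path debounce in `_debounce_event` is the piece of `monitor.py` most worth testing in isolation, since watchdog can deliver several events for a single save. Below is a standalone sketch of the same logic; the `Debouncer` class and the injectable `clock` parameter are illustrative additions (the handler itself calls `time.time()` directly):

```python
import time

class Debouncer:
    """Standalone version of ContentAwareHandler's per-path debounce:
    drop events that arrive within `window` seconds of the last kept one."""
    def __init__(self, window=1.0, clock=time.monotonic):
        self.window = window
        self.clock = clock  # injectable for deterministic testing
        self._last = {}

    def should_drop(self, path):
        now = self.clock()
        if now - self._last.get(path, float("-inf")) < self.window:
            return True
        self._last[path] = now  # keep this event, remember its time
        return False

# Deterministic demo using a fake clock instead of wall time:
t = [0.0]
d = Debouncer(window=1.0, clock=lambda: t[0])
print(d.should_drop("a.py"))  # False (first event is kept)
t[0] = 0.5
print(d.should_drop("a.py"))  # True (within the 1s window)
t[0] = 1.6
print(d.should_drop("a.py"))  # False (window has elapsed)
```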
requirements.txt ADDED
@@ -0,0 +1,13 @@
+ gradio>=5.0.0
+ gradio[mcp]
+ openai>=1.0.0
+ anthropic>=0.34.0
+ google-generativeai>=0.8.0
+ watchdog>=3.0.0
+ python-dotenv>=1.0.0
+ # vllm>=0.6.0
+ elevenlabs>=1.0.0
+ linear>=0.1.0
+ requests>=2.31.0
+ pytest
+ sseclient-py
storage.py ADDED
@@ -0,0 +1,198 @@
+ """
+ Task Manager with SQLite backend for CRUD operations.
+ """
+ import sqlite3
+ from typing import List, Dict, Optional
+
+
+ class TaskManager:
+     """Manages tasks with SQLite persistence."""
+
+     # Strict status enum
+     VALID_STATUSES = {"Todo", "In Progress", "Done"}
+
+     def __init__(self, db_path: str = "focusflow.db"):
+         """Initialize the task manager with SQLite database."""
+         self.db_path = db_path
+         self._init_db()
+
+     def _init_db(self):
+         """Create the tasks table if it doesn't exist."""
+         conn = sqlite3.connect(self.db_path)
+         cursor = conn.cursor()
+         cursor.execute("""
+             CREATE TABLE IF NOT EXISTS tasks (
+                 id INTEGER PRIMARY KEY AUTOINCREMENT,
+                 title TEXT NOT NULL,
+                 description TEXT,
+                 status TEXT DEFAULT 'Todo',
+                 estimated_duration TEXT,
+                 position INTEGER,
+                 created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
+                 updated_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
+             )
+         """)
+         conn.commit()
+         conn.close()
+
+     def add_task(self, title: str, description: str = "",
+                  estimated_duration: str = "", status: str = "Todo") -> int:
+         """Add a new task and return its ID."""
+         # Validate status
+         if status not in self.VALID_STATUSES:
+             status = "Todo"
+
+         conn = sqlite3.connect(self.db_path)
+         cursor = conn.cursor()
+
+         # Get max position
+         cursor.execute("SELECT MAX(position) FROM tasks")
+         max_pos = cursor.fetchone()[0]
+         position = (max_pos or 0) + 1
+
+         cursor.execute("""
+             INSERT INTO tasks (title, description, status, estimated_duration, position)
+             VALUES (?, ?, ?, ?, ?)
+         """, (title, description, status, estimated_duration, position))
+
+         task_id = cursor.lastrowid or 0
+         conn.commit()
+         conn.close()
+         return task_id
+
+     def get_all_tasks(self) -> List[Dict]:
+         """Get all tasks ordered by position."""
+         conn = sqlite3.connect(self.db_path)
+         conn.row_factory = sqlite3.Row
+         cursor = conn.cursor()
+
+         cursor.execute("""
+             SELECT id, title, description, status, estimated_duration, position
+             FROM tasks ORDER BY position
+         """)
+
+         tasks = [dict(row) for row in cursor.fetchall()]
+         conn.close()
+         return tasks
+
+     def get_task(self, task_id: int) -> Optional[Dict]:
+         """Get a specific task by ID."""
+         conn = sqlite3.connect(self.db_path)
+         conn.row_factory = sqlite3.Row
+         cursor = conn.cursor()
+
+         cursor.execute("""
+             SELECT id, title, description, status, estimated_duration, position
+             FROM tasks WHERE id = ?
+         """, (task_id,))
+
+         row = cursor.fetchone()
+         conn.close()
+         return dict(row) if row else None
+
+     def update_task(self, task_id: int, **kwargs):
+         """Update a task's fields with validation."""
+         # Validate status if provided
+         if 'status' in kwargs and kwargs['status'] not in self.VALID_STATUSES:
+             raise ValueError(f"Invalid status. Must be one of: {', '.join(self.VALID_STATUSES)}")
+
+         conn = sqlite3.connect(self.db_path)
+         cursor = conn.cursor()
+
+         allowed_fields = ['title', 'description', 'status', 'estimated_duration', 'position']
+         updates = []
+         values = []
+
+         for key, value in kwargs.items():
+             if key in allowed_fields:
+                 updates.append(f"{key} = ?")
+                 values.append(value)
+
+         if updates:
+             values.append(task_id)
+             query = f"UPDATE tasks SET {', '.join(updates)}, updated_at = CURRENT_TIMESTAMP WHERE id = ?"
+             cursor.execute(query, values)
+             conn.commit()
+
+         conn.close()
+
+     def delete_task(self, task_id: int):
+         """Delete a task by ID."""
+         conn = sqlite3.connect(self.db_path)
+         cursor = conn.cursor()
+         cursor.execute("DELETE FROM tasks WHERE id = ?", (task_id,))
+         conn.commit()
+         conn.close()
+
+     def reorder_tasks(self, task_ids: List[int]):
+         """Reorder tasks based on the new order."""
+         conn = sqlite3.connect(self.db_path)
+         cursor = conn.cursor()
+
+         for position, task_id in enumerate(task_ids, start=1):
+             cursor.execute("UPDATE tasks SET position = ? WHERE id = ?", (position, task_id))
+
+         conn.commit()
+         conn.close()
+
+     def get_active_task(self) -> Optional[Dict]:
+         """Get the task marked as 'In Progress'."""
+         conn = sqlite3.connect(self.db_path)
+         conn.row_factory = sqlite3.Row
+         cursor = conn.cursor()
+
+         cursor.execute("""
+             SELECT id, title, description, status, estimated_duration
+             FROM tasks WHERE status = 'In Progress'
+             ORDER BY position LIMIT 1
+         """)
+
+         row = cursor.fetchone()
+         conn.close()
+         return dict(row) if row else None
+
+     def set_active_task(self, task_id: int) -> bool:
+         """Set a task as 'In Progress', ensuring only one task has this status.
+         Returns True if successful, False otherwise."""
+         conn = sqlite3.connect(self.db_path)
+         cursor = conn.cursor()
+
+         # Check that the task exists and is not already Done
+         cursor.execute("SELECT status FROM tasks WHERE id = ?", (task_id,))
+         result = cursor.fetchone()
+
+         if not result:
+             conn.close()
+             return False
+
+         current_status = result[0]
+         if current_status == "Done":
+             conn.close()
+             return False
+
+         # Enforce the single "In Progress" rule: demote any current "In Progress" tasks to "Todo"
+         cursor.execute("""
+             UPDATE tasks SET status = 'Todo', updated_at = CURRENT_TIMESTAMP
+             WHERE status = 'In Progress'
+         """)
+
+         # Set the selected task as 'In Progress'
+         cursor.execute("""
+             UPDATE tasks SET status = 'In Progress', updated_at = CURRENT_TIMESTAMP
+             WHERE id = ?
+         """, (task_id,))
+
+         conn.commit()
+         conn.close()
+         return True
+
+     def clear_all_tasks(self):
+         """Clear all tasks from the database."""
+         conn = sqlite3.connect(self.db_path)
+         cursor = conn.cursor()
+         cursor.execute("DELETE FROM tasks")
+         conn.commit()
+         conn.close()
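The invariant behind `set_active_task` — demote every current "In Progress" row before promoting the new one, so at most one task is ever active — can be demonstrated against an in-memory SQLite database. The simplified schema and `set_active` function below are a sketch of that rule, not the repo's code:

```python
import sqlite3

# Minimal in-memory model of the single "In Progress" invariant.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tasks (id INTEGER PRIMARY KEY, title TEXT, status TEXT DEFAULT 'Todo')")
conn.executemany("INSERT INTO tasks (title) VALUES (?)", [("design",), ("build",), ("ship",)])

def set_active(task_id):
    row = conn.execute("SELECT status FROM tasks WHERE id = ?", (task_id,)).fetchone()
    if row is None or row[0] == "Done":   # missing or finished tasks can't be started
        return False
    conn.execute("UPDATE tasks SET status = 'Todo' WHERE status = 'In Progress'")
    conn.execute("UPDATE tasks SET status = 'In Progress' WHERE id = ?", (task_id,))
    conn.commit()
    return True

set_active(1)
set_active(2)  # demotes task 1 back to Todo
print(conn.execute("SELECT id FROM tasks WHERE status = 'In Progress'").fetchall())  # [(2,)]
```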
ui/__init__.py ADDED
File without changes
ui/__pycache__/__init__.cpython-313.pyc ADDED
Binary file (141 Bytes). View file
 
ui/__pycache__/handlers.cpython-313.pyc ADDED
Binary file (16.8 kB). View file
 
ui/__pycache__/layout.cpython-313.pyc ADDED
Binary file (23.9 kB). View file
 
ui/handlers.py ADDED
@@ -0,0 +1,307 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
+"""
+UI event handlers for FocusFlow.
+"""
+import os
+import gradio as gr
+import pandas as pd
+from agent import FocusAgent, MockFocusAgent
+
+class UIHandlers:
+    def __init__(self, task_manager, file_monitor, metrics_tracker, focus_monitor, linear_client=None):
+        self.task_manager = task_manager
+        self.file_monitor = file_monitor
+        self.metrics_tracker = metrics_tracker
+        self.focus_monitor = focus_monitor
+        self.linear_client = linear_client
+
+        # State
+        self.monitoring_active = False
+        self.timer_active = False
+        self.check_interval = 30  # Default seconds between focus checks
+
+    def get_voice_status_ui(self) -> str:
+        """Get voice integration status for UI display."""
+        from voice import get_voice_status
+        return get_voice_status()
+
+    def initialize_agent(self, ai_provider: str) -> tuple:
+        """
+        Initialize the AI agent.
+        Returns: (status_message, actual_provider_display)
+        """
+        try:
+            use_mock = False
+            focus_agent = None
+
+            if ai_provider == "anthropic":
+                api_key = os.getenv("DEMO_ANTHROPIC_API_KEY") or os.getenv("ANTHROPIC_API_KEY")
+                if not api_key:
+                    use_mock = True
+                else:
+                    try:
+                        focus_agent = FocusAgent(provider="anthropic", api_key=api_key)
+                        key_type = "demo" if os.getenv("DEMO_ANTHROPIC_API_KEY") else "user"
+                        self.focus_monitor.set_agent(focus_agent)
+                        return (f"✅ Anthropic Claude initialized successfully ({key_type} key)",
+                                "**AI Provider:** `ANTHROPIC (Claude)`")
+                    except Exception as e:
+                        print(f"⚠️ Anthropic API error: {e}")
+                        use_mock = True
+
+            elif ai_provider == "openai":
+                api_key = os.getenv("DEMO_OPENAI_API_KEY") or os.getenv("OPENAI_API_KEY")
+                if not api_key:
+                    use_mock = True
+                else:
+                    try:
+                        focus_agent = FocusAgent(provider="openai", api_key=api_key)
+                        key_type = "demo" if os.getenv("DEMO_OPENAI_API_KEY") else "user"
+                        self.focus_monitor.set_agent(focus_agent)
+                        return (f"✅ OpenAI GPT-4 initialized successfully ({key_type} key)",
+                                "**AI Provider:** `OPENAI (GPT-4o)`")
+                    except Exception as e:
+                        print(f"⚠️ OpenAI API error: {e}")
+                        use_mock = True
+
+            elif ai_provider == "gemini":
+                api_key = os.getenv("DEMO_GEMINI_API_KEY") or os.getenv("GEMINI_API_KEY")
+                if not api_key:
+                    use_mock = True
+                else:
+                    try:
+                        focus_agent = FocusAgent(provider="gemini", api_key=api_key)
+                        key_type = "demo" if os.getenv("DEMO_GEMINI_API_KEY") else "user"
+                        self.focus_monitor.set_agent(focus_agent)
+                        return (f"✅ Google Gemini initialized successfully ({key_type} key)",
+                                "**AI Provider:** `GEMINI (Flash 2.0)`")
+                    except Exception as e:
+                        print(f"⚠️ Gemini API error: {e}")
+                        use_mock = True
+
+            elif ai_provider == "vllm":
+                try:
+                    focus_agent = FocusAgent(
+                        provider="vllm",
+                        api_key=os.getenv("VLLM_API_KEY", "EMPTY"),
+                        base_url=os.getenv("VLLM_BASE_URL", "http://localhost:8000/v1"),
+                        model=os.getenv("VLLM_MODEL", "ibm-granite/granite-4.0-h-1b")
+                    )
+                    if not focus_agent.connection_healthy:
+                        use_mock = True
+                    else:
+                        self.focus_monitor.set_agent(focus_agent)
+                        return ("✅ vLLM initialized successfully!",
+                                "**AI Provider:** `VLLM (Local)`")
+                except Exception as e:
+                    print(f"⚠️ vLLM error: {e}")
+                    use_mock = True
+
+            # Use the mock agent if no API keys or connections are available
+            if use_mock:
+                focus_agent = MockFocusAgent()
+                self.focus_monitor.set_agent(focus_agent)
+                return ("ℹ️ Running in DEMO MODE with Mock AI (no API keys needed). Perfect for testing! 🎭",
+                        "**AI Provider:** `MOCK AI (Demo Mode)`")
+
+            # Fallback for the "mock" provider or any unknown value
+            focus_agent = MockFocusAgent()
+            self.focus_monitor.set_agent(focus_agent)
+            return ("ℹ️ Using Mock AI for demo",
+                    "**AI Provider:** `MOCK AI (Fallback)`")
+
+        except Exception as e:
+            focus_agent = MockFocusAgent()
+            self.focus_monitor.set_agent(focus_agent)
+            return (f"ℹ️ Using Mock AI for demo (Error: {str(e)}) 🎭",
+                    "**AI Provider:** `MOCK AI (Error Fallback)`")
+
+    def process_onboarding(self, project_description: str) -> tuple:
+        """Process onboarding and generate tasks."""
+        if not self.focus_monitor.focus_agent:
+            return "❌ Please initialize agent first", self.get_task_dataframe(), 0
+
+        if not project_description.strip():
+            return "❌ Please describe your project", self.get_task_dataframe(), 0
+
+        # Generate tasks
+        tasks = self.focus_monitor.focus_agent.get_onboarding_tasks(project_description)
+
+        if not tasks:
+            return "❌ Failed to generate tasks. Check your AI provider configuration.", self.get_task_dataframe(), 0
+
+        # Replace existing tasks with the generated ones
+        self.task_manager.clear_all_tasks()
+        for task in tasks:
+            self.task_manager.add_task(
+                title=task.get("title", "Untitled"),
+                description=task.get("description", ""),
+                estimated_duration=task.get("estimated_duration", "30 min")
+            )
+
+        return f"✅ Generated {len(tasks)} tasks! Go to Task Manager to start.", self.get_task_dataframe(), self.calculate_progress()
+
+    def get_task_dataframe(self):
+        """Get tasks as a list of rows for display."""
+        tasks = self.task_manager.get_all_tasks()
+        if not tasks:
+            return []
+
+        display_tasks = []
+        for task in tasks:
+            display_tasks.append([
+                task['id'],
+                task['title'],
+                task['description'],
+                task['status'],
+                task['estimated_duration']
+            ])
+        return display_tasks
+
+    def calculate_progress(self) -> float:
+        """Calculate the overall task completion percentage."""
+        tasks = self.task_manager.get_all_tasks()
+        if not tasks:
+            return 0.0
+
+        completed = sum(1 for task in tasks if task['status'] == "Done")
+        return (completed / len(tasks)) * 100
+
+    def add_new_task(self, title: str, description: str, duration: int, status: str) -> tuple:
+        """Add a new task and reset the form fields."""
+        if not title.strip():
+            return "", "", 30, "Todo", self.get_task_dataframe(), self.calculate_progress()
+
+        duration_str = f"{duration} min"
+        self.task_manager.add_task(title, description, duration_str, status)
+        return "", "", 30, "Todo", self.get_task_dataframe(), self.calculate_progress()
+
+    def delete_task(self, task_id: str) -> tuple:
+        """Delete a task by ID."""
+        try:
+            self.task_manager.delete_task(int(task_id))
+            return "✅ Task deleted", self.get_task_dataframe(), self.calculate_progress()
+        except Exception as e:
+            return f"❌ Error: {str(e)}", self.get_task_dataframe(), self.calculate_progress()
+
+    def set_task_active(self, task_id: str) -> tuple:
+        """Set a task as active."""
+        try:
+            self.task_manager.set_active_task(int(task_id))
+            return "✅ Task set as active! Start working and I'll monitor your progress.", self.get_task_dataframe(), self.calculate_progress()
+        except Exception as e:
+            return f"❌ Error: {str(e)}", self.get_task_dataframe(), self.calculate_progress()
+
+    def mark_task_done(self, task_id: str) -> tuple:
+        """Mark a task as completed."""
+        try:
+            self.task_manager.update_task(int(task_id), status="Done")
+            return "✅ Task marked as completed! 🎉", self.get_task_dataframe(), self.calculate_progress()
+        except Exception as e:
+            return f"❌ Error: {str(e)}", self.get_task_dataframe(), self.calculate_progress()
+
+    def start_monitoring(self, watch_path: str, launch_mode: str) -> tuple:
+        """Start file monitoring."""
+        if launch_mode == "demo":
+            return "❌ File monitoring disabled in demo mode. Use the text area instead.", gr.update(active=False)
+
+        if not watch_path or not os.path.exists(watch_path):
+            self.monitoring_active = False
+            self.timer_active = False
+            return f"❌ Invalid path: {watch_path}", gr.update(active=False)
+
+        try:
+            self.file_monitor.start(watch_path)
+            self.monitoring_active = True
+            self.timer_active = True
+            return f"✅ Monitoring started on: {watch_path}", gr.update(active=True)
+        except Exception as e:
+            self.monitoring_active = False
+            self.timer_active = False
+            return f"❌ Error: {str(e)}", gr.update(active=False)
+
+    def stop_monitoring(self) -> tuple:
+        """Stop file monitoring."""
+        self.file_monitor.stop()
+        self.monitoring_active = False
+        self.timer_active = False
+        return "⏹️ Monitoring stopped", gr.update(active=False)
+
+    def set_check_interval(self, frequency_label: str) -> tuple:
+        """Update the check interval based on the dropdown selection."""
+        frequency_map = {
+            "30 seconds": 30,
+            "1 minute": 60,
+            "5 minutes": 300,
+            "10 minutes": 600,
+        }
+
+        self.check_interval = frequency_map.get(frequency_label, 30)
+        # Return an updated timer component
+        return (
+            gr.Timer(value=self.check_interval, active=self.timer_active),
+            f"✅ Check interval set to {frequency_label}"
+        )
+
+    def refresh_dashboard(self) -> tuple:
+        """Refresh the dashboard with the latest metrics."""
+        today_stats = self.metrics_tracker.get_today_stats()
+        current_streak = self.metrics_tracker.get_current_streak()
+
+        state_data = pd.DataFrame([
+            {"state": "On Track", "count": today_stats["on_track"]},
+            {"state": "Distracted", "count": today_stats["distracted"]},
+            {"state": "Idle", "count": today_stats["idle"]}
+        ])
+
+        chart_data = self.metrics_tracker.get_chart_data()
+        weekly_data = pd.DataFrame({
+            "date": chart_data["dates"],
+            "score": chart_data["focus_scores"]
+        })
+
+        return (
+            today_stats["focus_score"],
+            current_streak,
+            today_stats["total_checks"],
+            state_data,
+            weekly_data
+        )
+
+    # Linear integration
+    def get_linear_projects_ui(self):
+        """Get Linear projects for the dropdown."""
+        if not self.linear_client:
+            return gr.update(choices=[], value=None, visible=True), "⚠️ Linear client not initialized"
+
+        projects = self.linear_client.get_user_projects()
+        if not projects:
+            return gr.update(choices=[], value=None, visible=True), "⚠️ No projects found (or API key missing)"
+
+        choices = [(p['name'], p['id']) for p in projects]
+        return gr.update(choices=choices, value=choices[0][1] if choices else None, visible=True), f"✅ Found {len(projects)} projects"
+
+    def import_linear_tasks_ui(self, project_id):
+        """Import tasks from the selected Linear project."""
+        if not self.linear_client:
+            return "⚠️ Linear client not initialized", self.get_task_dataframe(), self.calculate_progress()
+
+        if not project_id:
+            return "❌ Select a project first", self.get_task_dataframe(), self.calculate_progress()
+
+        tasks = self.linear_client.get_project_tasks(project_id)
+        if not tasks:
+            return "⚠️ No open tasks found in this project", self.get_task_dataframe(), self.calculate_progress()
+
+        count = 0
+        for t in tasks:
+            estimate = t.get('estimate', 30) or 30
+            duration_str = f"{estimate} min"
+            self.task_manager.add_task(
+                title=t['title'],
+                description=t.get('description', ''),
+                estimated_duration=duration_str,
+                status="Todo"
+            )
+            count += 1
+
+        return f"✅ Imported {count} tasks from Linear!", self.get_task_dataframe(), self.calculate_progress()
ui/layout.py ADDED
@@ -0,0 +1,528 @@
+"""
+Gradio layout for FocusFlow.
+"""
+import os
+import inspect
+import gradio as gr
+import pandas as pd
+from core.pomodoro import PomodoroTimer
+import mcp_tools
+
+def register_tool_safely(func):
+    """Register a tool with the correct signature by creating hidden dummy components."""
+    sig = inspect.signature(func)
+    inputs = []
+    for name, param in sig.parameters.items():
+        # Map parameter annotations to Gradio components
+        if param.annotation == int:
+            inputs.append(gr.Number(label=name, visible=False))
+        elif param.annotation == bool:
+            inputs.append(gr.Checkbox(label=name, visible=False))
+        else:
+            inputs.append(gr.Textbox(label=name, visible=False))
+
+    # Dummy output to capture the return value
+    output = gr.Textbox(visible=False)
+
+    # Hidden button to trigger the tool
+    btn = gr.Button(f"cmd_{func.__name__}", visible=False)
+    btn.click(fn=func, inputs=inputs, outputs=[output])
+
+def create_app(ui_handlers, pomodoro_timer: PomodoroTimer, launch_mode: str, ai_provider: str, monitor_interval: int):
+    """Create the Gradio Blocks app."""
+
+    with gr.Blocks(title="FocusFlow AI") as app:
+
+        # MCP tools registration (hidden)
+        with gr.Row(visible=False):
+            # Register all tools from mcp_tools
+            register_tool_safely(mcp_tools.add_task)
+            register_tool_safely(mcp_tools.get_current_task)
+            register_tool_safely(mcp_tools.start_task)
+            register_tool_safely(mcp_tools.mark_task_done)
+            register_tool_safely(mcp_tools.get_all_tasks)
+            register_tool_safely(mcp_tools.delete_task)
+            register_tool_safely(mcp_tools.update_task)
+            register_tool_safely(mcp_tools.get_productivity_stats)
+
+        # Hidden component for browser alerts
+        alert_trigger = gr.HTML(visible=False)
+
+        # Auto-refresh timer for monitoring (default 30 s)
+        monitor_timer = gr.Timer(value=monitor_interval, active=False)
+
+        # Dedicated 1-second timer for the Pomodoro display
+        pomodoro_ticker = gr.Timer(value=1, active=True)
+
+        with gr.Tabs() as tabs:
+            # Tab 1: Home / landing page
+            with gr.Tab("🏠 Home"):
+                gr.Markdown("""
+                # 🦉 FocusFlow - Your AI Accountability Buddy
+
+                Keep focused on your coding tasks with Duolingo-style nudges!
+                """)
+
+                # Status indicators
+                init_status = gr.Textbox(label="AI Status", value="Initializing...", interactive=False, scale=1)
+                voice_status_display = gr.Textbox(label="Voice Status", value="Checking...", interactive=False, scale=1)
+
+                gr.Markdown("""
+                ## ✨ Features
+
+                - **🎯 AI-Powered Project Planning**: Break down projects into actionable micro-tasks
+                - **📊 Progress Tracking**: Visual progress monitoring with completion percentages
+                - **👁️ Real-Time Monitoring**: Track your coding activity and stay focused
+                - **🦉 Duolingo-Style Nudges**: Encouraging, sassy, and gentle reminders
+                - **🔔 Browser Notifications**: Get alerted when you're distracted
+                - **🚀 Multi-Provider AI**: OpenAI, Anthropic, or local vLLM support
+                - **🔊 Voice Feedback**: ElevenLabs voice alerts for maximum engagement
+
+                ## ⚙️ Current Configuration
+                """)
+
+                with gr.Row():
+                    gr.Markdown(f"**Mode:** `{launch_mode.upper()}`")
+                    # Dynamic AI provider display, updated after agent initialization
+                    ai_provider_display = gr.Markdown(f"**AI Provider:** `{ai_provider.upper()}`")
+                    gr.Markdown(f"**Check Interval:** `{monitor_interval}s`")
+
+                if launch_mode == "demo":
+                    gr.Markdown("""
+                    > ℹ️ **Demo Mode**: Use the text area in the Monitor tab to simulate your workspace.
+                    """)
+                else:
+                    gr.Markdown("""
+                    > ℹ️ **Local Mode**: Monitor your actual project directory.
+                    """)
+
+                gr.Markdown("""
+                ---
+                **Get Started:** Navigate to Onboarding → describe your project → manage tasks → start monitoring!
+                """)
+
+            # Tab 2: Onboarding
+            with gr.Tab("🚀 Onboarding"):
+                gr.Markdown("""
+                ## AI-Powered Project Planning
+
+                Describe your project and I'll break it down into actionable micro-tasks!
+                """)
+
+                project_input = gr.Textbox(
+                    label="What are you building?",
+                    placeholder="e.g., 'A Python web scraper that extracts product data from e-commerce sites'",
+                    lines=5
+                )
+                generate_btn = gr.Button("✨ Generate Tasks", variant="primary", size="lg")
+                onboard_status = gr.Markdown("")
+
+                # Linear integration
+                gr.Markdown("""
+                ---
+                ## 🔗 Import from Linear
+                Connect to your Linear workspace to import existing issues.
+                """)
+
+                with gr.Row():
+                    refresh_projects_btn = gr.Button("🔄 Load Projects", size="sm", scale=1)
+                    project_selector = gr.Dropdown(label="Select Project", choices=[], scale=3, interactive=True)
+                    import_linear_btn = gr.Button("⬇️ Import Tasks", variant="secondary", scale=1)
+
+            # Tab 3: Task manager
+            with gr.Tab("📋 Tasks"):
+                gr.Markdown("## 📋 Your Tasks")
+
+                # Compact header: progress bar + action buttons in one row
+                with gr.Row():
+                    progress_bar = gr.Slider(
+                        label="Overall Progress",
+                        value=0,
+                        minimum=0,
+                        maximum=100,
+                        interactive=False,
+                        scale=3
+                    )
+                    with gr.Column(scale=1, min_width=250):
+                        gr.Markdown("**Quick Actions:**")
+                        with gr.Row():
+                            start_task_btn = gr.Button("▶️ Start", size="sm", variant="secondary", scale=1)
+                            mark_done_btn = gr.Button("✅ Done", size="sm", variant="secondary", scale=1)
+                            delete_task_btn = gr.Button("🗑️ Delete", size="sm", variant="stop", scale=1)
+
+                # State to hold the selected task ID
+                selected_task_id = gr.State(value=None)
+
+                # Table view
+                gr.Markdown("**Click on a task row to edit it, or add a new task:**")
+                task_table = gr.Dataframe(
+                    headers=["ID", "Title", "Description", "Status", "Duration (min)"],
+                    value=[],
+                    interactive=False,
+                    wrap=True
+                )
+
+                selection_info = gr.Markdown("_Click **+ Add Task** to create a new task, or click a row above to edit._")
+
+                # Button to show the add form
+                add_task_trigger_btn = gr.Button("➕ Add Task", variant="primary", size="sm")
+
+                # Single dynamic form (hidden by default)
+                with gr.Column(visible=False, elem_id="task-form-container") as task_form:
+                    form_header = gr.Markdown("### ✏️ Task Form")
+                    form_title = gr.Textbox(label="Title", placeholder="Task title")
+                    form_desc = gr.Textbox(label="Description", placeholder="Describe the task", lines=2)
+                    with gr.Row():
+                        form_duration = gr.Number(label="Duration (minutes)", value=30, minimum=5, maximum=480, step=5, scale=2)
+                        form_status = gr.Dropdown(
+                            label="Status",
+                            choices=["Todo", "In Progress", "Done"],
+                            value="Todo",
+                            scale=1
+                        )
+                    with gr.Row():
+                        form_save_btn = gr.Button("💾 Save", variant="primary", size="sm", scale=1)
+                        form_cancel_btn = gr.Button("❌ Cancel", variant="secondary", size="sm", scale=1)
+
+            # Tab 4: Dashboard
+            with gr.Tab("📊 Dashboard"):
+                gr.Markdown("## 📊 Productivity Dashboard")
+
+                # Today's stats
+                with gr.Row():
+                    with gr.Column(scale=1):
+                        today_focus_score = gr.Number(label="Focus Score", value=0, interactive=False)
+                    with gr.Column(scale=1):
+                        today_streak = gr.Number(label="Current Streak 🔥", value=0, interactive=False)
+                    with gr.Column(scale=1):
+                        today_checks = gr.Number(label="Total Checks", value=0, interactive=False)
+
+                # State distribution (today)
+                gr.Markdown("### Today's Focus Distribution")
+                empty_state_df = pd.DataFrame([
+                    {"state": "On Track", "count": 0},
+                    {"state": "Distracted", "count": 0},
+                    {"state": "Idle", "count": 0}
+                ])
+                state_plot = gr.BarPlot(
+                    value=empty_state_df,
+                    x="state",
+                    y="count",
+                    title="Focus States Distribution"
+                )
+
+                # Weekly focus score trend
+                gr.Markdown("### Weekly Focus Score Trend")
+                empty_weekly_df = pd.DataFrame({"date": [], "score": []})
+                weekly_plot = gr.LinePlot(
+                    value=empty_weekly_df,
+                    x="date",
+                    y="score",
+                    title="Focus Score (Last 7 Days)"
+                )
+
+                refresh_dashboard_btn = gr.Button("🔄 Refresh Dashboard", variant="secondary")
+
+            # Tab 5: Monitor
+            with gr.Tab("👁️ Monitor"):
+                gr.Markdown("## Focus Monitoring")
+
+                # Mode-specific UI; dummy gr.State placeholders keep the same
+                # variable names defined in both modes
+                if launch_mode == "demo":
+                    gr.Markdown("**Demo Workspace** - Edit the text below to simulate coding:")
+                    demo_textarea = gr.Textbox(
+                        label="Your Code",
+                        placeholder="Type or paste your code here...",
+                        lines=8,
+                        value="# Welcome to FocusFlow!\n# Start coding..."
+                    )
+                    demo_update_btn = gr.Button("💾 Save Changes", variant="secondary")
+                    demo_status = gr.Textbox(label="Status", interactive=False)
+                    watch_path_input = gr.State(value=None)   # Dummy
+                    start_monitor_btn = gr.State(value=None)  # Dummy
+                    stop_monitor_btn = gr.State(value=None)   # Dummy
+                    monitor_status = gr.State(value=None)     # Dummy
+                else:
+                    gr.Markdown("**Directory Monitoring**")
+                    watch_path_input = gr.Textbox(
+                        label="Path to Monitor",
+                        value=os.getcwd(),
+                        placeholder="/path/to/your/project"
+                    )
+                    with gr.Row():
+                        start_monitor_btn = gr.Button("▶️ Start", variant="primary", size="sm")
+                        stop_monitor_btn = gr.Button("⏹️ Stop", variant="stop", size="sm")
+                    monitor_status = gr.Textbox(label="Status", interactive=False)
+                    demo_textarea = gr.State(value=None)   # Dummy
+                    demo_update_btn = gr.State(value=None) # Dummy
+                    demo_status = gr.State(value=None)     # Dummy
+
+                # Check frequency selector
+                gr.Markdown("### ⚙️ Monitoring Settings")
+                check_frequency = gr.Dropdown(
+                    label="Check Frequency",
+                    choices=["30 seconds", "1 minute", "5 minutes", "10 minutes"],
+                    value="30 seconds",
+                    interactive=True
+                )
+
+                check_frequency.change(
+                    fn=ui_handlers.set_check_interval,
+                    inputs=[check_frequency],
+                    outputs=[monitor_timer, monitor_status if launch_mode != "demo" else demo_status],
+                    api_name=False
+                )
+
+                # Pomodoro timer
+                gr.Markdown("### 🍅 Pomodoro Timer")
+
+                # Timer display with embedded audio alerts
+                with gr.Row():
+                    pomodoro_display = gr.Markdown(value=pomodoro_timer.get_display(), elem_id="pomodoro-display")
+                    gr.HTML("""
+                    <audio id="pomodoro-alert" preload="auto">
+                        <source src="data:audio/wav;base64,UklGRnoGAABXQVZFZm10IBAAAAABAAEAQB8AAEAfAAABAAgAZGF0YQoGAACBhYqFbF1fdJivrJBhNjVgodDbq2EcBj+a2/LDciUFLIHO8tiJNwgZaLvt559NEAxQp+PwtmMcBjiR1/LMeSwFJHfH8N2QQAoUXrTp66hVFApGn+DyvmwhBSuBzvLZiTUI" type="audio/wav">
+                    </audio>
+                    <audio id="nudge-alert" preload="auto">
+                        <source src="data:audio/wav;base64,UklGRnoGAABXQVZFZm10IBAAAAABAAEAQB8AAEAfAAABAAgAZGF0YQoGAACBhYqFbF1fdJivrJBhNjVgodDbq2EcBj+a2/LDciUFLIHO8tiJNwgZaLvt559NEAxQp+PwtmMcBjiR1/LMeSwFJHfH8N2QQAoUXrTp66hVFApGn+DyvmwhBSuBzvLZiTUI" type="audio/wav">
+                    </audio>
+                    """)
+
+                with gr.Row():
+                    pomodoro_start_btn = gr.Button("▶️ Start", size="sm", scale=1)
+                    pomodoro_stop_btn = gr.Button("⏸️ Pause", size="sm", scale=1)
+                    pomodoro_reset_btn = gr.Button("🔄 Reset", size="sm", scale=1)
+
+                # Focus log (common to both modes)
+                gr.Markdown("### 🦉 Focus Agent Log")
+                focus_log = gr.Textbox(
+                    label="Activity Log",
+                    lines=8,
+                    interactive=False,
+                    placeholder="Focus checks will appear here..."
+                )
+
+                # Voice feedback player
+                voice_audio = gr.Audio(
+                    label="🔊 Voice Feedback",
+                    visible=True,
+                    autoplay=True,
+                    show_label=True,
+                    elem_id="voice-feedback-player"
+                )
+
+                with gr.Row():
+                    manual_check_btn = gr.Button("🔍 Run Focus Check Now", variant="secondary")
+                    if launch_mode == "demo":
+                        timer_toggle_btn = gr.Button("⏸️ Pause Auto-Check", variant="secondary")
+                    else:
+                        timer_toggle_btn = gr.Button("▶️ Start Auto-Check", variant="secondary")
+
+        # --- Event handlers ---
+
+        # Initialization
+        app.load(fn=lambda: ui_handlers.initialize_agent(ai_provider), outputs=[init_status, ai_provider_display], api_name=False)
+        app.load(fn=ui_handlers.get_voice_status_ui, outputs=voice_status_display, api_name=False)
+
+        # Onboarding
+        generate_btn.click(
+            fn=ui_handlers.process_onboarding,
+            inputs=[project_input],
+            outputs=[onboard_status, task_table, progress_bar],
+            api_name=False
+        )
+
+        # Linear integration
+        refresh_projects_btn.click(
+            fn=ui_handlers.get_linear_projects_ui,
+            outputs=[project_selector, onboard_status],
+            api_name=False
+        )
+        import_linear_btn.click(
+            fn=ui_handlers.import_linear_tasks_ui,
+            inputs=[project_selector],
+            outputs=[onboard_status, task_table, progress_bar],
+            api_name=False
+        )
+
+        # Task management
+        add_task_trigger_btn.click(
+            fn=lambda: gr.update(visible=True),
+            outputs=task_form,
+            api_name=False
+        )
+        form_cancel_btn.click(
+            fn=lambda: gr.update(visible=False),
+            outputs=task_form,
+            api_name=False
+        )
+        form_save_btn.click(
+            fn=ui_handlers.add_new_task,
+            inputs=[form_title, form_desc, form_duration, form_status],
+            outputs=[form_title, form_desc, form_duration, form_status, task_table, progress_bar],
+            api_name=False
+        )
+        form_save_btn.click(
+            fn=lambda: gr.update(visible=False),
+            outputs=task_form,
+            api_name=False
+        )
+
+        # Task selection handler
+        def on_select_task(evt: gr.SelectData, data):
+            try:
+                # `data` is a pandas DataFrame; the ID is in the first column
+                row_index = evt.index[0]
+                task_id = data.iloc[row_index][0]
+                return task_id, f"✅ Selected Task ID: {task_id}"
+            except Exception as e:
+                return None, f"❌ Error selecting task: {str(e)}"
+
+        task_table.select(
+            fn=on_select_task,
+            inputs=[task_table],
+            outputs=[selected_task_id, selection_info],
+            api_name=False
+        )
+
+        # Quick-action button handlers
+        start_task_btn.click(
+            fn=ui_handlers.set_task_active,
+            inputs=[selected_task_id],
+            outputs=[onboard_status, task_table, progress_bar],
+            api_name=False
+        )
+
+        mark_done_btn.click(
+            fn=ui_handlers.mark_task_done,
+            inputs=[selected_task_id],
+            outputs=[onboard_status, task_table, progress_bar],
+            api_name=False
+        )
+
+        delete_task_btn.click(
+            fn=ui_handlers.delete_task,
+            inputs=[selected_task_id],
+            outputs=[onboard_status, task_table, progress_bar],
+            api_name=False
+        )
+
+        # Monitoring
+        def toggle_check_timer(active):
+            """Pause or resume the auto-check timer and update the button label."""
+            new_state = not active
+            btn_label = "▶️ Start Auto-Check" if active else "⏸️ Pause Auto-Check"
+            return gr.update(active=new_state), gr.update(value=btn_label), new_state
+
+        if launch_mode == "demo":
+            demo_update_btn.click(
+                fn=ui_handlers.focus_monitor.update_demo_text,
+                inputs=[demo_textarea],
+                outputs=[demo_status],
+                api_name=False
+            )
+            # Auto-activate the timer in demo mode
+            app.load(fn=lambda: gr.update(active=True), outputs=monitor_timer, api_name=False)
+
+            # State to track timer status for the button label
+            timer_active_state = gr.State(value=True)
+        else:
+            start_monitor_btn.click(
+                fn=lambda p: ui_handlers.start_monitoring(p, launch_mode),
+                inputs=[watch_path_input],
+                outputs=[monitor_status, monitor_timer],
+                api_name=False
+            )
+            stop_monitor_btn.click(
+                fn=ui_handlers.stop_monitoring,
+                outputs=[monitor_status, monitor_timer],
+                api_name=False
+            )
+
+            # In local mode the toggle button only pauses/resumes the check timer;
+            # file monitoring itself is controlled by the Start/Stop buttons above.
+            timer_active_state = gr.State(value=False)
+
+        timer_toggle_btn.click(
+            fn=toggle_check_timer,
+            inputs=[timer_active_state],
+            outputs=[monitor_timer, timer_toggle_btn, timer_active_state],
+            api_name=False
+        )
+
+        # Pomodoro handlers
+        pomodoro_start_btn.click(fn=pomodoro_timer.start, outputs=pomodoro_display, api_name=False)
+        pomodoro_stop_btn.click(fn=pomodoro_timer.pause, outputs=pomodoro_display, api_name=False)
+        pomodoro_reset_btn.click(fn=pomodoro_timer.reset, outputs=pomodoro_display, api_name=False)
+
+        # Pomodoro tick (1 second): tick() returns (display, should_play_sound),
+        # so wrap it to turn the sound flag into an HTML snippet for alert_trigger.
+        def pomodoro_tick_wrapper():
+            display, play_sound = pomodoro_timer.tick()
+            js = ""
+            if play_sound:
+                js = """
+                <script>
+                (function() {
+                    const audio = document.getElementById('pomodoro-alert');
+                    if (audio) { audio.play(); }
+                })();
+                </script>
+                """
+            return display, js
+
+        pomodoro_ticker.tick(fn=pomodoro_tick_wrapper, outputs=[pomodoro_display, alert_trigger], api_name=False)
+
+        # Focus check tick (monitor interval)
+        def monitor_tick_wrapper():
+            focus_result, alert_js, voice_data = ui_handlers.focus_monitor.run_check()
+            alert_html = f'<script>{alert_js}</script>' if alert_js else ""
+            voice_update = gr.update(visible=True, value=voice_data) if voice_data else gr.update(visible=False)
+            return focus_result, alert_html, voice_update
+
+        monitor_timer.tick(
+            fn=monitor_tick_wrapper,
+            outputs=[focus_log, alert_trigger, voice_audio],
+            api_name=False
+        )
+
+        manual_check_btn.click(
+            fn=monitor_tick_wrapper,
+            outputs=[focus_log, alert_trigger, voice_audio],
+            api_name=False
+        )
+
+        # Dashboard
+        refresh_dashboard_btn.click(
+            fn=ui_handlers.refresh_dashboard,
+            outputs=[today_focus_score, today_streak, today_checks, state_plot, weekly_plot],
+            api_name=False
+        )
+        app.load(
+            fn=ui_handlers.refresh_dashboard,
+            outputs=[today_focus_score, today_streak, today_checks, state_plot, weekly_plot],
+            api_name=False
+        )
+
+    return app
voice.py ADDED
@@ -0,0 +1,189 @@
+"""
+ElevenLabs voice integration for FocusFlow.
+Provides optional voice feedback for the focus agent and Pomodoro timer.
+Gracefully falls back to text-only mode if the API key is missing or the quota is exceeded.
+"""
+import os
+import tempfile
+from typing import Optional, Dict
+from pathlib import Path
+
+
+class VoiceGenerator:
+    """
+    Handles text-to-speech generation using the ElevenLabs API.
+    Designed for graceful degradation - never crashes if voice is unavailable.
+    """
+
+    def __init__(self):
+        """Initialize the ElevenLabs client if an API key is available."""
+        self.client = None
+        self.available = False
+        self.voice_id = "JBFqnCBsd6RMkjVDRZzb"  # George - friendly, clear voice
+        self.model_id = "eleven_turbo_v2_5"     # Fast, low-latency model
+
+        try:
+            # Check for an API key (demo key first, then user key)
+            api_key = os.getenv("DEMO_ELEVEN_API_KEY") or os.getenv("ELEVEN_API_KEY")
+
+            if not api_key:
+                print("ℹ️ ElevenLabs: No API key found. Voice feedback disabled (text-only mode).")
+                return
+
+            # Try to initialize the client
+            from elevenlabs.client import ElevenLabs
+            self.client = ElevenLabs(api_key=api_key)
+            self.available = True
+
+            key_type = "demo" if os.getenv("DEMO_ELEVEN_API_KEY") else "user"
+            print(f"✅ ElevenLabs voice initialized ({key_type} key)")
+
+        except ImportError:
+            print("⚠️ ElevenLabs: Package not installed. Run: pip install elevenlabs")
+        except Exception as e:
+            print(f"⚠️ ElevenLabs: Initialization failed: {e}")
+
+    def text_to_speech(self, text: str, emotion: str = "neutral") -> Optional[str]:
+        """
+        Convert text to speech and return the path to a temporary audio file.
+
+        Args:
+            text: Text to convert to speech
+            emotion: Emotion hint (not used in the current implementation)
+
+        Returns:
+            Path to a temporary MP3 file, or None if voice is unavailable
+        """
+        # Check whether voice is enabled globally
+        if os.getenv("VOICE_ENABLED", "true").lower() == "false":
+            return None
+
+        if not self.available or not self.client:
+            return None
+
+        try:
+            # Generate audio using the ElevenLabs API
+            audio = self.client.text_to_speech.convert(
+                text=text,
+                voice_id=self.voice_id,
+                model_id=self.model_id,
+                output_format="mp3_44100_128"
+            )
+
+            # Convert the generator/stream to bytes
+            audio_bytes = b"".join(audio)
+
+            # Save to a temporary file (Gradio expects a file path, not a data URL)
77
+ temp_file = tempfile.NamedTemporaryFile(
78
+ delete=False,
79
+ suffix=".mp3",
80
+ prefix="focusflow_voice_"
81
+ )
82
+ temp_file.write(audio_bytes)
83
+ temp_file.close()
84
+
85
+ return temp_file.name
86
+
87
+ except Exception as e:
88
+ # Graceful degradation - log error but don't crash
89
+ print(f"⚠️ ElevenLabs: TTS failed: {e}")
90
+ return None
91
+
92
+ def get_focus_message_audio(self, verdict: str, message: str) -> Optional[str]:
93
+ """
94
+ Generate voice feedback for focus check results.
95
+
96
+ Args:
97
+ verdict: "On Track", "Distracted", or "Idle"
98
+ message: Text message to speak
99
+
100
+ Returns:
101
+ Path to temporary audio file or None
102
+ """
103
+ if not self.available:
104
+ return None
105
+
106
+ # Add emotion/tone based on verdict (for future voice modulation)
107
+ emotion_map = {
108
+ "On Track": "cheerful",
109
+ "Distracted": "concerned",
110
+ "Idle": "motivating"
111
+ }
112
+
113
+ emotion = emotion_map.get(verdict, "neutral")
114
+ return self.text_to_speech(message, emotion=emotion)
115
+
116
+ def get_pomodoro_audio(self, event_type: str) -> Optional[str]:
117
+ """
118
+ Generate voice alerts for Pomodoro timer events.
119
+
120
+ Args:
121
+ event_type: "work_complete" or "break_complete"
122
+
123
+ Returns:
124
+ Path to temporary audio file or None
125
+ """
126
+ if not self.available:
127
+ return None
128
+
129
+ messages = {
130
+ "work_complete": "Great work! Time for a 5-minute break. You've earned it!",
131
+ "break_complete": "Break's over! Let's get back to work and stay focused!"
132
+ }
133
+
134
+ message = messages.get(event_type, "Timer complete!")
135
+ return self.text_to_speech(message, emotion="cheerful")
136
+
137
+ def test_voice(self) -> Dict[str, any]:
138
+ """
139
+ Test voice generation (for setup/debugging).
140
+
141
+ Returns:
142
+ Dict with status, message, and optional audio data
143
+ """
144
+ if not self.available:
145
+ return {
146
+ "status": "unavailable",
147
+ "message": "Voice not available (no API key or initialization failed)",
148
+ "audio": None
149
+ }
150
+
151
+ try:
152
+ test_message = "Hello! FocusFlow voice is working perfectly!"
153
+ audio = self.text_to_speech(test_message)
154
+
155
+ if audio:
156
+ return {
157
+ "status": "success",
158
+ "message": "Voice test successful!",
159
+ "audio": audio
160
+ }
161
+ else:
162
+ return {
163
+ "status": "error",
164
+ "message": "Voice generation failed",
165
+ "audio": None
166
+ }
167
+ except Exception as e:
168
+ return {
169
+ "status": "error",
170
+ "message": f"Voice test failed: {str(e)}",
171
+ "audio": None
172
+ }
173
+
174
+
175
+ # Global voice generator instance
176
+ voice_generator = VoiceGenerator()
177
+
178
+
179
+ def get_voice_status() -> str:
180
+ """
181
+ Get human-readable voice status for UI display.
182
+
183
+ Returns:
184
+ Status string like "✅ ElevenLabs Voice Enabled" or "ℹ️ Voice Disabled"
185
+ """
186
+ if voice_generator.available:
187
+ return "✅ ElevenLabs Voice Enabled"
188
+ else:
189
+ return "ℹ️ Voice Disabled (text-only mode)"