Commit 1cea319
Parent(s): 7b892cd

Remove sponsor LLM synthesis from UI, update for multi-track compatibility

Files changed:
- README.md (+24 -19)
- app.py (+1 -13)
- export_utils.py (+2 -10)
- orchestrator.py (+2 -2)
README.md
CHANGED

@@ -14,6 +14,10 @@ tags:
 - multi-agent
 - deployment
 - productivity
+- context7
+- github
+- vercel
+- mcp-server
 ---
 
 # 🚀 Deployment Readiness Copilot

@@ -22,16 +26,15 @@ tags:
 
 ## 🎯 Overview
 
-The Deployment Readiness Copilot is a productivity-focused, developer-centric tool that automates deployment readiness checks using a multi-agent architecture. It combines Claude's reasoning with
+The Deployment Readiness Copilot is a productivity-focused, developer-centric tool that automates deployment readiness checks using a multi-agent architecture. It combines Claude's reasoning with MCP tool integration to provide comprehensive pre-deployment validation across multiple platforms.
 
 ## ✨ Features
 
-- **🤖 Multi-Agent Pipeline**: Planner → Evidence Gatherer →
+- **🤖 Multi-Agent Pipeline**: Planner → Evidence Gatherer → Documentation → Reviewer → Docs Lookup → Deployment
 - **📁 Codebase Analysis**: Upload folder (ZIP) or GitHub repo → Auto-detect framework, dependencies, configs
 - **📚 Context7 Documentation Integration**: Automatic framework/platform documentation lookups
 - **🔧 MCP Tool Integration**: Real-time deployment signals from Hugging Face Spaces, Vercel, Context7, and GitHub
 - **🚀 Multi-Platform Deployment**: Deploy to Vercel, Netlify, AWS, GCP, Azure, Railway, Render, Fly.io, Kubernetes, Docker
-- **🔗 Sponsor LLM Support**: Cross-validation using Google Gemini 2.0 and OpenAI GPT-4o-mini
 - **📝 Auto-Documentation**: Generates changelog entries, README snippets, and announcement drafts
 - **✅ Risk Assessment**: Automated review with confidence scoring and actionable findings

@@ -54,10 +57,9 @@ The Deployment Readiness Copilot is a productivity-focused, developer-centric to
 
 1. **Planner Agent (Claude)**: Analyzes project context and generates deployment readiness checklist
 2. **Evidence Agent (Claude + MCP)**: Gathers real deployment signals via MCP tools
-3. **
-4. **
-5. **
-6. **Documentation Lookup Agent (Context7)**: Looks up framework/platform docs for:
+3. **Documentation Agent (Claude)**: Generates deployment communications
+4. **Reviewer Agent (Claude)**: Final risk assessment with confidence scoring
+5. **Documentation Lookup Agent (Context7)**: Looks up framework/platform docs for:
    - Deployment guides
    - Dependency compatibility
    - Config validation

@@ -65,7 +67,7 @@ The Deployment Readiness Copilot is a productivity-focused, developer-centric to
    - Environment variables
    - Migration guides
    - Observability setup
-
+6. **Deployment Agent (GitHub)**: Prepares and executes deployment actions
 
 ### MCP Tools Used
 

@@ -79,8 +81,6 @@ The Deployment Readiness Copilot is a productivity-focused, developer-centric to
 
 1. **Set Environment Variables** (in HF Space Secrets):
    - `ANTHROPIC_API_KEY`: Your Claude API key (required)
-   - `GOOGLE_API_KEY` or `GEMINI_API_KEY`: For Gemini synthesis (optional)
-   - `OPENAI_API_KEY`: For OpenAI synthesis (optional)
    - `HF_TOKEN`: For Hugging Face MCP tools (optional)
    - `GITHUB_TOKEN`: For GitHub deployment actions (optional)
    - `GITHUB_REPO`: Repository in format `owner/repo` (optional, for deployments)

@@ -112,33 +112,38 @@ The system will:
 1. Generate a deployment readiness plan
 2. Gather evidence via MCP tools
 3. Lookup framework/platform documentation via Context7
-4.
-5.
-6.
-7. Provide final review with risk assessment
+4. Create documentation artifacts
+5. Prepare GitHub deployment actions (if configured)
+6. Provide final review with risk assessment
 
 ## 🎯 Hackathon Submission
 
-**Track**: `mcp-in-action-track-2` (MCP in Action)
+**Primary Track**: `mcp-in-action-track-2` (MCP in Action)
+
+**Multi-Track Compatibility**:
+- ✅ **Track 1 (Best MCP Use)**: Integrates 4+ MCP servers (Context7, HF Spaces, Vercel, GitHub) with comprehensive tool usage
+- ✅ **Track 2 (MCP in Action)**: Autonomous multi-agent behavior with planning, reasoning, and execution via MCP tools
+- ✅ **Track 3 (Community/Innovation)**: Developer productivity tool solving real-world deployment challenges
+- ✅ **Track 4 (Best Integration)**: Seamless integration across multiple platforms and services
 
 **Key Highlights**:
 - ✅ Autonomous multi-agent behavior with planning, reasoning, and execution
 - ✅ MCP servers used as tools (Context7, HF Spaces, Vercel, GitHub)
 - ✅ Context7 integration for comprehensive documentation lookups
 - ✅ GitHub deployment actions for direct deployment execution
-- ✅ Gradio
-- ✅ Sponsor LLM integration (Gemini, OpenAI)
+- ✅ Gradio app with MCP server support (`mcp_server=True`)
 - ✅ Real-world productivity use case for developers
+- ✅ 10 utility improvements covering security, cost, performance, CI/CD, monitoring, and collaboration
 
 ## 🔧 Technical Stack
 
 - **Gradio 5.49.1**: UI framework with MCP server support
 - **Anthropic Claude 3.5 Sonnet**: Primary reasoning engine
-- **Google Gemini 2.0 Flash**: Sponsor LLM for evidence synthesis
-- **OpenAI GPT-4o-mini**: Alternative sponsor LLM
 - **Hugging Face Hub**: MCP client for tool integration
 - **Context7 MCP**: Documentation lookup service
 - **GitHub API/MCP**: Deployment actions and workflow triggers
+- **Vercel MCP**: Deployment validation and management
+- **Python 3.10+**: Core runtime
 
 ## 📄 License
 
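The README hunk above leaves `ANTHROPIC_API_KEY` as the only required secret, with `HF_TOKEN`, `GITHUB_TOKEN`, and `GITHUB_REPO` optional. A minimal sketch of how a Space might validate that split at startup (the `load_config` helper and its behavior are assumptions, not code from this commit):

```python
import os

REQUIRED = ["ANTHROPIC_API_KEY"]
OPTIONAL = ["HF_TOKEN", "GITHUB_TOKEN", "GITHUB_REPO"]

def load_config(env=None):
    """Fail fast on missing required secrets; pick up optional ones if set."""
    env = os.environ if env is None else env
    missing = [k for k in REQUIRED if not env.get(k)]
    if missing:
        raise RuntimeError(f"Missing required secrets: {', '.join(missing)}")
    return {k: env[k] for k in REQUIRED + OPTIONAL if env.get(k)}
```

Failing early on the required key gives a clearer error than letting the Anthropic client raise mid-pipeline.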
app.py
CHANGED

@@ -119,7 +119,7 @@ def run_full_pipeline(
     update_readme: bool,
     stakeholders: str,
     environment: str
-) -> Tuple[Dict, str, str, str, str, str, str, str, str, str, str, str, str]:
+) -> Tuple[Dict, str, str, str, str, str, str, str, str, str, str, str, str, str, str]:
     """Run complete pipeline with analysis, readiness check, and deployment."""
 
     # Step 1: Analyze codebase if folder/repo provided

@@ -340,13 +340,6 @@ def run_full_pipeline(
         icon = "✅" if status == "completed" else "⏳" if status == "running" else "❌" if status == "failed" else "⚙️"
         progress_text += f"{icon} **{name}**: {message}\n"
 
-    sponsor_text = ""
-    if "sponsor_synthesis" in result:
-        sponsor_text = "\n".join([
-            f"**{k}**: {v}"
-            for k, v in result["sponsor_synthesis"].items()
-        ]) or "No sponsor LLM synthesis available"
-
     docs_text = ""
     if "docs_references" in result and result["docs_references"]:
         docs_refs = result["docs_references"]

@@ -398,7 +391,6 @@ def run_full_pipeline(
     return (
         result,
         full_analysis,
-        sponsor_text,
         docs_text,
         deploy_text,
         readme_update_status,

@@ -514,9 +506,6 @@ def build_interface() -> gr.Blocks:
         with gr.Column(scale=2):
             gr.Markdown("### 📊 Full Results")
             output = gr.JSON(label="Complete Output", height=400)
-        with gr.Column(scale=1):
-            gr.Markdown("### 🎯 Insights")
-            sponsor_output = gr.Textbox(label="Sponsor LLM Synthesis", lines=8, interactive=False)
 
     with gr.Row():
         with gr.Column():

@@ -598,7 +587,6 @@ def build_interface() -> gr.Blocks:
     outputs=[
         output,
         progress_output,
-        sponsor_output,
         docs_output,
         deploy_output,
         readme_status,
export_utils.py
CHANGED

@@ -97,16 +97,8 @@ def export_markdown(data: Dict[str, Any], filename: Optional[str] = None) -> str
         md_lines.append(f"- **{action_type}**: {message}")
     md_lines.append("")
 
-    #
-
-    md_lines.extend([
-        "## Sponsor LLM Synthesis",
-        ""
-    ])
-    for key, value in data["sponsor_synthesis"].items():
-        md_lines.append(f"### {key}")
-        md_lines.append(str(value))
-        md_lines.append("")
+    # Evidence synthesis (internal optimization, not exposed)
+    # Synthesis results are used internally but not exported to avoid confusion
 
     output = "\n".join(md_lines)
     if filename:
orchestrator.py
CHANGED

@@ -105,7 +105,7 @@ class ReadinessOrchestrator:
 
     # Synthesis
     if evidence and plan:
-        progress.update_agent("Synthesis", AgentStatus.RUNNING, "Cross-validating
+        progress.update_agent("Synthesis", AgentStatus.RUNNING, "Cross-validating evidence...")
         sponsor_synthesis = safe_execute(
             self.synthesis.run,
             evidence,

@@ -209,7 +209,7 @@ class ReadinessOrchestrator:
         deployment=DeploymentActions(**deployment_config) if deployment_config else None,
     )
     result = asdict(response)
-
+    # sponsor_synthesis used internally but not exposed in UI
     result["progress"] = progress.to_dict()
     result["partial_results"] = partial.to_dict()
     return result
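The orchestrator wraps the synthesis step in `safe_execute(self.synthesis.run, evidence, ...)`, whose implementation is not part of this commit. A plausible sketch of such a wrapper, assuming its job is to let one failing agent degrade rather than abort the pipeline (the `default` keyword is an assumption):

```python
def safe_execute(fn, *args, default=None, **kwargs):
    """Run one agent step; swallow exceptions and return a default
    so a single failing agent does not abort the whole pipeline."""
    try:
        return fn(*args, **kwargs)
    except Exception:
        return default
```

Passing the callable plus its positional arguments matches the call sites shown in the diff, e.g. `safe_execute(self.synthesis.run, evidence, plan)`.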