HIMANSHUKUMARJHA committed
Commit cd249cc · 1 Parent(s): 1cea319

Add sponsor LLM controls and secrets guidance

Files changed (7)
  1. README.md +34 -11
  2. agents.py +9 -2
  3. app.py +51 -4
  4. export_utils.py +11 -2
  5. orchestrator.py +2 -1
  6. schemas.py +1 -0
  7. sponsor_llms.py +41 -18
README.md CHANGED
@@ -26,15 +26,16 @@ tags:
 
 ## 🎯 Overview
 
-The Deployment Readiness Copilot is a productivity-focused, developer-centric tool that automates deployment readiness checks using a multi-agent architecture. It combines Claude's reasoning with MCP tool integration to provide comprehensive pre-deployment validation across multiple platforms.
+The Deployment Readiness Copilot is a productivity-focused, developer-centric tool that automates deployment readiness checks using a multi-agent architecture. It combines Claude's reasoning with sponsor LLM cross-checks and MCP tool integration to provide comprehensive pre-deployment validation across multiple platforms.
 
 ## ✨ Features
 
-- **🤖 Multi-Agent Pipeline**: Planner → Evidence Gatherer → Documentation → Reviewer → Docs Lookup → Deployment
+- **🤖 Multi-Agent Pipeline**: Planner → Evidence Gatherer → Synthesis → Documentation → Reviewer → Docs Lookup → Deployment
 - **📁 Codebase Analysis**: Upload folder (ZIP) or GitHub repo → Auto-detect framework, dependencies, configs
 - **📚 Context7 Documentation Integration**: Automatic framework/platform documentation lookups
 - **🔧 MCP Tool Integration**: Real-time deployment signals from Hugging Face Spaces, Vercel, Context7, and GitHub
 - **🚀 Multi-Platform Deployment**: Deploy to Vercel, Netlify, AWS, GCP, Azure, Railway, Render, Fly.io, Kubernetes, Docker
+- **🎓 Sponsor LLM Cross-Checks**: Gemini 2.0 Flash + OpenAI GPT-4o mini for synthesis and validation
 - **📝 Auto-Documentation**: Generates changelog entries, README snippets, and announcement drafts
 - **✅ Risk Assessment**: Automated review with confidence scoring and actionable findings
 
@@ -57,9 +58,10 @@ The Deployment Readiness Copilot is a productivity-focused, developer-centric to
 
 1. **Planner Agent (Claude)**: Analyzes project context and generates deployment readiness checklist
 2. **Evidence Agent (Claude + MCP)**: Gathers real deployment signals via MCP tools
-3. **Documentation Agent (Claude)**: Generates deployment communications
-4. **Reviewer Agent (Claude)**: Final risk assessment with confidence scoring
-5. **Documentation Lookup Agent (Context7)**: Looks up framework/platform docs for:
+3. **Synthesis Agent (Gemini/OpenAI)**: Cross-validates Claude's evidence to earn sponsor bonus points
+4. **Documentation Agent (Claude)**: Generates deployment communications
+5. **Reviewer Agent (Claude)**: Final risk assessment with confidence scoring
+6. **Documentation Lookup Agent (Context7)**: Looks up framework/platform docs for:
    - Deployment guides
    - Dependency compatibility
   - Config validation
@@ -67,7 +69,7 @@ The Deployment Readiness Copilot is a productivity-focused, developer-centric to
    - Environment variables
    - Migration guides
   - Observability setup
-6. **Deployment Agent (GitHub)**: Prepares and executes deployment actions
+7. **Deployment Agent (GitHub)**: Prepares and executes deployment actions
 
 ### MCP Tools Used
 
@@ -81,6 +83,10 @@ The Deployment Readiness Copilot is a productivity-focused, developer-centric to
 
 1. **Set Environment Variables** (in HF Space Secrets):
    - `ANTHROPIC_API_KEY`: Your Claude API key (required)
+   - `GOOGLE_API_KEY` or `GEMINI_API_KEY`: Enables Gemini sponsor synthesis
+   - `OPENAI_API_KEY`: Enables OpenAI sponsor synthesis
+   - `SPONSOR_LLM_PRIORITY`: Optional override (default `gemini,openai`)
+   - `GEMINI_MODEL`, `OPENAI_MODEL`: Optional model overrides
    - `HF_TOKEN`: For Hugging Face MCP tools (optional)
   - `GITHUB_TOKEN`: For GitHub deployment actions (optional)
   - `GITHUB_REPO`: Repository in format `owner/repo` (optional, for deployments)
@@ -112,9 +118,10 @@ The system will:
 1. Generate a deployment readiness plan
 2. Gather evidence via MCP tools
 3. Lookup framework/platform documentation via Context7
-4. Create documentation artifacts
-5. Prepare GitHub deployment actions (if configured)
-6. Provide final review with risk assessment
+4. Cross-validate evidence with sponsor LLMs
+5. Create documentation artifacts
+6. Prepare GitHub deployment actions (if configured)
+7. Provide final review with risk assessment
 
 ## 🎯 Hackathon Submission
 
@@ -132,6 +139,7 @@ The system will:
 - ✅ Context7 integration for comprehensive documentation lookups
 - ✅ GitHub deployment actions for direct deployment execution
 - ✅ Gradio app with MCP server support (`mcp_server=True`)
+- ✅ Sponsor LLM integration (Gemini, OpenAI) with configurable priority
 - ✅ Real-world productivity use case for developers
 - ✅ 10 utility improvements covering security, cost, performance, CI/CD, monitoring, and collaboration
 
@@ -139,12 +147,27 @@ The system will:
 
 - **Gradio 5.49.1**: UI framework with MCP server support
 - **Anthropic Claude 3.5 Sonnet**: Primary reasoning engine
+- **Google Gemini 2.0 Flash**: Sponsor cross-validation
+- **OpenAI GPT-4o mini**: Alternate sponsor cross-validation
 - **Hugging Face Hub**: MCP client for tool integration
 - **Context7 MCP**: Documentation lookup service
-- **GitHub API/MCP**: Deployment actions and workflow triggers
-- **Vercel MCP**: Deployment validation and management
+- **GitHub & Vercel MCP**: Deployment validation and workflow triggers
 - **Python 3.10+**: Core runtime
 
+## 🔐 Secrets & API Keys
+
+Add secrets in Hugging Face Space → **Settings → Repository secrets**:
+
+| Secret | Purpose |
+| --- | --- |
+| `ANTHROPIC_API_KEY` | Required for Claude agents |
+| `GOOGLE_API_KEY` / `GEMINI_API_KEY` | Enable Gemini sponsor synthesis |
+| `OPENAI_API_KEY` | Enable OpenAI sponsor synthesis |
+| `SPONSOR_LLM_PRIORITY` | Optional ordering, e.g. `gemini,openai` |
+| `GEMINI_MODEL`, `OPENAI_MODEL` | Optional model overrides |
+| `HF_TOKEN` | Optional Hugging Face MCP access |
+| `GITHUB_TOKEN`, `GITHUB_REPO`, `GITHUB_BRANCH` | GitHub deployment actions |
+
 ## 📝 License
 
 MIT License
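
A quick way to sanity-check the secrets documented above is a small environment probe. This sketch is illustrative and not part of the commit; `available_sponsors` is a made-up helper name, and the only assumptions are the variable names and the `gemini,openai` default listed in the README:

```python
import os

def available_sponsors() -> list[str]:
    """Report which sponsor LLMs the documented secrets would enable."""
    enabled = []
    if os.getenv("GOOGLE_API_KEY") or os.getenv("GEMINI_API_KEY"):
        enabled.append("gemini")
    if os.getenv("OPENAI_API_KEY"):
        enabled.append("openai")
    # SPONSOR_LLM_PRIORITY orders the enabled providers; default mirrors the README.
    priority = os.getenv("SPONSOR_LLM_PRIORITY", "gemini,openai").split(",")
    return [p.strip() for p in priority if p.strip() in enabled]

print(available_sponsors())
```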
agents.py CHANGED
@@ -183,11 +183,18 @@ class SynthesisAgent:
     def __init__(self) -> None:
         self.sponsor_client = SponsorLLMClient()
 
-    def run(self, evidence: EvidencePacket, plan_summary: str) -> Dict[str, str]:
+    def run(
+        self,
+        evidence: EvidencePacket,
+        plan_summary: str,
+        preferred_llms: Optional[List[str]] = None,
+    ) -> Dict[str, str]:
         """Synthesize evidence using sponsor LLMs for bonus points."""
         all_evidence = evidence.findings + evidence.signals
         synthesis = self.sponsor_client.cross_validate_evidence(
-            "\n".join(all_evidence[:5]), plan_summary
+            "\n".join(all_evidence[:5]),
+            plan_summary,
+            preferred_llms,
         )
         return synthesis
 
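For context, a minimal sketch of what the updated `SynthesisAgent.run` forwards to the sponsor client. The `EvidencePacket` below is a stand-in with only the two fields the diff reads (the real dataclass lives elsewhere in the repo), and the evidence strings are invented:

```python
from dataclasses import dataclass, field
from typing import List

# Stand-in for the repo's EvidencePacket; only the fields this agent touches.
@dataclass
class EvidencePacket:
    findings: List[str] = field(default_factory=list)
    signals: List[str] = field(default_factory=list)

evidence = EvidencePacket(
    findings=["CI green on main", "Dockerfile present", "No failing health checks"],
    signals=["Vercel preview build succeeded", "HF Space status: RUNNING"],
)

# run() concatenates findings + signals and forwards at most five lines,
# keeping the sponsor prompt small; preferred_llms then pins the provider order.
all_evidence = evidence.findings + evidence.signals
prompt_block = "\n".join(all_evidence[:5])
print(prompt_block)
```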
 
app.py CHANGED
@@ -38,6 +38,13 @@ rollback_manager = RollbackManager()
 monitoring_integration = MonitoringIntegration()
 deployment_monitor = DeploymentMonitor()
 
+SPONSOR_PRIORITY_MAP = {
+    "Auto (Gemini → OpenAI)": None,
+    "Gemini only": ["gemini"],
+    "OpenAI only": ["openai"],
+    "Both (merge results)": ["gemini", "openai"],
+}
+
 
 def analyze_input(
     upload_file: Optional[str],
@@ -118,8 +125,9 @@ def run_full_pipeline(
     deployment_platform: str,
     update_readme: bool,
     stakeholders: str,
-    environment: str
-) -> Tuple[Dict, str, str, str, str, str, str, str, str, str, str, str, str, str, str]:
+    environment: str,
+    sponsor_priority: str,
+) -> Tuple[Dict, str, str, str, str, str, str, str, str, str, str, str, str, str, str, str]:
     """Run complete pipeline with analysis, readiness check, and deployment."""
 
     # Step 1: Analyze codebase if folder/repo provided
@@ -236,6 +244,7 @@
 
     # Step 2: Run readiness pipeline
     stakeholders_list = [s.strip() for s in stakeholders.split(",") if s.strip()] if stakeholders else ["eng"]
+    sponsor_pref = SPONSOR_PRIORITY_MAP.get(sponsor_priority, None)
     payload = {
         "project_name": project_name or "Unnamed Service",
         "release_goal": release_goal or "Deploy to production",
@@ -243,6 +252,8 @@
         "infra_notes": infra_notes or None,
         "stakeholders": stakeholders_list,
     }
+    if sponsor_pref:
+        payload["sponsor_llms"] = sponsor_pref
 
     result = orchestrator.run_dict(payload)
 
@@ -340,6 +351,18 @@
         icon = "✅" if status == "completed" else "⏳" if status == "running" else "❌" if status == "failed" else "⏭️"
         progress_text += f"{icon} **{name}**: {message}\n"
 
+    sponsor_text = ""
+    if result.get("sponsor_synthesis"):
+        sponsor_lines = []
+        for name, details in result["sponsor_synthesis"].items():
+            sponsor_lines.append(f"**{name.replace('_', ' ').title()}**\n{details}")
+        sponsor_text = "\n\n".join(sponsor_lines)
+    else:
+        sponsor_text = (
+            "No sponsor LLM synthesis. Add `GOOGLE_API_KEY`/`GEMINI_API_KEY` or `OPENAI_API_KEY` "
+            "to your Hugging Face Space secrets to enable cross-checks."
+        )
+
     docs_text = ""
     if "docs_references" in result and result["docs_references"]:
         docs_refs = result["docs_references"]
@@ -391,6 +414,7 @@
     return (
         result,
         full_analysis,
+        sponsor_text,
         docs_text,
         deploy_text,
         readme_update_status,
@@ -494,6 +518,24 @@
                     value="production",
                     info="Target deployment environment"
                 )
+                sponsor_priority = gr.Dropdown(
+                    label="Sponsor LLM Priority",
+                    choices=list(SPONSOR_PRIORITY_MAP.keys()),
+                    value="Auto (Gemini → OpenAI)",
+                    info="Choose which sponsor APIs run for cross-validation"
+                )
+
+                with gr.Accordion("Secrets Setup (Hugging Face Spaces)", open=False):
+                    gr.Markdown(
+                        "**Add these secrets via Settings → Repository Secrets:**\n"
+                        "- `ANTHROPIC_API_KEY`: Required for Claude agents\n"
+                        "- `GOOGLE_API_KEY` or `GEMINI_API_KEY`: Enables Gemini sponsor synthesis\n"
+                        "- `OPENAI_API_KEY`: Enables OpenAI sponsor synthesis\n"
+                        "- `SPONSOR_LLM_PRIORITY`: Optional override, e.g. `gemini,openai`\n"
+                        "- `OPENAI_MODEL`, `GEMINI_MODEL`: Optional custom model IDs\n"
+                        "- `GITHUB_TOKEN`, `GITHUB_REPO`, `GITHUB_BRANCH`: GitHub deployments\n"
+                        "- `HF_TOKEN`: Access Hugging Face MCP tools"
+                    )
 
             # Run Pipeline
             run_button = gr.Button("🚀 Run Full Pipeline & Deploy", variant="primary", size="lg")
@@ -506,6 +548,9 @@
             with gr.Column(scale=2):
                 gr.Markdown("### 📋 Full Results")
                 output = gr.JSON(label="Complete Output", height=400)
+            with gr.Column(scale=1):
+                gr.Markdown("### 🎯 Sponsor Insights")
+                sponsor_output = gr.Textbox(label="Sponsor LLM Cross-Checks", lines=8, interactive=False)
 
         with gr.Row():
             with gr.Column():
@@ -582,11 +627,13 @@
                 deployment_platform,
                 update_readme,
                 stakeholders,
-                environment
+                environment,
+                sponsor_priority,
             ],
             outputs=[
                 output,
                 progress_output,
+                sponsor_output,
                 docs_output,
                 deploy_output,
                 readme_status,
@@ -599,7 +646,7 @@
                 monitoring_output,
                 collaboration_output,
                 json_export,
-                markdown_export
+                markdown_export,
             ]
         )
 
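A stripped-down sketch of how the new dropdown value reaches the orchestrator payload. `build_payload` is illustrative (the real code lives inline in `run_full_pipeline`) and only mirrors the `SPONSOR_PRIORITY_MAP` lookup added above:

```python
SPONSOR_PRIORITY_MAP = {
    "Auto (Gemini → OpenAI)": None,
    "Gemini only": ["gemini"],
    "OpenAI only": ["openai"],
    "Both (merge results)": ["gemini", "openai"],
}

def build_payload(sponsor_priority: str) -> dict:
    """Attach sponsor_llms only when the user picked an explicit override."""
    payload = {"project_name": "Demo Service", "release_goal": "Deploy to production"}
    sponsor_pref = SPONSOR_PRIORITY_MAP.get(sponsor_priority)
    if sponsor_pref:  # None (Auto) falls back to SPONSOR_LLM_PRIORITY / server defaults
        payload["sponsor_llms"] = sponsor_pref
    return payload

print(build_payload("Auto (Gemini → OpenAI)"))  # no sponsor_llms key
print(build_payload("OpenAI only"))             # includes sponsor_llms=['openai']
```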
 
export_utils.py CHANGED
@@ -97,8 +97,17 @@ def export_markdown(data: Dict[str, Any], filename: Optional[str] = None) -> str
             md_lines.append(f"- **{action_type}**: {message}")
         md_lines.append("")
 
-    # Evidence synthesis (internal optimization, not exposed)
-    # Synthesis results are used internally but not exported to avoid confusion
+    # Sponsor synthesis (optional)
+    sponsor_data = data.get("sponsor_synthesis")
+    if sponsor_data:
+        md_lines.extend([
+            "## Sponsor LLM Cross-Checks",
+            "",
+        ])
+        for key, value in sponsor_data.items():
+            md_lines.append(f"### {key.replace('_', ' ').title()}")
+            md_lines.append(str(value))
+            md_lines.append("")
 
     output = "\n".join(md_lines)
     if filename:
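
A sketch of the Markdown this new branch emits for a sample `sponsor_synthesis` dict. The sample data is invented; the key names and formatting mirror the diff:

```python
# Illustrative input shaped like the orchestrator output consumed by export_markdown.
sample = {
    "sponsor_synthesis": {
        "gemini_synthesis": "Evidence is consistent; flag the missing rollback plan.",
        "openai_synthesis": "Agrees with the plan; suggests verifying env vars.",
    }
}

md_lines = []
sponsor_data = sample.get("sponsor_synthesis")
if sponsor_data:
    md_lines.extend(["## Sponsor LLM Cross-Checks", ""])
    for key, value in sponsor_data.items():
        md_lines.append(f"### {key.replace('_', ' ').title()}")  # e.g. "Gemini Synthesis"
        md_lines.append(str(value))
        md_lines.append("")

print("\n".join(md_lines))
```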
orchestrator.py CHANGED
@@ -110,6 +110,7 @@ class ReadinessOrchestrator:
             self.synthesis.run,
             evidence,
             plan.summary,
+            request.sponsor_llms,
             default={},
             error_message="Synthesis agent failed"
         )
@@ -209,7 +210,7 @@ class ReadinessOrchestrator:
            deployment=DeploymentActions(**deployment_config) if deployment_config else None,
         )
         result = asdict(response)
-        # sponsor_synthesis used internally but not exposed in UI
+        result["sponsor_synthesis"] = sponsor_synthesis
         result["progress"] = progress.to_dict()
         result["partial_results"] = partial.to_dict()
         return result
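
The synthesis call keeps the existing guarded-call pattern, so a failed sponsor request degrades to an empty dict rather than breaking the pipeline. A rough sketch of that behaviour; `run_with_default` is a stand-in for whatever helper the orchestrator actually uses (its name is not shown in this diff), and the lambda stands in for `synthesis.run`:

```python
from typing import Any, Callable

def run_with_default(fn: Callable[..., Any], *args, default: Any, error_message: str) -> Any:
    """Guarded agent call: log and return `default` instead of raising."""
    try:
        return fn(*args)
    except Exception as exc:
        print(f"{error_message}: {exc}")
        return default

# The synthesis step now also forwards request.sponsor_llms; on failure the
# pipeline continues and result["sponsor_synthesis"] is simply {}.
sponsor_synthesis = run_with_default(
    lambda evidence, summary, prefs: {"gemini_synthesis": "ok"},
    ["evidence line"], "plan summary", ["gemini"],
    default={},
    error_message="Synthesis agent failed",
)
print(sponsor_synthesis)
```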
schemas.py CHANGED
@@ -71,6 +71,7 @@ class ReadinessRequest:
     code_summary: str
     infra_notes: Optional[str] = None
     stakeholders: Optional[List[str]] = None
+    sponsor_llms: Optional[List[str]] = None
 
 
 @dataclass(slots=True)
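
A minimal usage sketch of the widened request schema; the dataclass below is a stand-in that mirrors only the fields visible in this hunk, and the field values are invented:

```python
from dataclasses import dataclass
from typing import List, Optional

# Stand-in mirroring the ReadinessRequest fields shown in this hunk.
@dataclass(slots=True)
class ReadinessRequest:
    code_summary: str
    infra_notes: Optional[str] = None
    stakeholders: Optional[List[str]] = None
    sponsor_llms: Optional[List[str]] = None

req = ReadinessRequest(
    code_summary="FastAPI service with Dockerfile",
    stakeholders=["eng"],
    sponsor_llms=["gemini", "openai"],  # optional; None keeps env-driven defaults
)
print(req)
```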
sponsor_llms.py CHANGED
@@ -18,12 +18,23 @@ except ImportError:
     OPENAI_AVAILABLE = False
 
 
+def _normalize_priority(priority: Optional[List[str] | str]) -> List[str]:
+    """Normalize preferred sponsor list."""
+    if priority is None:
+        env_priority = os.getenv("SPONSOR_LLM_PRIORITY", "gemini,openai")
+        priority = env_priority
+    if isinstance(priority, str):
+        priority = [item.strip().lower() for item in priority.split(",") if item.strip()]
+    return [p for p in priority if p in {"gemini", "openai", "both"}]
+
+
 class SponsorLLMClient:
     """Unified interface for sponsor LLMs (Gemini, OpenAI)."""
 
     def __init__(self):
         self.gemini_client = None
         self.openai_client = None
+        self.default_priority = _normalize_priority(None)
         self._init_gemini()
         self._init_openai()
 
@@ -36,7 +47,8 @@ class SponsorLLMClient:
         if api_key:
             try:
                 genai.configure(api_key=api_key)
-                self.gemini_client = genai.GenerativeModel("gemini-2.0-flash-exp")
+                model_id = os.getenv("GEMINI_MODEL", "gemini-2.0-flash-exp")
+                self.gemini_client = genai.GenerativeModel(model_id)
             except Exception as e:
                 print(f"Gemini init failed: {e}")
 
@@ -67,7 +79,12 @@
         )
 
         try:
-            response = self.gemini_client.generate_content(prompt)
+            model = os.getenv("GEMINI_MODEL", "gemini-2.0-flash-exp")
+            response = self.gemini_client.generate_content(
+                prompt,
+                generation_config={"temperature": 0.2},
+                safety_settings=None
+            )
             return response.text.strip()
         except Exception as e:
             return f"[Gemini error: {e}]"
@@ -101,24 +118,30 @@
             return f"[OpenAI error: {e}]"
 
     def cross_validate_evidence(
-        self, claude_evidence: str, plan_summary: str
+        self, claude_evidence: str, plan_summary: str, preferred: Optional[List[str] | str] = None
     ) -> Dict[str, str]:
         """Use sponsor LLMs to cross-validate Claude's evidence analysis."""
-        results = {}
-
-        # Try Gemini first (sponsor priority)
-        if self.gemini_client:
-            gemini_synthesis = self.synthesize_with_gemini(
-                [claude_evidence], plan_summary
-            )
-            results["gemini_synthesis"] = gemini_synthesis
-
-        # Fallback to OpenAI if Gemini unavailable
-        if not results and self.openai_client:
-            openai_synthesis = self.synthesize_with_openai(
-                [claude_evidence], plan_summary
-            )
-            results["openai_synthesis"] = openai_synthesis
+        order = _normalize_priority(preferred) or self.default_priority
+        results: Dict[str, str] = {}
+
+        for provider in order:
+            if provider == "gemini" and self.gemini_client:
+                results["gemini_synthesis"] = self.synthesize_with_gemini(
+                    [claude_evidence], plan_summary
+                )
+            elif provider == "openai" and self.openai_client:
+                results["openai_synthesis"] = self.synthesize_with_openai(
+                    [claude_evidence], plan_summary
+                )
+            elif provider == "both":
+                if self.gemini_client:
+                    results["gemini_synthesis"] = self.synthesize_with_gemini(
+                        [claude_evidence], plan_summary
+                    )
+                if self.openai_client:
+                    results["openai_synthesis"] = self.synthesize_with_openai(
+                        [claude_evidence], plan_summary
+                    )
 
         return results
 
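
To see what `_normalize_priority` accepts, the function can be exercised standalone. It is restated from the diff above so the snippet runs on its own; the example inputs are invented:

```python
import os
from typing import List, Optional

# Restated from the diff for a self-contained demo.
def _normalize_priority(priority: Optional[List[str] | str]) -> List[str]:
    if priority is None:
        priority = os.getenv("SPONSOR_LLM_PRIORITY", "gemini,openai")
    if isinstance(priority, str):
        priority = [item.strip().lower() for item in priority.split(",") if item.strip()]
    return [p for p in priority if p in {"gemini", "openai", "both"}]

print(_normalize_priority("OpenAI, Gemini"))  # ['openai', 'gemini']
print(_normalize_priority(["both"]))          # ['both'] → both providers run
print(_normalize_priority("claude"))          # [] → caller falls back to default_priority
```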