Genesis AI Code Bench
Developed by: Within Us AI
Generated: 2026-01-01
A lightweight evaluation harness for Genesis-style datasets that focuses on the signals developers care about in practice (see the sketch after this list):
- Structure validity (JSON parsing, required fields, schema consistency)
- Tool-trace validity (JSON array of tool calls with tool+args)
- Diff validity (patch_diff blocks contain recognizable unified-diff markers)
- Self-grade validity (score bounds, confidence bounds, presence of notes)
- Governance presence (audit/tests flags when expected)
- Economics presence (cost budgets + latency targets)
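For orientation, here is a minimal sketch of what the per-row checks above might look like, assuming rows carry tool_trace, patch_diff, and self_grade fields. The function names, the [0, 1] score bounds, and the exact field layout are illustrative assumptions, not bench.py's actual internals.

```python
import json

def tool_trace_valid(raw: str) -> bool:
    """Valid if the trace parses to a JSON array of objects that
    each carry a 'tool' name and an 'args' mapping."""
    try:
        trace = json.loads(raw)
    except (json.JSONDecodeError, TypeError):
        return False
    return isinstance(trace, list) and all(
        isinstance(call, dict)
        and isinstance(call.get("tool"), str)
        and isinstance(call.get("args"), dict)
        for call in trace
    )

# Markers of a unified diff: file headers plus at least one hunk header.
DIFF_MARKERS = ("--- ", "+++ ", "@@ ")

def patch_diff_valid(diff: str) -> bool:
    """Valid if the patch_diff block contains recognizable
    unified-diff markers."""
    return isinstance(diff, str) and all(m in diff for m in DIFF_MARKERS)

def self_grade_valid(grade: dict) -> bool:
    """Valid if score and confidence sit in bounds and non-empty
    notes are present. The [0, 1] bounds are an assumption."""
    return (
        isinstance(grade, dict)
        and isinstance(grade.get("score"), (int, float))
        and 0.0 <= grade["score"] <= 1.0
        and isinstance(grade.get("confidence"), (int, float))
        and 0.0 <= grade["confidence"] <= 1.0
        and bool(grade.get("notes"))
    )
```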
This bench is intentionally fast and offline-friendly. It does not execute repo tests; it scores dataset quality and readiness for downstream training workflows.
Quick start
python bench.py --jsonl path/to/train.jsonl --max_rows 5000
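The harness reads one JSON object per line. Below is a hypothetical row that satisfies the checks above, written with Python's json module so the snippet runs as-is; the field names are inferred from the metric names in this README, not a published schema, and real rows may carry additional fields.

```python
import json

# One illustrative Genesis-style row; every field name here is an
# assumption mirroring the checks listed above.
row = {
    "prompt": "Fix the failing parser test",
    "tool_trace": json.dumps([
        {"tool": "read_file", "args": {"path": "parser.py"}},
    ]),
    "patch_diff": "--- a/parser.py\n+++ b/parser.py\n@@ -1 +1 @@\n-old\n+new\n",
    "self_grade": {"score": 0.9, "confidence": 0.8, "notes": "patch compiles"},
    "governance": {"audit": True, "tests": True},
    "economics": {"cost_budget_usd": 0.05, "latency_target_ms": 1500},
}

# Append the row as a single JSONL line.
with open("train.jsonl", "a", encoding="utf-8") as f:
    f.write(json.dumps(row) + "\n")
```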
Metrics produced
- format_valid_rate
- required_fields_rate
- tool_trace_valid_rate
- patch_diff_valid_rate
- self_grade_valid_rate
- governance_present_rate
- economics_present_rate
- uniqueness_rate (hash-based)
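Of these, only uniqueness_rate hints at its method ("hash-based"). One plausible reading, sketched below: hash a canonical serialization of each row and report the fraction of distinct hashes. The canonicalization details are an assumption.

```python
import hashlib
import json

def uniqueness_rate(rows: list[dict]) -> float:
    """Fraction of rows whose canonical JSON serialization is unique.
    Canonicalizing via sorted keys is an assumption, not bench.py's
    documented behavior."""
    if not rows:
        return 0.0
    hashes = {
        hashlib.sha256(
            json.dumps(row, sort_keys=True).encode("utf-8")
        ).hexdigest()
        for row in rows
    }
    return len(hashes) / len(rows)
```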
Recommended use
- Run before upload to ensure Viewer-ready consistency
- Run after merges to confirm schema stability
- Compare v1.0 vs v1.1 add-on impact (a delta sketch follows this list)
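For the v1.0-vs-v1.1 comparison, a small delta helper can make regressions obvious. This sketch assumes you have captured each run's metrics as a dict keyed by the metric names listed above; bench.py's actual output format is not specified here, so treat the input shape as an assumption.

```python
def metric_deltas(base: dict, candidate: dict) -> dict:
    """Per-metric difference (candidate minus base) for any metric
    present in both runs, e.g. v1.0 vs v1.0 plus the v1.1 add-on."""
    return {
        name: round(candidate[name] - base[name], 4)
        for name in base
        if name in candidate
    }

# Example with made-up numbers:
v1_0 = {"format_valid_rate": 0.97, "uniqueness_rate": 0.99}
v1_1 = {"format_valid_rate": 0.99, "uniqueness_rate": 0.98}
print(metric_deltas(v1_0, v1_1))
# {'format_valid_rate': 0.02, 'uniqueness_rate': -0.01}
```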