---
title: README
emoji: ⚡
colorFrom: yellow
colorTo: yellow
sdk: static
pinned: false
---
# Lightning Rod Labs
**Train with Timestamps, Not Labels.**
Lightning Rod Labs automatically generates high-quality training data from your documents or public sources — no labeling or extraction required. Define your criteria in Python, and our SDK treats real-world outcomes as the label, producing high-signal supervision at scale. Models learn causal factors, not just tokens. From raw data to deployable specialized models in hours.
[Website](https://lightningrod.ai/) · [SDK](https://github.com/lightning-rod-labs/lightningrod-python-sdk) · [Blog](https://blog.lightningrod.ai/)
---
## How It Works
We generate grounded, model-ready training data from documents or public sources (Google News, SEC filings, market data). You define your criteria in Python, and our SDK uses the **future as the label** — turning messy, timestamped history into training signal automatically. No labeling pipelines, no extraction, no human annotation.
This approach has been used to beat frontier models up to 100x larger on prediction-market benchmarks, and has shown strong results in financial forecasting, risk estimation, and policy prediction.
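To make the idea concrete, here is a minimal, illustrative sketch of the future-as-label construction. The class, function, and field names below are hypothetical (not the SDK's actual API): everything published before a cutoff becomes the model's context, and what actually happened in a window after the cutoff is resolved into the label.

```python
# Minimal sketch of "future as the label" — illustrative only, not the SDK API.
from dataclasses import dataclass
from datetime import datetime, timedelta


@dataclass
class Article:
    published_at: datetime
    text: str


def resolve_outcome(later_articles: list[str]) -> str:
    # Placeholder: user-defined criteria decide how the post-cutoff record
    # is scored (yes/no, a number, a category, ...). Here: a toy keyword check.
    return "yes" if any("acquired" in t.lower() for t in later_articles) else "no"


def build_example(articles: list[Article], cutoff: datetime, horizon: timedelta) -> dict:
    """Turn a timestamped archive into one (prompt, label) training pair.

    Everything published on or before `cutoff` becomes the model's context;
    what happened within `horizon` after the cutoff becomes the label.
    """
    context = [a.text for a in articles if a.published_at <= cutoff]
    outcome_window = [
        a.text for a in articles if cutoff < a.published_at <= cutoff + horizon
    ]
    prompt = "\n\n".join(context) + "\n\nWhat happens next?"
    label = resolve_outcome(outcome_window)
    return {"prompt": prompt, "label": label}
```

Because the label comes from the historical record itself, the same archive yields as many supervised examples as there are cutoffs, with no human annotation step.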
---
## Research & Results
- **[SEC Risk Prediction](https://arxiv.org/abs/2601.19189)**: Foresight learning on raw SEC filings trains a 32B model to outperform GPT-5 at predicting public company risks.
- **[Future-as-Label](https://arxiv.org/abs/2601.06336)**: AI learns directly from raw chronological news data at unlimited scale, with no human annotation.
- **[Outcome-based RL](https://arxiv.org/abs/2505.17989)** (TMLR): Using RL to improve LLM forecasting ability from real-world outcomes (a minimal reward sketch follows this list).
- **[Foresight-32B vs. Frontier LLMs](https://blog.lightningrod.ai/p/foresight-32b-beats-frontier-llms-on-live-polymarket-predictions)**: Live demonstration beating frontier models on Polymarket predictions.
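As a rough illustration of the outcome-based reward idea (a sketch, not the paper's implementation), a Brier-style score turns a probabilistic forecast plus the realized outcome into a scalar reward suitable for RL:

```python
# Illustrative only: score a probabilistic forecast against the realized outcome.
def brier_reward(predicted_prob: float, outcome: bool) -> float:
    """Higher is better: 0 for a maximally wrong forecast, 1 for a perfect one."""
    target = 1.0 if outcome else 0.0
    return 1.0 - (predicted_prob - target) ** 2


# Example: the model assigned 0.8 to an event.
print(brier_reward(0.8, True))   # ≈ 0.96 — the event happened
print(brier_reward(0.8, False))  # ≈ 0.36 — the event did not happen
```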
Foresight-32B is consistently top-ranked on [ForecastBench](https://www.forecastbench.org/tournament/) and [ProphetArena](https://www.prophetarena.co/leaderboard), despite being 10x-100x smaller than frontier models.