Chronos2 Server - API Documentation
Version: 3.0.0
Base URL: https://your-server.hf.space or http://localhost:8000
Date: 2025-11-09
Table of Contents
- Overview
- Authentication
- Endpoints
- Data Models
- Examples
- Error Handling
- Rate Limiting
- Client Libraries
Overview
The Chronos2 API provides time series forecasting capabilities using Amazon's Chronos-2 model. The API supports:
- Univariate forecasting: Single time series prediction
- Multi-series forecasting: Multiple series in parallel
- Anomaly detection: Identify outliers in data
- Backtesting: Evaluate forecast accuracy
Base URLs
Production: https://your-app.hf.space
Local Development: http://localhost:8000
API Documentation
- Swagger UI: /docs
- ReDoc: /redoc
- OpenAPI Schema: /openapi.json
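The schema can also be fetched programmatically, for example to list the available endpoints (a minimal sketch using the local base URL):

```python
import requests

# Fetch the OpenAPI schema served by the API
schema = requests.get("http://localhost:8000/openapi.json").json()
print(schema["info"]["title"], schema["info"]["version"])
print("Endpoints:", sorted(schema["paths"].keys()))
```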
Authentication
Current: No authentication required (public API)
Future: API key authentication
curl -H "X-API-Key: your-api-key" https://api.example.com/forecast/univariate
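Once API keys are introduced, a Python client could attach the same header to every request via a session (a sketch; the header name is taken from the example above and may change):

```python
import requests

session = requests.Session()
# Hypothetical future auth header; not required by the current public API
session.headers.update({"X-API-Key": "your-api-key"})

resp = session.post(
    "http://localhost:8000/forecast/univariate",
    json={"values": [100, 102, 105], "prediction_length": 3},
)
print(resp.json()["median"])
```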
Endpoints
Health Check
GET /health
Check if the API is running.
Response:
{
"status": "healthy",
"timestamp": "2025-11-09T12:00:00Z"
}
Example:
curl http://localhost:8000/health
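Because a Space may cold-start, it can be useful to wait for the service before sending forecast requests. A small polling helper (the function name and timeout are arbitrary choices):

```python
import time
import requests

def wait_until_healthy(base_url="http://localhost:8000", timeout=60):
    """Poll GET /health until the API reports healthy or the timeout expires."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        try:
            r = requests.get(f"{base_url}/health", timeout=5)
            if r.ok and r.json().get("status") == "healthy":
                return True
        except requests.RequestException:
            pass  # server not reachable yet
        time.sleep(2)
    return False

print(wait_until_healthy())
```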
GET /health/info
Get system information.
Response:
{
"version": "3.0.0",
"model_id": "amazon/chronos-2",
"device": "cpu",
"python_version": "3.10.0"
}
Example:
curl http://localhost:8000/health/info
Forecasting
POST /forecast/univariate
Generate forecast for a single time series.
Request Body:
{
"values": [100.0, 102.0, 105.0, 103.0, 108.0, 112.0, 115.0],
"prediction_length": 3,
"quantile_levels": [0.1, 0.5, 0.9],
"freq": "D"
}
Parameters:
- values (required): Array of numeric values (min 3 points)
- prediction_length (required): Number of periods to forecast (≥ 1)
- quantile_levels (optional): Quantiles for prediction intervals (default: [0.1, 0.5, 0.9])
- freq (optional): Frequency ("D", "H", "M"; default: "D")
- timestamps (optional): Custom timestamps
- series_id (optional): Series identifier (default: "series_0")
Response:
{
"timestamps": ["8", "9", "10"],
"median": [118.5, 121.2, 124.0],
"quantiles": {
"0.1": [113.2, 115.8, 118.4],
"0.5": [118.5, 121.2, 124.0],
"0.9": [123.8, 126.6, 129.6]
}
}
Fields:
- timestamps: Future time points
- median: Point forecasts (50th percentile)
- quantiles: Prediction intervals at the specified quantile levels
Example:
curl -X POST http://localhost:8000/forecast/univariate \
-H "Content-Type: application/json" \
-d '{
"values": [100, 102, 105, 103, 108, 112, 115],
"prediction_length": 3,
"quantile_levels": [0.1, 0.5, 0.9],
"freq": "D"
}'
Python Example:
import requests
response = requests.post(
"http://localhost:8000/forecast/univariate",
json={
"values": [100, 102, 105, 103, 108, 112, 115],
"prediction_length": 3,
"quantile_levels": [0.1, 0.5, 0.9]
}
)
data = response.json()
print(f"Median forecast: {data['median']}")
print(f"90% upper bound: {data['quantiles']['0.9']}")
JavaScript Example:
const response = await fetch('http://localhost:8000/forecast/univariate', {
method: 'POST',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify({
values: [100, 102, 105, 103, 108, 112, 115],
prediction_length: 3,
quantile_levels: [0.1, 0.5, 0.9]
})
});
const data = await response.json();
console.log('Median:', data.median);
console.log('Quantiles:', data.quantiles);
POST /forecast/multi-series
Generate forecasts for multiple time series.
Request Body:
{
"series_list": [
{"values": [100, 102, 105, 108, 112]},
{"values": [200, 195, 190, 185, 180]},
{"values": [50, 52, 54, 56, 58]}
],
"prediction_length": 3,
"quantile_levels": [0.1, 0.5, 0.9],
"freq": "D"
}
Parameters:
- series_list (required): Array of series objects, each with a values array
- Other parameters are the same as for univariate forecasting
Response:
{
"results": [
{
"timestamps": ["5", "6", "7"],
"median": [115.0, 118.0, 121.0],
"quantiles": {
"0.1": [110.0, 113.0, 116.0],
"0.9": [120.0, 123.0, 126.0]
}
},
{
"timestamps": ["5", "6", "7"],
"median": [175.0, 170.0, 165.0],
"quantiles": {
"0.1": [170.0, 165.0, 160.0],
"0.9": [180.0, 175.0, 170.0]
}
},
{
"timestamps": ["5", "6", "7"],
"median": [60.0, 62.0, 64.0],
"quantiles": {
"0.1": [58.0, 60.0, 62.0],
"0.9": [62.0, 64.0, 66.0]
}
}
]
}
Example:
curl -X POST http://localhost:8000/forecast/multi-series \
-H "Content-Type: application/json" \
-d '{
"series_list": [
{"values": [100, 102, 105, 108, 112]},
{"values": [200, 195, 190, 185, 180]}
],
"prediction_length": 3
}'
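The univariate Python pattern carries over directly; a sketch for the multi-series endpoint:

```python
import requests

response = requests.post(
    "http://localhost:8000/forecast/multi-series",
    json={
        "series_list": [
            {"values": [100, 102, 105, 108, 112]},
            {"values": [200, 195, 190, 185, 180]},
        ],
        "prediction_length": 3,
    },
)
for i, result in enumerate(response.json()["results"]):
    print(f"Series {i} median: {result['median']}")
```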
Anomaly Detection
POST /anomaly/detect
Detect anomalies in recent observations.
Request Body:
{
"context_values": [100, 102, 105, 103, 108, 112, 115],
"recent_observed": [120, 200, 124],
"prediction_length": 3,
"quantile_low": 0.05,
"quantile_high": 0.95,
"freq": "D"
}
Parameters:
- context_values (required): Historical values used as context
- recent_observed (required): Recent observations to check
- prediction_length (required): Must equal the length of recent_observed
- quantile_low (optional): Lower bound quantile (default: 0.05)
- quantile_high (optional): Upper bound quantile (default: 0.95)
- freq (optional): Frequency (default: "D")
Response:
{
"anomalies": [
{
"index": 0,
"value": 120.0,
"expected": 118.5,
"lower_bound": 113.2,
"upper_bound": 123.8,
"is_anomaly": false,
"z_score": 0.3
},
{
"index": 1,
"value": 200.0,
"expected": 121.2,
"lower_bound": 115.8,
"upper_bound": 126.6,
"is_anomaly": true,
"z_score": 14.5
},
{
"index": 2,
"value": 124.0,
"expected": 124.0,
"lower_bound": 118.4,
"upper_bound": 129.6,
"is_anomaly": false,
"z_score": 0.0
}
],
"total_points": 3,
"num_anomalies": 1,
"anomaly_rate": 0.333
}
Fields:
- anomalies: Array of anomaly points
  - index: Position in recent_observed
  - value: Actual observed value
  - expected: Forecasted median
  - lower_bound: Lower prediction bound
  - upper_bound: Upper prediction bound
  - is_anomaly: True if the value falls outside the bounds
  - z_score: Standardized deviation from the expected value
- total_points: Total observations checked
- num_anomalies: Count of anomalies detected
- anomaly_rate: Proportion of anomalies
Example:
curl -X POST http://localhost:8000/anomaly/detect \
-H "Content-Type: application/json" \
-d '{
"context_values": [100, 102, 105, 103, 108, 112, 115],
"recent_observed": [120, 200, 124],
"prediction_length": 3,
"quantile_low": 0.05,
"quantile_high": 0.95
}'
Python Example:
import requests

response = requests.post(
    "http://localhost:8000/anomaly/detect",
    json={
        "context_values": [100, 102, 105, 103, 108, 112, 115],
        "recent_observed": [120, 200, 124],
        "prediction_length": 3
    }
)
data = response.json()

print(f"Total anomalies: {data['num_anomalies']}")
print(f"Anomaly rate: {data['anomaly_rate']:.1%}")

for anomaly in data['anomalies']:
    if anomaly['is_anomaly']:
        print(f"Anomaly at index {anomaly['index']}: {anomaly['value']}")
Backtesting
POST /backtest/simple
Evaluate forecast accuracy on historical data.
Request Body:
{
"context_values": [100, 102, 105, 103, 108],
"actual_values": [112, 115, 118],
"prediction_length": 3,
"quantile_levels": [0.1, 0.5, 0.9],
"freq": "D"
}
Parameters:
- context_values (required): Training data
- actual_values (required): Test data (ground truth)
- prediction_length (required): Must equal the length of actual_values
- quantile_levels (optional): Quantiles (default: [0.1, 0.5, 0.9])
- freq (optional): Frequency (default: "D")
Response:
{
"forecast": [110.5, 113.2, 116.0],
"actuals": [112.0, 115.0, 118.0],
"mae": 1.9,
"mape": 1.6,
"rmse": 2.1,
"errors": [-1.5, -1.8, -2.0]
}
Fields:
- forecast: Predicted values (median)
- actuals: Actual observed values
- mae: Mean Absolute Error
- mape: Mean Absolute Percentage Error (%)
- rmse: Root Mean Square Error
- errors: Residuals (actual - forecast)
Metrics Explanation:
- MAE: Average absolute difference (lower is better)
- MAPE: Average percentage error (lower is better)
- RMSE: Root mean squared error (penalizes large errors)
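For reference, the metrics can be recomputed locally from the forecast and actuals arrays using their standard definitions (a small sketch with only the standard library; the server's rounding may differ):

```python
import math

forecast = [110.5, 113.2, 116.0]
actuals = [112.0, 115.0, 118.0]

errors = [a - f for a, f in zip(actuals, forecast)]          # residuals
mae = sum(abs(e) for e in errors) / len(errors)              # mean absolute error
mape = 100 * sum(abs(e) / a for e, a in zip(errors, actuals)) / len(errors)
rmse = math.sqrt(sum(e * e for e in errors) / len(errors))   # root mean square error

print(f"MAE={mae:.2f}  MAPE={mape:.2f}%  RMSE={rmse:.2f}")
```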
Example:
curl -X POST http://localhost:8000/backtest/simple \
-H "Content-Type: application/json" \
-d '{
"context_values": [100, 102, 105, 103, 108],
"actual_values": [112, 115, 118],
"prediction_length": 3
}'
Python Example:
import requests
response = requests.post(
"http://localhost:8000/backtest/simple",
json={
"context_values": [100, 102, 105, 103, 108],
"actual_values": [112, 115, 118],
"prediction_length": 3
}
)
data = response.json()
print(f"MAE: {data['mae']:.2f}")
print(f"MAPE: {data['mape']:.2f}%")
print(f"RMSE: {data['rmse']:.2f}")
# Plot results
import matplotlib.pyplot as plt
plt.plot(data['actuals'], label='Actual')
plt.plot(data['forecast'], label='Forecast')
plt.legend()
plt.show()
Data Models
ForecastUnivariateRequest
{
values: number[]; // Min 1 item
prediction_length: number; // >= 1
quantile_levels?: number[]; // [0, 1], default: [0.1, 0.5, 0.9]
freq?: string; // Default: "D"
timestamps?: string[]; // Optional
series_id?: string; // Default: "series_0"
}
ForecastUnivariateResponse
{
timestamps: string[];
median: number[];
quantiles: {
[key: string]: number[]; // e.g., "0.1": [...]
};
}
AnomalyDetectionRequest
{
context_values: number[];
recent_observed: number[];
prediction_length: number; // Must equal len(recent_observed)
quantile_low?: number; // Default: 0.05
quantile_high?: number; // Default: 0.95
freq?: string; // Default: "D"
}
AnomalyPoint
{
index: number;
value: number;
expected: number;
lower_bound: number;
upper_bound: number;
is_anomaly: boolean;
z_score: number;
}
BacktestRequest
{
context_values: number[];
actual_values: number[];
prediction_length: number; // Must equal len(actual_values)
quantile_levels?: number[];
freq?: string;
}
BacktestResponse
{
forecast: number[];
actuals: number[];
mae: number;
mape: number;
rmse: number;
errors: number[];
}
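To validate payloads before sending them, the request models above can be mirrored client-side with Pydantic v2 (a sketch of ForecastUnivariateRequest only; this mirrors the documented schema rather than importing the server's actual model):

```python
from typing import Optional
from pydantic import BaseModel, Field

class ForecastUnivariateRequest(BaseModel):
    # Field names and defaults copied from the schema above
    values: list[float] = Field(min_length=1)
    prediction_length: int = Field(ge=1)
    quantile_levels: list[float] = [0.1, 0.5, 0.9]
    freq: str = "D"
    timestamps: Optional[list[str]] = None
    series_id: str = "series_0"

payload = ForecastUnivariateRequest(values=[100, 102, 105], prediction_length=3)
print(payload.model_dump(exclude_none=True))
```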
Examples
Complete Workflow: Forecast → Detect Anomalies → Backtest
import requests
import pandas as pd
BASE_URL = "http://localhost:8000"
# 1. Load your data
data = pd.read_csv("timeseries.csv")
values = data['value'].tolist()
# Split into train/test
train = values[:100]
test = values[100:110]
# 2. Generate forecast
forecast_response = requests.post(
f"{BASE_URL}/forecast/univariate",
json={
"values": train,
"prediction_length": len(test),
"quantile_levels": [0.05, 0.5, 0.95]
}
)
forecast = forecast_response.json()
print("Forecast median:", forecast['median'])
# 3. Detect anomalies in test data
anomaly_response = requests.post(
f"{BASE_URL}/anomaly/detect",
json={
"context_values": train,
"recent_observed": test,
"prediction_length": len(test),
"quantile_low": 0.05,
"quantile_high": 0.95
}
)
anomalies = anomaly_response.json()
print(f"Detected {anomalies['num_anomalies']} anomalies")
# 4. Evaluate forecast accuracy
backtest_response = requests.post(
f"{BASE_URL}/backtest/simple",
json={
"context_values": train,
"actual_values": test,
"prediction_length": len(test)
}
)
metrics = backtest_response.json()
print(f"MAE: {metrics['mae']:.2f}")
print(f"MAPE: {metrics['mape']:.2f}%")
Multi-Series Parallel Forecasting
import requests
import pandas as pd

# Load multiple series
products = ['A', 'B', 'C']
series_list = []
for product in products:
    data = pd.read_csv(f"product_{product}.csv")
    series_list.append({
        "values": data['sales'].tolist()
    })

# Forecast all series in parallel
response = requests.post(
    "http://localhost:8000/forecast/multi-series",
    json={
        "series_list": series_list,
        "prediction_length": 7
    }
)

results = response.json()['results']
for i, product in enumerate(products):
    print(f"Product {product} forecast: {results[i]['median']}")
Real-Time Anomaly Monitoring
import requests
import time
BASE_URL = "http://localhost:8000"
historical_data = []
while True:
    # Simulate receiving new data point
    new_value = get_latest_sensor_reading()
    historical_data.append(new_value)

    # Keep last 100 points as context
    context = historical_data[-100:]

    # Check if latest point is anomaly
    response = requests.post(
        f"{BASE_URL}/anomaly/detect",
        json={
            "context_values": context[:-1],
            "recent_observed": [new_value],
            "prediction_length": 1
        }
    )

    result = response.json()
    if result['anomalies'][0]['is_anomaly']:
        print(f"ALERT: Anomaly detected! Value: {new_value}")
        send_alert(new_value)

    time.sleep(60)  # Check every minute
Error Handling
Error Response Format
{
"detail": "Error message describing what went wrong"
}
HTTP Status Codes
| Code | Meaning | Example |
|---|---|---|
| 200 | Success | Forecast generated successfully |
| 400 | Bad Request | Invalid input data |
| 422 | Validation Error | Missing required fields |
| 500 | Internal Server Error | Model inference failed |
Common Errors
422 Validation Error
Cause: Invalid request data
Example:
{
"detail": [
{
"loc": ["body", "values"],
"msg": "field required",
"type": "value_error.missing"
}
]
}
Solution: Check request body structure
400 Bad Request
Cause: Business logic validation failed
Example:
{
"detail": "values cannot be empty"
}
Solution: Provide at least 3 data points
500 Internal Server Error
Cause: Model inference or processing error
Example:
{
"detail": "Internal server error"
}
Solution: Check logs, retry request
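5xx responses are often transient (for example while the model is loading), so a small retry wrapper can help. A sketch with simple backoff (retry counts and delays are arbitrary; 4xx errors are raised immediately so you can inspect the detail message):

```python
import time
import requests

def post_with_retry(url, payload, retries=3, backoff=2.0):
    """POST with retries on 5xx/connection errors; client errors (4xx) raise immediately."""
    for attempt in range(1, retries + 1):
        try:
            resp = requests.post(url, json=payload, timeout=30)
            if resp.status_code < 500:
                resp.raise_for_status()  # surfaces 4xx with the server's detail message
                return resp.json()
        except requests.ConnectionError:
            pass  # treat as transient, retry below
        time.sleep(backoff * attempt)
    raise RuntimeError(f"{url} still failing after {retries} attempts")

result = post_with_retry(
    "http://localhost:8000/forecast/univariate",
    {"values": [100, 102, 105], "prediction_length": 3},
)
print(result["median"])
```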
Rate Limiting
Current: No rate limiting
Future:
- 100 requests/minute per IP
- 1000 requests/hour per API key
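Nothing is enforced yet, but if you want to stay inside the planned per-IP budget, a naive client-side throttle is enough (the 100 requests/minute figure is taken from the plan above):

```python
import time

class RateLimiter:
    """Sleep between calls so that at most max_calls are made per period (seconds)."""

    def __init__(self, max_calls=100, period=60.0):
        self.min_interval = period / max_calls
        self._last_call = 0.0

    def wait(self):
        elapsed = time.monotonic() - self._last_call
        if elapsed < self.min_interval:
            time.sleep(self.min_interval - elapsed)
        self._last_call = time.monotonic()

limiter = RateLimiter()
# limiter.wait()  # call before each request once limits are in effect
```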
Client Libraries
Python
pip install requests pandas
import requests

class Chronos2Client:
    def __init__(self, base_url="http://localhost:8000"):
        self.base_url = base_url

    def forecast(self, values, prediction_length, **kwargs):
        response = requests.post(
            f"{self.base_url}/forecast/univariate",
            json={
                "values": values,
                "prediction_length": prediction_length,
                **kwargs
            }
        )
        response.raise_for_status()
        return response.json()

    def detect_anomalies(self, context, recent, **kwargs):
        response = requests.post(
            f"{self.base_url}/anomaly/detect",
            json={
                "context_values": context,
                "recent_observed": recent,
                "prediction_length": len(recent),
                **kwargs
            }
        )
        response.raise_for_status()
        return response.json()

# Usage
client = Chronos2Client()
result = client.forecast([100, 102, 105], 3)
print(result['median'])
JavaScript/TypeScript
npm install axios
import axios from 'axios';

class Chronos2Client {
  private baseURL: string;

  constructor(baseURL: string = 'http://localhost:8000') {
    this.baseURL = baseURL;
  }

  async forecast(
    values: number[],
    predictionLength: number,
    options: any = {}
  ) {
    const response = await axios.post(
      `${this.baseURL}/forecast/univariate`,
      {
        values,
        prediction_length: predictionLength,
        ...options
      }
    );
    return response.data;
  }

  async detectAnomalies(
    context: number[],
    recent: number[],
    options: any = {}
  ) {
    const response = await axios.post(
      `${this.baseURL}/anomaly/detect`,
      {
        context_values: context,
        recent_observed: recent,
        prediction_length: recent.length,
        ...options
      }
    );
    return response.data;
  }
}

// Usage
const client = new Chronos2Client();
const result = await client.forecast([100, 102, 105], 3);
console.log(result.median);
cURL Examples
Forecast:
curl -X POST http://localhost:8000/forecast/univariate \
-H "Content-Type: application/json" \
-d '{"values":[100,102,105],"prediction_length":3}'
Anomaly Detection:
curl -X POST http://localhost:8000/anomaly/detect \
-H "Content-Type: application/json" \
-d '{
"context_values":[100,102,105,103,108],
"recent_observed":[200],
"prediction_length":1
}'
Backtest:
curl -X POST http://localhost:8000/backtest/simple \
-H "Content-Type: application/json" \
-d '{
"context_values":[100,102,105],
"actual_values":[108,112],
"prediction_length":2
}'
Advanced Usage
Custom Quantile Levels
# Fine-grained prediction intervals
response = requests.post(
"http://localhost:8000/forecast/univariate",
json={
"values": [100, 102, 105, 103, 108],
"prediction_length": 5,
"quantile_levels": [0.05, 0.1, 0.25, 0.5, 0.75, 0.9, 0.95]
}
)
data = response.json()
# Now you have more granular intervals
print("5-95% interval:", data['quantiles']['0.05'], "-", data['quantiles']['0.95'])
print("25-75% interval:", data['quantiles']['0.25'], "-", data['quantiles']['0.75'])
Custom Timestamps
import pandas as pd
# Use actual dates
dates = pd.date_range('2025-01-01', periods=10, freq='D')
timestamps = dates.astype(str).tolist()
response = requests.post(
"http://localhost:8000/forecast/univariate",
json={
"values": [100, 102, 105, 103, 108, 112, 115, 118, 120, 122],
"timestamps": timestamps,
"prediction_length": 7,
"freq": "D"
}
)
# Response will have future dates
forecast = response.json()
print("Future dates:", forecast['timestamps'])
# ['2025-01-11', '2025-01-12', ...]
Support
Documentation: /docs, /redoc
Issues: GitHub Issues
Email: support@example.com
Last Updated: 2025-11-09
Version: 3.0.0
API Status: Production Ready