Chronos2 Server - Development Guide
Version: 3.0.0
Date: 2025-11-09
For: Developers contributing to or extending the project
Table of Contents
- Getting Started
- Development Setup
- Project Structure
- Development Workflow
- Adding Features
- Testing
- Code Style
- Debugging
- Contributing
Getting Started
Prerequisites
- Python: 3.10 or higher
- Git: For version control
- Docker: (Optional) For containerized development
- IDE: VS Code, PyCharm, or similar
Quick Setup
# 1. Clone repository
git clone https://github.com/yourusername/chronos2-server.git
cd chronos2-server
# 2. Create virtual environment
python3 -m venv venv
source venv/bin/activate # On Windows: venv\Scripts\activate
# 3. Install dependencies
pip install -r requirements.txt
# 4. Run tests
pytest tests/ -v
# 5. Start server
python -m uvicorn app.main_v3:app --reload --host 0.0.0.0 --port 8000
# 6. Open browser
# http://localhost:8000/docs
Development Setup
1. Environment Setup
Create .env file:
cp .env.example .env
Edit .env:
# API Configuration
API_TITLE=Chronos-2 Forecasting API
API_VERSION=3.0.0
API_PORT=8000
# Model Configuration
MODEL_ID=amazon/chronos-2
DEVICE_MAP=cpu
# CORS
CORS_ORIGINS=["http://localhost:3000","https://localhost:3001","*"]
# Logging
LOG_LEVEL=INFO
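These values are read when the server starts. As a rough illustration only (the project's real settings loading lives under app/infrastructure/config/ and may differ), they can be pulled in with python-dotenv:
# Illustration only: load .env values with python-dotenv.
import os
from dotenv import load_dotenv

load_dotenv()  # reads .env from the current working directory

API_PORT = int(os.getenv("API_PORT", "8000"))
MODEL_ID = os.getenv("MODEL_ID", "amazon/chronos-2")
LOG_LEVEL = os.getenv("LOG_LEVEL", "INFO")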
2. Install Development Dependencies
# Core dependencies
pip install -r requirements.txt
# Development tools
pip install \
pytest \
pytest-cov \
pytest-mock \
black \
flake8 \
mypy \
ipython
# Optional: Pre-commit hooks
pip install pre-commit
pre-commit install
3. IDE Configuration
VS Code
Create .vscode/settings.json:
{
"python.defaultInterpreterPath": "${workspaceFolder}/venv/bin/python",
"python.linting.enabled": true,
"python.linting.flake8Enabled": true,
"python.formatting.provider": "black",
"python.testing.pytestEnabled": true,
"python.testing.pytestArgs": ["tests"],
"editor.formatOnSave": true,
"editor.rulers": [88]
}
Create .vscode/launch.json:
{
"version": "0.2.0",
"configurations": [
{
"name": "Python: FastAPI",
"type": "python",
"request": "launch",
"module": "uvicorn",
"args": [
"app.main_v3:app",
"--reload",
"--host",
"0.0.0.0",
"--port",
"8000"
],
"jinja": true,
"justMyCode": false
},
{
"name": "Python: Current Test File",
"type": "python",
"request": "launch",
"module": "pytest",
"args": [
"${file}",
"-v"
]
}
]
}
PyCharm
- Open Project: File → Open → Select chronos2-server/
- Configure Interpreter: Settings → Project → Python Interpreter → Add → Virtualenv → Existing → venv/bin/python
- Enable pytest: Settings → Tools → Python Integrated Tools → Testing → pytest
- Run Configuration:
  - Click "Add Configuration"
  - Script path: select uvicorn
  - Parameters: app.main_v3:app --reload
Project Structure
chronos2-server/
├── app/                      # Application code
│   ├── api/                  # Presentation layer
│   │   ├── dependencies.py   # Dependency injection
│   │   ├── routes/           # API endpoints
│   │   │   ├── health.py
│   │   │   ├── forecast.py
│   │   │   ├── anomaly.py
│   │   │   └── backtest.py
│   │   └── middleware/
│   │
│   ├── application/          # Application layer
│   │   ├── dtos/             # Data Transfer Objects
│   │   ├── use_cases/        # Business workflows
│   │   └── mappers/          # DTO ↔ Domain mapping
│   │
│   ├── domain/               # Domain layer (Core)
│   │   ├── interfaces/       # Abstract interfaces
│   │   ├── models/           # Domain models
│   │   └── services/         # Business logic
│   │
│   ├── infrastructure/       # Infrastructure layer
│   │   ├── config/           # Configuration
│   │   └── ml/               # ML model implementations
│   │
│   ├── utils/                # Shared utilities
│   └── main_v3.py            # Application entry point
│
├── tests/                    # Test suite
│   ├── conftest.py           # Shared fixtures
│   ├── unit/                 # Unit tests
│   └── integration/          # Integration tests
│
├── docs/                     # Documentation
│   ├── ARCHITECTURE.md
│   ├── API.md
│   └── DEVELOPMENT.md (this file)
│
├── static/                   # Frontend (Excel Add-in)
│   └── taskpane/
│
├── requirements.txt          # Dependencies
├── pytest.ini                # Pytest configuration
├── .env.example              # Environment template
└── README.md                 # Project overview
Development Workflow
1. Feature Development Cycle
1. Create Feature Branch
   ↓
2. Write Tests (TDD)
   ↓
3. Implement Feature
   ↓
4. Run Tests
   ↓
5. Code Review
   ↓
6. Merge to Main
2. Git Workflow
# 1. Create feature branch
git checkout -b feature/add-prophet-model
# 2. Make changes
# ... edit files ...
# 3. Run tests
pytest tests/ -v
# 4. Commit
git add .
git commit -m "feat: Add Prophet model support
- Implement ProphetModel class
- Register in ModelFactory
- Add tests
"
# 5. Push
git push origin feature/add-prophet-model
# 6. Create Pull Request
gh pr create --title "Add Prophet model support" --body "..."
3. Commit Message Convention
Format: <type>(<scope>): <subject>
Types:
- feat: New feature
- fix: Bug fix
- docs: Documentation
- test: Tests
- refactor: Code refactoring
- style: Formatting
- chore: Maintenance
Examples:
feat(api): Add streaming forecast endpoint
fix(domain): Handle empty time series
docs(api): Update forecast examples
test(services): Add anomaly service tests
refactor(infrastructure): Extract model loading
Adding Features
Example: Add New ML Model
Step 1: Define Interface Implementation
Create app/infrastructure/ml/prophet_model.py:
from typing import List, Dict, Any
import pandas as pd
from prophet import Prophet
from app.domain.interfaces.forecast_model import IForecastModel
class ProphetModel(IForecastModel):
"""Prophet model implementation"""
def __init__(self, **kwargs):
self.model = Prophet(**kwargs)
self._fitted = False
def predict(
self,
context_df: pd.DataFrame,
prediction_length: int,
quantile_levels: List[float],
**kwargs
) -> pd.DataFrame:
"""Implement predict method"""
# Convert to Prophet format
prophet_df = pd.DataFrame({
'ds': pd.to_datetime(context_df['timestamp']),
'y': context_df['target']
})
# Fit model
if not self._fitted:
self.model.fit(prophet_df)
self._fitted = True
# Make forecast
future = self.model.make_future_dataframe(periods=prediction_length)
forecast = self.model.predict(future)
# Convert to expected format
result_df = pd.DataFrame({
'id': context_df['id'].iloc[0],
'timestamp': forecast['ds'].tail(prediction_length),
'predictions': forecast['yhat'].tail(prediction_length)
})
# Approximate quantiles from Prophet's prediction interval (rough mapping)
for q in quantile_levels:
col_name = f"{q:.3g}"
source = 'yhat_lower' if q < 0.5 else ('yhat_upper' if q > 0.5 else 'yhat')
result_df[col_name] = forecast[source].tail(prediction_length)
return result_df
def get_model_info(self) -> Dict[str, Any]:
return {
"type": "Prophet",
"provider": "Facebook",
"fitted": self._fitted
}
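The class above has to satisfy the IForecastModel contract from app/domain/interfaces/forecast_model.py. For orientation, the interface is assumed to look roughly like the sketch below; check the actual file for the authoritative signatures.
# Sketch of the expected interface, for orientation only.
from abc import ABC, abstractmethod
from typing import Any, Dict, List
import pandas as pd

class IForecastModel(ABC):
    @abstractmethod
    def predict(
        self,
        context_df: pd.DataFrame,
        prediction_length: int,
        quantile_levels: List[float],
        **kwargs
    ) -> pd.DataFrame:
        """Return a DataFrame with id, timestamp, predictions and quantile columns."""

    @abstractmethod
    def get_model_info(self) -> Dict[str, Any]:
        """Return metadata describing the model."""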
Step 2: Register in Factory
Edit app/infrastructure/ml/model_factory.py:
from app.infrastructure.ml.prophet_model import ProphetModel
class ModelFactory:
_models = {
"chronos2": ChronosModel,
"prophet": ProphetModel, # Add this line
}
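ModelFactory.create (used in later steps and in the docs) resolves a name to one of the registered classes. A minimal sketch of that lookup; the module paths in the imports are assumptions, not the project's exact layout:
# Sketch only: how a registry-based factory typically resolves model names.
from app.domain.interfaces.forecast_model import IForecastModel
from app.infrastructure.ml.chronos_model import ChronosModel   # assumed path
from app.infrastructure.ml.prophet_model import ProphetModel

class ModelFactory:
    _models = {
        "chronos2": ChronosModel,
        "prophet": ProphetModel,
    }

    @classmethod
    def create(cls, name: str, **kwargs) -> IForecastModel:
        model_cls = cls._models.get(name)
        if model_cls is None:
            raise ValueError(f"Unknown model: {name!r}")
        return model_cls(**kwargs)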
Step 3: Add Tests
Create tests/unit/test_prophet_model.py:
import pytest
import pandas as pd
from app.infrastructure.ml.prophet_model import ProphetModel
def test_prophet_model_initialization():
"""Test Prophet model creation"""
model = ProphetModel()
assert isinstance(model, ProphetModel)
assert model._fitted is False
def test_prophet_predict():
"""Test Prophet prediction"""
model = ProphetModel()
context_df = pd.DataFrame({
'id': ['series_0'] * 10,
'timestamp': pd.date_range('2025-01-01', periods=10),
'target': [100, 102, 105, 103, 108, 112, 115, 118, 120, 122]
})
result = model.predict(
context_df=context_df,
prediction_length=3,
quantile_levels=[0.1, 0.5, 0.9]
)
assert len(result) == 3
assert 'predictions' in result.columns
Step 4: Update Documentation
Edit docs/API.md:
### Available Models
- **chronos2**: Amazon Chronos-2 (default)
- **prophet**: Facebook Prophet (new!)
# Use Prophet model
model = ModelFactory.create("prophet")
service = ForecastService(model=model)
Step 5: Run Tests
pytest tests/unit/test_prophet_model.py -v
pytest tests/ -v # All tests
Example: Add New API Endpoint
Step 1: Create Use Case
Create app/application/use_cases/evaluate_use_case.py:
from app.domain.services.forecast_service import ForecastService
from app.application.dtos.evaluate_dtos import EvaluateInputDTO, EvaluateOutputDTO
class EvaluateUseCase:
"""Evaluate model on multiple test sets"""
def __init__(self, forecast_service: ForecastService):
self.forecast_service = forecast_service
def execute(self, input_dto: EvaluateInputDTO) -> EvaluateOutputDTO:
# Implement evaluation logic
...
return EvaluateOutputDTO(...)
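This step assumes app/application/dtos/evaluate_dtos.py exists. A minimal sketch of what those DTOs might look like; the field names here are placeholders, not the project's API contract:
# Hypothetical DTOs for the evaluate workflow; fields are illustrative only.
from typing import Dict, List
from pydantic import BaseModel

class EvaluateInputDTO(BaseModel):
    test_sets: List[Dict]        # raw test series to evaluate against
    prediction_length: int = 12  # horizon used for each evaluation run

class EvaluateOutputDTO(BaseModel):
    metrics: Dict[str, float]    # e.g. error metrics keyed by name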
Step 2: Create Route
Create app/api/routes/evaluate.py:
from fastapi import APIRouter, Depends
from app.api.dependencies import get_evaluate_use_case
from app.application.use_cases.evaluate_use_case import EvaluateUseCase
from app.application.dtos.evaluate_dtos import EvaluateInputDTO, EvaluateOutputDTO
router = APIRouter(prefix="/evaluate", tags=["Evaluate"])
@router.post("/", response_model=EvaluateOutputDTO)
async def evaluate_model(
request: EvaluateInputDTO,
use_case: EvaluateUseCase = Depends(get_evaluate_use_case)
):
"""Evaluate model performance"""
return use_case.execute(request)
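The route depends on get_evaluate_use_case, which you also need to add to app/api/dependencies.py. A hedged sketch of such a provider, assuming it follows the same wiring pattern as the existing dependencies:
# In app/api/dependencies.py -- illustrative wiring; names are assumptions.
from app.application.use_cases.evaluate_use_case import EvaluateUseCase
from app.domain.services.forecast_service import ForecastService
from app.infrastructure.ml.model_factory import ModelFactory

def get_evaluate_use_case() -> EvaluateUseCase:
    model = ModelFactory.create("chronos2")
    service = ForecastService(model=model)
    return EvaluateUseCase(forecast_service=service)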
Step 3: Register Route
Edit app/main_v3.py:
from app.api.routes import evaluate
app.include_router(evaluate.router)
Step 4: Add Tests
Create tests/integration/test_evaluate_endpoint.py:
from fastapi.testclient import TestClient
from app.main_v3 import app
client = TestClient(app)
def test_evaluate_endpoint():
response = client.post("/evaluate/", json={
"test_sets": [...]
})
assert response.status_code == 200
Testing
Running Tests
# All tests
pytest tests/ -v
# Specific test file
pytest tests/unit/test_forecast_service.py -v
# Specific test
pytest tests/unit/test_forecast_service.py::test_forecast_univariate_success -v
# With coverage
pytest tests/ --cov=app --cov-report=html
# Only unit tests
pytest tests/ -m unit
# Only integration tests
pytest tests/ -m integration
# Watch mode (requires pytest-watch)
ptw tests/
Writing Tests
Unit Test Template:
import pytest
from app.domain.services.forecast_service import ForecastService
@pytest.mark.unit
class TestForecastService:
"""Test suite for ForecastService"""
def test_forecast_success(self, mock_model, mock_transformer):
"""Test successful forecast"""
# Arrange
service = ForecastService(mock_model, mock_transformer)
series = TimeSeries(values=[100, 102, 105])
config = ForecastConfig(prediction_length=3)
# Act
result = service.forecast_univariate(series, config)
# Assert
assert len(result.timestamps) == 3
mock_model.predict.assert_called_once()
def test_forecast_with_invalid_data(self):
"""Test error handling"""
# Arrange
service = ForecastService(...)
# Act & Assert
with pytest.raises(ValueError, match="Invalid"):
service.forecast_univariate(...)
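The template relies on mock_model and mock_transformer fixtures from tests/conftest.py. If you need a starting point, a minimal sketch using unittest.mock (the project's real fixtures may return richer fakes):
# tests/conftest.py -- minimal fixture sketch; the actual fixtures may differ.
from unittest.mock import MagicMock
import pytest

@pytest.fixture
def mock_model():
    """Stand-in for IForecastModel with a stubbed predict()."""
    return MagicMock()

@pytest.fixture
def mock_transformer():
    """Stand-in for the data transformer used by ForecastService."""
    return MagicMock()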
Integration Test Template:
from fastapi.testclient import TestClient
from app.main_v3 import app
client = TestClient(app)
@pytest.mark.integration
def test_forecast_endpoint():
"""Test forecast endpoint E2E"""
# Arrange
payload = {
"values": [100, 102, 105],
"prediction_length": 3
}
# Act
response = client.post("/forecast/univariate", json=payload)
# Assert
assert response.status_code == 200
data = response.json()
assert "median" in data
assert len(data["median"]) == 3
Code Style
Formatting
Use Black (line length 88):
black app/ tests/
Use isort (import sorting):
isort app/ tests/
Linting
Flake8:
flake8 app/ tests/ --max-line-length=88
MyPy (type checking):
mypy app/
Style Guidelines
1. Type Hints:
# Good
def forecast(values: List[float], length: int) -> Dict[str, List[float]]:
...
# Bad
def forecast(values, length):
...
2. Docstrings:
def forecast_univariate(self, series: TimeSeries, config: ForecastConfig) -> ForecastResult:
"""
Generate forecast for univariate time series.
Args:
series: Time series to forecast
config: Forecast configuration
Returns:
Forecast result with predictions
Raises:
ValueError: If series is invalid
"""
...
3. Naming Conventions:
# Classes: PascalCase
class ForecastService:
...
# Functions/methods: snake_case
def forecast_univariate():
...
# Constants: UPPER_SNAKE_CASE
MAX_PREDICTION_LENGTH = 365
# Private methods: _leading_underscore
def _validate_input():
...
Debugging
Debug Server
Run with debugger:
# With pdb
python -m pdb -m uvicorn app.main_v3:app --reload
# With ipdb (better interface)
pip install ipdb
python -m ipdb -m uvicorn app.main_v3:app --reload
Add breakpoint in code:
def forecast_univariate(self, series, config):
import ipdb; ipdb.set_trace() # Debugger stops here
...
Logging
Add logging:
from app.utils.logger import setup_logger
logger = setup_logger(__name__)
def forecast_univariate(self, series, config):
logger.info(f"Forecasting series with {len(series.values)} points")
logger.debug(f"Config: {config}")
try:
result = self._do_forecast(series, config)
logger.info("Forecast successful")
return result
except Exception as e:
logger.error(f"Forecast failed: {e}", exc_info=True)
raise
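This assumes app/utils/logger.py exposes a setup_logger helper. As a reference only, a minimal version of such a helper could look like this (the actual implementation may add handlers or formatting):
# app/utils/logger.py -- minimal sketch of a setup_logger helper.
import logging
import os

def setup_logger(name: str) -> logging.Logger:
    logger = logging.getLogger(name)
    if not logger.handlers:
        handler = logging.StreamHandler()
        handler.setFormatter(
            logging.Formatter("%(asctime)s %(levelname)s %(name)s - %(message)s")
        )
        logger.addHandler(handler)
    logger.setLevel(os.getenv("LOG_LEVEL", "INFO"))
    return logger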
View logs:
# Set log level in .env
LOG_LEVEL=DEBUG
# Or environment variable
LOG_LEVEL=DEBUG python -m uvicorn app.main_v3:app --reload
Testing in Isolation
Test single component:
# test_debug.py
from unittest.mock import MagicMock
from app.domain.services.forecast_service import ForecastService
# Create simple test doubles in place of the real model and transformer
model = MagicMock()
transformer = MagicMock()
# Exercise the service directly, without going through the API layer
service = ForecastService(model, transformer)
result = service.forecast_univariate(...)
print(result)
Contributing
Pull Request Process
- Fork & Clone:
gh repo fork yourusername/chronos2-server --clone
cd chronos2-server
- Create Branch:
git checkout -b feature/my-feature
- Make Changes:
# Edit files
# Add tests
# Update docs
- Run Tests:
pytest tests/ -v
black app/ tests/
flake8 app/ tests/
- Commit:
git add .
git commit -m "feat: Add my feature"
- Push:
git push origin feature/my-feature
- Create PR:
gh pr create --title "Add my feature" --body "Description..."
Code Review Checklist
- Tests added/updated
- Documentation updated
- Code formatted (black, isort)
- Linting passes (flake8)
- Type hints added (mypy)
- Commit message follows convention
- PR description clear
Resources
Documentation
- Architecture: docs/ARCHITECTURE.md
- API: docs/API.md
- Interactive API: http://localhost:8000/docs
External Resources
- FastAPI: https://fastapi.tiangolo.com/
- Chronos: https://github.com/amazon-science/chronos-forecasting
- Pytest: https://docs.pytest.org/
- Clean Architecture: https://blog.cleancoder.com/uncle-bob/2012/08/13/the-clean-architecture.html
Getting Help
- GitHub Issues: Report bugs, request features
- Discussions: Ask questions, share ideas
- Email: support@example.com
Learning Path
For New Contributors
Week 1: Understanding
- Read README.md
- Read docs/ARCHITECTURE.md
- Run the project locally
- Explore the API at /docs
Week 2: Small Changes
- Fix a typo in docs
- Add a test case
- Improve error message
Week 3: Features
- Add new endpoint
- Implement new model
- Add new use case
Troubleshooting
Common Issues
Issue: ModuleNotFoundError: No module named 'app'
# Solution: Run from project root
cd /path/to/chronos2-server
python -m uvicorn app.main_v3:app --reload
Issue: Model loading fails
# Solution: Check internet connection, HuggingFace access
pip install --upgrade transformers
Issue: Tests fail with import errors
# Solution: Install test dependencies
pip install pytest pytest-cov pytest-mock
Issue: Port 8000 already in use
# Solution: Use different port
uvicorn app.main_v3:app --port 8001
Last Updated: 2025-11-09
Version: 3.0.0
Maintainer: Claude AI