
πŸ‘¨β€πŸ’» Chronos2 Server - Development Guide

Version: 3.0.0
Date: 2025-11-09
For: Developers contributing to or extending the project


πŸ“‹ Table of Contents

  1. Getting Started
  2. Development Setup
  3. Project Structure
  4. Development Workflow
  5. Adding Features
  6. Testing
  7. Code Style
  8. Debugging
  9. Contributing

πŸš€ Getting Started

Prerequisites

  • Python: 3.10 or higher
  • Git: For version control
  • Docker: (Optional) For containerized development
  • IDE: VS Code, PyCharm, or similar

Quick Setup

# 1. Clone repository
git clone https://github.com/yourusername/chronos2-server.git
cd chronos2-server

# 2. Create virtual environment
python3 -m venv venv
source venv/bin/activate  # On Windows: venv\Scripts\activate

# 3. Install dependencies
pip install -r requirements.txt

# 4. Run tests
pytest tests/ -v

# 5. Start server
python -m uvicorn app.main_v3:app --reload --host 0.0.0.0 --port 8000

# 6. Open browser
# http://localhost:8000/docs

πŸ› οΈ Development Setup

1. Environment Setup

Create .env file:

cp .env.example .env

Edit .env:

# API Configuration
API_TITLE=Chronos-2 Forecasting API
API_VERSION=3.0.0
API_PORT=8000

# Model Configuration
MODEL_ID=amazon/chronos-2
DEVICE_MAP=cpu

# CORS ("*" allows every origin; drop it outside local development)
CORS_ORIGINS=["http://localhost:3000","https://localhost:3001","*"]

# Logging
LOG_LEVEL=INFO
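
These variables are consumed by the configuration module under app/infrastructure/config/. As a rough sketch of the mapping (an illustration only; the actual loader may use pydantic settings or similar):

# Sketch: how the .env values might map to Python settings
import json
import os

API_PORT = int(os.getenv("API_PORT", "8000"))
MODEL_ID = os.getenv("MODEL_ID", "amazon/chronos-2")
CORS_ORIGINS = json.loads(os.getenv("CORS_ORIGINS", '["*"]'))
LOG_LEVEL = os.getenv("LOG_LEVEL", "INFO")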

2. Install Development Dependencies

# Core dependencies
pip install -r requirements.txt

# Development tools
pip install \
  pytest \
  pytest-cov \
  pytest-mock \
  black \
  flake8 \
  mypy \
  ipython

# Optional: Pre-commit hooks
pip install pre-commit
pre-commit install
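
pre-commit reads its hooks from a .pre-commit-config.yaml at the repository root. A minimal sketch wiring up black and flake8 (pin rev to the versions you actually use):

# .pre-commit-config.yaml
repos:
  - repo: https://github.com/psf/black
    rev: 24.3.0
    hooks:
      - id: black
  - repo: https://github.com/PyCQA/flake8
    rev: 7.0.0
    hooks:
      - id: flake8
        args: ["--max-line-length=88"]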

3. IDE Configuration

VS Code

Create .vscode/settings.json:

{
  "python.defaultInterpreterPath": "${workspaceFolder}/venv/bin/python",
  "python.linting.enabled": true,
  "python.linting.flake8Enabled": true,
  "python.formatting.provider": "black",
  "python.testing.pytestEnabled": true,
  "python.testing.pytestArgs": ["tests"],
  "editor.formatOnSave": true,
  "editor.rulers": [88]
}

Create .vscode/launch.json:

{
  "version": "0.2.0",
  "configurations": [
    {
      "name": "Python: FastAPI",
      "type": "python",
      "request": "launch",
      "module": "uvicorn",
      "args": [
        "app.main_v3:app",
        "--reload",
        "--host",
        "0.0.0.0",
        "--port",
        "8000"
      ],
      "jinja": true,
      "justMyCode": false
    },
    {
      "name": "Python: Current Test File",
      "type": "python",
      "request": "launch",
      "module": "pytest",
      "args": [
        "${file}",
        "-v"
      ]
    }
  ]
}

PyCharm

  1. Open Project: File β†’ Open β†’ Select chronos2-server/
  2. Configure Interpreter: Settings β†’ Project β†’ Python Interpreter β†’ Add β†’ Virtualenv β†’ Existing β†’ venv/bin/python
  3. Enable pytest: Settings β†’ Tools β†’ Python Integrated Tools β†’ Testing β†’ pytest
  4. Run Configuration:
    • Click "Add Configuration"
    • Choose "Module name" (instead of "Script path") and enter uvicorn
    • Parameters: app.main_v3:app --reload

πŸ“‚ Project Structure

chronos2-server/
β”œβ”€β”€ app/                          # Application code
β”‚   β”œβ”€β”€ api/                      # Presentation layer
β”‚   β”‚   β”œβ”€β”€ dependencies.py       # Dependency injection
β”‚   β”‚   β”œβ”€β”€ routes/               # API endpoints
β”‚   β”‚   β”‚   β”œβ”€β”€ health.py
β”‚   β”‚   β”‚   β”œβ”€β”€ forecast.py
β”‚   β”‚   β”‚   β”œβ”€β”€ anomaly.py
β”‚   β”‚   β”‚   └── backtest.py
β”‚   β”‚   └── middleware/
β”‚   β”‚
β”‚   β”œβ”€β”€ application/              # Application layer
β”‚   β”‚   β”œβ”€β”€ dtos/                 # Data Transfer Objects
β”‚   β”‚   β”œβ”€β”€ use_cases/            # Business workflows
β”‚   β”‚   └── mappers/              # DTO ↔ Domain mapping
β”‚   β”‚
β”‚   β”œβ”€β”€ domain/                   # Domain layer (Core)
β”‚   β”‚   β”œβ”€β”€ interfaces/           # Abstract interfaces
β”‚   β”‚   β”œβ”€β”€ models/               # Domain models
β”‚   β”‚   └── services/             # Business logic
β”‚   β”‚
β”‚   β”œβ”€β”€ infrastructure/           # Infrastructure layer
β”‚   β”‚   β”œβ”€β”€ config/               # Configuration
β”‚   β”‚   └── ml/                   # ML model implementations
β”‚   β”‚
β”‚   β”œβ”€β”€ utils/                    # Shared utilities
β”‚   └── main_v3.py                # Application entry point
β”‚
β”œβ”€β”€ tests/                        # Test suite
β”‚   β”œβ”€β”€ conftest.py               # Shared fixtures
β”‚   β”œβ”€β”€ unit/                     # Unit tests
β”‚   └── integration/              # Integration tests
β”‚
β”œβ”€β”€ docs/                         # Documentation
β”‚   β”œβ”€β”€ ARCHITECTURE.md
β”‚   β”œβ”€β”€ API.md
β”‚   └── DEVELOPMENT.md (this file)
β”‚
β”œβ”€β”€ static/                       # Frontend (Excel Add-in)
β”‚   └── taskpane/
β”‚
β”œβ”€β”€ requirements.txt              # Dependencies
β”œβ”€β”€ pytest.ini                    # Pytest configuration
β”œβ”€β”€ .env.example                  # Environment template
└── README.md                     # Project overview

πŸ”„ Development Workflow

1. Feature Development Cycle

1. Create Feature Branch
   ↓
2. Write Tests (TDD)
   ↓
3. Implement Feature
   ↓
4. Run Tests
   ↓
5. Code Review
   ↓
6. Merge to Main

2. Git Workflow

# 1. Create feature branch
git checkout -b feature/add-prophet-model

# 2. Make changes
# ... edit files ...

# 3. Run tests
pytest tests/ -v

# 4. Commit
git add .
git commit -m "feat: Add Prophet model support

- Implement ProphetModel class
- Register in ModelFactory
- Add tests
"

# 5. Push
git push origin feature/add-prophet-model

# 6. Create Pull Request
gh pr create --title "Add Prophet model support" --body "..."

3. Commit Message Convention

Format: <type>(<scope>): <subject>

Types:

  • feat: New feature
  • fix: Bug fix
  • docs: Documentation
  • test: Tests
  • refactor: Code refactoring
  • style: Formatting
  • chore: Maintenance

Examples:

feat(api): Add streaming forecast endpoint
fix(domain): Handle empty time series
docs(api): Update forecast examples
test(services): Add anomaly service tests
refactor(infrastructure): Extract model loading

βž• Adding Features

Example: Add New ML Model

Step 1: Define Interface Implementation

Create app/infrastructure/ml/prophet_model.py:

from statistics import NormalDist
from typing import Any, Dict, List

import pandas as pd
from prophet import Prophet  # requires: pip install prophet

from app.domain.interfaces.forecast_model import IForecastModel

class ProphetModel(IForecastModel):
    """Prophet model implementation"""
    
    def __init__(self, **kwargs):
        self.model = Prophet(**kwargs)
        self._fitted = False
    
    def predict(
        self,
        context_df: pd.DataFrame,
        prediction_length: int,
        quantile_levels: List[float],
        **kwargs
    ) -> pd.DataFrame:
        """Implement predict method"""
        # Convert to Prophet format
        prophet_df = pd.DataFrame({
            'ds': pd.to_datetime(context_df['timestamp']),
            'y': context_df['target']
        })
        
        # Fit model
        if not self._fitted:
            self.model.fit(prophet_df)
            self._fitted = True
        
        # Make forecast
        future = self.model.make_future_dataframe(periods=prediction_length)
        forecast = self.model.predict(future)
        
        # Convert to expected format
        result_df = pd.DataFrame({
            'id': context_df['id'].iloc[0],
            'timestamp': forecast['ds'].tail(prediction_length),
            'predictions': forecast['yhat'].tail(prediction_length)
        })
        
        # Approximate quantiles from Prophet's uncertainty interval,
        # assuming a normal predictive distribution. With Prophet's
        # default interval_width=0.8, yhat_lower/yhat_upper are the
        # ~0.1/0.9 quantiles, i.e. about +/-1.28 standard deviations.
        tail = forecast.tail(prediction_length)
        sigma = (tail['yhat_upper'] - tail['yhat_lower']) / (2 * NormalDist().inv_cdf(0.9))
        for q in quantile_levels:
            col_name = f"{q:.3g}"
            result_df[col_name] = (tail['yhat'] + NormalDist().inv_cdf(q) * sigma).values
        
        return result_df
    
    def get_model_info(self) -> Dict[str, Any]:
        return {
            "type": "Prophet",
            "provider": "Facebook",
            "fitted": self._fitted
        }

Step 2: Register in Factory

Edit app/infrastructure/ml/model_factory.py:

from app.infrastructure.ml.prophet_model import ProphetModel

class ModelFactory:
    _models = {
        "chronos2": ChronosModel,
        "prophet": ProphetModel,  # Add this line
    }
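
For reference, a create method consistent with this registry could look like the sketch below (the real factory may add caching or validation):

    @classmethod
    def create(cls, name: str, **kwargs) -> IForecastModel:
        """Instantiate a registered model by name."""
        if name not in cls._models:
            raise ValueError(f"Unknown model '{name}'. Available: {list(cls._models)}")
        return cls._models[name](**kwargs)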

Step 3: Add Tests

Create tests/unit/test_prophet_model.py:

import pytest
import pandas as pd
from app.infrastructure.ml.prophet_model import ProphetModel

def test_prophet_model_initialization():
    """Test Prophet model creation"""
    model = ProphetModel()
    assert isinstance(model, ProphetModel)
    assert model._fitted is False

def test_prophet_predict():
    """Test Prophet prediction"""
    model = ProphetModel()
    
    context_df = pd.DataFrame({
        'id': ['series_0'] * 10,
        'timestamp': pd.date_range('2025-01-01', periods=10),
        'target': [100, 102, 105, 103, 108, 112, 115, 118, 120, 122]
    })
    
    result = model.predict(
        context_df=context_df,
        prediction_length=3,
        quantile_levels=[0.1, 0.5, 0.9]
    )
    
    assert len(result) == 3
    assert 'predictions' in result.columns

Step 4: Update Documentation

Edit docs/API.md:

### Available Models

- **chronos2**: Amazon Chronos-2 (default)
- **prophet**: Facebook Prophet (new!)

# Use Prophet model
model = ModelFactory.create("prophet")
service = ForecastService(model=model)

Step 5: Run Tests

pytest tests/unit/test_prophet_model.py -v
pytest tests/ -v  # All tests

Example: Add New API Endpoint

Step 1: Create Use Case

Create app/application/use_cases/evaluate_use_case.py:

from app.domain.services.forecast_service import ForecastService
from app.application.dtos.evaluate_dtos import EvaluateInputDTO, EvaluateOutputDTO

class EvaluateUseCase:
    """Evaluate model on multiple test sets"""
    
    def __init__(self, forecast_service: ForecastService):
        self.forecast_service = forecast_service
    
    def execute(self, input_dto: EvaluateInputDTO) -> EvaluateOutputDTO:
        # Implement evaluation logic
        ...
        return EvaluateOutputDTO(...)
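
The DTOs referenced above live in app/application/dtos/evaluate_dtos.py. A minimal Pydantic sketch (field names are illustrative assumptions, not the project's actual schema):

from typing import Dict, List
from pydantic import BaseModel

class EvaluateInputDTO(BaseModel):
    test_sets: List[Dict]                           # hypothetical: series plus held-out values
    prediction_length: int = 12
    quantile_levels: List[float] = [0.1, 0.5, 0.9]

class EvaluateOutputDTO(BaseModel):
    metrics: Dict[str, float]                       # hypothetical: {"mae": ..., "rmse": ...}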

Step 2: Create Route

Create app/api/routes/evaluate.py:

from fastapi import APIRouter, Depends
from app.api.dependencies import get_evaluate_use_case
from app.application.use_cases.evaluate_use_case import EvaluateUseCase
from app.application.dtos.evaluate_dtos import EvaluateInputDTO, EvaluateOutputDTO

router = APIRouter(prefix="/evaluate", tags=["Evaluate"])

@router.post("/", response_model=EvaluateOutputDTO)
async def evaluate_model(
    request: EvaluateInputDTO,
    use_case: EvaluateUseCase = Depends(get_evaluate_use_case)
):
    """Evaluate model performance"""
    return use_case.execute(request)
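
The get_evaluate_use_case dependency wires the route to the application layer. A sketch of what it might add to app/api/dependencies.py (get_forecast_service is a hypothetical existing provider; the actual wiring may differ):

# app/api/dependencies.py (sketch)
from app.application.use_cases.evaluate_use_case import EvaluateUseCase

def get_evaluate_use_case() -> EvaluateUseCase:
    # Reuses a shared forecast service provider (hypothetical name)
    return EvaluateUseCase(forecast_service=get_forecast_service())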

Step 3: Register Route

Edit app/main_v3.py:

from app.api.routes import evaluate

app.include_router(evaluate.router)

Step 4: Add Tests

Create tests/integration/test_evaluate_endpoint.py:

from fastapi.testclient import TestClient
from app.main_v3 import app

client = TestClient(app)

def test_evaluate_endpoint():
    response = client.post("/evaluate/", json={
        "test_sets": [...]
    })
    assert response.status_code == 200

πŸ§ͺ Testing

Running Tests

# All tests
pytest tests/ -v

# Specific test file
pytest tests/unit/test_forecast_service.py -v

# Specific test
pytest tests/unit/test_forecast_service.py::test_forecast_univariate_success -v

# With coverage
pytest tests/ --cov=app --cov-report=html

# Only unit tests
pytest tests/ -m unit

# Only integration tests
pytest tests/ -m integration

# Watch mode (requires pytest-watch)
ptw tests/
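
The -m unit and -m integration filters rely on markers registered in pytest.ini. A minimal configuration consistent with the commands above (a sketch; check the repository's actual pytest.ini):

# pytest.ini
[pytest]
testpaths = tests
markers =
    unit: fast, isolated unit tests
    integration: end-to-end tests against the API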

Writing Tests

Unit Test Template:

import pytest

# Adjust these import paths to the actual module layout under app/domain/models/
from app.domain.models import ForecastConfig, TimeSeries
from app.domain.services.forecast_service import ForecastService

@pytest.mark.unit
class TestForecastService:
    """Test suite for ForecastService"""
    
    def test_forecast_success(self, mock_model, mock_transformer):
        """Test successful forecast"""
        # Arrange
        service = ForecastService(mock_model, mock_transformer)
        series = TimeSeries(values=[100, 102, 105])
        config = ForecastConfig(prediction_length=3)
        
        # Act
        result = service.forecast_univariate(series, config)
        
        # Assert
        assert len(result.timestamps) == 3
        mock_model.predict.assert_called_once()
    
    def test_forecast_with_invalid_data(self):
        """Test error handling"""
        # Arrange
        service = ForecastService(...)
        
        # Act & Assert
        with pytest.raises(ValueError, match="Invalid"):
            service.forecast_univariate(...)

Integration Test Template:

import pytest
from fastapi.testclient import TestClient

from app.main_v3 import app

client = TestClient(app)

@pytest.mark.integration
def test_forecast_endpoint():
    """Test forecast endpoint E2E"""
    # Arrange
    payload = {
        "values": [100, 102, 105],
        "prediction_length": 3
    }
    
    # Act
    response = client.post("/forecast/univariate", json=payload)
    
    # Assert
    assert response.status_code == 200
    data = response.json()
    assert "median" in data
    assert len(data["median"]) == 3

🎨 Code Style

Formatting

Use Black (line length 88):

black app/ tests/

Use isort (import sorting):

isort app/ tests/

Linting

Flake8:

flake8 app/ tests/ --max-line-length=88

MyPy (type checking):

mypy app/

Style Guidelines

1. Type Hints:

# βœ… Good
def forecast(values: List[float], length: int) -> Dict[str, List[float]]:
    ...

# ❌ Bad
def forecast(values, length):
    ...

2. Docstrings:

def forecast_univariate(self, series: TimeSeries, config: ForecastConfig) -> ForecastResult:
    """
    Generate forecast for univariate time series.
    
    Args:
        series: Time series to forecast
        config: Forecast configuration
        
    Returns:
        Forecast result with predictions
        
    Raises:
        ValueError: If series is invalid
    """
    ...

3. Naming Conventions:

# Classes: PascalCase
class ForecastService:
    ...

# Functions/methods: snake_case
def forecast_univariate():
    ...

# Constants: UPPER_SNAKE_CASE
MAX_PREDICTION_LENGTH = 365

# Private methods: _leading_underscore
def _validate_input():
    ...

πŸ› Debugging

Debug Server

Run with debugger:

# With pdb
python -m pdb -m uvicorn app.main_v3:app --reload

# With ipdb (better interface)
pip install ipdb
python -m ipdb -m uvicorn app.main_v3:app --reload

Add breakpoint in code:

def forecast_univariate(self, series, config):
    import ipdb; ipdb.set_trace()  # Debugger stops here
    ...
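
Python's built-in breakpoint() (3.7+) does the same and can be routed to ipdb via an environment variable:

def forecast_univariate(self, series, config):
    breakpoint()  # honors PYTHONBREAKPOINT, e.g. PYTHONBREAKPOINT=ipdb.set_trace
    ...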

Logging

Add logging:

from app.utils.logger import setup_logger

logger = setup_logger(__name__)

def forecast_univariate(self, series, config):
    logger.info(f"Forecasting series with {len(series.values)} points")
    logger.debug(f"Config: {config}")
    
    try:
        result = self._do_forecast(series, config)
        logger.info("Forecast successful")
        return result
    except Exception as e:
        logger.error(f"Forecast failed: {e}", exc_info=True)
        raise

View logs:

# Set log level in .env
LOG_LEVEL=DEBUG

# Or environment variable
LOG_LEVEL=DEBUG python -m uvicorn app.main_v3:app --reload

Testing in Isolation

Test single component:

# test_debug.py
from unittest.mock import MagicMock

from app.domain.services.forecast_service import ForecastService

# Create mocks (MagicMock stands in for the model and transformer)
model = MagicMock()
transformer = MagicMock()

# Test service
service = ForecastService(model, transformer)
result = service.forecast_univariate(...)

print(result)

🀝 Contributing

Pull Request Process

  1. Fork & Clone:
gh repo fork yourusername/chronos2-server --clone
cd chronos2-server
  2. Create Branch:
git checkout -b feature/my-feature
  3. Make Changes:
# Edit files
# Add tests
# Update docs
  4. Run Tests:
pytest tests/ -v
black app/ tests/
flake8 app/ tests/
  5. Commit:
git add .
git commit -m "feat: Add my feature"
  6. Push:
git push origin feature/my-feature
  7. Create PR:
gh pr create --title "Add my feature" --body "Description..."

Code Review Checklist

  • Tests added/updated
  • Documentation updated
  • Code formatted (black, isort)
  • Linting passes (flake8)
  • Type hints added (mypy)
  • Commit message follows convention
  • PR description clear

πŸ“š Resources

Documentation

  • docs/ARCHITECTURE.md: Architecture and design decisions
  • docs/API.md: Endpoint reference and examples
  • README.md: Project overview

External Resources

  • FastAPI: https://fastapi.tiangolo.com
  • pytest: https://docs.pytest.org
  • Chronos-2 model card: https://huggingface.co/amazon/chronos-2

Getting Help

  • GitHub Issues: Report bugs, request features
  • Discussions: Ask questions, share ideas
  • Email: support@example.com

πŸŽ“ Learning Path

For New Contributors

Week 1: Understanding

  1. Read README.md
  2. Read docs/ARCHITECTURE.md
  3. Run the project locally
  4. Explore API at /docs

Week 2: Small Changes

  1. Fix a typo in docs
  2. Add a test case
  3. Improve error message

Week 3: Features

  1. Add new endpoint
  2. Implement new model
  3. Add new use case

πŸ”§ Troubleshooting

Common Issues

Issue: ModuleNotFoundError: No module named 'app'

# Solution: Run from project root
cd /path/to/chronos2-server
python -m uvicorn app.main_v3:app --reload

Issue: Model loading fails

# Solution: Check internet connection, HuggingFace access
pip install --upgrade transformers

Issue: Tests fail with import errors

# Solution: Install test dependencies
pip install pytest pytest-cov pytest-mock

Issue: Port 8000 already in use

# Solution: Use different port
uvicorn app.main_v3:app --port 8001
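
Alternatively, free the port (macOS/Linux):

lsof -ti :8000 | xargs kill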

Last Updated: 2025-11-09
Version: 3.0.0
Maintainer: Claude AI