Laddr Logo

Your framework for building scalable multi-agent systems

License: Apache 2.0 · Python 3.10+ · Docker · FastAPI · Redis · PostgreSQL

Quick Start • Features • Agent Example • Tool Example • Dashboard


What is Laddr?

Laddr is a Python framework for building multi-agent systems where agents communicate, delegate tasks, and execute work in parallel. Think of it as a microservices architecture for AI agents – with built-in message queues, observability, and horizontal scalability.

Two Operating Modes

Coordinator-Orchestrator Mode
Dynamic, autonomous workflows where a coordinator agent analyzes tasks, delegates to specialist agents, and synthesizes results. The coordinator makes real-time decisions about routing and orchestration.

Sequential Deterministic Workflow Mode
Pre-defined, predictable pipelines where tasks flow through agents in a fixed sequence. Each step is explicitly defined with inputs, outputs, and dependencies.


Features

Scalability & Performance

  • Horizontal scaling – Scale each agent independently based on workload
  • Multi-worker support – Run multiple workers per agent for high throughput
  • Automatic load balancing – Redis Streams distribute tasks across workers
  • Fault tolerance – Worker failures don't stop the system

Observability & Monitoring

  • Complete tracing – Every agent action automatically logged to PostgreSQL
  • Real-time metrics – Job status, duration, success rates, queue depth
  • Interactive dashboard – Beautiful UI to monitor agents, traces, and logs
  • Playground – Test agents interactively with live streaming responses

Developer Experience

  • Clean CLI – Minimal, modern interface for project management
  • Hot reload – Code changes reflect instantly in development mode
  • Type safety – Full Python type hints throughout the codebase
  • Simple setup – One command (laddr init) to scaffold complete projects

AI-Optimized Architecture

  • Built-in delegation – Agents can delegate tasks to other agents seamlessly
  • Parallel execution – Process multiple independent tasks simultaneously
  • Context management – Automatic artifact storage for large payloads (>1MB)
  • Smart retries – Configurable retry logic and timeout handling per agent

Production Ready

  • Docker native – Everything runs in containers with docker-compose
  • Queue-based messaging – Redis Streams for reliable, distributed communication
  • Artifact storage – MinIO/S3 for handling large files and documents
  • REST API – FastAPI server with auto-generated OpenAPI documentation

Extensibility

  • Custom tools – Add any Python function as an agent tool with the @tool decorator
  • Override system tools – Extend delegation and storage with your own implementations
  • LLM agnostic – Works with Gemini, OpenAI, Anthropic, Groq, Ollama, and local models
  • Pluggable backends – Swap Redis, PostgreSQL, or storage providers easily

Quick Start

Installation

pip install laddr

Create Your First Project

# Initialize a new project
laddr init my-agent-system
cd my-agent-system

Configure API Keys

Edit .env file:

# Required for web search tools
SERPER_API_KEY=your_serper_api_key  # Get from https://serper.dev

# LLM API Keys (choose one or more)
GEMINI_API_KEY=your_gemini_key      # Get from https://aistudio.google.com
OPENAI_API_KEY=your_openai_key      # Get from https://platform.openai.com

# Or use Ollama for local models (free, private, offline)
# Install: curl -fsSL https://ollama.ai/install.sh | sh
# Then: ollama pull gemma2:2b
OLLAMA_BASE_URL=http://localhost:11434
LLM_BACKEND=ollama  # Use local models instead of cloud APIs

💡 Using Ollama? See the Ollama Integration Guide for complete setup instructions, including Docker configuration.

Start the System

# Start all services (API, workers, database, Redis, MinIO)
laddr run dev

This starts:

  • ✅ API Server at http://localhost:8000
  • ✅ Dashboard at http://localhost:5173
  • ✅ PostgreSQL for traces and job history
  • ✅ Redis for message queue
  • ✅ MinIO for artifact storage
  • ✅ 2 agents: coordinator (orchestrator) and researcher (specialist)

Test Your Agents

Open the dashboard at http://localhost:5173/playground or use the API:

curl -X POST http://localhost:8000/api/jobs \
  -H "Content-Type: application/json" \
  -d '{
    "agent_name": "coordinator",
    "inputs": {
      "query": "What are the best hotels in Shimla?"
    }
  }'
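
The same job can be submitted from Python with the requests library (a minimal sketch; it assumes only that the API server started by laddr run dev is listening on localhost:8000):

import requests

# Submit a job to the coordinator agent via the REST API
resp = requests.post(
    "http://localhost:8000/api/jobs",
    json={
        "agent_name": "coordinator",
        "inputs": {"query": "What are the best hotels in Shimla?"},
    },
    timeout=30,
)
resp.raise_for_status()
job = resp.json()
print(job["job_id"], job["status"])  # e.g. job_abc123 queued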

Agent Example

Agents are defined in simple Python files. Here's what laddr init generates:

"""
Coordinator agent - orchestrates research tasks
"""
from laddr import Agent
from laddr.llms import gemini

coordinator = Agent(
    name="coordinator",
    role="Research Task Coordinator",
    goal="Coordinate research tasks by delegating to specialist agents",
    backstory="""You are a research coordinator who manages information 
    gathering tasks. You delegate specific queries to researcher agents 
    and compile their findings into comprehensive summaries.""",
    
    llm=gemini("gemini-2.0-flash"),
    
    # Coordinator can delegate to other agents
    is_coordinator=True,
    available_agents=["researcher"],
    
    # Behavior settings
    max_iterations=15,
    max_tool_calls=50,
    timeout=600,
    
    # Built-in delegation tools provided automatically
    tools=[],
    
    instructions="""
    ## Your workflow:
    1. Receive user query
    2. Delegate research to 'researcher' agent using system_delegate_task
    3. Wait for results (use wait_for_response=true)
    4. Synthesize findings into comprehensive answer
    5. Return final result to user
    
    ## Delegation format:
    {
      "type": "tool",
      "tool": "system_delegate_task",
      "params": {
        "agent_name": "researcher",
        "task": "Search for best hotels in Shimla with reviews",
        "wait_for_response": true,
        "timeout_seconds": 90
      }
    }
    """
)

Key Features:

  • Coordinators delegate work to specialists
  • Specialists use tools to complete tasks
  • LLM-powered decision making
  • Configurable behavior and limits
  • Clear instructions guide agent behavior
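
The coordinator above lists "researcher" in available_agents. A matching specialist can be defined with the same Agent constructor (a minimal sketch, not the exact template output; the import path for the web_search tool from the next section is an assumption):

"""
Researcher agent - performs web research for the coordinator
"""
from laddr import Agent
from laddr.llms import gemini

from tools.web_search import web_search  # hypothetical path to the tool shown below

researcher = Agent(
    name="researcher",
    role="Web Research Specialist",
    goal="Answer research queries with sourced findings",
    backstory="""You are a researcher who searches the web and summarizes
    what you find, citing your sources.""",
    llm=gemini("gemini-2.0-flash"),
    tools=[web_search],
    max_iterations=10,
    timeout=300,
)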

Tool Example

Tools are Python functions that agents can call. Here's a web search tool from the template:

"""
Web search tool using Serper.dev API
"""
from typing import Dict
import os
import requests
from laddr import tool

@tool(
    name="web_search",
    description="Search the web using Serper.dev API. Returns title, URL, and snippet for each result.",
    parameters={
        "type": "object",
        "properties": {
            "query": {
                "type": "string",
                "description": "Search query (be specific for better results)"
            },
            "max_results": {
                "type": "integer",
                "description": "Maximum number of results (1-10, default 5)",
                "default": 5
            }
        },
        "required": ["query"]
    }
)
def web_search(query: str, max_results: int = 5) -> Dict:
    """
    Search the web and return structured results.
    
    Returns:
        {
            "query": str,
            "results": [
                {
                    "title": str,
                    "link": str,
                    "snippet": str,
                    "site": str
                },
                ...
            ],
            "count": int,
            "status": "success" | "error"
        }
    """
    api_key = os.getenv("SERPER_API_KEY")
    if not api_key:
        return {
            "status": "error",
            "error": "SERPER_API_KEY not set"
        }
    
    try:
        response = requests.post(
            "https://google.serper.dev/search",
            headers={
                "X-API-KEY": api_key,
                "Content-Type": "application/json",
            },
            json={"q": query, "num": max_results},
            timeout=15,
        )
        response.raise_for_status()
        data = response.json()
    except requests.RequestException as exc:
        # Keep the documented contract: return status "error" instead of raising
        return {"status": "error", "error": str(exc)}
    
    results = []
    for item in data.get("organic", [])[:max_results]:
        results.append({
            "title": item.get("title", ""),
            "link": item.get("link", ""),
            "snippet": item.get("snippet", "")[:500],
            "site": item.get("domain", "")
        })
    
    return {
        "query": query,
        "results": results,
        "count": len(results),
        "status": "success"
    }

Features:

  • @tool decorator – Automatic registration
  • JSON Schema – Parameters defined with validation
  • Structured output – Consistent return format
  • Fast – 15 second timeout
  • Error handling – Graceful failures
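
Because the decorated function remains an ordinary Python callable (assuming the @tool decorator preserves the underlying function), it can be exercised directly before wiring it into an agent:

# Quick smoke test; requires SERPER_API_KEY in the environment
result = web_search("best hotels in Shimla", max_results=3)
if result["status"] == "success":
    for hit in result["results"]:
        print(hit["title"], hit["link"])
else:
    print("search failed:", result["error"])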

FastAPI Runtime & REST API

Laddr includes a production-ready FastAPI server with comprehensive REST endpoints:

Core Endpoints

Method  Endpoint                            Description
POST    /api/jobs                           Submit a job to an agent
GET     /api/jobs/{job_id}                  Get job status and result
GET     /api/jobs                           List all jobs with filters
POST    /api/jobs/{job_id}/replay           Replay a failed job
GET     /api/agents                         List all available agents
GET     /api/agents/{agent_name}/tools      Get agent's tools
GET     /api/agents/{agent_name}/chat       Interactive chat with agent

Observability Endpoints

Method  Endpoint                         Description
GET     /api/traces                      Get execution traces for jobs
GET     /api/traces/grouped              Get traces grouped by job
GET     /api/traces/{trace_id}           Get specific trace details
GET     /api/metrics                     Get system metrics (jobs, latency, success rate)
GET     /api/logs/containers             List Docker containers
GET     /api/logs/containers/{name}      Stream container logs

Playground Endpoints

Method  Endpoint                            Description
POST    /api/prompts                        Create interactive prompt session
GET     /api/prompts/{prompt_id}            Get prompt status and messages
GET     /api/prompts                        List all prompt sessions
POST    /api/prompts/{prompt_id}/cancel     Cancel running prompt

Example: Submit a Job

curl -X POST http://localhost:8000/api/jobs \
  -H "Content-Type: application/json" \
  -d '{
    "agent_name": "researcher",
    "inputs": {
      "query": "Latest AI trends 2025"
    }
  }'

# Response:
{
  "job_id": "job_abc123",
  "status": "queued",
  "created_at": "2025-01-15T10:30:00Z"
}

Example: Get Job Result

curl http://localhost:8000/api/jobs/job_abc123

# Response:
{
  "job_id": "job_abc123",
  "status": "completed",
  "result": {
    "answer": "Based on my research, the top AI trends in 2025 are...",
    "sources": [...]
  },
  "agent_name": "researcher",
  "created_at": "2025-01-15T10:30:00Z",
  "completed_at": "2025-01-15T10:31:45Z",
  "duration_seconds": 105
}
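
Because jobs run asynchronously, clients typically poll the status endpoint until the job finishes. A minimal sketch over the two endpoints shown above ("failed" as a terminal status is inferred from the replay endpoint, not confirmed):

import time
import requests

def wait_for_job(job_id: str, base_url: str = "http://localhost:8000") -> dict:
    """Poll GET /api/jobs/{job_id} until the job reaches a terminal status."""
    while True:
        job = requests.get(f"{base_url}/api/jobs/{job_id}", timeout=10).json()
        if job["status"] in ("completed", "failed"):
            return job
        time.sleep(2)

job = wait_for_job("job_abc123")
print(job["status"], job.get("result"))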

Interactive API Documentation

Full OpenAPI documentation available at:

http://localhost:8000/docs – Swagger UI

Dashboard & Observability

Laddr includes a beautiful React dashboard with real-time monitoring:

Dashboard Features

Playground

  • Test agents interactively
  • Live streaming responses
  • Token usage tracking
  • Save and replay sessions

Traces

  • Complete execution history
  • Tool calls and results
  • LLM interactions
  • Delegation flows
  • Filter by job, agent, or time range

Metrics

  • System health overview
  • Jobs per agent (success/failed)
  • Average latency per agent
  • Queue depth monitoring
  • Worker status

Logs

  • Real-time container logs
  • Filter by agent or service
  • Error highlighting
  • Download logs

Agents

  • List all agents and their tools
  • Agent configuration viewer
  • Worker scaling controls
  • Health status

Access the Dashboard

# Start the system
laddr run dev

# Open dashboard in browser
open http://localhost:5173

Views:

  • / – Dashboard home with metrics
  • /playground – Interactive agent testing
  • /traces – Execution traces and history
  • /agents – Agent management
  • /logs – Container logs viewer
  • /settings – Configuration

Architecture

Message Bus (Redis Streams)

Laddr uses Redis Streams for reliable, distributed messaging:

  • Agent Queues – Each agent has a dedicated stream (laddr:agent:{name})
  • Response Streams – Temporary streams for delegation responses
  • Consumer Groups – Multiple workers consume from the same stream
  • Automatic Load Balancing – Redis distributes tasks across workers
  • Persistence – Messages persisted until acknowledged
  • Backpressure – Queue depth monitoring prevents overload

Example flow:

API → Redis Stream → Worker 1, Worker 2, Worker 3
                      ↓
                    Process task
                      ↓
                 Store result in Postgres
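
The consumer-group mechanics behind this flow can be reproduced with redis-py directly. This is an illustration of the pattern, not Laddr's internal code; the stream name follows the laddr:agent:{name} convention above, but the payload field and group name are assumptions:

import json
import redis

r = redis.Redis()
stream, group = "laddr:agent:researcher", "researcher-workers"

# Create the consumer group once; ignore the error if it already exists
try:
    r.xgroup_create(stream, group, id="0", mkstream=True)
except redis.ResponseError as exc:
    if "BUSYGROUP" not in str(exc):
        raise

# Producer side: enqueue a task on the agent's stream
r.xadd(stream, {"payload": json.dumps({"query": "best hotels in Shimla"})})

# Worker side: claim one message for this consumer, process it, acknowledge it
for _stream, messages in r.xreadgroup(group, "worker-1", {stream: ">"}, count=1, block=5000):
    for message_id, fields in messages:
        task = json.loads(fields[b"payload"])
        print("processing", task)
        r.xack(stream, group, message_id)  # message persists until acknowledged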

Trace Storage (PostgreSQL)

All agent executions are automatically traced to PostgreSQL:

  • Complete history – Every tool call, LLM interaction, delegation
  • Structured data – JSON traces with metadata
  • Fast queries – Indexed by job_id, agent_name, timestamp
  • No external dependencies – Built-in, no Jaeger or DataDog needed
  • Retention policies – Configurable trace retention

Trace data includes:

  • Tool calls and results
  • LLM prompts and responses
  • Delegation events
  • Error stack traces
  • Token usage
  • Latency breakdown

Artifact Storage (MinIO/S3)

Large payloads are automatically stored in object storage:

  • Automatic threshold – Messages >1MB stored as artifacts
  • S3-compatible – MinIO (local) or AWS S3 (production)
  • Efficient messaging – Only artifact reference sent via Redis
  • Retrieval on demand – Workers fetch artifacts when needed
  • Configurable – Set size threshold, retention, bucket names

Benefits:

  • Reduces Redis memory usage
  • Prevents message size limits
  • Enables large document processing
  • Supports binary data (images, PDFs, etc.)
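
The spill-to-storage decision is easy to picture (an illustrative sketch, not Laddr's implementation; the bucket name, key scheme, envelope fields, and MinIO port are assumptions):

import json
import uuid

import boto3  # MinIO speaks the S3 API, so boto3 works against both

THRESHOLD_BYTES = 1 * 1024 * 1024  # the >1MB threshold described above
s3 = boto3.client("s3", endpoint_url="http://localhost:9000")  # local MinIO

def envelope(payload: dict) -> dict:
    """Send small payloads inline; spill large ones to object storage."""
    raw = json.dumps(payload).encode()
    if len(raw) <= THRESHOLD_BYTES:
        return {"inline": payload}
    key = f"artifacts/{uuid.uuid4()}.json"
    s3.put_object(Bucket="laddr-artifacts", Key=key, Body=raw)
    return {"artifact_ref": key}  # only this reference travels through Redis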

Custom System Tools

Laddr allows you to override built-in system tools (delegation, artifact storage) with your own custom implementations:

import logging

from laddr import override_system_tool, TaskDelegationTool

logger = logging.getLogger(__name__)

@override_system_tool("system_delegate_task")
async def custom_delegation(
    agent_name: str,
    task_description: str,
    task: str,
    task_data: dict = None,
    timeout_seconds: int = 300,
    _message_bus=None,
    _artifact_storage=None,
    _agent=None
):
    """Custom delegation with logging and metrics."""
    
    # Add your custom logic (logging, rate limiting, etc.)
    logger.info(f"Delegating to {agent_name}: {task_description}")
    
    # Reuse base tool for actual delegation
    delegation_tool = TaskDelegationTool(_message_bus, _artifact_storage, _agent)
    result = await delegation_tool.delegate_task(
        agent_name=agent_name,
        task_description=task_description,
        task=task,
        task_data=task_data,
        timeout_seconds=timeout_seconds
    )
    
    logger.info(f"Delegation completed: {result}")
    return result

Available base tools:

  • TaskDelegationTool – Single-task delegation
  • ParallelDelegationTool – Parallel multi-task delegation
  • ArtifactStorageTool – Artifact storage and retrieval

System tools you can override:

  • system_delegate_task – Single task delegation
  • system_delegate_parallel – Parallel task delegation
  • system_store_artifact – Store data artifacts
  • system_retrieve_artifact – Retrieve data artifacts
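
As a taste of the patterns the guide below covers, retries can be layered on with the same hook (a sketch reusing only the names from the example above; registering it would replace that earlier override):

import asyncio
import logging

from laddr import override_system_tool, TaskDelegationTool

logger = logging.getLogger(__name__)

@override_system_tool("system_delegate_task")
async def delegation_with_retries(
    agent_name: str,
    task_description: str,
    task: str,
    task_data: dict = None,
    timeout_seconds: int = 300,
    _message_bus=None,
    _artifact_storage=None,
    _agent=None,
):
    """Retry transient delegation failures with exponential backoff."""
    tool = TaskDelegationTool(_message_bus, _artifact_storage, _agent)
    for attempt in range(3):
        try:
            return await tool.delegate_task(
                agent_name=agent_name,
                task_description=task_description,
                task=task,
                task_data=task_data,
                timeout_seconds=timeout_seconds,
            )
        except Exception:
            logger.warning("delegation attempt %d failed", attempt + 1, exc_info=True)
            await asyncio.sleep(2 ** attempt)  # back off before retrying
    raise RuntimeError(f"delegation to {agent_name} failed after 3 attempts")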

Learn more: See the Custom System Tools Guide for complete documentation with advanced patterns like rate limiting, circuit breakers, retries, and metrics.


License

Apache License 2.0 - see LICENSE for details.


Built for production. Designed for scale. Made transparent.

โญ Star us on GitHub if Laddr helps you build better agent systems!