
Laddr

A transparent, Docker-native, observable, distributed agent framework.

Laddr is a superset of CrewAI that removes excessive abstraction and introduces a real distributed runtime, local observability, and explicit agent communication.

🎯 Philosophy

CrewAI is too abstract, making it nearly impossible to understand or debug what's happening under the hood.

Laddr fixes this by being:

  • Transparent: All logic (task flow, prompts, tool calls) visible and traceable
  • Pluggable: Configure your own queues, databases, models, or tools
  • Observable: Every agent action recorded via OpenTelemetry
  • Containerized: Everything runs inside Docker for predictable behavior

In short: Laddr = CrewAI with explicit communication, Docker-native execution, local observability, and zero hidden magic.

๐Ÿ—๏ธ Architecture

Communication Model

Unlike CrewAI's internal synchronous calls, Laddr uses Redis Streams for explicit message passing:

Controller → Redis Queue → Agent Worker → Redis Response Stream

Each agent runs in its own container and consumes tasks from a dedicated Redis stream.
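
To make the flow concrete, here is a rough sketch of what an agent worker loop could look like with redis-py consumer groups. This is illustrative only: the stream, group, and field names are hypothetical and may not match Laddr's actual internals.

import json
import redis

r = redis.Redis(host="redis", port=6379)

# Hypothetical names for illustration; Laddr's real naming may differ.
STREAM, GROUP, CONSUMER = "laddr:tasks:summarizer", "summarizer-workers", "worker-1"

try:
    r.xgroup_create(STREAM, GROUP, id="0", mkstream=True)
except redis.ResponseError:
    pass  # consumer group already exists

while True:
    # Block up to 5 seconds waiting for the next task on this agent's stream.
    for _stream, messages in r.xreadgroup(GROUP, CONSUMER, {STREAM: ">"}, count=1, block=5000):
        for msg_id, fields in messages:
            task = json.loads(fields[b"payload"])
            # ... run the agent on the task here ...
            response = {"task_id": task["task_id"], "status": "completed"}
            # Publish the result to a response stream and acknowledge the task.
            r.xadd("laddr:responses", {"payload": json.dumps(response)})
            r.xack(STREAM, GROUP, msg_id)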

Services

  • PostgreSQL (with pgvector): Stores traces, job history, agent metadata
  • Redis: Message bus for task distribution
  • MinIO: S3-compatible storage for artifacts and large payloads
  • Jaeger: OpenTelemetry trace collection and visualization
  • Prometheus: Metrics collection and monitoring
  • API Server: FastAPI server for job submission and queries
  • Worker Containers: One per agent; each consumes and processes tasks
  • Dashboard: Real-time monitoring and agent interaction

🚀 Quick Start

Installation

# Clone the repository
git clone https://github.com/laddr/laddr.git
cd laddr/lib/laddr

# Install locally (for now)
pip install -e .

Create a Project

# Initialize a new project
laddr init my_project

# Navigate to project
cd my_project

# Configure API keys in .env
# Edit .env and add your GEMINI_API_KEY and SERPER_API_KEY

# Start the environment (includes default researcher agent)
laddr run dev

This will start all services with a working researcher agent and web_search tool ready to use.

What's included out-of-the-box:

  • Default researcher agent with Gemini 2.5 Flash
  • web_search tool powered by Serper.dev
  • Sample research_pipeline.yml
  • Full observability stack (Jaeger, Prometheus, Dashboard)

Access the dashboard at http://localhost:5173 to interact with your agents.

📦 Project Structure

my_project/
├── laddr.yml                # Project configuration
├── docker-compose.yml       # Docker services (auto-generated)
├── Dockerfile               # Container definition
├── .env                     # Environment variables
├── agents/                  # Agent configurations
│   ├── summarizer/
│   │   └── agent.yml
│   └── analyzer/
│       └── agent.yml
├── tools/                   # Custom tools
│   └── my_tool.py
└── pipelines/               # Pipeline definitions
    └── analysis_pipeline.yml

🤖 Creating Agents

Add an Agent

laddr add agent researcher

This will:

  1. Create agents/researcher/agent.yml
  2. Add worker service to docker-compose.yml
  3. Register agent in laddr.yml

Note: A default researcher agent with a web_search tool is created automatically when you run laddr init.

Agent Configuration

agents/researcher/agent.yml:

name: researcher
role: Research Agent
goal: Research topics on the web and summarize findings concisely
backstory: A helpful researcher that gathers and condenses information from reliable web sources
llm:
  provider: gemini
  model: gemini-2.5-flash
  api_key: ${GEMINI_API_KEY}
  temperature: 0.7
  max_tokens: 2048
tools:
  - web_search
max_iterations: 15
allow_delegation: false
verbose: true

LLM Providers

Laddr supports multiple LLM providers:

  • Gemini (default) - Google's Gemini models
  • OpenAI - GPT-4, GPT-3.5, etc.
  • Anthropic - Claude models
  • Groq - Fast inference
  • Ollama - Local models
  • llama.cpp - Local C++ inference

Set your API keys in .env:

GEMINI_API_KEY=your_key_here
OPENAI_API_KEY=your_key_here
ANTHROPIC_API_KEY=your_key_here
GROQ_API_KEY=your_key_here

🔧 Custom Tools

Default Tool: web_search

A web_search tool using Serper.dev is included by default:

# tools/web_search.py
def web_search(query: str, max_results: int = 5) -> str:
    """Search the web using Serper.dev API."""
    # Uses SERPER_API_KEY from .env
    # Get your free API key at https://serper.dev

Setup: Add your Serper.dev API key to .env:

SERPER_API_KEY=your_serper_key_here
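
To build a similar tool yourself, a minimal sketch against Serper's public search endpoint might look like the following. This is illustrative only: serper_search is a hypothetical name, and the bundled web_search implementation may differ.

import os
import requests

def serper_search(query: str, max_results: int = 5) -> str:
    """Illustrative sketch of a Serper.dev-backed search tool."""
    resp = requests.post(
        "https://google.serper.dev/search",
        headers={"X-API-KEY": os.environ["SERPER_API_KEY"]},  # set in .env
        json={"q": query, "num": max_results},
        timeout=30,
    )
    resp.raise_for_status()
    organic = resp.json().get("organic", [])
    return "\n".join(f"{r['title']} - {r['link']}" for r in organic[:max_results])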

Add More Tools

laddr add tool my_custom_tool

Edit tools/my_custom_tool.py:

def my_custom_tool(param: str) -> str:
    """Your custom tool logic."""
    result = param.upper()  # placeholder logic; replace with your own
    return result
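
Then list the tool under an agent's tools: in its agent.yml (as shown for web_search above) so the agent can call it.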

📋 Pipelines

A sample pipeline (research_pipeline.yml) is created automatically on init.

Example Pipeline

pipelines/research_pipeline.yml:

name: research_pipeline
description: Example research pipeline using the researcher agent
tasks:
  - name: search_topic
    description: "Search the web for information about: {topic}"
    agent: researcher
    expected_output: A comprehensive summary of web search results
    tools:
      - web_search
    async_execution: false
    
  - name: analyze_results
    description: Analyze the search results and extract key insights
    agent: researcher
    expected_output: Key insights and recommendations based on the research
    context:
      - search_topic
    async_execution: false

Run a Pipeline

laddr run pipeline pipelines/research_pipeline.yml

Note: Pipeline inputs are defined in the YAML file or can be passed via API.
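
For instance, to supply the {topic} input of the sample pipeline through the API (using the /jobs endpoint documented in the API Reference below), a sketch:

import requests

# Submit the sample research pipeline with a value for the {topic} placeholder.
resp = requests.post(
    "http://localhost:8000/jobs",
    json={"pipeline_name": "research_pipeline", "inputs": {"topic": "vector databases"}},
)
print(resp.json())  # the response typically includes a job_id for status polling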

๐Ÿ” Observability

View Traces

Navigate to Jaeger at http://localhost:16686 to see:

  • Task execution traces
  • LLM API calls
  • Tool invocations
  • Error spans

View Metrics

Navigate to Prometheus at http://localhost:9090 to query:

  • laddr_agent_task_duration_seconds: Task execution time
  • laddr_queue_depth: Pending tasks per agent
  • laddr_tokens_total: Token usage
  • laddr_errors_total: Error counts
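
For example, the PromQL query rate(laddr_errors_total[5m]) charts the error rate over a rolling five-minute window.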

Agent Logs

# View logs for an agent
laddr logs summarizer

# Follow logs in real-time
laddr logs summarizer -f

๐ŸŒ API Reference

Submit Job

curl -X POST http://localhost:8000/jobs \
  -H "Content-Type: application/json" \
  -d '{
    "pipeline_name": "analysis",
    "inputs": {"document": "report.pdf"}
  }'

Get Job Status

curl http://localhost:8000/jobs/{job_id}

List Agents

curl http://localhost:8000/agents
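
Putting these endpoints together, a minimal Python client might look like this. It is a sketch: it assumes the job response carries job_id and status fields, which may differ from the actual schema.

import time
import requests

BASE = "http://localhost:8000"

# Submit a job (assumes the response body includes a job_id).
job = requests.post(f"{BASE}/jobs", json={
    "pipeline_name": "analysis",
    "inputs": {"document": "report.pdf"},
}).json()

# Poll until the job reaches a terminal state.
while True:
    status = requests.get(f"{BASE}/jobs/{job['job_id']}").json()
    if status.get("status") in ("completed", "failed"):
        break
    time.sleep(2)

print(status)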

📊 Dashboard

Access the dashboard at http://localhost:5173 to:

  • View all active agents
  • Monitor real-time logs
  • Inspect OpenTelemetry traces
  • Interact with individual agents
  • Visualize job workflows
  • Check system health metrics

๐Ÿณ Docker Commands

# Start all services
laddr run dev

# View logs
laddr logs <agent_name>

# Stop all services
laddr stop

# Rebuild containers
docker compose up -d --build

โš™๏ธ Configuration

Environment Variables

Edit .env to customize:

DATABASE_URL=postgresql://postgres:postgres@postgres:5432/laddr
REDIS_URL=redis://redis:6379
MINIO_ENDPOINT=minio:9000
OTEL_EXPORTER_OTLP_ENDPOINT=http://jaeger:4318
API_HOST=0.0.0.0
API_PORT=8000

Project Configuration

Edit laddr.yml:

project:
  name: my_project
  broker: redis
  database: postgres
  storage: minio
  tracing: true
  metrics: true
  agents:
    - summarizer
    - analyzer

🔄 Message Format

Task Message

{
  "task_id": "uuid",
  "job_id": "uuid",
  "source_agent": "controller",
  "target_agent": "summarizer",
  "payload": {
    "description": "Summarize this document",
    "context": "...",
    "expected_output": "..."
  },
  "trace_parent": "trace-id",
  "created_at": "timestamp"
}

Response Message

{
  "task_id": "uuid",
  "job_id": "uuid",
  "agent_name": "summarizer",
  "status": "completed",
  "result": {"output": "..."},
  "metrics": {
    "tokens": 2200,
    "latency_ms": 5200
  },
  "trace_parent": "trace-id",
  "completed_at": "timestamp"
}
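
If you consume these messages from your own services, a typed wrapper can catch schema drift early. A sketch mirroring the response schema above:

from dataclasses import dataclass
from typing import Any

@dataclass
class ResponseMessage:
    """Illustrative typed view of the response message schema above."""
    task_id: str
    job_id: str
    agent_name: str
    status: str                # e.g. "completed"
    result: dict[str, Any]
    metrics: dict[str, int]    # tokens, latency_ms
    trace_parent: str
    completed_at: str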

🔧 Development

Prerequisites

  • Python 3.10+
  • Docker & Docker Compose
  • Git

Setup

# Clone repository
git clone https://github.com/laddr/laddr.git
cd laddr

# Install dependencies
cd lib/laddr
pip install -e .[dev]

# Run tests
pytest

๐Ÿ“ CLI Reference

laddr init [project_name]          # Initialize new project
laddr add agent <name>             # Add new agent
laddr add tool <name>              # Add custom tool
laddr run dev                      # Start development environment
laddr run agent <agent>            # Run single agent locally
laddr run pipeline <file.yml>      # Run a pipeline
laddr logs <agent>                 # View agent logs
laddr stop                         # Stop all services

🆚 Laddr vs CrewAI

Feature          CrewAI                   Laddr
Communication    Hidden internal calls    Explicit Redis message bus
Runtime          In-memory Python         Docker containers per agent
Observability    Limited logging          Full OpenTelemetry + Prometheus
Scalability      Single process           Distributed workers
Transparency     Opaque orchestration     Visible task flow
Storage          In-memory                MinIO/S3 for artifacts
Monitoring       None                     Dashboard + Jaeger + Prometheus
Configuration    Code-based               YAML + Docker Compose

๐Ÿค Contributing

Contributions are welcome! Please see CONTRIBUTING.md for guidelines.

📄 License

MIT License - see LICENSE for details.

Built with transparency in mind. No hidden magic. Just distributed agents.