
---
title: First Agent Template
emoji: 
colorFrom: pink
colorTo: yellow
sdk: gradio
sdk_version: 5.49.1
app_file: app.py
pinned: false
tags:
  - smolagents
  - agent
  - smolagent
  - tool
  - agent-course
---

Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference

## Clone repository

```shell
git clone https://huggingface.co/spaces/2stacks/First_agent_template
cd First_agent_template
```

## Create and activate Python environment

```shell
python -m venv env
source env/bin/activate
```

## Configuration (Optional)

The application uses environment variables for model configuration. Create a .env file in the project root to customize settings:

```shell
# Ollama configuration (for local models)
OLLAMA_BASE_URL=http://localhost:11434
OLLAMA_MODEL_ID=qwen2.5-coder:32b

# HuggingFace configuration (fallback when Ollama is unavailable)
HF_MODEL_ID=Qwen/Qwen2.5-Coder-32B-Instruct
```

Environment Variables:

- `OLLAMA_BASE_URL`: URL of your Ollama service (default: `http://localhost:11434`)
- `OLLAMA_MODEL_ID`: Model name in Ollama (default: `qwen2.5-coder:32b`)
- `HF_MODEL_ID`: HuggingFace model used as a fallback (default: `Qwen/Qwen2.5-Coder-32B-Instruct`)
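Each of these variables falls back to its documented default when unset. A minimal sketch of how such a lookup can work with plain `os.getenv` (the helper name and return shape are illustrative, not the app's actual code):

```python
import os

def get_model_config():
    """Read model settings from the environment, falling back to the
    defaults documented above. Illustrative helper, not the app's code."""
    return {
        "ollama_base_url": os.getenv("OLLAMA_BASE_URL", "http://localhost:11434"),
        "ollama_model_id": os.getenv("OLLAMA_MODEL_ID", "qwen2.5-coder:32b"),
        "hf_model_id": os.getenv("HF_MODEL_ID", "Qwen/Qwen2.5-Coder-32B-Instruct"),
    }

config = get_model_config()
print(config["ollama_base_url"])
```

Values set in a `.env` file take effect once they are loaded into the process environment (for example via the `python-dotenv` package).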

The app automatically checks whether Ollama is reachable and has the specified model installed. If not, it falls back to the HuggingFace model.
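One common way to implement such a check is to query Ollama's `/api/tags` endpoint, which lists the locally installed models. The sketch below shows the idea; the function name and fallback logic are assumptions, not the app's actual implementation:

```python
import json
import urllib.request

def ollama_has_model(base_url: str, model_id: str, timeout: float = 2.0) -> bool:
    """Return True if an Ollama server at base_url reports model_id as
    installed. Any connection failure means Ollama is unavailable."""
    try:
        with urllib.request.urlopen(f"{base_url}/api/tags", timeout=timeout) as resp:
            tags = json.load(resp)
    except OSError:  # connection refused, timeout, HTTP error, etc.
        return False
    return any(m.get("name") == model_id for m in tags.get("models", []))

# Fallback decision: use Ollama when reachable, otherwise HuggingFace.
if ollama_has_model("http://localhost:11434", "qwen2.5-coder:32b"):
    print("Using local Ollama model")
else:
    print("Falling back to HuggingFace")
```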

## Install dependencies and run

```shell
pip install -r requirements.txt
python app.py
```

## Run with Docker

```shell
docker run -it -p 7860:7860 \
    --platform=linux/amd64 \
    -e HF_TOKEN="YOUR_VALUE_HERE" \
    -e OLLAMA_BASE_URL="http://localhost:11434" \
    -e OLLAMA_MODEL_ID="qwen2.5-coder:32b" \
    -e HF_MODEL_ID="Qwen/Qwen2.5-Coder-32B-Instruct" \
    registry.hf.space/2stacks-first-agent-template:latest python app.py
```

Note that inside the container, `localhost` refers to the container itself. If Ollama runs on the host machine, point `OLLAMA_BASE_URL` at `http://host.docker.internal:11434` (Docker Desktop) or the host's IP address instead.