Deploying ComfyUI on RunPod: A Guide to HuggingFace Model Integration

Community Article Published November 21, 2025

Cloud-based GPU infrastructure has become essential for creators working with diffusion models, especially when local hardware limitations become a bottleneck. This guide explores how to leverage RunPod's cloud platform for ComfyUI workflows.

Watch the YouTube tutorial

Prerequisites

You'll need the following to follow along:

  • An active RunPod account (register here — affiliate link)
  • Familiarity with web-based development environments
  • Initial credits loaded ($5-10 recommended for testing)
  • A HuggingFace account with an API token

The Case for Cloud-Based ComfyUI Deployment

Cloud GPU providers like RunPod solve several key challenges for AI image generation:

  • Dynamic GPU allocation from entry-level to enterprise-grade hardware
  • Consumption-based billing ensures you're only charged during active sessions
  • Containerized environments eliminate dependency conflicts and setup headaches
  • Rapid provisioning gets your workspace operational in under five minutes

Phase 1: GPU Selection and Provisioning

After logging into RunPod, access the GPU Pods dashboard. The platform presents various GPU configurations with real-time availability.

hf - pods list on runpod

For those new to cloud rendering, consider these configurations:

  • RTX 3090 (24GB VRAM) — Economical entry point at $0.30-0.50/hour, sufficient for SDXL workflows
  • RTX 5090 (32GB VRAM) — Superior performance tier at $0.75-0.90/hour, handles complex multi-model pipelines

These specifications provide adequate memory bandwidth for standard ComfyUI operations while keeping costs reasonable as you learn the platform.
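Since billing is consumption-based, budgeting before a session reduces to simple arithmetic. A minimal sketch using the example rates quoted above (actual RunPod prices fluctuate with region and availability):

```python
# Rough on-demand cost estimate using the example hourly rates above.
# Actual prices vary with region and GPU availability.
def session_cost(rate_per_hour: float, hours: float) -> float:
    """Total charge for a single on-demand session, in dollars."""
    return round(rate_per_hour * hours, 2)

print(session_cost(0.40, 2.0))  # 2 hours on an RTX 3090 at $0.40/hr -> 0.8
```

At these rates, even a full afternoon of experimentation on an RTX 3090 stays in single-digit dollars.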

Phase 2: Template Configuration

Rather than building your environment from scratch, leverage pre-built container templates. This approach significantly reduces deployment time and eliminates common configuration errors.

Within the pod setup interface, locate the Template dropdown and select Change Template:

Screenshot 2025-11-20 at 8.54.28 AM

Search the template marketplace for "ComfyUI":

Screenshot 2025-11-20 at 8.54.35 AM

Multiple community-maintained templates exist, though RunPod's official template (runpod/comfyui:latest) provides the most reliable foundation. This container includes:

  • Complete Python runtime with CUDA dependencies
  • Pre-installed ComfyUI framework
  • JupyterLab environment for advanced workflows
  • Optional SSH terminal access (enable via checkbox during setup)

Configuration tip: Assign a descriptive pod name if you're managing multiple instances concurrently.

After configuration review, click Deploy or Deploy On-Demand Pod.

Screenshot 2025-11-20 at 8.55.39 AM

Phase 3: Container Initialization

The platform now allocates your requested GPU and spins up the container. Expect 1-3 minutes for full initialization, varying with hardware availability and image size.

Monitor the port 8188 status indicator until it turns green and reads "Ready":

Screenshot 2025-11-20 at 8.59.02 AM

For real-time progress monitoring, click the Logs tab to observe the initialization sequence.

Select "ComfyUI" adjacent to port 8188 to launch your workspace:

Screenshot 2025-11-20 at 9.00.37 AM

Success! Your cloud-based generation environment is now operational.

At this stage, you have a functional ComfyUI instance capable of image generation on high-performance GPUs for under $1/hour.

Phase 4: Automated Model Deployment Pipeline

This is where the workflow becomes powerful.

While the baseline installation functions adequately, it lacks critical components: trained models, LoRAs, upscaling models, and custom node extensions.

Rather than manually curating and downloading dozens of model files, implement an automated deployment workflow that streamlines the entire process.

Navigate to deploy.promptingpixels.com to construct a custom installation script containing your preferred models and extensions.

Screenshot 2025-11-20 at 9.01.06 AM

For demonstration purposes, I'll configure DreamShaper (SD 1.5 checkpoint) via HuggingFace for the checkpoint directory, plus a stylized LoRA. Access the Add Models section:

Screenshot 2025-11-20 at 9.02.03 AM

After selection, specify "RunPod" as your target platform and copy the generated one-line command from the page header:

Screenshot 2025-11-20 at 9.03.17 AM

Return to RunPod, and within the connections panel, select "Enable Web Terminal":

Screenshot 2025-11-20 at 9.03.34 AM

After a brief delay, "Open Web Terminal" becomes available, providing bash access.

Screenshot 2025-11-20 at 9.04.26 AM

Paste the generated command into the terminal. Critical: before running it, replace YOUR_HF_TOKEN and YOUR_CIVITAI_TOKEN with valid API credentials for each service; your HuggingFace API token can be found here. Once executed, the script automatically retrieves all specified models and node packages.
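Under the hood, the generated script is essentially a loop of authenticated downloads into the right ComfyUI directories. As a rough illustration of what one such fetch looks like (this is not the script's actual code; the helper names are hypothetical), here is a token-authenticated download using only the Python standard library:

```python
# Illustrative sketch of a single authenticated model download --
# the generated script performs one of these per selected model.
import os
import urllib.request

def dest_path(url: str, dest_dir: str) -> str:
    """Target file path: the destination directory plus the filename from the URL."""
    return os.path.join(dest_dir, url.rsplit("/", 1)[-1])

def download_model(url: str, dest_dir: str, token: str) -> str:
    """Stream a model file to dest_dir, passing the HF token as a Bearer header."""
    os.makedirs(dest_dir, exist_ok=True)
    dest = dest_path(url, dest_dir)
    req = urllib.request.Request(url, headers={"Authorization": f"Bearer {token}"})
    with urllib.request.urlopen(req) as resp, open(dest, "wb") as f:
        while chunk := resp.read(1 << 20):  # 1 MiB chunks keeps memory flat
            f.write(chunk)
    return dest

# Hypothetical usage -- substitute a real resolve URL and your token:
# download_model("https://huggingface.co/.../resolve/main/model.safetensors",
#                "ComfyUI/models/checkpoints", "YOUR_HF_TOKEN")
```

Each model type lands in its matching ComfyUI/models subdirectory; the troubleshooting section at the end of this guide lists the expected directory for each type.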

Phase 5: Service Refresh

Once the installation completes, return to ComfyUI, open the Manager panel, and select "Restart":

Screenshot 2025-11-20 at 9.05.58 AM

Note: Model-only additions don't require a full restart. Simply press "R" within the workspace to refresh the model registry.

Troubleshooting tip: Cloudflare "Bad Gateway" errors typically resolve within 60 seconds. Manual browser refresh should restore access.

Verify your configuration with a basic workflow:

Screenshot 2025-11-20 at 9.07.59 AM

Your first cloud-generated image is complete! (Quality improves with proper prompting and parameters.)

Phase 6: Image Retrieval Workflow

ComfyUI lacks native batch export functionality. Access your generated assets via FileBrowser or JupyterLab.

Screenshot 2025-11-20 at 9.08.24 AM

FileBrowser Method

Default container credentials:

Username: admin
Password: adminadmin12

Navigate to runpod-slim > ComfyUI > output, then right-click images for download.

Screenshot 2025-11-20 at 9.09.15 AM

JupyterLab Method

Access the identical directory structure (runpod-slim > ComfyUI > output) via the left sidebar, right-click assets for download:

Screenshot 2025-11-20 at 9.09.48 AM
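Because there is no native batch export, a few lines run inside JupyterLab can bundle the entire output directory into a single downloadable archive. A minimal sketch (the output path is assumed from the directory shown above; adjust for your container):

```python
# Bundle every generated image into one zip archive, downloadable
# from JupyterLab or FileBrowser in a single click.
import shutil

def export_outputs(output_dir: str, archive_name: str = "comfyui-output") -> str:
    """Zip the ComfyUI output directory; returns the archive path."""
    return shutil.make_archive(archive_name, "zip", output_dir)

# On the pod, point it at the output directory shown above
# (exact mount location assumed -- verify in the file browser):
# export_outputs("runpod-slim/ComfyUI/output")
```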

Phase 7: Resource Management

RunPod implements continuous billing during pod operation. Stop your instance when inactive:

Screenshot 2025-11-20 at 9.10.18 AM

Stopped pods still incur modest storage fees for their volumes (useful for ongoing projects where you want to preserve downloaded models between sessions).

For complete cost elimination between sessions, stop AND terminate the instance (note that terminating also deletes the volume, including any downloaded models):

Screenshot 2025-11-20 at 9.10.45 AM

You're now equipped for cloud-based open-source image generation!

Troubleshooting Common Issues

Memory Allocation Errors (OOM)

Your workflow requires more VRAM than the selected GPU provides. Solutions:

  • Upgrade to higher-capacity GPU (RTX 5090, RTX Pro 6000)
  • Implement model quantization
  • Optimize workflow for memory efficiency
  • Note that newer architectures (Wan 2.2, Qwen Image Edit) demand substantially more VRAM unless quantized

Model Registration Failures

Verify correct directory placement:

  • Checkpoints: ComfyUI/models/checkpoints
  • LoRAs: ComfyUI/models/loras
  • VAEs: ComfyUI/models/vae
  • Text Encoders/CLIP: ComfyUI/models/clip
  • Upscale Models: ComfyUI/models/upscale_models
  • ControlNet/Adapters: ComfyUI/models/controlnet

Troubleshooting steps:

  • Press "R" in ComfyUI to force model registry refresh
  • Verify complete downloads (no .partial files) with valid extensions (.safetensors, .ckpt, .pt)
  • Custom nodes requiring models need ComfyUI restart post-installation
  • Confirm adequate disk space; full storage causes silent corruption
  • Review server logs in RunPod (Logs tab) for permission or path errors
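The file-level checks above can be scripted. A minimal sketch that flags partial downloads and unrecognized extensions under the models directory (legitimate auxiliary files such as config YAMLs may also be flagged, so treat the output as a starting point):

```python
# Quick integrity scan matching the checks above: flag incomplete
# downloads and files without a recognized model extension.
from pathlib import Path

VALID_EXTS = {".safetensors", ".ckpt", ".pt"}

def scan_models(models_root: str) -> list[str]:
    """Return a list of human-readable problems found under models_root."""
    problems = []
    for path in Path(models_root).rglob("*"):
        if not path.is_file():
            continue
        if path.suffix == ".partial":
            problems.append(f"incomplete download: {path}")
        elif path.suffix not in VALID_EXTS:
            problems.append(f"unexpected extension: {path}")
    return problems

# e.g. scan_models("ComfyUI/models") -> [] when everything looks good
```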

Persistent Gateway Errors

  • Allow 30-60 seconds post-restart for port 8188 binding
  • If persistent, stop pod, wait 15 seconds, restart
  • Check for process conflicts: in Web Terminal run lsof -i :8188 or ps -ef | grep -i comfy
  • Terminate stuck processes with kill -9 <pid>, then restart service

Disk Space Expansion

During initial pod creation, allocate additional storage by adjusting volume size in the ComfyUI container "Edit" settings.
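Before pulling multi-gigabyte checkpoints, it's worth confirming free space, since a full volume fails silently (as noted in the model-registration section above). A quick standard-library check you can run from the Web Terminal or a JupyterLab cell:

```python
# Check remaining disk space before downloading large models;
# a full volume causes silent download corruption.
import shutil

def free_gb(path: str = "/") -> float:
    """Free space on the filesystem containing `path`, in GiB."""
    return shutil.disk_usage(path).free / 1024**3

print(f"{free_gb('/'):.1f} GB free")
```

If the number is smaller than the models you plan to add, expand the volume first using the "Edit" settings described above.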
