SeaCache: Spectral-Evolution-Aware Cache for Accelerating Diffusion Models
Abstract
Spectral-Evolution-Aware Cache (SeaCache) improves diffusion model inference speed by using spectrally aligned representations to optimize intermediate output reuse, achieving better latency-quality trade-offs than previous methods.
Diffusion models are a strong backbone for visual generation, but their inherently sequential denoising process leads to slow inference. Previous methods accelerate sampling by caching and reusing intermediate outputs based on feature distances between adjacent timesteps. However, existing caching strategies typically rely on raw feature differences that entangle content and noise. This design overlooks spectral evolution, where low-frequency structure appears early and high-frequency detail is refined later. We introduce Spectral-Evolution-Aware Cache (SeaCache), a training-free cache schedule that bases reuse decisions on a spectrally aligned representation. Through theoretical and empirical analysis, we derive a Spectral-Evolution-Aware (SEA) filter that preserves content-relevant components while suppressing noise. Estimating redundancy from SEA-filtered input features yields dynamic schedules that adapt to content while respecting the spectral priors underlying the diffusion model. Extensive experiments across diverse visual generative models, compared against strong baselines, show that SeaCache achieves state-of-the-art latency-quality trade-offs.
Community
SeaCache is a training-free acceleration method that uses spectral evolution to decouple low-frequency content from high-frequency noise. It consistently outperforms existing caching baselines without requiring additional hyperparameter tuning, achieving better trade-offs between inference speed and generation fidelity.
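The core idea, estimating redundancy from low-pass-filtered features rather than raw differences, can be sketched as below. This is a minimal illustration, not the authors' implementation: the ideal low-pass mask, the `keep_ratio` and `threshold` values, and the function names are all assumptions for demonstration.

```python
import numpy as np

def sea_filter(feat, keep_ratio=0.25):
    """Low-pass filter a 2D feature map in the frequency domain.

    Keeps only the central (low-frequency) fraction of spectral
    coefficients, suppressing high-frequency noise. An ideal
    rectangular mask and `keep_ratio` are illustrative choices,
    not values from the paper.
    """
    spec = np.fft.fftshift(np.fft.fft2(feat))
    h, w = spec.shape
    mh, mw = int(h * keep_ratio / 2), int(w * keep_ratio / 2)
    mask = np.zeros_like(spec)
    mask[h // 2 - mh : h // 2 + mh, w // 2 - mw : w // 2 + mw] = 1
    return np.real(np.fft.ifft2(np.fft.ifftshift(spec * mask)))

def should_reuse_cache(feat_prev, feat_curr, threshold=0.05):
    """Decide whether to reuse the cached output at this timestep.

    Redundancy is measured on filtered features, so the decision
    tracks content change between adjacent timesteps rather than
    high-frequency noise. `threshold` is a hypothetical knob.
    """
    a, b = sea_filter(feat_prev), sea_filter(feat_curr)
    rel_change = np.linalg.norm(a - b) / (np.linalg.norm(a) + 1e-8)
    return rel_change < threshold
```

With this kind of gate, timesteps whose filtered features barely move can skip recomputation and reuse the cached block output, while steps with genuine content change are recomputed, which is what makes the resulting schedule dynamic and content-adaptive.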
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API
- Forecast the Principal, Stabilize the Residual: Subspace-Aware Feature Caching for Efficient Diffusion Transformers (2026)
- CorGi: Contribution-Guided Block-Wise Interval Caching for Training-Free Acceleration of Diffusion Transformers (2025)
- LESA: Learnable Stage-Aware Predictors for Diffusion Model Acceleration (2026)
- Predict to Skip: Linear Multistep Feature Forecasting for Efficient Diffusion Transformers (2026)
- AdaCorrection: Adaptive Offset Cache Correction for Accurate Diffusion Transformers (2026)
- Relational Feature Caching for Accelerating Diffusion Transformers (2026)
- MeanCache: From Instantaneous to Average Velocity for Accelerating Flow Matching Inference (2026)