Taming Preference Mode Collapse via Directional Decoupling Alignment in Diffusion Reinforcement Learning
Abstract
Researchers identify and address Preference Mode Collapse in text-to-image diffusion models through a novel framework that maintains generative diversity while improving human preference alignment.
Recent studies have demonstrated significant progress in aligning text-to-image diffusion models with human preferences via Reinforcement Learning from Human Feedback. However, while existing methods achieve high scores on automated reward metrics, they often lead to Preference Mode Collapse (PMC), a specific form of reward hacking in which models converge on narrow, high-scoring outputs (e.g., images with monolithic styles or pervasive overexposure), severely degrading generative diversity. In this work, we introduce and quantify this phenomenon, proposing DivGenBench, a novel benchmark designed to measure the extent of PMC. We posit that this collapse is driven by over-optimization along the reward model's inherent biases. Building on this analysis, we propose Directional Decoupling Alignment (D^2-Align), a novel framework that mitigates PMC by directionally correcting the reward signal. Specifically, our method first learns a directional correction within the reward model's embedding space while keeping the model frozen. This correction is then applied to the reward signal during the optimization process, preventing the model from collapsing into specific modes and thereby maintaining diversity. Our comprehensive evaluation, combining qualitative analysis with quantitative metrics for both quality and diversity, reveals that D^2-Align achieves superior alignment with human preferences.
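To make the abstract's description of directional correction concrete, here is a minimal, speculative sketch of the general idea: estimate a "bias" direction in the frozen reward model's embedding space and subtract its contribution from the reward used for RL. All names (`learn_bias_direction`, `corrected_reward`, `alpha`) and the PCA-based estimation are illustrative assumptions, not the authors' actual method or API.

```python
# Hedged sketch of a directional reward correction, assuming a frozen reward
# model that exposes per-sample embeddings alongside scalar rewards.
import torch
import torch.nn.functional as F


def learn_bias_direction(reward_embeddings: torch.Tensor) -> torch.Tensor:
    """Estimate a dominant bias direction in the reward model's embedding
    space, here as the top principal component of embeddings from
    high-reward samples (one plausible realization of the paper's idea)."""
    centered = reward_embeddings - reward_embeddings.mean(dim=0, keepdim=True)
    # Top right-singular vector = first principal component.
    _, _, vh = torch.linalg.svd(centered, full_matrices=False)
    return F.normalize(vh[0], dim=0)


def corrected_reward(embedding: torch.Tensor,
                     raw_reward: torch.Tensor,
                     bias_dir: torch.Tensor,
                     alpha: float = 1.0) -> torch.Tensor:
    """Penalize the component of each sample's embedding that lies along the
    learned bias direction, so RL optimization cannot keep climbing the
    reward model's inherent bias and collapse onto a single mode."""
    bias_component = embedding @ bias_dir  # shape: (batch,)
    return raw_reward - alpha * bias_component
```

The corrected reward would then replace the raw reward inside whichever diffusion RLHF objective is being optimized; the strength `alpha` is a hypothetical knob trading off preference alignment against diversity.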
Community
An interesting paper addressing mode collapse in SOTA diffusion models.
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API
- Direct Diffusion Score Preference Optimization via Stepwise Contrastive Policy-Pair Supervision (2025)
- GARDO: Reinforcing Diffusion Models without Reward Hacking (2025)
- The Image as Its Own Reward: Reinforcement Learning with Adversarial Reward for Image Generation (2025)
- DiverseGRPO: Mitigating Mode Collapse in Image Generation via Diversity-Aware GRPO (2025)
- Multi-dimensional Preference Alignment by Conditioning Reward Itself (2025)
- PC-Diffusion: Aligning Diffusion Models with Human Preferences via Preference Classifier (2025)
- PaCo-RL: Advancing Reinforcement Learning for Consistent Image Generation with Pairwise Reward Modeling (2025)