Upload folder using huggingface_hub
- LICENSE +9 -0
- NOTICE +12 -0
- README.md +124 -3
- sd1.5/diffusion-dpo/unet.pth +3 -0
- sd1.5/dmpo/unet.pth +3 -0
- sd1.5/dspo/unet.pth +3 -0
- sdxl/diffusion-dpo/unet.pth +3 -0
- sdxl/dmpo/unet.pth +3 -0
- sdxl/dspo/unet.pth +3 -0
LICENSE
ADDED
@@ -0,0 +1,9 @@
Copyright (C) 2025 AIDC-AI
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and limitations under the License.

Use Restrictions
You agree not to use the Model or Derivatives of the Model: In any way that violates any applicable national, federal, state, local or international law or regulation; For the purpose of exploiting, harming or attempting to exploit or harm minors in any way; To generate or disseminate verifiably false information and/or content with the purpose of harming others; To generate or disseminate personal identifiable information that can be used to harm an individual; To defame, disparage or otherwise harass others; For fully automated decision making that adversely impacts an individual's legal rights or otherwise creates or modifies a binding, enforceable obligation; For any use intended to or which has the effect of discriminating against or harming individuals or groups based on online or offline social behavior or known or predicted personal or personality characteristics; To exploit any of the vulnerabilities of a specific group of persons based on their age, social, physical or mental characteristics, in order to materially distort the behavior of a person pertaining to that group in a manner that causes or is likely to cause that person or another person physical or psychological harm; For any use intended to or which has the effect of discriminating against individuals or groups based on legally protected characteristics or categories; To provide medical advice and medical results interpretation; To generate or disseminate information for the purpose to be used for administration of justice, law enforcement, immigration or asylum processes, such as predicting an individual will commit fraud/crime commitment (e.g. by text profiling, drawing causal relationships between assertions made in documents, indiscriminate and arbitrarily-targeted use).
NOTICE
ADDED
@@ -0,0 +1,12 @@
Copyright (C) 2025 AIDC-AI
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.

Use Restrictions
You agree not to use the Model or Derivatives of the Model: In any way that violates any applicable national, federal, state, local or international law or regulation; For the purpose of exploiting, harming or attempting to exploit or harm minors in any way; To generate or disseminate verifiably false information and/or content with the purpose of harming others; To generate or disseminate personal identifiable information that can be used to harm an individual; To defame, disparage or otherwise harass others; For fully automated decision making that adversely impacts an individual's legal rights or otherwise creates or modifies a binding, enforceable obligation; For any use intended to or which has the effect of discriminating against or harming individuals or groups based on online or offline social behavior or known or predicted personal or personality characteristics; To exploit any of the vulnerabilities of a specific group of persons based on their age, social, physical or mental characteristics, in order to materially distort the behavior of a person pertaining to that group in a manner that causes or is likely to cause that person or another person physical or psychological harm; For any use intended to or which has the effect of discriminating against individuals or groups based on legally protected characteristics or categories; To provide medical advice and medical results interpretation; To generate or disseminate information for the purpose to be used for administration of justice, law enforcement, immigration or asylum processes, such as predicting an individual will commit fraud/crime commitment (e.g. by text profiling, drawing causal relationships between assertions made in documents, indiscriminate and arbitrarily-targeted use).

This model was trained based on the following models:
1. stable-diffusion-v1-5 (https://huggingface.co/stable-diffusion-v1-5/stable-diffusion-v1-5), license: Open RAIL-M License (https://huggingface.co/spaces/CompVis/stable-diffusion-license). Use of the model shall be subject to the use-based restrictions in paragraph 5 of the Open RAIL-M License.
2. stable-diffusion-xl (https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0), license: Open RAIL++-M License (https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0/blob/main/LICENSE.md). Use of the model shall be subject to the use-based restrictions in paragraph 5 of the Open RAIL++-M License.
README.md
CHANGED
@@ -1,3 +1,124 @@
# Diffusion-SDPO: Safeguarded Direct Preference Optimization for Diffusion Models

<p align="center">
<a href="https://github.com/AIDC-AI/Diffusion-SDPO">
<img src="https://img.shields.io/badge/Code-GitHub-181717?logo=github" alt="GitHub Repo" height="20">
</a>
</p>

## Introduction

**Diffusion-SDPO** is a plug-in training rule for preference alignment of diffusion models. It computes an adaptive scale for the loser branch based on the alignment between winner and loser output-space gradients, so that each update theoretically **does not increase the winner's loss to first order**. This preserves the preferred output while still widening the preference margin. The safeguard is model-agnostic and drops into Diffusion-DPO, DSPO, and DMPO with negligible overhead. See [our paper](https://arxiv.org/abs/2511.03317) for details (derivation of the safety bound, the output-space approximation, and the closed-form solution).
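As a rough illustration (a schematic sketch, not the paper's exact closed form): given flattened output-space gradients `g_w` for the winner term and `g_l` for the loser term, the loser contribution is shrunk whenever it opposes the winner direction, so that the combined update keeps a non-negative inner product with `g_w`. The function name and the precise role of `mu` below are illustrative, not the repo's implementation.

```python
import numpy as np

def safeguarded_scale(g_w: np.ndarray, g_l: np.ndarray, mu: float = 2.0) -> float:
    """Scale s for the loser-branch gradient so the combined direction
    g_w + s * g_l cannot increase the winner loss to first order.
    Schematic: mu >= 1 mirrors the safeguard strength (larger = more conservative)."""
    dot = float(np.dot(g_w, g_l))
    if dot >= 0.0:
        return 1.0  # loser gradient already agrees with the winner direction
    # Largest s with <g_w + s * g_l, g_w> >= (1 - 1/mu) * ||g_w||^2 >= 0
    return min(1.0, float(np.dot(g_w, g_w)) / (mu * -dot))

rng = np.random.default_rng(0)
g_w, g_l = rng.normal(size=8), rng.normal(size=8)
s = safeguarded_scale(g_w, g_l)
assert np.dot(g_w + s * g_l, g_w) >= 0.0  # winner loss not increased to first order
```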
This repository is the official implementation of the paper [Diffusion-SDPO](https://arxiv.org/abs/2511.03317).

<img width="987" alt="image" src="figs/sdpo_img.png">

## Setup

```bash
pip install -r requirements.txt
```
## Model Checkpoints

All checkpoints are initialized from Stable Diffusion (SD1.5 or SDXL) and trained as described in the paper.
Each name below means *{base model} + {DPO variant} with our safeguarded winner-preserving rule (SDPO)*:

- [**SD1.5-Diffusion-DPO (with SDPO)**](https://huggingface.co/AIDC-AI/Diffusion-SDPO/blob/main/sd1.5/diffusion-dpo/unet.pth) – SD1.5 + Diffusion-DPO augmented by our safeguard
- [**SD1.5-DSPO (with SDPO)**](https://huggingface.co/AIDC-AI/Diffusion-SDPO/blob/main/sd1.5/dspo/unet.pth) – SD1.5 + DSPO augmented by our safeguard
- [**SD1.5-DMPO (with SDPO)**](https://huggingface.co/AIDC-AI/Diffusion-SDPO/blob/main/sd1.5/dmpo/unet.pth) – SD1.5 + DMPO augmented by our safeguard
- [**SDXL-Diffusion-DPO (with SDPO)**](https://huggingface.co/AIDC-AI/Diffusion-SDPO/blob/main/sdxl/diffusion-dpo/unet.pth) – SDXL + Diffusion-DPO augmented by our safeguard
- [**SDXL-DSPO (with SDPO)**](https://huggingface.co/AIDC-AI/Diffusion-SDPO/blob/main/sdxl/dspo/unet.pth) – SDXL + DSPO augmented by our safeguard
- [**SDXL-DMPO (with SDPO)**](https://huggingface.co/AIDC-AI/Diffusion-SDPO/blob/main/sdxl/dmpo/unet.pth) – SDXL + DMPO augmented by our safeguard
## Model Training

### Example: SD1.5 + Diffusion-DPO with SDPO safeguard

Start training by running the provided script. It auto-detects the number of GPUs and launches with `accelerate`.

```bash
bash scripts/train/sd15_diffusion_dpo.sh
```

**Key arguments in this example**

* `--train_method` selects the baseline preference-optimization method. Choices: `diffusion-dpo`, `dspo`, `dmpo`.
* `--beta_dpo` controls the DPO temperature, i.e. the strength of the preference term.
* `--use_winner_preserving` enables our SDPO safeguard, which rescales only the loser branch's backward signal so the winner loss does not increase to first order.
* `--winner_preserving_mu` sets the safeguard strength. Larger values are more conservative.
* `--mixed_precision bf16` and `--allow_tf32` improve throughput on recent NVIDIA GPUs.
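To make the interaction between `--beta_dpo` and the safeguard concrete, here is a toy, self-contained training step on random tensors. This is an illustration only: it uses per-sample squared errors in place of real noise-prediction losses, applies the safeguard by rescaling the loser term itself rather than only its backward signal, and the exact semantics of `mu` are an assumption.

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
beta_dpo = 500.0   # --beta_dpo (toy value)
mu = 2.0           # --winner_preserving_mu (illustrative semantics)

pred = torch.randn(2, 16, requires_grad=True)   # toy predictions: [winner, loser]
target = torch.randn(2, 16)                     # toy noise targets
ref_err = torch.tensor([1.0, 1.2])              # frozen reference model's errors

err = ((pred - target) ** 2).mean(dim=1)        # [err_w, err_l]

# Output-space gradients of the winner term and the (negated) loser term
g_w = torch.autograd.grad(err[0], pred, retain_graph=True)[0].flatten()
g_l = torch.autograd.grad(-err[1], pred, retain_graph=True)[0].flatten()

# Safeguard: shrink the loser contribution when it opposes the winner direction
dot = torch.dot(g_w, g_l).item()
scale = 1.0 if dot >= 0 else min(1.0, g_w.dot(g_w).item() / (mu * -dot))

# DPO-style preference loss with the rescaled loser term
margin = (err[0] - ref_err[0]) - scale * (err[1] - ref_err[1])
loss = -F.logsigmoid(-beta_dpo * margin)
loss.backward()
```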
## Evaluation

We provide one-click evaluation scripts for SD1.5 and SDXL. They take a `unet.pth` checkpoint and will:

1) generate images for three prompt groups: **papv2**, **hpsv2**, **partiprompts**
2) compute **PickScore**, **HPSv2**, **Aesthetics**, **CLIP**, and **ImageReward**
3) print a summary to the console
4) optionally, compare two model checkpoints and report per-metric win rates across all prompts

The prompts come from `prompts/`:
- `papv2.json` is deduplicated from the [Pick-a-Pic v2](https://huggingface.co/datasets/yuvalkirstain/pickapic_v2) test set so that all prompts are unique
- `hpsv2.json` and `partiprompts.json` are standard prompt suites for qualitative and quantitative checks, taken from [HPDv2](https://huggingface.co/datasets/zhwang/HPDv2/tree/main/benchmark) and [Parti](https://github.com/google-research/parti)

### Quick start

**SD1.5 checkpoint**
```bash
bash scripts/eval/test_sd15.sh /path/to/your/unet.pth
```

**SDXL checkpoint**
```bash
bash scripts/eval/test_sdxl.sh /path/to/your/unet.pth
```

**Win-rate comparison**
```bash
# A/B win-rate comparison across all prompts from one group (papv2, hpsv2, partiprompts)
# A.json / B.json are the generation manifests produced by your eval runs.

bash scripts/eval/test_vs.sh \
  --json_a path/to/A.json \
  --json_b path/to/B.json \
  --label_a "your label A" \
  --label_b "your label B"
```

Example:
```bash
bash scripts/eval/test_vs.sh \
  --json_a /path/to/sdxl/diffusion-dpo/hpsv2_seed0_1024x1024_50s_7.5cfg.json \
  --json_b /path/to/sdxl/dmpo/hpsv2_seed0_1024x1024_50s_7.5cfg.json \
  --label_a "diffusion_dpo_sdxl_hpsv2" \
  --label_b "dmpo_sdxl_hpsv2"
```
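Conceptually, a per-metric win rate is just a prompt-by-prompt comparison of the two runs' scores. A minimal sketch of that computation (the dict-based "prompt → score" shape is an assumption for illustration, not the repo's actual manifest schema; ties count as losses for A here):

```python
def win_rate(scores_a: dict, scores_b: dict) -> float:
    """Fraction of shared prompts where run A scores strictly higher than run B
    on one metric. `scores_a` / `scores_b` map prompt -> metric score."""
    shared = scores_a.keys() & scores_b.keys()
    if not shared:
        return 0.0
    wins = sum(scores_a[p] > scores_b[p] for p in shared)
    return wins / len(shared)

a = {"a cat": 21.3, "a dog": 20.1, "a boat": 19.8}
b = {"a cat": 20.9, "a dog": 20.5, "a boat": 19.2}
print(win_rate(a, b))  # A wins on 2 of 3 prompts -> 0.6666666666666666
```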
## Citation

If you find Diffusion-SDPO helpful, please cite our paper:

```bibtex
@article{fu2025diffusion,
  title={{Diffusion-SDPO}: Safeguarded Direct Preference Optimization for Diffusion Models},
  author={Fu, Minghao and Wang, Guo-Hua and Cui, Tianyu and Chen, Qing-Guo and Xu, Zhao and Luo, Weihua and Zhang, Kaifu},
  journal={arXiv:2511.03317},
  year={2025}
}
```

## Acknowledgments

The code is built upon [Diffusers](https://github.com/huggingface/diffusers), [Transformers](https://github.com/huggingface/transformers), [Diffusion-DPO](https://github.com/SalesforceAIResearch/DiffusionDPO), and [DSPO](https://github.com/huaishengzhu/DSPO/tree/main).

## License

This project is licensed under the Apache License, Version 2.0 (SPDX-License-Identifier: Apache-2.0) with additional use restrictions. The full license text is in ./LICENSE.

## Disclaimer

We used compliance-checking algorithms during training to ensure, to the best of our ability, that the trained model(s) comply with applicable requirements. Due to the complexity of the data and the diversity of model usage scenarios, we cannot guarantee that the model is completely free of copyright issues or improper content. If you believe anything infringes on your rights or generates improper content, please contact us, and we will promptly address the matter.
sd1.5/diffusion-dpo/unet.pth
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:79b284b168bccc7978ffaa57238f9dc7841d913b0dfc618c633680eb496b0dcd
size 3438319424
sd1.5/dmpo/unet.pth
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:57c36e0d8d785d31cf4db75da2d3ff502dd83ff9f11681f23d2e0676241cc291
size 3438319424
sd1.5/dspo/unet.pth
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:d7c7cce2d3b782623e9a1d50ad520ceaed574a98acf2b59f9c7dbd41fea29d77
size 1719188749
sdxl/diffusion-dpo/unet.pth
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:8ca2ec6ab026ea4bb5aa5c99aea626b9d149777b541142a675aef61e90306033
size 5135334337
sdxl/dmpo/unet.pth
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:7662b304e80367c3bdce9605877057edcbb57a41ab2752a805146ca85b16dbaa
size 5135334337
sdxl/dspo/unet.pth
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:4dd74d7d24801261f810b8b6bbf91bb2d5c71dcc38182075fe9d9e8ed937b1bf
size 5135334401