Populate dataset card for REPA-E generated samples and artifacts
#1
by nielsr (HF Staff) · opened
README.md · ADDED
---
task_categories:
- unconditional-image-generation
tags:
- vae
- diffusion-models
- imagenet
- image-generation
---

# REPA-E: Generated Samples and Artifacts

This repository hosts the generated image samples and artifacts associated with the paper [REPA-E: Unlocking VAE for End-to-End Tuning with Latent Diffusion Transformers](https://huggingface.co/papers/2504.10483). These samples support quantitative evaluation of the REPA-E method, which enables end-to-end tuning of latent diffusion models and VAEs and achieves state-of-the-art image generation performance.

- 🌐 [Project Page](https://end2end-diffusion.github.io/)
- 📄 [Paper](https://huggingface.co/papers/2504.10483)
- 💻 [Code Repository](https://github.com/End2End-Diffusion/REPA-E)

## Overview

We address a fundamental question: **Can latent diffusion models and their VAE tokenizer be trained end-to-end?** While training both components jointly with the standard diffusion loss is observed to be ineffective (often degrading final performance), we show that this limitation can be overcome using a simple representation-alignment (REPA) loss. Our proposed method, **REPA-E**, enables stable and effective joint training of both the VAE and the diffusion model.

**REPA-E** significantly accelerates training, achieving over a **17×** speedup compared to REPA and a **45×** speedup over the vanilla training recipe. Interestingly, end-to-end tuning also improves the VAE itself: the resulting **E2E-VAE** provides better latent structure and serves as a **drop-in replacement** for existing VAEs (e.g., SD-VAE), improving convergence and generation quality across diverse LDM architectures. Our method achieves state-of-the-art FID scores on ImageNet 256×256: **1.12** with CFG and **1.69** without CFG. The generated `.npz` files for these evaluations can be found in this repository (e.g., under `labelsampling-equal-run1`).
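
As a quick sanity check, the `.npz` archives can be inspected with NumPy before running any evaluation. A minimal sketch (the path below is illustrative; substitute an actual file from this repository):

```python
import numpy as np

# Illustrative path; point this at a real .npz file from the repo,
# e.g. one stored under `labelsampling-equal-run1`.
archive = np.load("labelsampling-equal-run1/samples.npz")

# List the stored arrays with their shapes and dtypes rather than
# assuming a particular layout.
for key in archive.files:
    print(key, archive[key].shape, archive[key].dtype)
```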

## Sample Usage

This section shows how to load the REPA-E fine-tuned VAE (E2E-VAE) and use it in latent diffusion training, where it acts as a drop-in replacement for the original VAE and significantly improves generation performance.

### ⚡️ Quickstart
```python
from diffusers import AutoencoderKL

# Load the end-to-end tuned VAE (ImageNet VAE example)
vae = AutoencoderKL.from_pretrained("REPA-E/e2e-vavae-hf").to("cuda")

# Or load a text-to-image VAE
vae = AutoencoderKL.from_pretrained("REPA-E/e2e-flux-vae").to("cuda")

# Use in your pipeline with vae.encode(...) / vae.decode(...)
```
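
For example, a minimal encode/decode round trip with the loaded VAE looks like the sketch below (random data stands in for a real image batch of shape `[B, 3, H, W]` scaled to `[-1, 1]`):

```python
import torch

x = torch.randn(1, 3, 256, 256, device="cuda")  # stand-in for a real image batch

with torch.no_grad():
    z = vae.encode(x).latent_dist.sample()  # image -> latent
    x_rec = vae.decode(z).sample            # latent -> image
print(z.shape, x_rec.shape)
```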

### 🧩 Complete Example
Full workflow for encoding and decoding images:
```python
from io import BytesIO

import numpy as np
import requests
import torch
from diffusers import AutoencoderKLQwenImage
from PIL import Image

response = requests.get("https://raw.githubusercontent.com/End2End-Diffusion/fuse-dit/main/assets/example.png")
device = "cuda"

# Load the image as a [1, 3, H, W] float tensor normalized to [-1, 1]
image = torch.from_numpy(
    np.array(
        Image.open(BytesIO(response.content))
    )
).permute(2, 0, 1).unsqueeze(0).to(torch.float32) / 127.5 - 1
image = image.to(device)

vae = AutoencoderKLQwenImage.from_pretrained("REPA-E/e2e-qwenimage-vae").to(device)

# Add frame dimension (required for QwenImage VAE)
image_ = image.unsqueeze(2)

with torch.no_grad():
    latents = vae.encode(image_).latent_dist.sample()
    reconstructed = vae.decode(latents).sample

# Remove frame dimension
latents = latents.squeeze(2)
reconstructed = reconstructed.squeeze(2)
```
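
To save or view the result, the reconstruction can be mapped back to 8-bit RGB by inverting the `[-1, 1]` normalization used above (a sketch; the output filename is illustrative):

```python
# [3, H, W] in [-1, 1] -> [H, W, 3] uint8 in [0, 255]
out = ((reconstructed[0].clamp(-1, 1) + 1) * 127.5).to(torch.uint8)
Image.fromarray(out.permute(1, 2, 0).cpu().numpy()).save("reconstruction.png")
```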

## Quantitative Results
The tables below report generation performance as gFID on 50k samples, with and without classifier-free guidance (CFG); lower is better. We compare models trained end-to-end with **REPA-E** against models trained with a frozen REPA-E fine-tuned VAE (**E2E-VAE**). All linked checkpoints below are hosted on our [🤗 Hugging Face Hub](https://huggingface.co/REPA-E). To reproduce these results, download the respective checkpoints to the `pretrained` folder and run the evaluation script as detailed in the [GitHub repository](https://github.com/End2End-Diffusion/REPA-E/blob/main/README.md#5-generate-samples-and-run-evaluation).
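
As a sketch, a linked checkpoint can be fetched with `huggingface_hub` (the repo id is one of the checkpoints in the tables below; the exact directory layout expected by the evaluation script is described in the GitHub README):

```python
from huggingface_hub import snapshot_download

# Download one of the linked checkpoints into the `pretrained` folder
snapshot_download(repo_id="REPA-E/sit-repae-vavae", local_dir="pretrained/sit-repae-vavae")
```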

#### A. End-to-End Training (REPA-E)
| Tokenizer | Generation Model | Epochs | gFID-50k ↓ | gFID-50k (CFG) ↓ |
|:---------|:----------------|:-----:|:----:|:---:|
| [**SD-VAE<sup>*</sup>**](https://huggingface.co/REPA-E/sdvae) | [**SiT-XL/2**](https://huggingface.co/REPA-E/sit-repae-sdvae) | 80 | 4.07 | 1.67<sup>a</sup> |
| [**IN-VAE<sup>*</sup>**](https://huggingface.co/REPA-E/invae) | [**SiT-XL/1**](https://huggingface.co/REPA-E/sit-repae-invae) | 80 | 4.09 | 1.61<sup>b</sup> |
| [**VA-VAE<sup>*</sup>**](https://huggingface.co/REPA-E/vavae) | [**SiT-XL/1**](https://huggingface.co/REPA-E/sit-repae-vavae) | 80 | 4.05 | 1.73<sup>c</sup> |

\* The "Tokenizer" column refers to the initial VAE used for joint REPA-E training. The final (jointly optimized) VAE is bundled within the generation model checkpoint.

#### B. Traditional Latent Diffusion Model Training (Frozen VAE)
| Tokenizer | Generation Model | Method | Epochs | gFID-50k ↓ | gFID-50k (CFG) ↓ |
|:------|:---------|:----------------|:-----:|:----:|:---:|
| SD-VAE | SiT-XL/2 | SiT | 1400 | 8.30 | 2.06 |
| SD-VAE | SiT-XL/2 | REPA | 800 | 5.84 | 1.28 |
| VA-VAE | LightningDiT-XL/1 | LightningDiT | 800 | 2.05 | 1.25 |
| [**E2E-VAVAE (Ours)**](https://huggingface.co/REPA-E/e2e-vavae) | [**SiT-XL/1**](https://huggingface.co/REPA-E/sit-ldm-e2e-vavae) | REPA | 800 | **1.69** | **1.12**<sup>†</sup> |

In this setup, the VAE is kept frozen and only the generator is trained. Models using our E2E-VAE (fine-tuned via REPA-E) consistently outperform baselines such as SD-VAE and VA-VAE, achieving state-of-the-art performance when incorporating the REPA alignment objective.

**Note**: The results for the last three rows (REPA, LightningDiT, and E2E-VAVAE) are obtained using the class-balanced sampling protocol (50 images per class).
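
For reference, a class-balanced label set for ImageNet under this protocol (50 labels for each of the 1,000 classes, 50k in total) can be built as in this sketch:

```python
import torch

# 50 consecutive copies of each class id 0..999 -> 50,000 labels
labels = torch.arange(1000).repeat_interleave(50)
print(labels.shape)  # torch.Size([50000])
```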

## Citation
If you find our work useful, please consider citing:

```bibtex
@article{leng2025repae,
  title={REPA-E: Unlocking VAE for End-to-End Tuning with Latent Diffusion Transformers},
  author={Xingjian Leng and Jaskirat Singh and Yunzhong Hou and Zhenchang Xing and Saining Xie and Liang Zheng},
  year={2025},
  journal={arXiv preprint arXiv:2504.10483},
}
```