---
base_model: Qwen/Qwen-Image-Edit-2509
base_model_relation: quantized
datasets:
  - mit-han-lab/svdquant-datasets
language:
  - en
library_name: diffusers
license: apache-2.0
pipeline_tag: text-to-image
tags:
  - image-editing
  - SVDQuant
  - Qwen-Image-Edit-2509
  - Diffusion
  - Quantization
  - ICLR2025
---

Nunchaku Logo

Model Card for nunchaku-qwen-image-edit-2509

This repository contains Nunchaku-quantized versions of Qwen-Image-Edit-2509, an image-editing model built on Qwen-Image that advances complex text rendering. It is optimized for efficient inference while incurring minimal loss in quality.

News

Model Details

Model Description

  • Developed by: Nunchaku Team
  • Model type: image-to-image
  • License: apache-2.0
  • Quantized from model: Qwen-Image-Edit-2509

Model Files

  • Data Type: INT4 for non-Blackwell GPUs (pre-50-series); NVFP4 for Blackwell GPUs (50-series).
  • Rank: r32 for faster inference; r128 for better quality at the cost of some speed.
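The choice between the two data types can be made programmatically from the GPU's CUDA compute capability. The helper below is a hypothetical sketch for illustration, not part of nunchaku's API (nunchaku ships its own precision-detection utility):

```python
def pick_precision(compute_capability_major: int) -> str:
    """Pick the quantization data type for a given CUDA compute capability.

    Assumption: Blackwell GPUs (50-series, compute capability major >= 10)
    have native NVFP4 support; earlier architectures fall back to INT4.
    """
    return "fp4" if compute_capability_major >= 10 else "int4"

# On a machine with a CUDA GPU you would feed in the real value, e.g.:
#   import torch
#   major, _ = torch.cuda.get_device_capability()
print(pick_precision(12))  # RTX 50-series (Blackwell)
print(pick_precision(8))   # RTX 40-series (Ada) and earlier
```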

Base Models

Models with standard inference speed, for general use.

4-Step Distilled Models

4-step distilled models, fused with the Qwen-Image-Lightning-4steps-V2.0 or Qwen-Image-Edit-2509-Lightning-4steps-V1.0 LoRA at strength 1.0.

8-Step Distilled Models

8-step distilled models, fused with the Qwen-Image-Lightning-8steps-V2.0 or Qwen-Image-Edit-2509-Lightning-8steps-V1.0 LoRA at strength 1.0.

Model Sources

Usage
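A minimal usage sketch follows. It assumes nunchaku's `NunchakuQwenImageTransformer2DModel` loader and diffusers' `QwenImageEditPlusPipeline`; the repository path, checkpoint filename, and prompt are illustrative, so check the nunchaku documentation for the exact API and the file list above for the variant matching your GPU.

```python
import torch
from diffusers import QwenImageEditPlusPipeline
from diffusers.utils import load_image
from nunchaku import NunchakuQwenImageTransformer2DModel

# Load the quantized transformer. The filename is an example: pick the
# data type (int4 vs. fp4) and rank (r32 vs. r128) for your hardware.
transformer = NunchakuQwenImageTransformer2DModel.from_pretrained(
    "nunchaku-tech/nunchaku-qwen-image-edit-2509/"
    "svdq-int4_r32-qwen-image-edit-2509.safetensors"
)

# Plug the quantized transformer into the standard diffusers pipeline
# for the base model.
pipeline = QwenImageEditPlusPipeline.from_pretrained(
    "Qwen/Qwen-Image-Edit-2509",
    transformer=transformer,
    torch_dtype=torch.bfloat16,
).to("cuda")

image = load_image("input.png")
result = pipeline(
    image=[image],
    prompt="Replace the background with a snowy mountain landscape",
    num_inference_steps=40,  # use 4 or 8 with the distilled checkpoints
    true_cfg_scale=4.0,      # distilled checkpoints typically use 1.0
).images[0]
result.save("output.png")
```

With the 4-step or 8-step distilled checkpoints, lower `num_inference_steps` accordingly since the Lightning LoRA is already fused in.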

Performance


Citation

@inproceedings{li2024svdquant,
  title={SVDQuant: Absorbing Outliers by Low-Rank Components for 4-Bit Diffusion Models},
  author={Li*, Muyang and Lin*, Yujun and Zhang*, Zhekai and Cai, Tianle and Li, Xiuyu and Guo, Junxian and Xie, Enze and Meng, Chenlin and Zhu, Jun-Yan and Han, Song},
  booktitle={The Thirteenth International Conference on Learning Representations},
  year={2025}
}