Javad Taghia committed · Commit b6094c3 · 1 Parent(s): 5bf027e

Update model card

Files changed (1): README.md (+35 −152)

README.md CHANGED
@@ -6,192 +6,75 @@ pipeline_tag: text-to-image
  library_name: diffusers
  ---

- <h1 align="center">⚡️Z-Image<br><sub><sup>An Efficient Image Generation Foundation Model with Single-Stream Diffusion Transformer</sup></sub></h1>
-
- <div align="center">
-
- [![Official Site](https://img.shields.io/badge/Official%20Site-333399.svg?logo=homepage)](https://tongyi-mai.github.io/Z-Image-blog/)&#160;
- [![GitHub](https://img.shields.io/badge/GitHub-Z--Image-181717?logo=github&logoColor=white)](https://github.com/Tongyi-MAI/Z-Image)&#160;
- [![Hugging Face](https://img.shields.io/badge/%F0%9F%A4%97%20Checkpoint-Z--Image--Turbo-yellow)](https://huggingface.co/Tongyi-MAI/Z-Image-Turbo)&#160;
- [![Hugging Face](https://img.shields.io/badge/%F0%9F%A4%97%20Online_Demo-Z--Image--Turbo-blue)](https://huggingface.co/spaces/Tongyi-MAI/Z-Image-Turbo)&#160;
- [![Hugging Face](https://img.shields.io/badge/%F0%9F%A4%97%20Mobile_Demo-Z--Image--Turbo-red)](https://huggingface.co/spaces/akhaliq/Z-Image-Turbo)&#160;
- [![ModelScope Model](https://img.shields.io/badge/🤖%20Checkpoint-Z--Image--Turbo-624aff)](https://www.modelscope.cn/models/Tongyi-MAI/Z-Image-Turbo)&#160;
- [![ModelScope Space](https://img.shields.io/badge/🤖%20Online_Demo-Z--Image--Turbo-17c7a7)](https://www.modelscope.cn/aigc/imageGeneration?tab=advanced&versionId=469191&modelType=Checkpoint&sdVersion=Z_IMAGE_TURBO&modelUrl=modelscope%253A%252F%252FTongyi-MAI%252FZ-Image-Turbo%253Frevision%253Dmaster%7D%7BOnline)&#160;
- [![Art Gallery PDF](https://img.shields.io/badge/%F0%9F%96%BC%20Art_Gallery-PDF-ff69b4)](assets/Z-Image-Gallery.pdf)&#160;
- [![Web Art Gallery](https://img.shields.io/badge/%F0%9F%8C%90%20Web_Art_Gallery-online-00bfff)](https://modelscope.cn/studios/Tongyi-MAI/Z-Image-Gallery/summary)&#160;
- <a href="https://arxiv.org/abs/2511.22699" target="_blank"><img src="https://img.shields.io/badge/Report-b5212f.svg?logo=arxiv" height="21px"></a>


- Welcome to the official repository for the Z-Image (造相) project!
-
- </div>
-
-
-
- ## ✨ Z-Image
-
- Z-Image is a powerful and highly efficient image generation model with **6B** parameters. There are currently three variants:
-
- - 🚀 **Z-Image-Turbo** – A distilled version of Z-Image that matches or exceeds leading competitors with only **8 NFEs** (Number of Function Evaluations). It offers **⚡️sub-second inference latency⚡️** on enterprise-grade H800 GPUs and fits comfortably within **16 GB of VRAM** on consumer devices. It excels at photorealistic image generation, bilingual text rendering (English & Chinese), and robust instruction adherence.
-
- - 🧱 **Z-Image-Base** – The non-distilled foundation model. By releasing this checkpoint, we aim to unlock the full potential of community-driven fine-tuning and custom development.
-
- - ✍️ **Z-Image-Edit** – A variant fine-tuned from Z-Image specifically for image editing tasks. It supports creative image-to-image generation with impressive instruction-following capabilities, allowing precise edits based on natural language prompts.
-
- ### 📥 Model Zoo
-
- | Model | Hugging Face | ModelScope |
- | :--- | :--- | :--- |
- | **Z-Image-Turbo** | [![Hugging Face](https://img.shields.io/badge/%F0%9F%A4%97%20Checkpoint%20-Z--Image--Turbo-yellow)](https://huggingface.co/Tongyi-MAI/Z-Image-Turbo) <br> [![Hugging Face Space](https://img.shields.io/badge/%F0%9F%A4%97%20Online%20Demo-Z--Image--Turbo-blue)](https://huggingface.co/spaces/Tongyi-MAI/Z-Image-Turbo) | [![ModelScope Model](https://img.shields.io/badge/🤖%20%20Checkpoint-Z--Image--Turbo-624aff)](https://www.modelscope.cn/models/Tongyi-MAI/Z-Image-Turbo) <br> [![ModelScope Space](https://img.shields.io/badge/%F0%9F%A4%96%20Online%20Demo-Z--Image--Turbo-17c7a7)](https://www.modelscope.cn/aigc/imageGeneration?tab=advanced&versionId=469191&modelType=Checkpoint&sdVersion=Z_IMAGE_TURBO&modelUrl=modelscope%3A%2F%2FTongyi-MAI%2FZ-Image-Turbo%3Frevision%3Dmaster) |
- | **Z-Image-Base** | *To be released* | *To be released* |
- | **Z-Image-Edit** | *To be released* | *To be released* |
-
- ### 🖼️ Showcase
-
- 📸 **Photorealistic Quality**: **Z-Image-Turbo** delivers strong photorealistic image generation while maintaining excellent aesthetic quality.
-
- ![Showcase of Z-Image on Photorealistic Image Generation](assets/showcase_realistic.png)
-
- 📖 **Accurate Bilingual Text Rendering**: **Z-Image-Turbo** excels at accurately rendering complex Chinese and English text.
-
- ![Showcase of Z-Image on Bilingual Text Rendering](assets/showcase_rendering.png)
-
- 💡 **Prompt Enhancing & Reasoning**: The Prompt Enhancer empowers the model with reasoning capabilities, enabling it to transcend surface-level descriptions and tap into underlying world knowledge.
-
- ![Showcase of Z-Image on Prompt Enhancing and Reasoning](assets/reasoning.png)
-
- 🧠 **Creative Image Editing**: **Z-Image-Edit** shows a strong understanding of bilingual editing instructions, enabling imaginative and flexible image transformations.
-
- ![Showcase of Z-Image-Edit on Image Editing](assets/showcase_editing.png)
-
- ### 🏗️ Model Architecture
- We adopt a **Scalable Single-Stream DiT** (S3-DiT) architecture. In this setup, text tokens, visual semantic tokens, and image VAE tokens are concatenated at the sequence level to serve as a unified input stream, maximizing parameter efficiency compared to dual-stream approaches.
-
- ![Architecture of Z-Image and Z-Image-Edit](assets/architecture.webp)
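The single-stream concatenation described above can be sketched in a few lines. This is a toy illustration with made-up dimensions and random tensors, not the actual Z-Image implementation:

```python
import torch

# Toy sketch of single-stream token concatenation (S3-DiT style).
# All dimensions are invented for illustration; the real model uses its own
# text encoder, semantic tokenizer, and VAE.
batch, dim = 1, 64
text_tokens = torch.randn(batch, 20, dim)   # encoded prompt tokens
sem_tokens = torch.randn(batch, 16, dim)    # visual semantic tokens
vae_tokens = torch.randn(batch, 256, dim)   # image VAE latent tokens

# One unified sequence: every transformer block attends over all modalities
# with a single set of weights, instead of keeping separate text/image streams.
stream = torch.cat([text_tokens, sem_tokens, vae_tokens], dim=1)
print(stream.shape)  # torch.Size([1, 292, 64])
```

This is where the parameter-efficiency claim comes from: a dual-stream design needs per-modality blocks, while a single stream reuses one block stack for the whole sequence.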
-
- ### 📈 Performance
- According to the Elo-based Human Preference Evaluation on [*Alibaba AI Arena*](https://aiarena.alibaba-inc.com/corpora/arena/leaderboard?arenaType=T2I), Z-Image-Turbo is highly competitive with other leading models and achieves state-of-the-art results among open-source models.
-
- <p align="center">
- <a href="https://aiarena.alibaba-inc.com/corpora/arena/leaderboard?arenaType=T2I">
- <img src="assets/leaderboard.png" alt="Z-Image Elo Rating on AI Arena"/><br />
- <span style="font-size:1.05em; cursor:pointer; text-decoration:underline;">Click to view the full leaderboard</span>
- </a>
- </p>
-
- ### 🚀 Quick Start
- Install the latest version of diffusers using the following command:
- <details>
- <summary><sup>Click here for details on why you need to install diffusers from source</sup></summary>
-
- We have submitted two pull requests ([#12703](https://github.com/huggingface/diffusers/pull/12703) and [#12715](https://github.com/huggingface/diffusers/pull/12715)) to the 🤗 diffusers repository to add support for Z-Image. Both PRs have been merged into the diffusers main branch but are not yet part of a stable release.
- Therefore, you need to install diffusers from source for the latest features and Z-Image support.

- </details>

  ```bash
- pip install git+https://github.com/huggingface/diffusers
  ```

  ```python
  import torch
  from diffusers import ZImagePipeline

- # 1. Load the pipeline
- # Use bfloat16 for optimal performance on supported GPUs
  pipe = ZImagePipeline.from_pretrained(
-     "Tongyi-MAI/Z-Image-Turbo",
-     torch_dtype=torch.bfloat16,
      low_cpu_mem_usage=False,
  )
  pipe.to("cuda")

- # [Optional] Attention Backend
- # Diffusers uses SDPA by default. Switch to Flash Attention for better efficiency if supported:
- # pipe.transformer.set_attention_backend("flash")    # Enable Flash-Attention-2
- # pipe.transformer.set_attention_backend("_flash_3") # Enable Flash-Attention-3
-
- # [Optional] Model Compilation
- # Compiling the DiT model accelerates inference, but the first run will take longer to compile.
- # pipe.transformer.compile()

- # [Optional] CPU Offloading
- # Enable CPU offloading for memory-constrained devices.
- # pipe.enable_model_cpu_offload()
-
- prompt = "Young Chinese woman in red Hanfu, intricate embroidery. Impeccable makeup, red floral forehead pattern. Elaborate high bun, golden phoenix headdress, red flowers, beads. Holds round folding fan with lady, trees, bird. Neon lightning-bolt lamp (⚡️), bright yellow glow, above extended left palm. Soft-lit outdoor night background, silhouetted tiered pagoda (西安大雁塔), blurred colorful distant lights."
-
- # 2. Generate Image
  image = pipe(
      prompt=prompt,
      height=1024,
      width=1024,
-     num_inference_steps=9,  # This actually results in 8 DiT forwards
-     guidance_scale=0.0,     # Guidance should be 0 for the Turbo models
      generator=torch.Generator("cuda").manual_seed(42),
  ).images[0]

- image.save("example.png")
  ```

- ## 🔬 Decoupled-DMD: The Acceleration Magic Behind Z-Image
-
- [![arXiv](https://img.shields.io/badge/arXiv-2511.22677-b31b1b.svg)](https://arxiv.org/abs/2511.22677)
-
- Decoupled-DMD is the core few-step distillation algorithm that powers the 8-step Z-Image model.
-
- Our core insight in Decoupled-DMD is that the success of existing DMD (Distribution Matching Distillation) methods results from two independent, collaborating mechanisms:
-
- - **CFG Augmentation (CA)**: The primary **engine** 🚀 driving the distillation process, a factor largely overlooked in previous work.
- - **Distribution Matching (DM)**: Acts more as a **regularizer** ⚖️, ensuring the stability and quality of the generated output.
-
- By recognizing and decoupling these two mechanisms, we were able to study and optimize them in isolation. This ultimately motivated an improved distillation process that significantly enhances the performance of few-step generation.
-
- ![Diagram of Decoupled-DMD](assets/decoupled-dmd.webp)
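Schematically, and in our own notation rather than the paper's exact formulation, a DMD-style update with a CFG-augmented teacher can be written as:

```latex
% Hedged sketch of a generic DMD-style objective; notation is ours.
\nabla_\theta \mathcal{L}_{\mathrm{DMD}}
  \;\propto\; \mathbb{E}_{t,\,z}\!\left[
    \big(s_{\mathrm{fake}}(x_t, t) - s_{\mathrm{teacher}}(x_t, t)\big)\,
    \frac{\partial G_\theta(z)}{\partial \theta}
  \right],
\qquad
s_{\mathrm{teacher}}^{\mathrm{CFG}}
  = s_{\mathrm{uncond}} + w\,\big(s_{\mathrm{cond}} - s_{\mathrm{uncond}}\big),
\quad w > 1.
```

In this reading, the CFG-extrapolated teacher score supplies the driving signal (CA, the "engine"), while matching the fake score against the teacher keeps the student's output distribution anchored (DM, the "regularizer"); see the paper for the precise formulation.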
 
 

- ## 🤖 DMDR: Fusing DMD with Reinforcement Learning
-
- [![arXiv](https://img.shields.io/badge/arXiv-2511.13649-b31b1b.svg)](https://arxiv.org/abs/2511.13649)
-
- Building upon the strong foundation of Decoupled-DMD, our 8-step Z-Image model already demonstrates exceptional capabilities. To achieve further improvements in semantic alignment, aesthetic quality, and structural coherence, while producing images with richer high-frequency details, we present **DMDR**.
-
- The core insight behind DMDR is that Reinforcement Learning (RL) and Distribution Matching Distillation (DMD) can be synergistically integrated during the post-training of few-step models. We demonstrate that:
-
- - **RL unlocks the performance of DMD** 🚀
- - **DMD effectively regularizes RL** ⚖️
-
- ![Diagram of DMDR](assets/DMDR.webp)
-
- ## ⏬ Download
- ```bash
- pip install -U huggingface_hub
- HF_XET_HIGH_PERFORMANCE=1 hf download Tongyi-MAI/Z-Image-Turbo
- ```

- ## 📜 Citation
-
- If you find our work useful in your research, please consider citing:
-
- ```bibtex
- @article{team2025zimage,
-   title={Z-Image: An Efficient Image Generation Foundation Model with Single-Stream Diffusion Transformer},
-   author={Z-Image Team},
-   journal={arXiv preprint arXiv:2511.22699},
-   year={2025}
- }
-
- @article{liu2025decoupled,
-   title={Decoupled DMD: CFG Augmentation as the Spear, Distribution Matching as the Shield},
-   author={Liu, Dongyang and Gao, Peng and Liu, David and Du, Ruoyi and Li, Zhen and Wu, Qilong and Jin, Xin and Cao, Sihan and Zhang, Shifeng and Li, Hongsheng and Hoi, Steven},
-   journal={arXiv preprint arXiv:2511.22677},
-   year={2025}
- }
-
- @article{jiang2025distribution,
-   title={Distribution Matching Distillation Meets Reinforcement Learning},
-   author={Jiang, Dengyang and Liu, Dongyang and Wang, Zanyi and Wu, Qilong and Jin, Xin and Liu, David and Li, Zhen and Wang, Mengmeng and Gao, Peng and Yang, Harry},
-   journal={arXiv preprint arXiv:2511.13649},
-   year={2025}
- }
- ```
  library_name: diffusers
  ---

+ # dee-z-image

+ This repository hosts a text-to-image checkpoint in Diffusers format. It is compatible with `ZImagePipeline` and can be loaded directly from the Hugging Face Hub.

+ ## Usage

+ ### Install

+ Install the latest Diffusers release (recommended) and the required runtime dependencies:

+ ```bash
+ pip install -U torch transformers accelerate safetensors
+ pip install -U diffusers
+ ```

+ If your installed Diffusers version does not include `ZImagePipeline`, install Diffusers from source instead:

  ```bash
+ pip install -U git+https://github.com/huggingface/diffusers
  ```

+ ### Generate an image
+
  ```python
  import torch
  from diffusers import ZImagePipeline

+ model_id = "telcom/dee-z-image"
+
  pipe = ZImagePipeline.from_pretrained(
+     model_id,
+     torch_dtype=torch.bfloat16,  # use torch.float16 if your GPU does not support bf16
      low_cpu_mem_usage=False,
  )
  pipe.to("cuda")

+ prompt = "A cinematic studio photo of a small robot sitting at a desk, warm lighting, shallow depth of field, high detail."

  image = pipe(
      prompt=prompt,
      height=1024,
      width=1024,
+     num_inference_steps=9,
+     guidance_scale=0.0,
      generator=torch.Generator("cuda").manual_seed(42),
  ).images[0]

+ image.save("out.png")
  ```

+ ## Tips

+ - If you run out of VRAM, try `pipe.enable_model_cpu_offload()` (requires `accelerate`) or reduce the resolution.
+ - Start with `guidance_scale=0.0` and `num_inference_steps` around 8–12, adjusting for the quality/speed trade-off you need.
+ - For reproducibility, set a `generator` seed as shown above.
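The reproducibility tip can be checked without running the full pipeline: two identically seeded generators draw identical noise (a CPU generator is used here so the sketch runs anywhere):

```python
import torch

# Two generators with the same seed produce identical latent noise,
# which is what makes a seeded pipeline call reproducible end to end.
g1 = torch.Generator("cpu").manual_seed(42)
g2 = torch.Generator("cpu").manual_seed(42)

n1 = torch.randn(4, 4, generator=g1)
n2 = torch.randn(4, 4, generator=g2)
print(torch.equal(n1, n2))  # True
```

The same holds for the CUDA generator used in the snippet above, as long as model, dtype, and scheduler settings are unchanged between runs.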

+ ## Repository contents

+ - `model_index.json` defines the Diffusers pipeline components used by `ZImagePipeline`.
+ - `text_encoder/`, `tokenizer/`, `transformer/`, `vae/`, `scheduler/` contain the model submodules.
+ - `assets/` contains example images and an optional gallery PDF.
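As a rough illustration of how `model_index.json` maps subfolders to pipeline components, here is a hypothetical file content; the component classes below are guesses for illustration only, so check the actual file in this repo for the real entries:

```python
import json

# Hypothetical model_index.json for illustration; the classes listed are
# assumptions, not necessarily what this checkpoint actually uses.
model_index_text = """
{
  "_class_name": "ZImagePipeline",
  "text_encoder": ["transformers", "AutoModel"],
  "tokenizer": ["transformers", "AutoTokenizer"],
  "transformer": ["diffusers", "ZImageTransformer2DModel"],
  "vae": ["diffusers", "AutoencoderKL"],
  "scheduler": ["diffusers", "FlowMatchEulerDiscreteScheduler"]
}
"""
model_index = json.loads(model_index_text)

# Every non-underscore key names a subfolder in the repo; its (library, class)
# pair tells Diffusers what to instantiate from that subfolder.
components = {k: tuple(v) for k, v in model_index.items() if not k.startswith("_")}
print(sorted(components))  # ['scheduler', 'text_encoder', 'tokenizer', 'transformer', 'vae']
```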

+ ## License

+ Apache-2.0 (see the metadata at the top of this model card).

+ ## Acknowledgements

+ This repo packages a checkpoint for the Z-Image family of models. For upstream project details, see:

+ - https://github.com/Tongyi-MAI/Z-Image
+ - https://arxiv.org/abs/2511.22699