Dear devs, I'm an extremely-high-steps guy...

#34
by gemstonebro - opened

Dear devs, I enjoy pushing as many steps as possible to make renders the best they can be, without caring at all about how much time it takes (it runs while I'm sleeping)!

What does Z-Image-Turbo need to benefit from high step counts? Will Z-Image be friendly to 100 or 200 steps? 🫥 Yes, I want max quality and don't care about speed at all, because at night I just sleep!

5 steps are enough

Wait for Z-Image-Base, then. This one is specifically distilled for low-step inference. With the standard non-distilled model, you can crank up the step count.

Tongyi-MAI org

Hi, Z-Image-Turbo is a step-distilled model that aims to give the community a model pairing extremely low inference latency with nearly uncompromised quality. If you enjoy pushing the step count to see the gains, stay tuned for the release of the Base model.

QJerry changed discussion status to closed
Tongyi-MAI org
edited 8 days ago

In fact, if you are interested, you can try running inference with, say, 16 or even 30 steps. When using such large step counts, I recommend also trying different time shift values (it is currently set to 3; you can try something like 6 or 12). There is no guarantee that the model's output will consistently be better this way, but it typically still gives reasonable outputs, and in some cases it might be better than the default 8-step, timeshift=3 setting.
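For anyone who wants to try this, here is a minimal sketch of the high-step / higher-shift experiment with diffusers. It assumes the ZImagePipeline class, the "Tongyi-MAI/Z-Image-Turbo" repo id, and that the time shift is exposed as the scheduler's `shift` config value, the way other flow-matching schedulers in diffusers expose it; check the model card for the exact names before running it.

```python
import torch
from diffusers import ZImagePipeline

# Assumed repo id -- substitute the id from the model card if it differs.
pipe = ZImagePipeline.from_pretrained(
    "Tongyi-MAI/Z-Image-Turbo",
    torch_dtype=torch.bfloat16,
).to("cuda")

# Assumption: the "time shift" discussed above is the scheduler's `shift`
# config value (default 3 here), as with other flow-matching schedulers in
# diffusers. Rebuild the scheduler from its own config with a larger shift.
pipe.scheduler = pipe.scheduler.__class__.from_config(
    pipe.scheduler.config, shift=6
)

# High-step experiment: 30 steps instead of the default 8.
image = pipe(
    prompt="a detailed oil painting of a lighthouse at dawn, volumetric fog",
    num_inference_steps=30,
).images[0]
image.save("z_image_30steps_shift6.png")
```

As noted above, a larger shift is not guaranteed to help at high step counts; it is simply another knob worth sweeping alongside the step count.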

Cxxs changed discussion status to open

I will be patient. I am relatively new to AI generation, only a few months in, so I wouldn't know what to adjust. I tried SDXL, Qwen, FLUX.1, FLUX.2 lmao... and Z-Image-Turbo. I'm having so much fun!

gemstonebro changed discussion status to closed