---
title: Zig
emoji: 🏃
colorFrom: green
colorTo: purple
sdk: gradio
sdk_version: 6.2.0
app_file: app.py
pinned: false
---
# Z-Image Hugging Face Space

A Gradio Space using the official Z-Image pipeline (`Tongyi-MAI/Z-Image-Turbo`) with optional LoRA injection from Civitai (see the link below). There is no SD1.5 fallback: if the Z-Image model is unavailable, the Space will fail to load.
## Files

- `app.py`: Z-Image pipeline, FlowMatch scheduler, LoRA toggle/strength, simple gallery UI.
- `requirements.txt`: Python deps for Spaces/local runs.
- `lora/`: Place `zit-mystic-xxx.safetensors` here (or point `LORA_PATH` to your filename).
- `.gitattributes`: Tracks `.safetensors` via Git LFS for large LoRA files.
## Using on Hugging Face Spaces
- Create a Space (Python) and select a GPU hardware type.
- Add/clone this repo into the Space.
- Manually add the LoRA file from https://civitai.com/models/2206377/zit-mystic-xxx to `lora/zit-mystic-xxx.safetensors` (or set `LORA_PATH`). Network fetch from Civitai is not handled in the Space.
- If the model download fails with a token error, set `HF_TOKEN` in the Space secrets (some repos require authentication).
- (Optional) Adjust the advanced env vars below; the Space will then launch `app.py`. The header shows whether the LoRA was detected/loaded.
- If the header/log says `PEFT backend is required for LoRA`, install `peft` (already included in `requirements.txt`) and restart/rebuild.
## Environment variables

- `MODEL_PATH` (default `Tongyi-MAI/Z-Image-Turbo`): HF repo or local path for the Z-Image model.
- `LORA_PATH` (default `lora/zit-mystic-xxx.safetensors`): Path to the LoRA file; loaded if present.
- `HF_TOKEN`: HF token for gated/private models or faster pulls.
- `MODEL_DTYPE` (default `auto`): `bf16` if supported, else `fp16` (override with `bf16`/`fp16`/`fp32`).
- `ENABLE_COMPILE` (default `true`): Enable `torch.compile` on the transformer.
- `ENABLE_WARMUP` (default `false`): Run a quick warmup across resolutions after load (adds startup time).
- `ATTENTION_BACKEND` (default `flash_3`): Backend for transformer attention (falls back to `flash`/`xformers`/`native`).
- `OFFLOAD_TO_CPU_AFTER_RUN` (default `false`): Move the model back to CPU after each generation (useful on ZeroGPU; slower on normal GPUs).
- `ENABLE_AOTI` (default `true`): Try to load ZeroGPU AoTI blocks via `spaces.aoti_blocks_load` for faster inference.
- `AOTI_REPO` (default `zerogpu-aoti/Z-Image`): AoTI blocks repo.
- `AOTI_VARIANT` (default `fa3`): AoTI variant.
- `AOTI_ALLOW_LORA` (default `false`): Allow AoTI to load even if LoRA adapters are loaded (may crash; AoTI blocks generally don't support LoRA).
- `DEBUG` (default `false`): When set to a truthy value (`1`, `true`, `yes`, `on`), hide the Status/Debug floating panel.
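The boolean and dtype envs above can be parsed along these lines. This is a minimal sketch; `env_bool` and `pick_dtype` are hypothetical helper names, not functions exported by `app.py`:

```python
import os

TRUTHY = {"1", "true", "yes", "on"}


def env_bool(name: str, default: bool = False) -> bool:
    # Treat "1"/"true"/"yes"/"on" (case-insensitive) as truthy,
    # matching the convention described for DEBUG above.
    raw = os.environ.get(name)
    if raw is None:
        return default
    return raw.strip().lower() in TRUTHY


def pick_dtype(bf16_supported: bool) -> str:
    # MODEL_DTYPE=auto resolves to bf16 when the GPU supports it, else fp16;
    # explicit bf16/fp16/fp32 values override the automatic choice.
    requested = os.environ.get("MODEL_DTYPE", "auto").lower()
    if requested == "auto":
        return "bf16" if bf16_supported else "fp16"
    return requested


# Example: ENABLE_COMPILE defaults to true, ENABLE_WARMUP to false.
compile_enabled = env_bool("ENABLE_COMPILE", default=True)
warmup_enabled = env_bool("ENABLE_WARMUP", default=False)
```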
## Run locally

```bash
python -m venv .venv
.venv\Scripts\activate  # Windows; on Linux/macOS: source .venv/bin/activate
pip install -r requirements.txt
python app.py
```
Place the LoRA file under `lora/` first (or set `LORA_PATH`); otherwise the app will run the base Z-Image model without LoRA.
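The no-LoRA fallback boils down to a presence check on the configured path. A minimal sketch, assuming a hypothetical `resolve_lora` helper (the actual logic in `app.py` may differ):

```python
import os
from pathlib import Path
from typing import Optional

DEFAULT_LORA = "lora/zit-mystic-xxx.safetensors"


def resolve_lora() -> Optional[str]:
    # Honor LORA_PATH if set; otherwise look in the default lora/ location.
    path = Path(os.environ.get("LORA_PATH", DEFAULT_LORA))
    # Return the path only when the file actually exists; otherwise the app
    # proceeds with the base Z-Image model and the LoRA toggle stays disabled.
    return str(path) if path.is_file() else None


if resolve_lora() is None:
    print("Running base Z-Image model without LoRA")
```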
## UI controls
- Prompt
- Resolution category + explicit WxH selection
- Seed (with random toggle)
- Steps + Time Shift
- Advanced: CFG, scheduler + extra scheduler params, max sequence length
- LoRA toggle + strength (enabled only if the file is found)
## Git LFS note

`.gitattributes` tracks `.safetensors` with LFS. If you commit the LoRA, run `git lfs install` once before pushing so large files go through LFS.
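For reference, the standard tracking rule that `git lfs track "*.safetensors"` writes into `.gitattributes` looks like:

```
*.safetensors filter=lfs diff=lfs merge=lfs -text
```

If this line is present, any committed `.safetensors` file is stored as an LFS pointer rather than a regular Git blob.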