---
base_model: black-forest-labs/FLUX.2-dev
library_name: gguf
quantized_by: city96
license: other
license_name: flux-dev-non-commercial-license
license_link: LICENSE.md
tags:
  - image-generation
  - image-editing
  - flux
  - diffusion-single-file
pipeline_tag: image-to-image
---

This is a direct GGUF conversion of black-forest-labs/FLUX.2-dev.

The model files can be used in ComfyUI with the [ComfyUI-GGUF](https://github.com/city96/ComfyUI-GGUF) custom node. Place the required model(s) in the following folders (a scripted download example follows the table):

| Type | Name | Location | Download |
| ---- | ---- | -------- | -------- |
| Main Model | flux2-dev | `ComfyUI/models/diffusion_models` | GGUF (this repo) |
| Text Encoder | Mistral-Small-3.2-24B-Instruct-2506 | `ComfyUI/models/text_encoders` | Safetensors / GGUF (support TBA) |
| VAE | flux2 VAE | `ComfyUI/models/vae` | Safetensors |
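If you prefer to script the download, a minimal sketch using `huggingface_hub` is shown below. The repo id and the quant filename are assumptions for illustration; check this repo's file listing for the exact name of the quant you want.

```python
# Minimal sketch: download one GGUF quant into the ComfyUI models folder.
# Assumptions: repo id "city96/FLUX.2-dev-gguf" and an example quant
# filename "flux2-dev-Q4_K_M.gguf" -- verify both against the repo file list.
from pathlib import Path

from huggingface_hub import hf_hub_download

COMFYUI_ROOT = Path("ComfyUI")  # adjust to your ComfyUI install location

hf_hub_download(
    repo_id="city96/FLUX.2-dev-gguf",
    filename="flux2-dev-Q4_K_M.gguf",
    local_dir=COMFYUI_ROOT / "models" / "diffusion_models",
)
```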

## Example outputs

Sample size of 1, not strictly representative.

*(sample image)*

## Notes

As with Qwen-Image, the Q5_K_M, Q4_K_M, Q3_K_M, Q3_K_S and Q2_K quants use some extra logic to decide which blocks to keep in higher precision.

The logic is partially based on guesswork, trial & error, and the graph found in the README of Freepik/flux.1-lite-8B (which in turn cites a blog post by Ostris).
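To illustrate the general idea only (not the actual rules used for these files, which the note above describes as partly guesswork), a per-block precision selector might look like the hypothetical sketch below. The block indices and quant type names are assumptions.

```python
# Illustrative sketch only -- the real block selection for this repo is not
# reproduced here; indices and types below are hypothetical examples.
SENSITIVE_BLOCKS = {0, 1, 2}  # hypothetical: early blocks kept at higher precision
LAST_BLOCK = 47               # hypothetical: final transformer block index

def pick_quant_type(block_idx: int, base_type: str = "Q3_K_S") -> str:
    """Return the quant type to use for a given transformer block."""
    # Keep the most sensitive blocks (and the final block) at a higher
    # precision type; every other block uses the base quant type.
    if block_idx in SENSITIVE_BLOCKS or block_idx == LAST_BLOCK:
        return "Q5_K"
    return base_type

print([pick_quant_type(i) for i in range(6)])
```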

As this is a quantized model and not a finetune, all of the original license terms and restrictions still apply.