---
base_model: black-forest-labs/FLUX.2-dev
library_name: gguf
quantized_by: city96
license: other
license_name: flux-dev-non-commercial-license
license_link: LICENSE.md
tags:
- image-generation
- image-editing
- flux
- diffusion-single-file
pipeline_tag: image-to-image
---

This is a direct GGUF conversion of [black-forest-labs/FLUX.2-dev](https://huggingface.co/black-forest-labs/FLUX.2-dev).

The model files can be used in [ComfyUI](https://github.com/comfyanonymous/ComfyUI/) with the [ComfyUI-GGUF](https://github.com/city96/ComfyUI-GGUF) custom node. Place the required model(s) in the following folders:

| Type | Name | Location | Download |
| ------------ | ----------------------------------- | --------------------------------- | ---------------- |
| Main Model | flux2-dev | `ComfyUI/models/diffusion_models` | GGUF (this repo) |
| Text Encoder | Mistral-Small-3.2-24B-Instruct-2506 | `ComfyUI/models/text_encoders` | [Safetensors](https://huggingface.co/Comfy-Org/flux2-dev/tree/main/split_files/text_encoders) / GGUF (support TBA) |
| VAE | flux2 VAE | `ComfyUI/models/vae` | [Safetensors](https://huggingface.co/Comfy-Org/flux2-dev/blob/main/split_files/vae/flux2-vae.safetensors) |
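
If you prefer scripted downloads, a minimal sketch using the `huggingface_hub` package is below. The repo id for the main model and the exact GGUF filename are assumptions here; pick the actual quant you want from this repo's file list.

```python
# Minimal download sketch using huggingface_hub (pip install huggingface_hub).
from huggingface_hub import hf_hub_download

# Main model (GGUF, this repo) -> ComfyUI/models/diffusion_models
hf_hub_download(
    repo_id="city96/FLUX.2-dev-gguf",            # assumption: this repo's id
    filename="flux2-dev-Q4_K_M.gguf",            # placeholder quant filename
    local_dir="ComfyUI/models/diffusion_models",
)

# VAE (safetensors) -> ComfyUI/models/vae
# Note: local_dir mirrors the repo path, so the file lands under
# ComfyUI/models/vae/split_files/vae/ and may need moving into
# ComfyUI/models/vae directly, depending on your setup.
hf_hub_download(
    repo_id="Comfy-Org/flux2-dev",
    filename="split_files/vae/flux2-vae.safetensors",
    local_dir="ComfyUI/models/vae",
)
```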

[**Example outputs**](media/flux2dev-image.jpg) - sample size of 1, not strictly representative

![Example image](media/flux2dev-image.jpg)

### Notes

> [!NOTE]
> As with Qwen-Image, Q5_K_M, Q4_K_M, Q3_K_M, Q3_K_S and Q2_K use some extra logic to decide which blocks to keep in high precision.
>
> The logic is partially based on guesswork, trial & error, and the graph found in the readme for [Freepik/flux.1-lite-8B](https://huggingface.co/Freepik/flux.1-lite-8B#motivation) (which in turn cites [this blog post by Ostris](https://ostris.com/2024/09/07/skipping-flux-1-dev-blocks/)).
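
To see how this plays out in a given file, the `gguf` Python package can list per-tensor quantization types; a minimal sketch (the filename is a placeholder):

```python
# Minimal inspection sketch using the gguf package (pip install gguf).
from collections import Counter
from gguf import GGUFReader

reader = GGUFReader("flux2-dev-Q4_K_M.gguf")  # placeholder filename

# Mixed-precision quants show up here as a handful of higher-precision
# tensor types (e.g. F16 / Q8_0) among the mostly low-bit blocks.
for tensor in reader.tensors:
    print(tensor.name, tensor.tensor_type.name)

# Tally of how many tensors use each quantization type.
print(Counter(t.tensor_type.name for t in reader.tensors))
```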

*As this is a quantized model and not a finetune, all of the original restrictions and license terms still apply.*