This repo contains specialized MoE quants of Qwen3.5-397B-A17B. Because the FFN tensors dwarf the rest of the model, quantizing them selectively can yield better quality at a smaller overall size than a comparable naive quantization. To that end, the default quantization type is kept at high quality while the FFN UP, FFN GATE, and FFN DOWN tensors are quantized more aggressively.
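A minimal sketch of how such a mix could be produced with llama.cpp's `llama-quantize`, assuming a build that supports per-tensor `--tensor-type` overrides (the flag's availability and exact pattern syntax, as well as the file names, are assumptions, not taken from this card):

```shell
# Hypothetical invocation for the Q5_K_M mix below: base type Q8_0,
# with the FFN up/gate/down tensors overridden to smaller types.
# Paths and pattern syntax are illustrative only.
./llama-quantize \
  --tensor-type ffn_up=q5_k \
  --tensor-type ffn_gate=q5_k \
  --tensor-type ffn_down=q6_k \
  Qwen3.5-397B-A17B-BF16.gguf \
  Qwen3.5-397B-A17B-Q5_K_M.gguf \
  q8_0
```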
| Quant | Size | Mixture (default / FFN up / FFN gate / FFN down) | PPL | PPL(Q)/PPL(base) - 1 | KLD |
|---|---|---|---|---|---|
| Q5_K_M | 273.49 GiB (5.93 BPW) | Q8_0 / Q5_K / Q5_K / Q6_K | 4.617400 ± 0.057235 | +0.0156% | 0.002553 ± 0.000078 |
| Q4_K_M | 227.55 GiB (4.93 BPW) | Q8_0 / Q4_K / Q4_K / Q5_K | 4.624688 ± 0.057341 | +0.1735% | 0.004496 ± 0.000117 |
| IQ4_XS | 176.92 GiB (3.83 BPW) | Q8_0 / IQ3_S / IQ3_S / IQ4_XS | 4.653226 ± 0.057738 | +0.7916% | 0.011963 ± 0.000309 |
| IQ3_S | 136.31 GiB (2.95 BPW) | Q6_K / IQ2_S / IQ2_S / IQ3_S | 4.745153 ± 0.059208 | +2.7828% | 0.033163 ± 0.000791 |
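The relative-perplexity column is straightforward to compute from the measured PPL values; a minimal sketch (the base-model PPL is not listed in this card, so the value below is an illustration only):

```python
def ppl_delta(ppl_q: float, ppl_base: float) -> float:
    """Relative perplexity increase of a quant over the base model, in percent."""
    return (ppl_q / ppl_base - 1.0) * 100.0

# Example with the IQ4_XS PPL from the table and a hypothetical
# base PPL of 4.6166 (not taken from this card).
print(f"{ppl_delta(4.653226, 4.6166):+.4f}%")
```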
Model: AesSedai/Qwen3.5-397B-A17B-GGUF

Base model: Qwen/Qwen3.5-397B-A17B
