Note

The ik_llama.cpp-specific quants require PR1288 until it gets merged into main.

Big Thanks

Shout out to Wendell and the Level1Techs crew, the community forums, and the YouTube channel! BIG thanks for providing BIG hardware expertise and access to run these experiments and make these great quants available to the community!!!

Also thanks to all the folks in the quanting and inferencing community on the BeaverAI Club Discord and on r/LocalLLaMA for the tips and tricks that help everyone run, test, and benchmark all the fun new models! Thanks to Hugging Face for hosting all these big quants!

Finally, I really appreciate the support from aifoundry.org, so check out their open-source RISC-V based solutions!

Quant Collection

Perplexity computed against wiki.test.raw (lower is "better").

Perplexity Chart

These two are just test quants for baseline perplexity comparison and are not available for download here:

  • BF16 738.493 GiB (16.005 BPW)
    • TODO
  • Q8_0 392.549 GiB (8.508 BPW)
    • PPL over 580 chunks for n_ctx=512 = 3.4862 +/- 0.01883
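
For reference, these numbers come from the usual llama-perplexity workflow against wiki.test.raw at the default n_ctx=512. A minimal sketch (paths and thread count are illustrative, not the exact invocation used here):

./build/bin/llama-perplexity \
    -m /mnt/data/models/ubergarm/Qwen3.5-397B-A17B-GGUF/Qwen3.5-397B-A17B-IQ4_KSS.gguf \
    -f wiki.test.raw \
    --ctx-size 512 \
    --threads 24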

NOTE: The first split file is intentionally much smaller because it contains only metadata; it's fine!

IQ4_KSS 194.058 GiB (4.206 BPW)

PPL over 580 chunks for n_ctx=512 = 3.5102 +/- 0.01896

👈 Secret Recipe
#!/usr/bin/env bash

custom="
# 60 Repeating Layers [0-59]

## Gated Attention/Delta Net [Blended 0-59]
blk\..*\.attn_gate\.weight=q8_0
blk\..*\.attn_qkv\.weight=q8_0
blk\..*\.attn_output\.weight=q8_0
blk\..*\.attn_q\.weight=q8_0
blk\..*\.attn_k\.weight=q8_0
blk\..*\.attn_v\.weight=q8_0
blk\..*\.ssm_alpha\.weight=q8_0
blk\..*\.ssm_beta\.weight=q8_0
blk\..*\.ssm_out\.weight=q8_0

# Shared Expert Layers [0-59]
blk\..*\.ffn_down_shexp\.weight=q8_0
blk\..*\.ffn_(gate|up)_shexp\.weight=q8_0

# Routed Experts Layers [0-59]
blk\..*\.ffn_down_exps\.weight=iq4_ks
blk\..*\.ffn_(gate|up)_exps\.weight=iq4_kss

# Non-Repeating Layers
token_embd\.weight=iq6_k
output\.weight=iq6_k
"

custom=$(
  echo "$custom" | grep -v '^#' | \
  sed -Ez 's:\n+:,:g;s:,$::;s:^,::'
)
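
# Optional sanity check: preview the flattened rule list (one rule per line)
# before quantizing, to eyeball that the comma-joined --custom-q string came out right.
echo "$custom" | tr ',' '\n'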

numactl -N ${SOCKET} -m ${SOCKET} \
./build/bin/llama-quantize \
    --custom-q "$custom" \
    --imatrix /mnt/data/models/ubergarm/Qwen3.5-397B-A17B-GGUF/imatrix-Qwen3.5-397B-A17B-BF16.dat \
    /mnt/data/models/ubergarm/Qwen3.5-397B-A17B-GGUF/Qwen3.5-397B-A17B-BF16-00001-of-00017.gguf \
    /mnt/data/models/ubergarm/Qwen3.5-397B-A17B-GGUF/Qwen3.5-397B-A17B-IQ4_KSS.gguf \
    IQ4_KSS \
    128

Q3_K 179.97 GiB (3.90 BPW)

PPL over 580 chunks for n_ctx=512 = 3.5409 +/- 0.01924

This is a custom, mainline-llama.cpp-compatible, MoE-optimized mix similar to AesSedai/ddh0's mixes, and likely better than vanilla Q3_K_ mixes. Check the recipe for details.

👈 Secret Recipe
#!/usr/bin/env bash

./build/bin/llama-quantize \
    --tensor-type ffn_down_exps=q4_K \
    --tensor-type ffn_gate_exps=q3_K \
    --tensor-type ffn_up_exps=q3_K \
    --token-embedding-type q4_K \
    --output-tensor-type q6_K \
    --imatrix /mnt/data/models/ubergarm/Qwen3.5-397B-A17B-GGUF/imatrix-Qwen3.5-397B-A17B-BF16-mainline.gguf \
    /mnt/data/models/ubergarm/Qwen3.5-397B-A17B-GGUF/Qwen3.5-397B-A17B-BF16-00001-of-00017.gguf \
    /mnt/data/models/ubergarm/Qwen3.5-397B-A17B-GGUF/Qwen3.5-397B-A17B-Q3_K.gguf \
    Q8_0 \
    128
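
If you want to double-check that the per-tensor overrides landed in the finished file, something like the following should work (this assumes the gguf Python package is installed; exact output format may vary by version):

pip install gguf
gguf-dump /mnt/data/models/ubergarm/Qwen3.5-397B-A17B-GGUF/Qwen3.5-397B-A17B-Q3_K.gguf | grep -E 'ffn_(gate|up|down)_exps'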

IQ2_KL 138.142 GiB (2.994 BPW)

PPL over 580 chunks for n_ctx=512 = 3.6536 +/- 0.02000

👈 Secret Recipe
#!/usr/bin/env bash

custom="
# 60 Repeating Layers [0-59]

## Gated Attention/Delta Net [Blended 0-59]
blk\..*\.attn_gate\.weight=q8_0
blk\..*\.attn_qkv\.weight=q8_0
blk\..*\.attn_output\.weight=q8_0
blk\..*\.attn_q\.weight=q8_0
blk\..*\.attn_k\.weight=q8_0
blk\..*\.attn_v\.weight=q8_0
blk\..*\.ssm_alpha\.weight=q8_0
blk\..*\.ssm_beta\.weight=q8_0
blk\..*\.ssm_out\.weight=q8_0

# Shared Expert Layers [0-59]
blk\..*\.ffn_down_shexp\.weight=q8_0
blk\..*\.ffn_(gate|up)_shexp\.weight=q8_0

# Routed Experts Layers [0-59]
blk\..*\.ffn_down_exps\.weight=iq3_ks
blk\..*\.ffn_(gate|up)_exps\.weight=iq2_kl

# Non-Repeating Layers
token_embd\.weight=iq4_k
output\.weight=iq6_k
"

custom=$(
  echo "$custom" | grep -v '^#' | \
  sed -Ez 's:\n+:,:g;s:,$::;s:^,::'
)

numactl -N ${SOCKET} -m ${SOCKET} \
./build/bin/llama-quantize \
    --custom-q "$custom" \
    --imatrix /mnt/data/models/ubergarm/Qwen3.5-397B-A17B-GGUF/imatrix-Qwen3.5-397B-A17B-BF16.dat \
    /mnt/data/models/ubergarm/Qwen3.5-397B-A17B-GGUF/Qwen3.5-397B-A17B-BF16-00001-of-00017.gguf \
    /mnt/data/models/ubergarm/Qwen3.5-397B-A17B-GGUF/Qwen3.5-397B-A17B-IQ2_KL.gguf \
    IQ2_KL \
    128

smol-IQ2_XS 113.41 GiB (2.46 BPW)

PPL over 580 chunks for n_ctx=512 = 3.8717 +/- 0.02131

This is a custom, mainline-compatible, MoE-optimized mix similar to AesSedai/ddh0's mixes, and likely better than vanilla mixes, especially those lacking an imatrix. Check the recipe for details.

👈 Secret Recipe
#!/usr/bin/env bash

./build/bin/llama-quantize \
    --tensor-type ffn_down_exps=iq2_xs \
    --tensor-type ffn_gate_exps=iq2_xs \
    --tensor-type ffn_up_exps=iq2_xs \
    --token-embedding-type q4_K \
    --output-tensor-type q6_K \
    --imatrix /mnt/data/models/ubergarm/Qwen3.5-397B-A17B-GGUF/imatrix-Qwen3.5-397B-A17B-BF16-mainline.gguf \
    /mnt/data/models/ubergarm/Qwen3.5-397B-A17B-GGUF/Qwen3.5-397B-A17B-BF16-00001-of-00017.gguf \
    /mnt/data/models/ubergarm/Qwen3.5-397B-A17B-GGUF/Qwen3.5-397B-A17B-smol-IQ2_XS.gguf \
    Q8_0 \
    128

smol-IQ2_KS 108.142 GiB (2.344 BPW)

PPL over 580 chunks for n_ctx=512 = 3.9153 +/- 0.02172

👈 Secret Recipe
#!/usr/bin/env bash

custom="
# 60 Repeating Layers [0-59]

## Gated Attention/Delta Net [Blended 0-59]
blk\..*\.attn_gate\.weight=q8_0
blk\..*\.attn_qkv\.weight=q8_0
blk\..*\.attn_output\.weight=q8_0
blk\..*\.attn_q\.weight=q8_0
blk\..*\.attn_k\.weight=q8_0
blk\..*\.attn_v\.weight=q8_0
blk\..*\.ssm_alpha\.weight=q8_0
blk\..*\.ssm_beta\.weight=q8_0
blk\..*\.ssm_out\.weight=q8_0

# Shared Expert Layers [0-59]
blk\..*\.ffn_down_shexp\.weight=q8_0
blk\..*\.ffn_(gate|up)_shexp\.weight=q8_0

# Routed Experts Layers [0-59]
blk\..*\.ffn_down_exps\.weight=iq2_ks
blk\..*\.ffn_(gate|up)_exps\.weight=iq2_ks

# Non-Repeating Layers
token_embd\.weight=iq4_k
output\.weight=iq6_k
"

custom=$(
  echo "$custom" | grep -v '^#' | \
  sed -Ez 's:\n+:,:g;s:,$::;s:^,::'
)

numactl -N ${SOCKET} -m ${SOCKET} \
./build/bin/llama-quantize \
    --custom-q "$custom" \
    --imatrix /mnt/data/models/ubergarm/Qwen3.5-397B-A17B-GGUF/imatrix-Qwen3.5-397B-A17B-BF16.dat \
    /mnt/data/models/ubergarm/Qwen3.5-397B-A17B-GGUF/Qwen3.5-397B-A17B-BF16-00001-of-00017.gguf \
    /mnt/data/models/ubergarm/Qwen3.5-397B-A17B-GGUF/Qwen3.5-397B-A17B-smol-IQ2_KS.gguf \
    IQ2_KS \
    128

smol-IQ1_KT 88.807 GiB (1.925 BPW)

PPL over 580 chunks for n_ctx=512 = 4.2523 +/- 0.02412

👈 Secret Recipe
#!/usr/bin/env bash

custom="
# 60 Repeating Layers [0-59]

## Gated Attention/Delta Net [Blended 0-59]
blk\..*\.attn_gate\.weight=q8_0
blk\..*\.attn_qkv\.weight=q8_0
blk\..*\.attn_output\.weight=q8_0
blk\..*\.attn_q\.weight=q8_0
blk\..*\.attn_k\.weight=q8_0
blk\..*\.attn_v\.weight=q8_0
blk\..*\.ssm_alpha\.weight=q8_0
blk\..*\.ssm_beta\.weight=q8_0
blk\..*\.ssm_out\.weight=q8_0

# Shared Expert Layers [0-59]
blk\..*\.ffn_down_shexp\.weight=q8_0
blk\..*\.ffn_(gate|up)_shexp\.weight=q8_0

# Routed Experts Layers [0-59]
blk\..*\.ffn_down_exps\.weight=iq1_kt
blk\..*\.ffn_(gate|up)_exps\.weight=iq1_kt

# Non-Repeating Layers
token_embd\.weight=iq4_k
output\.weight=iq6_k
"

custom=$(
  echo "$custom" | grep -v '^#' | \
  sed -Ez 's:\n+:,:g;s:,$::;s:^,::'
)

numactl -N ${SOCKET} -m ${SOCKET} \
./build/bin/llama-quantize \
    --custom-q "$custom" \
    --imatrix /mnt/data/models/ubergarm/Qwen3.5-397B-A17B-GGUF/imatrix-Qwen3.5-397B-A17B-BF16.dat \
    /mnt/data/models/ubergarm/Qwen3.5-397B-A17B-GGUF/Qwen3.5-397B-A17B-BF16-00001-of-00017.gguf \
    /mnt/data/models/ubergarm/Qwen3.5-397B-A17B-GGUF/Qwen3.5-397B-A17B-smol-IQ1_KT.gguf \
    IQ1_KT \
    128

Quick Start

Example command for mainline llama.cpp, for now including an mmproj from another repo. Just remove --mmproj if you don't want image capabilities.

# Download Desired Quants
$ pip install huggingface_hub
$ hf download --local-dir ./ --include="smol-IQ2_XS/*.gguf" ubergarm/Qwen3.5-397B-A17B-GGUF

# Hybrid CPU+GPU
model=/mnt/raid/models/ubergarm/Qwen3.5-397B-A17B-GGUF/Qwen3.5-397B-A17B-Q3_K-00001-of-00005.gguf

./build/bin/llama-server \
    --model "$model"\
    --mmproj /mnt/raid/models/ubergarm/Qwen3.5-397B-A17B-GGUF/mmproj-BF16.gguf \
    --alias ubergarm/Qwen3.5-397B-A17B \
    -fa on \
    --ctx-size 135168 \
    -ctk q8_0 -ctv q8_0 \
    -ub 2048 -b 2048 \
    -fit off \
    -ngl 999 \
    -ot "blk\.(0|1|2|3|4|5|6|7|8|9|10|11|12)\.ffn_(gate|up|down)_exps.*=CUDA0,blk\.(47|47|48|49|50|51|52|53|54|55|56|57|58|59|60)\.ffn_(gate|up|down)_exps.*=CUDA1" \
    --cpu-moe \
    --threads 24 \
    --host 127.0.0.1 \
    --port 8080 \
    --parallel 1 \
    --no-mmap \
    --jinja
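
# Note on tensor placement: the explicit -ot patterns above are listed before the
# pattern that --cpu-moe adds, so the named blocks' routed experts should land on
# CUDA0/CUDA1 while the remaining routed experts stay in system RAM; everything
# else (attention, shared experts, etc.) is offloaded via -ngl 999.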

# This setup was used to generate these results:
# AMD Ryzen Threadripper PRO (Zen 4) 7965WX 24x Core
# 8x32GiB DDR5@4800 (221.41 GB/s via mlc)
# Dual RTX A6000 (48GB VRAM each)
# Driver: 580.105.08 CUDA: 13.0 P2P: OK NCCL found!

prompt eval time =   20177.37 ms /  4036 tokens (    5.00 ms per token,   200.03 tokens per second)
       eval time =  118034.13 ms /  2525 tokens (   46.75 ms per token,    21.39 tokens per second)
      total time =  138211.50 ms /  6561 tokens

prompt eval time =   53071.66 ms / 11154 tokens (    4.76 ms per token,   210.17 tokens per second)
       eval time =    5012.90 ms /   104 tokens (   48.20 ms per token,    20.75 tokens per second)
      total time =   58084.56 ms / 11258 tokens

# CPU-Only
# It works, but CPU-only inference seems very slow; it likely needs at least one GPU
# to handle the attention/kv-cache/delta-net tensors, as even hybrid CPU+GPU is much faster.
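
# A minimal CPU-only invocation might look something like this (illustrative values,
# same $model as above; expect much lower throughput than the hybrid setup):
./build/bin/llama-server \
    --model "$model" \
    -fa on \
    --ctx-size 32768 \
    --threads 24 \
    --host 127.0.0.1 \
    --port 8080 \
    --jinja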

Vibe Coding

Fortunately, the autoparser branch seems to be working!

You can get the freshest version like so:

git remote add pwilkin git@github.com:pwilkin/llama.cpp.git
git fetch pwilkin
git checkout pwilkin/autoparser

# compile as normal
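# e.g. a typical CUDA build (adjust flags to your hardware):
cmake -B build -DGGML_CUDA=ON
cmake --build build --config Release -j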

Otherwise, when trying opencode without the autoparser branch, I noticed it spits out the error below; I still got the error even when supplying a chat template via --chat-template-file myTemplate.jinja from another repo.

Template supports tool calls but does not natively describe tools. The fallback behaviour used may produce bad results, inspect prompt w/ --verbose & consider overriding the template.
srv    operator(): got exception: {"error":{"code":500,"message":"\n------------\nWhile executing FilterExpression at line 120, column 73 in source:\n..._name, args_value in tool_call.arguments|items %}↵                        {{- '<...\n                                           ^\nError: Unknown (built-in) filter 'items' for type String","type":"server_error"}}

References
