noctrex committed
Commit 7b1e92a · 1 Parent(s): 293ee4d

Add files using upload-large-folder tool

Qwen3-Coder-Next-MXFP4_MOE_BF16.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:d7cbd6f71169958c5118c02820016c73bb8ec68a91c07ab1b71a84a071aff1f1
+ size 45910019872
Qwen3-Coder-Next-MXFP4_MOE_F16.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:dc685cda61b109e6885dc8f49e4f52b04003bc3ecc0b3349e4aaec38d534e9e2
+ size 45910019872
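
Both GGUF entries above are Git LFS pointer files: the `oid` is the SHA-256 digest of the actual file contents, so a finished download can be verified against it. A minimal sketch in shell, assuming the BF16 file has already been fetched into the current directory:

```
# The digest printed here should match the oid recorded in the LFS pointer above.
sha256sum Qwen3-Coder-Next-MXFP4_MOE_BF16.gguf
# expected: d7cbd6f71169958c5118c02820016c73bb8ec68a91c07ab1b71a84a071aff1f1
```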
README.md CHANGED
@@ -1,23 +1,19 @@
- ---
- pipeline_tag: text-generation
- base_model:
- - Qwen/Qwen3-Coder-Next
- ---
- This is a MXFP4_MOE quantization of the model [Qwen3-Coder-Next](https://huggingface.co/Qwen/Qwen3-Coder-Next)
-
- The suggested parameters are:
- ```
- temperature=1.0
- top_p=0.95
- top_k=40
- ```
- As of 2026-02-17 I have updated the model to a MXFP4 quant of higher quality.
-
- The mainline standard is:
- | tensors | quant |
- | ------- | ----------- |
- |1D | unquantized |
- |other | Q8_0 |
- |MoE | MXFP4 |
-
- So I created a new variant, where the other tensors are bumped up from Q8 to FP16.
+ ---
+ pipeline_tag: text-generation
+ base_model:
+ - Qwen/Qwen3-Coder-Next
+ ---
+ This is an MXFP4_MOE quantization of the model [Qwen3-Coder-Next](https://huggingface.co/Qwen/Qwen3-Coder-Next).
+
+ The suggested parameters from the official docs are:
+ ```
+ temperature=1.0
+ top_p=0.95
+ top_k=40
+ ```
+ As of 2026-02-17 I have updated the model to an MXFP4 quant of higher quality.
+
+ The mainline standard is to use MXFP4 for the MoE tensors and Q8 for the rest.
+ So I created two new variants, where the other tensors are either BF16 or FP16 instead of Q8.
+ The order of preference is BF16, then F16.
+ On some architectures BF16 will be slower, but it is the highest quality: essentially it is the original tensors from the model, copied over unquantized.
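
The suggested sampling parameters map directly onto llama.cpp's CLI flags. A minimal sketch of running the BF16 variant with them, assuming a local llama.cpp build with `llama-cli` on the PATH (the prompt is only a placeholder):

```
# Run the quantized model with the sampling settings suggested in the README.
llama-cli -m Qwen3-Coder-Next-MXFP4_MOE_BF16.gguf \
  --temp 1.0 --top-p 0.95 --top-k 40 \
  -p "Write a function that checks whether a string is a palindrome."
```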