---
library_name: mlx
pipeline_tag: text-generation
inference: false
license: apache-2.0
base_model: Qwen/Qwen3-Next-80B-A3B-Instruct
base_model_relation: quantized
tags:
- apple-silicon
- metal
- arm64
- 5-bit
- group-size-32
- mlx
- mlx-lm
- qwen
- halley-ai
---

# Qwen3-Next-80B-A3B-Instruct — MLX 5-bit (group size 32)

**Summary.** This is a 5-bit (Q5) MLX quantization of Qwen3-Next-80B-A3B-Instruct with group size 32, built for Apple Silicon with Metal acceleration.

- Base model: `Qwen/Qwen3-Next-80B-A3B-Instruct` (apache-2.0)
- Quantization: MLX Q5, `q_group_size=32` (some tensors may remain 16-bit for stability)
- Files: MLX weight shards + `config.json`; tokenizer files included for drop-in use
- Intended use: local inference / research on M-series Macs
- Not intended for: safety-critical decisions; outputs may be inaccurate or biased

## Requirements

- Hardware: Apple Silicon Mac (M-series) with Metal acceleration.
- Memory: ≥96 GB unified memory recommended for comfortable headroom at large context lengths.
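
For intuition, the quantized weights alone land around 60 GB. The sketch below works that out, assuming fp16 scales and biases per 32-weight group (illustrative only; the shard sizes on disk are authoritative, and KV cache and activations come on top):

```python
# Back-of-envelope size of the Q5/gs32 weights (illustrative round numbers).
params = 80e9                  # ~80B parameters
bits_per_weight = 5            # quantized weight bits
# Assumption: each 32-weight group carries a 16-bit scale and a 16-bit bias,
# adding (16 + 16) / 32 = 1 extra bit per weight.
overhead_bits = (16 + 16) / 32
gib = params * (bits_per_weight + overhead_bits) / 8 / 1024**3
print(f"~{gib:.0f} GiB of weights")  # ≈ 56 GiB, before KV cache/activations
```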

## How to use (MLX)

```bash
pip install mlx-lm
```

```python
from mlx_lm import load, generate

model, tokenizer = load("halley-ai/Qwen3-Next-80B-A3B-Instruct-MLX-5bit-gs32")
print(generate(
    model, tokenizer,
    prompt="Explain the Chudnovsky algorithm to compute π.",
    max_tokens=256, max_kv_size=512
))
```
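
For instruction-style prompts it is usually better to route the request through the model's chat template first. A minimal sketch using the standard `apply_chat_template` method exposed by the tokenizer that `load` returns:

```python
from mlx_lm import load, generate

model, tokenizer = load("halley-ai/Qwen3-Next-80B-A3B-Instruct-MLX-5bit-gs32")

# Wrap the user turn in the model's chat template before generating.
messages = [{"role": "user", "content": "Explain the Chudnovsky algorithm to compute π."}]
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
print(generate(model, tokenizer, prompt=prompt, max_tokens=256))
```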

The same model can be driven from the command line:

```bash
python -m mlx_lm generate --model halley-ai/Qwen3-Next-80B-A3B-Instruct-MLX-5bit-gs32 \
  --prompt "Explain the Chudnovsky algorithm to compute pi." \
  --max-kv-size 512 --max-tokens 256
```

## Evaluation

Perplexity (PPL) streaming evaluation on WikiText-2 (raw, test); fast preset with `window=stride=4096`, ~100k tokens, and EOS inserted between documents.

| Variant              | PPL (ctx=4096, fast)                   |
|----------------------|----------------------------------------|
| MLX bf16 (reference) | 5.14                                   |
| MLX 6-bit (gs=64)    | 5.14 (≈0.0% vs bf16)                   |
| MLX 5-bit (gs=32)    | 5.20 (+1.2% vs bf16, +1.2% vs 6b/gs64) |
| MLX 4-bit (gs=64)    | 5.43 (+5.6% vs bf16, +5.6% vs 6b/gs64) |

Notes:

- Numbers are from local MLX runs on Apple Silicon; small variations are expected with tokenizer details, logits dtype, and the token subset evaluated.

### Interpretation

- 6-bit gs64 matches bf16 on this corpus; use it when maximum quality is the goal.
- 5-bit gs32 is a balanced pick: near-par PPL in a smaller footprint, with solid behavior on deterministic math prompts.
- 4-bit gs64 trades a modest quality drop for the smallest size; good for constrained machines.

Reproduce locally:

```bash
python python/scripts/test_perplexity-mlx.py \
  --model_path "/path/to/Qwen3-Next-80B-A3B-Instruct-5bit-gs32" \
  --fast --progress
```
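
If that script is not at hand, the same windowed evaluation is straightforward to approximate with `mlx-lm` directly. A sketch under the setup described above (non-overlapping 4096-token windows, EOS between documents; the two-document corpus is a stand-in):

```python
import math
import mlx.core as mx
from mlx_lm import load

model, tokenizer = load("halley-ai/Qwen3-Next-80B-A3B-Instruct-MLX-5bit-gs32")

# One long token stream with EOS between documents (stand-in corpus).
docs = ["first document text ...", "second document text ..."]
tokens = []
for doc in docs:
    tokens.extend(tokenizer.encode(doc))
    tokens.append(tokenizer.eos_token_id)

window = 4096  # window == stride: non-overlapping evaluation windows
nll, count = 0.0, 0
for start in range(0, len(tokens) - 1, window):
    chunk = tokens[start : start + window + 1]  # inputs plus next-token targets
    inputs = mx.array(chunk[:-1])[None]
    targets = mx.array(chunk[1:])[None]
    logits = model(inputs)
    # Log-softmax over the vocabulary, then gather the target-token log-probs.
    logprobs = logits - mx.logsumexp(logits, axis=-1, keepdims=True)
    token_lp = mx.take_along_axis(logprobs, targets[..., None], axis=-1)
    nll -= mx.sum(token_lp).item()
    count += targets.size

print(f"PPL ≈ {math.exp(nll / count):.2f}")
```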

## Conversion details (provenance)

```bash
python -m mlx_lm convert \
  --hf-path Qwen3-Next-80B-A3B-Instruct \
  --mlx-path /path/to/Qwen3-Next-80B-A3B-Instruct-5bit-gs32 \
  -q --q-bits 5 --q-group-size 32
```

- Some tensors (for example, embeddings, norms, and the MoE router) may remain 16-bit for numerical stability.
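
Which modules stayed in 16-bit can be checked from the `quantization` block that `mlx_lm convert` writes into `config.json`. A sketch (the exact shape of per-module overrides varies across mlx-lm versions, so treat this as illustrative):

```python
import json

# Placeholder path: point it at the downloaded model directory.
with open("/path/to/Qwen3-Next-80B-A3B-Instruct-5bit-gs32/config.json") as f:
    quant = json.load(f).get("quantization", {})

# Global defaults, e.g. {"group_size": 32, "bits": 5}.
defaults = {k: v for k, v in quant.items()
            if isinstance(v, int) and not isinstance(v, bool)}
print("defaults:", defaults)

# Per-module entries override the defaults; an entry of `false` marks a
# module that was skipped during quantization and kept in 16-bit.
skipped = [name for name, v in quant.items() if v is False]
print(f"{len(skipped)} modules kept in 16-bit:", skipped[:5])
```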

## Sibling & reference models

- halley-ai/Qwen3-Next-80B-A3B-Instruct-MLX-6bit-gs64
- halley-ai/Qwen3-Next-80B-A3B-Instruct-MLX-4bit-gs64

## Limitations and biases

Outputs may be factually wrong or unsafe. Do not use this model for medical, legal, or financial decisions without human review.

## License and credits

- License: apache-2.0 (inherits from the base model)
- Base model: Qwen/Qwen3-Next-80B-A3B-Instruct
- Quantization: Halley AI Lab (MLX Q5, gs=32)
- Please cite both the base model and this repository when you use the weights.