sebastavar committed · verified
Commit 1730a54 · 1 Parent(s): 457c8ce

Update README.md

Files changed (1)
  1. README.md +82 -12
README.md CHANGED
@@ -17,13 +17,24 @@ tags:
  - halley-ai
  ---

- # halley-ai/Qwen3-Next-80B-A3B-Instruct-MLX-6bit-gs64

- This model [halley-ai/Qwen3-Next-80B-A3B-Instruct-MLX-6bit-gs64](https://huggingface.co/halley-ai/Qwen3-Next-80B-A3B-Instruct-MLX-6bit-gs64) was
- converted to MLX format from [Qwen/Qwen3-Next-80B-A3B-Instruct](https://huggingface.co/Qwen/Qwen3-Next-80B-A3B-Instruct)
- using mlx-lm version **0.28.0**.

- ## Use with mlx

  ```bash
  pip install mlx-lm
@@ -32,15 +43,74 @@ pip install mlx-lm
  ```python
  from mlx_lm import load, generate

  model, tokenizer = load("halley-ai/Qwen3-Next-80B-A3B-Instruct-MLX-6bit-gs64")

- prompt = "hello"

- if tokenizer.chat_template is not None:
-     messages = [{"role": "user", "content": prompt}]
-     prompt = tokenizer.apply_chat_template(
-         messages, add_generation_prompt=True
-     )

- response = generate(model, tokenizer, prompt=prompt, verbose=True)
  ```
  - halley-ai
  ---

+ # Qwen3-Next-80B-A3B-Instruct MLX 6-bit (group size 64)
+
+ **Summary.** This is a 6-bit (int6) MLX quantization of Qwen3-Next-80B-A3B-Instruct with group size 64. Built for Apple Silicon with Metal acceleration.
+
+ - Base model: `Qwen/Qwen3-Next-80B-A3B-Instruct` (apache-2.0)
+ - Quantization: MLX int6, `q_group_size=64` (some tensors may remain 16-bit for stability)
+ - Files: MLX weight shards + `config.json`; tokenizer files included for drop-in use
+ - Intended use: local inference / research on M-series Macs
+ - Not intended for: safety-critical decisions; outputs may be inaccurate or biased
+
+ ## Requirements
+
+ Runs on Apple Silicon (M1 or newer) with macOS ≥ 13.5 via MLX (Metal).
+
+ - Not supported: Intel macOS / Linux / Windows (consider a GGUF build + llama.cpp instead).
+ - Memory guidance: large unified memory recommended (96 GB provides comfortable headroom). The effective GPU working set is capped by Metal's budget; keep 5–10% headroom (see the sketch below).
+
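+ A rough way to sanity-check headroom before loading (a minimal sketch: the ~6.5 bits/weight figure assumes MLX affine quantization with an fp16 scale and bias per 64-weight group, and the `device_info()` keys may vary across MLX versions):
+
+ ```python
+ import mlx.core as mx
+
+ # Rough weight footprint: ~80B params at ~6.5 bits/weight
+ # (6-bit values plus fp16 scale/bias per 64-weight group); unquantized tensors add a bit more.
+ approx_weight_gb = 80e9 * 6.5 / 8 / 1e9
+
+ info = mx.metal.device_info()  # Metal limits reported by MLX on Apple Silicon
+ budget_gb = info.get("max_recommended_working_set_size", 0) / 1e9
+
+ print(f"estimated weights: ~{approx_weight_gb:.0f} GB")
+ print(f"Metal working-set budget: ~{budget_gb:.0f} GB")
+ print("headroom looks OK" if approx_weight_gb < 0.9 * budget_gb else "likely too tight")
+ ```
+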
+ ## How to use (MLX)

  ```bash
  pip install mlx-lm

  ```python
  from mlx_lm import load, generate

+ # Use the uploaded HF repo or a local path to the MLX export
  model, tokenizer = load("halley-ai/Qwen3-Next-80B-A3B-Instruct-MLX-6bit-gs64")
+ print(generate(
+     model, tokenizer,
+     prompt="Explain the Chudnovsky algorithm to compute π.",
+     max_tokens=256, max_kv_size=512
+ ))
+ ```
+
+ ```bash
+ python -m mlx_lm generate --model halley-ai/Qwen3-Next-80B-A3B-Instruct-MLX-6bit-gs64 \
+   --prompt "Explain the Chudnovsky algorithm to compute pi." \
+   --max-kv-size 512 --max-tokens 256
+ ```
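+
+ For chat-style prompts, the tokenizer's chat template can be applied first. A minimal sketch following the standard mlx-lm pattern (the prompt text here is illustrative):
+
+ ```python
+ from mlx_lm import load, generate
+
+ model, tokenizer = load("halley-ai/Qwen3-Next-80B-A3B-Instruct-MLX-6bit-gs64")
+
+ messages = [{"role": "user", "content": "Summarize the Chudnovsky algorithm in two sentences."}]
+ # apply_chat_template returns prompt tokens; generate() accepts either tokens or a plain string
+ prompt = tokenizer.apply_chat_template(messages, add_generation_prompt=True)
+
+ print(generate(model, tokenizer, prompt=prompt, max_tokens=256))
+ ```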
+
+ ## Evaluation
+
+ Perplexity (PPL) streaming evaluation on WikiText-2 (raw, test); fast preset with `window=stride=4096`, ~100k tokens, EOS inserted between docs.
+
+ | Variant              | PPL (ctx=4096, fast)                   |
+ |----------------------|----------------------------------------|
+ | MLX bf16 (reference) | 5.14                                   |
+ | MLX 6-bit (gs=64)    | 5.14 (≈0.0% vs bf16)                   |
+ | MLX 5-bit (gs=32)    | 5.20 (+1.2% vs bf16, +1.2% vs 6b/gs64) |
+ | MLX 4-bit (gs=64)    | 5.43 (+5.6% vs bf16, +5.6% vs 6b/gs64) |
+
+ Notes:
+
+ - Numbers from local MLX runs on Apple Silicon; small variations are expected with tokenizer details, logits dtype, and token subset.
+ - For more sensitive comparisons, use overlapping windows (for example, `--stride 512`) and evaluate the full split.
+
+ ### Interpretation
+
+ - 6-bit gs64 matches the bf16 reference on this corpus, making it the quality pick.
+ - 5-bit gs32 is near-par in PPL and strong on deterministic math probes (smaller footprint).
+ - 4-bit gs64 shows a modest drop; choose it when footprint/throughput matter most.
+
+ Reproduce locally:
+
+ ```bash
+ python python/scripts/test_perplexity-mlx.py \
+   --model_path "/path/to/Qwen3-Next-80B-A3B-Instruct-6bit-gs64" \
+   --fast --progress
  ```
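+
+ The fast preset corresponds roughly to the non-overlapping windowed loop below (a simplified sketch, not the actual script; the dataset loading, model call without a KV cache, and dtype handling are assumptions that may differ from `test_perplexity-mlx.py`):
+
+ ```python
+ import math
+
+ import mlx.core as mx
+ from datasets import load_dataset
+ from mlx_lm import load
+
+ model, tokenizer = load("halley-ai/Qwen3-Next-80B-A3B-Instruct-MLX-6bit-gs64")
+
+ # Join the raw test documents with EOS, tokenize, and cap at ~100k tokens (fast preset).
+ docs = load_dataset("wikitext", "wikitext-2-raw-v1", split="test")["text"]
+ text = tokenizer.eos_token.join(d for d in docs if d.strip())
+ ids = tokenizer.encode(text)[:100_000]
+
+ window = 4096  # window == stride: non-overlapping windows
+ total_nll, total_tokens = 0.0, 0
+
+ for start in range(0, len(ids) - 1, window):
+     chunk = ids[start:start + window + 1]   # inputs plus a one-token overhang for targets
+     if len(chunk) < 2:
+         break
+     inputs = mx.array(chunk[:-1])[None]
+     targets = mx.array(chunk[1:])[None]
+     logits = model(inputs)                  # [1, T, vocab_size]
+     logprobs = logits.astype(mx.float32)
+     logprobs = logprobs - mx.logsumexp(logprobs, axis=-1, keepdims=True)
+     nll = -mx.take_along_axis(logprobs, targets[..., None], axis=-1)
+     total_nll += nll.sum().item()
+     total_tokens += targets.size
+
+ print("PPL:", math.exp(total_nll / total_tokens))
+ ```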
+
+ ## Conversion details (provenance)
+
+ ```bash
+ python -m mlx_lm convert \
+   --hf-path Qwen3-Next-80B-A3B-Instruct \
+   --mlx-path /path/to/Qwen3-Next-80B-A3B-Instruct-6bit-gs64 \
+   -q --q-bits 6 --q-group-size 64
+ ```
+
+ - Some tensors (for example, embeddings/norms/router) may remain 16-bit for numerical stability; the snippet below shows how to check what was quantized.
+
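+ To verify the quantization settings of an export, inspect its `config.json` (a minimal sketch; the exact schema written by `mlx_lm convert`, including per-module override entries, may vary across mlx-lm versions):
+
+ ```python
+ import json
+ from pathlib import Path
+
+ cfg = json.loads(Path("/path/to/Qwen3-Next-80B-A3B-Instruct-6bit-gs64/config.json").read_text())
+ q = cfg.get("quantization", {})
+ print("bits:", q.get("bits"), "| group_size:", q.get("group_size"))
+
+ # Entries other than bits/group_size (if present) mark modules kept at a different precision.
+ overrides = {k: v for k, v in q.items() if k not in ("bits", "group_size")}
+ print("per-module overrides:", overrides or "none listed")
+ ```
+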
+ ## Sibling & reference models
+
+ - halley-ai/Qwen3-Next-80B-A3B-Instruct-MLX-5bit-gs32
+ - halley-ai/Qwen3-Next-80B-A3B-Instruct-MLX-4bit-gs64
+
+ ## Limitations and biases
+
+ Outputs may be factually wrong or unsafe. Do not use for medical, legal, or financial decisions without human review. Large models can be sensitive to prompt wording; prefer explicit, structured prompts.
+
+ ## License and credits
+
+ - License: apache-2.0 (inherits from the base model)
+ - Base model: Qwen/Qwen3-Next-80B-A3B-Instruct
+ - Quantization: Halley AI Lab (MLX int6, gs=64)
+ - Please cite both the base model and this repository when you use the weights.