---
pipeline_tag: text-generation
license: other
license_name: modified-mit
license_link: https://github.com/MiniMax-AI/MiniMax-M2.5/blob/main/LICENSE
library_name: mlx
base_model: MiniMaxAI/MiniMax-M2.5
tags:
- mlx
---
# catalystsec/MiniMax-M2.5-3bit-DWQ
This model was quantized to 3-bit using DWQ (distilled weight quantization) with mlx-lm version **0.30.7**. Run settings and results:
| Parameter | Value |
|---------------------------|--------------------------------|
| DWQ learning rate | 3e-7 |
| Batch size | 1 |
| Dataset | `allenai/tulu-3-sft-mixture` |
| Initial validation loss | 0.183 |
| Final validation loss | 0.110 |
| Relative KL reduction | ≈40% |
| Tokens processed | ≈1.11M |
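As a rough sketch, a comparable run can be launched through mlx-lm's DWQ entry point. The flag names below are assumptions and may differ between mlx-lm versions; consult `mlx_lm.dwq --help` for the exact interface.
```bash
# Hedged sketch of a comparable DWQ run; flag names are assumptions
# and may differ between mlx-lm versions.
mlx_lm.dwq \
    --model MiniMaxAI/MiniMax-M2.5 \
    --mlx-path MiniMax-M2.5-3bit-DWQ \
    --bits 3 \
    --learning-rate 3e-7 \
    --batch-size 1 \
    --data-path allenai/tulu-3-sft-mixture
```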
## Perplexity
Perplexity was evaluated on 210 samples of 512 tokens each from the default mlx-lm calibration data (lower is better).
| Model | Perplexity |
|-------|-----------|
| 3-bit | 7.802 |
| 3-bit DWQ | **7.434** |
| 4-bit | 6.581 |
| 4-bit DWQ | 6.431 |
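For reference, window-level perplexity can be computed along the following lines. This is an illustrative sketch, not the exact evaluation script; `texts` stands in for the calibration samples.
```python
import math

import mlx.core as mx
from mlx_lm import load

model, tokenizer = load("catalystsec/MiniMax-M2.5-3bit-DWQ")

def perplexity(texts, seq_len=512):
    """exp(mean next-token negative log-likelihood) over fixed windows."""
    nll, count = 0.0, 0
    for text in texts:
        tokens = tokenizer.encode(text)[: seq_len + 1]
        if len(tokens) < 2:
            continue
        inputs = mx.array(tokens)[None]               # (1, T)
        logits = model(inputs[:, :-1])                # (1, T-1, vocab)
        targets = inputs[:, 1:, None]                 # next-token ids
        logprobs = logits - mx.logsumexp(logits, axis=-1, keepdims=True)
        nll -= mx.take_along_axis(logprobs, targets, axis=-1).sum().item()
        count += targets.size
    return math.exp(nll / count)
```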
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate

# Download (if necessary) and load the quantized model and tokenizer.
model, tokenizer = load("catalystsec/MiniMax-M2.5-3bit-DWQ")

prompt = "hello"

# Apply the model's chat template when one is defined.
if tokenizer.chat_template is not None:
    prompt = tokenizer.apply_chat_template(
        [{"role": "user", "content": prompt}],
        add_generation_prompt=True,
    )

# verbose=True streams tokens as they are generated; the full text is returned.
response = generate(model, tokenizer, prompt=prompt, verbose=True)
print(response)
```
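Generation also works from the command line via mlx-lm's CLI:
```bash
mlx_lm.generate --model catalystsec/MiniMax-M2.5-3bit-DWQ --prompt "hello"
```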