Anyone get this working with a server on MacOS yet?
I tried running it with LM Studio and it errors out on the "safe" string filter in the prompt template. If I remove that, it errors out on another XML issue. I tried running it with mlx_lm.server, but that just 404s every request. If anyone has it working, please let me know.
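In case I'm just hitting the wrong route: my understanding is that mlx_lm.server exposes an OpenAI-style HTTP API, so anything outside the /v1/... paths would 404. Here's roughly what I'm sending, as a minimal sketch assuming the default port 8080 and the /v1/chat/completions route:

# In another terminal: mlx_lm.server --model <model> --port 8080
import json
import urllib.request

# Assumption: the server is on localhost:8080 and serves the
# OpenAI-compatible /v1/chat/completions endpoint.
payload = {
    "messages": [{"role": "user", "content": "Hello"}],
    "max_tokens": 64,
}
req = urllib.request.Request(
    "http://localhost:8080/v1/chat/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.load(resp)["choices"][0]["message"]["content"])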
Thanks!
Nope, me neither. It stops here:
Fetching 73 files: 100%|████████████████████████████████████████████████████████| 73/73 [00:00<00:00, 11559.79it/s]
/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/multiprocessing/resource_tracker.py:254: UserWarning: resource_tracker: There appear to be 1 leaked semaphore objects to clean up at shutdown
warnings.warn('resource_tracker: There appear to be %d '
Versions tried:
Name       Version   Build            Channel
mlx        0.26.5    py311hef3d267_0  conda-forge
mlx-lm     0.26.0    pyhd8ed1ab_0     conda-forge
mlx-metal  0.26.5    pypi_0           pypi
mlx-vlm    0.1.22    pypi_0           pypi
I'm day-dreaming about the model Qwen3-coder-30B-A3B-4bit-DWQ so I can run it locally :-)
Running on an M3 Ultra with 512 GB gives me:
mlx_lm.generate --model mlx-community/Qwen3-Coder-480B-A35B-Instruct-4bit
Fetching 73 files: 100%|███████████████████████████████████████████████████████████████████████████████████████████| 73/73 [00:00<00:00, 331727.19it/s]
Hello! How can I help you today?
Prompt: 9 tokens, 40.850 tokens-per-sec
Generation: 10 tokens, 29.229 tokens-per-sec
Peak memory: 270.170 GB
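In case it's useful to anyone, here is the same run from Python rather than the CLI: a minimal sketch using mlx_lm's load/generate API (the chat-template rendering is my assumption and may differ across mlx-lm versions):

from mlx_lm import load, generate

# Same model as the CLI run above; loads from the local HF cache.
model, tokenizer = load("mlx-community/Qwen3-Coder-480B-A35B-Instruct-4bit")

# Assumption: the tokenizer ships a chat template; render to a plain string
# so generate() gets text rather than token ids.
prompt = tokenizer.apply_chat_template(
    [{"role": "user", "content": "Hello"}],
    tokenize=False,
    add_generation_prompt=True,
)

# verbose=True prints the same prompt/generation tokens-per-sec and
# peak-memory stats as the mlx_lm.generate CLI.
text = generate(model, tokenizer, prompt=prompt, verbose=True)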