---
name: MiniMax-M2.5-REAP-172B-A10B-GGUF-Q4_K_M
base_model: MiniMaxAI/MiniMax-M2.5
license: other
pipeline_tag: text-generation
tasks: text-generation
language: en
library_name: llama.cpp
tags:
- Cerebras
- MiniMaxAI
- M2.5
- REAP
- GGUF
- static quantization
- 4-bit
---

Original Model Link: https://huggingface.co/cerebras/MiniMax-M2.5-REAP-172B-A10B
# MiniMax-M2.5-REAP-172B-A10B-GGUF-Q4
This is a 172-billion-parameter MiniMax M2.5 model with 25% of its experts pruned using REAP (Router-weighted Expert Activation Pruning), then converted to GGUF with llama.cpp and statically quantized to Q4.
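The idea behind REAP is to rank each expert by how much it actually contributes under the router's own weighting, then drop the weakest fraction. A minimal sketch of that scoring step, assuming calibration activations are available; the tensor names, shapes, and exact saliency formula here are illustrative assumptions, not Cerebras' implementation:

```python
import torch

def reap_scores(router_probs: torch.Tensor, expert_outputs: torch.Tensor) -> torch.Tensor:
    """router_probs: [tokens, n_experts] gate weights after softmax.
    expert_outputs: [tokens, n_experts, d_model] per-expert outputs.
    Returns one saliency score per expert."""
    # Weight each expert's output magnitude by how strongly the router
    # selected it, then average over the calibration tokens.
    norms = expert_outputs.norm(dim=-1)        # [tokens, n_experts]
    return (router_probs * norms).mean(dim=0)  # [n_experts]

def experts_to_keep(scores: torch.Tensor, prune_fraction: float = 0.25) -> torch.Tensor:
    # Keep the top 75% of experts by saliency; the bottom 25% are pruned.
    n_keep = int(scores.numel() * (1.0 - prune_fraction))
    return torch.topk(scores, n_keep).indices
```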
Patched 20/02/26
Reuploaded quantization built with llama.cpp main@8110 and gguf@0.17.1. On the initial push, testing on an M4 device with Ollama showed the model rambling compared to M2.1-REAP. The original conversion used llama.cpp main@7952 for quantization.
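To confirm which converter build produced a given file, the key/value metadata embedded in the GGUF header can be inspected. A minimal sketch, assuming a recent `gguf` Python package (the one that ships with llama.cpp) where `ReaderField.contents()` is available; the filename is illustrative:

```python
# Dump the GGUF key/value header to check converter and quantization info.
# Assumes `pip install gguf`; the filename below is illustrative.
from gguf import GGUFReader

reader = GGUFReader("MiniMax-M2.5-REAP-172B-A10B-Q4_K_M.gguf")
for key, field in reader.fields.items():
    # contents() decodes scalar and string fields into plain Python values.
    print(f"{key}: {field.contents()}")
```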
Command sequence, using llama.cpp built from source and its llama-quantize tool:
```sh
hf download cerebras/MiniMax-M2.5-REAP-172B-A10B --local-dir MiniMax-M2.5-REAP-172B-A10B
python convert_hf_to_gguf.py MiniMax-M2.5-REAP-172B-A10B
llama-quantize MiniMax-M2.5-REAP-172B-A10B-BF16.gguf Q4_K_M
```
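After quantizing, a quick generation smoke test helps catch regressions like the rambling seen on the first push. A minimal sketch using llama-cpp-python (an assumption; any llama.cpp frontend works, and the filename is illustrative):

```python
# Smoke-test the quantized file: load it and generate a short completion.
# Assumes `pip install llama-cpp-python`; the filename below is illustrative.
from llama_cpp import Llama

llm = Llama(model_path="MiniMax-M2.5-REAP-172B-A10B-Q4_K_M.gguf", n_ctx=4096)
out = llm("Explain expert pruning in one sentence.", max_tokens=64)
print(out["choices"][0]["text"])
```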