Simple quantizations of Akicou/MiniMax-M2-5-REAP-29, made with llama.cpp's convert_hf_to_gguf.py. Nothing fancy.
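
For reference, a conversion along these lines can be sketched as below. This is a minimal sketch under assumed paths, filenames, and output type, not the exact commands used for this repo; a 4-bit file would typically come from a follow-up pass with llama.cpp's llama-quantize tool.

```python
# Rough sketch of the conversion step, assuming a local llama.cpp checkout and
# a locally downloaded copy of the base HF checkpoint. Paths, the output
# filename, and the chosen outtype are illustrative assumptions, not the exact
# commands used for this repo.
import subprocess

HF_MODEL_DIR = "./MiniMax-M2-5-REAP-29"        # assumed local path to the HF checkpoint
OUTFILE = "MiniMax-M2.5-REAP-29-Q8_0.gguf"     # assumed output filename (8-bit)

subprocess.run(
    [
        "python", "convert_hf_to_gguf.py",     # script from the llama.cpp repository
        HF_MODEL_DIR,
        "--outfile", OUTFILE,
        "--outtype", "q8_0",                   # f16 or bf16 would give a 16-bit file
    ],
    check=True,
)
```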

Format: GGUF
Model size: 162B params
Architecture: minimax-m2

Available quantizations: 4-bit, 8-bit, 16-bit
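
A minimal sketch of downloading and running one of these files with llama-cpp-python follows. The filename below is an assumption (check the repo's file list for the exact names), and your llama.cpp build must be recent enough to support the minimax-m2 architecture.

```python
# Sketch of fetching one quantized file from the Hub and loading it with
# llama-cpp-python. The filename is an assumed example, not a confirmed name.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

gguf_path = hf_hub_download(
    repo_id="tomngdev/MiniMax-M2.5-REAP-29-GGUF",
    filename="MiniMax-M2.5-REAP-29-Q4_K_M.gguf",  # assumed 4-bit filename
)

llm = Llama(
    model_path=gguf_path,
    n_ctx=4096,        # context length; raise as memory allows
    n_gpu_layers=-1,   # offload all layers to GPU if one is available
)

out = llm("Explain GGUF quantization in one sentence.", max_tokens=64)
print(out["choices"][0]["text"])
```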

