
mratsim/MiniMax-M2.5-BF16-INT4-AWQ

Tags: Text Generation, Safetensors, llm-compressor, minimax_m2, fp8, awq, conversational, vllm, code, devops, software engineering, engineer, developer, architect, stem, agent, custom_code, compressed-tensors
Community (8)

  • Any plans for Xiaomi's MiMo V2 Flash? (1) · #8 opened 2 days ago by droussis
  • Are there any plans to make a BF16/FP8 AWQ INT4 version of Qwen/Qwen3.5-397B-A17B? (❤️ 1 · 1) · #7 opened 2 days ago by zuuky
  • Links in README (1) · #6 opened 4 days ago by Jon-Nielsen
  • accuracy (16) · #4 opened 7 days ago by ktsaou
  • accuracy benchmark (1) · #3 opened 7 days ago by mwalol
  • FP8 + INT4 version (14) · #2 opened 7 days ago by bigstorm
  • Can't get it to work on 8x RTX 3090 (7) · #1 opened 7 days ago by maglat