README.md exists, but its content is empty; no model card has been provided.
Downloads last month: 7
Format: Safetensors
Model size: 7B params
Tensor types: F32, F16, U8
Inference Providers: this model isn't deployed by any Inference Provider.
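
Since the model card is empty and no Inference Provider serves this checkpoint, the sketch below shows one way it might be loaded locally with Hugging Face Transformers. Only the repo id comes from this page; the use of AutoModelForCausalLM, the generation settings, and the assumption that the 4-bit (U8) weights load directly through `from_pretrained` (which would require bitsandbytes if they are bitsandbytes-quantized) are unverified assumptions, not instructions from the model author.

```python
# Minimal, hedged loading sketch for Disya/Mistral-qwq-12b-merge-4bit.
# Assumes: transformers + accelerate installed, and bitsandbytes if the
# U8 tensors are a bitsandbytes-style 4-bit quantization.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Disya/Mistral-qwq-12b-merge-4bit"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",  # spread layers across available GPU/CPU memory
)

prompt = "Hello, how are you?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```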

Model tree for Disya/Mistral-qwq-12b-merge-4bit

Quantized (4): this model
Finetunes: 2 models