I searched around in a few other repos and could only find F16 versions, not the original unquantized weights, so:
This is the unquantized BF16 GGUF of Mistral-7B-Instruct-v0.3, identical to the original safetensors with no quality loss: the source weights are stored in BF16, so a BF16 GGUF is a bit-exact copy, whereas an F16 cast can clip values that fall outside F16's narrower exponent range.

It's an older model, sir, but it checks out
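To try the file locally, here's a minimal sketch using llama-cpp-python; the local filename, context size, and sampling parameters are assumptions for illustration, not part of this repo.

```python
# Minimal sketch: load the BF16 GGUF with llama-cpp-python
# (pip install llama-cpp-python). Filename below is hypothetical.
from llama_cpp import Llama

llm = Llama(
    model_path="Mistral-7B-Instruct-v0.3-BF16.gguf",  # assumed local filename
    n_ctx=4096,       # context window; adjust to taste
    n_gpu_layers=-1,  # offload all layers to GPU if one is available
)

# create_chat_completion applies the chat template stored in the
# GGUF metadata (Mistral's [INST] ... [/INST] format).
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "In one sentence, how does BF16 differ from F16?"}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```

Note that a BF16 GGUF runs unquantized, so expect roughly 15 GB of memory for the weights alone.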

GGUF · 7B params · llama architecture · 16-bit (BF16)
