It's more compliant with unaligned requests than the original, I believe, but you may still run into requests it refuses.

Going to try and make it even more compliant and release another version when I get the chance.

Important note: this model is a quantization of the original, specifically Llama-3.2-1B-Instruct-Q8_0 (an 8-bit GGUF quant).


Base model: meta-llama/Llama-3.2-1B-Instruct
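
If you want to try it locally, here's a minimal sketch using llama-cpp-python. The repo id and the Q8_0 filename pattern are assumptions taken from this card's title, so check the repository's Files tab for the exact .gguf name before running it.

```python
# Minimal sketch: load this GGUF quant with llama-cpp-python and run one chat turn.
# Assumptions: the repo id below matches this card and a file ending in "Q8_0.gguf"
# exists in it -- adjust the glob if the actual filename differs.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="ironvin/Llama3.2-1b-Instruct-Heretic-GGUF",
    filename="*Q8_0.gguf",  # glob for the Q8_0 quant file (assumed name)
    n_ctx=4096,             # context window; lower it if you are short on RAM
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Explain what a GGUF quantization is."}],
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])
```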

Format: GGUF
Model size: 1B params
Architecture: llama

Model tree: ironvin/Llama3.2-1b-Instruct-Heretic-GGUF is quantized from meta-llama/Llama-3.2-1B-Instruct.