This is an abliterated version of Mistral-7B-Instruct-v0.3, made using Heretic.

The quantizations were created using an imatrix generated from text_en_large merged with harmful.txt, to leverage the abliterated nature of the model.

It's an older model, sir, but it checks out.

Performance

Metric           This model   Original model
KL divergence    0.15         0 (by definition)
Refusals         3/100        85/100
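For context, the KL divergence row measures how far this model's next-token distribution drifts from the original's (identical distributions give 0 by definition). A minimal sketch of the computation over two discrete distributions, with illustrative values only:

```python
import math

def kl_divergence(p, q, eps=1e-10):
    """KL(P || Q) in nats for two discrete distributions of equal length.

    eps guards against log-of-zero on sparse distributions."""
    return sum(pi * math.log((pi + eps) / (qi + eps)) for pi, qi in zip(p, q))

# Identical distributions give 0 "by definition", matching the table above.
p = [0.5, 0.3, 0.2]
print(kl_divergence(p, p))
```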

Analysis against the original model:

  • Total Tensors: 291
  • Tensors with Diffs: 47 (16.2%)
  • Average % Diff: 6.99%
  • Median % Diff: 0.00%
  • Min/Max % Diff: 0.00% / 46.77%
  • Std Dev % Diff: 15.97%
  • Skewness % Diff: 1.86
  • Avg L2 Norm: 144619.57
  • Tensors with >5% diff: 47
  • Top differences:
    • blk.14.attn_output.weight ((4096, 8192), L2: 669167.94): 46.77%
    • blk.13.attn_output.weight ((4096, 8192), L2: 667456.52): 46.51%
    • blk.16.attn_output.weight ((4096, 8192), L2: 667644.60): 46.46%
    • blk.12.attn_output.weight ((4096, 8192), L2: 664339.15): 46.03%
    • blk.15.attn_output.weight ((4096, 8192), L2: 664117.46): 45.94%
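The per-tensor statistics above can be reproduced with a simple element-wise comparison. A minimal sketch, assuming "% diff" means mean absolute difference relative to the original tensor's mean absolute magnitude (the exact definition used by the analysis tool may differ):

```python
import math

def pct_diff(a, b):
    """Mean absolute element-wise difference between flattened tensors a and b,
    as a percentage of a's mean absolute magnitude (assumed definition)."""
    num = sum(abs(x - y) for x, y in zip(a, b))
    den = sum(abs(x) for x in a) or 1.0  # avoid division by zero
    return 100.0 * num / den

def l2_norm(a):
    """Euclidean (L2) norm of a flattened tensor."""
    return math.sqrt(sum(x * x for x in a))

# Identical tensors show 0% diff, as with the unchanged 84% of tensors above.
print(pct_diff([0.1, -0.2, 0.3], [0.1, -0.2, 0.3]))  # 0.0
```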

Tensor Difference Distribution

[Tensor difference distribution and per-tensor charts omitted]

Format: GGUF
Model size: 7B params
Architecture: llama

Available quantizations: 3-bit, 4-bit, 5-bit, 6-bit, 8-bit, 16-bit


Model tree for noctrex/Mistral-7B-Instruct-v0.3-abliterated-GGUF
