---
pipeline_tag: text-generation
tags:
- uncensored
- abliterated
base_model:
- mistralai/Mistral-7B-Instruct-v0.3
---
This is an abliterated version of Mistral-7B-Instruct-v0.3, made using Heretic.
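Abliteration, as the term is commonly used, is directional ablation: a "refusal direction" is estimated from activations on harmful vs. harmless prompts, and its component is projected out of selected weight matrices so the model can no longer write outputs along it. The snippet below is only a minimal sketch of that idea with random placeholder tensors; it is not Heretic's actual implementation, and the shapes and names are illustrative.

```python
import torch

def ablate_direction(weight: torch.Tensor, direction: torch.Tensor) -> torch.Tensor:
    """Project `direction` out of the layer's output space: W <- (I - d d^T) W."""
    d = direction / direction.norm()
    return weight - torch.outer(d, d) @ weight

# Toy shapes only; a real run would use the model's hidden size and a refusal
# direction estimated from mean activations on harmful vs. harmless prompts.
hidden = 512
W = torch.randn(hidden, hidden)      # stand-in for e.g. an attn_output weight
refusal_dir = torch.randn(hidden)    # placeholder refusal direction
W_ablated = ablate_direction(W, refusal_dir)

# The ablated layer now produces (numerically) zero output along refusal_dir.
print((refusal_dir @ W_ablated).abs().max().item())
```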
The quantizations were created using an imatrix merged from text_en_large and harmful.txt to leverage the abliterated nature of the model.
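For reference, the sketch below shows one plausible way to reproduce such an imatrix-based quantization with llama.cpp's llama-imatrix and llama-quantize tools, driven from Python. The file names, the Q4_K_M target, and the choice to simply concatenate the two calibration sets (rather than merging two separately computed imatrices) are assumptions, not the exact recipe used for this upload.

```python
import subprocess

# Placeholder paths; adjust to your local GGUF export and calibration files.
model_f16 = "Mistral-7B-Instruct-v0.3-abliterated-f16.gguf"
calibration_files = ["text_en_large.txt", "harmful.txt"]

# Combine both calibration sets so the imatrix reflects normal and "harmful" text.
with open("calibration_merged.txt", "w", encoding="utf-8") as out:
    for path in calibration_files:
        with open(path, encoding="utf-8") as f:
            out.write(f.read() + "\n")

# Compute the importance matrix over the merged calibration data.
subprocess.run(
    ["llama-imatrix", "-m", model_f16, "-f", "calibration_merged.txt", "-o", "imatrix.dat"],
    check=True,
)

# Quantize with the imatrix; Q4_K_M is just an example target type.
subprocess.run(
    ["llama-quantize", "--imatrix", "imatrix.dat", model_f16,
     "Mistral-7B-Instruct-v0.3-abliterated-Q4_K_M.gguf", "Q4_K_M"],
    check=True,
)
```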
## Performance
| Metric | This model | Original model |
|---|---|---|
| KL divergence | 0.15 | 0 (by definition) |
| Refusals | 3/100 | 85/100 |
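KL divergence here measures how far the abliterated model's next-token distributions drift from the original on ordinary prompts (0 means identical behaviour), while refusals count how many of 100 harmful test prompts the model declines. The sketch below shows how a per-prompt KL divergence of this kind could be computed with Hugging Face transformers; the model paths, prompt, and averaging over token positions are assumptions, not the exact evaluation Heretic performs.

```python
import torch
from torch.nn.functional import log_softmax
from transformers import AutoModelForCausalLM, AutoTokenizer

original_id = "mistralai/Mistral-7B-Instruct-v0.3"
abliterated_id = "./this-model"   # placeholder path to the abliterated checkpoint

tok = AutoTokenizer.from_pretrained(original_id)
orig = AutoModelForCausalLM.from_pretrained(original_id, torch_dtype=torch.float16, device_map="auto")
abl = AutoModelForCausalLM.from_pretrained(abliterated_id, torch_dtype=torch.float16, device_map="auto")

def mean_kl(prompt: str) -> float:
    """KL(original || abliterated) of next-token distributions, averaged over positions."""
    ids = tok(prompt, return_tensors="pt").to(orig.device)
    with torch.no_grad():
        p = log_softmax(orig(**ids).logits.float(), dim=-1)   # original log-probs
        q = log_softmax(abl(**ids).logits.float(), dim=-1)    # abliterated log-probs
    kl_per_token = (p.exp() * (p - q)).sum(dim=-1)            # KL at each position
    return kl_per_token.mean().item()

print(mean_kl("Explain how attention works in a transformer."))
```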
Analysis against the original model (a sketch for reproducing these per-tensor statistics follows the list):
- Total Tensors: 291
- Tensors with Diffs: 47 (16.2%)
- Average % Diff: 6.99%
- Median % Diff: 0.00%
- Min/Max % Diff: 0.00% / 46.77%
- Std Dev % Diff: 15.97%
- Skewness % Diff: 1.86
- Avg L2 Norm: 144619.57
- Tensors with >5% diff: 47
- Top differences:
- blk.14.attn_output.weight ((4096, 8192), L2: 669167.94): 46.77%
- blk.13.attn_output.weight ((4096, 8192), L2: 667456.52): 46.51%
- blk.16.attn_output.weight ((4096, 8192), L2: 667644.60): 46.46%
- blk.12.attn_output.weight ((4096, 8192), L2: 664339.15): 46.03%
- blk.15.attn_output.weight ((4096, 8192), L2: 664117.46): 45.94%
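The sketch below shows roughly how per-tensor statistics like those above could be reproduced by loading both (unquantized) GGUF files with the gguf Python package and comparing tensors by name. The file paths are placeholders, and the definition of "% diff" as ‖B − A‖₂ / ‖A‖₂ is an assumption about how the numbers above were derived.

```python
import numpy as np
from gguf import GGUFReader   # gguf-py, distributed with llama.cpp

def tensor_map(path: str) -> dict:
    """Map tensor name -> numpy array for an f16/f32 (unquantized) GGUF file."""
    return {t.name: np.asarray(t.data, dtype=np.float32) for t in GGUFReader(path).tensors}

orig = tensor_map("Mistral-7B-Instruct-v0.3-f16.gguf")               # placeholder paths
abl = tensor_map("Mistral-7B-Instruct-v0.3-abliterated-f16.gguf")

stats = []
for name, a in orig.items():
    b = abl[name]
    l2 = float(np.linalg.norm(b))
    # Relative change as a percentage of the original tensor's L2 norm.
    pct = 100.0 * float(np.linalg.norm(b - a)) / (float(np.linalg.norm(a)) + 1e-12)
    stats.append((pct, l2, name))

# Print the tensors with the largest relative differences.
for pct, l2, name in sorted(stats, reverse=True)[:5]:
    print(f"{name} (L2: {l2:.2f}): {pct:.2f}%")
```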


