This is QuasiStarSynth-12B deslopped with P-E-W's Heretic (v1.1.0) abliteration engine, with Magnitude-Preserving Orthogonal Ablation enabled and configured via P-E-W's Noslop configuration.
Note: Removing the "slop direction" alone from a creative writing/RP model may not immediately improve its prose quality. Much like refusal removal, which mainly increases willingness to answer and may thereby unlock access to certain information, noslopification may instead show up as gains in metrics such as originality, reduced cliché, and lower redundancy. This model (or the hereticated version) should be further trained on a dataset of high-quality prose or used as a base in merges. However, this is a hypothesis that still needs to be tested.
Note 2: The model was generated with Transformers v5.1.0.
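Conceptually, orthogonal ablation removes a learned direction (here a "slop" direction) from the output space of selected projection matrices such as attn.o_proj and mlp.down_proj, and the magnitude-preserving variant rescales the result so weight magnitudes are unchanged. The sketch below only illustrates that idea; it is not Heretic's implementation, and the choice to preserve per-column norms is an assumption.

```python
import torch

def ablate_output_direction(weight: torch.Tensor, direction: torch.Tensor) -> torch.Tensor:
    """Illustrative sketch of magnitude-preserving orthogonal ablation.

    `weight` is a [d_out, d_in] projection matrix (e.g. attn.o_proj or
    mlp.down_proj) and `direction` is a vector in its output space whose
    contribution should be removed. Not Heretic's actual code; the
    norm-preservation scheme used here is an assumption."""
    d = direction / direction.norm()                       # unit "slop" direction
    col_norms = weight.norm(dim=0, keepdim=True)           # original per-column magnitudes
    # Orthogonal ablation: the layer's output can no longer point along d
    ablated = weight - torch.outer(d, d @ weight)
    # Magnitude preservation: rescale columns back to their original norms
    ablated = ablated * (col_norms / ablated.norm(dim=0, keepdim=True).clamp_min(1e-8))
    return ablated

# Toy example with random tensors standing in for real model weights
W = torch.randn(4096, 14336)
slop_direction = torch.randn(4096)
W_noslop = ablate_output_direction(W, slop_direction)
```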
Noslopification Results
| Score Metric | Value | Parameter | Value |
|---|---|---|---|
| Slop | 39/100 | direction_index | 24.95 |
| KL Divergence | 0.0731 | attn.o_proj.max_weight | 3.35 |
| Initial Slop | 89/100 | attn.o_proj.max_weight_position | 33.40 |
| | | attn.o_proj.min_weight | 0.37 |
| | | attn.o_proj.min_weight_distance | 2.91 |
| | | mlp.down_proj.max_weight | 3.67 |
| | | mlp.down_proj.max_weight_position | 25.24 |
| | | mlp.down_proj.min_weight | 3.39 |
| | | mlp.down_proj.min_weight_distance | 9.70 |
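For context, the KL divergence row measures how far the noslop model's output distribution drifts from the original model (lower means general behaviour is better preserved). A rough way to spot-check this is to compare the two models' next-token distributions on a prompt, as sketched below; the single-prompt, next-token-only comparison and the loading choices are illustrative assumptions, not Heretic's evaluation harness.

```python
import torch
import torch.nn.functional as F
from transformers import AutoModelForCausalLM, AutoTokenizer

# Compare next-token distributions of the base and noslop models on one prompt.
# Needs enough memory to hold both 12B models at once.
base_id = "Marcjoni/QuasiStarSynth-12B"
noslop_id = "MuXodious/QuasiStarSynth-12B-noslop"

tok = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.bfloat16, device_map="auto")
noslop = AutoModelForCausalLM.from_pretrained(noslop_id, torch_dtype=torch.bfloat16, device_map="auto")

prompt = "Write the opening line of a short story about a lighthouse keeper."
ids = tok(prompt, return_tensors="pt").to(base.device)

with torch.no_grad():
    p_logits = base(**ids).logits[:, -1, :].float()     # original model
    q_logits = noslop(**ids).logits[:, -1, :].float()   # noslop model

# KL(P || Q) between the two next-token distributions
kl = F.kl_div(
    F.log_softmax(q_logits, dim=-1),
    F.log_softmax(p_logits, dim=-1),
    log_target=True,
    reduction="batchmean",
)
print(f"Next-token KL divergence: {kl.item():.4f}")
```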
QuasiStarSynth-12B
From a time before galaxies settled and stars knew their limits, something titanic burned.
Its light was golden, but inside darkness bloomed.
A black heart beating beneath layers of radiant fire, devouring slowly, unseen.
Neither star nor singularity, this was a monument to scale, a paradox wrapped in brilliance.
🔧 Recommended Sampling Settings:
- Temperature: 0.75 to 1.25
- Min P: 0.035
- Context Length: Stable at 12k tokens, with possible support for extended contexts
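If you are driving the model through transformers directly rather than a chat frontend, the settings above map onto generation kwargs roughly as follows. This is a hedged sketch: the temperature is just a midpoint of the recommended range, and min_p requires a transformers release recent enough to support min-p sampling.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "MuXodious/QuasiStarSynth-12B-noslop"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

messages = [{"role": "user", "content": "Describe a quasi-star in two sentences."}]
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)

output_ids = model.generate(
    input_ids,
    do_sample=True,
    temperature=1.0,    # recommended range: 0.75 to 1.25
    min_p=0.035,        # recommended Min P
    max_new_tokens=256,
)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```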
💬 Prompt Format
Supports ChatML-style messages. Example:
```
<|im_start|>user
Your question here.
<|im_end|>
<|im_start|>assistant
```
QuasiStarSynth-12B is a merge of the following models using LazyMergekit:
- DreadPoor/Irix-12B-Model_Stock
- ohyeah1/Violet-Lyra-Gutenberg-v2
- redrix/patricide-12B-Unslop-Mell-v2
- yamatazen/EtherealAurora-12B-v3
🧩 Configuration
```yaml
merge_method: ties
base_model: yamatazen/EtherealAurora-12B-v2
models:
  - model: DreadPoor/Irix-12B-Model_Stock
    parameters:
      weight: 0.25
      density: 1.0
  - model: ohyeah1/Violet-Lyra-Gutenberg-v2
    parameters:
      weight: 0.25
      density: 1.0
  - model: redrix/patricide-12B-Unslop-Mell-v2
    parameters:
      weight: 0.25
      density: 1.0
  - model: yamatazen/EtherealAurora-12B-v3
    parameters:
      weight: 0.25
      density: 1.0
parameters:
  normalize: false
  int8_mask: false
dtype: bfloat16
layer_parameters:
  - filter: "attn"
    sources:
      - model: Irix
        weight: 0.5
      - model: Patricide
        weight: 0.3
      - model: Aurora-v3
        weight: 0.2
  - filter: "mlp"
    sources:
      - model: Violet
        weight: 0.5
      - model: Aurora-v3
        weight: 0.3
      - model: Irix
        weight: 0.2
  - filter: "embed_tokens"
    sources:
      - model: Aurora-v2
        weight: 1.0
```
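The merge itself was produced with LazyMergekit, a Colab wrapper around mergekit. To reproduce it locally, you can feed the configuration above to mergekit's Python entry point roughly as below; the function names and options reflect recent mergekit versions and should be treated as assumptions to verify against the mergekit README, and the config is assumed to be saved as config.yaml.

```python
import yaml
import torch
from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

# Parse the YAML configuration shown above (assumed saved as config.yaml)
with open("config.yaml", "r", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

# Run the TIES merge and write the merged model to ./QuasiStarSynth-12B
run_merge(
    merge_config,
    out_path="./QuasiStarSynth-12B",
    options=MergeOptions(
        cuda=torch.cuda.is_available(),
        copy_tokenizer=True,
        lazy_unpickle=False,
        low_cpu_memory=False,
    ),
)
```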
💻 Usage
```python
!pip install -qU transformers accelerate

from transformers import AutoTokenizer
import transformers
import torch

# This repository's noslop model
model = "MuXodious/QuasiStarSynth-12B-noslop"
messages = [{"role": "user", "content": "What is a large language model?"}]

# Build a ChatML prompt from the model's chat template
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

# Text-generation pipeline, sharded automatically across available devices
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=1, top_k=0, top_p=1)
print(outputs[0]["generated_text"])
```
Model tree for MuXodious/QuasiStarSynth-12B-noslop
- Base model: Marcjoni/QuasiStarSynth-12B