GLM-5 Abliterated (BF16)

This is an abliterated (uncensored) version of zai-org/GLM-5, a 744B-parameter mixture-of-experts model with 40B active parameters per token.

What is abliteration?

Abliteration removes the "refusal direction" from the model weights using weight orthogonalization. This allows the model to respond to a wider range of prompts without safety refusals, while preserving general capability.
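As a minimal sketch of the idea (dimensions and values here are illustrative, not taken from GLM-5): orthogonalizing a weight matrix W against a unit refusal direction r, via W' = W − r(rᵀW), guarantees the modified weights can no longer write along r.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8  # hypothetical hidden size for illustration

W = rng.standard_normal((d, d))   # a weight matrix (e.g. an output projection)
r = rng.standard_normal(d)
r /= np.linalg.norm(r)            # unit "refusal direction"

# Weight orthogonalization: W' = W - r (r^T W).
# Afterwards r^T W' = r^T W - (r^T r)(r^T W) = 0, since r is unit-norm.
W_abl = W - np.outer(r, r @ W)

# The orthogonalized matrix has no component along r.
print(np.allclose(r @ W_abl, 0.0))  # True
```

All other directions in W are left untouched, which is why general capability is largely preserved.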

Method

  1. Computed refusal directions for all 78 layers using contrastive activation pairs (harmful vs harmless prompts)
  2. Applied weight orthogonalization to layers 15-54:
    • self_attn.o_proj.weight (attention output projection)
    • mlp.shared_experts.down_proj.weight (shared expert down projection)
  3. Alpha = 1.0; 80 weight matrices modified in total (2 matrices × 40 layers)
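The steps above can be sketched as follows. This is a simplified assumption-laden illustration, not the actual script: it uses one shared refusal direction and random stand-in activations/weights, whereas the real run computed a direction per layer from captured residual-stream activations.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 16  # hypothetical hidden size

def refusal_direction(h_harmful, h_harmless):
    """Refusal direction: normalized difference of mean activations
    over harmful vs harmless prompts (contrastive pairs)."""
    r = h_harmful.mean(axis=0) - h_harmless.mean(axis=0)
    return r / np.linalg.norm(r)

def orthogonalize(W, r, alpha=1.0):
    """W' = W - alpha * r (r^T W): remove the refusal component from W."""
    return W - alpha * np.outer(r, r @ W)

# Stand-in activations; real ones would be captured from forward passes.
h_bad = rng.standard_normal((32, d)) + 0.5
h_good = rng.standard_normal((32, d))
r = refusal_direction(h_bad, h_good)

# Two matrices per layer (o_proj, shared_experts.down_proj), layers 15-54.
weights = {
    (layer, name): rng.standard_normal((d, d))
    for layer in range(15, 55)
    for name in ("self_attn.o_proj", "mlp.shared_experts.down_proj")
}
for key in weights:
    weights[key] = orthogonalize(weights[key], r, alpha=1.0)

print(len(weights))  # 80, matching "80 weight matrices modified"
```

With alpha = 1.0 the refusal component is removed entirely; a smaller alpha would only attenuate it.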

Details

  • Base model: zai-org/GLM-5 (744B MoE, BF16)
  • Modified layers: 15-54 (40 of 78 total layers)
  • Weights modified: 80 (o_proj + shared_experts.down_proj per layer)
  • Precision: BF16 (full precision, no quantization artifacts)

Disclaimer

This model is provided for research purposes. Users are responsible for ensuring appropriate use.
