# DeepSeek-R1-Distill-Llama-70B-heretic

Abliterated (uncensored) version of deepseek-ai/DeepSeek-R1-Distill-Llama-70B, created using Heretic and converted to GGUF.

## Abliteration Quality

| Metric | Value |
|---|---|
| Refusals | 0/100 |
| KL Divergence | 0.0361 |
| Rounds | 1 |

Refusals counts how many of 100 test prompts the model refused (lower is better). KL divergence measures how far the abliterated model's output distribution has drifted from the original model (lower means behavior closer to the original).
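
For context, a drift metric like this typically compares the original and abliterated models' next-token distributions on harmless prompts. A minimal PyTorch sketch of such a comparison (the logits below are random stand-ins for real model outputs):

```python
import torch
import torch.nn.functional as F

# Stand-in next-token logits from the original and abliterated models
logits_original = torch.randn(1, 32000)
logits_abliterated = torch.randn(1, 32000)

log_p = F.log_softmax(logits_original, dim=-1)
log_q = F.log_softmax(logits_abliterated, dim=-1)

# D_KL(P || Q): how much the abliterated model's distribution diverges from the original
kl = F.kl_div(log_q, log_p, log_target=True, reduction="batchmean")
print(kl.item())
```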

## Available Quantizations

This repository provides GGUF quantizations at 3-, 4-, 6-, and 8-bit (for example Q3_K_M and Q4_K_M); see the repository file listing for the complete set.

## Usage with Ollama

**Important:** This model is based on the Llama 3.1 architecture but uses DeepSeek's fullwidth Unicode special tokens in the GGUF metadata. Ollama's default tokenizer does not handle these tokens correctly, which causes garbled output, so you must use the included Modelfile.

```bash
# Download the Modelfile and create the model
wget https://huggingface.co/ThalisAI/DeepSeek-R1-Distill-Llama-70B-heretic/resolve/main/Modelfile
ollama create deepseek-r1-70b-heretic -f Modelfile
ollama run deepseek-r1-70b-heretic
```
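
For reference, the Modelfile pairs a `FROM` line with a prompt template built from DeepSeek's fullwidth-bar chat markers. The sketch below is illustrative only (the template and stop tokens are assumptions based on the standard DeepSeek-R1 distill format); the Modelfile shipped in the repository is authoritative:

```
FROM hf.co/ThalisAI/DeepSeek-R1-Distill-Llama-70B-heretic:Q4_K_M

# Illustrative template using DeepSeek's fullwidth-bar special tokens
TEMPLATE """{{ if .System }}{{ .System }}{{ end }}<｜User｜>{{ .Prompt }}<｜Assistant｜>"""

PARAMETER stop "<｜begin▁of▁sentence｜>"
PARAMETER stop "<｜end▁of▁sentence｜>"
PARAMETER stop "<｜User｜>"
```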

To use a different quantization, edit the `FROM` line in the Modelfile:

```
FROM hf.co/ThalisAI/DeepSeek-R1-Distill-Llama-70B-heretic:Q4_K_M
FROM hf.co/ThalisAI/DeepSeek-R1-Distill-Llama-70B-heretic:Q3_K_M
```

## bf16 Weights

The full bf16 abliterated weights are available in the `bf16/` subdirectory of this repository.
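
If you only want those weights (and not the large GGUF files), you can restrict the download. A minimal sketch using `huggingface_hub` (the `local_dir` path is just an example):

```python
from huggingface_hub import snapshot_download

# Download only the bf16/ subfolder, skipping the GGUF quantizations.
snapshot_download(
    repo_id="ThalisAI/DeepSeek-R1-Distill-Llama-70B-heretic",
    allow_patterns="bf16/*",                            # only the bf16 weights
    local_dir="DeepSeek-R1-Distill-Llama-70B-heretic",  # example destination
)
```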

## Usage with Transformers

The bf16 weights in the `bf16/` subdirectory can be loaded directly with Transformers:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ThalisAI/DeepSeek-R1-Distill-Llama-70B-heretic"

# Load the tokenizer and model from the bf16/ subfolder of the repository
tokenizer = AutoTokenizer.from_pretrained(model_id, subfolder="bf16")
model = AutoModelForCausalLM.from_pretrained(
    model_id, subfolder="bf16", torch_dtype="auto", device_map="auto"
)

# Build the chat prompt and generate
messages = [{"role": "user", "content": "Hello!"}]
text = tokenizer.apply_chat_template(messages, add_generation_prompt=True, tokenize=False)
inputs = tokenizer(text, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=512)

# Decode only the newly generated tokens
print(tokenizer.decode(outputs[0][inputs.input_ids.shape[1]:], skip_special_tokens=True))
```
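
Like other DeepSeek-R1 distills, the model emits its chain of thought between `<think>` and `</think>` tags before the final answer. If you only want the answer, a small post-processing step works (sketch; the example string below is made up):

```python
import re

# Example R1-style output: reasoning inside <think>...</think>, then the answer
generated_text = "<think>The user greeted me, so I should greet back.</think>Hello! How can I help you today?"

# Drop the reasoning block and keep only the final answer
final_answer = re.sub(r"<think>.*?</think>", "", generated_text, flags=re.DOTALL).strip()
print(final_answer)  # -> Hello! How can I help you today?
```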

## About

This model was processed by the Apostate automated abliteration pipeline:

  1. The source model was loaded in bf16
  2. Heretic's optimization-based abliteration was applied to remove refusal behavior
  3. The merged model was converted to GGUF format using llama.cpp
  4. Multiple quantization levels were generated (example commands below)
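
For illustration, steps 3 and 4 correspond roughly to the following llama.cpp invocations (file names and the quantization type here are examples, not necessarily those used by the pipeline):

```bash
# Step 3: convert the merged bf16 checkpoint to GGUF
python convert_hf_to_gguf.py ./DeepSeek-R1-Distill-Llama-70B-heretic/bf16 \
    --outfile deepseek-r1-70b-heretic-bf16.gguf

# Step 4: produce a quantized variant (repeat for each quantization level)
./llama-quantize deepseek-r1-70b-heretic-bf16.gguf deepseek-r1-70b-heretic-Q4_K_M.gguf Q4_K_M
```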

The abliteration process uses directional ablation to remove the model's refusal directions while minimizing KL divergence from the original model's behavior on harmless prompts.
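
Conceptually, the operation looks like the following sketch (schematic only, not Heretic's actual implementation): a unit-norm "refusal direction" is estimated from contrasting prompt sets, and its component is projected out of the residual-stream activations (or, equivalently, out of the weight matrices that write into them).

```python
import torch

def ablate_direction(h: torch.Tensor, r: torch.Tensor) -> torch.Tensor:
    """Remove the component of activations h along direction r: h' = h - (h . r) r."""
    r = r / r.norm()
    return h - (h @ r).unsqueeze(-1) * r

hidden = torch.randn(4, 8192)    # example batch of residual-stream activations
refusal_dir = torch.randn(8192)  # stand-in; in practice estimated from harmful vs. harmless prompts
print(ablate_direction(hidden, refusal_dir).shape)  # torch.Size([4, 8192])
```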
