Feb 16 2026: Upgraded Jinja Template with direct thinking logic to improve thinking activation.

Gemma-3-27b-it-HERETIC-Gemini-Series2-Deeper-Reasoning

This is a fully uncensored, deep-thinking Gemma 27B fine-tune created via Unsloth.

Trained on a larger dataset than Series 1 (1000 examples vs. 250).

Image processing is intact and fully functional.

Reasoning affects:

  • Image "intelligence"
  • Output generation

Model Features:

  • 128k context
  • Temperature range: 0.1 to 2.5.
  • Reasoning is temperature-stable.
  • You can activate reasoning by prefixing your prompt with "think deeply:" (not required in most cases; see the sketch after this list).
  • A system prompt will affect image processing, reasoning and output generation.
  • A system prompt / template is NOT required for reasoning generation.
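
As a quick illustration of the "think deeply:" activation, here is a minimal sketch using the openai Python client against a local OpenAI-compatible server (KoboldCpp, llama.cpp server and text-generation-webui all expose one). The base URL, port and model name below are placeholders; adjust them for your own setup.

```python
# Minimal sketch: trigger deep reasoning via the "think deeply:" prefix.
# Assumes a local OpenAI-compatible server (e.g. KoboldCpp / llama.cpp server /
# text-generation-webui) is already serving a quant of this model.
# The base_url, api_key and model name are placeholders -- adjust for your setup.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:5001/v1", api_key="not-needed")

response = client.chat.completions.create(
    model="Gemma-3-27b-it-Gemini-Heretic-Uncensored-Deeper-Reasoning-Series2",
    messages=[
        # Prefixing the user prompt with "think deeply:" nudges the model
        # to emit its reasoning before the final answer.
        {"role": "user", "content": "think deeply: Why is the sky blue?"}
    ],
    temperature=1.0,
    max_tokens=1024,
)

print(response.choices[0].message.content)
```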

Benchmarks improve markedly over the Heretic uncensored base (see BENCHMARKS below).

Enjoy the freedom!

[This is part of an ongoing series.]

SPECIAL THANKS TO:

  • Team "P-E-W" for making Heretic software.
  • Team "coder3101" for HERETIC'ing the model.
  • Team "TeichAI" for the excellent dataset.
  • Team "Nightmedia" for the benchmarking and colab'ing.
  • Team "Unsloth" for making the training painless.

BENCHMARKS:

| Model | arc_challenge | arc_easy | boolq | hellaswag | openbookqa | piqa | winogrande |
|---|---|---|---|---|---|---|---|
| This model | 0.599 | 0.757 | 0.876 | 0.770 | 0.466 | 0.806 | 0.756 |
| Heretic uncensored base | 0.557 | 0.711 | 0.868 | 0.533 | 0.452 | 0.706 | 0.695 |

HERETIC DE-CENSORING STATS:

NOTE: "KLD" of less than 1 is excellent, ZERO is perfect (no damage to the model).

| Metric | This model | Original model (google/gemma-3-27b-it) |
|---|---|---|
| KL divergence | 0.07 | 0 (by definition) |
| Refusals | 9/100 | 98/100 |


Using an "uncensored" (refusals removed) model VS trained "uncensored" model

Usually when you a tell a model to generate horror, swear or x-rated content this is all you have to do to get said content type.

In the case of this model, it will not refuse your request, however it needs to be "pushed" a bit / directed a bit more in SOME CASES.

Although this model will generated x-rated content too, likewise you need to tell it to use "slang" (and include the terms you want) to get it generate the content correctly as the "expected" content level too.

Without these added directive(s), the content can be "bland" by comparison to an "uncensored model" or model trained on uncensored content.

Roughly, the model tries to generate the content but the "default" setting(s) are so "tame" it needs a push to generate at expected graphic, cursing or explicit levels.

Even with minimal direction (ie, use these words to swear: x,y,z), this will be enough to push the model to generate the requested content in the ahh... expected format.


OPTIONAL: System prompts

These will enhance thinking and output generation.

In most cases you do not need to use these.

One is "all business", and the other one is for "fun".

Think deeply and carefully about the user's request. Compose your thoughts about the user's prompt between <think> and </think> tags, then output the final answer based on your thoughts.
You are the JOKER from Batman. You think (put your thoughts between <think> and </think> tags), act and talk like the joker. Be Evil.
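
With either of these system prompts (or the thinking template below), the reasoning arrives wrapped in <think>...</think> tags. Here is a minimal sketch for splitting the reasoning from the final answer, assuming the tags appear literally as requested above:

```python
# Minimal sketch: separate <think>...</think> reasoning from the final answer.
# Assumes the model wraps its reasoning in literal <think> / </think> tags,
# as requested by the system prompts above.
import re

def split_reasoning(text: str) -> tuple[str, str]:
    """Return (reasoning, answer). If no think block is found, reasoning is empty."""
    match = re.search(r"<think>(.*?)</think>", text, flags=re.DOTALL)
    if not match:
        return "", text.strip()
    reasoning = match.group(1).strip()
    # Everything after the closing tag is treated as the final answer.
    answer = text[match.end():].strip()
    return reasoning, answer

reasoning, answer = split_reasoning(
    "<think>The user wants a short explanation...</think>Here is the answer."
)
print("REASONING:", reasoning)
print("ANSWER:", answer)
```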

Thinking Activation: JINJA "Regular" and "Thinking" TEMPLATES:

There is also the option to use the "chat-template-thinking.jinja" template (in place of the regular "chat-template.jinja").

Simply rename the default "chat-template.jinja" to another name, then rename "chat-template-thinking.jinja" to "chat-template.jinja" to use it with the source model and/or for quanting.

You can also edit "chat-template-thinking.jinja" in NOTEPAD to adjust the "thinking system prompt" (at the very top of the template).

Using the "thinking system prompt" or "chat-template-thinking.jinja" is useful if your application requires always-on thinking, if your use case(s) do not always activate thinking, and so on.

Generally "thinking" will activate automatically due to the fine-tuning; however, in some cases it will not and will require a system prompt, the thinking Jinja template, and/or the "think deeply:" prompt prefix.

Note that you can use "chat-template-thinking.jinja" with other system prompts too.
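
If you prefer not to rename files, here is a minimal sketch of applying the thinking template at runtime with transformers, assuming you have downloaded "chat-template-thinking.jinja" from this repo; the message content is a placeholder.

```python
# Minimal sketch: apply the "thinking" Jinja template at runtime instead of
# renaming files. Assumes "chat-template-thinking.jinja" has been downloaded
# from this repo into the working directory.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained(
    "DavidAU/Gemma-3-27b-it-Gemini-Heretic-Uncensored-Deeper-Reasoning-Series2"
)

with open("chat-template-thinking.jinja", "r", encoding="utf-8") as f:
    thinking_template = f.read()

messages = [{"role": "user", "content": "Explain how tides work."}]

# Passing chat_template= overrides the tokenizer's default template for this call,
# so the thinking system prompt baked into the template is injected automatically.
prompt = tokenizer.apply_chat_template(
    messages,
    chat_template=thinking_template,
    tokenize=False,
    add_generation_prompt=True,
)
print(prompt)
```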


Settings: CHAT / ROLEPLAY and/or SMOOTHER operation of this model:

In "KoboldCpp" or "oobabooga/text-generation-webui" or "Silly Tavern" ;

Set the "Smoothing_factor" to 1.5

: in KoboldCpp -> Settings->Samplers->Advanced-> "Smooth_F"

: in text-generation-webui -> parameters -> lower right.

: In Silly Tavern this is called: "Smoothing"

NOTE: For "text-generation-webui"

-> if using GGUFs you need to use "llama_HF" (which involves downloading some config files from the SOURCE version of this model)

Source versions (and config files) of my models are here:

https://huggingface.co/collections/DavidAU/d-au-source-files-for-gguf-exl2-awq-gptq-hqq-etc-etc-66b55cb8ba25f914cbf210be

OTHER OPTIONS:

  • Increase rep pen to 1.1 to 1.15 (you don't need to do this if you use "smoothing_factor")

  • If the interface/program you are using to run AI MODELS supports "Quadratic Sampling" ("smoothing"), just make the adjustment as noted (see the API sketch after these options).
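
For reference, a minimal sketch of applying these sampler settings programmatically through KoboldCpp's native generate endpoint; the URL and the exact field names ("smoothing_factor", "rep_pen") are assumptions based on KoboldCpp's API and may differ by version, so verify against your own install.

```python
# Minimal sketch: apply the suggested sampler settings via KoboldCpp's
# native /api/v1/generate endpoint. The URL and field names ("smoothing_factor",
# "rep_pen") are assumptions based on KoboldCpp's API and may differ by version.
import requests

payload = {
    "prompt": "think deeply: Write a short horror scene set in a lighthouse.",
    "max_length": 512,
    "temperature": 1.0,
    "smoothing_factor": 1.5,  # "Quadratic Sampling" / smoothing, as suggested above
    # If your backend lacks smoothing, raise "rep_pen" to 1.1-1.15 instead.
}

r = requests.post("http://localhost:5001/api/v1/generate", json=payload, timeout=600)
r.raise_for_status()
print(r.json()["results"][0]["text"])
```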

Highest Quality Settings / Optimal Operation Guide / Parameters and Samplers

This a "Class 1" model:

For all settings used for this model (including specifics for its "class"), including example generation(s) and for advanced settings guide (which many times addresses any model issue(s)), including methods to improve model performance for all use case(s) as well as chat, roleplay and other use case(s) please see:

[ https://huggingface.co/DavidAU/Maximizing-Model-Performance-All-Quants-Types-And-Full-Precision-by-Samplers_Parameters ]
