Gemma 3 – 12B IT Uncensored

This repository hosts Gemma 3 – 12B IT Uncensored, an instruction-tuned, 12-billion-parameter model based on Google’s Gemma 3 architecture, along with its Vision-Language Model (VLM) variant. The model is intended for advanced local and research use, offering strong instruction following, reasoning, coding, and (for the VLM) multimodal image + text understanding, with minimal additional alignment constraints.


Model Overview

  • Model Name: Gemma 3 – 12B IT Uncensored
  • VLM Variant: Gemma 3 – 12B IT VLM Uncensored
  • Base Architecture: Gemma 3, 12 billion parameters (12B)
  • Base Model Developer: Google
  • Curator / Release: BrainDAO
  • License: Gemma License (inherits from the base model)
  • Intended Use: Instruction following, reasoning, coding, conversation, and multimodal understanding

What Is This Model?

This is an uncensored derivative of the Gemma 3 12B Instruction-Tuned (IT) model.
No additional safety layers, refusals, or alignment constraints have been intentionally added beyond those present in the base model.

The goal is to provide:

  • Greater freedom in system prompt design
  • Fewer artificial refusals
  • Strong general reasoning and instruction adherence
  • Full user control in local or private deployments

Key Features & Capabilities

Text Model (LLM)

  • High-quality instruction following
  • Strong logical and analytical reasoning
  • Coding assistance across multiple programming languages
  • Conversational and assistant-style interactions
  • Suitable for agentic and tool-augmented workflows

Vision-Language Model (VLM)

  • Image understanding and description
  • Visual question answering (VQA)
  • Image + text instruction following
  • Multimodal chat and assistant use cases

Chat Template & System Prompt

The model follows the Gemma instruction format.

Example:


<bos><start_of_turn>system
You are a helpful AI assistant.
<end_of_turn>
<start_of_turn>user
{your prompt here}
<end_of_turn>
<start_of_turn>model
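
When running locally, a chat-template-aware runtime applies this format automatically. The snippet below is a minimal sketch using llama-cpp-python, a common runtime for GGUF files; the model filename is a placeholder for whichever quantization you downloaded:

from llama_cpp import Llama

# Load a local GGUF quantization (the filename is a placeholder).
llm = Llama(
    model_path="gemma-3-12b-it-uncensored-Q4_K_M.gguf",
    n_ctx=8192,        # context window to allocate
    n_gpu_layers=-1,   # offload all layers to GPU if available
)

# create_chat_completion applies the chat template embedded in the GGUF,
# so the <start_of_turn> markers are inserted for you.
response = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a helpful AI assistant."},
        {"role": "user", "content": "Summarize what a GGUF file is in two sentences."},
    ],
    max_tokens=256,
)
print(response["choices"][0]["message"]["content"])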

For the VLM variant, images must be provided using the multimodal input format supported by your inference framework.
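
The exact multimodal wiring differs between frameworks. As one hedged sketch, llama-cpp-python accepts OpenAI-style image messages through a vision chat handler; the LLaVA-style handler class and both filenames below are assumptions, so check your runtime’s documentation for Gemma 3 vision support:

from llama_cpp import Llama
from llama_cpp.llama_chat_format import Llava15ChatHandler

# Both file paths are placeholders: the vision projector (mmproj) and the
# VLM weights must come from files you actually have.
handler = Llava15ChatHandler(clip_model_path="gemma-3-12b-it-mmproj.gguf")
llm = Llama(
    model_path="gemma-3-12b-it-uncensored-vlm-Q4_K_M.gguf",
    chat_handler=handler,
    n_ctx=4096,
)

response = llm.create_chat_completion(
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "image_url", "image_url": {"url": "file:///path/to/image.png"}},
                {"type": "text", "text": "Describe this image."},
            ],
        }
    ],
)
print(response["choices"][0]["message"]["content"])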


Intended Use Cases

  • General-purpose assistant — reasoning, writing, and conversation
  • Coding assistant — generation, debugging, and refactoring
  • Research & analysis — structured reasoning and synthesis
  • Agentic workflows — tool use, planners, function calling (see the sketch after this list)
  • Multimodal applications (VLM) — image QA, captioning, visual reasoning
  • Local & private deployment — full control over data and prompts
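
Gemma-family models expose no dedicated function-calling tokens, so tool use is commonly implemented at the prompt level: describe the tool in the system prompt, ask the model to reply in JSON, and parse the result. A minimal sketch, assuming llama-cpp-python, a hypothetical get_weather tool, and a placeholder model filename:

import json

from llama_cpp import Llama

# Describe the available tool and the expected JSON reply shape in the system prompt.
SYSTEM = (
    "You can call one tool: get_weather(city: str). "
    'When you need it, reply ONLY with JSON like {"tool": "get_weather", "city": "..."}.'
)

llm = Llama(model_path="gemma-3-12b-it-uncensored-Q4_K_M.gguf")  # placeholder filename

reply = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": SYSTEM},
        {"role": "user", "content": "What's the weather in Lisbon?"},
    ],
    max_tokens=128,
)["choices"][0]["message"]["content"]

try:
    call = json.loads(reply)       # the model chose to call the tool
    print("tool call:", call)
except json.JSONDecodeError:
    print("plain answer:", reply)  # the model answered directly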

License & Usage Notes

This model inherits the Gemma License from its base model (google/gemma-3-12b-it).

  • The Gemma License is a custom license provided by Google
  • You must review and comply with the Gemma terms of use before downloading, using, or redistributing this model
  • This repository does not relicense the model under Apache-2.0, MIT, or any other standard open-source license

Users are solely responsible for ensuring their use complies with the Gemma License and all applicable laws and regulations.


Acknowledgements

  • Google for the Gemma 3 architecture and base model
  • BrainDAO for curation and release
  • The open-source community supporting local inference, quantization, and deployment tools

Community & Support

  • Use the Hugging Face Discussions tab for questions and updates
  • Community feedback and contributions are welcome
Quantization Formats (GGUF)

This repository distributes the model as GGUF files (architecture: gemma3, 12B parameters) in 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, 8-bit, and 16-bit quantizations.
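
To fetch a specific quantization, huggingface_hub’s hf_hub_download works well; the filename below is a placeholder, so substitute an actual .gguf file from the repository’s file list:

from huggingface_hub import hf_hub_download

# Download one quantization from this repo. The filename is hypothetical;
# substitute a real .gguf file listed in the repository.
path = hf_hub_download(
    repo_id="Andycurrent/gemma-3-12b-it-uncensored-GGUF",
    filename="gemma-3-12b-it-uncensored-Q4_K_M.gguf",  # placeholder
)
print(path)  # local cache path to pass to your GGUF runtime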
