Gemma-3-4B-VL-it-Gemini-Pro-Heretic-Uncensored-Thinking-GGUF

This repository contains Gemma-3-4B-VL-it-Gemini-Pro-Heretic-Uncensored-Thinking-GGUF, a 4B-parameter vision-language instruction-tuned model provided in GGUF format for efficient local inference. The model is designed for open-ended reasoning, multimodal understanding, and minimal alignment constraints, making it suitable for experimentation, research, and advanced local deployments.


Model Summary

  • Model ID: Gemma-3-4B-VL-it-Gemini-Pro-Heretic-Uncensored-Thinking-GGUF
  • Architecture: Gemma 3 (4B parameters)
  • Type: Vision-Language (Text + Image)
  • Format: GGUF
  • Publisher: mradermacher
  • License: Apache 2.0 (inherits from base model)

Key Characteristics

  • Multimodal input support (text + images)
  • Instruction-tuned for conversational and reasoning tasks
  • Reduced content filtering and alignment constraints
  • Optimized for local inference runtimes
  • Suitable for research, exploration, and advanced user workflows

⚠️ This model is uncensored. Outputs may include sensitive or unfiltered content. Use responsibly.


Supported Use Cases

Text-Based

  • Conversational assistants
  • Creative writing and storytelling
  • Summarization and rewriting
  • General reasoning and analysis

Vision + Text

  • Image captioning
  • Visual question answering
  • Scene and object understanding
  • Multimodal reasoning tasks

GGUF Compatibility

This model can be used with GGUF-compatible runtimes such as:

  • llama.cpp
  • Ollama (GGUF-based builds)
  • Other local inference engines supporting GGUF

Performance and supported features may vary depending on runtime and hardware.
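As one concrete path, a locally downloaded GGUF file can be registered with Ollama through a minimal Modelfile (the file path below is illustrative; adjust it to where your download lives):

```
# Modelfile — registers a local GGUF build with Ollama (path is illustrative)
FROM ./Gemma-3-4B-VL-it-Gemini-Pro-Heretic-Uncensored-Thinking_GGUF_F16.gguf
```

The model can then be built and started with `ollama create gemma3-local -f Modelfile` followed by `ollama run gemma3-local` (the model name `gemma3-local` is arbitrary).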


Basic Usage Example

Command Line (llama.cpp)

# Recent llama.cpp builds renamed the legacy `main` binary to `llama-cli`
./llama-cli \
  -m Andycurrent/Gemma-3-4B-VL-it-Gemini-Pro-Heretic-Uncensored-Thinking_GGUF_F16.gguf \
  -p "Describe the key idea behind multimodal AI models."

Note that this invocation covers text-only prompting. Image input with GGUF vision models generally requires a separate multimodal projector (mmproj) file alongside the main weights; consult your runtime's documentation for how to supply it.
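Most runtimes apply the Gemma chat template automatically, but when building prompts by hand it helps to know the turn format Gemma-family instruction-tuned models expect. The sketch below illustrates it (the function name is ours, not part of any library):

```python
def format_gemma_prompt(user_message: str) -> str:
    """Wrap a user message in Gemma's chat-turn markers.

    Gemma-family instruction-tuned models delimit each turn with
    <start_of_turn>/<end_of_turn>, leaving the model's turn open so
    generation continues as the assistant.
    """
    return (
        "<start_of_turn>user\n"
        f"{user_message}<end_of_turn>\n"
        "<start_of_turn>model\n"
    )

prompt = format_gemma_prompt("Describe the key idea behind multimodal AI models.")
print(prompt)
```

Passing a pre-formatted string like this is only needed when the runtime's automatic chat templating is disabled or unavailable.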

Usage Notes

  • Provide clear, explicit prompts for best results
  • When using images, ensure proper formatting and resolution
  • Add moderation or filtering layers if deploying in public-facing applications
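The moderation point deserves emphasis: because the model ships without guardrails, public-facing deployments should wrap generation in their own filtering layer. A minimal keyword-based post-filter might look like the toy sketch below (the blocklist contents and function name are illustrative; production systems should pair keyword checks with a dedicated moderation classifier or API):

```python
BLOCKLIST = {"example_banned_term", "another_banned_term"}  # illustrative only

def moderate_output(text: str, blocklist: set = BLOCKLIST) -> str:
    """Return the model output unchanged if clean, else a refusal string.

    A toy post-filter: case-insensitive substring matching against a
    blocklist. Real deployments need a proper moderation model on top.
    """
    lowered = text.lower()
    if any(term in lowered for term in blocklist):
        return "[output withheld by moderation filter]"
    return text

print(moderate_output("Multimodal models combine text and image inputs."))
```

In a serving stack this wrapper would sit between the inference runtime and the client, so unfiltered completions never leave the process.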

Ethical Considerations

Due to its uncensored nature:

  • Not recommended for unrestricted public deployment
  • Should not be used in safety-critical environments
  • Users are responsible for compliance with applicable laws and policies

Acknowledgements

  • Gemma base model contributors
  • Open-source inference and quantization communities
  • Tools and runtimes enabling efficient local LLM deployment


Available Quantizations

  • Quantized builds at 2-, 3-, 4-, 5-, 6-, and 8-bit, plus a 16-bit (F16) full-precision build
