Gemma-3-4B-VL-it-Gemini-Pro-Heretic-Uncensored-Thinking-GGUF
This repository contains Gemma-3-4B-VL-it-Gemini-Pro-Heretic-Uncensored-Thinking-GGUF, a 4B-parameter vision-language instruction-tuned model provided in GGUF format for efficient local inference.
The model is designed for open-ended reasoning, multimodal understanding, and minimal alignment constraints, making it suitable for experimentation, research, and advanced local deployments.
Model Summary
- Model ID: Gemma-3-4B-VL-it-Gemini-Pro-Heretic-Uncensored-Thinking-GGUF
- Architecture: Gemma 3 (4B parameters)
- Type: Vision-Language (Text + Image)
- Format: GGUF
- Publisher: mradermacher
- License: Apache 2.0 (inherits from base model)
Key Characteristics
- Multimodal input support (text + images)
- Instruction-tuned for conversational and reasoning tasks
- Reduced content filtering and alignment constraints
- Optimized for local inference runtimes
- Suitable for research, exploration, and advanced user workflows
⚠️ This model is uncensored. Outputs may include sensitive or unfiltered content. Use responsibly.
Supported Use Cases
Text-Based
- Conversational assistants
- Creative writing and storytelling
- Summarization and rewriting
- General reasoning and analysis
Vision + Text
- Image captioning
- Visual question answering
- Scene and object understanding
- Multimodal reasoning tasks
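Instruction-tuned Gemma models expect prompts wrapped in Gemma's turn markers. Below is a minimal sketch assuming the standard Gemma chat template (`<start_of_turn>` / `<end_of_turn>`); image inputs are attached separately by the runtime, so this covers only the text side, and you should verify the template against your runtime's handling:

```python
def format_gemma_prompt(user_message: str) -> str:
    """Wrap a user message in Gemma's chat-turn markers.

    Assumes the standard Gemma instruction template; many runtimes
    apply this automatically, in which case pass raw text instead.
    """
    return (
        "<start_of_turn>user\n"
        f"{user_message}<end_of_turn>\n"
        "<start_of_turn>model\n"
    )

# Example:
prompt = format_gemma_prompt("What objects are visible in this image?")
```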
GGUF Compatibility
This model can be used with GGUF-compatible runtimes such as:
- llama.cpp
- Ollama (GGUF-based builds)
- Other local inference engines supporting GGUF
Performance and supported features may vary depending on runtime and hardware.
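For Ollama, a GGUF file can be wrapped in a Modelfile and imported as a local model. A minimal sketch; the filename and parameter value below are placeholders, not tested settings:

```
# Modelfile
FROM ./Gemma-3-4B-VL-it-Gemini-Pro-Heretic-Uncensored-Thinking.gguf
PARAMETER temperature 0.7
```

Then create and run the model (the model name here is arbitrary):

```
ollama create gemma3-vl-local -f Modelfile
ollama run gemma3-vl-local
```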
Basic Usage Example
Command Line (llama.cpp-style; newer llama.cpp builds name the binary llama-cli rather than main)
./main \
-m Andycurrent/Gemma-3-4B-VL-it-Gemini-Pro-Heretic-Uncensored-Thinking_GGUF_F16.gguf \
-p "Describe the key idea behind multimodal AI models."
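The same invocation can be scripted. Below is a minimal sketch of a hypothetical helper that assembles the argument list for `subprocess.run`; `-n` (tokens to generate) and `--temp` (sampling temperature) are standard llama.cpp flags, but check your build's `--help` output:

```python
import subprocess

def build_llama_cmd(model_path: str, prompt: str,
                    n_predict: int = 256, temperature: float = 0.7) -> list:
    """Assemble a llama.cpp command line (binary name assumed to be ./main)."""
    return [
        "./main",
        "-m", model_path,           # path to the GGUF file
        "-p", prompt,               # text prompt
        "-n", str(n_predict),       # max tokens to generate
        "--temp", str(temperature), # sampling temperature
    ]

# Example (run only if the binary and model file are present):
# subprocess.run(build_llama_cmd("model.gguf", "Describe multimodal AI."))
```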
Usage Notes
- Provide clear, explicit prompts for best results
- When using images, ensure they are in a format and resolution supported by your runtime
- Add moderation or filtering layers if deploying in public-facing applications
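The moderation note above can be sketched as a simple post-generation filter. The blocklist below is an illustrative placeholder, not a real policy; a production deployment would use a dedicated moderation model or API instead:

```python
# Placeholder terms for illustration only; not a real moderation policy.
BLOCKLIST = {"example_banned_term"}

def moderate(output: str) -> str:
    """Return the model output, or a refusal string if it matches the blocklist."""
    lowered = output.lower()
    if any(term in lowered for term in BLOCKLIST):
        return "[output withheld by moderation filter]"
    return output

# Example:
safe = moderate("A harmless answer.")
blocked = moderate("Contains example_banned_term here.")
```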
Ethical Considerations
Due to its uncensored nature:
- Not recommended for unrestricted public deployment
- Should not be used in safety-critical environments
- Users are responsible for compliance with applicable laws and policies
Acknowledgements
- Gemma base model contributors
- Open-source inference and quantization communities
- Tools and runtimes enabling efficient local LLM deployment