# Gemma-3-1B (MNN Quantized)
This is a 4-bit quantized version of the Gemma-3-1B model, optimized for on-device inference (Android/iOS) using the Alibaba MNN framework.
## 🚀 Fast Deployment on Android
### 1. Download the App
Don't build from scratch! Use the official MNN Chat Android app, distributed from the Alibaba MNN repository on GitHub (https://github.com/alibaba/MNN).
### 2. Setup
- Download the model files from this repo (`llm.mnn`, `llm.mnn.weight`, `config.json`).
- Create a folder on your phone: `/sdcard/MNN/gemma-3-1b`.
- Copy the files into that folder.
- Open the MNN Chat app and select that folder. (A programmatic alternative is sketched after this list.)
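
If you would rather load the model from your own code, MNN also ships a C++ LLM engine. The sketch below is a minimal, hedged example: it assumes the `Llm::createLLM` / `load()` / `response()` entry points from MNN's `transformers/llm` engine headers, whose exact names and signatures vary between MNN releases, so treat it as a starting point rather than a drop-in program.

```cpp
// Minimal sketch: load this quantized model with MNN's LLM engine.
// Assumption: the Llm class and these method names come from MNN's
// transformers/llm engine (llm/llm.hpp); check the headers of your
// MNN version, as this API has changed across releases.
#include <memory>
#include "llm/llm.hpp"

using MNN::Transformer::Llm;

int main() {
    // config.json sits next to llm.mnn and llm.mnn.weight in the
    // folder created during setup.
    std::unique_ptr<Llm> llm(
        Llm::createLLM("/sdcard/MNN/gemma-3-1b/config.json"));
    llm->load();                           // map weights, build the graph
    llm->response("Hello, who are you?");  // streams the reply to stdout
    return 0;
}
```
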
## 💻 Technical Details
- Framework: MNN
- Quantization: 4-bit asymmetric (Int4); see the sketch below
- Model Type: Gemma-3-1B (Uncensored)
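
For reference, "4-bit asymmetric" means each weight group is stored as an unsigned 4-bit code plus a scale and a zero point, so the 16 levels can cover a range that is not centered on zero. The snippet below is an illustrative sketch of that scheme in plain C++, not MNN's actual packing or kernels; real implementations typically pack two codes per byte and quantize per block.

```cpp
// Illustrative asymmetric Int4 quantization (not MNN's implementation):
// q = round(x / scale) + zero_point, clamped to [0, 15];
// dequantization recovers x_hat = (q - zero_point) * scale.
#include <algorithm>
#include <cmath>
#include <cstdint>
#include <vector>

struct Int4Block {
    float scale = 1.0f;
    int32_t zero_point = 0;
    std::vector<uint8_t> codes;  // one 4-bit code per weight (left unpacked here)
};

Int4Block quantize(const std::vector<float>& w) {
    Int4Block b;
    if (w.empty()) return b;
    auto [mn, mx] = std::minmax_element(w.begin(), w.end());
    b.scale = (*mx - *mn) / 15.0f;        // 16 representable levels: 0..15
    if (b.scale == 0.0f) b.scale = 1.0f;  // constant input: avoid divide-by-zero
    b.zero_point = static_cast<int32_t>(std::lround(-*mn / b.scale));
    b.codes.reserve(w.size());
    for (float x : w) {
        int32_t q = static_cast<int32_t>(std::lround(x / b.scale)) + b.zero_point;
        b.codes.push_back(static_cast<uint8_t>(std::clamp<int32_t>(q, 0, 15)));
    }
    return b;
}

float dequantize(const Int4Block& b, size_t i) {
    return static_cast<float>(static_cast<int32_t>(b.codes[i]) - b.zero_point) * b.scale;
}
```

The zero point is what makes the scheme asymmetric: a symmetric Int4 scheme would force the grid to be centered on zero, wasting levels when a weight group's range is skewed to one side.
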