Open4bits/gemma-3-1b-it-GGUF
Tags: Text Generation · GGUF · open4bits · conversational
License: gemma
The repository's README.md exists, but its content is empty.
Downloads last month: 278
GGUF details
Model size: 1.0B params
Architecture: gemma3
Chat template: included in the GGUF metadata
Available quantizations:

| Bits  | Quantization | File size |
|-------|--------------|-----------|
| 2-bit | Q2_K         | 690 MB    |
| 3-bit | Q3_K_S       | 689 MB    |
| 3-bit | Q3_K_L       | 752 MB    |
| 4-bit | Q4_K_S       | 781 MB    |
| 4-bit | Q4_0         | 720 MB    |
| 4-bit | Q4_K_M       | 806 MB    |
| 5-bit | Q5_K_S       | 836 MB    |
| 5-bit | Q5_0         | 808 MB    |
| 5-bit | Q5_K_M       | 851 MB    |
| 6-bit | Q6_K         | 1.01 GB   |
| 8-bit | Q8_0         | 1.07 GB   |
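Any llama.cpp-compatible runtime can load these GGUF files; Q4_K_M is a common middle ground between the smaller, lossier 2-bit and 3-bit quants and the near-full-precision Q8_0. Below is a minimal sketch using huggingface_hub and llama-cpp-python. The GGUF filename is an assumption (this page does not list exact filenames), so check the repository's file listing for the real name.

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download one quantized GGUF from the repo; the filename is hypothetical,
# so check the repository's file list for the actual Q4_K_M file name.
model_path = hf_hub_download(
    repo_id="Open4bits/gemma-3-1b-it-GGUF",
    filename="gemma-3-1b-it-Q4_K_M.gguf",  # assumed filename
)

# Load the quantized model; n_ctx sets the context window.
llm = Llama(model_path=model_path, n_ctx=4096)

# create_chat_completion formats the messages with the chat template
# embedded in the GGUF metadata (falling back to llama-cpp-python's
# defaults if it cannot be detected).
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Explain GGUF quantization in one sentence."}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```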
Inference Providers
This model is not deployed by any Inference Provider.
Model tree for Open4bits/gemma-3-1b-it-GGUF
Base model: google/gemma-3-1b-pt
Finetuned: google/gemma-3-1b-it
Quantized (164 models): this model