Open4bits/GLM-4.7-Flash-GGUF

Tags: Text Generation · Transformers · GGUF · English · Chinese · imatrix · conversational
License: MIT
README.md exists but content is empty.
Downloads last month: 695

Format: GGUF
Model size: 30B params
Architecture: deepseek2
Available quantizations:

1-bit:
  UD-IQ1_S    9.25 GB
  UD-TQ1_0    8.33 GB
  UD-IQ1_M    9.81 GB
2-bit:
  UD-IQ2_XXS  10.5 GB
  Q2_K        11.3 GB
  UD-IQ2_M    11 GB
  Q2_K_L      11.4 GB
3-bit:
  UD-IQ3_XXS  12.9 GB
  Q3_K_S      13.3 GB
  Q3_K_M      14.6 GB
Inference Providers: this model isn't deployed by any Inference Provider.
Model tree for Open4bits/GLM-4.7-Flash-GGUF:
Base model: zai-org/GLM-4.7-Flash (68 quantized versions, including this model)
Collection including Open4bits/GLM-4.7-Flash-GGUF: GGUF (12 items, updated 2 days ago)