Open4bits/GLM-4.7-Flash-GGUF
Tags: Text Generation · Transformers · GGUF · English · Chinese · open4bits · imatrix · conversational
License: mit
Repository size: 112 GB · 1 contributor · History: 5 commits
Latest commit: fmasterpro27 — Upload folder using huggingface_hub (813a9b8, verified, 2 days ago)
File                            Size       Last commit message                   Updated
.gitattributes                  2.15 kB    Upload folder using huggingface_hub   2 days ago
GLM-4.7-Flash-Q2_K.gguf         11.3 GB    Upload folder using huggingface_hub   2 days ago
GLM-4.7-Flash-Q2_K_L.gguf       11.4 GB    Upload folder using huggingface_hub   2 days ago
GLM-4.7-Flash-Q3_K_M.gguf       14.6 GB    Upload folder using huggingface_hub   2 days ago
GLM-4.7-Flash-Q3_K_S.gguf       13.3 GB    Upload folder using huggingface_hub   2 days ago
GLM-4.7-Flash-UD-IQ1_M.gguf     9.81 GB    Upload folder using huggingface_hub   2 days ago
GLM-4.7-Flash-UD-IQ1_S.gguf     9.25 GB    Upload folder using huggingface_hub   2 days ago
GLM-4.7-Flash-UD-IQ2_M.gguf     11 GB      Upload folder using huggingface_hub   2 days ago
GLM-4.7-Flash-UD-IQ2_XXS.gguf   10.5 GB    Upload folder using huggingface_hub   2 days ago
GLM-4.7-Flash-UD-IQ3_XXS.gguf   12.9 GB    Upload folder using huggingface_hub   2 days ago
GLM-4.7-Flash-UD-TQ1_0.gguf     8.33 GB    Upload folder using huggingface_hub   2 days ago
README.md                       151 Bytes  Update README.md                      2 days ago