batmac/gpt2-gguf (GGUF)
Repository size: 2.86 GB · 1 contributor · History: 3 commits
Latest commit: batmac — "Upload README.md with huggingface_hub" (c1da8c3, verified, over 1 year ago)
| File | Size | Last commit | Updated |
|---|---|---|---|
| .gitattributes | 1.84 kB | Upload folder using huggingface_hub | over 1 year ago |
| README.md | 19 Bytes | Upload README.md with huggingface_hub | over 1 year ago |
| ggml-model-IQ3_M.gguf | 94.2 MB | Upload folder using huggingface_hub | over 1 year ago |
| ggml-model-IQ3_S.gguf | 90.1 MB | Upload folder using huggingface_hub | over 1 year ago |
| ggml-model-IQ3_XS.gguf | 89.2 MB | Upload folder using huggingface_hub | over 1 year ago |
| ggml-model-IQ3_XXS.gguf | 83 MB | Upload folder using huggingface_hub | over 1 year ago |
| ggml-model-IQ4_NL.gguf | 107 MB | Upload folder using huggingface_hub | over 1 year ago |
| ggml-model-IQ4_XS.gguf | 103 MB | Upload folder using huggingface_hub | over 1 year ago |
| ggml-model-Q2_K.gguf | 81.2 MB | Upload folder using huggingface_hub | over 1 year ago |
| ggml-model-Q3_K.gguf | 97.7 MB | Upload folder using huggingface_hub | over 1 year ago |
| ggml-model-Q3_K_L.gguf | 102 MB | Upload folder using huggingface_hub | over 1 year ago |
| ggml-model-Q3_K_M.gguf | 97.7 MB | Upload folder using huggingface_hub | over 1 year ago |
| ggml-model-Q3_K_S.gguf | 90.1 MB | Upload folder using huggingface_hub | over 1 year ago |
| ggml-model-Q4_0.gguf | 107 MB | Upload folder using huggingface_hub | over 1 year ago |
| ggml-model-Q4_1.gguf | 114 MB | Upload folder using huggingface_hub | over 1 year ago |
| ggml-model-Q4_K.gguf | 113 MB | Upload folder using huggingface_hub | over 1 year ago |
| ggml-model-Q4_K_M.gguf | 113 MB | Upload folder using huggingface_hub | over 1 year ago |
| ggml-model-Q4_K_S.gguf | 107 MB | Upload folder using huggingface_hub | over 1 year ago |
| ggml-model-Q5_0.gguf | 122 MB | Upload folder using huggingface_hub | over 1 year ago |
| ggml-model-Q5_1.gguf | 130 MB | Upload folder using huggingface_hub | over 1 year ago |
| ggml-model-Q5_K.gguf | 127 MB | Upload folder using huggingface_hub | over 1 year ago |
| ggml-model-Q5_K_M.gguf | 127 MB | Upload folder using huggingface_hub | over 1 year ago |
| ggml-model-Q5_K_S.gguf | 122 MB | Upload folder using huggingface_hub | over 1 year ago |
| ggml-model-Q6_K.gguf | 138 MB | Upload folder using huggingface_hub | over 1 year ago |
| ggml-model-Q8_0.gguf | 178 MB | Upload folder using huggingface_hub | over 1 year ago |
| ggml-model-f16.gguf | 330 MB | Upload folder using huggingface_hub | over 1 year ago |
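The repository ships many quantization levels of the same model, and the usual trade-off is that larger files preserve more quality. A minimal sketch for picking a file by disk/RAM budget — the `pick_quant` helper and the `QUANTS` subset are illustrative, not part of the repository; sizes are taken from the listing above:

```python
# Hypothetical helper: pick the largest GGUF quantization that fits a
# size budget. Sizes (in MB) are a subset of the repository listing above.
QUANTS = {
    "ggml-model-Q2_K.gguf": 81.2,
    "ggml-model-Q4_K_M.gguf": 113,
    "ggml-model-Q5_K_M.gguf": 127,
    "ggml-model-Q6_K.gguf": 138,
    "ggml-model-Q8_0.gguf": 178,
    "ggml-model-f16.gguf": 330,
}

def pick_quant(budget_mb: float):
    """Return the largest listed file at or under budget_mb, or None."""
    fitting = {f: s for f, s in QUANTS.items() if s <= budget_mb}
    return max(fitting, key=fitting.get) if fitting else None

print(pick_quant(120))  # -> ggml-model-Q4_K_M.gguf (113 MB fits, 127 MB does not)
```

The chosen filename can then be fetched with `huggingface_hub.hf_hub_download(repo_id="batmac/gpt2-gguf", filename=...)` and loaded by any GGUF-compatible runtime such as llama.cpp.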