inference-optimization/Llama-3.1-8B-Instruct-QKV-Cache-FP8-Per-Head
Safetensors · llama · compressed-tensors · License: apache-2.0
Files and versions (branch: main, total size 16.1 GB)
1 contributor · History: 6 commits
Latest commit: a5ead57 (verified) by krishnateja95, "Update README.md", 4 days ago
File                               Size       Last commit                           Updated
.gitattributes                     1.57 kB    Upload folder using huggingface_hub   5 days ago
README.md                          2.18 kB    Update README.md                      4 days ago
chat_template.jinja                4.61 kB    Upload folder using huggingface_hub   5 days ago
config.json                        1.52 kB    Upload folder using huggingface_hub   5 days ago
generation_config.json             184 Bytes  Upload folder using huggingface_hub   5 days ago
model-00001-of-00004.safetensors   4.98 GB    Upload folder using huggingface_hub   5 days ago
model-00002-of-00004.safetensors   5 GB       Upload folder using huggingface_hub   5 days ago
model-00003-of-00004.safetensors   4.92 GB    Upload folder using huggingface_hub   5 days ago
model-00004-of-00004.safetensors   1.17 GB    Upload folder using huggingface_hub   5 days ago
model.safetensors.index.json       28.9 kB    Upload folder using huggingface_hub   5 days ago
recipe.yaml                        417 Bytes  Upload folder using huggingface_hub   5 days ago
special_tokens_map.json            296 Bytes  Upload folder using huggingface_hub   5 days ago
tokenizer.json                     17.2 MB    Upload folder using huggingface_hub   5 days ago
tokenizer_config.json              50.5 kB    Upload folder using huggingface_hub   5 days ago
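
The compressed-tensors tag, the sharded safetensors weights, and the bundled chat_template.jinja suggest the checkpoint can be loaded like any other Llama causal LM through Hugging Face transformers. The following is a minimal sketch, not taken from the repo's README: it assumes the transformers and compressed-tensors packages are installed, uses the model ID shown above, and the prompt and generation settings are illustrative only.

# Minimal loading sketch (assumptions: transformers + compressed-tensors installed,
# accelerate available for device_map="auto"; prompt and max_new_tokens are illustrative).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "inference-optimization/Llama-3.1-8B-Instruct-QKV-Cache-FP8-Per-Head"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",   # keep the dtypes stored in the safetensors shards
    device_map="auto",    # place layers across available GPUs/CPU automatically
)

# Uses the repo's chat_template.jinja via apply_chat_template.
messages = [{"role": "user", "content": "Summarize FP8 KV-cache quantization in one sentence."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))

Serving stacks that understand compressed-tensors checkpoints (for example vLLM) can also apply the stored FP8 KV-cache scales at runtime, though the exact flags depend on the framework version; consult the repo's README.md and recipe.yaml for the quantization recipe actually used.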