inference-optimization/Llama-3.1-8B-Instruct-FP8-dynamic-QKV-Cache-FP8-Per-Head
Tags: Safetensors, llama, compressed-tensors
License: apache-2.0
Files and versions: 1.55 kB total, 1 contributor, 1 commit
Latest commit c310671 (verified) by krishnateja95: "initial commit", 5 days ago

.gitattributes   1.52 kB    initial commit   5 days ago
README.md        31 Bytes   initial commit   5 days ago
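
The model card itself is nearly empty (the README is 31 bytes), so the only usage hints come from the repository name and tags: a compressed-tensors FP8-dynamic checkpoint of Llama-3.1-8B-Instruct with an FP8 per-head QKV cache. Below is a minimal serving sketch, assuming a vLLM installation with compressed-tensors and FP8 KV-cache support; the flag values shown (and whether per-head KV-cache scales are honored) depend on the vLLM version and are not taken from the model card.

```python
# Hypothetical usage sketch, not from the model card: serving a
# compressed-tensors FP8-dynamic Llama checkpoint with an FP8 KV cache
# via vLLM. Flag support may vary by vLLM version.
from vllm import LLM, SamplingParams

llm = LLM(
    model="inference-optimization/Llama-3.1-8B-Instruct-FP8-dynamic-QKV-Cache-FP8-Per-Head",
    kv_cache_dtype="fp8",  # store K/V activations in FP8 as well
    max_model_len=4096,    # assumption: shortened context to fit smaller GPUs
)

outputs = llm.generate(
    ["Explain FP8 dynamic quantization in one paragraph."],
    SamplingParams(temperature=0.0, max_tokens=128),
)
print(outputs[0].outputs[0].text)
```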