inference-optimization/Llama-3.1-8B-Instruct-FP8-dynamic-QKV-Cache-FP8-Per-Head
Tags: Safetensors · llama · compressed-tensors
License: apache-2.0
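Per the repository name and tags, this checkpoint is stored in the compressed-tensors format with FP8 dynamic weight/activation quantization and an FP8 per-head KV cache. Below is a minimal usage sketch, assuming deployment with vLLM (which can load compressed-tensors checkpoints and supports an FP8 KV cache on compatible hardware); the repo id is taken from this page, while the prompt and sampling settings are purely illustrative:

```python
from vllm import LLM, SamplingParams

# Sketch: load the compressed-tensors FP8 checkpoint with vLLM.
# vLLM picks up the quantization config from the checkpoint itself;
# kv_cache_dtype="fp8" enables the FP8 KV cache referenced in the repo name.
llm = LLM(
    model="inference-optimization/Llama-3.1-8B-Instruct-FP8-dynamic-QKV-Cache-FP8-Per-Head",
    kv_cache_dtype="fp8",
)

params = SamplingParams(temperature=0.7, max_tokens=128)
outputs = llm.generate(
    ["Explain FP8 weight quantization in one paragraph."],
    params,
)
print(outputs[0].outputs[0].text)
```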
Commit History — README.md
initial commit · c310671 (verified) · krishnateja95 · committed 6 days ago