inference-optimization/Llama-3.1-8B-Instruct-FP8-dynamic-QKV-Cache-FP8-Per-Head

Tags: Safetensors · llama · compressed-tensors
License: apache-2.0
README.md
---
license: apache-2.0
---