---
base_model: PrimeIntellect/INTELLECT-3-FP8
library_name: gguf
quantized_by: keypa
tags:
  - gguf
  - text-generation-inference
---

# INTELLECT-3-FP8 - GGUF

This is a GGUF conversion of PrimeIntellect/INTELLECT-3-FP8.

## Conversion Info

- Precision: F16 (half precision)
- Tool: llama.cpp `convert-hf-to-gguf.py` (see the sketch below)
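
A conversion along these lines can be reproduced with llama.cpp's converter script. This is a sketch under assumptions, not the exact command used for this upload: the output filename is a placeholder, and the script name and flags can differ between llama.cpp versions (recent checkouts ship it as `convert_hf_to_gguf.py`).

```python
# Sketch: convert the original HF checkpoint to an F16 GGUF file.
# Assumes a local llama.cpp checkout; the output filename is a placeholder.
import subprocess
from huggingface_hub import snapshot_download

# Download the base model weights from the Hugging Face Hub.
model_dir = snapshot_download("PrimeIntellect/INTELLECT-3-FP8")

# Invoke the llama.cpp converter, writing half-precision (F16) tensors.
subprocess.run(
    [
        "python", "llama.cpp/convert_hf_to_gguf.py",
        model_dir,
        "--outtype", "f16",
        "--outfile", "INTELLECT-3-FP8-F16.gguf",
    ],
    check=True,
)
```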

## Usage

Download the GGUF file and load it with llama.cpp or any other GGUF-compatible inference engine.
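
For example, the `llama-cpp-python` bindings can pull a GGUF from the Hub and run a chat completion directly. This is a minimal sketch assuming `llama-cpp-python` and `huggingface_hub` are installed; the repo id and filename pattern are placeholders, so substitute the actual GGUF file in this repository.

```python
# Minimal sketch: load the GGUF with llama-cpp-python and run a chat completion.
# Repo id and filename pattern are placeholders for this repository's actual files.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="keypa/INTELLECT-3-FP8-GGUF",  # placeholder repo id
    filename="*F16.gguf",                  # placeholder glob for the GGUF file
    n_ctx=4096,
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Hello!"}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```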