granite-8b-code-instruct-4k-FP8

FP8 quantized version of IBM's Granite 8B Code model for efficient inference

This is an FP8 (E4M3) quantized version of ibm-granite/granite-8b-code-instruct-4k in the compressed_tensors format. Quantized by TevunahAi on enterprise-grade hardware.

🎯 Recommended Usage: vLLM

For optimal performance with full FP8 benefits (2x memory savings + faster inference), use vLLM or TensorRT-LLM:

Quick Start with vLLM

pip install vllm

Python API:

from vllm import LLM, SamplingParams

# vLLM auto-detects FP8 from model config
llm = LLM(model="TevunahAi/granite-8b-code-instruct-4k-FP8", dtype="auto")

# Generate
prompt = "Write a Python function to calculate fibonacci numbers:"
sampling_params = SamplingParams(temperature=0.7, max_tokens=256)

outputs = llm.generate([prompt], sampling_params)
for output in outputs:
    print(output.outputs[0].text)

OpenAI-Compatible API Server:

vllm serve TevunahAi/granite-8b-code-instruct-4k-FP8 \
    --dtype auto \
    --max-model-len 4096

Then use with OpenAI client:

from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",
    api_key="token-abc123",  # dummy key
)

response = client.chat.completions.create(
    model="TevunahAi/granite-8b-code-instruct-4k-FP8",
    messages=[
        {"role": "user", "content": "Write a Python function to calculate fibonacci numbers"}
    ],
    temperature=0.7,
    max_tokens=256,
)

print(response.choices[0].message.content)

vLLM Benefits

  • ✅ Weights, activations, and KV cache in FP8 (see the KV-cache sketch after this list)
  • ✅ ~8GB VRAM (50% reduction vs BF16)
  • ✅ Native FP8 tensor core acceleration on Ada/Hopper GPUs
  • ✅ Faster inference with optimized CUDA kernels
  • ✅ Runs on consumer GPUs (RTX 4070, RTX 4060 Ti 16GB, RTX 5000 Ada)
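
The FP8 KV cache noted above is typically opt-in rather than automatic. A minimal sketch of requesting it explicitly, assuming a recent vLLM release (the kv_cache_dtype argument and its accepted values may differ between versions, so verify against your installed docs):

from vllm import LLM

# FP8 weights and activations are detected from the checkpoint config;
# the KV cache is quantized to FP8 only when requested explicitly.
llm = LLM(
    model="TevunahAi/granite-8b-code-instruct-4k-FP8",
    dtype="auto",
    kv_cache_dtype="fp8",   # E4M3 KV cache on Ada/Hopper GPUs
    max_model_len=4096,
)

The equivalent flag for the API server is --kv-cache-dtype fp8.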

⚙️ Alternative: Transformers

This model can also be loaded with transformers. Note: Transformers will decompress FP8 → BF16 during inference. However, at 8B parameters, this is manageable (~16GB VRAM).

Transformers Example:
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

# Loads FP8 weights but decompresses to BF16 during compute
model = AutoModelForCausalLM.from_pretrained(
    "TevunahAi/granite-8b-code-instruct-4k-FP8",
    device_map="auto",
    torch_dtype="auto",
    low_cpu_mem_usage=True,
)
tokenizer = AutoTokenizer.from_pretrained("TevunahAi/granite-8b-code-instruct-4k-FP8")

# Generate
prompt = "Write a Python function to calculate fibonacci numbers:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
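
Because this is an instruct-tuned model, prompts usually behave better when formatted with the tokenizer's chat template instead of raw text. A short sketch building on the model and tokenizer objects above (assumes the checkpoint ships a chat template, as the base Granite instruct model does):

# Format the request with the model's chat template before generating
messages = [
    {"role": "user", "content": "Write a Python function to calculate fibonacci numbers."}
]
chat_inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,   # append the assistant turn marker
    return_tensors="pt",
).to(model.device)

chat_outputs = model.generate(chat_inputs, max_new_tokens=256)
print(tokenizer.decode(chat_outputs[0], skip_special_tokens=True))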

Requirements:

pip install "torch>=2.1.0" "transformers>=4.40.0" accelerate compressed-tensors

System Requirements:

  • ~16GB VRAM (decompressed to BF16)
  • CUDA 11.8 or newer
  • PyTorch 2.1+ with CUDA support

📊 Quantization Details

  • Base Model: ibm-granite/granite-8b-code-instruct-4k
  • Quantization Method: FP8 (E4M3), weight-only
  • Framework: llm-compressor + compressed_tensors
  • Calibration Dataset: open_platypus (512 samples)
  • Storage Size: ~8GB (sharded safetensors)
  • VRAM (vLLM): ~8GB
  • VRAM (Transformers): ~16GB (decompressed to BF16)
  • Target Hardware: NVIDIA Ada (RTX 4000/5000) or Hopper (H100/GH200)
  • Quantization Time: 21.6 minutes
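
For reference, a quantization along these lines can be reproduced with llm-compressor. The exact recipe behind this checkpoint is not published, so the scheme, target modules, and ignore list below are assumptions inferred from the table above, and import paths vary slightly between llm-compressor releases:

from llmcompressor import oneshot
from llmcompressor.modifiers.quantization import QuantizationModifier

# Assumed recipe: static FP8 (E4M3) on Linear layers, lm_head left unquantized
recipe = QuantizationModifier(
    targets="Linear",
    scheme="FP8",
    ignore=["lm_head"],
)

oneshot(
    model="ibm-granite/granite-8b-code-instruct-4k",
    dataset="open_platypus",          # calibration set listed above
    num_calibration_samples=512,
    max_seq_length=4096,
    recipe=recipe,
    output_dir="granite-8b-code-instruct-4k-FP8",
)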

Quantization Infrastructure

Professional hardware ensures consistent, high-quality quantization:

  • CPUs: Dual Intel Xeon Max 9480 (112 cores / 224 threads, 128GB HBM2e)
  • GPU: NVIDIA RTX 5000 Ada Generation (32GB VRAM, native FP8 support)
  • Memory: 256GB DDR5 + 128GB HBM2e = 384GB total system memory
  • Software Stack: Ubuntu 25.10 | Python 3.12 | PyTorch 2.8 | CUDA 13.0 | llm-compressor

🔧 Why FP8?

With vLLM/TensorRT-LLM:

  • ✅ 50% memory reduction vs BF16 (weights + activations + KV cache; see the arithmetic below)
  • ✅ Faster inference via native FP8 tensor cores
  • ✅ Better throughput with optimized kernels
  • ✅ Minimal quality loss for code generation tasks
  • ✅ Accessible on consumer GPUs (RTX 4060 Ti 16GB+)

With Transformers:

  • ✅ Smaller download size (~8GB vs ~16GB BF16)
  • ✅ Compatible with standard transformers workflow
  • ⚠️ Decompresses to BF16 during inference (no runtime memory benefit)

For production inference, use vLLM to realize the full FP8 benefits.
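
The 50% figure follows directly from element size: FP8 stores one byte per weight where BF16 stores two. A quick back-of-envelope check (parameter count rounded to 8B; excludes activations, KV cache, and runtime overhead):

params = 8e9  # ~8B parameters (approximate)
print(f"BF16 weights: ~{params * 2 / 1e9:.0f} GB")  # ~16 GB
print(f"FP8 weights:  ~{params * 1 / 1e9:.0f} GB")  # ~8 GB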

💾 Model Files

This model is sharded into multiple safetensors files (all required for inference). The compressed format enables efficient storage and faster downloads.
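
To prefetch every shard in one step (for example, into an offline cache), the huggingface_hub client can be used; the returned path is wherever your local HF cache resolves:

from huggingface_hub import snapshot_download

# Downloads all weight shards plus config and tokenizer files
local_path = snapshot_download(repo_id="TevunahAi/granite-8b-code-instruct-4k-FP8")
print(local_path)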

🔬 IBM Granite Code Models

Granite Code models are specifically trained for code generation, editing, and explanation tasks. This 8B parameter version offers strong performance on:

  • Code completion and generation
  • Bug fixing and refactoring
  • Code explanation and documentation
  • Multiple programming languages
  • 4K context window

Granite 8B vs Larger Models:

  • ✅ Fast iteration - quick response times
  • ✅ Accessible - runs on consumer GPUs
  • ✅ Good quality - suitable for most coding tasks
  • ⚠️ Trade-off: Less capable on very complex reasoning vs 20B/34B

📚 Original Model

This quantization is based on ibm-granite/granite-8b-code-instruct-4k by IBM.

For comprehensive information about:

  • Model architecture and training methodology
  • Supported programming languages
  • Evaluation benchmarks and results
  • Ethical considerations and responsible AI guidelines

Please refer to the original model card.

🔧 Hardware Requirements

Minimum (vLLM):

  • GPU: NVIDIA RTX 4060 Ti (16GB) or better
  • VRAM: 8GB minimum, 12GB+ recommended (see the launch sketch at the end of this section)
  • CUDA: 11.8 or newer

Recommended (vLLM):

  • GPU: NVIDIA RTX 4070 / 4090 / RTX 5000 Ada
  • VRAM: 12GB+
  • CUDA: 12.0+

Transformers:

  • GPU: Any CUDA-capable GPU
  • VRAM: 16GB+
  • Works but not optimal for performance
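
Note that the ~8GB figure covers the weights alone; the KV cache and runtime overhead need extra headroom, which is why 12GB+ is recommended. On cards near the lower bound, settings along these lines can keep vLLM within budget (values are illustrative, not benchmarked):

from vllm import LLM

# Conservative settings for GPUs around 12GB; tune for your card
llm = LLM(
    model="TevunahAi/granite-8b-code-instruct-4k-FP8",
    dtype="auto",
    max_model_len=4096,            # the model's native context window
    gpu_memory_utilization=0.90,   # fraction of VRAM vLLM may reserve
    enforce_eager=True,            # skip CUDA graph capture to save memory
)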

📄 License

This model inherits the Apache 2.0 License from the original Granite model.

🙏 Acknowledgments

  • Original Model: IBM Granite team
  • Quantization Framework: Neural Magic's llm-compressor
  • Quantized by: TevunahAi

📝 Citation

If you use this model, please cite the original Granite work:

@misc{granite2024,
  title={Granite Code Models},
  author={IBM Research},
  year={2024},
  url={https://huggingface.co/ibm-granite/granite-8b-code-instruct-4k}
}

Professional AI Model Quantization by TevunahAi

Enterprise-grade quantization on specialized hardware

View all models | Contact for custom quantization
