Just some testing

Pin down what is needed across 50-, 40-, and 30-series cards.

Translate that info for use in TensorRT FP4 conversion / ONNX export.

Anyone who has built FP4 ONNX before with NVIDIA's Model Optimizer (PTQ) is welcome to help in the Community tab.

Currently working on correct Quantize/Dequantize (Q/DQ) nodes for ONNX export, to build an FP4 TensorRT engine for 50-series cards.
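For intuition, what a Q/DQ pair does to a weight tensor can be sketched in plain Python: values snap to the FP4 (E2M1) grid, scaled per block. The 16-element block size, the E2M1 value set, and absmax block scaling below are assumptions based on the NVFP4 format description, not the exact behavior of any particular exporter:

```python
# Sketch of NVFP4-style fake quantization (Quantize -> Dequantize round trip).
# Assumptions: 16-element blocks, E2M1 value grid, absmax block scaling.
E2M1_GRID = [0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0]  # positive FP4 (E2M1) magnitudes

def quant_dequant_block(block):
    """Fake-quantize one block: scale by absmax/6, snap to nearest E2M1 value."""
    absmax = max(abs(x) for x in block)
    if absmax == 0.0:
        return list(block)  # all-zero block survives unchanged
    scale = absmax / 6.0  # map the largest magnitude onto the top E2M1 code (6.0)
    out = []
    for x in block:
        mag = abs(x) / scale
        q = min(E2M1_GRID, key=lambda g: abs(g - mag))  # nearest representable value
        out.append(q * scale if x >= 0 else -q * scale)
    return out

def quant_dequant(values, block_size=16):
    """Apply block-wise fake quantization over a flat list of weights."""
    out = []
    for i in range(0, len(values), block_size):
        out.extend(quant_dequant_block(values[i:i + block_size]))
    return out
```

For example, `quant_dequant([0.0, 0.3, -0.6, 6.0])` returns `[0.0, 0.5, -0.5, 6.0]`: the absmax of 6.0 gives a scale of 1.0, and the small values snap to the nearest E2M1 grid point.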

Klein 4B NVFP4 Variants — txt_attn dtype notes

Klein4b-nvfp4.safetensors

  • txt_attn weights: NVFP4

Klein4b-nvfp4_BF16.safetensors

  • txt_attn weights: BF16

Klein4b-nvfp4_fp8e4m3fn_absmax.safetensors

  • txt_attn weights: FP8 (float8_e4m3fn)
  • txt_attn scaling: absmax

Klein4b-nvfp4_fp8e5m2_absmax.safetensors

  • txt_attn weights: FP8 (float8_e5m2)
  • txt_attn scaling: absmax
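The two FP8 variants above trade range against precision: e4m3fn has 3 mantissa bits and a maximum finite value of 448, while e5m2 has 2 mantissa bits and a maximum of 57344. With absmax scaling, the scale factor maps the largest weight magnitude onto the format's maximum. A minimal sketch (the max-value constants are format facts; the per-tensor helper is illustrative):

```python
# Maximum finite magnitudes of the two FP8 formats (format facts).
FP8_MAX = {"e4m3fn": 448.0, "e5m2": 57344.0}

def absmax_scale(weights, fmt):
    """Per-tensor absmax scale: the largest |w| maps to the format's max value."""
    absmax = max(abs(w) for w in weights)
    return absmax / FP8_MAX[fmt]
```

So for the same tensor, the e5m2 variant produces a much smaller scale (more headroom, coarser steps), while e4m3fn keeps finer steps at the cost of range.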

Model tree for ApacheOne/FluxKlein4b-nvfp4_mixed
