Firworks committed on
Commit 13bc57d · verified · 1 Parent(s): 2d48701

Update README.md

Files changed (1)
  1. README.md +5 -4
README.md CHANGED
@@ -10,6 +10,8 @@ tags:
 ---
 # Step-3.5-Flash-nvfp4
 
+Note: This is mostly an experiment. I've been trying anything I can to get this model through NVFP4 quantization, and I finally cracked it! With a monkey patch I was able to get llm-compressor to produce something that at least *looks* like an NVFP4 quant of this model. However, I cannot get VLLM to load it. I only ran it with a single calibration sample for troubleshooting; if I (or someone else) figure out how to get it to actually load in VLLM or Transformers, I'll go back and rerun with a more normal calibration data set. Only mess with this if you want a technical challenge. Don't expect it to work.
+
 **Format:** NVFP4 — weights & activations quantized to FP4 with dual scaling.
 **Base model:** `stepfun-ai/Step-3.5-Flash`
 **How it was made:** One-shot calibration with LLM Compressor (NVFP4 recipe), long-seq calibration (1 sample of length 512) with Rombo-Org/Optimized_Reasoning.
@@ -19,9 +21,8 @@ tags:
 Check the original model card for information about this model.
 
 # Running the model with VLLM in Docker
-```sh
-sudo docker run --runtime nvidia --gpus all -p 8000:8000 --ipc=host vllm/vllm-openai:nightly --model Firworks/Step-3.5-Flash-nvfp4 --dtype auto --max-model-len 32768
-```
-This was tested on an RTX Pro 6000 Blackwell cloud instance.
+
+No idea how yet.
+
 
 If there are other models you're interested in seeing quantized to NVFP4 for use on the DGX Spark, or other modern Blackwell (or newer) cards let me know. I'm trying to make more NVFP4 models available to allow more people to try them out.
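For context, a one-shot NVFP4 run like the one described in the card normally looks roughly like the sketch below with llm-compressor's stock `NVFP4` scheme preset. This is a minimal sketch, not the author's actual script: the monkey patch mentioned in the note is not reproduced, and the column layout of Rombo-Org/Optimized_Reasoning is assumed, so the preprocessing step is hypothetical.

```python
# Sketch of a one-shot NVFP4 quantization with llm-compressor.
# The monkey patch the author needed for Step-3.5-Flash is NOT
# included, so expect this to fail on this model as-is.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from llmcompressor import oneshot
from llmcompressor.modifiers.quantization import QuantizationModifier

MODEL_ID = "stepfun-ai/Step-3.5-Flash"

model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID, torch_dtype="auto", trust_remote_code=True
)
tokenizer = AutoTokenizer.from_pretrained(MODEL_ID, trust_remote_code=True)

# One calibration sample of length 512, as stated in the card.
ds = load_dataset("Rombo-Org/Optimized_Reasoning", split="train").select(range(1))
# Hypothetical preprocessing -- the dataset's real field names aren't
# shown in the card; adapt this to produce a "text" column.
ds = ds.map(lambda ex: {"text": str(ex)})

# NVFP4 preset: FP4 weights and activations, lm_head left unquantized.
recipe = QuantizationModifier(targets="Linear", scheme="NVFP4", ignore=["lm_head"])

oneshot(
    model=model,
    dataset=ds,
    recipe=recipe,
    max_seq_length=512,
    num_calibration_samples=1,
)

model.save_pretrained("Step-3.5-Flash-nvfp4", save_compressed=True)
tokenizer.save_pretrained("Step-3.5-Flash-nvfp4")
```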
 
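As a side note on the "dual scaling" in the format line: NVFP4 packs values into 16-element blocks of FP4 (E2M1), with an FP8 (E4M3) scale per block plus a per-tensor FP32 scale on top. The sketch below illustrates the idea numerically; it keeps the block scale in full precision instead of rounding it to FP8, so it is illustrative only, not the exact format.

```python
import numpy as np

# Magnitudes representable in FP4 (E2M1); actual values are signed.
FP4_GRID = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0])

def nvfp4_block_roundtrip(block: np.ndarray, tensor_scale: float) -> np.ndarray:
    """Quantize one 16-element block to FP4 with dual scaling, then dequantize.

    tensor_scale plays the role of the per-tensor FP32 scale; the per-block
    scale is chosen so the largest element lands on the FP4 max (6.0).
    Real NVFP4 would round block_scale to FP8 E4M3 -- omitted here.
    """
    amax = np.abs(block).max()
    block_scale = amax / (FP4_GRID[-1] * tensor_scale) if amax > 0 else 1.0
    scaled = block / (block_scale * tensor_scale)       # map into FP4 range
    cand = np.sign(scaled)[:, None] * FP4_GRID[None, :] # same-sign candidates
    idx = np.abs(cand - scaled[:, None]).argmin(axis=1) # nearest FP4 code
    q = np.sign(scaled) * FP4_GRID[idx]
    return q * block_scale * tensor_scale               # dequantized block

x = np.random.randn(16).astype(np.float32)
print("max round-trip error:", np.abs(x - nvfp4_block_roundtrip(x, 1.0)).max())
```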