---
license: mit
base_model:
- deepseek-ai/DeepSeek-R1-0528
---

# Model Overview

- **Model Architecture:** DeepSeek-R1-0528
  - **Input:** Text
  - **Output:** Text
- **Supported Hardware Microarchitecture:** AMD MI350/MI355
- **ROCm:** 7.0
- **PyTorch:** 2.8.0
- **Transformers:** 4.53.0
- **Operating System(s):** Linux
- **Inference Engine:** [SGLang](https://docs.sglang.ai/) / [vLLM](https://docs.vllm.ai/en/latest/)
- **Model Optimizer:** [AMD-Quark](https://quark.docs.amd.com/latest/index.html) (V0.10)
  - **Weight quantization:** OCP MXFP4, static
  - **Activation quantization:** OCP MXFP4, dynamic
- **Calibration Dataset:** [Pile](https://huggingface.co/datasets/mit-han-lab/pile-val-backup)

This model was built from the deepseek-ai/DeepSeek-R1-0528 model by applying [AMD-Quark](https://quark.docs.amd.com/latest/index.html) for MXFP4 quantization.

# Model Quantization

The model was quantized from [deepseek-ai/DeepSeek-R1-0528](https://huggingface.co/deepseek-ai/DeepSeek-R1-0528) using [AMD-Quark](https://quark.docs.amd.com/latest/index.html). Both weights and activations were quantized to the OCP MXFP4 format, and the AutoSmoothQuant algorithm was applied to improve accuracy.

**Preprocessing requirement:** Before running the quantization script below, the original FP8 model must first be dequantized to BFloat16. You can either perform the dequantization yourself with this [conversion script](https://github.com/deepseek-ai/DeepSeek-V3/blob/main/inference/fp8_cast_bf16.py) or use the pre-converted BFloat16 model available at [unsloth/DeepSeek-R1-0528-BF16](https://huggingface.co/unsloth/DeepSeek-R1-0528-BF16).

**Quantization script:**

```bash
cd Quark/examples/torch/language_modeling/llm_ptq/

exclude_layers="*self_attn* *mlp.gate.* *lm_head"

python3 quantize_quark.py --model_dir $MODEL_DIR \
                          --quant_scheme w_mxfp4_a_mxfp4 \
                          --num_calib_data 128 \
                          --exclude_layers $exclude_layers \
                          --skip_evaluation \
                          --multi_gpu \
                          --quant_algo autosmoothquant \
                          --model_export hf_format \
                          --output_dir amd/DeepSeek-R1-0528-MXFP4-ASQ
```

# Deployment

This model can be deployed efficiently using the [SGLang](https://docs.sglang.ai/) and [vLLM](https://docs.vllm.ai/en/latest/) backends; example launch commands are sketched at the end of this card.

## Evaluation

The model was evaluated on the AIME24, GPQA Diamond, MATH-500, and GSM8K benchmarks. AIME24, GPQA Diamond, and MATH-500 were run with [lighteval](https://github.com/huggingface/lighteval/tree/v0.10.0) over 10 rounds with different generation seeds. GSM8K was run with [lm-eval-harness](https://github.com/EleutherAI/lm-evaluation-harness); an illustrative invocation is sketched at the end of this card.

### Accuracy
| Benchmark | DeepSeek-R1-0528 | DeepSeek-R1-0528-MXFP4-ASQ (this model) | Recovery |
|-----------|------------------|------------------------------------------|----------|
| AIME24 | 88.00 | 87.67 | 99.62% |
| GPQA Diamond | 79.90 | 79.65 | 99.69% |
| MATH-500 | 97.06 | 96.90 | 99.84% |
| GSM8K | 95.30 | 95.18 | 99.87% |
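
The commands below are a minimal serving sketch for this checkpoint with SGLang or vLLM. The tensor-parallel degree of 8 is an assumption (one full MI350/MI355 node), and the flags shown are standard SGLang/vLLM options rather than settings verified for this specific MXFP4 checkpoint; adjust them to your installed versions and hardware.

```bash
# Minimal serving sketch (assumptions: 8 GPUs per node, SGLang/vLLM builds
# with MXFP4 checkpoint support on ROCm).

# Option 1: SGLang
python3 -m sglang.launch_server \
    --model-path amd/DeepSeek-R1-0528-MXFP4-ASQ \
    --tp 8 \
    --trust-remote-code

# Option 2: vLLM
vllm serve amd/DeepSeek-R1-0528-MXFP4-ASQ \
    --tensor-parallel-size 8 \
    --trust-remote-code
```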
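
As a rough illustration of the GSM8K setup, the command below shows one way to run the benchmark with lm-eval-harness on a vLLM backend. The task name and CLI flags are standard lm-eval-harness options; the exact generation settings and few-shot configuration behind the reported numbers are not specified in this card, so treat this as a starting point rather than an exact reproduction recipe.

```bash
# Illustrative GSM8K run with lm-eval-harness on a vLLM backend
# (tensor_parallel_size=8 is an assumption; adjust to your hardware).
lm_eval --model vllm \
    --model_args pretrained=amd/DeepSeek-R1-0528-MXFP4-ASQ,tensor_parallel_size=8,trust_remote_code=True \
    --tasks gsm8k \
    --batch_size auto
```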