GGUF quants of Ring-flash-2.0
Using llama.cpp (commit 6de8ed75196c7cd98c1f34bbf3a7452451ba8ac2)
The importance matrix was generated with eaddario/imatrix-calibration
All quants, including the K quants, were generated using the importance matrix.
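For a quick local test of one of these quants, a minimal sketch using the llama-cpp-python bindings is shown below. The file name, context size, and prompt are illustrative placeholders (not files shipped with this repo), and your llama.cpp / llama-cpp-python build must be at least as new as the commit above so that this architecture is supported.

```python
# Minimal sketch: load one of these GGUF quants with the llama-cpp-python bindings.
# The file name and settings are placeholders; point model_path at the quant you downloaded.
from llama_cpp import Llama

llm = Llama(
    model_path="./Ring-flash-2.0-Q4_K_M.gguf",  # assumed local file name
    n_ctx=8192,        # context window to allocate
    n_gpu_layers=-1,   # offload all layers to the GPU when one is available
)

response = llm.create_chat_completion(
    messages=[{"role": "user", "content": "How many primes are there below 100?"}],
    max_tokens=2048,   # leave room for the model's thinking tokens
)
print(response["choices"][0]["message"]["content"])
```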
🤗 Hugging Face | 🤖 ModelScope | 🚀 Experience Now
Introduction
Today, we are officially open-sourcing Ring-flash-2.0.
This is a high-performance thinking model, deeply optimized based on Ling-flash-2.0-base. Like Ling-flash-2.0, Ring-flash-2.0 has a total of 100B parameters, with only 6.1B activated per inference. Our independently developed icepop algorithm has successfully addressed the challenge of training instability in reinforcement learning (RL) for MoE LLMs after cold-start Long-CoT SFT, enabling the model’s complex reasoning capabilities to continuously improve throughout extended RL training cycles.
Ring-flash-2.0 demonstrates significant breakthroughs across multiple challenging benchmarks, including math competitions, code generation, and logical reasoning. Its performance not only surpasses that of SOTA dense models under 40B parameters but also rivals larger open-weight MoE models and closed-source high-performance thinking model APIs.
Leading-Level Performance in Complex Reasoning
We selected representative open-source thinking models and closed-source APIs for comparison, including GPT-OSS-120B (medium), Qwen3-32B-Thinking, Seed-OSS-36B-Instruct, and Gemini-2.5-Flash.
The benchmarking results demonstrate that Ring-flash-2.0 exhibits leading performance across multiple challenging general reasoning tasks, including:
- Math competitions (AIME 25, Omni-MATH)
- Code generation (LiveCodeBench, CodeForce-Elo)
- Logical reasoning (ARC-Prize)

It also shows strong competitiveness in specialized domains such as scientific and medical reasoning (GPQA-Diamond, HealthBench).
More surprisingly, although Ring-flash-2.0 is primarily designed for complex reasoning, it outperforms all the other compared models in creative writing (Creative Writing v3) and matches the creative capability of its "twin brother", the non-thinking model Ling-flash-2.0.
Efficient Architecture, High-Speed Inference
Building on the highly efficient MoE architecture of the Ling 2.0 series, and through structural optimizations such as a __1/32 expert activation ratio__ and __MTP layers__, Ring-flash-2.0 activates only 6.1B (4.8B non-embedding) parameters while delivering performance comparable to a ∼40B dense model. Thanks to its low activation and high sparsity design, Ring-flash-2.0 achieves a high generation speed of __200+ tokens/sec__ when deployed on just four H20 GPUs, significantly reducing inference costs for thinking models in high-concurrency scenarios.
IcePop: Cooling Down Training-Inference Gaps in RL for MoE Models
During RL training for MoE models, the precision discrepancy between the training and inference engines is more pronounced than for dense models. This gap widens as sequence length and the number of training steps grow, particularly during long-sequence generation and extended training cycles. More critically, the original GRPO algorithm begins to break down within a limited number of training steps: the probability assigned to the same token by the training and inference engines gradually diverges, and once this relative difference exceeds 5%, training effectively fails. This poses a significant challenge for long-horizon reinforcement learning over lengthy sequences.
To address this issue, we introduced a key solution: distribution calibration via masked bidirectional truncation, which effectively narrows the gap between training and inference (sketched in code after the list below).
- Bidirectional Truncation: We truncate not only tokens where the training probability is significantly higher than the inference probability but also the reverse scenario where the training probability is much lower.
- Masking: Tokens with excessively large discrepancies are excluded from gradient computation.
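For intuition only, here is a rough sketch of these two steps applied per token. This is not the official icepop implementation: the threshold `delta`, the tensor names, and the GRPO-style objective around it are assumptions made for illustration.

```python
import torch

def masked_bidirectional_truncation(logp_train, logp_infer, advantages, delta=0.2):
    """Illustrative sketch only: calibrate a GRPO-style per-token objective by
    dropping tokens whose training/inference probability ratio drifts too far
    in either direction. `delta` is an assumed threshold, not the paper's value."""
    # Per-token ratio between the probability assigned by the training engine
    # and the probability assigned by the inference engine that generated the rollout.
    ratio = torch.exp(logp_train - logp_infer.detach())

    # Bidirectional truncation: flag tokens whose ratio is too high OR too low.
    keep = (ratio >= 1.0 - delta) & (ratio <= 1.0 + delta)

    # Masking: tokens with excessive discrepancy are excluded from gradient computation.
    per_token = ratio * advantages * keep.float()
    return per_token.sum() / keep.float().sum().clamp(min=1.0)
```

In practice this masked term would be plugged into the full policy-gradient loss; the exact formulation is given in the technical blog linked below.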
For a detailed introduction to the algorithm, please refer to our technical blog: https://ringtech.notion.site/icepop
SFT + RLVR + RLHF Multi-Stage Training
To comprehensively enhance the capabilities of Ring-flash-2.0, we designed a two-stage RL pipeline. First, lightweight Long-CoT SFT equips the Ling-flash-2.0-base model with diverse thinking patterns. This is followed by RL training with Verifiable Rewards (RLVR) to continually stimulate the model's reasoning potential. Finally, an RLHF phase is incorporated to improve the model's general abilities.
During RL training, we compared joint training that directly combines RLVR and RLHF against the two-stage RL pipeline we ultimately adopted. Both approaches showed similar effectiveness in our experiments. However, because RLVR and RLHF tasks differ in difficulty, and RLHF involves relatively shorter model rollouts, joint training produced more long-tail generations. From an engineering-efficiency perspective, we therefore adopted the two-stage RL approach.
Base model: inclusionAI/Ling-flash-base-2.0