Update README.md

VibeThinker-1.5B is a 1.5-billion parameter dense language model.
## Key Performance Data

- **Mathematical Reasoning:** On the three major math benchmarks AIME24, AIME25, and HMMT25, its scores (80.3, 74.5, and 50.4, respectively) all surpass those of the initial DeepSeek R1 model, which has over 400 times the parameters (scores of 79.8, 70.0, and 41.7, respectively).
- **Code Generation:** It achieved scores of 55.9 on LiveCodeBench v5 and 51.1 on v6. Its v6 score slightly leads Magistral Medium (50.3), underscoring its strong reasoning performance.
- On the AIME25 benchmark, VibeThinker-1.5B significantly extends the Pareto frontier of reasoning accuracy versus model scale, demonstrating that exceptional performance can be achieved with extreme parameter efficiency.
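The "over 400 times the parameters" comparison can be checked with a quick back-of-the-envelope calculation. This sketch assumes DeepSeek R1's publicly reported total parameter count of 671B, which is not stated in this README:

```python
# Parameter-count ratio behind the "over 400x" claim.
# Assumption: 671B total parameters for DeepSeek R1 (from public reports,
# not from this README); VibeThinker-1.5B has 1.5B parameters.

deepseek_r1_params = 671e9   # assumed total parameter count
vibethinker_params = 1.5e9   # 1.5-billion parameter dense model

ratio = deepseek_r1_params / vibethinker_params
print(f"DeepSeek R1 is ~{ratio:.0f}x larger")  # ~447x, i.e. over 400x
```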