Update README.md
README.md CHANGED
@@ -1,114 +1,59 @@
- ---
- base_model:
-
- - open-thoughts/OpenThoughts3-1.2M
- library_name: transformers
- license: apache-2.0
- tags:
- - llama-factory
- - full
- - generated_from_trainer
- model-index:
- - name: OpenThinker3-1.5B
-   results: []
- pipeline_tag: text-generation
- ---
-
- <p align="center">
-   <img src="https://huggingface.co/datasets/open-thoughts/open-thoughts-114k/resolve/main/open_thoughts.png" width="50%">
- </p>
-
- <p align="center">
-   <a href="https://arxiv.org/abs/2506.04178" style="margin-right: 24px;">paper</a> |
-   <a href="https://huggingface.co/datasets/open-thoughts/OpenThoughts3-1.2M" style="margin-right: 24px; margin-left: 24px;">dataset</a> |
-   <a href="https://huggingface.co/open-thoughts/OpenThinker3-7B" style="margin-left: 24px;">model</a>
- </p>
-
- > [!NOTE]
- >
-
- # OpenThinker3
-
-
-
- This model is a fine-tuned version of [Qwen/Qwen2.5-1.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-1.5B-Instruct) on the
- [OpenThoughts3-1.2M](https://huggingface.co/datasets/open-thoughts/OpenThoughts3-1.2M) dataset.
-
- See our [paper](https://arxiv.org/abs/2506.04178) for more details.
-
- # Evaluation Results
-
- The numbers reported in the table below are evaluated with our open-source tool [Evalchemy](https://github.com/mlfoundations/Evalchemy).
- In the table below, we bold values in each column that are within 2 standard errors of the best.
-
- | Model | AIME24 | AIME25 | AMC23 | MATH500 | HMMT O2/25 | LCB 06/24-01/25 | CodeElo | CodeForces | GPQA-D | JEEBench |
- | ----- | ------ | ------ | ----- | ------- | ---------- | --------------- | ------- | ---------- | ------ | -------- |
- | **[OpenThinker3-1.5B](https://huggingface.co/open-thoughts/OpenThinker3-1.5B)** | **52.0** | **41.7** | **87.0** | 86.4 | **27.3** | **39.4** | 12.9 | 15.5 | 29.5 | 51.9 |
- | [DeepSeek-R1-Distill-Qwen-1.5B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B) | 32.3 | 23.7 | 71.8 | 80.8 | 15.3 | 27.2 | 8.8 | 8.5 | 31.1 | 32.5 |
- | [Nemotron-Research-Reasoning-Qwen-1.5B](https://huggingface.co/nvidia/Nemotron-Research-Reasoning-Qwen-1.5B) | **47.7** | 32.0 | **87.5** | 86.0 | 21.7 | 31.4 | **54.7** | **40.3** | 41.8 | 52.6 |
- | [Qwen3-1.7B](https://huggingface.co/Qwen/Qwen3-1.7B) | **52.0** | 35.3 | 83.8 | **87.2** | 23.3 | 27.7 | 20.7 | 20.0 | **49.3** | **60.7** |
- | [Qwen2.5-1.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-1.5B-Instruct) | 3.0 | 0.7 | 30.8 | 50.2 | 0.0 | 5.5 | 0.8 | 2.2 | 24.7 | 16.4 |
-
- # Data
-
- This model was trained on the [OpenThoughts3-1.2M](https://huggingface.co/datasets/open-thoughts/OpenThoughts3-1.2M) dataset.
-
- The key to the strong model performance is our comprehensive data pipeline and more than 1,000 ablation experiments.
- This led to the creation of [OpenThoughts3-1.2M](https://huggingface.co/datasets/open-thoughts/OpenThoughts3-1.2M), which consists of 850,000 math questions, 250,000 code questions, and 100,000 science questions.
- Reasoning traces are generated with QwQ-32B.
-
- See the [OpenThoughts3-1.2M](https://huggingface.co/datasets/open-thoughts/OpenThoughts3-1.2M) dataset page or our [paper](https://arxiv.org/abs/2506.04178) for additional information.
-
-
- # Intended uses & limitations
-
- Apache 2.0 License
-
- ## Training
-
-
-
-
-
-
-
- - train_batch_size: 4
- - eval_batch_size: 8
- - seed: 42
- - distributed_type: multi-GPU
- - num_devices: 64
- - total_train_batch_size: 256
- - optimizer: AdamW (torch) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- - lr_scheduler_type: cosine
- - lr_scheduler_warmup_ratio: 0.1
- - num_epochs: 7.0
-
- ## Framework versions
-
- - Transformers 4.46.1
- - Pytorch 2.3.0
- - Datasets 3.1.0
- - Tokenizers 0.20.3
-
- More info can be found in our repository: [https://github.com/open-thoughts/open-thoughts](https://github.com/open-thoughts/open-thoughts).
-
- # Links
- - 📝 [OpenThoughts3 paper](https://arxiv.org/abs/2506.04178)
- -
- -
- -
- - 🤗 [OpenThinker3-7B model](https://huggingface.co/open-thoughts/OpenThinker3-7B)
- - 🤗 [OpenThinker3-1.5B model](https://huggingface.co/open-thoughts/OpenThinker3-1.5B) - this model.
-
-
- # Citation
- ```
- @
-   title={OpenThoughts: Data Recipes for Reasoning Models},
-   author={
-   year={2025},
-   eprint={2506.04178},
-   archivePrefix={arXiv},
-   primaryClass={cs.LG},
-   url={https://arxiv.org/abs/2506.04178},
- }
- ```
+ ---
+ base_model:
+ - open-thoughts/OpenThinker3-1.5B
+ library_name: transformers
+ license: apache-2.0
+ model-index:
+ - name: OpenThinker3-1.5B-RLVE
+   results: []
+ pipeline_tag: text-generation
+ ---
+
+ > [!NOTE]
+ > For full details, see the RLVE paper [here](https://arxiv.org/abs/TODO).
+
+ # OpenThinker3-1.5B-RLVE
+
+ OpenThinker3-1.5B with additional RLVE training.
+
+ This model is a version of [OpenThinker3-1.5B](https://huggingface.co/open-thoughts/OpenThinker3-1.5B) that has undergone additional RLVE training.
+
+ See our [paper](https://arxiv.org/abs/TODO) for more details.
+
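+ A minimal inference sketch with the 🤗 `transformers` text-generation API follows; the prompt, decoding settings, and token budget are illustrative assumptions, not settings from the paper.
+
+ ```python
+ # Hedged usage sketch for OpenThinker3-1.5B-RLVE; sampling values are assumptions.
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+
+ model_id = "hamishivi/OpenThinker3-1.5B-RLVE"
+ tokenizer = AutoTokenizer.from_pretrained(model_id)
+ model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")
+
+ # Chat-format the prompt; reasoning models emit a long thinking trace before the final answer.
+ messages = [{"role": "user", "content": "What is 7 * 8 + 12?"}]
+ inputs = tokenizer.apply_chat_template(
+     messages, add_generation_prompt=True, return_tensors="pt"
+ ).to(model.device)
+
+ # Leave a generous token budget for the reasoning trace.
+ outputs = model.generate(inputs, max_new_tokens=4096, do_sample=True, temperature=0.6, top_p=0.95)
+ print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
+ ```
+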
+ # Evaluation Results
+
+ | Model | AIME 2024 (Avg@64) | AIME 2025 (Avg@64) | OMEGA-500 (Avg@4) | OlympiadBench (Avg@4) | BBEH (Avg@4) | LiveCodeBench-v6 (Pass@8) |
+ |:------|:------------------:|:------------------:|:-----------------:|:---------------------:|:------------:|:-------------------------:|
+ | [OpenThinker3-1.5B](https://huggingface.co/open-thoughts/OpenThinker3-1.5B) | 54.32 | 42.03 | 25.15 | 56.85 | 4.00 | 28.17 |
+ | [OpenThinker3-1.5B-RLVE](https://huggingface.co/hamishivi/OpenThinker3-1.5B-RLVE) | **58.18** | **49.90** | **29.45** | **62.67** | **7.13** | **34.07** |
+
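+ For reference, Avg@k is the mean score over k sampled generations per problem, and Pass@k is the probability that at least one of k samples is correct. A small sketch of these two estimators follows (an illustration of the metric definitions, not the evaluation harness used here):
+
+ ```python
+ # Sketch of the Avg@k and Pass@k metrics reported above (illustrative only).
+ from math import comb
+
+ def avg_at_k(scores: list[float]) -> float:
+     """Avg@k: mean score across the k sampled generations for one problem."""
+     return sum(scores) / len(scores)
+
+ def pass_at_k(n: int, c: int, k: int) -> float:
+     """Unbiased Pass@k estimate from n samples with c correct: 1 - C(n-c, k) / C(n, k)."""
+     if n - c < k:
+         return 1.0  # every size-k draw must contain a correct sample
+     return 1.0 - comb(n - c, k) / comb(n, k)
+
+ print(pass_at_k(n=8, c=3, k=1))  # 0.375, matching the raw success rate c/n
+ ```
+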
+ # Intended uses & limitations
+
+ Apache 2.0 License
+
+ ## Training
+
+ For training details and hyperparameters, please see [our repository](https://github.com/Zhiyuan-Zeng/RLVE).
+
+ In particular, you can rerun training for this model with this command (after setting up the repository):
+ ```bash
+ bash scripts/training/OpenThinker3-1.5B/rlve/num-environment=400.sh RLVE
+ ```
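+
+ (The `num-environment=400` suffix presumably selects how many verifiable training environments the run uses; other configurations live alongside it under `scripts/training/` in the repository.)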
+
+ # Links
+ - 📝 [RLVE Paper](https://arxiv.org/abs/TODO)
+ - 💻 [RLVE GitHub Repository](https://github.com/Zhiyuan-Zeng/RLVE)
+ - 🤗 [Nemotron-Research-Reasoning-Qwen-1.5B-v2-RLVE](https://huggingface.co/hamishivi/Nemotron-Research-Reasoning-Qwen-1.5B-v2-RLVE)
+ - 🤗 [OpenThinker3-1.5B-RLVE](https://huggingface.co/hamishivi/OpenThinker3-1.5B-RLVE) - this model.
+
+ # Citation
+ ```
+ @article{zeng2025rlve,
+   title={RLVE: Scaling Up Reinforcement Learning for Language Models with Adaptive Verifiable Environments},
+   author={Zeng, Zhiyuan and Ivison, Hamish and Wang, Yiping and Yuan, Lifan and Li, Shuyue Stella and Ye, Zhuorui and Li, Siting and He, Jacqueline and Zhou, Runlong and Chen, Tong and Zhao, Chenyang and Tsvetkov, Yulia and Du, Simon Shaolei and Jaques, Natasha and Peng, Hao and Koh, Pang Wei and Hajishirzi, Hannaneh},
+   journal={[TODO ARXIV]},
+   year={2025}
+ }
+ ```