sedrickkeh committed on
Commit 8d156e5 · verified · 1 Parent(s): 939a67b

Update README.md

Files changed (1):
  1. README.md +75 -19
README.md CHANGED
@@ -1,38 +1,75 @@
  ---
  library_name: transformers
  license: apache-2.0
- base_model: Qwen/Qwen2.5-1.5B-Instruct
  tags:
  - llama-factory
  - full
  - generated_from_trainer
  model-index:
- - name: openthoughts3_full_qwen25_1b
    results: []
  ---

- <!-- This model card has been generated automatically according to the information the Trainer had access to. You
- should probably proofread and complete it, then remove this comment. -->

- # openthoughts3_full_qwen25_1b

- This model is a fine-tuned version of [Qwen/Qwen2.5-1.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-1.5B-Instruct) on the mlfoundations-dev/openthoughts3 dataset.

- ## Model description

- More information needed

- ## Intended uses & limitations

- More information needed

- ## Training and evaluation data

- More information needed

  ## Training procedure

- ### Training hyperparameters

  The following hyperparameters were used during training:
  - learning_rate: 0.00016
@@ -42,19 +79,38 @@ The following hyperparameters were used during training:
  - distributed_type: multi-GPU
  - num_devices: 64
  - total_train_batch_size: 256
- - total_eval_batch_size: 512
  - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
  - lr_scheduler_type: cosine
  - lr_scheduler_warmup_ratio: 0.1
  - num_epochs: 7.0

- ### Training results
-
-
-
- ### Framework versions

  - Transformers 4.46.1
  - Pytorch 2.3.0
  - Datasets 3.1.0
  - Tokenizers 0.20.3

  ---
+ base_model: Qwen/Qwen2.5-1.5B-Instruct
+ datasets:
+ - open-thoughts/OpenThoughts3-1.2M
  library_name: transformers
  license: apache-2.0
  tags:
  - llama-factory
  - full
  - generated_from_trainer
  model-index:
+ - name: OpenThinker3-1.5B
    results: []
+ pipeline_tag: text-generation
  ---

+ <p align="center">
+ <img src="https://huggingface.co/datasets/open-thoughts/open-thoughts-114k/resolve/main/open_thoughts.png" width="50%">
+ </p>
+
+ <p align="center">
+ <a href="https://arxiv.org/abs/2506.04178" style="margin-right: 24px;">paper</a> |
+ <a href="https://huggingface.co/datasets/open-thoughts/OpenThoughts3-1.2M" style="margin-right: 24px; margin-left: 24px;">dataset</a> |
+ <a href="https://huggingface.co/open-thoughts/OpenThinker3-7B" style="margin-left: 24px;">model</a>
+ </p>
+
+ > [!NOTE]
+ > We have released a paper for OpenThoughts! Read it [here](https://arxiv.org/abs/2506.04178).
+
+ # OpenThinker3-1.5B
+
+ State-of-the-art SFT-only 1.5B reasoning model. 🚀
+
+ This model is a fine-tuned version of [Qwen/Qwen2.5-1.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-1.5B-Instruct) on the
+ [OpenThoughts3-1.2M](https://huggingface.co/datasets/open-thoughts/OpenThoughts3-1.2M) dataset.
+
+ See our [paper](https://arxiv.org/abs/2506.04178) and [blog post](https://openthoughts.ai/blog/ot3) for more details.

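As a quick start, here is a minimal inference sketch using Hugging Face Transformers. It assumes the `open-thoughts/OpenThinker3-1.5B` repo id listed in the Links section and a single available GPU; the prompt and generation settings are illustrative only.

```python
# Minimal inference sketch (illustrative; repo id taken from the Links section below).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "open-thoughts/OpenThinker3-1.5B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

messages = [{"role": "user", "content": "What is the sum of the first 100 positive integers?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Reasoning models emit a long chain of thought, so leave a generous token budget.
output_ids = model.generate(input_ids, max_new_tokens=4096)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```
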
+ # Evaluation Results
+
+ The numbers reported in the table below were obtained with our open-source evaluation tool [Evalchemy](https://github.com/mlfoundations/Evalchemy).
+ We bold values in each column that are within two standard errors of the best.
+
+ | Model | Data | AIME24 | AIME25 | AMC23 | MATH500 | HMMT O2/25 | LCB 06/24-01/25 | CodeElo | CodeForces | GPQA-D | JEEBench |
+ | ----- | ---- | ------ | ------ | ----- | ------- | ---------- | --------------- | ------- | ---------- | ------ | -------- |
+ | [OpenThinker-7B](https://huggingface.co/open-thoughts/OpenThinker-7B) | ✅ | 30.7 | 22.0 | 72.5 | 82.8 | 15.7 | 26.1 | 11.1 | 14.9 | 38.6 | 45.3 |
+ | [OpenThinker2-7B](https://huggingface.co/open-thoughts/OpenThinker2-7B) | ✅ | 60.7 | 38.7 | 89.8 | 87.6 | 24.7 | 40.6 | 22.8 | 26.6 | 47.0 | 65.1 |
+ | **[OpenThinker3-7B](https://huggingface.co/open-thoughts/OpenThinker3-7B)** | ✅ | **69.0** | **53.3** | **93.5** | **90.0** | **42.7** | **51.7** | 31.0 | **32.2** | 53.7 | **72.4** |
+ | [DeepSeek-R1-Distill-Qwen-32B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-32B) | ❌ | 51.3 | 38.0 | 92.0 | 88.0 | 25.0 | 34.5 | 19.9 | 21.1 | 33.2 | 50.4 |
+ | [OpenR1-Distill-7B](https://huggingface.co/open-r1/OpenR1-Distill-7B) | ✅ | 57.7 | 39.7 | 87.0 | 88.0 | 25.7 | 30.7 | 30.1 | 29.3 | **58.9** | 68.7 |
+ | [Llama-3.1-Nemotron-Nano-8B-v1](https://huggingface.co/nvidia/Llama-3.1-Nemotron-Nano-8B-v1) | ✅ | 62.0 | 48.0 | **94.0** | 89.4 | 26.7 | **50.9** | 30.9 | **32.9** | 52.9 | 70.7 |
+ | [AceReason-Nemotron-7B](https://huggingface.co/nvidia/AceReason-Nemotron-7B) | ✅ | **71.0** | 50.7 | **93.8** | 89.8 | 33.3 | 44.3 | **32.9** | **30.9** | 52.9 | 64.3 |

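For concreteness, the bolding rule can be sketched in a few lines. The scores and standard errors below are hypothetical placeholders (the table reports means only), and using each model's own standard error is an assumption about how the rule is applied.

```python
# Hypothetical scores for one benchmark column: model -> (mean, standard error).
scores = {
    "model_a": (69.0, 1.5),
    "model_b": (62.0, 1.6),
    "model_c": (71.0, 1.4),
}

best_mean = max(mean for mean, _ in scores.values())
for name, (mean, stderr) in scores.items():
    # Bold a value if it lies within two standard errors of the best mean.
    bold = best_mean - mean <= 2 * stderr
    print(f"{name}: {'**' + str(mean) + '**' if bold else mean}")
```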
 
+ # Data
+
+ This model was trained on the [OpenThoughts3-1.2M](https://huggingface.co/datasets/open-thoughts/OpenThoughts3-1.2M) dataset.
+
+ The key to the strong model performance is our comprehensive data pipeline and more than 1,000 ablation experiments.
+ This led to the creation of [OpenThoughts3-1.2M](https://huggingface.co/datasets/open-thoughts/OpenThoughts3-1.2M), which consists of 850,000 math questions, 250,000 code questions, and 100,000 science questions.
+ Reasoning traces are generated with QwQ-32B.
+
+ See the [OpenThoughts3-1.2M](https://huggingface.co/datasets/open-thoughts/OpenThoughts3-1.2M) dataset page or our [paper](https://arxiv.org/abs/2506.04178) for additional information.

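To browse the training data directly, a minimal sketch with the `datasets` library is shown below. It assumes a `train` split exists and makes no claims about the exact column names; check the dataset card for the schema.

```python
# Stream a few examples from OpenThoughts3-1.2M without downloading all 1.2M rows.
from datasets import load_dataset

ds = load_dataset("open-thoughts/OpenThoughts3-1.2M", split="train", streaming=True)
for i, example in enumerate(ds):
    print(sorted(example.keys()))  # column names depend on the dataset schema
    if i >= 2:
        break
```
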
+ # Intended uses & limitations
+
+ This model is released under the Apache 2.0 License.
+
  ## Training procedure
+
+ We used 64 A100 GPUs to train the model for 7 days.
+
+ ## Training hyperparameters

  The following hyperparameters were used during training (an illustrative mapping to Hugging Face `TrainingArguments` is sketched after this list):
  - learning_rate: 0.00016
  - distributed_type: multi-GPU
  - num_devices: 64
  - total_train_batch_size: 256
  - optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
  - lr_scheduler_type: cosine
  - lr_scheduler_warmup_ratio: 0.1
  - num_epochs: 7.0

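The run itself was launched with LLaMA-Factory (see the `llama-factory` tag). Purely as an illustration, the reported values map onto Hugging Face `TrainingArguments` roughly as below; the per-device batch size / gradient-accumulation split of the total batch size of 256 across 64 GPUs and the bf16 setting are assumptions, not reported values.

```python
# Illustrative mapping of the reported hyperparameters onto TrainingArguments.
# The actual run used LLaMA-Factory; batch-size split and precision are assumptions.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="openthinker3-1.5b-sft",   # hypothetical output directory
    learning_rate=1.6e-4,
    num_train_epochs=7.0,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    per_device_train_batch_size=1,        # assumed: 1 per GPU x 64 GPUs x 4 accumulation steps = 256
    gradient_accumulation_steps=4,
    bf16=True,                            # assumed precision
)
```
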
+ ## Framework versions

  - Transformers 4.46.1
  - Pytorch 2.3.0
  - Datasets 3.1.0
  - Tokenizers 0.20.3
+
+ More info can be found in our repository: [https://github.com/open-thoughts/open-thoughts](https://github.com/open-thoughts/open-thoughts).
+
+ # Links
+ - 📝 [OpenThoughts Paper](https://arxiv.org/abs/2506.04178)
+ - 📊 [OpenThoughts3-1.2M and OpenThinker3-7B Blog Post](https://www.open-thoughts.ai/blog/ot3)
+ - 💻 [Open Thoughts GitHub Repository](https://github.com/open-thoughts/open-thoughts)
+ - 🧠 [OpenThoughts3-1.2M dataset](https://huggingface.co/datasets/open-thoughts/OpenThoughts3-1.2M)
+ - 🤖 [OpenThinker3-7B model](https://huggingface.co/open-thoughts/OpenThinker3-7B)
+ - 🤖 [OpenThinker3-1.5B model](https://huggingface.co/open-thoughts/OpenThinker3-1.5B) - this model.
+
+ # Citation
+ ```
+ @misc{guha2025openthoughtsdatarecipesreasoning,
+ title={OpenThoughts: Data Recipes for Reasoning Models},
+ author={Etash Guha and Ryan Marten and Sedrick Keh and Negin Raoof and Georgios Smyrnis and Hritik Bansal and Marianna Nezhurina and Jean Mercat and Trung Vu and Zayne Sprague and Ashima Suvarna and Benjamin Feuer and Liangyu Chen and Zaid Khan and Eric Frankel and Sachin Grover and Caroline Choi and Niklas Muennighoff and Shiye Su and Wanjia Zhao and John Yang and Shreyas Pimpalgaonkar and Kartik Sharma and Charlie Cheng-Jie Ji and Yichuan Deng and Sarah Pratt and Vivek Ramanujan and Jon Saad-Falcon and Jeffrey Li and Achal Dave and Alon Albalak and Kushal Arora and Blake Wulfe and Chinmay Hegde and Greg Durrett and Sewoong Oh and Mohit Bansal and Saadia Gabriel and Aditya Grover and Kai-Wei Chang and Vaishaal Shankar and Aaron Gokaslan and Mike A. Merrill and Tatsunori Hashimoto and Yejin Choi and Jenia Jitsev and Reinhard Heckel and Maheswaran Sathiamoorthy and Alexandros G. Dimakis and Ludwig Schmidt},
+ year={2025},
+ eprint={2506.04178},
+ archivePrefix={arXiv},
+ primaryClass={cs.LG},
+ url={https://arxiv.org/abs/2506.04178},
+ }
+ ```