---
base_model:
  - open-thoughts/OpenThinker3-1.5B
library_name: transformers
license: apache-2.0
model-index:
  - name: OpenThinker3-1.5B-RLVE
    results: []
pipeline_tag: text-generation
tags:
  - arxiv:2511.07317
---

For full information, check out the [RLVE paper](https://arxiv.org/abs/2511.07317).

# OpenThinker3-1.5B-RLVE

This model is trained on top of [OpenThinker3-1.5B](https://huggingface.co/open-thoughts/OpenThinker3-1.5B) using RLVE. For an overview of RLVE, see the figure below or our paper.

*Figure 1*

## Evaluation Results

We provide evaluation instructions in our repository.

| Benchmark | AIME 2024 (Avg@64) | AIME 2025 (Avg@64) | OMEGA-500 (Avg@4) | OlympiadBench (Avg@4) | BBEH (Avg@4) | LiveCodeBench-v6 (Pass@8) |
|---|---|---|---|---|---|---|
| OpenThinker3-1.5B (starting model) | 54.32 | 42.03 | 25.15 | 56.85 | 4.00 | 28.17 |
| OpenThinker3-1.5B-RLVE | 58.18 | 49.90 | 29.45 | 62.67 | 7.13 | 34.07 |
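
The Avg@k columns report mean accuracy over k sampled generations per problem, while LiveCodeBench-v6 uses Pass@8. As a reference point, the sketch below shows the standard unbiased pass@k estimator (Chen et al., 2021); the function and example values here are illustrative, not our exact evaluation code, which lives in our repository.

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Standard unbiased pass@k estimator (Chen et al., 2021).

    n: total generations sampled per problem
    c: number of generations that pass
    k: evaluation budget
    """
    if n - c < k:
        # Fewer failing samples than the budget: a success is guaranteed.
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)

# Illustrative example: 3 of 16 samples pass, estimate Pass@8.
print(pass_at_k(n=16, c=3, k=8))
```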

## Intended uses & limitations

This model is released under the Apache 2.0 license.
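
Below is a minimal inference sketch using the Hugging Face `transformers` text-generation pipeline. The model id is an assumption based on this card's name (adjust it to the actual repository), and the prompt and generation settings are illustrative.

```python
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="hamishivi/OpenThinker3-1.5B-RLVE",  # hypothetical repo id; adjust as needed
    torch_dtype="auto",
    device_map="auto",  # requires accelerate
)

messages = [{"role": "user", "content": "What is 13 * 24? Think step by step."}]
out = generator(messages, max_new_tokens=2048)

# The pipeline returns the full chat; the last message is the model's reply.
print(out[0]["generated_text"][-1]["content"])
```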

## Training

For training details and hyperparameters, please see our repository.

In particular, you can rerun training for this model with this command (after setting up the repository):

```bash
bash scripts/training/OpenThinker3-1.5B/rlve/num-environment=400.sh RLVE
```

## Links

- Paper: https://arxiv.org/abs/2511.07317
- Base model: https://huggingface.co/open-thoughts/OpenThinker3-1.5B

## Citation

```bibtex
@article{zeng2025rlve,
  title={RLVE: Scaling Up Reinforcement Learning for Language Models with Adaptive Verifiable Environments},
  author={Zeng, Zhiyuan and Ivison, Hamish and Wang, Yiping and Yuan, Lifan and Li, Shuyue Stella and Ye, Zhuorui and Li, Siting and He, Jacqueline and Zhou, Runlong and Chen, Tong and Zhao, Chenyang and Tsvetkov, Yulia and Du, Simon Shaolei and Jaques, Natasha and Peng, Hao and Koh, Pang Wei and Hajishirzi, Hannaneh},
  journal={arXiv preprint arXiv:2511.07317},
  year={2025}
}
```