---
library_name: transformers
tags:
- prime-rl
- verifiers
- prime-intellect
- reinforcement-learning
- reasoning
- agentic
- mixture-of-experts
license: mit
language:
- en
base_model:
- zai-org/GLM-4.5-Air-Base
pipeline_tag: text-generation
---
# INTELLECT-3.1
*A 100B+ parameter Mixture-of-Experts model trained with large-scale reinforcement learning.*
## Introduction
**INTELLECT-3.1** is a 106B-parameter (12B active) Mixture-of-Experts reasoning model, produced by continuing the training of [INTELLECT-3](https://huggingface.co/PrimeIntellect/INTELLECT-3) with additional reinforcement learning on math, coding, software engineering, and agentic tasks.
Training was performed with [prime-rl](https://github.com/PrimeIntellect-ai/prime-rl) using environments built with the [verifiers](https://github.com/PrimeIntellect-ai/verifiers) library.
All training and evaluation environments are available on the [Environments Hub](https://app.primeintellect.ai/dashboard/environments).
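As a minimal sketch of how a Hub environment can be loaded with the verifiers library (the environment id below is a placeholder, not one of this model's training environments; install real ones from the Environments Hub first):

```python
import verifiers as vf

# Placeholder environment id for illustration; browse the Environments Hub
# for the actual environments used to train and evaluate this model, and
# install them locally before loading.
env = vf.load_environment("example-math-env")
```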
The model, training frameworks, and environments are open-sourced under fully-permissive licenses (MIT and Apache 2.0).
For more details, see the [technical report](https://storage.googleapis.com/intellect-3-paper/INTELLECT_3_Technical_Report.pdf).
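For quick local experimentation, here is a minimal inference sketch with `transformers` (assuming a recent release with native support for the GLM-4.5-Air architecture; at ~106B parameters the model must be sharded across several GPUs, hence `device_map="auto"`):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "PrimeIntellect/INTELLECT-3.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",  # use the checkpoint's native precision
    device_map="auto",   # shard the MoE across available GPUs
)

messages = [{"role": "user", "content": "Prove that sqrt(2) is irrational."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=1024)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```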
## Serving with vLLM
The model can be served on 2x H200s:
```bash
vllm serve PrimeIntellect/INTELLECT-3.1 \
    --tensor-parallel-size 2 \
    --enable-auto-tool-choice \
    --tool-call-parser qwen3_coder \
    --reasoning-parser deepseek_r1
```
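Once the server is running, it exposes an OpenAI-compatible API (at `http://localhost:8000/v1` by default), so any OpenAI client can query the model. A minimal example:

```python
from openai import OpenAI

# vLLM's OpenAI-compatible endpoint; no real API key is needed locally.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="PrimeIntellect/INTELLECT-3.1",
    messages=[{"role": "user", "content": "What is the 10th prime number?"}],
    max_tokens=2048,
)
print(response.choices[0].message.content)
```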
## Citation
```bibtex
@misc{intellect3.1,
  title={INTELLECT-3.1: Technical Report},
  author={Prime Intellect Team},
  year={2025},
  url={https://huggingface.co/PrimeIntellect/INTELLECT-3.1}
}
```