INTELLECT-3.1

INTELLECT-3.1: A 100B+ MoE trained with large-scale RL

Trained with prime-rl and verifiers
Environments released on Environments Hub
Read the Blog & Technical Report
X | Discord | Prime Intellect Platform

Introduction

INTELLECT-3.1 is a 106B (A12B) parameter Mixture-of-Experts reasoning model built as a continued training of INTELLECT-3 with additional reinforcement learning on math, coding, software engineering, and agentic tasks.

Training was performed with prime-rl using environments built with the verifiers library. All training and evaluation environments are available on the Environments Hub.

The model, training frameworks, and environments are open-sourced under fully-permissive licenses (MIT and Apache 2.0).

For more details, see the technical report.

Serving with vLLM

The model can be served with vLLM on 2x H200 GPUs:

vllm serve PrimeIntellect/INTELLECT-3.1 \
    --tensor-parallel-size 2 \
    --enable-auto-tool-choice \
    --tool-call-parser qwen3_coder \
    --reasoning-parser deepseek_r1
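Once the server is up, it exposes an OpenAI-compatible API on port 8000 (vLLM's default). The sketch below builds a chat completion request using only the Python standard library; the prompt, temperature, and port are illustrative assumptions, not recommended settings from the report.

```python
import json
import urllib.request

# Build an OpenAI-compatible chat completion request for the local vLLM server.
# Port 8000 is vLLM's default; adjust if you pass --port to `vllm serve`.
payload = {
    "model": "PrimeIntellect/INTELLECT-3.1",
    "messages": [
        {"role": "user", "content": "Prove that sqrt(2) is irrational."}
    ],
    "temperature": 0.6,  # illustrative value, not an official recommendation
}
req = urllib.request.Request(
    "http://localhost:8000/v1/chat/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)

# Uncomment once the server is running:
# with urllib.request.urlopen(req) as resp:
#     body = json.load(resp)
#     print(body["choices"][0]["message"]["content"])
```

With `--enable-auto-tool-choice` and the parsers set as above, tool calls and reasoning traces are returned in the standard OpenAI response fields, so any OpenAI-compatible client library works the same way.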

Citation

@misc{intellect3.1,
  title={INTELLECT-3.1: Technical Report},
  author={Prime Intellect Team},
  year={2025},
  url={https://huggingface.co/PrimeIntellect/INTELLECT-3.1}
}