---
license: apache-2.0
language:
- en
- zh
library_name: transformers
pipeline_tag: text-generation
tags:
- llm
- nanbeige
- heretic
- uncensored
- decensored
- abliterated
base_model:
- Nanbeige/Nanbeige4-3B-Base
---

# This is a decensored version of [Nanbeige/Nanbeige4.1-3B](https://huggingface.co/Nanbeige/Nanbeige4.1-3B), made using [Heretic](https://github.com/p-e-w/heretic) v1.2.0

## Abliteration parameters

| Parameter | Value |
| :-------- | :---: |
| **direction_index** | 13.45 |
| **attn.o_proj.max_weight** | 1.02 |
| **attn.o_proj.max_weight_position** | 22.93 |
| **attn.o_proj.min_weight** | 0.73 |
| **attn.o_proj.min_weight_distance** | 6.77 |
| **mlp.down_proj.max_weight** | 1.25 |
| **mlp.down_proj.max_weight_position** | 29.81 |
| **mlp.down_proj.min_weight** | 0.99 |
| **mlp.down_proj.min_weight_distance** | 18.35 |

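For intuition: these parameters describe, per weight family, how strongly the refusal direction is ablated at each layer, with the weight peaking at `max_weight_position` and falling off toward `min_weight` with distance. The sketch below only illustrates that shape under the assumption of a simple linear falloff; Heretic's actual kernel may differ, so consult the Heretic repository for the real computation.

```python
# Illustrative only (assumes a linear falloff; Heretic's actual kernel may differ):
# per-layer ablation weight derived from the four kernel parameters above.
def ablation_weight(layer: float, max_weight: float, max_weight_position: float,
                    min_weight: float, min_weight_distance: float) -> float:
    distance = abs(layer - max_weight_position)
    t = min(distance / min_weight_distance, 1.0)  # 0 at the peak, 1 at or beyond the cutoff
    return max_weight + (min_weight - max_weight) * t

# Example: attn.o_proj weights across a hypothetical 36-layer stack.
o_proj_weights = [ablation_weight(l, 1.02, 22.93, 0.73, 6.77) for l in range(36)]
```
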
## Performance

| Metric | This model | Original model ([Nanbeige/Nanbeige4.1-3B](https://huggingface.co/Nanbeige/Nanbeige4.1-3B)) |
| :----- | :--------: | :---------------------------: |
| **KL divergence** | 0.0011 | 0 *(by definition)* |
| **Refusals** | 1/100 | 97/100 |

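The KL divergence row measures how far the decensored model's next-token distributions drift from the original on harmless prompts; a value of 0.0011 indicates the output distribution stays very close to the original outside of refusal behavior. A rough sketch of how such a comparison could be computed (not Heretic's exact procedure; `model_orig` and `model_heretic` are placeholder names):

```python
import torch
import torch.nn.functional as F

# Rough sketch (not Heretic's exact procedure): KL divergence between the
# original and decensored models' next-token distributions for one prompt.
def next_token_kl(model_orig, model_heretic, input_ids):
    with torch.no_grad():
        log_p = F.log_softmax(model_orig(input_ids).logits[:, -1], dim=-1)
        log_q = F.log_softmax(model_heretic(input_ids).logits[:, -1], dim=-1)
    # KL(P || Q), averaged over the batch
    return F.kl_div(log_q, log_p, log_target=True, reduction='batchmean').item()
```
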
-----

<div align="center">

<img src="figures/nbg.png" width="220" alt="Nanbeige Logo">

</div>

# Introduction

Nanbeige4.1-3B is built upon Nanbeige4-3B-Base and is an enhanced iteration of our previous reasoning model, Nanbeige4-3B-Thinking-2511, obtained through further post-training with supervised fine-tuning (SFT) and reinforcement learning (RL). As a highly competitive open-source model at a small parameter scale, Nanbeige4.1-3B demonstrates that compact models can simultaneously achieve robust **reasoning**, **preference alignment**, and **effective agentic behavior**.

<div align="center">

<img src="figures/model_performance_comparison.png">

</div>

Specifically, Nanbeige4.1-3B exhibits the following key strengths:

* **Strong Reasoning:** Nanbeige4.1-3B solves complex, multi-step problems through sustained, coherent reasoning, and reliably produces correct final answers on challenging tasks such as LiveCodeBench-Pro, IMO-Answer-Bench, and AIME 2026 I.
* **Robust Preference Alignment:** Nanbeige4.1-3B achieves solid alignment performance, outperforming not only same-scale models such as Qwen3-4B-2507 and Nanbeige4-3B-2511 but also substantially larger models, including Qwen3-30B-A3B and Qwen3-32B, on Arena-Hard-v2 and Multi-Challenge.
* **Agentic Capability:** Nanbeige4.1-3B is the first general small model to natively support deep-search tasks and to reliably sustain complex problem solving involving more than 500 rounds of tool invocations. It fills a long-standing gap in the small-model ecosystem, where models are typically optimized for either general reasoning or agentic scenarios, but rarely excel at both.

> **Technical Report:** [Link](https://huggingface.co/Nanbeige/Nanbeige4.1-3B/blob/main/Nanbeige4.1-3B-Report.pdf)

# Performance

We evaluate Nanbeige4.1-3B across a broad and diverse set of benchmarks covering **general reasoning** and **deep-search capabilities**.

### General Reasoning Tasks

On general reasoning tasks spanning **code**, **math**, **science**, **alignment**, and **tool-use** benchmarks, Nanbeige4.1-3B not only significantly outperforms same-scale models such as **Qwen3-4B**, but also delivers overall superior performance compared to larger models, including **Qwen3-30B-A3B-2507** and **Qwen3-32B**.

| Benchmark | Qwen3-4B-2507 | Qwen3-8B | Qwen3-14B | Qwen3-32B | Qwen3-30B-A3B-2507 | Nanbeige4-3B-2511 | **Nanbeige4.1-3B** |
| --------------------------- | ------------- | -------- | --------- | --------- | ------------------ | ----------------- | ------------------ |
| **Code** | | | | | | | |
| Live-Code-Bench-V6 | 57.4 | 49.4 | 55.9 | 55.7 | <u>66.0</u> | 46.0 | **76.9** |
| Live-Code-Bench-Pro-Easy | 40.2 | 41.2 | 33.0 | 42.3 | <u>60.8</u> | 40.2 | **81.4** |
| Live-Code-Bench-Pro-Medium | 5.3 | 3.5 | 1.8 | 3.5 | 3.5 | <u>5.3</u> | **28.1** |
| **Math** | | | | | | | |
| AIME 2026 I | 81.46 | 70.42 | 76.46 | 75.83 | <u>87.30</u> | 84.1 | **87.40** |
| HMMT Nov | 68.33 | 48.33 | 56.67 | 57.08 | <u>71.25</u> | 66.67 | **77.92** |
| IMO-Answer-Bench | 48.00 | 36.56 | 41.81 | 43.94 | **54.34** | 38.25 | 53.38 |
| **Science** | | | | | | | |
| GPQA | 65.8 | 62.0 | 63.38 | 68.4 | 73.4 | <u>82.2</u> | **83.8** |
| HLE (Text-only) | 6.72 | 5.28 | 7.00 | 9.31 | <u>11.77</u> | 10.98 | **12.60** |
| **Alignment** | | | | | | | |
| Arena-Hard-v2 | 34.9 | 26.3 | 36.9 | 56.0 | <u>60.2</u> | 60.0 | **73.2** |
| Multi-Challenge | 41.14 | 36.30 | 36.97 | 38.72 | <u>49.40</u> | 41.20 | **52.21** |
| **Tool Use** | | | | | | | |
| BFCL-V4 | 44.87 | 42.20 | 45.14 | 47.90 | 48.6 | <u>53.8</u> | **56.50** |
| Tau2-Bench | 45.9 | 42.06 | 44.96 | 45.26 | <u>47.70</u> | 41.77 | **48.57** |

### Deep Search Tasks

As a general small model, Nanbeige4.1-3B achieves deep-search performance comparable to specialized agents under 10B parameters. Existing small general models typically exhibit little to no deep-search capability, so this represents a substantial qualitative improvement.

#### Deep Search and Agent Benchmarks

| Model | xBench-DeepSearch-2505 | xBench-DeepSearch-2510 | Browse-Comp | Browse-Comp-ZH | GAIA (Text-only) | HLE | SEAL-0 |
|------|-------------------|-------------------|-------------|----------------|------------------|-----|--------|
| **Search-Specialized Small Agents** ||||||||
| MiroThinker-v1.0-8B | 61 | – | 31.1 | 40.2 | 66.4 | 21.5 | 40.4 |
| AgentCPM-Explore-4B | 70 | – | 25.0 | 29.0 | 63.9 | 19.1 | 40.0 |
| **Large Foundation Models (with Tools)** ||||||||
| GLM-4.6-357B | 70 | – | 45.1 | 49.5 | 71.9 | 30.4 | – |
| Minimax-M2-230B | 72 | – | 44.0 | 48.5 | 75.7 | 31.8 | – |
| DeepSeek-V3.2-671B | 71 | – | 67.6 | 65.0 | 63.5 | 40.8 | 38.5 |
| **Small Foundation Models (with Tools)** ||||||||
| Qwen3-4B-2507 | 34 | 5 | 1.57 | 7.92 | 28.33 | 11.13 | <u>15.74</u> |
| Qwen3-8B | 31 | 2 | 0.79 | 5.15 | 19.53 | 10.24 | 6.34 |
| Qwen3-14B | 34 | 9 | 2.36 | 7.11 | 30.23 | 10.17 | 12.64 |
| Qwen3-32B | <u>39</u> | 8 | <u>3.15</u> | <u>7.34</u> | 30.17 | 9.26 | 8.15 |
| Qwen3-30B-A3B-2507 | 25 | 10 | 1.57 | 4.12 | <u>31.63</u> | <u>14.81</u> | 9.24 |
| **Ours (with Tools)** ||||||||
| Nanbeige4-3B-2511 | 33 | <u>11</u> | 0.79 | 3.09 | 19.42 | 13.89 | 12.61 |
| **Nanbeige4.1-3B** | **75** | **39** | **19.12** | **31.83** | **69.90** | **22.29** | **41.44** |

## <span id="Inference">Quickstart</span>

For inference hyperparameters, we recommend the following settings:
* Temperature: 0.6
* Top-p: 0.95
* Repeat penalty: 1.0
* Max new tokens: 131072

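These settings map directly onto the standard `transformers` generation API; a minimal sketch, reusing the `model` and `input_ids` objects from the snippets below:

```python
# Recommended sampling settings expressed as generate() kwargs.
gen_kwargs = dict(
    do_sample=True,          # enable sampling so temperature/top-p take effect
    temperature=0.6,
    top_p=0.95,
    repetition_penalty=1.0,  # transformers' analogue of "repeat penalty"
    max_new_tokens=131072,
)
output_ids = model.generate(input_ids.to('cuda'), eos_token_id=166101, **gen_kwargs)
```
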
For the chat scenario:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained(
    'Nanbeige/Nanbeige4.1-3B',
    use_fast=False,
    trust_remote_code=True
)
model = AutoModelForCausalLM.from_pretrained(
    'Nanbeige/Nanbeige4.1-3B',
    torch_dtype='auto',
    device_map='auto',
    trust_remote_code=True
)

messages = [
    {'role': 'user', 'content': 'Which number is bigger, 9.11 or 9.8?'}
]
prompt = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=False
)
input_ids = tokenizer(prompt, add_special_tokens=False, return_tensors='pt').input_ids
output_ids = model.generate(input_ids.to('cuda'), eos_token_id=166101)
resp = tokenizer.decode(output_ids[0][len(input_ids[0]):], skip_special_tokens=True)
print(resp)
```

For the tool use scenario:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained(
    'Nanbeige/Nanbeige4.1-3B',
    use_fast=False,
    trust_remote_code=True
)
model = AutoModelForCausalLM.from_pretrained(
    'Nanbeige/Nanbeige4.1-3B',
    torch_dtype='auto',
    device_map='auto',
    trust_remote_code=True
)

messages = [
    {'role': 'user', 'content': 'Help me check the weather in Beijing now'}
]
tools = [{
    'type': 'function',
    'function': {
        'name': 'SearchWeather',
        'description': 'Find out the current weather in a place on a certain day.',
        'parameters': {
            'type': 'dict',
            'properties': {
                'location': {
                    'type': 'string',
                    'description': 'A city in China.'
                }
            },
            'required': ['location']  # which parameters are mandatory
        }
    }
}]
prompt = tokenizer.apply_chat_template(
    messages,
    tools=tools,
    add_generation_prompt=True,
    tokenize=False
)
input_ids = tokenizer(prompt, add_special_tokens=False, return_tensors='pt').input_ids
output_ids = model.generate(input_ids.to('cuda'), eos_token_id=166101)
resp = tokenizer.decode(output_ids[0][len(input_ids[0]):], skip_special_tokens=True)
print(resp)
```
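After the model emits a tool call, the tool result is normally appended as a new message and generation continues. The continuation below is a generic sketch only: the weather payload is hypothetical, and parsing the tool call out of `resp` depends on the format produced by Nanbeige's chat template, so it is omitted here.

```python
# Hypothetical continuation: append the assistant turn and a fabricated tool
# result, then generate again. Tool-call parsing is model-specific and omitted.
messages.append({'role': 'assistant', 'content': resp})
messages.append({
    'role': 'tool',
    'name': 'SearchWeather',
    'content': '{"location": "Beijing", "weather": "sunny", "temperature_c": 21}',
})
prompt = tokenizer.apply_chat_template(messages, tools=tools, add_generation_prompt=True, tokenize=False)
input_ids = tokenizer(prompt, add_special_tokens=False, return_tensors='pt').input_ids
output_ids = model.generate(input_ids.to('cuda'), eos_token_id=166101)
print(tokenizer.decode(output_ids[0][len(input_ids[0]):], skip_special_tokens=True))
```
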
For the deep-search scenario:

* Inference framework: [**miroflow-framework**](https://github.com/MiroMindAI/MiroThinker)
* Switch the tokenizer configuration to **tokenizer_config_search.json**
* Tools configuration:

| Server | Description | Tools Provided |
|--------|-------------|----------------|
| tool-python | Execution environment and file management ([E2B sandbox](https://e2b.dev/)) | create_sandbox, run_command, run_python_code, upload_file_from_local_to_sandbox, download_file_from_sandbox_to_local, download_file_from_internet_to_sandbox |
| search_and_scrape_webpage | Google search via [Serper API](https://google.serper.dev) | google_search |
| jina_scrape_llm_summary | Web scraping with LLM-based information extraction via [Jina](https://r.jina.ai) | scrape_and_extract_info |

* Summary model: Qwen3-14B-thinking
* Temperature: 1.0
* Note: access to Hugging Face is explicitly disabled in these tools.

# <span id="Limitations">Limitations</span>

Although we place great emphasis on safety during training and strive to ensure that the model's outputs align with ethical and legal requirements, its small size and probabilistic nature mean that unexpected outputs cannot be ruled out entirely. Such outputs may contain harmful content such as bias or discrimination; please do not propagate it. We assume no responsibility for consequences resulting from the dissemination of inappropriate information.
<br>

# <span id="Contact">Contact</span>

If you have any questions, please raise an issue or contact us at nanbeige@kanzhun.com.
<br>