Update README.md (#4)
Update README.md (c0dafa941f3360797cec74f2d0ca867a8f00abcc)
Co-authored-by: Ye Zhenjie <thinkthinking@users.noreply.huggingface.co>
README.md
CHANGED

---
license: mit
base_model:
- inclusionAI/Ling-mini-base-2.0
pipeline_tag: text-generation
library_name: transformers
---

<p align="center">
    <img src="https://mdn.alipayobjects.com/huamei_qa8qxu/afts/img/A*4QxcQrBlTiAAAAAAQXAAAAgAemJ7AQ/original" width="100"/>
</p>

<p align="center"><a href="https://huggingface.co/inclusionAI">Hugging Face</a> &nbsp;|&nbsp; <a href="https://modelscope.cn/organization/inclusionAI">ModelScope</a> &nbsp;|&nbsp; <a href="https://zenmux.ai/inclusionai/ling-mini-2.0?utm_source=hf_inclusionAI">Experience Now</a></p>

## Introduction

Today, we are excited to announce the open-sourcing of **Ling 2.0**, a family of MoE-based large language models that combine **SOTA performance** with **high efficiency**.

The first released version, Ling-mini-2.0, is compact yet powerful. It has **16B total parameters**, but only **1.4B** are activated per input token (non-embedding 789M). Trained on more than **20T tokens** of high-quality data and enhanced through multi-stage supervised fine-tuning and reinforcement learning, Ling-mini-2.0 achieves remarkable improvements in complex reasoning and instruction following. With just 1.4B activated parameters, it still reaches the top-tier level of sub-10B dense LLMs and even matches or surpasses much larger MoE models.

<p align="center"><img src="https://mdn.alipayobjects.com/huamei_fi95qp/afts/img/2NKZS5LVXzcAAAAASBAAAAgADkZ7AQFr/fmt.webp" /></p>

### 7× Equivalent Dense Performance Leverage

Guided by [Ling Scaling Laws](https://arxiv.org/abs/2507.17702), Ling 2.0 adopts a **1/32 activation ratio** MoE architecture, with empirically optimized design choices in expert granularity, shared expert ratio, attention ratio, aux-loss-free + sigmoid routing strategy, MTP loss, QK-Norm, half RoPE, and more. This enables small-activation MoE models to achieve over **7× equivalent dense performance**. In other words, **Ling-mini-2.0 with only 1.4B activated parameters (non-embedding 789M) can deliver performance equivalent to a 7-8B dense model**.
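
To make the routing recipe above concrete, here is a minimal, generic sketch of sigmoid-scored top-k routing with an aux-loss-free selection bias. It illustrates the general technique only and is not the Ling implementation; the expert count, top-k, and bias handling are illustrative assumptions.

```python
import torch

def sigmoid_topk_route(hidden, router_weight, expert_bias, top_k=8):
    """Generic sketch of sigmoid top-k routing (not the Ling-2.0 source).

    hidden:        [num_tokens, hidden_dim] token representations
    router_weight: [num_experts, hidden_dim] router projection
    expert_bias:   [num_experts] selection-only bias, adjusted online to
                   balance expert load instead of using an auxiliary loss
    """
    logits = hidden @ router_weight.t()                 # [num_tokens, num_experts]
    scores = torch.sigmoid(logits)                      # sigmoid routing scores
    # The bias only influences which experts are selected, not the combine weights.
    _, expert_idx = torch.topk(scores + expert_bias, top_k, dim=-1)
    combine = torch.gather(scores, -1, expert_idx)      # combine weights from raw scores
    combine = combine / combine.sum(dim=-1, keepdim=True)
    return expert_idx, combine

# Toy shapes: 256 experts with 8 active per token gives a 1/32 activation ratio.
tokens = torch.randn(4, 2048)
router = torch.randn(256, 2048)
bias = torch.zeros(256)
expert_idx, combine = sigmoid_topk_route(tokens, router, bias, top_k=8)
```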

### High-speed Generation at 300+ token/s

<p align="center"><img src="https://mdn.alipayobjects.com/huamei_fi95qp/afts/img/bnxIRaK9tzcAAAAAgSAAAAgADkZ7AQFr/original" /></p>

The highly sparse, small-activation MoE architecture also delivers significant training and inference efficiency. In simple QA scenarios (within 2000 tokens), **Ling-mini-2.0 generates at 300+ token/s (on H20 deployment)**, more than **2× faster** than an 8B dense model. Ling-mini-2.0 can handle a **128K context length** with YaRN, and as sequence length increases, the relative speedup can reach **over 7×**.

<p align="center"><img src="https://raw.githubusercontent.com/inclusionAI/Ling-V2/refs/heads/main/figures/needle_in_a_haystack.webp" /></p>

### Open-sourced FP8 Efficient Training Solution

Ling 2.0 employs **FP8 mixed-precision training** throughout. Compared with BF16, experiments with over 1T training tokens show nearly identical loss curves and downstream benchmark performance. To support the community in efficient continued pretraining and fine-tuning under limited compute, we are also open-sourcing our **FP8 training solution**. Based on tile/blockwise FP8 scaling, it further introduces an FP8 optimizer, FP8 on-demand weight transposition, and an FP8 padding routing map for extreme memory optimization. On 8/16/32 80G GPUs, compared with LLaMA 3.1 8B and Qwen3 8B, **Ling-mini-2.0 achieved 30-60% throughput gains with MTP enabled, and 90-120% throughput gains with MTP disabled**.
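
As a rough illustration of what tile/blockwise FP8 scaling means, the sketch below quantizes a weight matrix in 128x128 tiles with one scale per tile. It is a simplified stand-in, not the released training solution; the tile size and the e4m3 dynamic range are assumptions, and shapes are taken to divide evenly.

```python
import torch

FP8_E4M3_MAX = 448.0  # largest magnitude representable in float8_e4m3fn

def quantize_blockwise_fp8(weight, tile=128):
    """One FP8 scale per (tile x tile) block of the weight matrix (sketch)."""
    rows, cols = weight.shape
    assert rows % tile == 0 and cols % tile == 0, "sketch assumes divisible shapes"
    # Reinterpret the matrix as a grid of tiles: [rows/tile, tile, cols/tile, tile].
    tiles = weight.reshape(rows // tile, tile, cols // tile, tile)
    amax = tiles.abs().amax(dim=(1, 3), keepdim=True).clamp(min=1e-12)
    scale = FP8_E4M3_MAX / amax                         # per-tile scale
    q = (tiles * scale).to(torch.float8_e4m3fn)         # FP8-quantized tiles
    return q, scale

# Toy usage: quantize and dequantize to inspect the approximation error.
w = torch.randn(256, 512)
q, scale = quantize_blockwise_fp8(w)
w_hat = (q.to(torch.float32) / scale).reshape(256, 512)
print((w - w_hat).abs().max())
```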

### A More Open Open-source Strategy

We believe Ling-mini-2.0 is an ideal starting point for MoE research. For the first time at this scale, it integrates 1/32 sparsity, MTP layers, and FP8 training, achieving both strong effectiveness and efficient training/inference performance, making it a prime candidate for the small-size LLM segment.

To further foster community research, in addition to releasing the post-trained version, we are also open-sourcing **five pretraining checkpoints**: the pre-finetuning Ling-mini-base-2.0, along with four base models trained on 5T, 10T, 15T, and 20T tokens, enabling deeper research and broader applications.

## Model Downloads

<center>

| **Model**              | **Context Length** | **Download**                                                                                                                                                  |
| :--------------------: | :----------------: | :-----------------------------------------------------------------------------------------------------------------------------------------------------------: |
| Ling-mini-base-2.0     | 32K -> 128K (YaRN) | [HuggingFace](https://huggingface.co/inclusionAI/Ling-mini-base-2.0) <br>[ModelScope](https://www.modelscope.cn/models/inclusionAI/Ling-mini-base-2.0)         |
| Ling-mini-base-2.0-5T  | 4K                 | [HuggingFace](https://huggingface.co/inclusionAI/Ling-mini-base-2.0-5T) <br>[ModelScope](https://www.modelscope.cn/models/inclusionAI/Ling-mini-base-2.0-5T)   |
| Ling-mini-base-2.0-10T | 4K                 | [HuggingFace](https://huggingface.co/inclusionAI/Ling-mini-base-2.0-10T) <br>[ModelScope](https://www.modelscope.cn/models/inclusionAI/Ling-mini-base-2.0-10T) |
| Ling-mini-base-2.0-15T | 4K                 | [HuggingFace](https://huggingface.co/inclusionAI/Ling-mini-base-2.0-15T) <br>[ModelScope](https://www.modelscope.cn/models/inclusionAI/Ling-mini-base-2.0-15T) |
| Ling-mini-base-2.0-20T | 4K                 | [HuggingFace](https://huggingface.co/inclusionAI/Ling-mini-base-2.0-20T) <br>[ModelScope](https://www.modelscope.cn/models/inclusionAI/Ling-mini-base-2.0-20T) |
| Ling-mini-2.0          | 32K -> 128K (YaRN) | [HuggingFace](https://huggingface.co/inclusionAI/Ling-mini-2.0) <br>[ModelScope](https://www.modelscope.cn/models/inclusionAI/Ling-mini-2.0)                   |

</center>

Note: If you are interested in previous versions, please visit the past model collections on [Hugging Face](https://huggingface.co/inclusionAI) or [ModelScope](https://modelscope.cn/organization/inclusionAI).
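
If you want to pull one of the checkpoints above programmatically, here is a minimal sketch using `huggingface_hub`; the repo id follows the table, and the local directory is an arbitrary example.

```python
from huggingface_hub import snapshot_download

# Download the 5T-token base checkpoint listed in the table above.
local_path = snapshot_download(
    repo_id="inclusionAI/Ling-mini-base-2.0-5T",
    local_dir="./Ling-mini-base-2.0-5T",  # example destination
)
print(local_path)
```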

## Quickstart

### Try Online

You can experience Ling-mini-2.0 online at: [ZenMux](https://zenmux.ai/inclusionai/ling-mini-2.0?utm_source=hf_inclusionAI)

### API Usage

You can also use Ling-mini-2.0 through API calls:

```python
from openai import OpenAI

# 1. Initialize the OpenAI client
client = OpenAI(
    # 2. Point the base URL to the ZenMux endpoint
    base_url="https://zenmux.ai/api/v1",
    # 3. Replace with the API Key from your ZenMux user console
    api_key="<your ZENMUX_API_KEY>",
)

# 4. Make a request
completion = client.chat.completions.create(
    # 5. Specify the model to use in the format "provider/model-name"
    model="inclusionai/ling-mini-2.0",
    messages=[
        {
            "role": "user",
            "content": "What is the meaning of life?"
        }
    ]
)

print(completion.choices[0].message.content)
```
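
If you prefer to stream tokens rather than wait for the full completion, the standard OpenAI streaming interface can be used against the same endpoint. This is a sketch that reuses the `client` defined above and assumes the provider supports streaming.

```python
stream = client.chat.completions.create(
    model="inclusionai/ling-mini-2.0",
    messages=[{"role": "user", "content": "What is the meaning of life?"}],
    stream=True,
)
for chunk in stream:
    delta = chunk.choices[0].delta.content
    if delta:
        print(delta, end="", flush=True)
```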

### Convert to safetensors

Models in safetensors format can be downloaded from [HuggingFace](https://huggingface.co/inclusionAI) or [ModelScope](https://modelscope.cn/organization/inclusionAI).
If you want to train your own model and evaluate it, you can convert the DCP checkpoint produced by training:

```shell
python tools/convert_dcp_to_safe_tensors.py --checkpoint-path ${DCP_PATH} --target-path ${SAFETENSORS_PATH}
```

Currently, BF16 and FP8 formats are supported; use the corresponding convert parameter:

- `--force-bf16` for BF16 format.
- `--force-fp8` for FP8 format.

Here is a code snippet to show you how to use the chat model with `transformers`:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "inclusionAI/Ling-mini-2.0"

model = AutoModelForCausalLM.from_pretrained(
    model_name,
    dtype="auto",
    device_map="auto",
    trust_remote_code=True,
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

prompt = "Give me a short introduction to large language models."
messages = [
    {"role": "system", "content": "You are Ling, an assistant created by inclusionAI"},
    {"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt", return_token_type_ids=False).to(model.device)

generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=512
)
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]

response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
```

### vLLM

#### Offline Inference:

```python
from transformers import AutoTokenizer
from vllm import LLM, SamplingParams

tokenizer = AutoTokenizer.from_pretrained("inclusionAI/Ling-mini-2.0")

sampling_params = SamplingParams(temperature=0.7, top_p=0.8, repetition_penalty=1.05, max_tokens=16384)

llm = LLM(model="inclusionAI/Ling-mini-2.0", dtype='bfloat16')
prompt = "Give me a short introduction to large language models."
messages = [
    {"role": "system", "content": "You are Ling, an assistant created by inclusionAI"},
    {"role": "user", "content": prompt}
]

text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)
outputs = llm.generate([text], sampling_params)
```
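
The snippet above generates but does not print anything; to read the generated text from the returned `RequestOutput` objects (a small addition, not part of the original example):

```python
for output in outputs:
    print(output.outputs[0].text)
```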

#### Online Inference:

```shell
vllm serve inclusionAI/Ling-mini-2.0 \
    ...
```

To handle long context in vLLM using YaRN, we need to follow these two steps:

1. Add a `rope_scaling` field to the model's `config.json` file, for example:

    ```json
    {
        ...,
        "rope_scaling": {
            ...
        }
    }
    ```

2. Use an additional parameter `--max-model-len` to specify the desired maximum context length when starting the vLLM service.

For detailed guidance, please refer to the vLLM [instructions](https://docs.vllm.ai/en/latest/). A combined sketch of both steps follows.
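
As a combined sketch of both steps for extending the 32K base context to 128K (a 4x YaRN factor): the exact field names and values below are assumptions and should be checked against the model's `config.json` and the vLLM documentation.

```json
{
  ...,
  "rope_scaling": {
    "rope_type": "yarn",
    "factor": 4.0,
    "original_max_position_embeddings": 32768
  }
}
```

Then start the service with an explicit context limit, e.g. `--max-model-len 131072`.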

### SGLang

#### Environment Preparation

We will submit our model to the official SGLang release later; for now, prepare the environment with the following steps:

```shell
pip3 install sglang==0.5.2rc0 sgl-kernel==0.3.7.post1
```

You can use the docker image as well:

```shell
docker pull lmsysorg/sglang:v0.5.2rc0-cu126
```

Then you should apply our patch to the sglang installation:

```shell
# the `patch` command is required; run `yum install -y patch` if needed
patch -d `python -c 'import sglang;import os; print(os.path.dirname(sglang.__file__))'` -p3 < inference/sglang/bailing_moe_v2.patch
```

#### Run Inference

Both BF16 and FP8 models are now supported by SGLang; which one you run depends on the dtype of the model in ${MODEL_PATH}. They share the same launch command:

- Start server:

```shell
python -m sglang.launch_server \
    --model-path $MODEL_PATH \
    --port $PORT \
    --trust-remote-code \
    --attention-backend fa3
```

MTP is supported for the base model, but not yet for the chat model. You can add the parameter `--speculative-algorithm NEXTN` to the start command.

- Client:

```shell
curl -s http://localhost:${PORT}/v1/chat/completions \
    -H "Content-Type: application/json" \
    -d '{"model": "auto", "messages": [{"role": "user", "content": "What is the capital of France?"}]}'
```

More usage can be found [here](https://docs.sglang.ai/basic_usage/send_request.html).
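
Because the server exposes an OpenAI-compatible `/v1/chat/completions` route (as the curl example shows), you can also call it from Python. A minimal sketch; the port is assumed to match the one the server was started with, and the placeholder API key is ignored by a local server.

```python
from openai import OpenAI

client = OpenAI(base_url="http://localhost:30000/v1", api_key="EMPTY")  # adjust the port to your server

response = client.chat.completions.create(
    model="auto",
    messages=[{"role": "user", "content": "What is the capital of France?"}],
)
print(response.choices[0].message.content)
```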

## Training

The table below shows the pre-training performance of several models.

<center>

| **Model**               | **8 x 80G GPUs (GBS=128)** | **16 x 80G GPUs (GBS=256)** | **32 x 80G GPUs (GBS=512)** |
| :---------------------: | :------------------------: | :-------------------------: | :-------------------------: |
| LLaMA 3.1 8B (baseline) | 81222                      | 161319                      | 321403                      |
| Qwen3 8B                | 55775 (-31.33%)            | 109799 (-31.94%)            | 219943 (-31.57%)            |
| Ling-mini-2.0           | 109532 (+34.86%)           | 221585 (+37.36%)            | 448726 (+39.61%)            |
| Ling-mini-2.0 w/o MTP   | 128298 (+57.96%)           | 307264 (+90.47%)            | 611466 (+90.25%)            |

</center>
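
The percentages in the table are relative to the LLaMA 3.1 8B baseline on the same GPU count; a quick arithmetic check of the first column:

```python
baseline = 81222        # LLaMA 3.1 8B, 8 x 80G GPUs (GBS=128)
ling_mini = 109532      # Ling-mini-2.0, same setting
print(f"{ling_mini / baseline - 1:+.2%}")  # prints +34.86%, matching the table
```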