---
license: apache-2.0
pipeline_tag: audio-text-to-text
library_name: transformers
tags:
- audio-reasoning
- chain-of-thought
- multi-modal
- step-audio-r1
---

## Overview of Step-Audio-R1.1

<a href="https://www.stepfun.com/studio/audio?tab=conversation"><img src="https://img.shields.io/static/v1?label=Space%20Playground&message=Studio&color=yellow"></a> <a href="https://huggingface.co/spaces/stepfun-ai/Step-Audio-R1"><img src="https://img.shields.io/static/v1?label=Demo%20Page&message=Web&color=green"></a>

### Introduction
Step-Audio-R1.1 (Realtime) is a major upgrade to Step-Audio-R1, designed for interactive spoken dialogue with both **real-time responsiveness** and **strong reasoning capability**.

Unlike conventional streaming speech models that trade intelligence for latency, R1.1 enables *thinking while speaking*, achieving high intelligence without sacrificing speed.

### Mind-Paced Speaking (Low Latency)
Based on the research [*Mind-Paced Speaking*](MPS.pdf), the Realtime variant adopts a **Dual-Brain Architecture**:
- A **Formulation Brain** responsible for high-level reasoning
- An **Articulation Brain** dedicated to speech generation

This decoupling allows the model to perform **Chain-of-Thought reasoning during speech output**, maintaining ultra-low latency while handling complex tasks in real time.
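
As a rough intuition only (a toy sketch with invented names, not the actual Step-Audio-R1.1 implementation), the two brains can be pictured as a reasoning producer and a speech-generating consumer running concurrently, so articulation never blocks on the full reasoning trace:

```python
# Toy illustration of the Dual-Brain decoupling (invented names, not the
# actual Step-Audio-R1.1 implementation): a "Formulation Brain" streams
# reasoning steps into a queue while an "Articulation Brain" consumes and
# "speaks" them concurrently, so output starts before reasoning finishes.
import queue
import threading
import time

thoughts = queue.Queue()

def formulation_brain(question):
    """Slow chain-of-thought reasoning, emitting partial results as it goes."""
    for step in ("parse the question", "recall relevant facts", "draft the answer"):
        time.sleep(0.5)        # stand-in for expensive reasoning compute
        thoughts.put(f"{question}: {step}")
    thoughts.put(None)         # sentinel: reasoning finished

def articulation_brain():
    """Fast speech generation that never waits for the full reasoning trace."""
    while (thought := thoughts.get()) is not None:
        print(f"speaking while thinking -> {thought}")  # stand-in for TTS

worker = threading.Thread(target=formulation_brain, args=("What is 2 + 2?",))
worker.start()
articulation_brain()   # runs concurrently with the formulation thread
worker.join()
```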

### Acoustic-Grounded Reasoning (High Intelligence)
To address the *inverted scaling* issue, where reasoning over transcripts can degrade performance, Step-Audio-R1.1 grounds its reasoning directly in acoustic representations rather than text alone.

Through iterative self-distillation, extended deliberation becomes a strength instead of a liability. This enables effective test-time compute scaling and leads to **state-of-the-art performance**, including top-ranking results on the AA benchmark.

![image](https://cdn-uploads.huggingface.co/production/uploads/64ba9dfdbfd8286d23b5c0fd/GTZwkSO5q0ryc6BUC82uT.png)

![image](https://cdn-uploads.huggingface.co/production/uploads/64ba9dfdbfd8286d23b5c0fd/cN3V5c_6TmXVMPH8tuhu5.png)

![image](https://cdn-uploads.huggingface.co/production/uploads/64ba9dfdbfd8286d23b5c0fd/qx25DGHPuDEK5FK1hBxOB.png)

## Model Usage
### 📜 Requirements
- **GPU**: NVIDIA GPUs with CUDA support (tested on 4×L40S/H100/H800/H20).
- **Operating System**: Linux.
- **Python**: >= 3.10.0.

### ⬇️ Download Model
First, download the Step-Audio-R1.1 model weights.

**Method A · Git LFS**
```bash
git lfs install
git clone https://huggingface.co/stepfun-ai/Step-Audio-R1.1
```

**Method B · Hugging Face CLI**
```bash
hf download stepfun-ai/Step-Audio-R1.1 --local-dir ./Step-Audio-R1.1
```
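
If you prefer scripting the download, here is a minimal Python sketch equivalent to Method B, using `huggingface_hub` (the library behind the `hf` CLI):

```python
# Minimal sketch: fetch the weights with huggingface_hub instead of the CLI.
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="stepfun-ai/Step-Audio-R1.1",
    local_dir="./Step-Audio-R1.1",  # layout expected by the serving commands below
)
```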

### 🚀 Deployment and Execution
We provide two ways to serve the model: Docker (recommended) or compiling the customized vLLM backend.

#### 🐳 Method 1 · Run with Docker (Recommended)

A customized vLLM image is required.

1. **Pull the image**:
```bash
docker pull stepfun2025/vllm:step-audio-2-v20250909
```
2. **Start the service**:
Assuming the model was downloaded to the `Step-Audio-R1.1` folder in the current directory:

```bash
docker run --rm -ti --gpus all \
-v $(pwd)/Step-Audio-R1.1:/Step-Audio-R1.1 \
-p 9999:9999 \
stepfun2025/vllm:step-audio-2-v20250909 \
-- vllm serve /Step-Audio-R1.1 \
--served-model-name Step-Audio-R1.1 \
--port 9999 \
--max-model-len 16384 \
--max-num-seqs 32 \
--tensor-parallel-size 4 \
--chat-template '{%- macro render_content(content) -%}{%- if content is string -%}{{- content.replace("<audio_patch>\n", "<audio_patch>") -}}{%- elif content is mapping -%}{{- content['"'"'value'"'"'] if '"'"'value'"'"' in content else content['"'"'text'"'"'] -}}{%- elif content is iterable -%}{%- for item in content -%}{%- if item.type == '"'"'text'"'"' -%}{{- item['"'"'value'"'"'] if '"'"'value'"'"' in item else item['"'"'text'"'"'] -}}{%- elif item.type == '"'"'audio'"'"' -%}<audio_patch>{%- endif -%}{%- endfor -%}{%- endif -%}{%- endmacro -%}{%- if tools -%}{{- '"'"'<|BOT|>system\n'"'"' -}}{%- if messages[0]['"'"'role'"'"'] == '"'"'system'"'"' -%}{{- render_content(messages[0]['"'"'content'"'"']) + '"'"'<|EOT|>'"'"' -}}{%- endif -%}{{- '"'"'<|BOT|>tool_json_schemas\n'"'"' + tools|tojson + '"'"'<|EOT|>'"'"' -}}{%- else -%}{%- if messages[0]['"'"'role'"'"'] == '"'"'system'"'"' -%}{{- '"'"'<|BOT|>system\n'"'"' + render_content(messages[0]['"'"'content'"'"']) + '"'"'<|EOT|>'"'"' -}}{%- endif -%}{%- endif -%}{%- for message in messages -%}{%- if message["role"] == "user" -%}{{- '"'"'<|BOT|>human\n'"'"' + render_content(message["content"]) + '"'"'<|EOT|>'"'"' -}}{%- elif message["role"] == "assistant" -%}{{- '"'"'<|BOT|>assistant\n'"'"' + (render_content(message["content"]) if message["content"] else '"'"''"'"') -}}{%- set is_last_assistant = true -%}{%- for m in messages[loop.index:] -%}{%- if m["role"] == "assistant" -%}{%- set is_last_assistant = false -%}{%- endif -%}{%- endfor -%}{%- if not is_last_assistant -%}{{- '"'"'<|EOT|>'"'"' -}}{%- endif -%}{%- elif message["role"] == "function_output" -%}{%- else -%}{%- if not (loop.first and message["role"] == "system") -%}{{- '"'"'<|BOT|>'"'"' + message["role"] + '"'"'\n'"'"' + render_content(message["content"]) + '"'"'<|EOT|>'"'"' -}}{%- endif -%}{%- endif -%}{%- endfor -%}{%- if add_generation_prompt -%}{{- '"'"'<|BOT|>assistant\n<think>\n'"'"' -}}{%- endif -%}' \
--enable-log-requests \
--interleave-mm-strings \
--trust-remote-code
```
After the service starts, it will listen on `localhost:9999`.
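
Once the server is up, you can exercise it through the OpenAI-compatible API. Below is a minimal Python sketch using the `openai` client; the text-only call is standard, while the base64 `audio_url` item follows vLLM's usual multimodal chat convention and is an assumption for this customized build (the file name `question.wav` is hypothetical):

```python
# Minimal smoke test against the OpenAI-compatible endpoint started above.
# Requires `pip install openai`. Treat the audio request as a sketch: the
# base64 "audio_url" item is vLLM's common multimodal convention, which this
# customized build may expose slightly differently.
import base64
from openai import OpenAI

client = OpenAI(base_url="http://localhost:9999/v1", api_key="EMPTY")

# 1) Text-only request: the reply begins with the model's <think> reasoning.
resp = client.chat.completions.create(
    model="Step-Audio-R1.1",
    messages=[{"role": "user", "content": "What is the capital of France?"}],
    max_tokens=512,
)
print(resp.choices[0].message.content)

# 2) Audio question, encoded as a base64 data URL (hypothetical file name).
with open("question.wav", "rb") as f:
    audio_b64 = base64.b64encode(f.read()).decode()
resp = client.chat.completions.create(
    model="Step-Audio-R1.1",
    messages=[{
        "role": "user",
        "content": [
            {"type": "audio_url",
             "audio_url": {"url": f"data:audio/wav;base64,{audio_b64}"}},
            {"type": "text", "text": "Please answer the question in the audio."},
        ],
    }],
    max_tokens=512,
)
print(resp.choices[0].message.content)
```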

#### 🐳 Method 2 · Run from Source (Compile vLLM)
Step-Audio-R1.1 requires a customized vLLM backend.

1. **Download Source Code**:
```bash
git clone https://github.com/stepfun-ai/vllm.git
cd vllm
```

2. **Prepare Environment**:
```bash
python3 -m venv .venv
source .venv/bin/activate
```

3. **Install and Compile**:
vLLM contains both C++ and Python code. We mainly modified the Python code, so the C++ part can use the pre-compiled version to speed up the process.

```bash
# Use pre-compiled C++ extensions (Recommended)
VLLM_USE_PRECOMPILED=1 pip install -e .
```

4. **Switch Branch**:
After compilation, switch to the branch that adds Step-Audio support. Because the install is editable (`-e`) and the changes are Python-only, the switch takes effect without recompiling.
```bash
git checkout feat/step-audio-support
```

5. **Start the Service**:
```bash
# Ensure you are in the vllm directory and the virtual environment is activated
source .venv/bin/activate

python3 -m vllm.entrypoints.openai.api_server \
--model ../Step-Audio-R1.1 \
--served-model-name Step-Audio-R1.1 \
--port 9999 \
--host 0.0.0.0 \
--max-model-len 65536 \
--max-num-seqs 128 \
--tensor-parallel-size 4 \
--gpu-memory-utilization 0.85 \
--trust-remote-code \
--enable-log-requests \
--interleave-mm-strings \
--chat-template '{%- macro render_content(content) -%}{%- if content is string -%}{{- content.replace("<audio_patch>\n", "<audio_patch>") -}}{%- elif content is mapping -%}{{- content['"'"'value'"'"'] if '"'"'value'"'"' in content else content['"'"'text'"'"'] -}}{%- elif content is iterable -%}{%- for item in content -%}{%- if item.type == '"'"'text'"'"' -%}{{- item['"'"'value'"'"'] if '"'"'value'"'"' in item else item['"'"'text'"'"'] -}}{%- elif item.type == '"'"'audio'"'"' -%}<audio_patch>{%- endif -%}{%- endfor -%}{%- endif -%}{%- endmacro -%}{%- if tools -%}{{- '"'"'<|BOT|>system\n'"'"' -}}{%- if messages[0]['"'"'role'"'"'] == '"'"'system'"'"' -%}{{- render_content(messages[0]['"'"'content'"'"']) + '"'"'<|EOT|>'"'"' -}}{%- endif -%}{{- '"'"'<|BOT|>tool_json_schemas\n'"'"' + tools|tojson + '"'"'<|EOT|>'"'"' -}}{%- else -%}{%- if messages[0]['"'"'role'"'"'] == '"'"'system'"'"' -%}{{- '"'"'<|BOT|>system\n'"'"' + render_content(messages[0]['"'"'content'"'"']) + '"'"'<|EOT|>'"'"' -}}{%- endif -%}{%- endif -%}{%- for message in messages -%}{%- if message["role"] == "user" -%}{{- '"'"'<|BOT|>human\n'"'"' + render_content(message["content"]) + '"'"'<|EOT|>'"'"' -}}{%- elif message["role"] == "assistant" -%}{{- '"'"'<|BOT|>assistant\n'"'"' + (render_content(message["content"]) if message["content"] else '"'"''"'"') -}}{%- set is_last_assistant = true -%}{%- for m in messages[loop.index:] -%}{%- if m["role"] == "assistant" -%}{%- set is_last_assistant = false -%}{%- endif -%}{%- endfor -%}{%- if not is_last_assistant -%}{{- '"'"'<|EOT|>'"'"' -}}{%- endif -%}{%- elif message["role"] == "function_output" -%}{%- else -%}{%- if not (loop.first and message["role"] == "system") -%}{{- '"'"'<|BOT|>'"'"' + message["role"] + '"'"'\n'"'"' + render_content(message["content"]) + '"'"'<|EOT|>'"'"' -}}{%- endif -%}{%- endif -%}{%- endfor -%}{%- if add_generation_prompt -%}{{- '"'"'<|BOT|>assistant\n<think>\n'"'"' -}}{%- endif -%}'
```

After the service starts, it will listen on `localhost:9999`.
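
The same client code from Method 1 works against this endpoint. For interactive use you can also stream tokens as they are generated, which matches the model's realtime design; a minimal sketch with the `openai` client:

```python
# Streaming sketch: print tokens as they arrive, the natural way to consume
# a realtime "thinking while speaking" model from a client.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:9999/v1", api_key="EMPTY")

stream = client.chat.completions.create(
    model="Step-Audio-R1.1",
    messages=[{"role": "user", "content": "Briefly introduce yourself."}],
    stream=True,
    max_tokens=256,
)
for chunk in stream:
    # Some chunks (e.g., the final usage chunk) carry no delta content.
    if chunk.choices and chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="", flush=True)
print()
```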