# serverless_n1: Ultravox + Kokoro TTS

Docker image and vLLM configuration for deploying Ultravox + Kokoro TTS on RunPod serverless.

**Location:** `runpod_serverless_models/serverless_n1/`

## 🎯 What This Does

- **Input:** Audio or text
- **Processing:** Ultravox v0.2 (8B speech-language model: audio/text in, text out)
- **Output:** Text + synthesized audio (Kokoro TTS, 82M parameters)
- **Latency:** ~0.3-0.5s warm inference
- **Cold start:** ~8-12s

## 📁 Files

```
serverless_n1/
├── Dockerfile            # Docker image definition
├── handler.py            # RunPod serverless handler
├── requirements.txt      # Python dependencies
├── build.sh              # Build and push script
├── test_endpoint.py      # Test custom Docker endpoint
├── test_vllm_endpoint.py # Test vLLM endpoint
├── VLLM_SETUP_GUIDE.md   # Quick vLLM setup (no Docker)
└── README.md             # This file
```

## 🚀 Quick Start

### Option A: Use the vLLM Template (Recommended, No Docker)

See: **[VLLM_SETUP_GUIDE.md](VLLM_SETUP_GUIDE.md)**

Uses RunPod's pre-built vLLM image: just configure and deploy.

**Pros:** Fast setup (~10 min), no Docker build
**Cons:** Text-only (no audio input/output)

### Option B: Build a Custom Docker Image (Full Audio Support)

### 1. Build and Push the Image

```bash
# Make the build script executable
chmod +x build.sh

# Build and push (replace with your Docker Hub username)
./build.sh YOUR_DOCKERHUB_USERNAME
```

**Time:** ~15-20 minutes
- Build: ~10-15 min (downloads models)
- Push: ~5-10 min (uploads ~10-15 GB)

### 2. Create a RunPod Endpoint

**Go to:** https://www.runpod.io/console/serverless

**Click:** "+ New Endpoint"

**Select:** the "Custom" template

**Configure:**
```
Container Image: YOUR_DOCKERHUB_USERNAME/ultravox-kokoro-tts:latest
GPU Type: RTX 4090 (24 GB)
Min Workers: 0
Max Workers: 3
Container Disk: 30 GB
Scale Down Delay: 600s
```

**Deploy** → copy the **Endpoint ID**

### 3. Test Your Endpoint

```python
import base64

import runpod

# Set your API key and endpoint ID (never commit real keys to source control)
runpod.api_key = "YOUR_RUNPOD_API_KEY"
endpoint = runpod.Endpoint("YOUR_ENDPOINT_ID")

# Test with text input
result = endpoint.run_sync({
    "input": {
        "text": "Hello, how are you?",
        "max_tokens": 100,
        "return_audio": True
    }
})

print("Response:", result['text'])
print("Timing:", result['timing'])

# Save audio output
if 'audio_base64' in result:
    with open("output.wav", "wb") as f:
        f.write(base64.b64decode(result['audio_base64']))
    print("✓ Audio saved to output.wav")
```

## 📊 Input/Output Format

### Input

```json
{
  "input": {
    "audio_base64": "UklGRi4...",   // Base64-encoded audio (optional)
    "text": "Hello world",          // Text input (optional)
    "prompt": "You are helpful",    // System prompt (optional)
    "max_tokens": 256,              // Max output tokens
    "temperature": 0.7,             // Sampling temperature
    "return_audio": true            // Synthesize an audio response
  }
}
```

### Output

```json
{
  "text": "Hello! I'm doing great, thank you for asking...",
  "audio_base64": "UklGRi4...",   // Generated audio (WAV)
  "timing": {
    "inference_time": 0.35,       // Ultravox inference
    "tts_time": 0.15,             // Kokoro TTS
    "total_time": 0.52            // End-to-end
  }
}
```
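
For audio input, the `audio_base64` field carries a base64-encoded WAV file. A small helper for building such a request (the helper name and its defaults are illustrative, not part of this repo):

```python
import base64

def make_audio_payload(wav_path, max_tokens=256, return_audio=True):
    """Build a request payload with base64-encoded audio from a WAV file."""
    with open(wav_path, "rb") as f:
        audio_b64 = base64.b64encode(f.read()).decode("utf-8")
    return {
        "input": {
            "audio_base64": audio_b64,
            "max_tokens": max_tokens,
            "return_audio": return_audio,
        }
    }

# Usage (assumes `endpoint` is set up as in the test snippet above):
# result = endpoint.run_sync(make_audio_payload("question.wav"))
```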

## 💰 Cost Estimates

**GPU:** RTX 4090 @ $0.34/hour

### Testing (1 hour)
- Cold starts: 3 × 10s = 30s
- Warm requests: 20 × 0.5s = 10s
- **Total:** ~40s active = **$0.004** (~half a cent)

### Production (2 hours/day)
- Monthly: 60 hours × $0.34 = **$20/month**

### Always-On (24/7)
- Monthly: 720 hours × $0.34 = **$245/month**

**Recommendation:** Use `min_workers=0` to save costs!
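
These figures follow directly from billed active seconds times the hourly rate:

```python
GPU_RATE_PER_SEC = 0.34 / 3600  # RTX 4090 at $0.34/hour

def cost(active_seconds):
    """Serverless cost for a given amount of billed GPU time."""
    return active_seconds * GPU_RATE_PER_SEC

testing = cost(3 * 10 + 20 * 0.5)   # ~40s active, roughly $0.004
production = cost(2 * 30 * 3600)    # 60 h/month, roughly $20.40
always_on = cost(24 * 30 * 3600)    # 720 h/month, roughly $244.80
```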

## 🎯 Performance Targets

| Metric | Target | Actual |
|--------|--------|--------|
| Cold Start | <15s | 8-12s ✅ |
| Warm Inference | <1s | 0.3-0.5s ✅ |
| Ultravox Latency | <500ms | ~350ms ✅ |
| TTS Latency | <300ms | ~150ms ✅ |
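
To check these numbers against your own deployment, wrap any endpoint call in a simple timer (assumes `endpoint` is configured as in the test snippet above):

```python
import time

def time_call(fn, *args, **kwargs):
    """Return (result, wall-clock seconds) for a single call."""
    start = time.perf_counter()
    result = fn(*args, **kwargs)
    return result, time.perf_counter() - start

# Usage:
# result, latency = time_call(endpoint.run_sync,
#                             {"input": {"text": "Hi", "max_tokens": 50}})
# print(f"end-to-end latency: {latency:.2f}s")
```

Run it a few times after the first request so you measure warm inference rather than a cold start.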

## 🔧 Customization

### Use Different Models

Edit `Dockerfile`:

```dockerfile
# Use Ultravox v0.5 instead
RUN python -c "from transformers import AutoModelForCausalLM, AutoProcessor; \
    AutoModelForCausalLM.from_pretrained('fixie-ai/ultravox-v0_5-llama-3_1-8b'); \
    AutoProcessor.from_pretrained('fixie-ai/ultravox-v0_5-llama-3_1-8b')"
```

### Adjust GPU Memory

For a GPU with less VRAM (e.g., a 20 GB card):
- Use 4-bit quantization, or
- Deploy Ultravox only (skip Kokoro)

### Add More Features

Edit `handler.py`:
- Voice cloning with Kokoro
- Streaming responses
- Multi-language support

## 🐛 Troubleshooting

### Build fails with "No space left"
```bash
docker system prune -a
```

### Push fails with "denied"
```bash
docker login
```

### Cold start >30s
- Check that the models are pre-downloaded in the Dockerfile
- Use network-attached storage in RunPod

### Out of memory errors
- Use a larger GPU, or
- Enable 4-bit quantization, or
- Reduce batch size

## 📚 Resources

- **RunPod Docs:** https://docs.runpod.io/serverless/overview
- **Ultravox:** https://github.com/fixie-ai/ultravox
- **Kokoro TTS:** https://huggingface.co/hexgrad/Kokoro-82M

## ✅ Checklist

- [ ] Docker installed and logged in
- [ ] Built the Docker image (`./build.sh`)
- [ ] Pushed it to Docker Hub
- [ ] Created a RunPod endpoint
- [ ] Tested inference
- [ ] Configured your service with the endpoint ID
- [ ] Measured cold-start time
- [ ] Ready for production!

---

**Need help?** Check `build.log` or the RunPod console logs for errors.