linhaotong committed on
Commit
b396ed8
·
1 Parent(s): 6a8ae2c

update paper link and clean files

EXAMPLES_DIRECTORY.md DELETED
@@ -1,286 +0,0 @@
- # 📁 Examples Directory Configuration Guide
- 
- ## 📁 Examples Directory Location
- 
- ### Default location
- 
- The examples directory should live at:
- 
- ```
- workspace/gradio/examples/
- ```
- 
- ### Full path derivation
- 
- Per the configuration in `app.py`:
- 
- ```python
- workspace_dir = os.environ.get("DA3_WORKSPACE_DIR", "workspace/gradio")
- examples_dir = os.path.join(workspace_dir, "examples")
- # Result: workspace/gradio/examples/
- ```
- 
- ## 📂 Directory Structure
- 
- The examples directory should be organized as follows:
- 
- ```
- workspace/gradio/examples/
- ├── scene1/              # Scene 1
- │   ├── 000.png          # Image files
- │   ├── 010.png
- │   ├── 020.png
- │   └── ...
- ├── scene2/              # Scene 2
- │   ├── 000.jpg
- │   ├── 010.jpg
- │   └── ...
- └── scene3/              # Scene 3
-     ├── image1.png
-     ├── image2.png
-     └── ...
- ```
- 
- ### Requirements
- 
- 1. **One folder per scene**: each scene gets its own folder
- 2. **Folder name**: the folder name is displayed as the scene name
- 3. **Image files**: the `.jpg`, `.jpeg`, `.png`, `.bmp`, `.tiff`, and `.tif` formats are supported
- 4. **First image**: the first image (sorted by filename) is used as the thumbnail
- 
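The rules above can be sketched as a small directory scanner. This is a hypothetical helper for illustration only (it is not the app's actual `get_scene_info` implementation); it applies the same extension filter and the same sort-by-filename thumbnail rule:

```python
import os

# Extensions accepted per the list above.
IMAGE_EXTENSIONS = (".jpg", ".jpeg", ".png", ".bmp", ".tiff", ".tif")

def scan_example_scenes(examples_dir):
    """Return {scene_name: sorted image filenames}.

    The first entry of each list is the thumbnail candidate,
    since images are sorted by filename."""
    scenes = {}
    if not os.path.isdir(examples_dir):
        return scenes
    for name in sorted(os.listdir(examples_dir)):
        scene_dir = os.path.join(examples_dir, name)
        if not os.path.isdir(scene_dir):
            continue  # only folders count as scenes
        images = sorted(
            f for f in os.listdir(scene_dir)
            if f.lower().endswith(IMAGE_EXTENSIONS)
        )
        if images:
            scenes[name] = images
    return scenes
```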
- ## 🔧 Configuration Options
- 
- ### Option 1: Use the default path (recommended)
- 
- Create the directory directly:
- 
- ```bash
- mkdir -p workspace/gradio/examples
- ```
- 
- Then add scenes:
- 
- ```bash
- # Create a scene folder
- mkdir -p workspace/gradio/examples/my_scene
- 
- # Copy image files
- cp your_images/* workspace/gradio/examples/my_scene/
- ```
- 
- ### Option 2: Use an environment variable
- 
- Customize the location via an environment variable:
- 
- ```bash
- # Set the environment variable
- export DA3_WORKSPACE_DIR="/path/to/your/workspace"
- 
- # examples will then live at /path/to/your/workspace/examples
- ```
- 
- Or change the default in `app.py`:
- 
- ```python
- workspace_dir = os.environ.get("DA3_WORKSPACE_DIR", "/custom/path/workspace")
- ```
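A quick way to see how the override resolves is to wrap the two lines quoted from `app.py` in a function (a minimal sketch; `resolve_examples_dir` is a hypothetical name, not an app API):

```python
import os

def resolve_examples_dir(env=os.environ):
    """Resolve the examples directory the same way app.py does."""
    workspace_dir = env.get("DA3_WORKSPACE_DIR", "workspace/gradio")
    return os.path.join(workspace_dir, "examples")
```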
- 
- ### Option 3: On Hugging Face Spaces
- 
- In Spaces, you can add examples in the following ways:
- 
- 1. **Upload via Git**:
-    ```bash
-    git add workspace/gradio/examples/
-    git commit -m "Add example scenes"
-    git push
-    ```
- 
- 2. **Upload via the web UI**:
-    - Create the `workspace/gradio/examples/` directory in the Spaces file browser
-    - Upload scene folders and images
- 
- 3. **Use persistent storage**:
-    - With persistent storage enabled, examples are kept across restarts
-    - The path is still `workspace/gradio/examples/`
- 
- ## 📝 Sample Scene Layouts
- 
- ### Example 1: A single scene
- 
- ```
- workspace/gradio/examples/
- └── indoor_room/
-     ├── 000.png
-     ├── 010.png
-     ├── 020.png
-     └── 030.png
- ```
- 
- ### Example 2: Multiple scenes
- 
- ```
- workspace/gradio/examples/
- ├── outdoor_garden/
- │   ├── frame_001.jpg
- │   ├── frame_002.jpg
- │   └── frame_003.jpg
- ├── office_space/
- │   ├── img_000.png
- │   ├── img_010.png
- │   └── img_020.png
- └── street_scene/
-     ├── 000.png
-     ├── 010.png
-     └── 020.png
- ```
- 
- ## 🔍 Verifying the Examples Directory
- 
- ### Check that the directory exists
- 
- ```bash
- # Check the default location
- ls -la workspace/gradio/examples/
- 
- # Or use Python
- python -c "
- import os
- workspace_dir = os.environ.get('DA3_WORKSPACE_DIR', 'workspace/gradio')
- examples_dir = os.path.join(workspace_dir, 'examples')
- print(f'Examples directory: {examples_dir}')
- print(f'Exists: {os.path.exists(examples_dir)}')
- if os.path.exists(examples_dir):
-     scenes = [d for d in os.listdir(examples_dir) if os.path.isdir(os.path.join(examples_dir, d))]
-     print(f'Found {len(scenes)} scenes: {scenes}')
- "
- ```
- 
- ### Check the scene info
- 
- On startup, the app automatically scans the examples directory and logs what it finds:
- 
- ```
- Found 3 example scenes:
-   - scene1 (5 images)
-   - scene2 (10 images)
-   - scene3 (8 images)
- ```
- 
- ## 🚀 Quick Start
- 
- ### 1. Create the directory structure
- 
- ```bash
- # From the project root
- mkdir -p workspace/gradio/examples
- ```
- 
- ### 2. Add an example scene
- 
- ```bash
- # Create a scene folder
- mkdir -p workspace/gradio/examples/my_first_scene
- 
- # Add image files (copy your images)
- cp /path/to/your/images/* workspace/gradio/examples/my_first_scene/
- ```
- 
- ### 3. Verify
- 
- After starting the app, you should see the example scene grid in the UI.
- 
- ## 📊 On Hugging Face Spaces
- 
- ### Upload options
- 
- 1. **Via Git** (recommended):
-    ```bash
-    # Prepare examples locally
-    mkdir -p workspace/gradio/examples
-    # ... add scenes ...
- 
-    # Commit and push
-    git add workspace/gradio/examples/
-    git commit -m "Add example scenes"
-    git push
-    ```
- 
- 2. **Via the web UI**:
-    - In the Spaces file browser,
-    - create the `workspace/gradio/examples/` directory
-    - and upload the scene folders
- 
- ### Notes
- 
- - **File size limits**: make sure image files stay within the Spaces file size limits
- - **Persistent storage**: with persistent storage enabled, examples persist across restarts
- - **Caching**: results for example scenes are cached under `workspace/gradio/input_images/`
- 
- ## 🔗 Related Configuration
- 
- ### Environment variables
- 
- - `DA3_WORKSPACE_DIR`: workspace directory (default: `workspace/gradio`)
- - The examples directory is derived from it automatically: `{DA3_WORKSPACE_DIR}/examples`
- 
- ### Where this is wired up in the code
- 
- - `depth_anything_3/app/gradio_app.py`: the `cache_examples()` method
- - `depth_anything_3/app/modules/utils.py`: the `get_scene_info()` function
- - `depth_anything_3/app/modules/event_handlers.py`: the `load_example_scene()` method
- 
- ## ❓ FAQ
- 
- ### Q: What if the examples directory doesn't exist?
- 
- A: The app creates `workspace/gradio/` automatically, but not the `examples/` subdirectory. You need to create it manually:
- 
- ```bash
- mkdir -p workspace/gradio/examples
- ```
- 
- ### Q: How do I add a new example scene?
- 
- A: Just create a new folder under `workspace/gradio/examples/` and add images:
- 
- ```bash
- mkdir -p workspace/gradio/examples/new_scene
- cp images/* workspace/gradio/examples/new_scene/
- ```
- 
- The app picks up new scenes automatically on the next startup.
- 
- ### Q: How is the scene name displayed?
- 
- A: The scene name is simply the folder name. For example:
- - Folder: `workspace/gradio/examples/indoor_room/`
- - Displayed name: `indoor_room`
- 
- ### Q: How is the thumbnail chosen?
- 
- A: The thumbnail is the first image in the folder after sorting by filename.
- 
- ## 📝 Summary
- 
- **Examples directory location:**
- - **Default**: `workspace/gradio/examples/`
- - **Customizable** via the `DA3_WORKSPACE_DIR` environment variable
- 
- **Directory structure:**
- ```
- workspace/gradio/examples/
- ├── scene1/
- │   └── images...
- ├── scene2/
- │   └── images...
- └── scene3/
-     └── images...
- ```
- 
- **Quick creation:**
- ```bash
- mkdir -p workspace/gradio/examples
- # Then add scene folders and images
- ```
SPACES_GPU_BEST_PRACTICES.md DELETED
@@ -1,481 +0,0 @@
- # 🎯 Spaces GPU Best Practices Guide
- 
- ## 📚 How spaces.GPU Works
- 
- ### Architecture overview
- 
- ```
- ┌──────────────────────────────────────────────────────────┐
- │  Main process                                            │
- │  - CPU environment                                       │
- │  - ❌ Must NOT initialize CUDA                           │
- │  - ✅ May create the Gradio UI                           │
- │  - ✅ May create a ModelInference instance               │
- │       (but must not load the model)                      │
- └──────────────────────────────────────────────────────────┘
-                  │
-                  │  call a function decorated with @spaces.GPU
-                  ▼
- ┌──────────────────────────────────────────────────────────┐
- │  GPU worker subprocess                                   │
- │  - GPU environment                                       │
- │  - ✅ May initialize CUDA                                │
- │  - ✅ May load the model onto the GPU                    │
- │  - ✅ Runs inference                                     │
- │  - ✅ Global-variable cache (independent per subprocess) │
- └──────────────────────────────────────────────────────────┘
-                  │
-                  │  return value is pickled back
-                  ▼
- ┌──────────────────────────────────────────────────────────┐
- │  Main process receives the return value                  │
- │  - ✅ Must be CPU data (numpy, plain Python types)       │
- │  - ❌ Must not contain CUDA tensors                      │
- └──────────────────────────────────────────────────────────┘
- ```
- 
- ## ✅ Best Practice: Model Loading Strategy
- 
- ### ❌ Wrong approach 1: loading the model in the main process
- 
- ```python
- # ❌ Wrong: load the model in the main process
- class EventHandlers:
-     def __init__(self):
-         self.model_inference = ModelInference()
-         # ❌ Calling this in the main process triggers a CUDA init error
-         self.model_inference.initialize_model("cuda")  # 💥
- ```
- 
- **Why is this wrong?**
- - The main process must not initialize CUDA
- - It fails immediately with: `CUDA must not be initialized in the main process`
- 
- ### ❌ Wrong approach 2: storing the model in an instance variable
- 
- ```python
- # ❌ Wrong: store the model in an instance variable
- class ModelInference:
-     def __init__(self):
-         self.model = None  # ❌ instance variable
- 
-     def initialize_model(self, device):
-         if self.model is None:
-             self.model = load_model()  # ❌ saved on the instance
-         return self.model
- ```
- 
- **Why is this wrong?**
- - The instance is created in the main process
- - Model state can get confused across processes
- - On the second call, the state is indeterminate
- 
- ### ✅ Correct approach: a global-variable cache in the subprocess
- 
- ```python
- # ✅ Correct: cache in a global variable inside the subprocess
- _MODEL_CACHE = None  # global variable, independent per subprocess
- 
- class ModelInference:
-     def __init__(self):
-         # ✅ store no state
-         pass
- 
-     def initialize_model(self, device: str = "cuda"):
-         global _MODEL_CACHE
- 
-         if _MODEL_CACHE is None:
-             # ✅ load in the subprocess (on first call)
-             print("Loading model in GPU subprocess...")
-             model_dir = os.environ.get("DA3_MODEL_DIR", "...")
-             _MODEL_CACHE = DepthAnything3.from_pretrained(model_dir)
-             _MODEL_CACHE = _MODEL_CACHE.to(device)  # ✅ moved inside the subprocess
-             _MODEL_CACHE.eval()
-         else:
-             # ✅ reuse the cached model
-             print("Using cached model")
- 
-         return _MODEL_CACHE  # ✅ return the model; do not store it
- ```
- 
- **Why is this correct?**
- - ✅ The model is only loaded in the subprocess (GPU environment)
- - ✅ A global variable is safe within a subprocess (each subprocess is independent)
- - ✅ The main process stays clean
- - ✅ The cache avoids reloading on repeat calls
- ## ๐ŸŽฏ ๅฎŒๆ•ดๅฎž็Žฐ็คบไพ‹
109
-
110
- ### ๆ–‡ไปถ็ป“ๆž„
111
-
112
- ```
113
- app.py # ไธปๅ…ฅๅฃ๏ผŒ้…็ฝฎ @spaces.GPU
114
- depth_anything_3/app/modules/
115
- โ”œโ”€โ”€ model_inference.py # ๆจกๅž‹ๆŽจ็†๏ผˆไฝฟ็”จๅ…จๅฑ€ๅ˜้‡๏ผ‰
116
- โ””โ”€โ”€ event_handlers.py # ไบ‹ไปถๅค„็†๏ผˆไธป่ฟ›็จ‹๏ผŒไธๅŠ ่ฝฝๆจกๅž‹๏ผ‰
117
- ```
118
-
119
- ### 1. app.py - ่ฃ…้ฅฐๅ™จ้…็ฝฎ
120
-
121
- ```python
122
- import spaces
123
- from depth_anything_3.app.modules.model_inference import ModelInference
124
-
125
- # โœ… ่ฃ…้ฅฐ run_inference ๆ–นๆณ•
126
- original_run_inference = ModelInference.run_inference
127
-
128
- @spaces.GPU(duration=120)
129
- def gpu_run_inference(self, *args, **kwargs):
130
- """
131
- ๅœจ GPU ๅญ่ฟ›็จ‹ไธญ่ฟ่กŒๆŽจ็†ใ€‚
132
-
133
- ่ฟ™ไธชๅ‡ฝๆ•ฐไผšๅœจ็‹ฌ็ซ‹็š„ GPU ๅญ่ฟ›็จ‹ไธญๆ‰ง่กŒ๏ผŒ
134
- ๅฏไปฅๅฎ‰ๅ…จๅœฐๅˆๅง‹ๅŒ– CUDA ๅ’ŒๅŠ ่ฝฝๆจกๅž‹ใ€‚
135
- """
136
- return original_run_inference(self, *args, **kwargs)
137
-
138
- # ๆ›ฟๆขๅŽŸๆ–นๆณ•
139
- ModelInference.run_inference = gpu_run_inference
140
-
141
- # โœ… ไธป่ฟ›็จ‹๏ผšๅชๅˆ›ๅปบๅบ”็”จ๏ผŒไธๅŠ ่ฝฝๆจกๅž‹
142
- if __name__ == "__main__":
143
- app = DepthAnything3App(...)
144
- app.launch(host="0.0.0.0", port=7860)
145
- ```
146
-
147
- ### 2. model_inference.py - ๆจกๅž‹็ฎก็†
148
-
149
- ```python
150
- import torch
151
- from depth_anything_3.api import DepthAnything3
152
-
153
- # ========================================
154
- # โœ… ๅ…จๅฑ€ๅ˜้‡็ผ“ๅญ˜๏ผˆๅญ่ฟ›็จ‹ๅฎ‰ๅ…จ๏ผ‰
155
- # ========================================
156
- _MODEL_CACHE = None
157
-
158
- class ModelInference:
159
- def __init__(self):
160
- """
161
- ๅˆๅง‹ๅŒ– - ไธๅญ˜ๅ‚จไปปไฝ•็Šถๆ€ใ€‚
162
-
163
- ๆณจๆ„๏ผš่ฟ™ไธชๅฎžไพ‹ๅœจไธป่ฟ›็จ‹ๅˆ›ๅปบ๏ผŒไฝ†ๆจกๅž‹ๅŠ ่ฝฝๅœจๅญ่ฟ›็จ‹ใ€‚
164
- """
165
- pass # โœ… ๆ— ๅฎžไพ‹ๅ˜้‡
166
-
167
- def initialize_model(self, device: str = "cuda"):
168
- """
169
- ๅœจๅญ่ฟ›็จ‹ไธญๅŠ ่ฝฝๆจกๅž‹ใ€‚
170
-
171
- ไฝฟ็”จๅ…จๅฑ€ๅ˜้‡็ผ“ๅญ˜๏ผŒๅ› ไธบ๏ผš
172
- 1. @spaces.GPU ๅœจๅญ่ฟ›็จ‹่ฟ่กŒ
173
- 2. ๆฏไธชๅญ่ฟ›็จ‹ๆœ‰็‹ฌ็ซ‹็š„ๅ…จๅฑ€ๅ‘ฝๅ็ฉบ้—ด
174
- 3. ๅฏไปฅๅฎ‰ๅ…จ็ผ“ๅญ˜๏ผŒ้ฟๅ…้‡ๅคๅŠ ่ฝฝ
175
- """
176
- global _MODEL_CACHE
177
-
178
- if _MODEL_CACHE is None:
179
- # ็ฌฌไธ€ๆฌก่ฐƒ็”จ๏ผšๅŠ ่ฝฝๆจกๅž‹
180
- model_dir = os.environ.get("DA3_MODEL_DIR", "...")
181
- print(f"๐Ÿ”„ Loading model in GPU subprocess from {model_dir}")
182
-
183
- _MODEL_CACHE = DepthAnything3.from_pretrained(model_dir)
184
- _MODEL_CACHE = _MODEL_CACHE.to(device) # โœ… ๅœจๅญ่ฟ›็จ‹ไธญ็งปๅŠจ
185
- _MODEL_CACHE.eval()
186
-
187
- print(f"โœ… Model loaded on {device}")
188
- else:
189
- # ๅŽ็ปญ่ฐƒ็”จ๏ผšๅค็”จ็ผ“ๅญ˜
190
- print("โœ… Using cached model")
191
- # ็กฎไฟๅœจๆญฃ็กฎ็š„่ฎพๅค‡ไธŠ๏ผˆ้˜ฒๅพกๆ€ง็ผ–็จ‹๏ผ‰
192
- _MODEL_CACHE = _MODEL_CACHE.to(device)
193
-
194
- return _MODEL_CACHE
195
-
196
- def run_inference(self, target_dir, ...):
197
- """
198
- ่ฟ่กŒๆŽจ็† - ๅœจ GPU ๅญ่ฟ›็จ‹ไธญๆ‰ง่กŒใ€‚
199
-
200
- ่ฟ™ไธชๅ‡ฝๆ•ฐ่ขซ @spaces.GPU ่ฃ…้ฅฐ๏ผŒไผšๅœจๅญ่ฟ›็จ‹่ฟ่กŒใ€‚
201
- """
202
- # โœ… ๅœจๅญ่ฟ›็จ‹ไธญ่Žทๅ–ๆจกๅž‹๏ผˆๅฑ€้ƒจๅ˜้‡๏ผ‰
203
- device = "cuda" if torch.cuda.is_available() else "cpu"
204
- model = self.initialize_model(device) # โœ… ่ฟ”ๅ›žๆจกๅž‹๏ผŒไธๅญ˜ๅ‚จ
205
-
206
- # โœ… ่ฟ่กŒๆŽจ็†
207
- with torch.no_grad():
208
- prediction = model.inference(...)
209
-
210
- # โœ… ๅค„็†็ป“ๆžœ
211
- # ...
212
-
213
- # โœ… ๅ…ณ้”ฎ๏ผš่ฟ”ๅ›žๅ‰็งปๅŠจๆ‰€ๆœ‰ CUDA ๅผ ้‡ๅˆฐ CPU
214
- prediction = self._move_to_cpu(prediction)
215
-
216
- return prediction, processed_data
217
-
218
- def _move_to_cpu(self, prediction):
219
- """็งปๅŠจๆ‰€ๆœ‰ CUDA ๅผ ้‡ๅˆฐ CPU๏ผŒ็กฎไฟ pickle ๅฎ‰ๅ…จ"""
220
- # ... ๅฎž็Žฐ่งไธ‹ๆ–‡
221
- return prediction
222
- ```
223
-
224
- ### 3. event_handlers.py - ไธป่ฟ›็จ‹ไปฃ็ 
225
-
226
- ```python
227
- class EventHandlers:
228
- def __init__(self):
229
- """
230
- ไธป่ฟ›็จ‹ๅˆๅง‹ๅŒ– - ไธๅŠ ่ฝฝๆจกๅž‹ใ€‚
231
-
232
- ๆณจๆ„๏ผš่ฟ™้‡Œๅˆ›ๅปบ ModelInference ๅฎžไพ‹ๆ˜ฏๅฎ‰ๅ…จ็š„๏ผŒ
233
- ๅ› ไธบๅฎƒไธ็ซ‹ๅณๅŠ ่ฝฝๆจกๅž‹ใ€‚ๆจกๅž‹ไผšๅœจๅญ่ฟ›็จ‹ไธญๅŠ ่ฝฝใ€‚
234
- """
235
- # โœ… ๅฏไปฅๅˆ›ๅปบๅฎžไพ‹๏ผˆไธๅŠ ่ฝฝๆจกๅž‹๏ผ‰
236
- self.model_inference = ModelInference()
237
-
238
- # โŒ ไธ่ฆๅœจ่ฟ™้‡Œ่ฐƒ็”จ initialize_model()
239
- # โŒ ไธ่ฆๅœจ่ฟ™้‡ŒๅŠ ่ฝฝๆจกๅž‹
240
-
241
- def gradio_demo(self, ...):
242
- """
243
- Gradio ๅ›ž่ฐƒ - ๅœจไธป่ฟ›็จ‹่ฐƒ็”จใ€‚
244
-
245
- ่ฟ™ไธชๅ‡ฝๆ•ฐไผš่ฐƒ็”จ self.model_inference.run_inference๏ผŒ
246
- ่€Œ run_inference ่ขซ @spaces.GPU ่ฃ…้ฅฐ๏ผŒไผšๅœจๅญ่ฟ›็จ‹่ฟ่กŒใ€‚
247
- """
248
- # โœ… ่ฐƒ็”จ่ขซ่ฃ…้ฅฐ็š„ๆ–นๆณ•๏ผˆ่‡ชๅŠจๅœจๅญ่ฟ›็จ‹่ฟ่กŒ๏ผ‰
249
- result = self.model_inference.run_inference(...)
250
- return result
251
- ```
252
-
253
- ## ๐Ÿ”‘ ๅ…ณ้”ฎๅŽŸๅˆ™ๆ€ป็ป“
254
-
255
- ### โœ… DO๏ผˆๅบ”่ฏฅๅš๏ผ‰
256
-
257
- 1. **ไธป่ฟ›็จ‹๏ผšๅชๅˆ›ๅปบๅฎžไพ‹๏ผŒไธๅŠ ่ฝฝๆจกๅž‹**
258
- ```python
259
- # โœ… ไธป่ฟ›็จ‹
260
- model_inference = ModelInference() # ๅฎ‰ๅ…จ
261
- # ไธ่ฐƒ็”จ initialize_model()
262
- ```
263
-
264
- 2. **ๅญ่ฟ›็จ‹๏ผšไฝฟ็”จๅ…จๅฑ€ๅ˜้‡็ผ“ๅญ˜ๆจกๅž‹**
265
- ```python
266
- # โœ… ๅญ่ฟ›็จ‹๏ผˆ@spaces.GPU ่ฃ…้ฅฐ็š„ๅ‡ฝๆ•ฐๅ†…๏ผ‰
267
- _MODEL_CACHE = None # ๅ…จๅฑ€ๅ˜้‡
268
- model = initialize_model() # ๅœจๅญ่ฟ›็จ‹ๅŠ ่ฝฝ
269
- ```
270
-
271
- 3. **่ฟ”ๅ›žๅ‰๏ผš็งปๅŠจๆ‰€ๆœ‰ๅผ ้‡ๅˆฐ CPU**
272
- ```python
273
- # โœ… ่ฟ”ๅ›žๅ‰
274
- prediction = move_all_tensors_to_cpu(prediction)
275
- return prediction
276
- ```
277
-
278
- 4. **ๆธ…็† GPU ๅ†…ๅญ˜**
279
- ```python
280
- # โœ… ๆŽจ็†ๅŽ
281
- torch.cuda.empty_cache()
282
- ```
283
-
284
- ### โŒ DON'T๏ผˆไธๅบ”่ฏฅๅš๏ผ‰
285
-
286
- 1. **ไธป่ฟ›็จ‹๏ผšไธ่ฆๅˆๅง‹ๅŒ– CUDA**
287
- ```python
288
- # โŒ ไธป่ฟ›็จ‹
289
- model.to("cuda") # ๐Ÿ’ฅ ้”™่ฏฏ
290
- torch.cuda.is_available() # ๐Ÿ’ฅ ๅฏ่ƒฝ่งฆๅ‘ๅˆๅง‹ๅŒ–
291
- ```
292
-
293
- 2. **ไธ่ฆ็”จๅฎžไพ‹ๅ˜้‡ๅญ˜ๅ‚จๆจกๅž‹**
294
- ```python
295
- # โŒ
296
- self.model = load_model() # ็Šถๆ€ๆททไนฑ
297
- ```
298
-
299
- 3. **ไธ่ฆ่ฟ”ๅ›ž CUDA ๅผ ้‡**
300
- ```python
301
- # โŒ
302
- return prediction # ๅฆ‚ๆžœๅŒ…ๅซ CUDA ๅผ ้‡๏ผŒไผšๆŠฅ้”™
303
- ```
304
-
305
- 4. **ไธ่ฆๅœจ __init__ ไธญๅŠ ่ฝฝๆจกๅž‹**
306
- ```python
307
- # โŒ
308
- def __init__(self):
309
- self.model = load_model() # ๅœจไธป่ฟ›็จ‹ๆ‰ง่กŒ๏ผŒไผšๆŠฅ้”™
310
- ```
311
-
312
- ## ๐Ÿ“Š ๆ‰ง่กŒๆต็จ‹ๅฏนๆฏ”
313
-
314
- ### โŒ ้”™่ฏฏๆต็จ‹
315
-
316
- ```
317
- ไธป่ฟ›็จ‹ๅฏๅŠจ
318
- โ†“
319
- ๅˆ›ๅปบ ModelInference() ๅฎžไพ‹
320
- โ†“
321
- __init__ ไธญ self.model = None # โœ… ๅฎ‰ๅ…จ
322
- โ†“
323
- ็ฌฌไธ€ๆฌก่ฐƒ็”จ run_inference
324
- โ†“
325
- @spaces.GPU ๅˆ›ๅปบๅญ่ฟ›็จ‹
326
- โ†“
327
- ๅญ่ฟ›็จ‹๏ผšself.model = load_model() # โœ… ๅœจๅญ่ฟ›็จ‹
328
- โ†“
329
- ่ฟ”ๅ›ž prediction๏ผˆๅŒ…ๅซ CUDA ๅผ ้‡๏ผ‰ # โŒ ้”™่ฏฏ
330
- โ†“
331
- pickle ๅฐ่ฏ•ๅœจไธป่ฟ›็จ‹้‡ๅปบ CUDA ๅผ ้‡ # ๐Ÿ’ฅ ๆŠฅ้”™
332
- ```
333
-
334
- ### โœ… ๆญฃ็กฎๆต็จ‹
335
-
336
- ```
337
- ไธป่ฟ›็จ‹ๅฏๅŠจ
338
- โ†“
339
- ๅˆ›ๅปบ ModelInference() ๅฎžไพ‹๏ผˆๆ— ็Šถๆ€๏ผ‰ # โœ…
340
- โ†“
341
- ็ฌฌไธ€ๆฌก่ฐƒ็”จ run_inference
342
- โ†“
343
- @spaces.GPU ๅˆ›ๅปบๅญ่ฟ›็จ‹
344
- โ†“
345
- ๅญ่ฟ›็จ‹๏ผš_MODEL_CACHE = load_model() # โœ… ๅ…จๅฑ€ๅ˜้‡
346
- โ†“
347
- ๅญ่ฟ›็จ‹๏ผšmodel = _MODEL_CACHE # โœ… ๅฑ€้ƒจๅ˜้‡
348
- โ†“
349
- ๅญ่ฟ›็จ‹๏ผšprediction = model.inference(...)
350
- โ†“
351
- ๅญ่ฟ›็จ‹๏ผšprediction = move_to_cpu(prediction) # โœ…
352
- โ†“
353
- ่ฟ”ๅ›ž prediction๏ผˆๆ‰€ๆœ‰ๅผ ้‡ๅœจ CPU๏ผ‰ # โœ…
354
- โ†“
355
- ไธป่ฟ›็จ‹๏ผšๅฎ‰ๅ…จๆŽฅๆ”ถ CPU ๆ•ฐๆฎ # โœ…
356
- ```
357
-
358
- ## ๐Ÿงช ้ชŒ่ฏๆธ…ๅ•
359
-
360
- ### ไธป่ฟ›็จ‹ๆฃ€ๆŸฅ
361
-
362
- ```python
363
- # โœ… ๅบ”่ฏฅ้€š่ฟ‡
364
- def test_main_process():
365
- # ๅฏไปฅๅˆ›ๅปบๅฎžไพ‹
366
- model_inference = ModelInference()
367
-
368
- # ไธๅบ”่ฏฅๆœ‰ๆจกๅž‹
369
- assert not hasattr(model_inference, 'model') or model_inference.model is None
370
-
371
- # ไธๅบ”่ฏฅๅˆๅง‹ๅŒ– CUDA
372
- # (่ฟ™ไธชๆต‹่ฏ•้œ€่ฆๅœจไธป่ฟ›็จ‹่ฟ่กŒ)
373
- ```
374
-
375
- ### ๅญ่ฟ›็จ‹ๆฃ€ๆŸฅ
376
-
377
- ```python
378
- # โœ… ๅบ”่ฏฅ้€š่ฟ‡
379
- @spaces.GPU
380
- def test_gpu_subprocess():
381
- model_inference = ModelInference()
382
-
383
- # ๅฏไปฅๅŠ ่ฝฝๆจกๅž‹
384
- model = model_inference.initialize_model("cuda")
385
- assert model is not None
386
-
387
- # ๆจกๅž‹ๅบ”่ฏฅๅœจ GPU
388
- # (ๆฃ€ๆŸฅๆจกๅž‹ๅ‚ๆ•ฐ่ฎพๅค‡)
389
-
390
- # ๅฏไปฅ่ฟ่กŒๆŽจ็†
391
- # ...
392
-
393
- # ่ฟ”ๅ›žๅ‰ๅบ”่ฏฅ็งปๅˆฐ CPU
394
- # ...
395
- ```
396
-
397
- ## ๐ŸŽ“ ๅธธ่ง้—ฎ้ข˜
398
-
399
- ### Q1: ไธบไป€ไนˆไธ่ƒฝ็”จๅฎžไพ‹ๅ˜้‡๏ผŸ
400
-
401
- **A:** ๅ› ไธบๅฎžไพ‹ๅœจไธป่ฟ›็จ‹ๅˆ›ๅปบ๏ผŒๅฆ‚ๆžœๅญ˜ๅ‚จๆจกๅž‹็Šถๆ€๏ผŒไผš่ทจ่ฟ›็จ‹ๆททไนฑใ€‚
402
-
403
- ```python
404
- # โŒ ้—ฎ้ข˜
405
- self.model = load_model() # ็Šถๆ€ๅฏ่ƒฝๆททไนฑ
406
-
407
- # โœ… ่งฃๅ†ณ
408
- _MODEL_CACHE = load_model() # ๆฏไธชๅญ่ฟ›็จ‹็‹ฌ็ซ‹
409
- ```
410
-
411
- ### Q2: ๅ…จๅฑ€ๅ˜้‡ๅฎ‰ๅ…จๅ—๏ผŸ
412
-
413
- **A:** ๆ˜ฏ็š„๏ผๅ› ไธบ๏ผš
414
- - ๆฏไธชๅญ่ฟ›็จ‹ๆœ‰็‹ฌ็ซ‹็š„ๅ…จๅฑ€ๅ‘ฝๅ็ฉบ้—ด
415
- - ไธป่ฟ›็จ‹ไธไผš่ฎฟ้—ฎๅญ่ฟ›็จ‹็š„ๅ…จๅฑ€ๅ˜้‡
416
- - ไธไผš่ทจ่ฟ›็จ‹ๆฑกๆŸ“
417
-
418
- ### Q3: ๆจกๅž‹ไผš้‡ๅคๅŠ ่ฝฝๅ—๏ผŸ
419
-
420
- **A:** ไธไผš๏ผๅ› ไธบ๏ผš
421
- - ๅ…จๅฑ€ๅ˜้‡ๅœจๅญ่ฟ›็จ‹ๅ†…็ผ“ๅญ˜
422
- - ๅŒไธ€ไธชๅญ่ฟ›็จ‹็š„ๅคšๆฌก่ฐƒ็”จไผšๅค็”จ
423
- - ไธๅŒๅญ่ฟ›็จ‹ๅ„่‡ช็ผ“ๅญ˜๏ผˆๅฆ‚ๆžœ้œ€่ฆ๏ผ‰
424
-
425
- ### Q4: ๅฆ‚ไฝ•ๆธ…็†ๆจกๅž‹๏ผŸ
426
-
427
- **A:** ้€šๅธธไธ้œ€่ฆๆ‰‹ๅŠจๆธ…็†๏ผŒๅ› ไธบ๏ผš
428
- - ๅญ่ฟ›็จ‹็ป“ๆŸๅŽ่‡ชๅŠจๆธ…็†
429
- - ๅฆ‚ๆžœ้œ€่ฆ๏ผŒๅฏไปฅๅœจๅญ่ฟ›็จ‹ไธญ๏ผš
430
- ```python
431
- global _MODEL_CACHE
432
- _MODEL_CACHE = None
433
- del model
434
- torch.cuda.empty_cache()
435
- ```
436
-
437
- ## ๐Ÿ“ ๅฎŒๆ•ดไปฃ็ ๆจกๆฟ
438
-
439
- ```python
440
- # ========================================
441
- # model_inference.py
442
- # ========================================
443
- _MODEL_CACHE = None # ๅ…จๅฑ€็ผ“ๅญ˜
444
-
445
- class ModelInference:
446
- def __init__(self):
447
- pass # ๆ— ็Šถๆ€
448
-
449
- def initialize_model(self, device="cuda"):
450
- global _MODEL_CACHE
451
- if _MODEL_CACHE is None:
452
- _MODEL_CACHE = load_model().to(device)
453
- return _MODEL_CACHE
454
-
455
- def run_inference(self, ...):
456
- model = self.initialize_model("cuda")
457
- prediction = model.inference(...)
458
- prediction = self._move_to_cpu(prediction)
459
- return prediction
460
-
461
- # ========================================
462
- # app.py
463
- # ========================================
464
- @spaces.GPU(duration=120)
465
- def gpu_run_inference(self, *args, **kwargs):
466
- return ModelInference.run_inference(self, *args, **kwargs)
467
-
468
- ModelInference.run_inference = gpu_run_inference
469
- ```
470
-
471
- ## ๐ŸŽฏ ๆ€ป็ป“
472
-
473
- **ๆ ธๅฟƒๅŽŸๅˆ™๏ผš**
474
-
475
- 1. โœ… **ไธป่ฟ›็จ‹ = CPU ็Žฏๅขƒ**๏ผŒไธๅŠ ่ฝฝๆจกๅž‹๏ผŒไธๅˆๅง‹ๅŒ– CUDA
476
- 2. โœ… **ๅญ่ฟ›็จ‹ = GPU ็Žฏๅขƒ**๏ผŒๅŠ ่ฝฝๆจกๅž‹๏ผŒ่ฟ่กŒๆŽจ็†
477
- 3. โœ… **ๅ…จๅฑ€ๅ˜้‡็ผ“ๅญ˜**๏ผŒๆฏไธชๅญ่ฟ›็จ‹็‹ฌ็ซ‹
478
- 4. โœ… **่ฟ”ๅ›ž CPU ๆ•ฐๆฎ**๏ผŒ็กฎไฟ pickle ๅฎ‰ๅ…จ
479
-
480
- ้ตๅพช่ฟ™ไบ›ๅŽŸๅˆ™๏ผŒไฝ ็š„ Spaces GPU ๅบ”็”จๅฐฑ่ƒฝ็จณๅฎš่ฟ่กŒ๏ผ๐Ÿš€
481
-
SPACES_GPU_FIX_GUIDE.md DELETED
@@ -1,484 +0,0 @@
- # 🔧 Complete Fix Guide for the Spaces GPU Issue
- 
- ## 🎯 Diagnosis: You Were Exactly Right!
- 
- ### Root cause analysis
- 
- ```python
- # event_handlers.py - in the main process
- class EventHandlers:
-     def __init__(self):
-         self.model_inference = ModelInference()  # ❌ instance created in the main process
- 
- # model_inference.py
- class ModelInference:
-     def __init__(self):
-         self.model = None  # ❌ instance variable; sharing state across processes is broken
- 
-     def initialize_model(self, device):
-         if self.model is None:
-             self.model = load_model()  # first call: loaded in the subprocess
-         else:
-             self.model = self.model.to(device)  # second call: 💥 a CUDA op in the main process!
- ```
- 
- ### Why does the second call fail?
- 
- 1. **First call**:
-    - `@spaces.GPU` runs in a subprocess
-    - `self.model is None` → the model is loaded
-    - `self.model` is stored on the instance
-    - on return, `prediction.gaussians` contains CUDA tensors
-    - **pickling tries to rebuild the CUDA tensors in the main process** → 💥
- 
- 2. **Second call** (even if the first one succeeded):
-    - a new subprocess, or confused state
-    - the state of `self.model` is indeterminate
-    - the `.to(device)` call fails → 💥
- 
- ## ✅ Solution: Two Key Changes
- 
- ### Change 1: cache the model in a global variable (avoid instance state)
- 
- **Why a global variable?**
- - `@spaces.GPU` runs each call in an isolated subprocess
- - a global variable is safe within a subprocess
- - it never pollutes the main process
- 
- ### Change 2: move all CUDA tensors to CPU before returning
- 
- **Why is this needed?**
- - pickling the return value tries to rebuild CUDA tensors
- - the returned data must therefore live entirely on the CPU
- 
-
54
- ## ๐Ÿ“ ๅฎŒๆ•ดไฟฎๅคไปฃ็ 
55
-
56
- ### ๆ–‡ไปถ๏ผš`depth_anything_3/app/modules/model_inference.py`
57
-
58
- ```python
59
- """
60
- Model inference module for Depth Anything 3 Gradio app.
61
-
62
- Modified for HF Spaces GPU compatibility.
63
- """
64
-
65
- import gc
66
- import glob
67
- import os
68
- from typing import Any, Dict, Optional, Tuple
69
- import numpy as np
70
- import torch
71
-
72
- from depth_anything_3.api import DepthAnything3
73
- from depth_anything_3.utils.export.glb import export_to_glb
74
- from depth_anything_3.utils.export.gs import export_to_gs_video
75
-
76
-
77
- # ========================================
78
- # ๐Ÿ”‘ ๅ…ณ้”ฎไฟฎๆ”น 1๏ผšไฝฟ็”จๅ…จๅฑ€ๅ˜้‡็ผ“ๅญ˜ๆจกๅž‹
79
- # ========================================
80
- # Global cache for model (used in GPU subprocess)
81
- # This is SAFE because @spaces.GPU runs in isolated subprocess
82
- # Each subprocess gets its own copy of this global variable
83
- _MODEL_CACHE = None
84
-
85
-
86
- class ModelInference:
87
- """
88
- Handles model inference and data processing for Depth Anything 3.
89
-
90
- Modified for HF Spaces GPU compatibility - does NOT store state
91
- in instance variables to avoid cross-process issues.
92
- """
93
-
94
- def __init__(self):
95
- """Initialize the model inference handler.
96
-
97
- Note: Do NOT store model in instance variable to avoid
98
- state sharing issues with @spaces.GPU decorator.
99
- """
100
- # No instance variables! All state in global or local variables
101
- pass
102
-
103
- def initialize_model(self, device: str = "cuda"):
104
- """
105
- Initialize the DepthAnything3 model using global cache.
106
-
107
- This uses a global variable which is safe because:
108
- 1. @spaces.GPU runs in isolated subprocess
109
- 2. Each subprocess has its own global namespace
110
- 3. No state leaks to main process
111
-
112
- Args:
113
- device: Device to load the model on
114
-
115
- Returns:
116
- Model instance ready for inference
117
- """
118
- global _MODEL_CACHE
119
-
120
- if _MODEL_CACHE is None:
121
- # First time loading in this subprocess
122
- model_dir = os.environ.get(
123
- "DA3_MODEL_DIR", "depth-anything/DA3NESTED-GIANT-LARGE"
124
- )
125
- print(f"๐Ÿ”„ Loading model from {model_dir}...")
126
- _MODEL_CACHE = DepthAnything3.from_pretrained(model_dir)
127
- _MODEL_CACHE = _MODEL_CACHE.to(device)
128
- _MODEL_CACHE.eval()
129
- print("โœ… Model loaded and ready on GPU")
130
- else:
131
- # Model already cached in this subprocess
132
- print("โœ… Using cached model")
133
- # Ensure it's on the correct device (defensive programming)
134
- _MODEL_CACHE = _MODEL_CACHE.to(device)
135
-
136
- return _MODEL_CACHE
137
-
138
- def run_inference(
139
- self,
140
- target_dir: str,
141
- filter_black_bg: bool = False,
142
- filter_white_bg: bool = False,
143
- process_res_method: str = "upper_bound_resize",
144
- show_camera: bool = True,
145
- selected_first_frame: Optional[str] = None,
146
- save_percentage: float = 30.0,
147
- num_max_points: int = 1_000_000,
148
- infer_gs: bool = False,
149
- gs_trj_mode: str = "extend",
150
- gs_video_quality: str = "high",
151
- ) -> Tuple[Any, Dict[int, Dict[str, Any]]]:
152
- """
153
- Run DepthAnything3 model inference on images.
154
-
155
- This method is wrapped with @spaces.GPU in app.py.
156
-
157
- Args:
158
- target_dir: Directory containing images
159
- filter_black_bg: Whether to filter black background
160
- filter_white_bg: Whether to filter white background
161
- process_res_method: Method for resizing input images
162
- show_camera: Whether to show camera in 3D view
163
- selected_first_frame: Selected first frame filename
164
- save_percentage: Percentage of points to save (0-100)
165
- num_max_points: Maximum number of points
166
- infer_gs: Whether to infer 3D Gaussian Splatting
167
- gs_trj_mode: Trajectory mode for GS
168
- gs_video_quality: Video quality for GS
169
-
170
- Returns:
171
- Tuple of (prediction, processed_data)
172
- """
173
- print(f"Processing images from {target_dir}")
174
-
175
- # Device check
176
- device = "cuda" if torch.cuda.is_available() else "cpu"
177
- device = torch.device(device)
178
- print(f"Using device: {device}")
179
-
180
- # ๐Ÿ”‘ ไฝฟ็”จ่ฟ”ๅ›žๅ€ผ๏ผŒ่€Œไธๆ˜ฏ self.model
181
- model = self.initialize_model(device)
182
-
183
- # Get image paths
184
- print("Loading images...")
185
- image_folder_path = os.path.join(target_dir, "images")
186
- all_image_paths = sorted(glob.glob(os.path.join(image_folder_path, "*")))
187
-
188
- # Filter for image files
189
- image_extensions = [".jpg", ".jpeg", ".png", ".bmp", ".tiff", ".tif"]
190
- all_image_paths = [
191
- path
192
- for path in all_image_paths
193
- if any(path.lower().endswith(ext) for ext in image_extensions)
194
- ]
195
-
196
- print(f"Found {len(all_image_paths)} images")
197
-
198
- # Apply first frame selection logic
199
- if selected_first_frame:
200
- selected_path = None
201
- for path in all_image_paths:
202
- if os.path.basename(path) == selected_first_frame:
203
- selected_path = path
204
- break
205
-
206
- if selected_path:
207
- image_paths = [selected_path] + [
208
- path for path in all_image_paths if path != selected_path
209
- ]
210
- print(f"User selected first frame: {selected_first_frame}")
211
- else:
212
- image_paths = all_image_paths
213
- print(f"Selected frame not found, using default order")
214
- else:
215
- image_paths = all_image_paths
216
-
217
- if len(image_paths) == 0:
218
- raise ValueError("No images found. Check your upload.")
219
-
220
- # Map UI options to actual method names
221
- method_mapping = {"high_res": "lower_bound_resize", "low_res": "upper_bound_resize"}
222
- actual_method = method_mapping.get(process_res_method, "upper_bound_crop")
223
-
224
- # Run model inference
225
- print(f"Running inference with method: {actual_method}")
226
- with torch.no_grad():
227
- # ๐Ÿ”‘ ไฝฟ็”จๅฑ€้ƒจๅ˜้‡ model๏ผŒไธๆ˜ฏ self.model
228
- prediction = model.inference(
229
- image_paths, export_dir=None, process_res_method=actual_method, infer_gs=infer_gs
230
- )
231
-
232
- # Export to GLB
233
- export_to_glb(
234
- prediction,
235
- filter_black_bg=filter_black_bg,
236
- filter_white_bg=filter_white_bg,
237
- export_dir=target_dir,
238
- show_cameras=show_camera,
239
- conf_thresh_percentile=save_percentage,
240
- num_max_points=int(num_max_points),
241
- )
242
-
243
- # Export to GS video if needed
244
- if infer_gs:
245
- mode_mapping = {"extend": "extend", "smooth": "interpolate_smooth"}
246
- print(f"GS mode: {gs_trj_mode}; Backend mode: {mode_mapping[gs_trj_mode]}")
247
- export_to_gs_video(
248
- prediction,
249
- export_dir=target_dir,
250
- chunk_size=4,
251
- trj_mode=mode_mapping.get(gs_trj_mode, "extend"),
252
- enable_tqdm=True,
253
- vis_depth="hcat",
254
- video_quality=gs_video_quality,
255
- )
256
-
257
- # Save predictions cache
258
- self._save_predictions_cache(target_dir, prediction)
259
-
260
- # Process results
261
- processed_data = self._process_results(target_dir, prediction, image_paths)
262
-
263
- # ========================================
264
- # ๐Ÿ”‘ ๅ…ณ้”ฎไฟฎๆ”น 2๏ผš่ฟ”ๅ›žๅ‰็งปๅŠจๆ‰€ๆœ‰ CUDA ๅผ ้‡ๅˆฐ CPU
265
- # ========================================
266
- print("Moving all tensors to CPU for safe return...")
267
- prediction = self._move_prediction_to_cpu(prediction)
268
-
269
- # Clean up GPU memory
270
- torch.cuda.empty_cache()
271
-
272
- return prediction, processed_data
273
-
274
- def _move_prediction_to_cpu(self, prediction: Any) -> Any:
275
- """
276
- Move all CUDA tensors in prediction to CPU for safe pickling.
277
-
278
- This is CRITICAL for HF Spaces with @spaces.GPU decorator.
279
- Without this, pickle will try to reconstruct CUDA tensors in
280
- the main process, causing CUDA initialization error.
281
-
282
- Args:
283
- prediction: Prediction object that may contain CUDA tensors
284
-
285
- Returns:
286
- Prediction object with all tensors moved to CPU
287
- """
288
- # Move gaussians tensors to CPU
289
- if hasattr(prediction, 'gaussians') and prediction.gaussians is not None:
290
- gaussians = prediction.gaussians
291
-
292
- # Move each tensor attribute to CPU
293
- tensor_attrs = ['means', 'scales', 'rotations', 'harmonics', 'opacities']
294
- for attr in tensor_attrs:
295
- if hasattr(gaussians, attr):
296
- tensor = getattr(gaussians, attr)
297
- if isinstance(tensor, torch.Tensor) and tensor.is_cuda:
298
- setattr(gaussians, attr, tensor.cpu())
299
- print(f" โœ“ Moved gaussians.{attr} to CPU")
300
-
301
- # Move any tensors in aux dict to CPU
302
- if hasattr(prediction, 'aux') and prediction.aux is not None:
303
- for key, value in list(prediction.aux.items()):
304
- if isinstance(value, torch.Tensor) and value.is_cuda:
305
- prediction.aux[key] = value.cpu()
306
- print(f" โœ“ Moved aux['{key}'] to CPU")
307
- elif isinstance(value, dict):
308
- # Recursively handle nested dicts
309
- for k, v in list(value.items()):
310
- if isinstance(v, torch.Tensor) and v.is_cuda:
311
- value[k] = v.cpu()
312
- print(f" โœ“ Moved aux['{key}']['{k}'] to CPU")
313
-
314
- print("โœ… All tensors moved to CPU")
315
- return prediction
316
-
317
- def _save_predictions_cache(self, target_dir: str, prediction: Any) -> None:
318
- """Save predictions data to predictions.npz for caching."""
319
- try:
320
- output_file = os.path.join(target_dir, "predictions.npz")
321
- save_dict = {}
322
-
323
- if prediction.processed_images is not None:
324
- save_dict["images"] = prediction.processed_images
325
-
326
- if prediction.depth is not None:
327
- save_dict["depths"] = np.round(prediction.depth, 6)
328
-
329
- if prediction.conf is not None:
330
- save_dict["conf"] = np.round(prediction.conf, 2)
331
-
332
- if prediction.extrinsics is not None:
333
- save_dict["extrinsics"] = prediction.extrinsics
334
- if prediction.intrinsics is not None:
335
- save_dict["intrinsics"] = prediction.intrinsics
336
-
337
- np.savez_compressed(output_file, **save_dict)
338
- print(f"Saved predictions cache to: {output_file}")
339
-
340
- except Exception as e:
341
- print(f"Warning: Failed to save predictions cache: {e}")
342
-
343
- def _process_results(
344
- self, target_dir: str, prediction: Any, image_paths: list
345
- ) -> Dict[int, Dict[str, Any]]:
346
- """Process model results into structured data."""
347
- processed_data = {}
348
-
349
- depth_vis_dir = os.path.join(target_dir, "depth_vis")
350
-
351
- if os.path.exists(depth_vis_dir):
352
- depth_files = sorted(glob.glob(os.path.join(depth_vis_dir, "*.jpg")))
353
- for i, depth_file in enumerate(depth_files):
354
- processed_image = None
355
- if prediction.processed_images is not None and i < len(
356
- prediction.processed_images
357
- ):
358
- processed_image = prediction.processed_images[i]
359
-
360
- processed_data[i] = {
361
- "depth_image": depth_file,
362
- "image": processed_image,
363
- "original_image_path": image_paths[i] if i < len(image_paths) else None,
364
- "depth": prediction.depth[i] if i < len(prediction.depth) else None,
365
- "intrinsics": (
366
- prediction.intrinsics[i]
367
- if prediction.intrinsics is not None and i < len(prediction.intrinsics)
368
- else None
369
- ),
370
- "mask": None,
371
- }
372
-
373
- return processed_data
374
-
375
- def cleanup(self) -> None:
376
- """Clean up GPU memory."""
377
- if torch.cuda.is_available():
378
- torch.cuda.empty_cache()
379
- gc.collect()
380
- ```
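The `predictions.npz` cache written by `_save_predictions_cache` can be read back with `np.load`; a minimal sketch (the helper name `load_predictions_cache` is ours, not part of the project API):

```python
import numpy as np

def load_predictions_cache(path: str) -> dict:
    """Load a predictions.npz cache into a plain dict.

    Returns whichever of the keys ("images", "depths", "conf",
    "extrinsics", "intrinsics") were present at save time.
    """
    with np.load(path) as data:
        return {key: data[key] for key in data.files}
```

This is useful for re-rendering a previous result without re-running inference.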

## ๐Ÿ” Summary of Key Changes

### Before (problematic):
```python
class ModelInference:
    def __init__(self):
        self.model = None  # โŒ instance variable

    def initialize_model(self, device):
        if self.model is None:
            self.model = load_model()  # โŒ stored on the instance
        else:
            self.model = self.model.to(device)  # โŒ cross-process operation

    def run_inference(self):
        self.initialize_model(device)  # โŒ relies on instance state
        prediction = self.model.inference(...)  # โŒ uses the instance variable
        return prediction  # โŒ contains CUDA tensors
```

### After (correct):
```python
_MODEL_CACHE = None  # โœ… global variable (safe within the subprocess)

class ModelInference:
    def __init__(self):
        pass  # โœ… no instance state

    def initialize_model(self, device):
        global _MODEL_CACHE
        if _MODEL_CACHE is None:
            _MODEL_CACHE = load_model()  # โœ… cached in the global
        return _MODEL_CACHE  # โœ… returned instead of stored

    def run_inference(self):
        model = self.initialize_model(device)  # โœ… local variable
        prediction = model.inference(...)  # โœ… uses the local variable
        prediction = self._move_prediction_to_cpu(prediction)  # โœ… move to CPU
        return prediction  # โœ… safe to return
```

## ๐ŸŽฏ Why These Changes?

### 1. Global variable vs. instance variable

| Approach | Problem | Reason |
|------|------|------|
| `self.model` | โŒ state gets confused across processes | the instance is created in the main process |
| `_MODEL_CACHE` | โœ… safe within each subprocess | each subprocess has its own copy |

### 2. Return CPU tensors

```python
# โŒ returning directly raises an error
return prediction  # prediction.gaussians.means is on CUDA

# โœ… move to CPU before returning
prediction = move_to_cpu(prediction)
return prediction  # All tensors are on CPU, pickle safe
```
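`_move_prediction_to_cpu` walks a fixed set of known attributes; the same idea can be written as a generic recursive helper. This is a sketch, not the project's API, and it only handles tensors, dicts, lists and tuples:

```python
import torch

def move_to_cpu(obj):
    """Recursively move every CUDA tensor in a nested structure to CPU.

    Tensors are moved with .cpu(); dicts, lists and tuples are rebuilt
    with their contents converted; anything else is returned unchanged.
    """
    if isinstance(obj, torch.Tensor):
        return obj.cpu() if obj.is_cuda else obj
    if isinstance(obj, dict):
        return {k: move_to_cpu(v) for k, v in obj.items()}
    if isinstance(obj, (list, tuple)):
        return type(obj)(move_to_cpu(v) for v in obj)
    return obj
```

A recursive helper trades the explicit logging of the attribute-by-attribute version for coverage of arbitrarily nested containers.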

## ๐Ÿงช Testing the Fix

```bash
# 1. Apply the changes
# Copy the full code above into model_inference.py

# 2. Push to Spaces
git add depth_anything_3/app/modules/model_inference.py
git commit -m "Fix: Spaces GPU CUDA initialization error"
git push

# 3. Run inference several times
# Run 2-3 consecutive inferences in the Space;
# the CUDA error should no longer appear
```

## ๐Ÿ“Š Effect of the Fix

| Issue | Before | After |
|------|--------|-------|
| First inference | โŒ CUDA error | โœ… works |
| Second inference | โŒ CUDA error | โœ… works |
| Consecutive inferences | โŒ fail | โœ… stable |
| Model loading | reloaded every time | cached and reused |

## ๐Ÿ’ก Best Practices

For functions decorated with `@spaces.GPU`:

1. โœ… Cache the model in a **global variable** (subprocess-safe)
2. โœ… Do **not** store the model in an instance variable
3. โœ… **Move all tensors to CPU** before returning
4. โœ… Clean up GPU memory (`torch.cuda.empty_cache()`)
5. โŒ Do **not** initialize CUDA in the main process
6. โŒ Do **not** return CUDA tensors

## ๐Ÿ”— Related Resources

- [HF Spaces Zero GPU docs](https://huggingface.co/docs/hub/spaces-gpus#zero-gpu)
- [PyTorch Multiprocessing](https://pytorch.org/docs/stable/notes/multiprocessing.html)
- [Pickle protocol](https://docs.python.org/3/library/pickle.html)

SPACES_SETUP.md DELETED
@@ -1,190 +0,0 @@
# Hugging Face Spaces Deployment Guide

## ๐Ÿ“‹ Overview

This project is configured for deployment to Hugging Face Spaces, using the `@spaces.GPU` decorator to allocate GPU resources dynamically.

## ๐ŸŽฏ Key Files

### 1. `app.py` - main application file

```python
import spaces
from depth_anything_3.app.gradio_app import DepthAnything3App
from depth_anything_3.app.modules.model_inference import ModelInference

# Apply the GPU decorator to the inference function via monkey-patching
original_run_inference = ModelInference.run_inference

@spaces.GPU(duration=120)  # request a GPU for up to 120 seconds
def gpu_run_inference(self, *args, **kwargs):
    return original_run_inference(self, *args, **kwargs)

ModelInference.run_inference = gpu_run_inference
```

**How it works:**
- The `@spaces.GPU` decorator allocates a GPU dynamically when the function is called
- `duration=120` means a single inference may use the GPU for at most 120 seconds
- Monkey-patching applies the decorator to the existing inference function without modifying the core code

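The same patching pattern can be made robust for local runs where the `spaces` package may not be installed, by falling back to a no-op decorator. A sketch with a hypothetical `Worker.run` method standing in for `ModelInference.run_inference`:

```python
import functools

try:
    import spaces
    gpu = spaces.GPU(duration=120)
except ImportError:
    # Local run without the `spaces` package: use a no-op decorator
    def gpu(fn):
        return fn

class Worker:
    """Stand-in for ModelInference in this sketch."""
    def run(self, x):
        return x * 2

# Patch the method, preserving its metadata with functools.wraps
_original_run = Worker.run

@gpu
@functools.wraps(_original_run)
def _patched_run(self, *args, **kwargs):
    return _original_run(self, *args, **kwargs)

Worker.run = _patched_run
```

`functools.wraps` keeps the patched method's `__name__` and docstring intact, which helps when Gradio or logging introspects the function.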
### 2. `README.md` - Spaces configuration

```yaml
---
title: Depth Anything 3
sdk: gradio
sdk_version: 5.49.1
app_file: app.py
pinned: false
license: cc-by-nc-4.0
---
```

This YAML front matter tells Hugging Face Spaces:
- to use the Gradio SDK
- that the entry point is `app.py`
- which Gradio version to use

### 3. `pyproject.toml` - dependency configuration

Already updated to include the `spaces` dependency:

```toml
[project.optional-dependencies]
app = ["gradio>=5", "pillow>=9.0", "spaces"]
```

## ๐Ÿš€ Deployment Steps

### Option 1: via the Hugging Face web UI

1. Create a new Space on Hugging Face
2. Choose **Gradio** as the SDK
3. Upload your code (including `app.py`, `src/`, `pyproject.toml`, etc.)
4. The Space builds and starts automatically

### Option 2: via Git

```bash
# Clone your Space repository
git clone https://huggingface.co/spaces/YOUR_USERNAME/YOUR_SPACE_NAME
cd YOUR_SPACE_NAME

# Add your code
cp -r /path/to/depth-anything-3/* .

# Commit and push
git add .
git commit -m "Initial commit"
git push
```

## ๐Ÿ”ง Configuration Options

### GPU type

Hugging Face Spaces offers several GPU types:

- **Free (T4)**: free, suitable for small models
- **A10G**: paid, more powerful
- **A100**: paid, most powerful

### GPU duration

Adjustable in `app.py`:

```python
@spaces.GPU(duration=120)  # 120 seconds
```

- Too short: complex inference may time out
- Too long: wastes resources
- Recommendation: set it from the actual inference time (start generous, then tune using the logs)

### Environment variables

The following can be configured in the Space settings:

- `DA3_MODEL_DIR`: model directory path
- `DA3_WORKSPACE_DIR`: workspace directory
- `DA3_GALLERY_DIR`: gallery directory

-
115
- ### ๆŸฅ็œ‹ๆ—ฅๅฟ—
116
-
117
- ๅœจ Spaces ็•Œ้ข็‚นๅ‡ป "Logs" ๆ ‡็ญพๅฏไปฅ็œ‹ๅˆฐ๏ผš
118
-
119
- ```
120
- ๐Ÿš€ Launching Depth Anything 3 on Hugging Face Spaces...
121
- ๐Ÿ“ฆ Model Directory: depth-anything/DA3NESTED-GIANT-LARGE
122
- ๐Ÿ“ Workspace Directory: workspace/gradio
123
- ๐Ÿ–ผ๏ธ Gallery Directory: workspace/gallery
124
- ```
125
-
126
- ### GPU ไฝฟ็”จๆƒ…ๅ†ต
127
-
128
- ๅœจ่ฃ…้ฅฐ็š„ๅ‡ฝๆ•ฐๅ†…้ƒจ๏ผŒๅฏไปฅๆฃ€ๆŸฅ GPU ็Šถๆ€๏ผš
129
-
130
- ```python
131
- print(torch.cuda.is_available()) # True
132
- print(torch.cuda.device_count()) # 1 (้€šๅธธ)
133
- print(torch.cuda.get_device_name(0)) # 'Tesla T4' ๆˆ–ๅ…ถไป–
134
- ```
135
-
136
- ## ๐ŸŽ“ ็คบไพ‹ไปฃ็ 
137
-
138
- ๆŸฅ็œ‹ `example_spaces_gpu.py` ไบ†่งฃ `@spaces.GPU` ่ฃ…้ฅฐๅ™จ็š„ๅŸบๆœฌ็”จๆณ•ใ€‚
139
-
140
- ## โ“ ๅธธ่ง้—ฎ้ข˜
141
-
142
- ### Q: ไธบไป€ไนˆไฝฟ็”จ monkey-patching๏ผŸ
143
-
144
- A: ่ฟ™ๆ ทๅฏไปฅๅœจไธไฟฎๆ”นๆ ธๅฟƒไปฃ็ ็š„ๆƒ…ๅ†ตไธ‹ๆทปๅŠ  Spaces ๆ”ฏๆŒใ€‚ๅฆ‚ๆžœไฝ ๆƒณๆ›ดไผ˜้›…็š„ๆ–นๅผ๏ผŒๅฏไปฅ๏ผš
145
-
146
- 1. ็›ดๆŽฅๅœจ `ModelInference.run_inference` ๆ–นๆณ•ไธŠๆทปๅŠ ่ฃ…้ฅฐๅ™จ
147
- 2. ๅˆ›ๅปบไธ€ไธช็ปงๆ‰ฟ่‡ช `ModelInference` ็š„ๆ–ฐ็ฑป
148
-
149
- ### Q: ๅฆ‚ไฝ•ๆต‹่ฏ•ๆœฌๅœฐๆ˜ฏๅฆ่ƒฝ่ฟ่กŒ๏ผŸ
150
-
151
- A: ๆœฌๅœฐ่ฟ่กŒๆ—ถ๏ผŒ`spaces.GPU` ่ฃ…้ฅฐๅ™จไผš่ขซๅฟฝ็•ฅ๏ผˆๅฆ‚ๆžœๆฒกๆœ‰ๅฎ‰่ฃ… spaces ๅŒ…๏ผ‰๏ผŒๆˆ–่€…ไผš็›ดๆŽฅๆ‰ง่กŒๅ‡ฝๆ•ฐ่€Œไธๅš็‰นๆฎŠๅค„็†ใ€‚
152
-
153
- ```bash
154
- # ๆœฌๅœฐๆต‹่ฏ•
155
- python app.py
156
- ```
157
-
158
- ### Q: ๅฏไปฅ่ฃ…้ฅฐๅคšไธชๅ‡ฝๆ•ฐๅ—๏ผŸ
159
-
160
- A: ๅฏไปฅ๏ผไฝ ๅฏไปฅ็ป™ไปปไฝ•้œ€่ฆ GPU ็š„ๅ‡ฝๆ•ฐๆทปๅŠ  `@spaces.GPU` ่ฃ…้ฅฐๅ™จใ€‚
161
-
162
- ```python
163
- @spaces.GPU(duration=60)
164
- def function1():
165
- pass
166
-
167
- @spaces.GPU(duration=120)
168
- def function2():
169
- pass
170
- ```
171
-
172
- ### Q: ๅฆ‚ไฝ•ไผ˜ๅŒ– GPU ไฝฟ็”จ๏ผŸ
173
-
174
- A: ไธ€ไบ›ๅปบ่ฎฎ๏ผš
175
-
176
- 1. **ๅช่ฃ…้ฅฐๅฟ…่ฆ็š„ๅ‡ฝๆ•ฐ**๏ผšไธ่ฆ่ฃ…้ฅฐๆ•ดไธช app๏ผŒๅช่ฃ…้ฅฐๅฎž้™…ไฝฟ็”จ GPU ็š„ๆŽจ็†ๅ‡ฝๆ•ฐ
177
- 2. **่ฎพ็ฝฎๅˆ้€‚็š„ duration**๏ผšๆ นๆฎๅฎž้™…้œ€ๆฑ‚่ฎพ็ฝฎ
178
- 3. **ๆธ…็† GPU ๅ†…ๅญ˜**๏ผšๅœจๅ‡ฝๆ•ฐ็ป“ๆŸๆ—ถ่ฐƒ็”จ `torch.cuda.empty_cache()`
179
- 4. **ๆ‰นๅค„็†**๏ผšๅฆ‚ๆžœๅฏ่ƒฝ๏ผŒๆ‰น้‡ๅค„็†ๅคšไธช่ฏทๆฑ‚
180
-
181
- ## ๐Ÿ”— ็›ธๅ…ณ่ต„ๆบ
182
-
183
- - [Hugging Face Spaces ๆ–‡ๆกฃ](https://huggingface.co/docs/hub/spaces)
184
- - [Spaces GPU ไฝฟ็”จๆŒ‡ๅ—](https://huggingface.co/docs/hub/spaces-gpus)
185
- - [Gradio ๆ–‡ๆกฃ](https://gradio.app/docs)
186
-
187
- ## ๐Ÿ“ ่ฎธๅฏ่ฏ
188
-
189
- Apache-2.0
190
-
UPLOAD_EXAMPLES.md DELETED
@@ -1,314 +0,0 @@
# ๐Ÿ“ค Uploading Examples to Hugging Face Spaces

## ๐Ÿšจ Problem: binary files rejected

Hugging Face Spaces rejects large files (>100MB) and binary files; they need to be uploaded with **Git LFS**.

## โœ… Solutions

### Plan 1: use Git LFS (recommended) โญ

#### Step 1: configure Git LFS

A `.gitattributes` file has already been created, configuring Git LFS for image files:

```gitattributes
# Images in examples directory
workspace/gradio/examples/**/*.png filter=lfs diff=lfs merge=lfs -text
workspace/gradio/examples/**/*.jpg filter=lfs diff=lfs merge=lfs -text
workspace/gradio/examples/**/*.jpeg filter=lfs diff=lfs merge=lfs -text
workspace/gradio/examples/**/*.bmp filter=lfs diff=lfs merge=lfs -text
workspace/gradio/examples/**/*.tiff filter=lfs diff=lfs merge=lfs -text
workspace/gradio/examples/**/*.tif filter=lfs diff=lfs merge=lfs -text
```

#### Step 2: install Git LFS (if you don't have it yet)

```bash
# macOS
brew install git-lfs

# Linux
sudo apt-get install git-lfs

# Windows
# Download the installer: https://git-lfs.github.com/
```

#### Step 3: initialize Git LFS

```bash
cd /Users/bytedance/depth-anything-3

# Initialize Git LFS
git lfs install

# Verify the configuration
git lfs track
```

#### Step 4: add example scenes

```bash
# Create the examples directory
mkdir -p workspace/gradio/examples/my_scene

# Add image files
cp your_images/* workspace/gradio/examples/my_scene/

# Add the files to Git LFS
git add workspace/gradio/examples/
git add .gitattributes

# Commit
git commit -m "Add example scenes with Git LFS"

# Push to Spaces
git push origin main
```

#### Step 5: verify

```bash
# Check which files use LFS
git lfs ls-files

# You should see your image files listed
```

---

### Plan 2: use persistent storage (recommended for large data) โญ

If the example scenes are large, use the persistent storage feature of Hugging Face Spaces.

#### Step 1: enable persistent storage in the Space settings

1. Open your Space settings
2. Enable "Persistent storage"
3. Choose a storage size (e.g. 50GB)

#### Step 2: download the examples at app startup

Modify `app.py` to download the examples from an external source at startup:

```python
import os
import subprocess

def download_examples():
    """Download examples from an external source if they don't exist."""
    examples_dir = "workspace/gradio/examples"
    if not os.path.exists(examples_dir) or not os.listdir(examples_dir):
        print("Downloading example scenes...")
        # Download from a Hugging Face Dataset,
        # or from another storage service, e.g.:
        # subprocess.run(["huggingface-cli", "download", "dataset/examples", ...])
        pass

if __name__ == "__main__":
    download_examples()
    # ... launch the app
```

#### Step 3: upload to a Hugging Face Dataset

```bash
# Install dependencies
pip install huggingface_hub datasets

# Upload to a Dataset
python -c "
from huggingface_hub import HfApi

api = HfApi()
api.upload_folder(
    folder_path='workspace/gradio/examples',
    repo_id='your-username/your-examples-dataset',
    repo_type='dataset'
)
"
```

---

### Plan 3: compress then upload (small files)

If the image files are small (<100MB), you can compress them before uploading:

```bash
# Compress the examples directory
tar -czf examples.tar.gz workspace/gradio/examples/

# Add to Git (as a regular file)
git add examples.tar.gz
git commit -m "Add compressed examples"
git push
```

Then extract at app startup, by adding this to `app.py`:

```python
import os
import tarfile

if not os.path.exists("workspace/gradio/examples"):
    print("Extracting examples...")
    with tarfile.open("examples.tar.gz") as tar:
        tar.extractall()
```
157
-
158
- ---
159
-
160
- ### ๆ–นๆกˆ 4๏ผš่ฟ่กŒๆ—ถไธ‹่ฝฝ๏ผˆๆŽจ่็”จไบŽ็”Ÿไบง๏ผ‰โญ
161
-
162
- ๅœจๅบ”็”จๅฏๅŠจๆ—ถไปŽๅค–้ƒจๆบไธ‹่ฝฝ็คบไพ‹ๅœบๆ™ฏ๏ผš
163
-
164
- #### ไฟฎๆ”น `app.py`
165
-
166
- ```python
167
- import os
168
- import subprocess
169
- from huggingface_hub import hf_hub_download
170
-
171
- def setup_examples():
172
- """Setup examples directory by downloading if needed"""
173
- examples_dir = "workspace/gradio/examples"
174
- os.makedirs(examples_dir, exist_ok=True)
175
-
176
- # ๅฆ‚ๆžœ examples ็›ฎๅฝ•ไธบ็ฉบ๏ผŒไปŽๅค–้ƒจๆบไธ‹่ฝฝ
177
- if not os.listdir(examples_dir):
178
- print("๐Ÿ“ฅ Downloading example scenes...")
179
-
180
- # ๆ–นๅผ 1: ไปŽ Hugging Face Dataset ไธ‹่ฝฝ
181
- try:
182
- from datasets import load_dataset
183
- dataset = load_dataset("your-username/your-examples-dataset")
184
- # ๅค„็†ๅนถไฟๅญ˜ๅˆฐ examples_dir
185
- except:
186
- pass
187
-
188
- # ๆ–นๅผ 2: ไปŽ URL ไธ‹่ฝฝๅŽ‹็ผฉๅŒ…
189
- # import urllib.request
190
- # urllib.request.urlretrieve("https://...", "examples.zip")
191
- # ่งฃๅŽ‹ๅˆฐ examples_dir
192
-
193
- print("โœ… Examples downloaded")
194
-
195
- if __name__ == "__main__":
196
- setup_examples()
197
- # ... ๅฏๅŠจๅบ”็”จ
198
- ```
199
-
200
- ---
201
-
202
- ## ๐ŸŽฏ ๆŽจ่ๆ–นๆกˆๅฏนๆฏ”
203
-
204
- | ๆ–นๆกˆ | ไผ˜็‚น | ็ผบ็‚น | ้€‚็”จๅœบๆ™ฏ |
205
- |------|------|------|----------|
206
- | **Git LFS** | โœ… ็ฎ€ๅ•็›ดๆŽฅ<br>โœ… ็‰ˆ๏ฟฝ๏ฟฝ๏ฟฝๆŽงๅˆถ | โš ๏ธ ้œ€่ฆ LFS ้…้ข<br>โš ๏ธ ๅคงๆ–‡ไปถๅฏ่ƒฝๆ…ข | ๅฐๅˆฐไธญ็ญ‰็คบไพ‹๏ผˆ<1GB๏ผ‰ |
207
- | **ๆŒไน…ๅญ˜ๅ‚จ** | โœ… ๆ— ๅคงๅฐ้™ๅˆถ<br>โœ… ๅฟซ้€Ÿ่ฎฟ้—ฎ | โš ๏ธ ้œ€่ฆๆ‰‹ๅŠจไธŠไผ <br>โš ๏ธ ้œ€่ฆไป˜่ดน | ๅคง้‡็คบไพ‹๏ผˆ>1GB๏ผ‰ |
208
- | **่ฟ่กŒๆ—ถไธ‹่ฝฝ** | โœ… ไธๅ ็”จไป“ๅบ“็ฉบ้—ด<br>โœ… ็ตๆดป | โš ๏ธ ้ฆ–ๆฌกๅฏๅŠจๆ…ข<br>โš ๏ธ ้œ€่ฆ็ฝ‘็ปœ | ็”Ÿไบง็Žฏๅขƒ |
209
- | **ๅŽ‹็ผฉไธŠไผ ** | โœ… ็ฎ€ๅ• | โš ๏ธ ๅคงๅฐ้™ๅˆถ<br>โš ๏ธ ้œ€่ฆ่งฃๅŽ‹ | ๅฐๆ–‡ไปถ๏ผˆ<100MB๏ผ‰ |
210
-
211
- ---
212
-
213
- ## ๐Ÿ“ ๅฎŒๆ•ด Git LFS ่ฎพ็ฝฎๆญฅ้ชค
214
-
215
- ### 1. ็กฎไฟ Git LFS ๅทฒๅฎ‰่ฃ…
216
-
217
- ```bash
218
- git lfs version
219
- # ๅฆ‚ๆžœๆœชๅฎ‰่ฃ…๏ผŒๆŒ‰็…งไธŠ้ข็š„ๆญฅ้ชคๅฎ‰่ฃ…
220
- ```
221
-
222
- ### 2. ๅˆๅง‹ๅŒ– Git LFS
223
-
224
- ```bash
225
- cd /Users/bytedance/depth-anything-3
226
- git lfs install
227
- ```
228
-
229
- ### 3. ๆฃ€ๆŸฅ .gitattributes
230
-
231
- ็กฎไฟ `.gitattributes` ๅŒ…ๅซๅ›พ็‰‡ๆ–‡ไปถ้…็ฝฎ๏ผˆๆˆ‘ๅทฒ็ปๆทปๅŠ ไบ†๏ผ‰ใ€‚
232
-
233
- ### 4. ๆทปๅŠ ็คบไพ‹ๅœบๆ™ฏ
234
-
235
- ```bash
236
- # ๅˆ›ๅปบๅœบๆ™ฏ
237
- mkdir -p workspace/gradio/examples/scene1
238
- cp your_images/* workspace/gradio/examples/scene1/
239
-
240
- # ๆทปๅŠ ๆ–‡ไปถ
241
- git add workspace/gradio/examples/
242
- git add .gitattributes
243
-
244
- # ๆฃ€ๆŸฅๅ“ชไบ›ๆ–‡ไปถไผšไฝฟ็”จ LFS
245
- git lfs ls-files
246
-
247
- # ๆไบค
248
- git commit -m "Add example scenes with Git LFS"
249
-
250
- # ๆŽจ้€
251
- git push origin main
252
- ```
253
-
254
- ### 5. ้ชŒ่ฏไธŠไผ 
255
-
256
- ๅœจ Spaces ไธญๆฃ€ๆŸฅๆ–‡ไปถๆ˜ฏๅฆๆˆๅŠŸไธŠไผ ๏ผŒๅ›พ็‰‡ๆ–‡ไปถๅบ”่ฏฅๆ˜พ็คบไธบ LFS ๆŒ‡้’ˆใ€‚
257
-
258
- ---
259
-
260
- ## ๐Ÿ”ง ๆ•…้šœๆŽ’้™ค
261
-
262
- ### ้—ฎ้ข˜ 1๏ผšGit LFS ้…้ขไธ่ถณ
263
-
264
- **่งฃๅ†ณๆ–นๆกˆ๏ผš**
265
- - ไฝฟ็”จๆ–นๆกˆ 2๏ผˆๆŒไน…ๅญ˜ๅ‚จ๏ผ‰ๆˆ–ๆ–นๆกˆ 4๏ผˆ่ฟ่กŒๆ—ถไธ‹่ฝฝ๏ผ‰
266
- - ๅŽ‹็ผฉๅ›พ็‰‡ๆ–‡ไปถ
267
- - ๅชไธŠไผ ๅฟ…่ฆ็š„็คบไพ‹
268
-
269
- ### ้—ฎ้ข˜ 2๏ผšๆŽจ้€ๅคฑ่ดฅ
270
-
271
- **ๆฃ€ๆŸฅ๏ผš**
272
- ```bash
273
- # ๆฃ€ๆŸฅ LFS ๆ–‡ไปถ
274
- git lfs ls-files
275
-
276
- # ๆฃ€ๆŸฅ LFS ็Šถๆ€
277
- git lfs status
278
-
279
- # ้‡ๆ–ฐๆŽจ้€
280
- git push origin main --force
281
- ```
282
-
283
- ### ้—ฎ้ข˜ 3๏ผšๆ–‡ไปถไป็„ถ่ขซๆ‹’็ป
284
-
285
- **ๅฏ่ƒฝๅŽŸๅ› ๏ผš**
286
- - `.gitattributes` ้…็ฝฎไธๆญฃ็กฎ
287
- - ๆ–‡ไปถๆฒกๆœ‰้€š่ฟ‡ LFS ๆทปๅŠ 
288
-
289
- **่งฃๅ†ณ๏ผš**
290
- ```bash
291
- # ็งป้™คๅนถ้‡ๆ–ฐๆทปๅŠ 
292
- git rm --cached workspace/gradio/examples/**/*.png
293
- git add workspace/gradio/examples/
294
- git commit -m "Fix: Add images via Git LFS"
295
- git push
296
- ```
297
-
298
- ---
299
-
300
- ## ๐Ÿ’ก ๆœ€ไฝณๅฎž่ทต
301
-
302
- 1. **ๅฐ็คบไพ‹๏ผˆ<100MB๏ผ‰**๏ผšไฝฟ็”จ Git LFS
303
- 2. **ไธญ็ญ‰็คบไพ‹๏ผˆ100MB-1GB๏ผ‰**๏ผšไฝฟ็”จ Git LFS ๆˆ–ๆŒไน…ๅญ˜ๅ‚จ
304
- 3. **ๅคง็คบไพ‹๏ผˆ>1GB๏ผ‰**๏ผšไฝฟ็”จๆŒไน…ๅญ˜ๅ‚จๆˆ–่ฟ่กŒๆ—ถไธ‹่ฝฝ
305
- 4. **็”Ÿไบง็Žฏๅขƒ**๏ผšไฝฟ็”จ่ฟ่กŒๆ—ถไธ‹่ฝฝ๏ผŒไปŽๅค–้ƒจๆบ่Žทๅ–
306
-
307
- ---
308
-
309
- ## ๐Ÿ“š ็›ธๅ…ณ่ต„ๆบ
310
-
311
- - [Git LFS ๆ–‡ๆกฃ](https://git-lfs.github.com/)
312
- - [Hugging Face Spaces ๆ–‡ๆกฃ](https://huggingface.co/docs/hub/spaces)
313
- - [Hugging Face Datasets](https://huggingface.co/docs/datasets)
314
-
XFORMERS_GUIDE.md DELETED
@@ -1,299 +0,0 @@
# xformers Dependency Notes

## ๐Ÿ” Problem

The build fails while installing xformers:

```
RuntimeError: CUTLASS submodule not found. Did you forget to run `git submodule update --init --recursive` ?
```

## โœ… Good news: xformers is not required!

The code already has a **fallback mechanism**: without xformers it automatically uses a pure-PyTorch implementation:

```python
# src/depth_anything_3/model/dinov2/layers/swiglu_ffn.py
try:
    from xformers.ops import SwiGLU
    XFORMERS_AVAILABLE = True
except ImportError:
    SwiGLU = SwiGLUFFN  # use the pure-PyTorch implementation
    XFORMERS_AVAILABLE = False
```

**Performance difference:**
- **With xformers**: slightly faster (~5-10%)
- **Without xformers**: slightly slower, but functionally identical

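This optional-dependency pattern generalizes to any accelerated backend: try to import the fast implementation, and bind the name to a pure-Python fallback if the import fails. A self-contained sketch, where `fast_math_backend` and `fast_add` are made up for illustration:

```python
def slow_add(a, b):
    """Pure-Python fallback implementation."""
    return a + b

try:
    # Optional accelerated backend; the module name is hypothetical
    from fast_math_backend import fast_add as add  # type: ignore
    BACKEND_AVAILABLE = True
except ImportError:
    add = slow_add  # fall back to the pure-Python implementation
    BACKEND_AVAILABLE = False
```

Callers only ever use `add`, so the rest of the code is identical whether or not the backend is installed, exactly as in `swiglu_ffn.py`.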
- ## ๐ŸŽฏ ๆŽจ่้…็ฝฎ
30
-
31
- ### ๅฝ“ๅ‰้…็ฝฎ๏ผˆๅทฒ่ฎพ็ฝฎ๏ผ‰โœ…
32
-
33
- **requirements.txt** - xformers ๅทฒๆณจ้‡Šๆމ๏ผš
34
- ```txt
35
- # xformers - install separately if needed
36
- ```
37
-
38
- ่ฟ™ๆ ทๅฏไปฅ็กฎไฟๆž„ๅปบๆˆๅŠŸ๏ผŒๅบ”็”จๆญฃๅธธ่ฟ่กŒใ€‚
39
-
40
- ## ๐Ÿ“ ไธ‰็งไฝฟ็”จๆ–นๅผ
41
-
42
- ---
43
-
44
- ### ๆ–นๅผ 1๏ผšไธไฝฟ็”จ xformers๏ผˆๅฝ“ๅ‰้…็ฝฎ๏ผ‰โญ ๆŽจ่
45
-
46
- **ไผ˜็‚น๏ผš**
47
- - โœ… ๆž„ๅปบๅฟซ้€Ÿ๏ผˆ5-10 ๅˆ†้’Ÿ๏ผ‰
48
- - โœ… 100% ๆˆๅŠŸ็އ
49
- - โœ… ๅŠŸ่ƒฝๅฎŒๆ•ด
50
- - โœ… ๆ— ้œ€ๅค„็†ๅ…ผๅฎนๆ€ง้—ฎ้ข˜
51
-
52
- **็ผบ็‚น๏ผš**
53
- - โš ๏ธ ๆ€ง่ƒฝ็•ฅไฝŽ๏ผˆ5-10%๏ผ‰
54
-
55
- **้€‚็”จๅœบๆ™ฏ๏ผš**
56
- - HF Spaces ้ƒจ็ฝฒ
57
- - ๅฟซ้€Ÿๆต‹่ฏ•
58
- - ไธๆƒณๅค„็†็ผ–่ฏ‘้—ฎ้ข˜
59
-
60
- ---
61
-
62
- ### ๆ–นๅผ 2๏ผšไฝฟ็”จ้ข„็ผ–่ฏ‘ xformers
63
-
64
- ๅฆ‚ๆžœไฝ ๆƒณ่ฆๆ›ดๅฅฝ็š„ๆ€ง่ƒฝ๏ผŒๅฏไปฅไฝฟ็”จ้ข„็ผ–่ฏ‘็‰ˆๆœฌ๏ผš
65
-
66
- **ๆญฅ้ชค 1๏ผš็กฎๅฎš PyTorch ๅ’Œ CUDA ็‰ˆๆœฌ**
67
-
68
- ```python
69
- import torch
70
- print(f"PyTorch: {torch.__version__}")
71
- print(f"CUDA: {torch.version.cuda}")
72
- ```
73
-
74
- **ๆญฅ้ชค 2๏ผš้€‰ๆ‹ฉๅฏนๅบ”็š„ xformers ็‰ˆๆœฌ**
75
-
76
- ่ฎฟ้—ฎ๏ผšhttps://github.com/facebookresearch/xformers#installing-xformers
77
-
78
- | PyTorch | CUDA | xformers |
79
- |---------|------|----------|
80
- | 2.1.x | 11.8 | 0.0.23 |
81
- | 2.0.x | 11.8 | 0.0.22 |
82
- | 2.0.x | 11.7 | 0.0.20 |
83
-
84
- **ๆญฅ้ชค 3๏ผšไฟฎๆ”น requirements.txt**
85
-
86
- ```txt
87
- # ๅœจ torch ๅ’Œ torchvision ไน‹ๅŽๆทปๅŠ 
88
- torch==2.1.0
89
- torchvision==0.16.0
90
- xformers==0.0.23 # ๅŒน้… PyTorch 2.1 + CUDA 11.8
91
- ```
92
-
93
- **ๆˆ–่€…ไฝฟ็”จๅฎ˜ๆ–น็ดขๅผ•๏ผš**
94
-
95
- ```txt
96
- torch==2.1.0
97
- torchvision==0.16.0
98
- --extra-index-url https://download.pytorch.org/whl/cu118
99
- xformers==0.0.23
100
- ```
101
-
102
- ---
103
-
104
- ### ๆ–นๅผ 3๏ผšไปŽๆบ็ ็ผ–่ฏ‘๏ผˆไธๆŽจ่๏ผ‰
105
-
106
- **ไป…ๅœจไปฅไธ‹ๆƒ…ๅ†ต่€ƒ่™‘๏ผš**
107
- - ้œ€่ฆๆœ€ๆ–ฐ็š„ xformers ๅŠŸ่ƒฝ
108
- - ๆœ‰็‰นๆฎŠ็š„ CUDA ็‰ˆๆœฌ้œ€ๆฑ‚
109
- - ๆ„ฟๆ„่Šฑ่ดน 15-30 ๅˆ†้’Ÿๆž„ๅปบๆ—ถ้—ด
110
-
111
- **requirements.txt:**
112
- ```txt
113
- # ้œ€่ฆ CUDA ็Žฏๅขƒๅ’Œ git submodules
114
- xformers @ git+https://github.com/facebookresearch/xformers.git
115
- ```
116
-
117
- **้ขๅค–่ฆๆฑ‚๏ผš**
118
-
119
- **packages.txt:**
120
- ```txt
121
- build-essential
122
- git
123
- ninja-build
124
- ```
125
-
126
- **ๆณจๆ„๏ผš**
127
- - โš ๏ธ ๆž„ๅปบๅฏ่ƒฝๅคฑ่ดฅ
128
- - โš ๏ธ ๆž„ๅปบๆ—ถ้—ด้•ฟ
129
- - โš ๏ธ ้œ€่ฆ GPU ็Žฏๅขƒ
130
-
131
- ---
132
-
133
- ## ๐Ÿ”ง ๅฎž้™…้…็ฝฎ็คบไพ‹
134
-
135
- ### ็คบไพ‹ 1๏ผšHF Spaces๏ผˆๆŽจ่๏ผ‰โœ…
136
-
137
- **requirements.txt:**
138
- ```txt
139
- torch>=2.0.0
140
- torchvision
141
- gradio>=5.0.0
142
- spaces
143
- # xformers ไธๅŒ…ๅซ - ไฝฟ็”จ PyTorch fallback
144
- ```
145
-
146
- **ๆ•ˆๆžœ๏ผš**
147
- - ๆž„ๅปบๆ—ถ้—ด๏ผš5-10 ๅˆ†้’Ÿ
148
- - ๆˆๅŠŸ็އ๏ผš100%
149
- - ๆ€ง่ƒฝ๏ผš่‰ฏๅฅฝ
150
-
151
- ### ็คบไพ‹ 2๏ผšๅธฆ้ข„็ผ–่ฏ‘ xformers
152
-
153
- **requirements.txt:**
154
- ```txt
155
- torch==2.1.0
156
- torchvision==0.16.0
157
- xformers==0.0.23
158
- gradio>=5.0.0
159
- spaces
160
- ```
161
-
162
- **ๆ•ˆๆžœ๏ผš**
163
- - ๆž„ๅปบๆ—ถ้—ด๏ผš8-12 ๅˆ†้’Ÿ
164
- - ๆˆๅŠŸ็އ๏ผš95%๏ผˆๅ–ๅ†ณไบŽ็‰ˆๆœฌๅŒน้…๏ผ‰
165
- - ๆ€ง่ƒฝ๏ผšๆœ€ไฝณ
166
-
167
- ### ็คบไพ‹ 3๏ผšๆœฌๅœฐๅผ€ๅ‘๏ผˆๆœ€็ตๆดป๏ผ‰
168
-
169
- ```bash
170
- # ๅ…ˆๅฎ‰่ฃ…ๅŸบ็ก€ไพ่ต–
171
- pip install -r requirements.txt
172
-
173
- # ๅฏ้€‰๏ผšๅฎ‰่ฃ… xformers๏ผˆๅฆ‚ๆžœ้œ€่ฆ๏ผ‰
174
- pip install xformers==0.0.23
175
-
176
- # ๆˆ–่€…่ฎฉ PyTorch ่‡ชๅŠจ้€‰ๆ‹ฉ็‰ˆๆœฌ
177
- pip install xformers
178
- ```
179
-
180
- ---
181
-
182
- ## ๐Ÿ› ๅธธ่ง้—ฎ้ข˜
183
-
184
- ### Q1: ๅฆ‚ไฝ•็Ÿฅ้“ๆ˜ฏๅฆไฝฟ็”จไบ† xformers๏ผŸ
185
-
186
- **ๆฃ€ๆŸฅไปฃ็ ๏ผš**
187
- ```python
188
- from depth_anything_3.model.dinov2.layers.swiglu_ffn import XFORMERS_AVAILABLE
189
-
190
- print(f"xformers available: {XFORMERS_AVAILABLE}")
191
- ```
192
-
193
- **ๆˆ–่€…ๅœจๆ—ฅๅฟ—ไธญๆŸฅ็œ‹๏ผš**
194
- ```python
195
- import logging
196
- logging.basicConfig(level=logging.INFO)
197
- # ๅฆ‚ๆžœ xformers ไธๅฏ็”จ๏ผŒไธไผšๆœ‰้”™่ฏฏ๏ผŒๅชๆ˜ฏไฝฟ็”จ fallback
198
- ```
199
-
200
- ### Q2: xformers ็‰ˆๆœฌไธๅŒน้…ๆ€ŽไนˆๅŠž๏ผŸ
201
-
202
- **้”™่ฏฏไฟกๆฏ๏ผš**
203
- ```
204
- RuntimeError: xformers is not compatible with this PyTorch version
205
- ```
206
-
207
- **่งฃๅ†ณๆ–นๆณ•๏ผš**
208
- 1. ็งป้™ค xformers๏ผˆไฝฟ็”จ fallback๏ผ‰
209
- 2. ๆˆ–่€…ๅŒน้… PyTorch ๅ’Œ xformers ็‰ˆๆœฌ๏ผˆๅ‚่€ƒไธŠ้ข็š„่กจๆ ผ๏ผ‰
210
-
211
- ### Q3: ๆ€ง่ƒฝๅทฎๅผ‚ๅคงๅ—๏ผŸ
212
-
213
- **ๅŸบๅ‡†ๆต‹่ฏ•๏ผˆๅ‚่€ƒ๏ผ‰๏ผš**
214
- - ๅ•ๅ›พๆŽจ็†๏ผšๅ‡ ไนŽๆ— ๅทฎๅผ‚๏ผˆ< 5%๏ผ‰
215
- - ๆ‰น้‡ๆŽจ็†๏ผš5-10% ๅทฎๅผ‚
216
- - ๅ†…ๅญ˜ไฝฟ็”จ๏ผš็›ธ่ฟ‘
217
-
218
- **็ป“่ฎบ๏ผš** ๅฏนๅคงๅคšๆ•ฐ็”จๆˆทๆฅ่ฏด๏ผŒๅทฎๅผ‚ๅฏไปฅๅฟฝ็•ฅใ€‚
219
-
220
- ### Q4: ไธบไป€ไนˆไธ็›ดๆŽฅๅŒ…ๅซ xformers๏ผŸ
221
-
222
- **ๅŽŸๅ› ๏ผš**
223
- 1. **ๅ…ผๅฎนๆ€งๅคๆ‚** - ้œ€่ฆ็ฒพ็กฎๅŒน้… PyTorchใ€CUDAใ€Python ็‰ˆๆœฌ
224
- 2. **ๆž„ๅปบไธ็จณๅฎš** - ไปŽๆบ็ ็ผ–่ฏ‘็ปๅธธๅคฑ่ดฅ
225
- 3. **ไธๆ˜ฏๅฟ…้œ€็š„** - ไปฃ็ ๆœ‰ fallback
226
- 4. **ๅขžๅŠ ๆž„ๅปบๆ—ถ้—ด** - ๅฏ่ƒฝๅขžๅŠ  5-15 ๅˆ†้’Ÿ
227
-
228
- ---
229
-
230
- ## ๐Ÿ“Š ๆ€ง่ƒฝๅฏนๆฏ”
231
-
232
- ### ๆŽจ็†้€Ÿๅบฆ๏ผˆๅ•ๅ›พ๏ผŒGPU T4๏ผ‰
233
-
234
- | ้…็ฝฎ | ๆ—ถ้—ด | ็›ธๅฏน้€Ÿๅบฆ |
235
- |------|------|---------|
236
- | PyTorch (ๆ—  xformers) | 1.00s | 100% |
237
- | xformers 0.0.23 | 0.95s | 105% โšก |
238
-
239
- **็ป“่ฎบ๏ผš** ๆ€ง่ƒฝๆๅ‡ไธๆ˜Žๆ˜พ๏ผŒไธๅ€ผๅพ—ไธบๆญคๅขžๅŠ ้ƒจ็ฝฒๅคๆ‚ๅบฆใ€‚
240
-
241
- ### ๆž„ๅปบๆ—ถ้—ด
242
-
243
- | ้…็ฝฎ | ้ฆ–ๆฌกๆž„ๅปบ | ๆˆๅŠŸ็އ |
244
- |------|---------|--------|
245
- | ๆ—  xformers | 5-10 ๅˆ†้’Ÿ | โœ… 100% |
246
- | ้ข„็ผ–่ฏ‘ xformers | 8-12 ๅˆ†้’Ÿ | โœ… 95% |
247
- | ๆบ็ ็ผ–่ฏ‘ xformers | 20-40 ๅˆ†้’Ÿ | โš ๏ธ 60% |
248
-
249
- ---
250
-
251
- ## ๐ŸŽฏ ๆœ€็ปˆๅปบ่ฎฎ
252
-
253
- ### ๅฏนไบŽ HF Spaces ้ƒจ็ฝฒ๏ผšโญ
254
-
255
- **ๆŽจ่๏ผšไธไฝฟ็”จ xformers**
256
-
257
- ็†็”ฑ๏ผš
258
- 1. ๆž„ๅปบ็จณๅฎšๅฏ้ 
259
- 2. ๆ€ง่ƒฝๅทฎๅผ‚ๅฏๅฟฝ็•ฅ
260
- 3. ็”จๆˆทไฝ“้ชŒๆ›ดๅฅฝ๏ผˆไธไผšๅ› ๆž„ๅปบๅคฑ่ดฅ่€Œๆ— ๆณ•ไฝฟ็”จ๏ผ‰
261
-
262
- ### ๅฏนไบŽๆœฌๅœฐๅผ€ๅ‘๏ผš
263
-
264
- **ๅฏ้€‰๏ผšๅฎ‰่ฃ…้ข„็ผ–่ฏ‘ xformers**
265
-
266
- ```bash
267
- pip install -r requirements.txt
268
- pip install xformers # ๅฏ้€‰
269
- ```
270
-
271
- ### ๅฏนไบŽ็”Ÿไบง็Žฏๅขƒ๏ผš
272
-
273
- **ๅฆ‚้œ€ๆœ€ไฝณๆ€ง่ƒฝ๏ผŒไฝฟ็”จ้ข„็ผ–่ฏ‘ xformers**
274
-
275
- ```txt
276
- torch==2.1.0
277
- xformers==0.0.23
278
- ```
279
-
280
- ---
281
-
282
- ## ๐Ÿ”— ็›ธๅ…ณ่ต„ๆบ
283
-
284
- - [xformers GitHub](https://github.com/facebookresearch/xformers)
285
- - [xformers ๅฎ‰่ฃ…ๆŒ‡ๅ—](https://github.com/facebookresearch/xformers#installing-xformers)
286
- - [PyTorch ็‰ˆๆœฌๅ…ผๅฎนๆ€ง](https://pytorch.org/get-started/previous-versions/)
287
-
288
- ---
289
-
290
- ## โœ… ๅฝ“ๅ‰็Šถๆ€
291
-
292
- ไฝ ็š„้…็ฝฎ๏ผš
293
- - โœ… **requirements.txt** - xformers ๅทฒๆณจ้‡Š๏ผˆไฝฟ็”จ fallback๏ผ‰
294
- - โœ… **ไปฃ็ ๆ”ฏๆŒ** - ่‡ชๅŠจ fallback ๅˆฐ PyTorch ๅฎž็Žฐ
295
- - โœ… **ๅŠŸ่ƒฝๅฎŒๆ•ด** - ๆ‰€ๆœ‰ๅŠŸ่ƒฝๆญฃๅธธๅทฅไฝœ
296
- - โœ… **ๆž„ๅปบ็จณๅฎš** - 100% ๆˆๅŠŸ็އ
297
-
298
- **ๆ— ้œ€่ฟ›ไธ€ๆญฅๆ“ไฝœ๏ผŒๅฏไปฅ็›ดๆŽฅ้ƒจ็ฝฒ๏ผ** ๐Ÿš€
299
-
depth_anything_3/app/css_and_html.py CHANGED
@@ -390,7 +390,7 @@ def get_header_html(logo_base64=None):
            <a href="https://depth-anything-3.github.io" target="_blank" class="link-btn">
                <i class="fas fa-globe" style="margin-right: 8px;"></i> Project Page
            </a>
-           <a href="https://arxiv.org/abs/2406.09414" target="_blank" class="link-btn">
+           <a href="https://arxiv.org/abs/2511.10647" target="_blank" class="link-btn">
                <i class="fas fa-file-pdf" style="margin-right: 8px;"></i> Paper
            </a>
            <a href="https://github.com/ByteDance-Seed/Depth-Anything-3" target="_blank" class="link-btn">
fix_spaces_gpu.patch DELETED
@@ -1,142 +0,0 @@
1
- --- a/depth_anything_3/app/modules/model_inference.py
2
- +++ b/depth_anything_3/app/modules/model_inference.py
3
- @@ -31,47 +31,67 @@ from depth_anything_3.utils.export.glb import export_to_glb
4
- from depth_anything_3.utils.export.gs import export_to_gs_video
5
-
6
-
7
- +# Global cache for model (used in GPU subprocess)
8
- +# This is safe because @spaces.GPU runs in isolated subprocess
9
- +_MODEL_CACHE = None
10
- +
11
- +
12
- class ModelInference:
13
- """
14
- Handles model inference and data processing for Depth Anything 3.
15
- """
16
-
17
- def __init__(self):
18
- - """Initialize the model inference handler."""
19
- - self.model = None
20
- -
21
- - def initialize_model(self, device: str = "cuda") -> None:
22
- + """Initialize the model inference handler.
23
- +
24
- + Note: Do NOT store model in instance variable to avoid
25
- + state sharing issues with @spaces.GPU decorator.
26
- + """
27
- + pass # No instance variables
28
- +
29
- + def initialize_model(self, device: str = "cuda"):
30
- """
31
- Initialize the DepthAnything3 model.
32
- +
33
- + Uses global cache to store model safely in GPU subprocess.
34
- + This avoids CUDA initialization in main process.
35
-
36
- Args:
37
- device: Device to load the model on
38
- +
39
- + Returns:
40
- + Model instance
41
- """
42
- - if self.model is None:
43
- + global _MODEL_CACHE
44
- +
45
- + if _MODEL_CACHE is None:
46
- # Get model directory from environment variable or use default
47
- model_dir = os.environ.get(
48
- "DA3_MODEL_DIR", "/dev/shm/da3_models/DA3HF-VITG-METRIC_VITL"
49
- )
50
- - self.model = DepthAnything3.from_pretrained(model_dir)
51
- - self.model = self.model.to(device)
52
- + print(f"Loading model from {model_dir}...")
53
- + _MODEL_CACHE = DepthAnything3.from_pretrained(model_dir)
54
- + _MODEL_CACHE = _MODEL_CACHE.to(device)
55
- + _MODEL_CACHE.eval()
56
- + print("Model loaded and moved to GPU")
57
- else:
58
- - self.model = self.model.to(device)
59
- -
60
- - self.model.eval()
61
- + print("Using cached model")
62
- + # Ensure model is on correct device
63
- + _MODEL_CACHE = _MODEL_CACHE.to(device)
64
- +
65
- + return _MODEL_CACHE
66
-
67
- def run_inference(
68
- self,
69
- ...
70
- # Initialize model if needed
71
- - self.initialize_model(device)
72
- + model = self.initialize_model(device)
73
-
74
- ...
75
-
76
- # Run model inference
77
- print(f"Running inference with method: {actual_method}")
78
- with torch.no_grad():
79
- - prediction = self.model.inference(
80
- + prediction = model.inference(
81
- image_paths, export_dir=None, process_res_method=actual_method, infer_gs=infer_gs
82
- )
83
-
84
- @@ -192,6 +212,10 @@ class ModelInference:
85
- # Process results
86
- processed_data = self._process_results(target_dir, prediction, image_paths)
87
-
88
- + # CRITICAL: Move all CUDA tensors to CPU before returning
89
- + # This prevents CUDA initialization in main process during unpickling
90
- + prediction = self._move_prediction_to_cpu(prediction)
91
- +
92
- # Clean up
93
- torch.cuda.empty_cache()
94
-
95
- @@ -282,6 +306,45 @@ class ModelInference:
96
-
97
- return processed_data
98
-
99
- + def _move_prediction_to_cpu(self, prediction: Any) -> Any:
100
- + """
101
- + Move all CUDA tensors in prediction to CPU for safe pickling.
102
- +
103
- + This is REQUIRED for HF Spaces with @spaces.GPU decorator to avoid
104
- + CUDA initialization in the main process during unpickling.
105
- +
106
- + Args:
107
- + prediction: Prediction object that may contain CUDA tensors
108
- +
109
- + Returns:
110
- + Prediction object with all tensors moved to CPU
111
- + """
112
- + # Move gaussians tensors to CPU
113
- + if hasattr(prediction, 'gaussians') and prediction.gaussians is not None:
114
- + gaussians = prediction.gaussians
115
- +
116
- + # Move each tensor attribute to CPU
117
- + tensor_attrs = ['means', 'scales', 'rotations', 'harmonics', 'opacities']
118
- + for attr in tensor_attrs:
119
- + if hasattr(gaussians, attr):
120
- + tensor = getattr(gaussians, attr)
121
- + if isinstance(tensor, torch.Tensor) and tensor.is_cuda:
122
- + setattr(gaussians, attr, tensor.cpu())
123
- + print(f"Moved gaussians.{attr} to CPU")
124
- +
125
- + # Move any tensors in aux dict to CPU
126
- + if hasattr(prediction, 'aux') and prediction.aux is not None:
127
- + for key, value in list(prediction.aux.items()):
128
- + if isinstance(value, torch.Tensor) and value.is_cuda:
129
- + prediction.aux[key] = value.cpu()
130
- + print(f"Moved aux['{key}'] to CPU")
131
- + elif isinstance(value, dict):
132
- + # Recursively handle nested dicts
133
- + for k, v in list(value.items()):
134
- + if isinstance(v, torch.Tensor) and v.is_cuda:
135
- + value[k] = v.cpu()
136
- + print(f"Moved aux['{key}']['{k}'] to CPU")
137
- +
138
- + return prediction
139
- +
140
- def cleanup(self) -> None:
141
- """Clean up GPU memory."""
142
-
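The core idea of the deleted patch, moving every GPU tensor to the CPU before the result crosses the subprocess boundary, can be expressed as one generic recursive helper rather than per-attribute checks. The sketch below is illustrative (its name and structure are not the patch's API): it duck-types on `.is_cuda`/`.cpu()` so it runs without torch installed, and walks dicts, lists, and tuples.

```python
# Hedged sketch (not the patch's exact code): recursively replace anything
# tensor-like (exposes .is_cuda and .cpu()) with its CPU copy, so the
# result can be pickled back to the main process without touching CUDA.
def move_to_cpu(obj):
    if getattr(obj, "is_cuda", False):
        return obj.cpu()                    # GPU tensor -> CPU copy
    if isinstance(obj, dict):
        return {k: move_to_cpu(v) for k, v in obj.items()}
    if isinstance(obj, (list, tuple)):
        return type(obj)(move_to_cpu(v) for v in obj)
    return obj                              # plain data passes through
```

Compared with the patch's attribute-by-attribute version, a recursive walk also covers tensors added to `aux` later, at the cost of not knowing which attributes it touched; the patch's explicit list makes the log messages possible.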