Spaces:
Running
Running
linhaotong
committed on
Commit
·
b396ed8
1
Parent(s):
6a8ae2c
update paper link and clean files
Browse files
- EXAMPLES_DIRECTORY.md +0 -286
- SPACES_GPU_BEST_PRACTICES.md +0 -481
- SPACES_GPU_FIX_GUIDE.md +0 -484
- SPACES_SETUP.md +0 -190
- UPLOAD_EXAMPLES.md +0 -314
- XFORMERS_GUIDE.md +0 -299
- depth_anything_3/app/css_and_html.py +1 -1
- fix_spaces_gpu.patch +0 -142
EXAMPLES_DIRECTORY.md
DELETED

@@ -1,286 +0,0 @@

# Examples Directory Configuration Guide

## Examples Directory Location

### Default Location

The examples directory should live at:

```
workspace/gradio/examples/
```

### Full Path Explanation

Based on the configuration in `app.py`:

```python
workspace_dir = os.environ.get("DA3_WORKSPACE_DIR", "workspace/gradio")
examples_dir = os.path.join(workspace_dir, "examples")
# Result: workspace/gradio/examples/
```

## Directory Structure

The examples directory should be organized as follows:

```
workspace/gradio/examples/
├── scene1/              # Scene 1
│   ├── 000.png          # Image files
│   ├── 010.png
│   ├── 020.png
│   └── ...
├── scene2/              # Scene 2
│   ├── 000.jpg
│   ├── 010.jpg
│   └── ...
└── scene3/              # Scene 3
    ├── image1.png
    ├── image2.png
    └── ...
```

### Requirements

1. **One folder per scene**: each scene should have its own folder.
2. **Folder name**: the folder name is displayed as the scene name.
3. **Image files**: `.jpg`, `.jpeg`, `.png`, `.bmp`, `.tiff`, and `.tif` formats are supported.
4. **First image**: the first image (sorted by filename) is used as the thumbnail.
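The thumbnail rule in point 4 can be sketched in a few lines of Python (a minimal illustration; the helper name `pick_thumbnail` and the exact extension filter are assumptions for this sketch, not the app's actual code):

```python
import os

# Image extensions from the requirements above
IMAGE_EXTS = {".jpg", ".jpeg", ".png", ".bmp", ".tiff", ".tif"}

def pick_thumbnail(scene_dir: str):
    """Return the path of the first image (sorted by filename), or None."""
    images = sorted(
        f for f in os.listdir(scene_dir)
        if os.path.splitext(f)[1].lower() in IMAGE_EXTS
    )
    return os.path.join(scene_dir, images[0]) if images else None
```

Sorting is plain lexicographic filename order, which is why zero-padded names like `000.png`, `010.png` sort predictably.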
## Configuration Options

### Option 1: Use the Default Path (Recommended)

Create the directory directly:

```bash
mkdir -p workspace/gradio/examples
```

Then add a scene:

```bash
# Create a scene folder
mkdir -p workspace/gradio/examples/my_scene

# Copy image files into it
cp your_images/* workspace/gradio/examples/my_scene/
```

### Option 2: Use an Environment Variable

Customize the location via an environment variable:

```bash
# Set the environment variable
export DA3_WORKSPACE_DIR="/path/to/your/workspace"

# Examples will then live at /path/to/your/workspace/examples
```

Or change it in `app.py`:

```python
workspace_dir = os.environ.get("DA3_WORKSPACE_DIR", "/custom/path/workspace")
```

### Option 3: On Hugging Face Spaces

In a Space, you can add examples in the following ways:

1. **Upload via Git**:
   ```bash
   git add workspace/gradio/examples/
   git commit -m "Add example scenes"
   git push
   ```

2. **Upload via the web UI**:
   - Create the `workspace/gradio/examples/` directory in the Space's file browser
   - Upload scene folders and images

3. **Use persistent storage**:
   - With persistent storage enabled, examples are kept in persistent storage
   - The path is still `workspace/gradio/examples/`
## Example Scene Layouts

### Example 1: A Single Scene

```
workspace/gradio/examples/
└── indoor_room/
    ├── 000.png
    ├── 010.png
    ├── 020.png
    └── 030.png
```

### Example 2: Multiple Scenes

```
workspace/gradio/examples/
├── outdoor_garden/
│   ├── frame_001.jpg
│   ├── frame_002.jpg
│   └── frame_003.jpg
├── office_space/
│   ├── img_000.png
│   ├── img_010.png
│   └── img_020.png
└── street_scene/
    ├── 000.png
    ├── 010.png
    └── 020.png
```

## Verifying the Examples Directory

### Check That the Directory Exists

```bash
# Check the default location
ls -la workspace/gradio/examples/

# Or use Python
python -c "
import os
workspace_dir = os.environ.get('DA3_WORKSPACE_DIR', 'workspace/gradio')
examples_dir = os.path.join(workspace_dir, 'examples')
print(f'Examples directory: {examples_dir}')
print(f'Exists: {os.path.exists(examples_dir)}')
if os.path.exists(examples_dir):
    scenes = [d for d in os.listdir(examples_dir) if os.path.isdir(os.path.join(examples_dir, d))]
    print(f'Found {len(scenes)} scenes: {scenes}')
"
```

### Check Scene Information

On startup the app automatically scans the examples directory and logs what it finds:

```
Found 3 example scenes:
  - scene1 (5 images)
  - scene2 (10 images)
  - scene3 (8 images)
```
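A startup scan that produces a log in the format above can be approximated like this (a hedged sketch; `scan_example_scenes` is a name chosen for illustration, not the function the app actually uses):

```python
import os

IMAGE_EXTS = {".jpg", ".jpeg", ".png", ".bmp", ".tiff", ".tif"}

def scan_example_scenes(examples_dir: str) -> dict:
    """Map each scene folder to its image count and print a summary."""
    scenes = {}
    if os.path.isdir(examples_dir):
        for name in sorted(os.listdir(examples_dir)):
            scene_dir = os.path.join(examples_dir, name)
            if os.path.isdir(scene_dir):
                scenes[name] = sum(
                    1 for f in os.listdir(scene_dir)
                    if os.path.splitext(f)[1].lower() in IMAGE_EXTS
                )
    print(f"Found {len(scenes)} example scenes:")
    for name, count in scenes.items():
        print(f"  - {name} ({count} images)")
    return scenes
```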
## Quick Start

### 1. Create the Directory Structure

```bash
# From the project root
mkdir -p workspace/gradio/examples
```

### 2. Add an Example Scene

```bash
# Create a scene folder
mkdir -p workspace/gradio/examples/my_first_scene

# Add image files (copy your own images)
cp /path/to/your/images/* workspace/gradio/examples/my_first_scene/
```

### 3. Verify

After launching the app, you should see the example scene grid in the UI.

## On Hugging Face Spaces

### Upload Methods

1. **Via Git** (recommended):
   ```bash
   # Prepare examples locally
   mkdir -p workspace/gradio/examples
   # ... add scenes ...

   # Commit and push
   git add workspace/gradio/examples/
   git commit -m "Add example scenes"
   git push
   ```

2. **Via the web UI**:
   - In the Space's file browser
   - Create the `workspace/gradio/examples/` directory
   - Upload scene folders

### Notes

- **File size limits**: make sure image files stay under the Spaces file size limit
- **Persistent storage**: with persistent storage enabled, examples persist across restarts
- **Caching**: results for example scenes are cached under `workspace/gradio/input_images/`

## Related Configuration

### Environment Variables

- `DA3_WORKSPACE_DIR`: workspace directory (default: `workspace/gradio`)
- The examples directory is automatically set to `{DA3_WORKSPACE_DIR}/examples`

### Where It Lives in the Code

- `depth_anything_3/app/gradio_app.py`: the `cache_examples()` method
- `depth_anything_3/app/modules/utils.py`: the `get_scene_info()` function
- `depth_anything_3/app/modules/event_handlers.py`: the `load_example_scene()` method
## FAQ

### Q: What if the examples directory does not exist?

A: The app automatically creates `workspace/gradio/`, but not the `examples/` subdirectory. You need to create it yourself:

```bash
mkdir -p workspace/gradio/examples
```

### Q: How do I add a new example scene?

A: Just create a new folder under `workspace/gradio/examples/` and add images:

```bash
mkdir -p workspace/gradio/examples/new_scene
cp images/* workspace/gradio/examples/new_scene/
```

The app will detect the new scene automatically on the next startup.

### Q: How is the scene name displayed?

A: The scene name is simply the folder name. For example:
- Folder: `workspace/gradio/examples/indoor_room/`
- Displayed name: `indoor_room`

### Q: How is the thumbnail chosen?

A: The thumbnail is the first image in the folder after sorting by filename.

## Summary

**Examples directory location:**
- **Default**: `workspace/gradio/examples/`
- **Customizable** via the `DA3_WORKSPACE_DIR` environment variable

**Directory structure:**
```
workspace/gradio/examples/
├── scene1/
│   └── images...
├── scene2/
│   └── images...
└── scene3/
    └── images...
```

**Quick creation:**
```bash
mkdir -p workspace/gradio/examples
# then add scene folders and images
```
SPACES_GPU_BEST_PRACTICES.md
DELETED

@@ -1,481 +0,0 @@

# Spaces GPU Best Practices Guide

## How spaces.GPU Works

### Architecture Overview

```
┌─────────────────────────────────────────────────────────┐
│ Main process                                            │
│ - CPU environment                                       │
│ - ❌ must NOT initialize CUDA                           │
│ - ✅ may build the Gradio UI                            │
│ - ✅ may create a ModelInference instance               │
│      (without loading the model)                        │
└─────────────────────────────────────────────────────────┘
              │
              │ call a function decorated with @spaces.GPU
              ▼
┌─────────────────────────────────────────────────────────┐
│ GPU worker subprocess                                   │
│ - GPU environment                                       │
│ - ✅ may initialize CUDA                                │
│ - ✅ may load the model onto the GPU                    │
│ - ✅ runs inference                                     │
│ - ✅ global-variable cache (independent per subprocess) │
└─────────────────────────────────────────────────────────┘
              │
              │ return value is pickled back
              ▼
┌─────────────────────────────────────────────────────────┐
│ Main process receives the return value                  │
│ - ✅ must be CPU data (numpy, basic types)              │
│ - ❌ must NOT contain CUDA tensors                      │
└─────────────────────────────────────────────────────────┘
```
## Best Practice: Model-Loading Strategy

### ❌ Mistake 1: Loading the Model in the Main Process

```python
# ❌ Wrong: loading the model in the main process
class EventHandlers:
    def __init__(self):
        self.model_inference = ModelInference()
        # ❌ Calling this in the main process triggers a CUDA init error
        self.model_inference.initialize_model("cuda")  # 💥
```

**Why is this wrong?**
- The main process must not initialize CUDA
- It fails immediately with `CUDA must not be initialized in the main process`

### ❌ Mistake 2: Storing the Model in an Instance Variable

```python
# ❌ Wrong: keeping the model in an instance variable
class ModelInference:
    def __init__(self):
        self.model = None  # ❌ instance variable

    def initialize_model(self, device):
        if self.model is None:
            self.model = load_model()  # ❌ stored on the instance
        return self.model
```

**Why is this wrong?**
- The instance is created in the main process
- Model state can get confused across processes
- On the second call the state is indeterminate

### ✅ Correct: A Global-Variable Cache in the Subprocess

```python
# ✅ Correct: cache in a global variable inside the subprocess
_MODEL_CACHE = None  # global; independent per subprocess

class ModelInference:
    def __init__(self):
        # ✅ store no state at all
        pass

    def initialize_model(self, device: str = "cuda"):
        global _MODEL_CACHE

        if _MODEL_CACHE is None:
            # ✅ load in the subprocess (first call)
            print("Loading model in GPU subprocess...")
            model_dir = os.environ.get("DA3_MODEL_DIR", "...")
            _MODEL_CACHE = DepthAnything3.from_pretrained(model_dir)
            _MODEL_CACHE = _MODEL_CACHE.to(device)  # ✅ moved inside the subprocess
            _MODEL_CACHE.eval()
        else:
            # ✅ reuse the cached model
            print("Using cached model")

        return _MODEL_CACHE  # ✅ return the model; do not store it
```

**Why is this correct?**
- ✅ The model is loaded only in the subprocess (GPU environment)
- ✅ Globals are safe inside a subprocess (each subprocess is independent)
- ✅ The main process is never polluted
- ✅ The cache avoids reloading on repeated calls
## Complete Implementation Example

### File Structure

```
app.py                           # main entry point; configures @spaces.GPU
depth_anything_3/app/modules/
├── model_inference.py           # model inference (uses a global variable)
└── event_handlers.py            # event handling (main process; loads no model)
```

### 1. app.py - Decorator Setup

```python
import spaces
from depth_anything_3.app.modules.model_inference import ModelInference

# ✅ decorate the run_inference method
original_run_inference = ModelInference.run_inference

@spaces.GPU(duration=120)
def gpu_run_inference(self, *args, **kwargs):
    """
    Run inference in the GPU subprocess.

    This function executes in a separate GPU subprocess,
    where it is safe to initialize CUDA and load the model.
    """
    return original_run_inference(self, *args, **kwargs)

# Replace the original method
ModelInference.run_inference = gpu_run_inference

# ✅ main process: only create the app; do not load the model
if __name__ == "__main__":
    app = DepthAnything3App(...)
    app.launch(host="0.0.0.0", port=7860)
```

### 2. model_inference.py - Model Management

```python
import torch
from depth_anything_3.api import DepthAnything3

# ========================================
# ✅ global-variable cache (subprocess-safe)
# ========================================
_MODEL_CACHE = None

class ModelInference:
    def __init__(self):
        """
        Initialization - stores no state.

        Note: this instance is created in the main process,
        but the model is loaded in the subprocess.
        """
        pass  # ✅ no instance variables

    def initialize_model(self, device: str = "cuda"):
        """
        Load the model inside the subprocess.

        A global variable is used as the cache because:
        1. @spaces.GPU runs in a subprocess
        2. each subprocess has its own global namespace
        3. caching is safe and avoids reloading
        """
        global _MODEL_CACHE

        if _MODEL_CACHE is None:
            # first call: load the model
            model_dir = os.environ.get("DA3_MODEL_DIR", "...")
            print(f"Loading model in GPU subprocess from {model_dir}")

            _MODEL_CACHE = DepthAnything3.from_pretrained(model_dir)
            _MODEL_CACHE = _MODEL_CACHE.to(device)  # ✅ moved inside the subprocess
            _MODEL_CACHE.eval()

            print(f"Model loaded on {device}")
        else:
            # subsequent calls: reuse the cache
            print("Using cached model")
            # make sure it is on the right device (defensive programming)
            _MODEL_CACHE = _MODEL_CACHE.to(device)

        return _MODEL_CACHE

    def run_inference(self, target_dir, ...):
        """
        Run inference - executed in the GPU subprocess.

        This function is decorated with @spaces.GPU, so it runs in a subprocess.
        """
        # ✅ fetch the model inside the subprocess (local variable)
        device = "cuda" if torch.cuda.is_available() else "cpu"
        model = self.initialize_model(device)  # ✅ returned, not stored

        # ✅ run inference
        with torch.no_grad():
            prediction = model.inference(...)

        # ✅ post-process the results
        # ...

        # ✅ crucial: move every CUDA tensor to CPU before returning
        prediction = self._move_to_cpu(prediction)

        return prediction, processed_data

    def _move_to_cpu(self, prediction):
        """Move all CUDA tensors to CPU so the result pickles safely."""
        # ... implementation omitted here
        return prediction
```

### 3. event_handlers.py - Main-Process Code

```python
class EventHandlers:
    def __init__(self):
        """
        Main-process initialization - loads no model.

        Note: creating a ModelInference instance here is safe,
        because it does not load the model immediately; the model
        is loaded inside the subprocess.
        """
        # ✅ create the instance (without loading the model)
        self.model_inference = ModelInference()

        # ❌ do NOT call initialize_model() here
        # ❌ do NOT load the model here

    def gradio_demo(self, ...):
        """
        Gradio callback - invoked in the main process.

        It calls self.model_inference.run_inference, which is
        decorated with @spaces.GPU and therefore runs in the subprocess.
        """
        # ✅ call the decorated method (automatically runs in the subprocess)
        result = self.model_inference.run_inference(...)
        return result
```
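The `_move_to_cpu` step is only stubbed out in the listing above; a recursive version could look like the following duck-typed sketch (an illustration, not the repo's actual implementation — it treats anything with `.detach()`/`.cpu()` as a tensor, so it also runs without torch installed):

```python
def move_to_cpu(obj):
    """Recursively detach and move tensors in nested containers to CPU."""
    if hasattr(obj, "detach") and hasattr(obj, "cpu"):
        return obj.detach().cpu()  # torch.Tensor (or anything tensor-like)
    if isinstance(obj, dict):
        return {k: move_to_cpu(v) for k, v in obj.items()}
    if isinstance(obj, (list, tuple)):
        return type(obj)(move_to_cpu(v) for v in obj)
    return obj  # plain Python values pass through unchanged
```

Applying this to the prediction before `return` guarantees the pickled result contains no CUDA tensors.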
## Key Principles Summary

### ✅ DO

1. **Main process: only create instances; never load the model**
   ```python
   # ✅ main process
   model_inference = ModelInference()  # safe
   # do not call initialize_model()
   ```

2. **Subprocess: cache the model in a global variable**
   ```python
   # ✅ subprocess (inside a @spaces.GPU-decorated function)
   _MODEL_CACHE = None  # global variable
   model = initialize_model()  # loaded in the subprocess
   ```

3. **Before returning: move every tensor to CPU**
   ```python
   # ✅ before returning
   prediction = move_all_tensors_to_cpu(prediction)
   return prediction
   ```

4. **Free GPU memory**
   ```python
   # ✅ after inference
   torch.cuda.empty_cache()
   ```

### ❌ DON'T

1. **Main process: never initialize CUDA**
   ```python
   # ❌ main process
   model.to("cuda")            # 💥 error
   torch.cuda.is_available()   # 💥 may trigger initialization
   ```

2. **Never store the model in an instance variable**
   ```python
   # ❌
   self.model = load_model()  # state gets confused
   ```

3. **Never return CUDA tensors**
   ```python
   # ❌
   return prediction  # fails if it contains CUDA tensors
   ```

4. **Never load the model in __init__**
   ```python
   # ❌
   def __init__(self):
       self.model = load_model()  # runs in the main process; fails
   ```
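The method-patching trick used in `app.py` is independent of the GPU machinery and can be exercised with a no-op stand-in for `@spaces.GPU` (the `fake_gpu` decorator below is purely illustrative; the real decorator runs the function in a GPU subprocess):

```python
import functools

def fake_gpu(duration=120):
    """No-op stand-in for @spaces.GPU, keeping the same call shape."""
    def wrap(fn):
        @functools.wraps(fn)
        def inner(*args, **kwargs):
            return fn(*args, **kwargs)
        return inner
    return wrap

class ModelInference:
    def run_inference(self, x):
        return x * 2  # dummy "inference"

# Capture the original BEFORE patching, so the wrapper does not recurse.
original_run_inference = ModelInference.run_inference

@fake_gpu(duration=120)
def gpu_run_inference(self, *args, **kwargs):
    return original_run_inference(self, *args, **kwargs)

ModelInference.run_inference = gpu_run_inference
```

Instances created after (or before) the patch all go through the wrapper, which is exactly why decorating at class level is enough.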
## Execution Flow Comparison

### ❌ Wrong Flow

```
main process starts
  │
create ModelInference() instance
  │
__init__ sets self.model = None          # ✅ safe so far
  │
first call to run_inference
  │
@spaces.GPU spawns a subprocess
  │
subprocess: self.model = load_model()    # ✅ in the subprocess
  │
return prediction (contains CUDA tensors)  # ❌ wrong
  │
pickle tries to rebuild CUDA tensors in the main process  # 💥 error
```

### ✅ Correct Flow

```
main process starts
  │
create ModelInference() instance (stateless)  # ✅
  │
first call to run_inference
  │
@spaces.GPU spawns a subprocess
  │
subprocess: _MODEL_CACHE = load_model()   # ✅ global variable
  │
subprocess: model = _MODEL_CACHE          # ✅ local variable
  │
subprocess: prediction = model.inference(...)
  │
subprocess: prediction = move_to_cpu(prediction)  # ✅
  │
return prediction (all tensors on CPU)    # ✅
  │
main process safely receives CPU data     # ✅
```
## Verification Checklist

### Main-Process Check

```python
# ✅ should pass
def test_main_process():
    # creating the instance is fine
    model_inference = ModelInference()

    # it should hold no model
    assert not hasattr(model_inference, 'model') or model_inference.model is None

    # it should not have initialized CUDA
    # (this test must run in the main process)
```

### Subprocess Check

```python
# ✅ should pass
@spaces.GPU
def test_gpu_subprocess():
    model_inference = ModelInference()

    # loading the model works here
    model = model_inference.initialize_model("cuda")
    assert model is not None

    # the model should be on the GPU
    # (check the device of the model parameters)

    # inference works here
    # ...

    # the return value should be moved to CPU first
    # ...
```
## FAQ

### Q1: Why can't I use an instance variable?

**A:** Because the instance is created in the main process; model state stored on it gets confused across processes.

```python
# ❌ problem
self.model = load_model()  # state may get confused

# ✅ fix
_MODEL_CACHE = load_model()  # independent per subprocess
```

### Q2: Are global variables safe here?

**A:** Yes, because:
- each subprocess has its own global namespace
- the main process never touches a subprocess's globals
- there is no cross-process pollution

### Q3: Will the model be loaded repeatedly?

**A:** No, because:
- the global variable caches it within a subprocess
- repeated calls in the same subprocess reuse it
- different subprocesses each keep their own cache (if needed)

### Q4: How do I clean up the model?

**A:** Usually you don't need to, because:
- the subprocess is cleaned up automatically when it exits
- if necessary, you can do the following inside the subprocess:
  ```python
  global _MODEL_CACHE
  _MODEL_CACHE = None
  del model
  torch.cuda.empty_cache()
  ```
## Complete Code Template

```python
# ========================================
# model_inference.py
# ========================================
_MODEL_CACHE = None  # global cache

class ModelInference:
    def __init__(self):
        pass  # stateless

    def initialize_model(self, device="cuda"):
        global _MODEL_CACHE
        if _MODEL_CACHE is None:
            _MODEL_CACHE = load_model().to(device)
        return _MODEL_CACHE

    def run_inference(self, ...):
        model = self.initialize_model("cuda")
        prediction = model.inference(...)
        prediction = self._move_to_cpu(prediction)
        return prediction

# ========================================
# app.py
# ========================================
# Capture the original method first; otherwise the wrapper would call
# itself through the patched attribute and recurse forever.
_original_run_inference = ModelInference.run_inference

@spaces.GPU(duration=120)
def gpu_run_inference(self, *args, **kwargs):
    return _original_run_inference(self, *args, **kwargs)

ModelInference.run_inference = gpu_run_inference
```
## Summary

**Core principles:**

1. ✅ **Main process = CPU environment**: load no model, initialize no CUDA
2. ✅ **Subprocess = GPU environment**: load the model, run inference
3. ✅ **Global-variable cache**: independent per subprocess
4. ✅ **Return CPU data**: keeps pickling safe

Follow these principles and your Spaces GPU app will run reliably!
SPACES_GPU_FIX_GUIDE.md
DELETED

@@ -1,484 +0,0 @@

# Complete Fix Guide for the Spaces GPU Issue

## Problem Diagnosis (your analysis was exactly right!)

### Root Cause

```python
# event_handlers.py - in the main process
class EventHandlers:
    def __init__(self):
        self.model_inference = ModelInference()  # ❌ instance created in the main process

# model_inference.py
class ModelInference:
    def __init__(self):
        self.model = None  # ❌ instance variable; cross-process state is problematic

    def initialize_model(self, device):
        if self.model is None:
            self.model = load_model()           # first call: loaded in the subprocess
        else:
            self.model = self.model.to(device)  # second call: 💥 CUDA op in the main process!
```

### Why Does the Second Call Fail?

1. **First call**:
   - `@spaces.GPU` runs in a subprocess
   - `self.model is None` → the model is loaded
   - `self.model` is stored on the instance
   - the returned `prediction.gaussians` contains CUDA tensors
   - **pickling then tries to rebuild the CUDA tensors in the main process** → 💥

2. **Second call** (even if the first succeeded):
   - the new subprocess has confused state
   - `self.model` is indeterminate
   - the `.to(device)` call fails → 💥
## Solution: Two Key Changes

### Change 1: Cache the Model in a Global Variable (avoid instance state)

**Why a global variable?**
- `@spaces.GPU` runs in a separate subprocess each time
- globals are safe within a subprocess
- the main process is never polluted

### Change 2: Move All CUDA Tensors to CPU Before Returning

**Why is this needed?**
- pickle tries to rebuild CUDA tensors when serializing the return value
- the returned data must therefore live entirely on the CPU
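Change 1 exists because arguments and results cross the process boundary by pickling, so the subprocess only ever sees a copy of the caller's state. That can be illustrated with pickle alone, using a plain dict as a stand-in for the instance's `__dict__` (a simplified analogy for what `@spaces.GPU` does, not its actual mechanism):

```python
import pickle

state = {"model": None}  # stand-in for the instance state in the main process

# What the GPU subprocess receives is a pickled COPY of that state.
child_state = pickle.loads(pickle.dumps(state))
child_state["model"] = "loaded in subprocess"

# The parent's state is untouched: anything written in the child never
# flows back, which is why instance variables are unreliable here.
print(state["model"])        # still None
print(child_state["model"])  # "loaded in subprocess"
```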
## Complete Fix Code

### File: `depth_anything_3/app/modules/model_inference.py`

```python
"""
Model inference module for Depth Anything 3 Gradio app.

Modified for HF Spaces GPU compatibility.
"""

import gc
import glob
import os
from typing import Any, Dict, Optional, Tuple

import numpy as np
import torch

from depth_anything_3.api import DepthAnything3
from depth_anything_3.utils.export.glb import export_to_glb
from depth_anything_3.utils.export.gs import export_to_gs_video


# ========================================
# Key change 1: cache the model in a global variable
# ========================================
# Global cache for model (used in GPU subprocess)
# This is SAFE because @spaces.GPU runs in isolated subprocess
# Each subprocess gets its own copy of this global variable
_MODEL_CACHE = None


class ModelInference:
    """
    Handles model inference and data processing for Depth Anything 3.

    Modified for HF Spaces GPU compatibility - does NOT store state
    in instance variables to avoid cross-process issues.
    """

    def __init__(self):
        """Initialize the model inference handler.

        Note: Do NOT store model in instance variable to avoid
        state sharing issues with @spaces.GPU decorator.
        """
        # No instance variables! All state in global or local variables
        pass

    def initialize_model(self, device: str = "cuda"):
        """
        Initialize the DepthAnything3 model using global cache.

        This uses a global variable which is safe because:
        1. @spaces.GPU runs in isolated subprocess
        2. Each subprocess has its own global namespace
        3. No state leaks to main process

        Args:
            device: Device to load the model on

        Returns:
            Model instance ready for inference
        """
        global _MODEL_CACHE

        if _MODEL_CACHE is None:
            # First time loading in this subprocess
            model_dir = os.environ.get(
                "DA3_MODEL_DIR", "depth-anything/DA3NESTED-GIANT-LARGE"
            )
            print(f"Loading model from {model_dir}...")
            _MODEL_CACHE = DepthAnything3.from_pretrained(model_dir)
            _MODEL_CACHE = _MODEL_CACHE.to(device)
            _MODEL_CACHE.eval()
            print("Model loaded and ready on GPU")
        else:
            # Model already cached in this subprocess
            print("Using cached model")
            # Ensure it's on the correct device (defensive programming)
            _MODEL_CACHE = _MODEL_CACHE.to(device)

        return _MODEL_CACHE
|
| 137 |
-
|
| 138 |
-
def run_inference(
|
| 139 |
-
self,
|
| 140 |
-
target_dir: str,
|
| 141 |
-
filter_black_bg: bool = False,
|
| 142 |
-
filter_white_bg: bool = False,
|
| 143 |
-
process_res_method: str = "upper_bound_resize",
|
| 144 |
-
show_camera: bool = True,
|
| 145 |
-
selected_first_frame: Optional[str] = None,
|
| 146 |
-
save_percentage: float = 30.0,
|
| 147 |
-
num_max_points: int = 1_000_000,
|
| 148 |
-
infer_gs: bool = False,
|
| 149 |
-
gs_trj_mode: str = "extend",
|
| 150 |
-
gs_video_quality: str = "high",
|
| 151 |
-
) -> Tuple[Any, Dict[int, Dict[str, Any]]]:
|
| 152 |
-
"""
|
| 153 |
-
Run DepthAnything3 model inference on images.
|
| 154 |
-
|
| 155 |
-
This method is wrapped with @spaces.GPU in app.py.
|
| 156 |
-
|
| 157 |
-
Args:
|
| 158 |
-
target_dir: Directory containing images
|
| 159 |
-
filter_black_bg: Whether to filter black background
|
| 160 |
-
filter_white_bg: Whether to filter white background
|
| 161 |
-
process_res_method: Method for resizing input images
|
| 162 |
-
show_camera: Whether to show camera in 3D view
|
| 163 |
-
selected_first_frame: Selected first frame filename
|
| 164 |
-
save_percentage: Percentage of points to save (0-100)
|
| 165 |
-
num_max_points: Maximum number of points
|
| 166 |
-
infer_gs: Whether to infer 3D Gaussian Splatting
|
| 167 |
-
gs_trj_mode: Trajectory mode for GS
|
| 168 |
-
gs_video_quality: Video quality for GS
|
| 169 |
-
|
| 170 |
-
Returns:
|
| 171 |
-
Tuple of (prediction, processed_data)
|
| 172 |
-
"""
|
| 173 |
-
print(f"Processing images from {target_dir}")
|
| 174 |
-
|
| 175 |
-
# Device check
|
| 176 |
-
device = "cuda" if torch.cuda.is_available() else "cpu"
|
| 177 |
-
device = torch.device(device)
|
| 178 |
-
print(f"Using device: {device}")
|
| 179 |
-
|
| 180 |
-
# ๐ ไฝฟ็จ่ฟๅๅผ๏ผ่ไธๆฏ self.model
|
| 181 |
-
model = self.initialize_model(device)
|
| 182 |
-
|
| 183 |
-
# Get image paths
|
| 184 |
-
print("Loading images...")
|
| 185 |
-
image_folder_path = os.path.join(target_dir, "images")
|
| 186 |
-
all_image_paths = sorted(glob.glob(os.path.join(image_folder_path, "*")))
|
| 187 |
-
|
| 188 |
-
# Filter for image files
|
| 189 |
-
image_extensions = [".jpg", ".jpeg", ".png", ".bmp", ".tiff", ".tif"]
|
| 190 |
-
all_image_paths = [
|
| 191 |
-
path
|
| 192 |
-
for path in all_image_paths
|
| 193 |
-
if any(path.lower().endswith(ext) for ext in image_extensions)
|
| 194 |
-
]
|
| 195 |
-
|
| 196 |
-
print(f"Found {len(all_image_paths)} images")
|
| 197 |
-
|
| 198 |
-
# Apply first frame selection logic
|
| 199 |
-
if selected_first_frame:
|
| 200 |
-
selected_path = None
|
| 201 |
-
for path in all_image_paths:
|
| 202 |
-
if os.path.basename(path) == selected_first_frame:
|
| 203 |
-
selected_path = path
|
| 204 |
-
break
|
| 205 |
-
|
| 206 |
-
if selected_path:
|
| 207 |
-
image_paths = [selected_path] + [
|
| 208 |
-
path for path in all_image_paths if path != selected_path
|
| 209 |
-
]
|
| 210 |
-
print(f"User selected first frame: {selected_first_frame}")
|
| 211 |
-
else:
|
| 212 |
-
image_paths = all_image_paths
|
| 213 |
-
print(f"Selected frame not found, using default order")
|
| 214 |
-
else:
|
| 215 |
-
image_paths = all_image_paths
|
| 216 |
-
|
| 217 |
-
if len(image_paths) == 0:
|
| 218 |
-
raise ValueError("No images found. Check your upload.")
|
| 219 |
-
|
| 220 |
-
# Map UI options to actual method names
|
| 221 |
-
method_mapping = {"high_res": "lower_bound_resize", "low_res": "upper_bound_resize"}
|
| 222 |
-
actual_method = method_mapping.get(process_res_method, "upper_bound_crop")
|
| 223 |
-
|
| 224 |
-
# Run model inference
|
| 225 |
-
print(f"Running inference with method: {actual_method}")
|
| 226 |
-
with torch.no_grad():
|
| 227 |
-
# ๐ ไฝฟ็จๅฑ้จๅ้ model๏ผไธๆฏ self.model
|
| 228 |
-
prediction = model.inference(
|
| 229 |
-
image_paths, export_dir=None, process_res_method=actual_method, infer_gs=infer_gs
|
| 230 |
-
)
|
| 231 |
-
|
| 232 |
-
# Export to GLB
|
| 233 |
-
export_to_glb(
|
| 234 |
-
prediction,
|
| 235 |
-
filter_black_bg=filter_black_bg,
|
| 236 |
-
filter_white_bg=filter_white_bg,
|
| 237 |
-
export_dir=target_dir,
|
| 238 |
-
show_cameras=show_camera,
|
| 239 |
-
conf_thresh_percentile=save_percentage,
|
| 240 |
-
num_max_points=int(num_max_points),
|
| 241 |
-
)
|
| 242 |
-
|
| 243 |
-
# Export to GS video if needed
|
| 244 |
-
if infer_gs:
|
| 245 |
-
mode_mapping = {"extend": "extend", "smooth": "interpolate_smooth"}
|
| 246 |
-
print(f"GS mode: {gs_trj_mode}; Backend mode: {mode_mapping[gs_trj_mode]}")
|
| 247 |
-
export_to_gs_video(
|
| 248 |
-
prediction,
|
| 249 |
-
export_dir=target_dir,
|
| 250 |
-
chunk_size=4,
|
| 251 |
-
trj_mode=mode_mapping.get(gs_trj_mode, "extend"),
|
| 252 |
-
enable_tqdm=True,
|
| 253 |
-
vis_depth="hcat",
|
| 254 |
-
video_quality=gs_video_quality,
|
| 255 |
-
)
|
| 256 |
-
|
| 257 |
-
# Save predictions cache
|
| 258 |
-
self._save_predictions_cache(target_dir, prediction)
|
| 259 |
-
|
| 260 |
-
# Process results
|
| 261 |
-
processed_data = self._process_results(target_dir, prediction, image_paths)
|
| 262 |
-
|
| 263 |
-
# ========================================
|
| 264 |
-
# ๐ ๅ
ณ้ฎไฟฎๆน 2๏ผ่ฟๅๅ็งปๅจๆๆ CUDA ๅผ ้ๅฐ CPU
|
| 265 |
-
# ========================================
|
| 266 |
-
print("Moving all tensors to CPU for safe return...")
|
| 267 |
-
prediction = self._move_prediction_to_cpu(prediction)
|
| 268 |
-
|
| 269 |
-
# Clean up GPU memory
|
| 270 |
-
torch.cuda.empty_cache()
|
| 271 |
-
|
| 272 |
-
return prediction, processed_data
|
| 273 |
-
|
| 274 |
-
def _move_prediction_to_cpu(self, prediction: Any) -> Any:
|
| 275 |
-
"""
|
| 276 |
-
Move all CUDA tensors in prediction to CPU for safe pickling.
|
| 277 |
-
|
| 278 |
-
This is CRITICAL for HF Spaces with @spaces.GPU decorator.
|
| 279 |
-
Without this, pickle will try to reconstruct CUDA tensors in
|
| 280 |
-
the main process, causing CUDA initialization error.
|
| 281 |
-
|
| 282 |
-
Args:
|
| 283 |
-
prediction: Prediction object that may contain CUDA tensors
|
| 284 |
-
|
| 285 |
-
Returns:
|
| 286 |
-
Prediction object with all tensors moved to CPU
|
| 287 |
-
"""
|
| 288 |
-
# Move gaussians tensors to CPU
|
| 289 |
-
if hasattr(prediction, 'gaussians') and prediction.gaussians is not None:
|
| 290 |
-
gaussians = prediction.gaussians
|
| 291 |
-
|
| 292 |
-
# Move each tensor attribute to CPU
|
| 293 |
-
tensor_attrs = ['means', 'scales', 'rotations', 'harmonics', 'opacities']
|
| 294 |
-
for attr in tensor_attrs:
|
| 295 |
-
if hasattr(gaussians, attr):
|
| 296 |
-
tensor = getattr(gaussians, attr)
|
| 297 |
-
if isinstance(tensor, torch.Tensor) and tensor.is_cuda:
|
| 298 |
-
setattr(gaussians, attr, tensor.cpu())
|
| 299 |
-
print(f" โ Moved gaussians.{attr} to CPU")
|
| 300 |
-
|
| 301 |
-
# Move any tensors in aux dict to CPU
|
| 302 |
-
if hasattr(prediction, 'aux') and prediction.aux is not None:
|
| 303 |
-
for key, value in list(prediction.aux.items()):
|
| 304 |
-
if isinstance(value, torch.Tensor) and value.is_cuda:
|
| 305 |
-
prediction.aux[key] = value.cpu()
|
| 306 |
-
print(f" โ Moved aux['{key}'] to CPU")
|
| 307 |
-
elif isinstance(value, dict):
|
| 308 |
-
# Recursively handle nested dicts
|
| 309 |
-
for k, v in list(value.items()):
|
| 310 |
-
if isinstance(v, torch.Tensor) and v.is_cuda:
|
| 311 |
-
value[k] = v.cpu()
|
| 312 |
-
print(f" โ Moved aux['{key}']['{k}'] to CPU")
|
| 313 |
-
|
| 314 |
-
print("โ
All tensors moved to CPU")
|
| 315 |
-
return prediction
|
| 316 |
-
|
| 317 |
-
def _save_predictions_cache(self, target_dir: str, prediction: Any) -> None:
|
| 318 |
-
"""Save predictions data to predictions.npz for caching."""
|
| 319 |
-
try:
|
| 320 |
-
output_file = os.path.join(target_dir, "predictions.npz")
|
| 321 |
-
save_dict = {}
|
| 322 |
-
|
| 323 |
-
if prediction.processed_images is not None:
|
| 324 |
-
save_dict["images"] = prediction.processed_images
|
| 325 |
-
|
| 326 |
-
if prediction.depth is not None:
|
| 327 |
-
save_dict["depths"] = np.round(prediction.depth, 6)
|
| 328 |
-
|
| 329 |
-
if prediction.conf is not None:
|
| 330 |
-
save_dict["conf"] = np.round(prediction.conf, 2)
|
| 331 |
-
|
| 332 |
-
if prediction.extrinsics is not None:
|
| 333 |
-
save_dict["extrinsics"] = prediction.extrinsics
|
| 334 |
-
if prediction.intrinsics is not None:
|
| 335 |
-
save_dict["intrinsics"] = prediction.intrinsics
|
| 336 |
-
|
| 337 |
-
np.savez_compressed(output_file, **save_dict)
|
| 338 |
-
print(f"Saved predictions cache to: {output_file}")
|
| 339 |
-
|
| 340 |
-
except Exception as e:
|
| 341 |
-
print(f"Warning: Failed to save predictions cache: {e}")
|
| 342 |
-
|
| 343 |
-
def _process_results(
|
| 344 |
-
self, target_dir: str, prediction: Any, image_paths: list
|
| 345 |
-
) -> Dict[int, Dict[str, Any]]:
|
| 346 |
-
"""Process model results into structured data."""
|
| 347 |
-
processed_data = {}
|
| 348 |
-
|
| 349 |
-
depth_vis_dir = os.path.join(target_dir, "depth_vis")
|
| 350 |
-
|
| 351 |
-
if os.path.exists(depth_vis_dir):
|
| 352 |
-
depth_files = sorted(glob.glob(os.path.join(depth_vis_dir, "*.jpg")))
|
| 353 |
-
for i, depth_file in enumerate(depth_files):
|
| 354 |
-
processed_image = None
|
| 355 |
-
if prediction.processed_images is not None and i < len(
|
| 356 |
-
prediction.processed_images
|
| 357 |
-
):
|
| 358 |
-
processed_image = prediction.processed_images[i]
|
| 359 |
-
|
| 360 |
-
processed_data[i] = {
|
| 361 |
-
"depth_image": depth_file,
|
| 362 |
-
"image": processed_image,
|
| 363 |
-
"original_image_path": image_paths[i] if i < len(image_paths) else None,
|
| 364 |
-
"depth": prediction.depth[i] if i < len(prediction.depth) else None,
|
| 365 |
-
"intrinsics": (
|
| 366 |
-
prediction.intrinsics[i]
|
| 367 |
-
if prediction.intrinsics is not None and i < len(prediction.intrinsics)
|
| 368 |
-
else None
|
| 369 |
-
),
|
| 370 |
-
"mask": None,
|
| 371 |
-
}
|
| 372 |
-
|
| 373 |
-
return processed_data
|
| 374 |
-
|
| 375 |
-
def cleanup(self) -> None:
|
| 376 |
-
"""Clean up GPU memory."""
|
| 377 |
-
if torch.cuda.is_available():
|
| 378 |
-
torch.cuda.empty_cache()
|
| 379 |
-
gc.collect()
|
| 380 |
-
```
|
| 381 |
-
|
| 382 |
-
## ๐ ๅ
ณ้ฎๅๅๆป็ป
|
| 383 |
-
|
| 384 |
-
### Before (ๆ้ฎ้ข)๏ผ
|
| 385 |
-
```python
|
| 386 |
-
class ModelInference:
|
| 387 |
-
def __init__(self):
|
| 388 |
-
self.model = None # โ ๅฎไพๅ้
|
| 389 |
-
|
| 390 |
-
def initialize_model(self, device):
|
| 391 |
-
if self.model is None:
|
| 392 |
-
self.model = load_model() # โ ไฟๅญๅจๅฎไพไธญ
|
| 393 |
-
else:
|
| 394 |
-
self.model = self.model.to(device) # โ ่ทจ่ฟ็จๆไฝ
|
| 395 |
-
|
| 396 |
-
def run_inference(self):
|
| 397 |
-
self.initialize_model(device) # โ ไฝฟ็จๅฎไพๆนๆณ
|
| 398 |
-
prediction = self.model.inference(...) # โ ไฝฟ็จๅฎไพๅ้
|
| 399 |
-
return prediction # โ ๅ
ๅซ CUDA ๅผ ้
|
| 400 |
-
```
|
| 401 |
-
|
| 402 |
-
### After (ๆญฃ็กฎ)๏ผ
|
| 403 |
-
```python
|
| 404 |
-
_MODEL_CACHE = None # โ
ๅ
จๅฑๅ้๏ผๅญ่ฟ็จๅฎๅ
จ๏ผ
|
| 405 |
-
|
| 406 |
-
class ModelInference:
|
| 407 |
-
def __init__(self):
|
| 408 |
-
pass # โ
ๆ ๅฎไพๅ้
|
| 409 |
-
|
| 410 |
-
def initialize_model(self, device):
|
| 411 |
-
global _MODEL_CACHE
|
| 412 |
-
if _MODEL_CACHE is None:
|
| 413 |
-
_MODEL_CACHE = load_model() # โ
ไฟๅญๅจๅ
จๅฑ
|
| 414 |
-
return _MODEL_CACHE # โ
่ฟๅ่ไธๆฏๅญๅจ
|
| 415 |
-
|
| 416 |
-
def run_inference(self):
|
| 417 |
-
model = self.initialize_model(device) # โ
ๅฑ้จๅ้
|
| 418 |
-
prediction = model.inference(...) # โ
ไฝฟ็จๅฑ้จๅ้
|
| 419 |
-
prediction = self._move_prediction_to_cpu(prediction) # โ
็งปๅฐ CPU
|
| 420 |
-
return prediction # โ
ๅฎๅ
จ่ฟๅ
|
| 421 |
-
```
|
| 422 |
-
|
| 423 |
-
## ๐ฏ ไธบไปไน่ฟๆ ทไฟฎๆน๏ผ
|
| 424 |
-
|
| 425 |
-
### 1. ๅ
จๅฑๅ้ vs ๅฎไพๅ้
|
| 426 |
-
|
| 427 |
-
| ๆนๅผ | ้ฎ้ข | ๅๅ |
|
| 428 |
-
|------|------|------|
|
| 429 |
-
| `self.model` | โ ่ทจ่ฟ็จ็ถๆๆททไนฑ | ๅฎไพๅจไธป่ฟ็จๅๅปบ |
|
| 430 |
-
| `_MODEL_CACHE` | โ
ๅญ่ฟ็จๅ
ๅฎๅ
จ | ๆฏไธชๅญ่ฟ็จ็ฌ็ซ |
|
| 431 |
-
|
| 432 |
-
### 2. ่ฟๅ CPU ๅผ ้
|
| 433 |
-
|
| 434 |
-
```python
|
| 435 |
-
# โ ็ดๆฅ่ฟๅไผๆฅ้
|
| 436 |
-
return prediction # prediction.gaussians.means is on CUDA
|
| 437 |
-
|
| 438 |
-
# โ
็งปๅฐ CPU ๅ่ฟๅ
|
| 439 |
-
prediction = move_to_cpu(prediction)
|
| 440 |
-
return prediction # All tensors are on CPU, pickle safe
|
| 441 |
-
```
|
| 442 |
-
|
| 443 |
-
## ๐งช ๆต่ฏไฟฎๅค
|
| 444 |
-
|
| 445 |
-
```bash
|
| 446 |
-
# 1. ๅบ็จไฟฎๆน
|
| 447 |
-
# ๅคๅถไธ้ข็ๅฎๆดไปฃ็ ๅฐ model_inference.py
|
| 448 |
-
|
| 449 |
-
# 2. ๆจ้ๅฐ Spaces
|
| 450 |
-
git add depth_anything_3/app/modules/model_inference.py
|
| 451 |
-
git commit -m "Fix: Spaces GPU CUDA initialization error"
|
| 452 |
-
git push
|
| 453 |
-
|
| 454 |
-
# 3. ๆต่ฏๅคๆฌก่ฟ่ก
|
| 455 |
-
# ๅจ Space ไธญ่ฟ็ปญ่ฟ่ก 2-3 ๆฌกๆจ็
|
| 456 |
-
# ๅบ่ฏฅไธๅๅบ็ฐ CUDA ้่ฏฏ
|
| 457 |
-
```
|
| 458 |
-
|
| 459 |
-
## ๐ ไฟฎๅคๆๆ
|
| 460 |
-
|
| 461 |
-
| ้ฎ้ข | Before | After |
|
| 462 |
-
|------|--------|-------|
|
| 463 |
-
| ็ฌฌไธๆฌกๆจ็ | โ CUDA ้่ฏฏ | โ
ๆญฃๅธธ |
|
| 464 |
-
| ็ฌฌไบๆฌกๆจ็ | โ CUDA ้่ฏฏ | โ
ๆญฃๅธธ |
|
| 465 |
-
| ่ฟ็ปญๆจ็ | โ ๅคฑ่ดฅ | โ
็จณๅฎ |
|
| 466 |
-
| ๆจกๅๅ ่ฝฝ | ๆฏๆฌก้ๆฐๅ ่ฝฝ | ็ผๅญๅค็จ |
|
| 467 |
-
|
| 468 |
-
## ๐ก ๆไฝณๅฎ่ทต
|
| 469 |
-
|
| 470 |
-
ๅฏนไบ `@spaces.GPU` ่ฃ
้ฅฐ็ๅฝๆฐ๏ผ
|
| 471 |
-
|
| 472 |
-
1. โ
ไฝฟ็จ**ๅ
จๅฑๅ้**็ผๅญๆจกๅ๏ผๅญ่ฟ็จๅฎๅ
จ๏ผ
|
| 473 |
-
2. โ
**ไธ่ฆ**ไฝฟ็จๅฎไพๅ้ๅญๅจๆจกๅ
|
| 474 |
-
3. โ
่ฟๅๅ**็งปๅจๆๆๅผ ้ๅฐ CPU**
|
| 475 |
-
4. โ
ๆธ
็ GPU ๅ
ๅญ (`torch.cuda.empty_cache()`)
|
| 476 |
-
5. โ **ไธ่ฆ**ๅจไธป่ฟ็จไธญๅๅงๅ CUDA
|
| 477 |
-
6. โ **ไธ่ฆ**่ฟๅ CUDA ๅผ ้
|
| 478 |
-
|
| 479 |
-
## ๐ ็ธๅ
ณ่ตๆบ
|
| 480 |
-
|
| 481 |
-
- [HF Spaces Zero GPU ๆๆกฃ](https://huggingface.co/docs/hub/spaces-gpus#zero-gpu)
|
| 482 |
-
- [PyTorch Multiprocessing](https://pytorch.org/docs/stable/notes/multiprocessing.html)
|
| 483 |
-
- [Pickle ๅ่ฎฎ](https://docs.python.org/3/library/pickle.html)
|
| 484 |
-
|
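The attribute-by-attribute migration in `_move_prediction_to_cpu` can be generalized. The sketch below is illustrative (the `move_to_cpu` name and the duck-typed check are assumptions, not code from the repository); it recurses through dicts, lists, and tuples and moves anything tensor-like to CPU:

```python
def move_to_cpu(obj):
    """Recursively move anything tensor-like to CPU (illustrative sketch).

    Duck-typed on `.is_cuda` / `.cpu()` so the sketch runs without torch;
    real torch tensors expose exactly these attributes.
    """
    if hasattr(obj, "is_cuda") and hasattr(obj, "cpu"):
        # Tensor-like: copy to CPU only when it actually lives on the GPU
        return obj.cpu() if obj.is_cuda else obj
    if isinstance(obj, dict):
        return {k: move_to_cpu(v) for k, v in obj.items()}
    if isinstance(obj, (list, tuple)):
        return type(obj)(move_to_cpu(v) for v in obj)
    return obj  # plain values pass through unchanged
```

With real torch tensors the same `.is_cuda` / `.cpu()` calls apply; duck typing just keeps the sketch runnable without a GPU.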
SPACES_SETUP.md
DELETED
@@ -1,190 +0,0 @@

# Hugging Face Spaces deployment guide

## 📋 Overview

This project is ready to deploy to Hugging Face Spaces, using the `@spaces.GPU` decorator to allocate GPU resources on demand.

## 🎯 Key files

### 1. `app.py` - main application file

```python
import spaces
from depth_anything_3.app.gradio_app import DepthAnything3App
from depth_anything_3.app.modules.model_inference import ModelInference

# Apply the GPU decorator to the inference function via monkey-patching
original_run_inference = ModelInference.run_inference

@spaces.GPU(duration=120)  # request a GPU for at most 120 seconds
def gpu_run_inference(self, *args, **kwargs):
    return original_run_inference(self, *args, **kwargs)

ModelInference.run_inference = gpu_run_inference
```

**How it works:**
- `@spaces.GPU` allocates a GPU dynamically when the function is called
- `duration=120` means a single inference may hold the GPU for at most 120 seconds
- monkey-patching applies the decorator to the existing inference function without modifying core code

### 2. `README.md` - Spaces configuration

```yaml
---
title: Depth Anything 3
sdk: gradio
sdk_version: 5.49.1
app_file: app.py
pinned: false
license: cc-by-nc-4.0
---
```

This YAML front matter tells Hugging Face Spaces:
- to use the Gradio SDK
- that the entry point is `app.py`
- which Gradio version to use

### 3. `pyproject.toml` - dependency configuration

Already updated to include the `spaces` dependency:

```toml
[project.optional-dependencies]
app = ["gradio>=5", "pillow>=9.0", "spaces"]
```

## 🚀 Deployment steps

### Option 1: via the Hugging Face web UI

1. Create a new Space on Hugging Face
2. Choose **Gradio** as the SDK
3. Upload your code (including `app.py`, `src/`, `pyproject.toml`, etc.)
4. The Space builds and starts automatically

### Option 2: via Git

```bash
# Clone your Space repository
git clone https://huggingface.co/spaces/YOUR_USERNAME/YOUR_SPACE_NAME
cd YOUR_SPACE_NAME

# Add your code
cp -r /path/to/depth-anything-3/* .

# Commit and push
git add .
git commit -m "Initial commit"
git push
```

## 🔧 Configuration options

### GPU types

Hugging Face Spaces supports several GPU types:

- **Free (T4)**: free, suitable for small models
- **A10G**: paid, more powerful
- **A100**: paid, most powerful

### GPU duration

Adjustable in `app.py`:

```python
@spaces.GPU(duration=120)  # 120 seconds
```

- too short: complex inference may time out
- too long: wastes resources
- recommendation: base it on actual inference time (start generous, then tune from the logs)

### Environment variables

Configurable in the Space settings:

- `DA3_MODEL_DIR`: model directory path
- `DA3_WORKSPACE_DIR`: workspace directory
- `DA3_GALLERY_DIR`: gallery directory

## 📊 Monitoring and debugging

### Viewing logs

The "Logs" tab of the Space shows:

```
🚀 Launching Depth Anything 3 on Hugging Face Spaces...
📦 Model Directory: depth-anything/DA3NESTED-GIANT-LARGE
📁 Workspace Directory: workspace/gradio
🖼️ Gallery Directory: workspace/gallery
```

### GPU usage

Inside a decorated function you can check the GPU state:

```python
print(torch.cuda.is_available())      # True
print(torch.cuda.device_count())      # 1 (usually)
print(torch.cuda.get_device_name(0))  # 'Tesla T4' or similar
```

## 📝 Example code

See `example_spaces_gpu.py` for basic usage of the `@spaces.GPU` decorator.

## ❓ FAQ

### Q: Why monkey-patching?

A: It adds Spaces support without modifying core code. If you prefer a more elegant approach, you can:

1. add the decorator directly on `ModelInference.run_inference`
2. create a subclass of `ModelInference`

### Q: How do I test locally?

A: When running locally, the `spaces.GPU` decorator is ignored (if the `spaces` package is not installed), or it simply executes the function without special handling.

```bash
# Local test
python app.py
```

### Q: Can I decorate multiple functions?

A: Yes, you can add `@spaces.GPU` to any function that needs the GPU.

```python
@spaces.GPU(duration=60)
def function1():
    pass

@spaces.GPU(duration=120)
def function2():
    pass
```

### Q: How do I optimize GPU usage?

A: Some suggestions:

1. **Only decorate what is necessary**: decorate the inference functions that actually use the GPU, not the whole app
2. **Set a reasonable duration**: based on actual need
3. **Free GPU memory**: call `torch.cuda.empty_cache()` when the function finishes
4. **Batch**: if possible, batch multiple requests together

## 🔗 Related resources

- [Hugging Face Spaces docs](https://huggingface.co/docs/hub/spaces)
- [Spaces GPU guide](https://huggingface.co/docs/hub/spaces-gpus)
- [Gradio docs](https://gradio.app/docs)

## 📝 License

Apache-2.0
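The local-testing answer above can be made concrete with a small shim. This is a hedged sketch — `gpu_stub` is a made-up name, not part of the `spaces` package — showing how a no-op decorator keeps an app importable where `spaces` is unavailable:

```python
def gpu_stub(*d_args, **d_kwargs):
    """No-op stand-in for spaces.GPU when running outside HF Spaces.

    Supports both bare `@gpu_stub` and parameterized `@gpu_stub(duration=120)`.
    """
    if len(d_args) == 1 and callable(d_args[0]) and not d_kwargs:
        return d_args[0]  # bare decorator: return the function unchanged
    def wrap(fn):
        return fn  # parameterized decorator: ignore duration etc.
    return wrap

@gpu_stub(duration=120)
def infer(x):
    return x * 2

@gpu_stub
def double(x):
    return 2 * x

print(infer(21), double(4))  # 42 8
```

One could bind `gpu = spaces.GPU` inside a `try: import spaces` block and fall back to a stub like this in the `except ImportError` branch.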
UPLOAD_EXAMPLES.md
DELETED
|
@@ -1,314 +0,0 @@
|
|
| 1 |
-
# ๐ค ไธไผ Examples ๅฐ Hugging Face Spaces ๆๅ
|
| 2 |
-
|
| 3 |
-
## ๐จ ้ฎ้ข๏ผไบ่ฟๅถๆไปถ่ขซๆ็ป
|
| 4 |
-
|
| 5 |
-
Hugging Face Spaces ไผๆ็ปๅคงๆไปถ๏ผ>100MB๏ผๆไบ่ฟๅถๆไปถ๏ผ้่ฆไฝฟ็จ **Git LFS** ๆฅไธไผ ใ
|
| 6 |
-
|
| 7 |
-
## โ
่งฃๅณๆนๆก
|
| 8 |
-
|
| 9 |
-
### ๆนๆก 1๏ผไฝฟ็จ Git LFS๏ผๆจ่๏ผโญ
|
| 10 |
-
|
| 11 |
-
#### ๆญฅ้ชค 1๏ผ้
็ฝฎ Git LFS
|
| 12 |
-
|
| 13 |
-
ๆๅทฒ็ปไธบไฝ ๅๅปบไบ `.gitattributes` ๆไปถ๏ผ้
็ฝฎไบๅพ็ๆไปถ็ Git LFS๏ผ
|
| 14 |
-
|
| 15 |
-
```gitattributes
|
| 16 |
-
# Images in examples directory
|
| 17 |
-
workspace/gradio/examples/**/*.png filter=lfs diff=lfs merge=lfs -text
|
| 18 |
-
workspace/gradio/examples/**/*.jpg filter=lfs diff=lfs merge=lfs -text
|
| 19 |
-
workspace/gradio/examples/**/*.jpeg filter=lfs diff=lfs merge=lfs -text
|
| 20 |
-
workspace/gradio/examples/**/*.bmp filter=lfs diff=lfs merge=lfs -text
|
| 21 |
-
workspace/gradio/examples/**/*.tiff filter=lfs diff=lfs merge=lfs -text
|
| 22 |
-
workspace/gradio/examples/**/*.tif filter=lfs diff=lfs merge=lfs -text
|
| 23 |
-
```
|
| 24 |
-
|
| 25 |
-
#### ๆญฅ้ชค 2๏ผๅฎ่ฃ
Git LFS๏ผๅฆๆ่ฟๆฒกๆ๏ผ
|
| 26 |
-
|
| 27 |
-
```bash
|
| 28 |
-
# macOS
|
| 29 |
-
brew install git-lfs
|
| 30 |
-
|
| 31 |
-
# Linux
|
| 32 |
-
sudo apt-get install git-lfs
|
| 33 |
-
|
| 34 |
-
# Windows
|
| 35 |
-
# ไธ่ฝฝๅฎ่ฃ
๏ผhttps://git-lfs.github.com/
|
| 36 |
-
```
|
| 37 |
-
|
| 38 |
-
#### ๆญฅ้ชค 3๏ผๅๅงๅ Git LFS
|
| 39 |
-
|
| 40 |
-
```bash
|
| 41 |
-
cd /Users/bytedance/depth-anything-3
|
| 42 |
-
|
| 43 |
-
# ๅๅงๅ Git LFS
|
| 44 |
-
git lfs install
|
| 45 |
-
|
| 46 |
-
# ้ช่ฏ้
็ฝฎ
|
| 47 |
-
git lfs track
|
| 48 |
-
```
|
| 49 |
-
|
| 50 |
-
#### ๆญฅ้ชค 4๏ผๆทปๅ ็คบไพๅบๆฏ
|
| 51 |
-
|
| 52 |
-
```bash
|
| 53 |
-
# ๅๅปบ examples ็ฎๅฝ
|
| 54 |
-
mkdir -p workspace/gradio/examples/my_scene
|
| 55 |
-
|
| 56 |
-
# ๆทปๅ ๅพๅๆไปถ
|
| 57 |
-
cp your_images/* workspace/gradio/examples/my_scene/
|
| 58 |
-
|
| 59 |
-
# ๆทปๅ ๆไปถๅฐ Git LFS
|
| 60 |
-
git add workspace/gradio/examples/
|
| 61 |
-
git add .gitattributes
|
| 62 |
-
|
| 63 |
-
# ๆไบค
|
| 64 |
-
git commit -m "Add example scenes with Git LFS"
|
| 65 |
-
|
| 66 |
-
# ๆจ้ๅฐ Spaces
|
| 67 |
-
git push origin main
|
| 68 |
-
```
|
| 69 |
-
|
| 70 |
-
#### ๆญฅ้ชค 5๏ผ้ช่ฏ
|
| 71 |
-
|
| 72 |
-
```bash
|
| 73 |
-
# ๆฃๆฅๅชไบๆไปถไฝฟ็จไบ LFS
|
| 74 |
-
git lfs ls-files
|
| 75 |
-
|
| 76 |
-
# ๅบ่ฏฅ็ๅฐไฝ ็ๅพ็ๆไปถ
|
| 77 |
-
```
|
| 78 |
-
|
| 79 |
-
---
|
| 80 |
-
|
| 81 |
-
### ๆนๆก 2๏ผไฝฟ็จๆไน
ๅญๅจ๏ผๆจ่็จไบๅคง้ๆฐๆฎ๏ผโญ
|
| 82 |
-
|
| 83 |
-
ๅฆๆ็คบไพๅบๆฏๅพๅคง๏ผๅฏไปฅไฝฟ็จ Hugging Face Spaces ็ๆไน
ๅญๅจๅ่ฝใ
|
| 84 |
-
|
| 85 |
-
#### ๆญฅ้ชค 1๏ผๅจ Spaces ่ฎพ็ฝฎไธญๅฏ็จๆไน
ๅญๅจ
|
| 86 |
-
|
| 87 |
-
1. ่ฟๅ
ฅไฝ ็ Space ่ฎพ็ฝฎ
|
| 88 |
-
2. ๅฏ็จ "Persistent storage"
|
| 89 |
-
3. ่ฎพ็ฝฎๅญๅจๅคงๅฐ๏ผๅฆ 50GB๏ผ
|
| 90 |
-
|
| 91 |
-
#### ๆญฅ้ชค 2๏ผๅจๅบ็จๅฏๅจๆถไธ่ฝฝ็คบไพ
|
| 92 |
-
|
| 93 |
-
ไฟฎๆน `app.py`๏ผๅจๅฏๅจๆถไปๅค้จๆบไธ่ฝฝ็คบไพ๏ผ
|
| 94 |
-
|
| 95 |
-
```python
|
| 96 |
-
import os
|
| 97 |
-
import subprocess
|
| 98 |
-
|
| 99 |
-
def download_examples():
|
| 100 |
-
"""Download examples from external source if not exists"""
|
| 101 |
-
examples_dir = "workspace/gradio/examples"
|
| 102 |
-
if not os.path.exists(examples_dir) or not os.listdir(examples_dir):
|
| 103 |
-
print("Downloading example scenes...")
|
| 104 |
-
# ไป Hugging Face Dataset ไธ่ฝฝ
|
| 105 |
-
# ๆไปๅ
ถไปๅญๅจๆๅกไธ่ฝฝ
|
| 106 |
-
# subprocess.run(["huggingface-cli", "download", "dataset/examples", ...])
|
| 107 |
-
pass
|
| 108 |
-
|
| 109 |
-
if __name__ == "__main__":
|
| 110 |
-
download_examples()
|
| 111 |
-
# ... ๅฏๅจๅบ็จ
|
| 112 |
-
```
|
| 113 |
-
|
| 114 |
-
#### ๆญฅ้ชค 3๏ผไธไผ ๅฐ Hugging Face Dataset
|
| 115 |
-
|
| 116 |
-
```bash
|
| 117 |
-
# ๅฎ่ฃ
ไพ่ต
|
| 118 |
-
pip install huggingface_hub datasets
|
| 119 |
-
|
| 120 |
-
# ไธไผ ๅฐ Dataset
|
| 121 |
-
python -c "
|
| 122 |
-
from datasets import Dataset
|
| 123 |
-
from huggingface_hub import HfApi
|
| 124 |
-
|
| 125 |
-
# ๅๅปบ dataset ๅนถไธไผ
|
| 126 |
-
api = HfApi()
|
| 127 |
-
api.upload_folder(
|
| 128 |
-
folder_path='workspace/gradio/examples',
|
| 129 |
-
repo_id='your-username/your-examples-dataset',
|
| 130 |
-
repo_type='dataset'
|
| 131 |
-
)
|
| 132 |
-
"
|
| 133 |
-
```
|
| 134 |
-
|
| 135 |
-
---
|
| 136 |
-
|
| 137 |
-
### ๆนๆก 3๏ผๅ็ผฉๅไธไผ ๏ผๅฐๆไปถ๏ผ
|
| 138 |
-
|
| 139 |
-
ๅฆๆๅพ็ๆไปถ่พๅฐ๏ผ<100MB๏ผ๏ผๅฏไปฅๅ็ผฉๅไธไผ ๏ผ
|
| 140 |
-
|
| 141 |
-
```bash
|
| 142 |
-
# ๅ็ผฉ examples ็ฎๅฝ
|
| 143 |
-
tar -czf examples.tar.gz workspace/gradio/examples/
|
| 144 |
-
|
| 145 |
-
# ๆทปๅ ๅฐ Git๏ผไฝไธบๆฎ้ๆไปถ๏ผ
|
| 146 |
-
git add examples.tar.gz
|
| 147 |
-
git commit -m "Add compressed examples"
|
| 148 |
-
git push
|
| 149 |
-
|
| 150 |
-
# ๅจๅบ็จๅฏๅจๆถ่งฃๅ
|
| 151 |
-
# ๅจ app.py ไธญๆทปๅ ๏ผ
|
| 152 |
-
import tarfile
|
| 153 |
-
if not os.path.exists("workspace/gradio/examples"):
|
| 154 |
-
print("Extracting examples...")
|
| 155 |
-
tarfile.open("examples.tar.gz").extractall()
|
| 156 |
-
```
---

### Scheme 4: Runtime Download (Recommended for Production) ⭐

Download the example scenes from an external source at application startup:

#### Modify `app.py`

```python
import os
import subprocess
from huggingface_hub import hf_hub_download

def setup_examples():
    """Setup examples directory by downloading if needed"""
    examples_dir = "workspace/gradio/examples"
    os.makedirs(examples_dir, exist_ok=True)

    # If the examples directory is empty, download from an external source
    if not os.listdir(examples_dir):
        print("📥 Downloading example scenes...")

        # Option 1: download from a Hugging Face Dataset
        try:
            from datasets import load_dataset
            dataset = load_dataset("your-username/your-examples-dataset")
            # process and save into examples_dir
        except Exception:
            pass

        # Option 2: download a compressed archive from a URL
        # import urllib.request
        # urllib.request.urlretrieve("https://...", "examples.zip")
        # extract into examples_dir

        print("✅ Examples downloaded")

if __name__ == "__main__":
    setup_examples()
    # ... launch the app
```
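The commented-out Option 2 above can be fleshed out with the standard library alone. This is a hypothetical sketch — the URL and the flat `scene/.../image` layout inside the zip are placeholder assumptions, not part of the project:

```python
import os
import urllib.request
import zipfile

def fetch_examples(url: str, examples_dir: str) -> None:
    """Download a zip of example scenes and unpack it into examples_dir.

    The URL is a placeholder -- point it at wherever you host the archive.
    Skips the download entirely when the directory is already populated.
    """
    os.makedirs(examples_dir, exist_ok=True)
    if os.listdir(examples_dir):
        return  # already populated
    archive, _ = urllib.request.urlretrieve(url, "examples.zip")
    with zipfile.ZipFile(archive) as zf:
        zf.extractall(examples_dir)
    os.remove(archive)  # the archive is only needed once
```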
---

## 🎯 Scheme Comparison

| Scheme | Pros | Cons | Best for |
|--------|------|------|----------|
| **Git LFS** | ✅ Simple and direct<br>✅ Version control | ⚠️ Uses LFS quota<br>⚠️ Large files can be slow | Small/medium examples (<1GB) |
| **Persistent storage** | ✅ No size limit<br>✅ Fast access | ⚠️ Manual upload required<br>⚠️ Paid feature | Large examples (>1GB) |
| **Runtime download** | ✅ Uses no repo space<br>✅ Flexible | ⚠️ Slow first startup<br>⚠️ Requires network | Production |
| **Compress & upload** | ✅ Simple | ⚠️ Size limits<br>⚠️ Needs extraction | Small files (<100MB) |

---
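The table's size thresholds can be turned into a small helper that inspects the examples directory and suggests a scheme. The cutoffs are the illustrative ones from the table above, not hard limits:

```python
import os

def suggest_scheme(examples_dir: str) -> str:
    """Suggest a hosting scheme from the total size of the example scenes.

    Thresholds mirror the comparison table; tune them to your quota.
    """
    total = 0
    for root, _, files in os.walk(examples_dir):
        for name in files:
            total += os.path.getsize(os.path.join(root, name))
    if total < 100 * 1024**2:          # under 100MB
        return "git-lfs (or compressed upload)"
    if total < 1024**3:                # under 1GB
        return "git-lfs or persistent storage"
    return "persistent storage or runtime download"
```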
## 📝 Complete Git LFS Setup Steps

### 1. Make Sure Git LFS Is Installed

```bash
git lfs version
# If it is not installed, follow the steps above
```

### 2. Initialize Git LFS

```bash
cd /Users/bytedance/depth-anything-3
git lfs install
```

### 3. Check .gitattributes

Make sure `.gitattributes` contains the image-file patterns (already added).
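Step 3 can also be sanity-checked programmatically. A minimal sketch that parses `.gitattributes` text and lists the patterns routed through LFS (it assumes the common `<pattern> filter=lfs ...` line format):

```python
def lfs_tracked_patterns(gitattributes_text: str) -> list:
    """Return the path patterns that .gitattributes routes through Git LFS."""
    patterns = []
    for line in gitattributes_text.splitlines():
        parts = line.split()
        # a pattern followed by attributes; filter=lfs marks LFS-tracked paths
        if parts and "filter=lfs" in parts[1:]:
            patterns.append(parts[0])
    return patterns
```

Run it on the repository's actual file with `lfs_tracked_patterns(open(".gitattributes").read())` before pushing.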
### 4. Add Example Scenes

```bash
# Create a scene
mkdir -p workspace/gradio/examples/scene1
cp your_images/* workspace/gradio/examples/scene1/

# Add the files
git add workspace/gradio/examples/
git add .gitattributes

# Check which files will go through LFS
git lfs ls-files

# Commit
git commit -m "Add example scenes with Git LFS"

# Push
git push origin main
```

### 5. Verify the Upload

Check in the Space that the files uploaded successfully; image files should show up as LFS pointers.

---

## 🔧 Troubleshooting

### Issue 1: Insufficient Git LFS Quota

**Solutions:**
- Use Scheme 2 (persistent storage) or Scheme 4 (runtime download)
- Compress the image files
- Upload only the examples you actually need

### Issue 2: Push Fails

**Check:**
```bash
# Inspect LFS files
git lfs ls-files

# Inspect LFS status
git lfs status

# Push again
git push origin main --force
```

### Issue 3: Files Are Still Rejected

**Likely causes:**
- `.gitattributes` is misconfigured
- The files were not added through LFS

**Fix:**
```bash
# Remove from the index and re-add
git rm --cached workspace/gradio/examples/**/*.png
git add workspace/gradio/examples/
git commit -m "Fix: Add images via Git LFS"
git push
```

---

## 💡 Best Practices

1. **Small examples (<100MB)**: use Git LFS
2. **Medium examples (100MB–1GB)**: use Git LFS or persistent storage
3. **Large examples (>1GB)**: use persistent storage or runtime download
4. **Production**: download at runtime from an external source

---

## 🔗 Related Resources

- [Git LFS documentation](https://git-lfs.github.com/)
- [Hugging Face Spaces documentation](https://huggingface.co/docs/hub/spaces)
- [Hugging Face Datasets](https://huggingface.co/docs/datasets)
XFORMERS_GUIDE.md
DELETED

@@ -1,299 +0,0 @@

# xformers Dependency Notes

## 📋 The Problem

Installing xformers fails during the build:

```
RuntimeError: CUTLASS submodule not found. Did you forget to run `git submodule update --init --recursive` ?
```

## ✅ Good News: xformers Is Not Required!

The code already has a **fallback mechanism** and automatically uses a pure-PyTorch implementation when xformers is absent:

```python
# src/depth_anything_3/model/dinov2/layers/swiglu_ffn.py
try:
    from xformers.ops import SwiGLU
    XFORMERS_AVAILABLE = True
except ImportError:
    SwiGLU = SwiGLUFFN  # use the pure-PyTorch implementation
    XFORMERS_AVAILABLE = False
```

**Performance difference:**
- **With xformers**: slightly faster (~5-10%)
- **Without xformers**: slightly slower, but functionally identical
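The same try/except pattern applies to any optional accelerator dependency. Here is a stdlib-only analogue — a hypothetical `ujson` install falling back to the standard `json` module — showing how the flag and the alias keep the rest of the code backend-agnostic:

```python
# Optional-dependency fallback: prefer the fast library, degrade gracefully.
try:
    import ujson as jsonlib  # optional accelerated backend (may not be installed)
    FAST_JSON = True
except ImportError:
    import json as jsonlib   # stdlib fallback, same behaviour for our purposes
    FAST_JSON = False

def dumps(obj) -> str:
    """Serialize with whichever backend is available."""
    return jsonlib.dumps(obj)
```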
## 🎯 Recommended Configuration

### Current Configuration (Already Set) ✅

**requirements.txt** — xformers is commented out:
```txt
# xformers - install separately if needed
```

This guarantees the build succeeds and the app runs normally.

## 📝 Three Ways to Use It

---

### Option 1: Without xformers (Current Configuration) ⭐ Recommended

**Pros:**
- ✅ Fast build (5-10 minutes)
- ✅ 100% success rate
- ✅ Full functionality
- ✅ No compatibility issues to deal with

**Cons:**
- ⚠️ Slightly lower performance (5-10%)

**Best for:**
- HF Spaces deployment
- Quick testing
- Avoiding compilation issues

---

### Option 2: Prebuilt xformers

If you want the extra performance, use a prebuilt wheel:

**Step 1: Determine the PyTorch and CUDA versions**

```python
import torch
print(f"PyTorch: {torch.__version__}")
print(f"CUDA: {torch.version.cuda}")
```

**Step 2: Pick the matching xformers version**

See: https://github.com/facebookresearch/xformers#installing-xformers

| PyTorch | CUDA | xformers |
|---------|------|----------|
| 2.1.x | 11.8 | 0.0.23 |
| 2.0.x | 11.8 | 0.0.22 |
| 2.0.x | 11.7 | 0.0.20 |

**Step 3: Update requirements.txt**

```txt
# Add after torch and torchvision
torch==2.1.0
torchvision==0.16.0
xformers==0.0.23  # matches PyTorch 2.1 + CUDA 11.8
```

**Or use the official index:**

```txt
torch==2.1.0
torchvision==0.16.0
--extra-index-url https://download.pytorch.org/whl/cu118
xformers==0.0.23
```

---
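The compatibility table can be encoded as a small lookup helper so a setup script can fail fast on unknown pairs. The entries below are only the three rows shown above, not an exhaustive matrix:

```python
# (torch major.minor, CUDA) -> known-good xformers version, per the table above
_XFORMERS_COMPAT = {
    ("2.1", "11.8"): "0.0.23",
    ("2.0", "11.8"): "0.0.22",
    ("2.0", "11.7"): "0.0.20",
}

def matching_xformers(torch_version: str, cuda_version: str):
    """Return the xformers version matching a torch/CUDA pair, or None if unknown."""
    major_minor = ".".join(torch_version.split(".")[:2])
    return _XFORMERS_COMPAT.get((major_minor, cuda_version))
```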
### Option 3: Compile from Source (Not Recommended)

**Only consider this if:**
- You need the newest xformers features
- You have special CUDA version requirements
- You are willing to spend 15-30 minutes on the build

**requirements.txt:**
```txt
# Requires a CUDA environment and git submodules
xformers @ git+https://github.com/facebookresearch/xformers.git
```

**Extra requirements:**

**packages.txt:**
```txt
build-essential
git
ninja-build
```

**Caveats:**
- ⚠️ The build may fail
- ⚠️ The build takes a long time
- ⚠️ A GPU environment is required

---

## 🔧 Concrete Configuration Examples

### Example 1: HF Spaces (Recommended) ✅

**requirements.txt:**
```txt
torch>=2.0.0
torchvision
gradio>=5.0.0
spaces
# xformers not included - uses the PyTorch fallback
```

**Result:**
- Build time: 5-10 minutes
- Success rate: 100%
- Performance: good

### Example 2: With Prebuilt xformers

**requirements.txt:**
```txt
torch==2.1.0
torchvision==0.16.0
xformers==0.0.23
gradio>=5.0.0
spaces
```

**Result:**
- Build time: 8-12 minutes
- Success rate: 95% (depends on version matching)
- Performance: best

### Example 3: Local Development (Most Flexible)

```bash
# Install the base dependencies first
pip install -r requirements.txt

# Optional: install xformers if you need it
pip install xformers==0.0.23

# Or let pip pick a matching version automatically
pip install xformers
```

---
## 📚 FAQ

### Q1: How do I know whether xformers is being used?

**Check in code:**
```python
from depth_anything_3.model.dinov2.layers.swiglu_ffn import XFORMERS_AVAILABLE

print(f"xformers available: {XFORMERS_AVAILABLE}")
```

**Or look at the logs:**
```python
import logging
logging.basicConfig(level=logging.INFO)
# If xformers is unavailable there is no error; the fallback is used silently
```

### Q2: What if the xformers version does not match?

**Error message:**
```
RuntimeError: xformers is not compatible with this PyTorch version
```

**Fixes:**
1. Remove xformers (use the fallback)
2. Or match the PyTorch and xformers versions (see the table above)

### Q3: Is the performance difference large?

**Benchmarks (for reference):**
- Single-image inference: nearly identical (<5% difference)
- Batched inference: 5-10% difference
- Memory usage: comparable

**Conclusion:** for most users the difference is negligible.
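To measure the difference on your own workload, a small `timeit` harness is enough. The two lambdas below are trivial stand-ins, not the real xformers and fallback code paths:

```python
import timeit

def compare(fast_fn, fallback_fn, repeat: int = 5, number: int = 200) -> float:
    """Time two interchangeable implementations and report the speedup ratio.

    Uses min-of-repeats to reduce scheduler noise; >1.0 means fast_fn wins.
    """
    fast = min(timeit.repeat(fast_fn, repeat=repeat, number=number))
    slow = min(timeit.repeat(fallback_fn, repeat=repeat, number=number))
    return slow / fast

# Example with two stand-in workloads of clearly different cost:
ratio = compare(lambda: sum(range(500)), lambda: sum(range(5000)))
```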
### Q4: Why not just include xformers?

**Reasons:**
1. **Complex compatibility** — requires exactly matching PyTorch, CUDA, and Python versions
2. **Unstable builds** — compiling from source frequently fails
3. **Not required** — the code has a fallback
4. **Longer builds** — can add 5-15 minutes

---

## 📊 Performance Comparison

### Inference Speed, Single Image (T4 GPU)

| Configuration | Time | Relative speed |
|---------------|------|----------------|
| PyTorch (no xformers) | 1.00s | 100% |
| xformers 0.0.23 | 0.95s | 105% ⚡ |

**Conclusion:** the speedup is modest and not worth the extra deployment complexity.

### Build Time

| Configuration | First build | Success rate |
|---------------|-------------|--------------|
| No xformers | 5-10 min | ✅ 100% |
| Prebuilt xformers | 8-12 min | ✅ 95% |
| xformers from source | 20-40 min | ⚠️ 60% |

---

## 🎯 Final Recommendations

### For HF Spaces Deployment: ⭐

**Recommended: do not use xformers**

Reasons:
1. Stable, reliable builds
2. Negligible performance difference
3. Better user experience (no build failures blocking usage)

### For Local Development:

**Optional: install prebuilt xformers**

```bash
pip install -r requirements.txt
pip install xformers  # optional
```

### For Production:

**If you need peak performance, use prebuilt xformers**

```txt
torch==2.1.0
xformers==0.0.23
```

---

## 🔗 Related Resources

- [xformers GitHub](https://github.com/facebookresearch/xformers)
- [xformers installation guide](https://github.com/facebookresearch/xformers#installing-xformers)
- [PyTorch version compatibility](https://pytorch.org/get-started/previous-versions/)

---

## ✅ Current Status

Your configuration:
- ✅ **requirements.txt** — xformers commented out (fallback in use)
- ✅ **Code support** — automatic fallback to the PyTorch implementation
- ✅ **Full functionality** — everything works
- ✅ **Stable builds** — 100% success rate

**No further action needed; you can deploy directly!** 🚀
depth_anything_3/app/css_and_html.py
CHANGED

```diff
@@ -390,7 +390,7 @@ def get_header_html(logo_base64=None):
                 <a href="https://depth-anything-3.github.io" target="_blank" class="link-btn">
                     <i class="fas fa-globe" style="margin-right: 8px;"></i> Project Page
                 </a>
-                <a href="https://arxiv.org/abs/
+                <a href="https://arxiv.org/abs/2511.10647" target="_blank" class="link-btn">
                     <i class="fas fa-file-pdf" style="margin-right: 8px;"></i> Paper
                 </a>
                 <a href="https://github.com/ByteDance-Seed/Depth-Anything-3" target="_blank" class="link-btn">
```
fix_spaces_gpu.patch
DELETED

@@ -1,142 +0,0 @@

```diff
--- a/depth_anything_3/app/modules/model_inference.py
+++ b/depth_anything_3/app/modules/model_inference.py
@@ -31,47 +31,67 @@ from depth_anything_3.utils.export.glb import export_to_glb
 from depth_anything_3.utils.export.gs import export_to_gs_video


+# Global cache for model (used in GPU subprocess)
+# This is safe because @spaces.GPU runs in isolated subprocess
+_MODEL_CACHE = None
+
+
 class ModelInference:
     """
     Handles model inference and data processing for Depth Anything 3.
     """

     def __init__(self):
-        """Initialize the model inference handler."""
-        self.model = None
-
-    def initialize_model(self, device: str = "cuda") -> None:
+        """Initialize the model inference handler.
+
+        Note: Do NOT store model in instance variable to avoid
+        state sharing issues with @spaces.GPU decorator.
+        """
+        pass  # No instance variables
+
+    def initialize_model(self, device: str = "cuda"):
         """
         Initialize the DepthAnything3 model.
+
+        Uses global cache to store model safely in GPU subprocess.
+        This avoids CUDA initialization in main process.

         Args:
             device: Device to load the model on
+
+        Returns:
+            Model instance
         """
-        if self.model is None:
+        global _MODEL_CACHE
+
+        if _MODEL_CACHE is None:
             # Get model directory from environment variable or use default
             model_dir = os.environ.get(
                 "DA3_MODEL_DIR", "/dev/shm/da3_models/DA3HF-VITG-METRIC_VITL"
             )
-            self.model = DepthAnything3.from_pretrained(model_dir)
-            self.model = self.model.to(device)
+            print(f"Loading model from {model_dir}...")
+            _MODEL_CACHE = DepthAnything3.from_pretrained(model_dir)
+            _MODEL_CACHE = _MODEL_CACHE.to(device)
+            _MODEL_CACHE.eval()
+            print("Model loaded and moved to GPU")
         else:
-            self.model = self.model.to(device)
-
-        self.model.eval()
+            print("Using cached model")
+            # Ensure model is on correct device
+            _MODEL_CACHE = _MODEL_CACHE.to(device)
+
+        return _MODEL_CACHE

     def run_inference(
         self,
         ...
         # Initialize model if needed
-        self.initialize_model(device)
+        model = self.initialize_model(device)

         ...

         # Run model inference
         print(f"Running inference with method: {actual_method}")
         with torch.no_grad():
-            prediction = self.model.inference(
+            prediction = model.inference(
                 image_paths, export_dir=None, process_res_method=actual_method, infer_gs=infer_gs
             )

@@ -192,6 +212,10 @@ class ModelInference:
         # Process results
         processed_data = self._process_results(target_dir, prediction, image_paths)

+        # CRITICAL: Move all CUDA tensors to CPU before returning
+        # This prevents CUDA initialization in main process during unpickling
+        prediction = self._move_prediction_to_cpu(prediction)
+
         # Clean up
         torch.cuda.empty_cache()

@@ -282,6 +306,45 @@ class ModelInference:

         return processed_data

+    def _move_prediction_to_cpu(self, prediction: Any) -> Any:
+        """
+        Move all CUDA tensors in prediction to CPU for safe pickling.
+
+        This is REQUIRED for HF Spaces with @spaces.GPU decorator to avoid
+        CUDA initialization in the main process during unpickling.
+
+        Args:
+            prediction: Prediction object that may contain CUDA tensors
+
+        Returns:
+            Prediction object with all tensors moved to CPU
+        """
+        # Move gaussians tensors to CPU
+        if hasattr(prediction, 'gaussians') and prediction.gaussians is not None:
+            gaussians = prediction.gaussians
+
+            # Move each tensor attribute to CPU
+            tensor_attrs = ['means', 'scales', 'rotations', 'harmonics', 'opacities']
+            for attr in tensor_attrs:
+                if hasattr(gaussians, attr):
+                    tensor = getattr(gaussians, attr)
+                    if isinstance(tensor, torch.Tensor) and tensor.is_cuda:
+                        setattr(gaussians, attr, tensor.cpu())
+                        print(f"Moved gaussians.{attr} to CPU")
+
+        # Move any tensors in aux dict to CPU
+        if hasattr(prediction, 'aux') and prediction.aux is not None:
+            for key, value in list(prediction.aux.items()):
+                if isinstance(value, torch.Tensor) and value.is_cuda:
+                    prediction.aux[key] = value.cpu()
+                    print(f"Moved aux['{key}'] to CPU")
+                elif isinstance(value, dict):
+                    # Recursively handle nested dicts
+                    for k, v in list(value.items()):
+                        if isinstance(v, torch.Tensor) and v.is_cuda:
+                            value[k] = v.cpu()
+                            print(f"Moved aux['{key}']['{k}'] to CPU")
+
+        return prediction
+
     def cleanup(self) -> None:
         """Clean up GPU memory."""
```
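The tensor-moving logic in the deleted patch is hard-wired to the prediction object's fields; the same idea generalizes to a recursive walk. This sketch is framework-agnostic — it duck-types on a `.cpu()` method instead of importing torch, which is an assumption of the illustration rather than what the patch actually did:

```python
def to_cpu(obj):
    """Recursively return a copy of obj with every '.cpu()'-capable leaf moved.

    Dicts, lists, and tuples are rebuilt; anything exposing a cpu() method
    (e.g. a torch.Tensor) is converted; all other values pass through as-is.
    """
    if hasattr(obj, "cpu") and callable(obj.cpu):
        return obj.cpu()
    if isinstance(obj, dict):
        return {k: to_cpu(v) for k, v in obj.items()}
    if isinstance(obj, (list, tuple)):
        return type(obj)(to_cpu(v) for v in obj)
    return obj
```

Walking the whole structure up front removes the need to enumerate attribute names like `means` or `opacities` by hand, at the cost of converting tensors you might have wanted to leave on the GPU.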