update README
Files changed:
- README.md (+7, −3)
- README_CN.md (+7, −2)
README.md
CHANGED

````diff
@@ -254,7 +254,11 @@ torchrun --nproc_per_node=$N_INFERENCE_GPU generate.py \
 ```bash
 export PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True,max_split_size_mb:128
 ```
-
+
+**Tips:** If you have limited CPU memory and encounter OOM during inference, you can try disabling overlapped group offloading by adding the following argument:
+```bash
+--overlap_group_offloading false
+```
 
 
 ### Command Line Arguments
@@ -300,8 +304,8 @@ The following table provides the optimal inference configurations (CFG scale, em
 | 480p I2V CFG Distilled | 1 | None | 5 | 50 |
 | 720p T2V CFG Distilled | 1 | None | 9 | 50 |
 | 720p I2V CFG Distilled | 1 | None | 7 | 50 |
-| 720p T2V CFG Distilled Sparse | 1 | None |
-| 720p I2V CFG Distilled Sparse | 1 | None |
+| 720p T2V CFG Distilled Sparse | 1 | None | 9 | 50 |
+| 720p I2V CFG Distilled Sparse | 1 | None | 7 | 50 |
 | 480→720 SR Step Distilled | 1 | None | 2 | 6 |
 | 720→1080 SR Step Distilled | 1 | None | 2 | 8 |
````
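Read together, the two additions amount to one extra flag on the inference command already visible in the hunk headers. A hedged sketch of the combined invocation follows; the `...` stands for the model/prompt arguments of `generate.py` that the diff view elides, and the speed/memory trade-off noted in the comment is an assumption, not something the diff states:

```shell
# Allocator tuning from the README's memory tips (context lines of the first hunk).
export PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True,max_split_size_mb:128

# Launch command from the hunk header, with the newly documented flag appended.
# Per the tip, pass --overlap_group_offloading false only if you hit CPU-side OOM;
# presumably it lowers peak host memory at some cost in offloading/compute overlap.
torchrun --nproc_per_node=$N_INFERENCE_GPU generate.py \
    ... \
    --overlap_group_offloading false
```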
README_CN.md
CHANGED

(Chinese content translated below; file names and code identifiers kept as-is.)

````diff
@@ -241,6 +241,11 @@ torchrun --nproc_per_node=$N_INFERENCE_GPU generate.py \
 ```bash
 export PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True,max_split_size_mb:128
 ```
+
+**Tips:** If your CPU memory is limited and you encounter OOM errors during inference, you can try disabling overlapped group offloading by adding the following argument:
+```bash
+--overlap_group_offloading false
+```
 
 ### Command Line Arguments
 
@@ -285,8 +290,8 @@ torchrun --nproc_per_node=$N_INFERENCE_GPU generate.py \
 | 480p I2V CFG Distilled | 1 | None | 5 | 50 |
 | 720p T2V CFG Distilled | 1 | None | 9 | 50 |
 | 720p I2V CFG Distilled | 1 | None | 7 | 50 |
-| 720p T2V CFG Distilled Sparse | 1 | None |
-| 720p I2V CFG Distilled Sparse | 1 | None |
+| 720p T2V CFG Distilled Sparse | 1 | None | 9 | 50 |
+| 720p I2V CFG Distilled Sparse | 1 | None | 7 | 50 |
 | 480→720 SR Step Distilled | 1 | None | 2 | 6 |
 | 720→1080 SR Step Distilled | 1 | None | 2 | 8 |
````