Update README.md
README.md
CHANGED
@@ -9,19 +9,15 @@ Please follow https://github.com/Tencent/HunyuanDiT/blob/main/controlnet/README.
 
 
 
-
+Remember to modify ./hydit/config.py, changing line 50 to:
+parser.add_argument("--control-type", type=str, default='canny', choices=['canny', 'depth', 'pose', 'tile'], help="Controlnet condition type")
+
+Inference example
 You can use the following command line for inference.
 
 a. Using canny ControlNet during inference
 
-python3 sample_controlnet.py --no-enhance --load-key
+python3 sample_controlnet.py --no-enhance --load-key ema --infer-steps 100 --control-type tile --prompt "input your prompt here" --condition-image-path controlnet/asset/input/yourimg.jpg --control-weight 1.0
 b. Using pose ControlNet during inference
 
-python3 sample_controlnet.py --no-enhance --load-key distill --infer-steps 50 --control-type depth --prompt "在茂密的森林中,一只黑白相间的熊猫静静地坐在绿树红花中,周围是山川和海洋。背景是白天的森林,光线充足" --condition-image-path controlnet/asset/input/depth.jpg --control-weight 1.0
-c. Using depth ControlNet during inference
-
-python3 sample_controlnet.py --no-enhance --load-key distill --infer-steps 50 --control-type pose --prompt "一位亚洲女性,身穿绿色上衣,戴着紫色头巾和紫色围巾,站在黑板前。背景是黑板。照片采用近景、平视和居中构图的方式呈现真实摄影风格" --condition-image-path controlnet/asset/input/pose.jpg --control-weight 1.0
-
-you need to add the tile entry to the config.json file and place the checkpoint in ckpts/t2i/models/controlnet
-and rename it to pytorch_model_ema.pt
 
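The config.py change is just one more entry in the argparse choices list. A minimal standalone sketch (not the real hydit/config.py parser, which defines many more flags) shows that once 'tile' is listed, `--control-type tile` passes validation:

```python
import argparse

# Stand-alone sketch of the --control-type flag as modified on line 50 of
# hydit/config.py, with 'tile' added to the allowed choices.
parser = argparse.ArgumentParser()
parser.add_argument("--control-type", type=str, default='canny',
                    choices=['canny', 'depth', 'pose', 'tile'],
                    help="Controlnet condition type")

args = parser.parse_args(["--control-type", "tile"])
print(args.control_type)  # tile
```

Without the 'tile' entry, argparse rejects `--control-type tile` with an "invalid choice" error before any model is loaded, which is why this edit must come first.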