Unable to train Wan2.1 (network module problem?)
I'm just smart enough to use ComfyUI for video generation, and I have very basic coding skills (Java), but this Python stuff is beyond me, so please treat me like a noob. I launched this GUI and set it up to the best of my ability, pointing it at everything it asks for. I worked through a few errors, but now I'm completely stuck. It seems to be saying my model file doesn't have any UNet modules (whatever that means), and I've tried several different files just to rule out a file problem. I'm pretty sure I have the right one now, since it was pulled from the link in the readme. But whatever I do, I can't get it to detect any modules in the DiT.
I also can't seem to change the network to lora_wan, because the GUI changes it back again. I figure I can't fix this from the GUI, but again, I don't know my way around Python files or the directory structure. Can someone please look at my screenshots and error log and tell me what I should do? It would be really nice if there were some "default settings" verified to work on Wan, because I'm spending hours fighting with AIs trying to decipher what's wrong and getting nowhere fast. Thank you.
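For what it's worth, in the training command below I can see the GUI passing --network_module networks.lora while stuffing network_model=networks.lora_wan into --network_args. From my (very possibly wrong) reading of the musubi-tuner readme, the Wan LoRA module is supposed to be selected by --network_module itself, i.e. the command should contain something like:

    --network_module networks.lora_wan

instead of what the GUI currently emits:

    --network_args ... network_model=networks.lora_wan --network_module networks.lora

but I can't find where to change that by hand, so please correct me if I've misread the docs.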
Preparing training process
--- Cache enabled. Starting caching steps... ---
--- Preparing caching_latents... ---
--- Starting caching_latents ---
Script: C:/Users/User/Desktop/musubi_portable/musubi-tuner/wan_cache_latents.py
Working Directory: C:\Users\User\Desktop\musubi_portable
Running command: C:\Users\User\Desktop\musubi_portable\python\python.exe C:/Users/User/Desktop/musubi_portable/musubi-tuner/wan_cache_latents.py --clip C:/Users/User/Desktop/ComfyUI_Windows_portable/ComfyUI/models/clip_vision/models_clip_open-clip-xlm-roberta-large-vit-huge-14.pth --num_workers 1 --vae C:/Users/User/Desktop/ComfyUI_Windows_portable/ComfyUI/models/vae/Wan2_1_VAE_bf16.safetensors --dataset_config C:/Users/User/Desktop/musubi_portable/dataset_config.toml --batch_size 1
[caching_latents STDERR] INFO:__main__:Load dataset config from C:/Users/User/Desktop/musubi_portable/dataset_config.toml
[caching_latents STDERR] INFO:dataset.image_video_dataset:glob images in C:/Users/User/Desktop/musubi_portable/dataset/images
[caching_latents STDERR] INFO:dataset.image_video_dataset:found 1 images
[caching_latents STDERR] INFO:dataset.config_utils:[Dataset 0]
[caching_latents STDERR] is_image_dataset: True
[caching_latents STDERR] resolution: (474, 466)
[caching_latents STDERR] batch_size: 1
[caching_latents STDERR] num_repeats: 1
[caching_latents STDERR] caption_extension: ".txt"
[caching_latents STDERR] enable_bucket: True
[caching_latents STDERR] bucket_no_upscale: False
[caching_latents STDERR] cache_directory: "C:/Users/User/Desktop/musubi_portable/dataset/cache"
[caching_latents STDERR] debug_dataset: False
[caching_latents STDERR] image_directory: "C:/Users/User/Desktop/musubi_portable/dataset/images"
[caching_latents STDERR] image_jsonl_file: "None"
[caching_latents STDERR]
[caching_latents STDERR]
[caching_latents STDERR] INFO:__main__:Loading VAE model from C:/Users/User/Desktop/ComfyUI_Windows_portable/ComfyUI/models/vae/Wan2_1_VAE_bf16.safetensors
[caching_latents STDERR] INFO:root:loading C:/Users/User/Desktop/ComfyUI_Windows_portable/ComfyUI/models/vae/Wan2_1_VAE_bf16.safetensors
[caching_latents STDERR] INFO:root:loading C:/Users/User/Desktop/ComfyUI_Windows_portable/ComfyUI/models/clip_vision/models_clip_open-clip-xlm-roberta-large-vit-huge-14.pth
[caching_latents STDERR] INFO:root:weights loaded from C:/Users/User/Desktop/ComfyUI_Windows_portable/ComfyUI/models/clip_vision/models_clip_open-clip-xlm-roberta-large-vit-huge-14.pth:
[caching_latents STDERR] INFO:cache_latents:Encoding dataset [0]
[caching_latents STDERR]
[caching_latents STDERR] 0it [00:00, ?it/s]
[caching_latents STDERR] 1it [00:01, 1.40s/it]
[caching_latents STDERR] 1it [00:01, 1.40s/it]
--- STDOUT reader thread for caching_latents finished. ---
--- STDERR reader thread for caching_latents finished. ---
--- caching_latents process finished successfully (Exit Code: 0). ---
--- Preparing caching_text... ---
--- Starting caching_text ---
Script: C:/Users/User/Desktop/musubi_portable/musubi-tuner/wan_cache_text_encoder_outputs.py
Working Directory: C:\Users\User\Desktop\musubi_portable
Running command: C:\Users\User\Desktop\musubi_portable\python\python.exe C:/Users/User/Desktop/musubi_portable/musubi-tuner/wan_cache_text_encoder_outputs.py --t5 C:/Users/User/Desktop/ComfyUI_Windows_portable/ComfyUI/models/clip/umt5-xxl-enc-bf16.safetensors --num_workers 1 --fp8_t5 --dataset_config C:/Users/User/Desktop/musubi_portable/dataset_config.toml --batch_size 1
[caching_text STDERR] INFO:__main__:Load dataset config from C:/Users/User/Desktop/musubi_portable/dataset_config.toml
[caching_text STDERR] INFO:dataset.image_video_dataset:glob images in C:/Users/User/Desktop/musubi_portable/dataset/images
[caching_text STDERR] INFO:dataset.image_video_dataset:found 1 images
[caching_text STDERR] INFO:dataset.config_utils:[Dataset 0]
[caching_text STDERR] is_image_dataset: True
[caching_text STDERR] resolution: (474, 466)
[caching_text STDERR] batch_size: 1
[caching_text STDERR] num_repeats: 1
[caching_text STDERR] caption_extension: ".txt"
[caching_text STDERR] enable_bucket: True
[caching_text STDERR] bucket_no_upscale: False
[caching_text STDERR] cache_directory: "C:/Users/User/Desktop/musubi_portable/dataset/cache"
[caching_text STDERR] debug_dataset: False
[caching_text STDERR] image_directory: "C:/Users/User/Desktop/musubi_portable/dataset/images"
[caching_text STDERR] image_jsonl_file: "None"
[caching_text STDERR]
[caching_text STDERR]
[caching_text STDERR] INFO:__main__:Loading T5: C:/Users/User/Desktop/ComfyUI_Windows_portable/ComfyUI/models/clip/umt5-xxl-enc-bf16.safetensors
[caching_text STDERR] INFO:wan.modules.t5:loading weights from C:/Users/User/Desktop/ComfyUI_Windows_portable/ComfyUI/models/clip/umt5-xxl-enc-bf16.safetensors
[caching_text STDERR] INFO:wan.modules.t5:moving model to cuda and casting to torch.float8_e4m3fn
[caching_text STDERR] INFO:wan.modules.t5:preparing model for fp8
[caching_text STDERR] INFO:__main__:Encoding with T5
[caching_text STDERR] INFO:cache_text_encoder_outputs:Encoding dataset [0]
[caching_text STDERR]
[caching_text STDERR] 0it [00:00, ?it/s]
[caching_text STDERR] 1it [00:01, 1.31s/it]
[caching_text STDERR] 1it [00:01, 1.31s/it]
--- STDOUT reader thread for caching_text finished. ---
--- STDERR reader thread for caching_text finished. ---
--- caching_text process finished successfully (Exit Code: 0). ---
--- Preparing training... ---
--- Starting training task ---
--- Starting training ---
Script: C:/Users/User/Desktop/musubi_portable/musubi-tuner/wan_train_network.py
Working Directory: C:\Users\User\Desktop\musubi_portable
Running command: C:\Users\User\Desktop\musubi_portable\python\python.exe C:/Users/User/Desktop/musubi_portable/musubi-tuner/wan_train_network.py --network_alpha 4 --logging_dir C:/Users/User/Desktop/musubi_portable/logs --xformers --optimizer_type adamw8bit --dataset_config C:/Users/User/Desktop/musubi_portable/dataset_config.toml --img_in_txt_in_offloading --lr_warmup_steps 0 --save_every_n_epochs 10 --fp8_t5 --timestep_sampling shift --gradient_checkpointing --persistent_data_loader_workers --discrete_flow_shift 3 --vae C:/Users/User/Desktop/ComfyUI_Windows_portable/ComfyUI/models/vae/Wan2_1_VAE_bf16.safetensors --seed 1234 --network_dim 32 --learning_rate 2e-05 --blocks_to_swap 16 --mixed_precision bf16 --max_data_loader_n_workers 1 --output_dir C:/Users/User/Desktop/musubi_portable/output --dit C:/Users/User/Desktop/ComfyUI_Windows_portable/ComfyUI/models/unet/wan2.1_t2v_14B_bf16.safetensors --save_state --clip C:/Users/User/Desktop/ComfyUI_Windows_portable/ComfyUI/models/clip_vision/models_clip_open-clip-xlm-roberta-large-vit-huge-14.pth --max_train_epochs 10 --log_with tensorboard --lr_decay_steps 0 --output_name My_Best_Lora_v1 --weighting_scheme none --lr_scheduler constant --t5 C:/Users/User/Desktop/ComfyUI_Windows_portable/ComfyUI/models/clip/umt5-xxl-enc-bf16.safetensors --task t2v-14B --network_args loraplus_lr_ratio=4.0 conv_dim=4 conv_alpha=1 target_modules=proj.to_attn,proj_out,proj_in,to_q,to_k,to_v,attn network_model=networks.lora_wan --network_module networks.lora
[training STDERR] INFO:wan.modules.model:Detected DiT dtype: torch.bfloat16
[training STDERR] INFO:hv_train_network:Load dataset config from C:\Users\User\Desktop\musubi_portable\dataset_config.toml
[training STDERR] INFO:dataset.image_video_dataset:glob images in C:/Users/User/Desktop/musubi_portable/dataset/images
[training STDERR] INFO:dataset.image_video_dataset:found 1 images
[training STDERR] INFO:dataset.config_utils:[Dataset 0]
[training STDERR] is_image_dataset: True
[training STDERR] resolution: (474, 466)
[training STDERR] batch_size: 1
[training STDERR] num_repeats: 1
[training STDERR] caption_extension: ".txt"
[training STDERR] enable_bucket: True
[training STDERR] bucket_no_upscale: False
[training STDERR] cache_directory: "C:/Users/User/Desktop/musubi_portable/dataset/cache"
[training STDERR] debug_dataset: False
[training STDERR] image_directory: "C:/Users/User/Desktop/musubi_portable/dataset/images"
[training STDERR] image_jsonl_file: "None"
[training STDERR]
[training STDERR]
[training STDERR] INFO:dataset.image_video_dataset:bucket: (464, 464), count: 1
[training STDERR] INFO:dataset.image_video_dataset:total batches: 1
[training STDERR] INFO:hv_train_network:preparing accelerator
[training STDERR] INFO:hv_train_network:DiT precision: torch.bfloat16, weight precision: torch.bfloat16
[training STDERR] INFO:hv_train_network:Loading DiT model from C:/Users/User/Desktop/ComfyUI_Windows_portable/ComfyUI/models/unet/wan2.1_t2v_14B_bf16.safetensors
[training STDERR] INFO:wan.modules.model:Creating WanModel
[training STDERR] INFO:wan.modules.model:Loading DiT model from C:/Users/User/Desktop/ComfyUI_Windows_portable/ComfyUI/models/unet/wan2.1_t2v_14B_bf16.safetensors, device=cpu, dtype=torch.bfloat16
[training STDERR] INFO:wan.modules.model:Loaded DiT model from C:/Users/User/Desktop/ComfyUI_Windows_portable/ComfyUI/models/unet/wan2.1_t2v_14B_bf16.safetensors, info=
[training STDERR] INFO:hv_train_network:enable swap 16 blocks to CPU from device: cuda
[training STDERR] INFO:networks.lora:create LoRA network. base dim (rank): 32, alpha: 4.0
[training STDERR] INFO:networks.lora:neuron dropout: p=None, rank dropout: p=None, module dropout: p=None
[training STDERR] INFO:networks.lora:create LoRA for U-Net/DiT: 0 modules.
[training STDERR] INFO:networks.lora:LoRA+ UNet LR Ratio: 4.0
[training STDERR] INFO:networks.lora:enable LoRA for U-Net: 0 modules
[training STDERR] INFO:hv_train_network:use 8-bit AdamW optimizer | {}
[training STDERR] Traceback (most recent call last):
[training STDERR] File "C:\Users\User\Desktop\musubi_portable\musubi-tuner\wan_train_network.py", line 442, in <module>
[training STDOUT] Trying to import sageattention
[training STDOUT] Failed to import sageattention
[training STDOUT] accelerator device: cuda
[training STDOUT] WanModel: Block swap enabled. Swapping 16 blocks out of 40 blocks. Supports backward: True
[training STDOUT] import network module: networks.lora
[training STDOUT] WanModel: Gradient checkpointing enabled.
[training STDOUT] prepare optimizer, data loader etc.
[training STDERR] trainer.train(args)
[training STDERR] File "C:\Users\User\Desktop\musubi_portable\musubi-tuner\hv_train_network.py", line 1558, in train
[training STDERR] optimizer_name, optimizer_args, optimizer, optimizer_train_fn, optimizer_eval_fn = self.get_optimizer(
[training STDERR] File "C:\Users\User\Desktop\musubi_portable\musubi-tuner\hv_train_network.py", line 449, in get_optimizer
[training STDERR] optimizer = optimizer_class(trainable_params, lr=lr, **optimizer_kwargs)
[training STDERR] File "C:\Users\User\Desktop\musubi_portable\python\lib\site-packages\bitsandbytes\optim\adamw.py", line 114, in __init__
[training STDERR] super().__init__(
[training STDERR] File "C:\Users\User\Desktop\musubi_portable\python\lib\site-packages\bitsandbytes\optim\optimizer.py", line 426, in __init__
[training STDERR] super().__init__(params, defaults, optim_bits, is_paged)
[training STDERR] File "C:\Users\User\Desktop\musubi_portable\python\lib\site-packages\bitsandbytes\optim\optimizer.py", line 125, in __init__
[training STDERR] super().__init__(params, defaults)
[training STDERR] File "C:\Users\User\Desktop\musubi_portable\python\lib\site-packages\torch\optim\optimizer.py", line 383, in __init__
[training STDERR] raise ValueError("optimizer got an empty parameter list")
[training STDERR] ValueError: optimizer got an empty parameter list
--- STDERR reader thread for training finished. ---
--- STDOUT reader thread for training finished. ---
--- training process finished with errors or non-zero exit code (1). ---
--- Training sequence finished with errors.
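If it matters, the final ValueError looks to me like PyTorch simply refusing to build an optimizer with zero trainable parameters, which would line up with the "create LoRA for U-Net/DiT: 0 modules" lines above it. And in case it helps, here is my dataset_config.toml, reconstructed from the logged values (the section and key names follow the dataset config examples in the musubi-tuner repo, so treat the exact spelling as my best guess):

    [general]
    resolution = [474, 466]
    caption_extension = ".txt"
    batch_size = 1
    enable_bucket = true
    bucket_no_upscale = false

    [[datasets]]
    image_directory = "C:/Users/User/Desktop/musubi_portable/dataset/images"
    cache_directory = "C:/Users/User/Desktop/musubi_portable/dataset/cache"
    num_repeats = 1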