Device: Windows 11, RTX 4090 (24 GB VRAM), 96 GB RAM
Video material: resolution below 960, 16 fps, 5 s clips

Training 2D motion effects with the same material and comparable settings, the LoRA trained with AI Toolkit works fine, but the one trained with Musubi Tuner does not. I need help from experts to get musubi-tuner LoRA training working and match the results I get from AI Toolkit. All of my settings are included below.
Video dataset:
https://huggingface.co/datasets/wzgrx/2D-animation-effects

LoRA trained with AI Toolkit:
https://civitai.com/models/1920897/wan22-2d-animation-effects-2d

Test results:
https://github.com/user-attachments/assets/37332aa7-6286-49ef-9241-157bb82b7580

Settings used with musubi-tuner (same videos):
```toml
[general]
caption_extension = ".txt"
batch_size = 1
enable_bucket = false

[[datasets]]
image_directory = "E:/AI/musubi-tuner/train/image"
cache_directory = "E:/AI/musubi-tuner/train/cache_image"
resolution = [1024, 1024]
num_repeats = 1

[[datasets]]
video_directory = "E:/AI/musubi-tuner/train/video"
cache_directory = "E:/AI/musubi-tuner/train/cache_video"
frame_extraction = "uniform"
source_fps = 16.0
target_frames = [57]
max_frames = 57
enable_bucket = true
bucket_no_upscale = false
resolution = [384, 384]
```
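One thing worth double-checking before caching: with `frame_extraction = "uniform"` and `target_frames = [57]`, every clip needs at least 57 frames (5 s at 16 fps gives 80, so that should hold), and 57 has the 4n+1 form (57 = 4 × 14 + 1) that Wan's 4x temporal VAE compression expects. A minimal sanity-check sketch, assuming OpenCV and the paths from the config above:

```python
# Verify each training clip satisfies the dataset config above.
from pathlib import Path
import cv2  # pip install opencv-python

VIDEO_DIR = Path(r"E:\AI\musubi-tuner\train\video")  # video_directory from 2.toml
TARGET = 57  # target_frames / max_frames from 2.toml; 57 = 4 * 14 + 1

for path in sorted(VIDEO_DIR.glob("*.mp4")):  # adjust the glob to your formats
    cap = cv2.VideoCapture(str(path))
    frames = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
    fps = cap.get(cv2.CAP_PROP_FPS)
    cap.release()
    status = "OK" if frames >= TARGET else "TOO SHORT"
    print(f"{path.name}: {frames} frames @ {fps:.1f} fps -> {status}")
```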
Cache latents:

```powershell
python src/musubi_tuner/wan_cache_latents.py `
  --vae "E:\AI\musubi-tuner\wan\wan_2.1_vae.safetensors" `
  --batch_size 2 `
  --i2v `
  --dataset_config "E:\AI\musubi-tuner\train\2.toml" `
  --vae_cache_cpu
```
Cache text encoder outputs:

```powershell
python src/musubi_tuner/wan_cache_text_encoder_outputs.py `
  --t5 "E:\AI\musubi-tuner\wan\models_t5_umt5-xxl-enc-bf16.pth" `
  --batch_size 4 `
  --dataset_config "E:\AI\musubi-tuner\train\2.toml" `
  --fp8_t5
```
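Both caching steps should leave .safetensors files in the cache directories; a quick check before the long training run (this only counts files, since the exact cache file naming is internal to musubi-tuner):

```python
# Confirm the latent / text-encoder caches were actually written.
from pathlib import Path

for cache_dir in (r"E:\AI\musubi-tuner\train\cache_image",
                  r"E:\AI\musubi-tuner\train\cache_video"):
    n = len(list(Path(cache_dir).glob("*.safetensors")))
    print(f"{cache_dir}: {n} cache files")
```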
Train the high-noise model:

```powershell
accelerate launch --num_cpu_threads_per_process 1 src\musubi_tuner\wan_train_network.py `
  --task i2v-A14B `
  --dit "E:\AI\musubi-tuner\wan\wan2.2_i2v_high_noise_14B_fp16.safetensors" `
  --vae "E:\AI\musubi-tuner\wan\wan_2.1_vae.safetensors" `
  --t5 "E:\AI\musubi-tuner\wan\models_t5_umt5-xxl-enc-bf16.pth" `
  --dataset_config "E:\AI\musubi-tuner\train\2.toml" `
  --mixed_precision fp16 `
  --flash_attn --split_attn `
  --fp8_base --fp8_scaled `
  --gradient_checkpointing `
  --network_module networks.lora_wan --network_dim 16 --network_alpha 16 `
  --optimizer_type adamw8bit --learning_rate 2e-4 `
  --max_data_loader_n_workers 8 --persistent_data_loader_workers `
  --timestep_sampling shift --discrete_flow_shift 5.0 `
  --preserve_distribution_shape `
  --min_timestep 900 --max_timestep 1000 `
  --max_train_epochs 20 --save_every_n_epochs 2 --seed 42 `
  --blocks_to_swap 16 `
  --offload_inactive_dit `
  --output_dir "E:\AI\musubi-tuner\out" --output_name "my_i2v_lora_high" `
  --log_with tensorboard --logging_dir "E:\AI\musubi-tuner\train\log"
```
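For context on the timestep split: Wan2.2 i2v-A14B is a two-expert model, and the two commands divide the 0-1000 range at 900 (high-noise expert above, low-noise expert below). With `--timestep_sampling shift` and `--discrete_flow_shift 5.0`, uniform samples are remapped by the common flow-matching shift t' = s·t / (1 + (s-1)·t), which pushes mass toward the noisy end before `--min_timestep`/`--max_timestep` restrict the range. A rough sketch of that formula (my approximation, not musubi-tuner's exact sampler):

```python
# Illustrate how discrete_flow_shift = 5.0 skews timestep sampling toward
# high noise. Uses the common flow-matching shift formula as an approximation.
import random

S = 5.0          # discrete_flow_shift
N = 100_000
in_high = 0
for _ in range(N):
    t = random.random()                 # uniform sample in [0, 1)
    t = S * t / (1.0 + (S - 1.0) * t)   # shift toward t = 1 (high noise)
    if t * 1000 >= 900:
        in_high += 1
print(f"~{100 * in_high / N:.1f}% of shifted timesteps land in [900, 1000]")
# Expect roughly a third, so splitting training at t = 900 still gives
# the high-noise expert plenty of samples.
```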
Train the low-noise model:

```powershell
accelerate launch --num_cpu_threads_per_process 1 --mixed_precision fp16 .\src\musubi_tuner\wan_train_network.py `
  --task i2v-A14B `
  --dit "E:\AI\musubi-tuner\wan\wan2.2_i2v_low_noise_14B_fp16.safetensors" `
  --vae "E:\AI\musubi-tuner\wan\wan_2.1_vae.safetensors" `
  --t5 "E:\AI\musubi-tuner\wan\models_t5_umt5-xxl-enc-bf16.pth" `
  --dataset_config "E:\AI\musubi-tuner\train\2.toml" `
  --xformers `
  --fp8_base --fp8_scaled `
  --gradient_checkpointing `
  --network_module networks.lora_wan --network_dim 16 --network_alpha 16 `
  --network_args "loraplus_lr_ratio=4" `
  --optimizer_type adamw8bit --learning_rate 2e-5 `
  --max_data_loader_n_workers 8 --persistent_data_loader_workers `
  --timestep_sampling shift --discrete_flow_shift 5.0 `
  --preserve_distribution_shape `
  --min_timestep 0 --max_timestep 900 `
  --max_train_epochs 20 --save_every_n_epochs 2 --seed 42 `
  --blocks_to_swap 18 `
  --offload_inactive_dit `
  --output_dir "E:\AI\musubi-tuner\out" --output_name "my_i2v_lora_low" `
  --log_with tensorboard --logging_dir "E:\AI\musubi-tuner\train\log"
```
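One cheap diagnostic for a LoRA that seems to have "no effect": inspect the saved .safetensors and check whether the weights are still near zero. In sd-scripts-style LoRAs the "up" matrices start at zero, so all-(near-)zero up weights after training would point at the learning rate or timestep restriction rather than at inference. A minimal sketch, assuming the output file name from `--output_name` above and the usual `lora_up`/`lora_down` key convention:

```python
# Check whether a trained LoRA actually learned anything.
# pip install safetensors torch
from safetensors.torch import load_file

state = load_file(r"E:\AI\musubi-tuner\out\my_i2v_lora_high.safetensors")
# Key naming assumed to follow the sd-scripts convention ("lora_up"/"lora_down").
up_norms = [v.float().norm().item() for k, v in state.items() if "lora_up" in k]
print(f"{len(up_norms)} lora_up tensors")
if up_norms:
    print(f"mean norm: {sum(up_norms) / len(up_norms):.6f}  max: {max(up_norms):.6f}")
```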
After a period of training, the resulting LoRA behaves completely differently from the one produced by AI Toolkit: the musubi-tuner LoRA has no effect at all. What should I modify? Thank you very much.