This should be fixed with #441.
[INFO] Running command:
D:\musubi-train\musubi-tuner\venv\Scripts\python.exe -m accelerate.commands.launch --num_cpu_threads_per_process 1 --mixed_precision fp16 wan_train_network.py --task t2v-A14B --dataset_config C:\Users\31236\AppData\Local\Temp\tmpoe_fyger.toml --sdpa --mixed_precision fp16 --optimizer_type AdamW8bit --logging_dir ./logs --log_with tensorboard --max_data_loader_n_workers 1 --lr_scheduler cosine --lr_warmup_steps 0 --network_module networks.lora_wan --learning_rate 1e-4 --network_dim 32 --max_train_epochs 10 --save_every_n_epochs 1 --seed 42 --output_dir D:\musubi-train\musubi-tuner\output --output_name test --persistent_data_loader_workers --gradient_checkpointing --gradient_accumulation_steps 1 --fp8_base --dit D:\Comfyui_Video\ComfyUI_windows_portable\ComfyUI\models\diffusion_models\wan2.2\T2V\wan2.2_t2v_high_noise_14B_fp16.safetensors --min_timestep 875 --preserve_distribution_shape --blocks_to_swap 22
INFO:musubi_tuner.wan.modules.model:Detected DiT dtype: torch.float16
INFO:musubi_tuner.wan_train_network:Converted timestep_boundary to 0 to 1 range: 0.875
INFO:musubi_tuner.hv_train_network:Load dataset config from C:\Users\31236\AppData\Local\Temp\tmpoe_fyger.toml
INFO:musubi_tuner.dataset.image_video_dataset:glob videos in D:\musubi-tuner\train\test
INFO:musubi_tuner.dataset.image_video_dataset:found 2 videos
INFO:musubi_tuner.dataset.config_utils:[Dataset 0]
is_image_dataset: False
resolution: (416, 240)
batch_size: 1
num_repeats: 1
caption_extension: ".txt"
enable_bucket: True
bucket_no_upscale: False
cache_directory: "D:\musubi-tuner\train\test\cache"
debug_dataset: False
video_directory: "D:\musubi-tuner\train\test"
video_jsonl_file: "None"
control_directory: "None"
target_frames: (1, 29, 57, 85)
frame_extraction: uniform
frame_stride: 1
frame_sample: 3
max_frames: 129
source_fps: None
fp_latent_window_size: 9
INFO:musubi_tuner.dataset.image_video_dataset:bucket: (240, 416, 1), count: 3
INFO:musubi_tuner.dataset.image_video_dataset:bucket: (240, 416, 29), count: 3
INFO:musubi_tuner.dataset.image_video_dataset:bucket: (240, 416, 57), count: 3
INFO:musubi_tuner.dataset.image_video_dataset:bucket: (416, 240, 1), count: 3
INFO:musubi_tuner.dataset.image_video_dataset:bucket: (416, 240, 29), count: 3
INFO:musubi_tuner.dataset.image_video_dataset:bucket: (416, 240, 57), count: 3
INFO:musubi_tuner.dataset.image_video_dataset:total batches: 18
INFO:musubi_tuner.hv_train_network:preparing accelerator
INFO:musubi_tuner.hv_train_network:DiT precision: torch.float16, weight precision: torch.float8_e4m3fn
INFO:musubi_tuner.hv_train_network:Loading DiT model from D:\Comfyui_Video\ComfyUI_windows_portable\ComfyUI\models\diffusion_models\wan2.2\T2V\wan2.2_t2v_high_noise_14B_fp16.safetensors
INFO:musubi_tuner.wan.modules.model:Creating WanModel. I2V: False, FLF2V: False, V2.2: True, device: cuda, loading_device: cpu, fp8_scaled: False
INFO:musubi_tuner.wan.modules.model:Loading DiT model from D:\Comfyui_Video\ComfyUI_windows_portable\ComfyUI\models\diffusion_models\wan2.2\T2V\wan2.2_t2v_high_noise_14B_fp16.safetensors, device=cpu
INFO:musubi_tuner.utils.lora_utils:Loading model files: ['D:\Comfyui_Video\ComfyUI_windows_portable\ComfyUI\models\diffusion_models\wan2.2\T2V\wan2.2_t2v_high_noise_14B_fp16.safetensors']
INFO:musubi_tuner.utils.lora_utils:Loading state dict without FP8 optimization. Hook enabled: False
Trying to import sageattention
Successfully imported sageattention
accelerator device: cuda
Loading wan2.2_t2v_high_noise_14B_fp16.safetensors: 0%| | 0/1095 [00:00<?, ?it/s]
Training locally on Windows with 64 GB RAM and 16 GB VRAM.
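For anyone trying to reproduce this setup: the temporary dataset config behind the log above would look roughly like the following. This is only a sketch reconstructed from the values printed in the log, assuming musubi-tuner's usual `[general]`/`[[datasets]]` TOML layout; the actual generated `tmpoe_fyger.toml` isn't shown here.

```toml
# Sketch of the dataset config, reconstructed from the logged values above.
# Forward slashes are used in paths to avoid TOML backslash escaping on Windows.

[general]
resolution = [416, 240]
caption_extension = ".txt"
batch_size = 1
enable_bucket = true
bucket_no_upscale = false

[[datasets]]
video_directory = "D:/musubi-tuner/train/test"
cache_directory = "D:/musubi-tuner/train/test/cache"
target_frames = [1, 29, 57, 85]
frame_extraction = "uniform"
frame_sample = 3
max_frames = 129
num_repeats = 1
```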