Description
nunchaku-1.2.0+torch2.9-cp311-cp311-win_amd64.whl
pytorch version: 2.9.1+cu130
Python version: 3.11.9
ComfyUI version: 0.8.2
ComfyUI frontend version: 1.36.13
got prompt
Using pytorch attention in VAE
Using pytorch attention in VAE
VAE load device: cuda:0, offload device: cpu, dtype: torch.bfloat16
[FlowMatch Scheduler] Auto-detected device: CUDA
CLIP/text encoder model load device: cuda:0, offload device: cpu, current: cpu, dtype: torch.float16
Requested to load ZImageTEModel_
loaded completely; 27960.80 MB usable, 7672.25 MB loaded, full load: True
Z-Image Batch Encode Output:
Batch size: 8
Condition shape: torch.Size([8, 77, 2560])
Pooled shape: torch.Size([8, 2560])
unet_dtype: torch.bfloat16, manual_cast_dtype: None, svdq_linear_dtype: torch.bfloat16
model weight dtype torch.bfloat16, manual cast: None
model_type FLOW
Requested to load Lumina2
loaded completely; 23819.71 MB usable, 4007.28 MB loaded, full load: True
0%|          | 0/1 [00:00<?, ?it/s]
Assertion failed: rotary_emb.shape[0] * rotary_emb.shape[1] == M, file C:\Users\muyangl\actions-runner_work\nunchaku\nunchaku\src\kernels\zgemm\gemm_w4a4_launch_impl.cuh, line 353
Press any key to continue . . . (original console output: 请按任意键继续. . .)
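For context, the failed assertion checks that the rotary embedding tensor covers exactly `M` rows of the activation fed to the quantized GEMM. A plausible reading, given the batch-8 conditioning shown above (`torch.Size([8, 77, 2560])`), is that the rotary embeddings were built for a different batch/sequence layout than the activations. The sketch below only restates that invariant in Python to make the mismatch visible; `check_rotary_shape` is a hypothetical helper, not part of nunchaku's API, and the exact kernel shapes are assumptions.

```python
import torch

def check_rotary_shape(rotary_emb: torch.Tensor, hidden: torch.Tensor) -> None:
    """Hypothetical restatement of the precondition asserted in
    gemm_w4a4_launch_impl.cuh: the rotary embedding must cover exactly
    M rows, where M = batch * seq_len of the activation tensor."""
    M = hidden.shape[0] * hidden.shape[1]                    # batch * seq_len
    covered = rotary_emb.shape[0] * rotary_emb.shape[1]
    assert covered == M, f"rotary_emb covers {covered} rows, expected {M}"

# Shapes agree when the rotary embedding matches the activation batch:
check_rotary_shape(torch.zeros(8, 77, 64), torch.zeros(8, 77, 2560))

# If the rotary embedding were built for batch 1 while the activation
# carries the batch of 8 conditions from the log, the products differ
# and the kernel-side assertion would fire:
try:
    check_rotary_shape(torch.zeros(1, 77, 64), torch.zeros(8, 77, 2560))
except AssertionError as e:
    print(e)  # rotary_emb covers 77 rows, expected 616
```

If this reading is right, a workaround while the kernel is fixed would be encoding with batch size 1, but that is a guess pending maintainer confirmation.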