VAE: Not Loaded, AttributeError: 'NoneType' object has no attribute 'shape', WARNING SHAPE MISMATCH #3915
PeterDragon50 asked this question in Q&A (unanswered).
I have tried installing straight from GitHub and using several versions from Stability Matrix, but I cannot inpaint with any SDXL checkpoint that is actually made for inpainting. Any SDXL checkpoint that is not made for inpainting works with no issue. I am a bit of a noob, so I am not sure what to try next, and I can't find anyone else with the same issue. System: Ryzen 3600, RTX 2070 Super 8 GB, 32 GB RAM, Windows.
Log:
[Parameters] Adaptive CFG = 7
[Parameters] CLIP Skip = 2
[Parameters] Sharpness = 2
[Parameters] ControlNet Softness = 0.25
[Parameters] ADM Scale = 1.5 : 0.8 : 0.3
[Parameters] Seed = 0
[Parameters] CFG = 3
[Fooocus] Downloading upscale models ...
[Fooocus] Downloading inpainter ...
[Inpaint] Current inpaint model is G:\StabilityMatrix-win-x64\Data\Packages\Fooocus - mashb1t's 1-Up Edition\models\inpaint\inpaint_v26.fooocus.patch
[Fooocus] Loading control models ...
[Parameters] Sampler = dpmpp_2m_sde_gpu - karras
[Parameters] Steps = 60 - 48
[Fooocus] Initializing ...
[Fooocus] Loading models ...
Synthetic Refiner Activated
model_type EPS
UNet ADM Dimension 2816
Using pytorch attention in VAE
Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
Using pytorch attention in VAE
extra {'cond_stage_model.clip_l.logit_scale', 'cond_stage_model.clip_l.text_projection', 'cond_stage_model.clip_g.transformer.text_model.embeddings.position_ids'}
Base model loaded: G:\StabilityMatrix-win-x64\Data\Models\StableDiffusion\realvisxlInpainting_v5lightning.safetensors
VAE loaded: None
Synthetic Refiner Activated
Request to load LoRAs [("G:\StabilityMatrix-win-x64\Data\Packages\Fooocus - mashb1t's 1-Up Edition\models\inpaint\inpaint_v26.fooocus.patch", 1.0)] for model [G:\StabilityMatrix-win-x64\Data\Models\StableDiffusion\realvisxlInpainting_v5lightning.safetensors].
Loaded LoRA [G:\StabilityMatrix-win-x64\Data\Packages\Fooocus - mashb1t's 1-Up Edition\models\inpaint\inpaint_v26.fooocus.patch] for UNet [G:\StabilityMatrix-win-x64\Data\Models\StableDiffusion\realvisxlInpainting_v5lightning.safetensors] with 960 keys at weight 1.0.
Request to load LoRAs [] for model [G:\StabilityMatrix-win-x64\Data\Models\StableDiffusion\realvisxlInpainting_v5lightning.safetensors].
Requested to load SDXLClipModel
Loading 1 new model
[Fooocus Model Management] Moving model(s) has taken 1.02 seconds
[Fooocus] Processing prompts ...
[Fooocus] Preparing Fooocus text #1 ...
[Prompt Expansion] Woman in black shirt, cinematic, highly detailed, sharp focus, candid, elegant, intricate, very inspired, innocent, fine color, deep colors, enhanced light, amazing, creative, pure, wonderful atmosphere, symmetry, aesthetic, great composition, perfect, professional, winning, vivid, beautiful, inspirational, thought, epic, stunning, gorgeous, colossal, cool, awesome
[Fooocus] Encoding positive #1 ...
[Fooocus] Encoding negative #1 ...
[Fooocus] Image processing ...
[Fooocus] VAE Inpaint encoding ...
Requested to load AutoencoderKL
Loading 1 new model
[Fooocus Model Management] Moving model(s) has taken 0.73 seconds
[Fooocus] VAE encoding ...
Final resolution is (1920, 2560), latent is (896, 1152).
[Parameters] Denoising Strength = 1
[Parameters] Initial Latent shape: torch.Size([1, 4, 144, 112])
Preparation time: 51.81 seconds
Using karras scheduler.
[Fooocus] Preparing task 1/1 ...
[Sampler] refiner_swap_method = joint
[Sampler] sigma_min = 0.0291671771556139, sigma_max = 14.614643096923828
Requested to load SDXL
Loading 1 new model
WARNING SHAPE MISMATCH diffusion_model.input_blocks.0.0.weight FOOOCUS WEIGHT NOT MERGED torch.Size([320, 4, 3, 3]) != torch.Size([320, 9, 3, 3])
[Fooocus Model Management] Moving model(s) has taken 2.47 seconds
Traceback (most recent call last):
File "G:\StabilityMatrix-win-x64\Data\Packages\Fooocus - mashb1t's 1-Up Edition\modules\async_worker.py", line 1435, in worker
handler(task)
File "G:\StabilityMatrix-win-x64\Data\Packages\Fooocus - mashb1t's 1-Up Edition\venv\lib\site-packages\torch\utils_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "G:\StabilityMatrix-win-x64\Data\Packages\Fooocus - mashb1t's 1-Up Edition\venv\lib\site-packages\torch\utils_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "G:\StabilityMatrix-win-x64\Data\Packages\Fooocus - mashb1t's 1-Up Edition\modules\async_worker.py", line 1277, in handler
imgs, img_paths, current_progress = process_task(all_steps, async_task, callback, controlnet_canny_path,
File "G:\StabilityMatrix-win-x64\Data\Packages\Fooocus - mashb1t's 1-Up Edition\modules\async_worker.py", line 292, in process_task
imgs = pipeline.process_diffusion(
File "G:\StabilityMatrix-win-x64\Data\Packages\Fooocus - mashb1t's 1-Up Edition\venv\lib\site-packages\torch\utils_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "G:\StabilityMatrix-win-x64\Data\Packages\Fooocus - mashb1t's 1-Up Edition\venv\lib\site-packages\torch\utils_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "G:\StabilityMatrix-win-x64\Data\Packages\Fooocus - mashb1t's 1-Up Edition\modules\default_pipeline.py", line 379, in process_diffusion
sampled_latent = core.ksampler(
File "G:\StabilityMatrix-win-x64\Data\Packages\Fooocus - mashb1t's 1-Up Edition\venv\lib\site-packages\torch\utils_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "G:\StabilityMatrix-win-x64\Data\Packages\Fooocus - mashb1t's 1-Up Edition\venv\lib\site-packages\torch\utils_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "G:\StabilityMatrix-win-x64\Data\Packages\Fooocus - mashb1t's 1-Up Edition\modules\core.py", line 310, in ksampler
samples = ldm_patched.modules.sample.sample(model,
File "G:\StabilityMatrix-win-x64\Data\Packages\Fooocus - mashb1t's 1-Up Edition\ldm_patched\modules\sample.py", line 100, in sample
samples = sampler.sample(noise, positive_copy, negative_copy, cfg=cfg, latent_image=latent_image, start_step=start_step, last_step=last_step, force_full_denoise=force_full_denoise, denoise_mask=noise_mask, sigmas=sigmas, callback=callback, disable_pbar=disable_pbar, seed=seed)
File "G:\StabilityMatrix-win-x64\Data\Packages\Fooocus - mashb1t's 1-Up Edition\ldm_patched\modules\samplers.py", line 712, in sample
return sample(self.model, noise, positive, negative, cfg, self.device, sampler, sigmas, self.model_options, latent_image=latent_image, denoise_mask=denoise_mask, callback=callback, disable_pbar=disable_pbar, seed=seed)
File "G:\StabilityMatrix-win-x64\Data\Packages\Fooocus - mashb1t's 1-Up Edition\venv\lib\site-packages\torch\utils_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "G:\StabilityMatrix-win-x64\Data\Packages\Fooocus - mashb1t's 1-Up Edition\venv\lib\site-packages\torch\utils_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "G:\StabilityMatrix-win-x64\Data\Packages\Fooocus - mashb1t's 1-Up Edition\modules\sample_hijack.py", line 107, in sample_hacked
positive = encode_model_conds(model.extra_conds, positive, noise, device, "positive", latent_image=latent_image, denoise_mask=denoise_mask)
File "G:\StabilityMatrix-win-x64\Data\Packages\Fooocus - mashb1t's 1-Up Edition\ldm_patched\modules\samplers.py", line 498, in encode_model_conds
out = model_function(**params)
File "G:\StabilityMatrix-win-x64\Data\Packages\Fooocus - mashb1t's 1-Up Edition\ldm_patched\modules\model_base.py", line 117, in extra_conds
if len(denoise_mask.shape) == len(noise.shape):
AttributeError: 'NoneType' object has no attribute 'shape'
Total time: 54.30 seconds
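A plausible reading of the warning (not confirmed by a maintainer here): `realvisxlInpainting_v5lightning.safetensors` is a dedicated inpainting checkpoint, so its first UNet conv (`diffusion_model.input_blocks.0.0.weight`) already takes 9 input channels (4 latent + 4 masked-image latent + 1 mask), while the Fooocus `inpaint_v26.fooocus.patch` was made for a standard 4-channel SDXL UNet. The patch weights therefore cannot be merged, and sampling later hits a code path where `denoise_mask` is `None`. The sketch below, with a hypothetical helper name, shows the channel check implied by the two shapes in the warning:

```python
def classify_unet_input(weight_shape):
    """Classify an SDXL UNet by the shape of
    diffusion_model.input_blocks.0.0.weight.

    Standard SDXL:   (320, 4, 3, 3) -- 4 latent channels
    Inpainting SDXL: (320, 9, 3, 3) -- 4 latent + 4 masked-image latent
                                       + 1 binary mask channel
    """
    in_channels = weight_shape[1]  # dim 1 of a conv weight is input channels
    if in_channels == 4:
        return "standard"   # Fooocus inpaint patch is shaped for this
    if in_channels == 9:
        return "inpaint"    # patch shapes won't match -> WEIGHT NOT MERGED
    return "unknown"

# The two shapes from the log line "WARNING SHAPE MISMATCH ...":
print(classify_unet_input((320, 4, 3, 3)))  # expected by the patch
print(classify_unet_input((320, 9, 3, 3)))  # found in the loaded checkpoint
```

If this reading is right, the practical workaround is to select a regular (non-inpainting) SDXL checkpoint in Fooocus and let its own inpaint patch handle masking, rather than loading a 9-channel inpainting model.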