[Bug]: Enhance not working with Intel Arc A-series GPU (Arc a750) #3936

@RafinRono

Description

Checklist

  • The issue has not been resolved by following the troubleshooting guide
  • The issue exists on a clean installation of Fooocus
  • The issue exists in the current version of Fooocus
  • The issue has not been reported before recently
  • The issue has been reported before but has not been fixed yet

What happened?

When I try to Enhance a given image, it simply throws an exception and does not work. The main problem is this AttributeError:
AttributeError: module 'intel_extension_for_pytorch._C' has no attribute 'get_fp32_math_mode'. Did you mean: '_get_fp32_math_mode'?
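
For what it's worth, the failure seems reproducible outside of Fooocus with a minimal call into the IPEX frontend (a sketch, assuming the same venv shown in the console log below; it mirrors the frontend.py:725 call from the traceback):

```python
# Minimal check of the failing call, run from the Fooocus venv.
# This mirrors intel_extension_for_pytorch/frontend.py:725 from the traceback
# and should raise the same AttributeError if the _C binary only exposes
# _get_fp32_math_mode.
from intel_extension_for_pytorch import frontend

print(frontend.get_fp32_math_mode(device="cpu"))
```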

Steps to reproduce the problem

  1. Go to Input image
  2. Select Enhance
  3. Upload or drag an image
  4. Click #1 (the first Enhance step)
  5. Check Enable
  6. Choose Hand, enter prompts
  7. Press Generate

What should have happened?

The given image should have been upscaled, but instead it fails and returns an error.

What browsers do you use to access Fooocus?

Microsoft Edge

Where are you running Fooocus?

Locally

What operating system are you using?

Windows 10

Console logs

[Parameters] Adaptive CFG = 7
[Parameters] CLIP Skip = 2
[Parameters] Sharpness = 2
[Parameters] ControlNet Softness = 0.25
[Parameters] ADM Scale = 1.5 : 0.8 : 0.3
[Parameters] Seed = 7783800987034681910
[Parameters] CFG = 4
[Fooocus] Loading control models ...
[Parameters] Sampler = euler_ancestral - karras
[Parameters] Steps = 60 - 30
[Fooocus] Initializing ...
[Fooocus] Image processing ...
[Parameters] Denoising Strength = 1.0
[Parameters] Initial Latent shape: Image Space (1216, 832)
Preparation time: 0.00 seconds
Using karras scheduler.
[Fooocus] Processing enhance ...
[Fooocus] Preparing enhancement 1/1 ...
[Enhance] Searching for "hand"
Requested to load GroundingDINO
Loading 1 new model
D:\Extra\Fooocus\venv\lib\site-packages\intel_extension_for_pytorch\frontend.py:465: UserWarning: Conv BatchNorm folding failed during the optimize process.
  warnings.warn(
D:\Extra\Fooocus\venv\lib\site-packages\intel_extension_for_pytorch\frontend.py:472: UserWarning: Linear BatchNorm folding failed during the optimize process.
  warnings.warn(
2025-03-30 18:57:53,772 - httpx - INFO - HTTP Request: POST http://127.0.0.1:7865/api/predict "HTTP/1.1 200 OK"
Traceback (most recent call last):
  File "D:\Extra\Fooocus\modules\async_worker.py", line 1471, in worker
    handler(task)
  File "D:\Extra\Fooocus\venv\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "D:\Extra\Fooocus\venv\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "D:\Extra\Fooocus\modules\async_worker.py", line 1372, in handler
    mask, dino_detection_count, sam_detection_count, sam_detection_on_mask_count = generate_mask_from_image(
  File "D:\Extra\Fooocus\extras\inpaint_mask.py", line 71, in generate_mask_from_image
    detections, boxes, logits, phrases = default_groundingdino(
  File "D:\Extra\Fooocus\venv\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "D:\Extra\Fooocus\venv\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "D:\Extra\Fooocus\extras\GroundingDINO\util\inference.py", line 45, in predict_with_caption
    model_management.load_model_gpu(self.model)
  File "D:\Extra\Fooocus\ldm_patched\modules\model_management.py", line 443, in load_model_gpu
    return load_models_gpu([model])
  File "D:\Extra\Fooocus\modules\patch.py", line 447, in patched_load_models_gpu
    y = ldm_patched.modules.model_management.load_models_gpu_origin(*args, **kwargs)
  File "D:\Extra\Fooocus\ldm_patched\modules\model_management.py", line 437, in load_models_gpu
    cur_loaded_model = loaded_model.model_load(lowvram_model_memory)
  File "D:\Extra\Fooocus\ldm_patched\modules\model_management.py", line 325, in model_load
    self.real_model = torch.xpu.optimize(self.real_model.eval(), inplace=True, auto_kernel_selection=True, graph_mode=True)
  File "D:\Extra\Fooocus\venv\lib\site-packages\intel_extension_for_pytorch\xpu\utils.py", line 237, in optimize
    return frontend.optimize(
  File "D:\Extra\Fooocus\venv\lib\site-packages\intel_extension_for_pytorch\frontend.py", line 566, in optimize
    ) = weight_prepack_with_ipex(
  File "D:\Extra\Fooocus\venv\lib\site-packages\intel_extension_for_pytorch\nn\utils\_weight_prepack.py", line 527, in weight_prepack_with_ipex
    opt_model, opt_optmizer, params_attr = convert_rec(
  File "D:\Extra\Fooocus\venv\lib\site-packages\intel_extension_for_pytorch\nn\utils\_weight_prepack.py", line 523, in convert_rec
    setattr(new_m, name, convert_rec(sub_m, optimizer, params_attr)[0])
  File "D:\Extra\Fooocus\venv\lib\site-packages\intel_extension_for_pytorch\nn\utils\_weight_prepack.py", line 523, in convert_rec
    setattr(new_m, name, convert_rec(sub_m, optimizer, params_attr)[0])
  File "D:\Extra\Fooocus\venv\lib\site-packages\intel_extension_for_pytorch\nn\utils\_weight_prepack.py", line 523, in convert_rec
    setattr(new_m, name, convert_rec(sub_m, optimizer, params_attr)[0])
  [Previous line repeated 3 more times]
  File "D:\Extra\Fooocus\venv\lib\site-packages\intel_extension_for_pytorch\nn\utils\_weight_prepack.py", line 521, in convert_rec
    new_m = convert(m, optimizer, params_attr)
  File "D:\Extra\Fooocus\venv\lib\site-packages\intel_extension_for_pytorch\nn\utils\_weight_prepack.py", line 485, in convert
    param_wrapper.prepack(m, is_training)
  File "D:\Extra\Fooocus\venv\lib\site-packages\intel_extension_for_pytorch\nn\utils\_parameter_wrapper.py", line 531, in prepack
    self.linear_prepack(module, is_training)
  File "D:\Extra\Fooocus\venv\lib\site-packages\intel_extension_for_pytorch\nn\utils\_parameter_wrapper.py", line 610, in linear_prepack
    and frontend.get_fp32_math_mode(device="cpu")
  File "D:\Extra\Fooocus\venv\lib\site-packages\intel_extension_for_pytorch\frontend.py", line 725, in get_fp32_math_mode
    return core.get_fp32_math_mode()
AttributeError: module 'intel_extension_for_pytorch._C' has no attribute 'get_fp32_math_mode'. Did you mean: '_get_fp32_math_mode'?
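
For context, the call that enters the failing optimize path is in ldm_patched/modules/model_management.py (line 325 in the traceback). A possible defensive sketch, assuming it is acceptable to keep the model unoptimized when IPEX optimization fails:

```python
# Sketch of a change around model_management.py:325 (per the traceback):
# fall back to the plain eval() model if torch.xpu.optimize() raises,
# e.g. the AttributeError above.
try:
    self.real_model = torch.xpu.optimize(
        self.real_model.eval(), inplace=True,
        auto_kernel_selection=True, graph_mode=True)
except AttributeError as e:
    print(f"torch.xpu.optimize failed, using unoptimized model: {e}")
    self.real_model = self.real_model.eval()
```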

Additional information

I have the Intel oneAPI 2024.1 package installed and a GPU driver dated from February, but the Fooocus version itself is older, so the driver can't be the issue.
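
An unverified workaround that might be worth trying: alias the private binding suggested by the error message before any models are loaded. This assumes `_C._get_fp32_math_mode` has the same signature and behavior as the public name the frontend expects.

```python
# Unverified workaround sketch: make the name frontend.py looks up available
# again by aliasing the private binding suggested by the AttributeError.
from intel_extension_for_pytorch import _C as ipex_core

if not hasattr(ipex_core, "get_fp32_math_mode") and hasattr(ipex_core, "_get_fp32_math_mode"):
    ipex_core.get_fp32_math_mode = ipex_core._get_fp32_math_mode
```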
