
[Issue]: Control: larger batches cause frontend and backend to go out of sync on Firefox #3087

Open
@lbeltrame

Description

Issue Description

I thought this was an issue with the new Modern UI, but it is instead an issue in Control.

When running the txt2img Control workflow with larger batches (a batch of 2 doesn't seem to trigger it reliably, 5 does), the frontend and the backend go out of sync at the end, causing a "Finishing" message to be displayed forever; further generations are not possible. This occurs at least with Firefox, the only browser I can test with. A page reload is required to allow further generations.

This was hardly noticeable in the old UI, because the Generate buttons were separate, but it is very evident in the Modern UI, where there is a single Generate button for everything, so all workflows are impacted. In retrospect, I remember this occurring in the past, but I never figured out how to trigger it reliably until the Modern UI was introduced.

I haven't been able to pinpoint the actual cause. After being asked by Vlad, I checked the difference in time between the end of generation and the end of processing (paths removed):

11:58:33-719016 INFO     LoRA apply: ['great_lighting', 'xl_more_art-full_v1', 'Difference_AnimeFace', 'noribsXL_001_4'] patch=0.00 load=2.84                                                                                                               
11:58:33-752048 INFO     Base: class=StableDiffusionXLPipeline                                                                                                                                                                                              
Progress  2.00it/s █████████████████████████████████ 100% 20/20 00:09 00:00 Base
11:58:44-314487 INFO     Upscale: upscaler="ESRGAN 4x Ultrasharp" resize=0x0 upscale=1664x2432                                                                                                                                                              
11:58:44-735397 INFO     High memory utilization: GPU=63% RAM=33% {'ram': {'used': 10.18, 'total': 31.27}, 'gpu': {'used': 10.14, 'total': 15.98}, 'retries': 0, 'oom': 0}                                                                                  
Upscaling ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 100% 0:00:00 0:00:03
11:58:48-760529 INFO     HiRes: class=StableDiffusionXLImg2ImgPipeline sampler="DPM++ 2M"                                                                                                                                                                   
Progress  2.40s/it ████████████████████████████████ 100% 15/15 00:35 00:00 Hires
11:59:26-333107 INFO     High memory utilization: GPU=67% RAM=33% {'ram': {'used': 10.18, 'total': 31.27}, 'gpu': {'used': 10.71, 'total': 15.98}, 'retries': 0, 'oom': 0}                                                                                  
11:59:28-883667 INFO     Saving: image="XXX.webp" type=WEBP resolution=1664x2432 size=0                                                             
11:59:34-295319 INFO     High memory utilization: GPU=61% RAM=33% {'ram': {'used': 10.18, 'total': 31.27}, 'gpu': {'used': 9.81, 'total': 15.98}, 'retries': 0, 'oom': 0}                                                                                   
11:59:34-640459 INFO     Processed: images=5 time=319.14 its=0.31 memory={'ram': {'used': 10.18, 'total': 31.27}, 'gpu': {'used': 7.73, 'total': 15.98}, 'retries': 0, 'oom': 0}                                                                            
11:59:34-678473 INFO     Saving: image="XXX-grid.jpg" type=JPEG resolution=8320x2432 size=0  

So there's roughly 50 seconds (probably less) between the end of the generation and the actual end of the run. There is no other information in the server log, nor in the JS console.

This does not occur with the regular txt2img workflow (but that works in a completely different way, so it's somewhat expected).
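To help narrow down whether this is purely a stuck frontend state, one thing that could be checked while the UI shows "Finishing" is whether the backend itself still reports an active job. Below is a minimal sketch, assuming the standard /sdapi/v1/progress endpoint is available and the server listens on the default address; field names may differ slightly between versions:

import time
import requests

BASE_URL = "http://127.0.0.1:7860"  # assumed default listen address; adjust as needed

# Poll the progress endpoint a few times while the UI is stuck on "Finishing".
# If the backend reports progress=0 and no active job, only the frontend state is stuck.
for _ in range(5):
    resp = requests.get(f"{BASE_URL}/sdapi/v1/progress", timeout=10)
    resp.raise_for_status()
    data = resp.json()
    state = data.get("state", {})
    print(
        "progress:", data.get("progress"),
        "job:", state.get("job"),
        "job_count:", state.get("job_count"),
    )
    time.sleep(2)

If the backend already reports an idle state while the page still shows "Finishing", that would confirm the desync is on the frontend side.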

Version Platform Description

11:45:47-084150 INFO Starting SD.Next
11:45:47-087137 INFO Logger: file="/home/lb/Coding/automatic/sdnext.log" level=INFO size=22296660 mode=append
11:45:47-088236 INFO Python 3.11.9 on Linux
11:45:47-158504 INFO Version: app=sd.next updated=2024-04-26 hash=cccbb4b3 branch=dev url=https://github.com/vladmandic/automatic/tree/dev
11:45:47-526947 INFO Updating main repository
11:45:48-141684 INFO Upgraded to version: cccbb4b Fri Apr 26 21:32:34 2024 -0400
11:45:48-154257 INFO Platform: arch=x86_64 cpu=x86_64 system=Linux release=6.8.7-1-default python=3.11.9

Relevant log output

No response

Backend

Diffusers

Branch

Dev

Model

SD-XL

Acknowledgements

  • I have read the above and searched for existing issues
  • I confirm that this is classified correctly and it's not an extension issue

Labels

cannot reproduce: Reported issue cannot be easily reproducible
