Partitioned asset backfill hangs with QueuedRunCoordinator #33103
zaneselvans asked this question in Q&A (Unanswered · 0 replies)
Unresponsive QueuedRunCoordinator with partitioned job
I'm using the `QueuedRunCoordinator` and have set `max_concurrent_runs` to 4. When I start Dagster locally (`dagster dev`) and tell it to run a backfill that encompasses all or even a significant subset of the partitions, the runs appear to start -- in the UI they are listed as started, and in the console, I see a run for each of them get kicked off.

Default Run Coordinator?
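For reference, this is how I believe the coordinator is meant to be configured. A minimal `dagster.yaml` sketch, with the `max_concurrent_runs` value of 4 described above (key names per the Dagster instance-configuration docs; adjust to your setup):

```yaml
# dagster.yaml -- instance configuration sketch
run_coordinator:
  module: dagster.core.run_coordinator
  class: QueuedRunCoordinator
  config:
    max_concurrent_runs: 4   # at most 4 runs should be in progress at once
```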
I have also tried using the `DefaultRunCoordinator` with `max_concurrent_runs` of 4, but it did not seem to limit the number of concurrent runs. Instead, all of the partitions show up as active simultaneously. However, despite all the partitions apparently starting at once, it still did not actually start doing a bunch of data processing.

Dagster UI Hanging?
Another thing that has started happening frequently, especially when working on the partitioned assets, is that the web UI hangs and stops updating run statuses, even though I can see in the console that the backfill is proceeding.
Zombie Processes
I'm also experiencing lots of instances of zombie processes that linger after I quit out of Dagster at the console. They are unwilling to die from a polite `kill` or `kill -15`, and I usually end up having to `kill -9` them.
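For what it's worth, the `kill -15` vs. `kill -9` behavior is reproducible with any process that masks SIGTERM. A minimal POSIX-only Python sketch (this mimics the symptom with a throwaway child process, it is not Dagster code):

```python
import subprocess
import sys
import time

# Spawn a child that ignores SIGTERM, standing in for a stuck worker
# process that shrugs off a polite kill.
child = subprocess.Popen([
    sys.executable, "-c",
    "import signal, time;"
    "signal.signal(signal.SIGTERM, signal.SIG_IGN);"
    "time.sleep(60)",
])
time.sleep(1.0)  # give the child time to install its SIGTERM handler

child.terminate()                 # polite: equivalent to kill -15
time.sleep(0.5)
status_after_term = child.poll()  # None -> the child is still alive

child.kill()                      # forceful: kill -9, cannot be ignored
child.wait(timeout=5)

print("after SIGTERM:", status_after_term)  # after SIGTERM: None
print("after SIGKILL:", child.returncode)   # after SIGKILL: -9
```

SIGKILL can never be caught or ignored, which is why `kill -9` is the only thing that works once a process has masked or blocked SIGTERM.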