
Poller scaling #874

Merged
Sushisource merged 12 commits into master from poller-scaling
Apr 2, 2025

Conversation

@Sushisource
Member

I intend to add a few unit tests too, and some simpler integ tests once server is actually released with this. This PR will have to sit and wait for server to release with the changes temporalio/temporal#7300

What was changed

Read and handle poller scaling decisions from server
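As a rough sketch of what "handle" means here (all names are illustrative, not the actual SDK API; the real decision message is defined by temporalio/temporal#7300): the server suggests a signed delta on each poll response, and the SDK applies it clamped to the configured poller bounds.

```rust
/// Illustrative stand-in for the server's poller scaling decision: a signed
/// delta suggesting how many pollers to add or remove.
struct PollerScalingDecision {
    poll_request_delta_suggestion: i64,
}

/// Apply the suggested delta, clamped to the configured [min, max] poller range.
fn apply_decision(current: usize, d: &PollerScalingDecision, min: usize, max: usize) -> usize {
    (current as i64 + d.poll_request_delta_suggestion).clamp(min as i64, max as i64) as usize
}
```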

Why?

Part of the worker management effort to simplify configuration of workers for users.

Checklist

  1. Closes

  2. How was this tested:
    Big manual tests + integ tests (to come)

  3. Any docs updates needed?
    Doc updates will come as part of adding to all SDKs

@Sushisource Sushisource requested a review from a team as a code owner February 8, 2025 01:51
@Sushisource Sushisource force-pushed the poller-scaling branch 3 times, most recently from 7b4e70c to 14a75ad on February 13, 2025 00:50
@Sushisource Sushisource force-pushed the poller-scaling branch 2 times, most recently from d70715f to 7be4134 on February 27, 2025 02:50
Contributor

@cretz left a comment


LGTM but didn't get into the details. I think an English description of what's happening could help. I tried to guess at one in the comments. But since the default behavior is unchanged for users, no problem.

I do think we should consider holding off (or at least holding off lang side) until a server is released that can even take advantage of this. We will want some end to end automated test somewhere that just proves poller auto scaling works (often we can just do this in lang in a smoke-test kinda integration test just to confirm).

Comment on lines +462 to +488
Some(tokio::task::spawn(async move {
    let mut interval = tokio::time::interval(Duration::from_millis(100));
    loop {
        tokio::select! {
            _ = interval.tick() => {}
            _ = shutdown.cancelled() => { break; }
        }
        let ingested = rhc.ingested_this_period.swap(0, Ordering::Relaxed);
        let ingested_last = rhc.ingested_last_period.swap(ingested, Ordering::Relaxed);
        rhc.scale_up_allowed
            .store(ingested_last >= ingested, Ordering::Relaxed);
    }
}))
Contributor

@cretz Feb 27, 2025


So to confirm, the algorithm is:

Server returns amount of pollers to scale up (or down if negative) on each poll response, and SDK respects that decision so long as it's bounded by min/max, and in the case of scale up, there were at least as many polls accepted last 100ms period as this 100ms?

So if I accepted a poll 50ms ago that told me to scale up but my last poll response was 250ms ago, I would not scale up? (because ingested_last is 0 and ingested is 1)
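The window bookkeeping in question can be checked with a small synchronous sketch (the tokio timer is replaced by explicit `tick()` calls; field names mirror the snippet above, and the comparison is the `ingested_last >= ingested` one as posted, before any fix):

```rust
use std::sync::atomic::{AtomicBool, AtomicUsize, Ordering};

/// Mirrors the counters in the spawned task: each tick moves the current
/// period's count into the "last period" slot and records whether scale-up
/// is allowed under the `ingested_last >= ingested` rule from the snippet.
struct IngestWindow {
    ingested_this_period: AtomicUsize,
    ingested_last_period: AtomicUsize,
    scale_up_allowed: AtomicBool,
}

impl IngestWindow {
    fn new() -> Self {
        Self {
            ingested_this_period: AtomicUsize::new(0),
            ingested_last_period: AtomicUsize::new(0),
            scale_up_allowed: AtomicBool::new(false),
        }
    }

    /// Called once per accepted poll response.
    fn record_poll(&self) {
        self.ingested_this_period.fetch_add(1, Ordering::Relaxed);
    }

    /// Called once per 100ms period in the real task.
    fn tick(&self) {
        let ingested = self.ingested_this_period.swap(0, Ordering::Relaxed);
        let ingested_last = self.ingested_last_period.swap(ingested, Ordering::Relaxed);
        self.scale_up_allowed
            .store(ingested_last >= ingested, Ordering::Relaxed);
    }

    fn scale_up_allowed(&self) -> bool {
        self.scale_up_allowed.load(Ordering::Relaxed)
    }
}
```

Running the scenario above (one poll in the current period, none in the previous) does indeed leave scale-up disallowed, since `0 >= 1` is false.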

Member Author

@Sushisource Feb 27, 2025


I'm glad you looked at this again because it sort of obviously makes no sense. I was wondering why my testing wasn't showing good restriction of overshooting like I know it did at one point, and I had tried like a bazillion different methods but I knew this worked and it was simple, so I went back to it -- but results were inconsistent.

Somehow this comparison got flipped at some point. I've changed it back and the results are more consistent again.
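Presumably (the corrected diff isn't reproduced in this thread) the restored direction is the reverse of the snippet's: allow scale-up only while this period's ingestion keeps pace with the previous one, so scaling stops once the backlog starts clearing.

```rust
// Guess at the restored comparison, hedged: this is inferred from the
// discussion, not copied from the actual fix. Scale-up is permitted only
// while ingestion is holding steady or rising.
fn scale_up_allowed(ingested: usize, ingested_last: usize) -> bool {
    ingested >= ingested_last
}
```

Under this direction, cretz's scenario (a fresh poll now, none last period) would permit scale-up rather than deny it.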

Member Author


Honestly though I'm tempted to get rid of this entirely. My tests show it can still be "defeated" semi often by rapid scale ups, and ingestion is indeed going up so there's not really any reason to say no, and then you end up with a bunch of polls sitting there once the backlog clears.

Smoothing out the calculation so that the shorter timeout gets set more reliably might have more value.

Contributor


Agreed. I am always a bit wary when I see hardcoded time expectations like 100ms unless that's a good number arrived at by testing or something (which maybe it is). Maybe a more naive "do not make poller count changes more frequently than X interval" can help, but I don't understand the details and haven't run the tests.
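The naive interval gate suggested here could look something like the following (a hypothetical sketch, not code from this PR; names and the interval value are placeholders):

```rust
use std::time::{Duration, Instant};

/// Hypothetical cooldown gate: ignore scaling decisions that arrive
/// within `min_interval` of the last applied change.
struct ScaleCooldown {
    min_interval: Duration,
    last_change: Option<Instant>,
}

impl ScaleCooldown {
    fn new(min_interval: Duration) -> Self {
        Self { min_interval, last_change: None }
    }

    /// Returns true (and starts a new cooldown) if a poller-count change
    /// is currently permitted; false if still within the cooldown window.
    fn try_change(&mut self, now: Instant) -> bool {
        match self.last_change {
            Some(t) if now.duration_since(t) < self.min_interval => false,
            _ => {
                self.last_change = Some(now);
                true
            }
        }
    }
}
```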

@Sushisource
Member Author

Sushisource commented Feb 27, 2025

> LGTM but didn't get into the details. I think an English description of what's happening could help. I tried to guess at one in the comments. But since the default behavior is unchanged for users, no problem.
>
> I do think we should consider holding off (or at least holding off lang side) until a server is released that can even take advantage of this. We will want some end to end automated test somewhere that just proves poller auto scaling works (often we can just do this in lang in a smoke-test kinda integration test just to confirm).

Yeah, I'm happy to wait for a server release so that we have an integ test.

I'll add a high-level prose description of the approach somewhere too.

@Sushisource Sushisource enabled auto-merge (squash) April 2, 2025 17:06
@Sushisource Sushisource merged commit 93471ac into master Apr 2, 2025
16 checks passed
@Sushisource Sushisource deleted the poller-scaling branch April 2, 2025 17:06