Conversation

Contributor

@nicu-da nicu-da commented Oct 13, 2025

Pull Request Checklist

Cluster Testing

  • If a cluster test is required, comment /cluster_test on this PR to request it, and ping someone with access to the DA-internal system to approve it.
  • If a hard-migration test is required (from the latest release), comment /hdm_test on this PR to request it, and ping someone with access to the DA-internal system to approve it.

PR Guidelines

  • Include any change that might be observable by our partners or affect their deployment in the release notes.
  • Specify fixed issues with Fixes #n, and mention issues worked on using #n
  • Include a screenshot for frontend-related PRs - see README or use your favorite screenshot tool

Merge Guidelines

  • Make the git commit message look sensible when squash-merging on GitHub (most likely: just copy your PR description).

Comment on lines 44 to 50
// Retry because we want to avoid missing rewards.
// We add the retry here instead of using a PeriodicTaskTrigger because
// PeriodicTaskTrigger does not allow defining whether work was performed
// that skips the polling interval. So if we have more rewards than we can
// get in one polling interval,
// it would wait until the next polling interval before trying to collect
// them which risks missing some.
Contributor Author

Moved this to the PollingParallelTaskExecutionTrigger that the reward trigger is implemented on, which has the same behavior: it keeps running the tasks until it returns a noop.
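For context, here is a minimal sketch of that run-until-noop behavior. The trait and names below are simplified stand-ins for illustration only, not the actual PollingParallelTaskExecutionTrigger API.

import scala.annotation.tailrec

// Simplified stand-in: the real PollingParallelTaskExecutionTrigger has a
// different interface. This only illustrates "keep running within one polling
// interval until the task reports no more work", so a backlog of rewards is
// drained without waiting for the next polling interval.
sealed trait TaskOutcome
case object WorkDone extends TaskOutcome
case object Noop extends TaskOutcome

object RunUntilNoop {
  @tailrec
  def drain(runOnce: () => TaskOutcome): Unit =
    runOnce() match {
      case WorkDone => drain(runOnce) // more rewards may be pending, go again immediately
      case Noop     => ()             // nothing left; wait for the next polling interval
    }
}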

@nicu-da nicu-da changed the title [ci] Update fauce and collection triggers to use new scheduling [ci] Update faucet and collection triggers to use new scheduling Oct 13, 2025
Contributor

@moritzkiefer-da moritzkiefer-da left a comment

Nice, thanks! If it's tricky to get the collect rewards trigger right, I suggest just skipping that for now. That at least cuts the transactions in half (sv rewards are mostly irrelevant because the number of svs is so small), which already seems like a significant improvement.

roundDetails =
Long.unbox(issuingRound.payload.round.number) -> issuingRound.payload.opensAt,
tickDuration =
Duration.of(currentAmuletRules.tickDuration.microseconds, ChronoUnit.MICROS),
Contributor

Any reason not to take (targetClosesAt - opensAt) / 2? AFAICT that's more accurate.

Contributor Author

No real reason, just to keep it consistent. Will change.
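For illustration, a hedged sketch of the alternative suggested above: derive the tick length from the round's own open-to-close window (a round stays open for roughly two ticks, per the example discussed further down) instead of reading it from the AmuletRules config. The opensAt and targetClosesAt values are assumed here to be available as java.time.Instant; this is not the actual payload API.

import java.time.{Duration, Instant}

object TickFromRound {
  // Sketch only: assumes opensAt and targetClosesAt are available as Instants
  // on the issuing round. Since a round is open for about two ticks, half the
  // open-to-close window approximates one tick:
  //   tick ~= (targetClosesAt - opensAt) / 2
  def tickDuration(opensAt: Instant, targetClosesAt: Instant): Duration =
    Duration.between(opensAt, targetClosesAt).dividedBy(2)
}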

(_, issuingRounds) <- scanConnection.getOpenAndIssuingMiningRounds()
} yield {
val currentAmuletRules = AmuletConfigSchedule(amuletRules).getConfigAsOf(context.clock.now)
issuingRounds
Contributor

Hm, why does this work? You take the minimum issuing round.

Let's say I have issuing round N open at t0 and closing at t0+2tick, and issuing round N+1 open at t0+tick. Won't you wait until t0+2tick before you switch to issuing round N+1, at which point the first tick for N+1 is already past? I guess the argument is that it doesn't matter as long as you run every 10min, but this seems kind of risky as issuing rounds are not fully aligned on tick boundaries.

The right option seems to be to look at the rewards that you still have to mint and determine the round from those, no?

Contributor Author

Hmm, isn't it that we actually run quite aggressively? Of course we assume that rounds are more or less evenly distributed and overlapping, but you can't have round N closing at T0 and round N+1 closing at T1 with T1 < T0.

But in your example it will run between t0 and t0+1tick. After that, assuming N+1 is open from t0+tick, the trigger eventually runs within the normal schedule, sees that N+1 is open, and just schedules between t0+tick and t0+2tick.

There's no scenario where we wouldn't already have run for a round once the time is past open + tick.

> I guess the argument is that it doesn't matter as long as you run every 10min but this seems kind of risky as issuing rounds are not fully aligned on tick boundaries.

Well, the trigger runs more or less every 30s, so all that matters is the rounds it sees open at that point. It can go more than 10m between actual work if, for one round, it gets scheduled at the opening of the round and, for the next one, it gets scheduled close to the end of the first tick, but even so it's guaranteed to run within the first tick after opening.
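A minimal sketch of the window being described, using a simplified placeholder for the issuing round rather than the actual contract type: the trigger looks at the earliest open issuing round and aims its work at the interval between that round's opening and the end of its first tick.

import java.time.{Duration, Instant}

// Placeholder shape for an open issuing round; not the real contract payload.
final case class OpenIssuingRound(number: Long, opensAt: Instant)

object RewardSchedulingSketch {
  // Earliest open round wins; work is aimed at [opensAt, opensAt + tick].
  // If the current time is already inside or past that window, the trigger
  // just runs on its normal ~30s polling cadence.
  def targetWindow(
      openRounds: Seq[OpenIssuingRound],
      tick: Duration
  ): Option[(Instant, Instant)] =
    openRounds.minByOption(_.number).map { earliest =>
      (earliest.opensAt, earliest.opensAt.plus(tick))
    }
}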

Contributor

@moritzkiefer-da moritzkiefer-da Oct 13, 2025

You're right, but then does this actually delay anything at all? Let's say you finished minting for round N. While round N is still the lowest, you'll just continue triggering every 30s. Once round N is archived, round N+1 is already in the middle (assuming they align), so you then schedule something in N+1's [opensAt, opensAt + tick], which is already the case, so you trigger again every 30s? That seems worse than the current state, because you might start merging coin very aggressively, which both causes a fair bit of traffic and has high traffic costs for the submitting node.

Contributor Author

@nicu-da nicu-da Oct 13, 2025

I'm probably missing something, sorry for being so slow.

We have 3 rounds:

R1 - opens at T0, closes at T2
R2 - opens at T1, closes at T3
R3 - opens at T2, closes at T4

We try to run as follows:

For R1: between T0 and T1
For R2: between T1 and T2 (at which point R1 is still open)
For R3: between T2 and T3 (at which point R2 is still open and R1 is archived)

At which point would it be problematic to run? (I'm having trouble understanding how this is worse than running more often, as we do now.)
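To make that timeline concrete, a small self-contained sketch with made-up times and a hypothetical 10-minute tick; the round names and instants are purely illustrative.

import java.time.{Duration, Instant}

object RoundTimelineExample extends App {
  // Illustrative numbers only: a 10-minute tick, each round opens one tick
  // after the previous one and stays open for two ticks, mirroring R1/R2/R3.
  val tick = Duration.ofMinutes(10)
  val t0   = Instant.parse("2025-10-13T00:00:00Z")

  val rounds = Seq(
    ("R1", t0),                           // opens at T0, closes at T2
    ("R2", t0.plus(tick)),                // opens at T1, closes at T3
    ("R3", t0.plus(tick.multipliedBy(2))) // opens at T2, closes at T4
  )

  // The trigger aims to do the work for each round within the first tick
  // after it opens, i.e. in [opensAt, opensAt + tick).
  rounds.foreach { case (name, opensAt) =>
    println(s"$name: run between $opensAt and ${opensAt.plus(tick)}")
  }
}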

Contributor Author

Discussed f2f. The issue is the use of the earliest issuing round, which is easily fixable, plus checking the rewards we hold to make sure past rounds were collected in scenarios where the synchronizer had issues and the rounds don't really overlap over a long period.
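As a rough sketch of the fix agreed on above, assuming a placeholder RewardCoupon type: determine the round to work on from the rewards that still need collecting rather than from the earliest open issuing round, so rounds left over from synchronizer downtime are not skipped.

// Placeholder type only; the real reward coupon contracts look different.
final case class RewardCoupon(round: Long)

object RoundFromRewards {
  // Pick the earliest round for which we still hold uncollected rewards.
  def roundToCollect(uncollected: Seq[RewardCoupon]): Option[Long] =
    uncollected.map(_.round).minOption
}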

@nicu-da nicu-da force-pushed the nicuda/rewards/move_all_triggers branch from 32d83b9 to 1fc67f8 Compare October 13, 2025 16:20
@nicu-da nicu-da changed the title [ci] Update faucet and collection triggers to use new scheduling [ci] Update faucet triggers to use new scheduling Oct 13, 2025
Base automatically changed from nicuda/rewards/hybrid to main October 14, 2025 07:05
@nicu-da nicu-da force-pushed the nicuda/rewards/move_all_triggers branch from d7fee28 to 24e91b0 Compare October 14, 2025 07:09
Signed-off-by: Nicu Reut <nicu.reut@digitalasset.com>
@nicu-da nicu-da force-pushed the nicuda/rewards/move_all_triggers branch from 24e91b0 to f98ba20 Compare October 14, 2025 07:35
@nicu-da nicu-da enabled auto-merge (squash) October 14, 2025 07:36
@nicu-da nicu-da merged commit 7dc0877 into main Oct 14, 2025
57 checks passed
@nicu-da nicu-da deleted the nicuda/rewards/move_all_triggers branch October 14, 2025 08:48