Conversation
Claude finished @Helw150's task in 2m 1s

Review of PR #4688: Fix band percentage math in fleet overview
Verdict: Looks good ✅ The bug diagnosis is correct and the fix is clean. Here's my analysis:

The bug
The old code counted bands per-slice: for each ready slice, it collected the unique bands across all VMs in that slice, then incremented each of those bands' counters by 1. A slice running tasks in both bands was therefore counted toward both, so band percentages could sum past 100%.

The fix
The new code counts at VM granularity instead: each VM is assigned to its dominant band (the band with the most running tasks on that worker). Since each VM is counted at most once, band shares now partition VM capacity cleanly.

Key changes:
Edge cases I checked
Minor nit (non-blocking)
The inline type on line 586 is getting long:

new Map<string, { total: number; inUse: number; vmTotal: number; uptimes: number[]; regions: Map<string, number>; bands: Map<string, number>; capacityByRegion: Map<string, { statuses: string[]; failures: number }> }>()

Consider extracting it to a named interface.
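The dominant-band assignment the review describes could be sketched roughly like this. All names and shapes here (`Vm`, `tasksByBand`, `dominantBand`, `bandShares`) are hypothetical illustrations, not the PR's actual code:

```typescript
// Hypothetical VM shape: running-task counts per band on this worker,
// e.g. { batch: 3, interactive: 1 }.
interface Vm {
  id: string;
  tasksByBand: Record<string, number>;
}

// Assign a VM to its dominant band: the band with the most running tasks.
// Returns undefined for an idle VM with no tasks.
function dominantBand(vm: Vm): string | undefined {
  let best: string | undefined;
  let bestCount = -1;
  for (const [band, count] of Object.entries(vm.tasksByBand)) {
    if (count > bestCount) {
      best = band;
      bestCount = count;
    }
  }
  return best;
}

// Band share = (VMs whose dominant band is `band`) / (total VMs).
// Because each VM is counted at most once, the shares sum to at most 100%.
function bandShares(vms: Vm[]): Map<string, number> {
  const counts = new Map<string, number>();
  for (const vm of vms) {
    const band = dominantBand(vm);
    if (band !== undefined) counts.set(band, (counts.get(band) ?? 0) + 1);
  }
  const shares = new Map<string, number>();
  for (const [band, n] of counts) shares.set(band, (100 * n) / vms.length);
  return shares;
}
```

Counting each VM exactly once is what makes the shares a true partition: idle VMs simply contribute to no band, so the total can fall below 100% but never exceed it.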
39e0a66 to 9b494c7
Bands were counted per-slice as 'any VM in this band', so slices running multiple bands were double-counted and percentages could sum past 100% (e.g. 100% batch + 38% interactive). Assign each slice to a single dominant band based on task counts across its VMs, matching the slice-level 'in use' percentage so shares partition in-use slices.
9b494c7 to dc8240a
Bands were counted per-slice as 'any VM in this band', so slices running multiple bands were double-counted and percentages could sum past 100% (e.g. 100% batch + 38% interactive). Count VMs in their dominant band and divide by total VMs instead, so band shares partition capacity.
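For contrast, the pre-fix per-slice counting described in the commit messages can be sketched as follows (hypothetical shapes, not the repo's actual code). With one slice running both bands, the over-count is visible directly: that slice increments both band counters, so shares sum past 100%:

```typescript
// Hypothetical shapes for illustration only.
interface SliceVm {
  band: string;
}
interface Slice {
  vms: SliceVm[];
}

// Old (buggy) behavior: every band present anywhere in a slice increments
// that band's counter by 1. A slice running two bands is counted toward
// both, so band shares no longer partition the slices.
function perSliceBandShares(slices: Slice[]): Map<string, number> {
  const counts = new Map<string, number>();
  for (const slice of slices) {
    const bandsInSlice = new Set(slice.vms.map((vm) => vm.band));
    for (const band of bandsInSlice) {
      counts.set(band, (counts.get(band) ?? 0) + 1);
    }
  }
  const shares = new Map<string, number>();
  for (const [band, n] of counts) shares.set(band, (100 * n) / slices.length);
  return shares;
}
```

With two slices where one runs both `batch` and `interactive`, this yields 100% batch + 50% interactive, the same shape of over-count as the 100% + 38% example in the commit message.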