Re-work how fine grained targets are processed #5507

Open
@ilevkivskyi

Description


Currently, fine-grained targets are processed per updated module. This can lead to files being processed multiple times (and is also a bit harder to reason about, though this may be subjective). I propose reorganising processing so that targets are handled in topologically sorted order. The algorithm would be:

  1. Process all edited files, calculate all fired triggers, chain them to find all invalid targets, and check for blocking errors.
  2. Group targets per module, order modules by SCC, then within an SCC by the same heuristics we use to order modules in coarse-grained incremental mode. Within a module, targets are ordered by line number; this is unchanged.
  3. Process the targets of one module from the queue, calculate updated deps and fired triggers, update the invalid-targets queue (maintaining the sort order), and check for blockers.
  4. Repeat step 3 until no modules are left in the queue for this SCC.
  5. Flush error messages.
  6. Repeat steps 3-5 until no SCCs are left.
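The ordering in steps 2-6 can be sketched roughly as follows. This is a hypothetical simulation, not mypy's actual data structures: `deps`, `invalid`, `process_order`, and the intra-SCC tie-break (plain sorting) are all made up for illustration, and the real heuristics would differ.

```python
def strongly_connected_components(graph):
    """Tarjan's algorithm; emits SCCs with dependencies first
    (reverse topological order of the condensation)."""
    index, lowlink = {}, {}
    stack, on_stack = [], set()
    sccs = []
    counter = [0]

    def visit(v):
        index[v] = lowlink[v] = counter[0]
        counter[0] += 1
        stack.append(v)
        on_stack.add(v)
        for w in graph.get(v, ()):
            if w not in index:
                visit(w)
                lowlink[v] = min(lowlink[v], lowlink[w])
            elif w in on_stack:
                lowlink[v] = min(lowlink[v], index[w])
        if lowlink[v] == index[v]:
            # v is the root of an SCC; pop its members off the stack.
            scc = []
            while True:
                w = stack.pop()
                on_stack.discard(w)
                scc.append(w)
                if w == v:
                    break
            sccs.append(frozenset(scc))

    for v in graph:
        if v not in index:
            visit(v)
    return sccs

def process_order(deps, invalid):
    """Return the order in which invalid targets would be processed:
    SCC by SCC (dependencies first), module by module within an SCC,
    line number within a module, flushing errors after each SCC."""
    order = []
    for scc in strongly_connected_components(deps):
        for module in sorted(scc):  # stand-in for mypy's intra-SCC heuristics
            for _line, target in sorted(invalid.get(module, [])):
                order.append(target)
        order.append("<flush errors>")  # step 5: flush per SCC
    return order

# Toy example: modules a and b import each other (one SCC); c imports a.
deps = {"a": {"b"}, "b": {"a"}, "c": {"a"}}
# Invalid targets per module, as (line, target) pairs.
invalid = {
    "c": [(30, "c.h"), (10, "c.g")],
    "a": [(5, "a.f")],
    "b": [(12, "b.g")],
}
```

With these inputs, the `{a, b}` SCC is processed (and its errors flushed) before any target in `c`, and `c.g` precedes `c.h` because of its lower line number.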

This way it is much less likely that we will reprocess the same module twice. This will probably give an especially significant performance gain for cold runs, where many modules are updated w.r.t. the remote cache. Also, IMO this algorithm is easier to reason about (and more similar to what happens in coarse-grained mode).

This idea appeared some time ago, but was postponed. Filing an issue to not forget about this.
