[reconciler] Mark moved fibers with placement flag granularly and cache host sibling lookups during commit #33048
Conversation
```js
resetPlacementCommitCache();
commitMutationEffectsOnFiber(finishedWork, root, committedLanes);
resetPlacementCommitCache();
```
I was not sure how to handle the case of the commit potentially failing and not getting a chance to reset the cache, but I also did not want the cache to persist after the commit is done. So I decided to wrap `commitMutationEffectsOnFiber` between the two reset calls just in case. Maybe there is a better place higher in the call stack.
Could be in `flushMutationEffects` since it has a `finally` block.
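For illustration, a rough sketch of that shape (hedged; `resetPlacementCommitCache` is the helper from this PR, and the surrounding function is abbreviated):

```js
// Sketch only: clearing the per-commit placement cache in a finally block,
// so it cannot leak even if the mutation phase throws.
function flushMutationEffects(root, finishedWork, lanes) {
  try {
    commitMutationEffectsOnFiber(finishedWork, root, lanes);
  } finally {
    resetPlacementCommitCache();
  }
}
```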
This is really exciting! The LIS algorithm makes sense to me. Caching the lookups for where to insert is also an obvious win. I'm not sure about the caching approach there — it would be really nice if that was purely local (passed as an argument and kept on the stack) vs a module local that has to be reset. We'd also want to feature flag this for test purposes and to help measure performance impact in practice.
But we should definitely proceed with this in some form. cc @jbrown215, who has also been looking at list reconciliation performance; can you follow up more?
```js
}

// Retrieves or creates the placement commit cache entry for a given fiber.
export function getPlacementCommitCacheEntry(
```
doesn't need to be exported?
Comparing: 5dc00d6...73f2f6d

Critical size changes: includes critical production bundles, as well as any change greater than 2%.

Significant size changes: includes any change greater than 0.2%.
Thank you for your feedback @josephsavona! I considered passing this cache through an argument, but it required a bit of drilling, so I was hesitant about letting it spill into that many call signatures. As far as I can tell, the closest place where it can be instantiated is here:

```js
function recursivelyTraverseMutationEffects(
root: FiberRoot,
parentFiber: Fiber,
lanes: Lanes,
) {
// Deletions effects can be scheduled on any fiber type. They need to happen
// before the children effects have fired.
const deletions = parentFiber.deletions;
if (deletions !== null) {
for (let i = 0; i < deletions.length; i++) {
const childToDelete = deletions[i];
commitDeletionEffects(root, parentFiber, childToDelete);
}
}
if (
parentFiber.subtreeFlags &
(enablePersistedModeClonedFlag ? MutationMask | Cloned : MutationMask)
) {
// <— instantiate commit placement cache here
let child = parentFiber.child;
while (child !== null) {
commitMutationEffectsOnFiber(child, root, lanes);
child = child.sibling;
}
}
}
```

Then it can be passed down through the chain of calls as an argument until it reaches `commitPlacement`.
This way it would have automatic cleanup, as you mentioned, and wouldn't need to use a module-level variable that has to be reset.
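For illustration, a rough sketch of what that threading could look like (the cache type and the extra parameter are hypothetical, not the PR's exact code):

```js
// Sketch only: create the per-commit placement cache on the stack and pass it
// down instead of keeping it in a module-level variable.
type PlacementCommitCache = Map<Fiber, {lastPlacedChild: Fiber, before: Instance | null}>;

function recursivelyTraverseMutationEffects(
  root: FiberRoot,
  parentFiber: Fiber,
  lanes: Lanes,
) {
  // ...deletion effects handled as before...
  if (parentFiber.subtreeFlags & MutationMask) {
    // Lives on the stack for the duration of this subtree's commit.
    const placementCache: PlacementCommitCache = new Map();
    let child = parentFiber.child;
    while (child !== null) {
      commitMutationEffectsOnFiber(child, root, lanes, placementCache);
      child = child.sibling;
    }
  }
}
```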
…t cache. Also refactor placement commit cache to be passed through arguments.
@josephsavona @jbrown215 Added 2 feature flags, as suggested. Should these flags be initialized with a particular default value?

Also, took a stab at refactoring the placement commit cache to be passed through arguments. Please let me know your thoughts!
Overall looks good. I'd love to get review from someone else who's working in the runtime more.
The second set of numbers looked nearly identical to the first? Can you share which things got slower specifically when switching to passing the cache via arguments instead of a module local?
Before exploring something like this we really need to add a benchmark for the common cases. The most common case the reconciliation algorithm is applied to is a large tree where something near the root changes and props drill all the way down. This means that each set of children has to be diffed, but there will be no change. In this case it's common to have even tens of thousands of elements, each of whose child sets is diffed only to find out that there's no change. The current algorithm is hyper-optimized around no-change.

Reordering thousands of items in a flat list is extremely rare, and even when it is applied, it should probably be windowed instead. Even if it's not, it's still not that slow. But diffing a thousand items by prop drilling from the root happens a lot. Similarly, rendering long lists of deep children that render null is extremely uncommon, and we need to make sure that adding caching doesn't slow down the common case for no apparent benefit.
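As an illustration of that common case (hypothetical component names; not from the PR):

```jsx
// Sketch of the "no change" hot path: a prop set near the root updates, every
// child list below is re-diffed by reconcileChildrenArray, and nothing moves.
function Row({item}) {
  return <div>{item.label}</div>;
}

function Table({items}) {
  // Thousands of keyed children, diffed on every root update with no reorder.
  return items.map(item => <Row key={item.id} item={item} />);
}
```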
@sebmarkbage Great points, thank you for taking a look! For reconciliation, the goal is definitely to preserve the common-case performance (re-render with no change) while introducing additional benefits like handling reordering interactions granularly. However, you are right that this needs to be proven against that common case.
When committing placement, having to go deep into a stable sibling just to find out that it renders null is definitely uncommon. I found it interesting, though, that the number of sibling visits in this case (rendering new children) grows non-linearly with the number of children.

Overall, I do agree that the common case should be proven to be the same or faster when considering this. Having a benchmark that measures the common case you are describing might be more valuable than all of the above. Let me take a look into bootstrapping that.
Summary
This PR includes 2 small optimizations:

1. Mark moved fibers with the `Placement` flag granularly, by computing the longest increasing subsequence (LIS) of their old indices and finding out which fibers are not in the LIS.
2. Cache host sibling (`getHostSibling`) lookups during commit.

Results
Here are the js-framework-benchmark results comparing `react-hooks-placement-opt` vs `react-hooks` built locally, against the vanilla JS implementation. The lower score is better.

This table shows significant benchmark score improvements in `swap rows` and `create many rows`. While I understand that these two metrics are edge cases and are specifically designed to stress the implementation, I still wanted to present my findings because the code changes turned out to be relatively minor. My main goal was to get familiar with some of the codebase while doing a fun project; however, if there is interest I am happy to get this over the finish line.

Motivation
I was looking at the `reconcileChildrenArray` function in react-reconciler and found something that made me curious. Suppose we have a list of keyed nodes that gets re-arranged, as in the example below.
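For illustration, assume the old order is `A B C D E F G` and the new order is `A F B C D E G`, i.e. only `F` moved, from the sixth position to the second (the concrete letters are an assumption, chosen to be consistent with the placements described below).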
Ideally, only `F` should be marked as moved (Placement flag). However, the reconciler marks `F`, `B`, `C`, `D`, and `E` with `Placement`; only `A` and `G` remain in place. I found this interesting and dug further, because I had assumed that React would try to derive the least number of DOM operations needed. It seemed to me as if there was a code complexity tradeoff being made, and I was curious whether there was a way to make it granular while maintaining runtime performance.

This behavior comes from the `placeChild` helper.
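A lightly simplified paraphrase of that helper (trimmed here, so treat it as a sketch rather than the exact source):

```js
// Simplified: lastPlacedIndex only ever moves forward, so any existing child
// whose old index falls behind it gets the Placement flag, even when a single
// item moving forward is what caused the jump.
function placeChild(newFiber, lastPlacedIndex, newIndex) {
  newFiber.index = newIndex;
  const current = newFiber.alternate;
  if (current !== null) {
    const oldIndex = current.index;
    if (oldIndex < lastPlacedIndex) {
      // This item moved behind the highest old index seen so far.
      newFiber.flags |= Placement;
      return lastPlacedIndex;
    }
    // This item can stay in place.
    return oldIndex;
  }
  // Brand new fiber: this is an insertion.
  newFiber.flags |= Placement;
  return lastPlacedIndex;
}
```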
The above results in more work in `commitPlacement`, where for every "placed" node it will look up the closest "non-placed" stable sibling with a DOM node (in `getHostSibling`) and insert each node before it. In our example, for each "placed" node (F, B, C, D, E) React will traverse the fiber tree to look up `G` repeatedly and call `insertBefore` to place it before `G`.

This is how it looks in the browser when only one item is moved back: [screen recording omitted]
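For reference, a simplified paraphrase of `getHostSibling` (portals, hydration, and a few other tags are omitted, so treat it as a sketch):

```js
// Walk forward through siblings (descending into composite fibers) until a
// host fiber that is NOT itself pending placement is found, or null if we
// reach the end of the parent's children.
function getHostSibling(fiber) {
  let node = fiber;
  siblings: while (true) {
    while (node.sibling === null) {
      if (node.return === null || isHostParent(node.return)) {
        // Reached the end of the list: the caller appends instead.
        return null;
      }
      node = node.return;
    }
    node.sibling.return = node.return;
    node = node.sibling;
    while (node.tag !== HostComponent && node.tag !== HostText) {
      // Skip subtrees that are themselves being inserted.
      if (node.flags & Placement) {
        continue siblings;
      }
      if (node.child === null) {
        continue siblings;
      }
      node.child.return = node;
      node = node.child;
    }
    if (!(node.flags & Placement)) {
      // Found a stable host node we can insert before.
      return node.stateNode;
    }
  }
}
```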
The fact that `getHostSibling` was constantly looking up stable host nodes, and had to go past pending inserts, also led me to consider caching it to prevent potentially deep traversals that produce the same result.

For example, if one was rendering a table and then populated the table rows, `getHostSibling` will be called for each row in the table and traverse the entire list of siblings in front of it, only to return `null` each time.

Proposed improvements
Granular placement of updated fiber nodes
It's possible to derive the maximum number of updated nodes that can stay in place by looking at their indices in the old list and identifying which ones do not belong to the longest increasing subsequence when laid out in the new list order. An increasing subsequence of old indices in this case means those elements maintained their relative order.
If we look again at the previous example and take the old indices (alternate indices) of all updated nodes, laid out in the new list order, we get `0 5 1 2 3 4 6`. The LIS of those indices is `0 1 2 3 4 6`, which leaves `5` as the only index not in the LIS. We can proceed to mark `F` (old index 5) with the Placement flag, and all other updated nodes can remain in place. Finding the nodes that are not in the LIS is sketched below.
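This is not the PR's exact implementation, just a minimal sketch of the idea (names made up):

```js
// Given the old indices of the updated children laid out in their new order,
// mark every position that is NOT part of a longest increasing subsequence.
// Those are the fibers that would need the Placement flag.
function markIndicesNotInLIS(oldIndices) {
  const n = oldIndices.length;
  const notInLIS = new Array(n).fill(true);
  const tails = []; // tails[k] = position of the smallest tail of an LIS of length k + 1
  const prev = new Array(n).fill(-1); // predecessor links for reconstruction
  for (let i = 0; i < n; i++) {
    // Leftmost pile whose tail is >= oldIndices[i] (binary search).
    let lo = 0;
    let hi = tails.length;
    while (lo < hi) {
      const mid = (lo + hi) >>> 1;
      if (oldIndices[tails[mid]] < oldIndices[i]) {
        lo = mid + 1;
      } else {
        hi = mid;
      }
    }
    if (lo > 0) {
      prev[i] = tails[lo - 1];
    }
    tails[lo] = i;
  }
  // Walk back from the end of the longest subsequence and un-mark its members.
  let k = tails.length > 0 ? tails[tails.length - 1] : -1;
  while (k !== -1) {
    notInLIS[k] = false;
    k = prev[k];
  }
  return notInLIS;
}

// markIndicesNotInLIS([0, 5, 1, 2, 3, 4, 6])
// => [false, true, false, false, false, false, false]  (only F, old index 5)
```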
The LIS is computed using a greedy patience sort. Conceptually, it can be visualized like dealing cards into piles, where each number goes onto the leftmost pile whose top is greater than it and the number of piles equals the length of the LIS. [illustration slide omitted]
The time complexity is O(n) in the case of all numbers increasing or decreasing (the most common cases) and O(n log n) in the case of a completely random reordering of all nodes in the list.

The algorithm in this PR is modified to fit the exact use case of finding which entries are not in the LIS.
Caching of `getHostSibling`
Let's say we have a list that originally did not have any rows and then rendered the following:
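For illustration, assume the newly rendered rows are keyed A, B, and C under a `List` parent (hypothetical markup):

```jsx
<List>
  <Row key="A" />
  <Row key="B" />
  <Row key="C" />
</List>
```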
When committing row placement, the current implementation will take `A` and start searching for the next sibling that `A` can be placed before, which here is `null` (all nodes are new), and it repeats that full search for every subsequent row.

If we maintain a cache per parent node (in this case `List`), we can keep track of the last node that was placed and what `stateNode` it was placed before. Then, when processing `B` in `commitPlacement`, we can look up the cache, see that the last placed node was `A` and that it was placed before `null` (end of list), check that `A` is our direct previous sibling, and skip doing the traversal because we know it would result in the same host sibling.

How did you test this change?

- Ran `js-framework-benchmark` on the modified version of React without issues.
- Unit tests covering `reconcileChildrenArray`.