Reduce allocations for SdkResolverService #11803


Merged 1 commit on May 12, 2025

Conversation

@Erarndt (Contributor) commented May 7, 2025

Fixes #

Context

Changes Made

Testing

Notes

Copilot AI review requested due to automatic review settings, May 7, 2025 22:12
Copilot AI (Contributor) left a comment


Pull Request Overview

This PR aims to reduce memory allocations in the SdkResolverService by modifying how resolvers are enumerated and cached.

  • Removed an unnecessary ToList() call to avoid extra allocation.
  • Refactored the locking mechanism when caching resolver instances.
Comments suppressed due to low confidence (2)

src/Build/BackEnd/Components/SdkResolution/SdkResolverService.cs:229

  • Removing the ToList() call avoids an extra allocation, but please verify that the resolvers enumerable is consumed only once; a lazy LINQ sequence is re-evaluated on every enumeration, so iterating it multiple times would re-incur both the work and the allocations.
sdkReferenceLocation);
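The trade-off behind this comment can be sketched as follows. This is a hypothetical illustration, not the actual MSBuild code: the `Resolver`, `GetResolvers`, and variable names are invented to show why dropping `ToList()` is only safe when the sequence is enumerated exactly once.

```csharp
// Sketch: deferred LINQ execution vs. materializing with ToList().
// All names here are illustrative, not from SdkResolverService.
using System;
using System.Collections.Generic;
using System.Linq;

class Resolver
{
    public string Name { get; set; }
}

class Example
{
    // Lazy sequence: the Select runs (and allocates a Resolver)
    // each time the result is enumerated.
    static IEnumerable<Resolver> GetResolvers(IEnumerable<string> names)
        => names.Select(n => new Resolver { Name = n });

    static void Main()
    {
        IEnumerable<Resolver> resolvers = GetResolvers(new[] { "A", "B" });

        // Safe without ToList(): a single foreach enumerates once.
        foreach (Resolver r in resolvers)
        {
            Console.WriteLine(r.Name);
        }

        // Risky: a second pass would re-run the Select and allocate
        // fresh Resolver objects. If two passes are genuinely needed,
        // materialize once instead:
        // IReadOnlyList<Resolver> list = resolvers.ToList();
    }
}
```

So the review ask amounts to confirming the call site falls in the "single pass" case above.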

src/Build/BackEnd/Components/SdkResolution/SdkResolverService.cs:277

  • Acquiring the lock for every resolver manifest could increase contention; consider checking the cache outside the lock first if thread-safety can be maintained to further reduce overhead.
IReadOnlyList<SdkResolver> newResolvers;
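The suggested pattern (check the cache before taking the lock, then re-check inside it) could look roughly like this. This is a hedged sketch, not the PR's actual code: `ResolverCache`, `GetOrLoad`, and `LoadResolvers` are hypothetical names, and the lock is kept only to ensure each manifest is loaded at most once.

```csharp
// Sketch: double-checked cache lookup to reduce lock contention.
// Names are illustrative; only the shape of the pattern matters.
using System.Collections.Concurrent;
using System.Collections.Generic;

class SdkResolver { }

class ResolverCache
{
    // ConcurrentDictionary reads are thread-safe without a lock.
    private readonly ConcurrentDictionary<string, IReadOnlyList<SdkResolver>> _cache = new();
    private readonly object _lock = new();

    public IReadOnlyList<SdkResolver> GetOrLoad(string manifestPath)
    {
        // Fast path: no lock when the manifest is already cached.
        if (_cache.TryGetValue(manifestPath, out IReadOnlyList<SdkResolver> cached))
        {
            return cached;
        }

        lock (_lock)
        {
            // Re-check inside the lock: another thread may have
            // populated the entry while we waited.
            if (_cache.TryGetValue(manifestPath, out cached))
            {
                return cached;
            }

            IReadOnlyList<SdkResolver> newResolvers = LoadResolvers(manifestPath);
            _cache[manifestPath] = newResolvers;
            return newResolvers;
        }
    }

    // Placeholder for the expensive resolver-loading work.
    private static IReadOnlyList<SdkResolver> LoadResolvers(string manifestPath)
        => new List<SdkResolver>();
}
```

With a `ConcurrentDictionary`, `GetOrAdd` could remove the explicit lock entirely, at the cost of possibly running the loader twice under a race; the double-checked form above trades a little code for at-most-once loading.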

@JanProvaznik JanProvaznik merged commit 8bb77e7 into dotnet:main May 12, 2025
10 checks passed