Description
The current implementation of MemoryCache updates _cacheSize using Interlocked.CompareExchange in a retry loop (up to 100 attempts). This creates high contention when multiple threads modify _cacheSize concurrently, leading to increased CPU usage and reduced throughput under high concurrency.
```csharp
using System;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Extensions.Caching.Memory;
using Microsoft.Extensions.Options;

var cache = new MemoryCache(Options.Create(new MemoryCacheOptions { SizeLimit = 1000 }));

Parallel.For(0, 100, i =>
{
    cache.Set(i, i, new MemoryCacheEntryOptions { Size = 10 });
});

Console.WriteLine($"Cache Count: {cache.Count}");
```
Issue: When running with a high number of concurrent threads, CPU usage spikes and cache performance degrades due to frequent retries in MemoryCache.UpdateCacheSizeExceedsCapacity.
Configuration
.NET Version: .NET 9
OS: Windows 11
Architecture: x64
Machine Specs: 16-core CPU, 16GB RAM
Regression?
Not a regression but a long-standing inefficiency in MemoryCache.
Data
Analysis
Problem:
- Interlocked.CompareExchange is retried up to 100 times under contention.
- This causes CPU spikes when multiple threads modify _cacheSize concurrently.
- All writers contend on a single shared counter, slowing down high-throughput applications.
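For context, the contended pattern looks roughly like this. This is a simplified sketch of a CAS retry loop over a size counter, not the actual runtime source; the class and method names here are illustrative only:

```csharp
using System.Threading;

class SizeTracker // hypothetical helper illustrating the CAS retry pattern
{
    private long _cacheSize;
    private readonly long _sizeLimit;

    public SizeTracker(long sizeLimit) => _sizeLimit = sizeLimit;

    // Every writer re-reads the counter and re-attempts the update until its
    // CompareExchange wins. Under contention, many threads spin against the
    // same cache line, burning CPU on failed attempts.
    public bool TryAdd(long entrySize)
    {
        for (int attempt = 0; attempt < 100; attempt++)
        {
            long current = Interlocked.Read(ref _cacheSize);
            long updated = current + entrySize;
            if (updated > _sizeLimit)
                return false; // adding this entry would exceed capacity

            if (Interlocked.CompareExchange(ref _cacheSize, updated, current) == current)
                return true; // our update won the race

            // Another thread changed _cacheSize first; retry.
        }
        return false; // gave up after 100 attempts
    }
}
```

Each failed CompareExchange means a wasted read-modify-write cycle, which is why throughput drops as thread count grows.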
Proposed Fix:
✅ Use ConcurrentQueue<long> for batched updates instead of per-entry updates.
✅ Process updates asynchronously in a background task, reducing contention.
✅ Minimize atomic operations on _cacheSize, improving cache scalability.
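The batching idea could be sketched as follows. This is a hypothetical standalone helper under stated assumptions, not a drop-in patch for MemoryCache: writers enqueue size deltas into a ConcurrentQueue<long>, and a single background task drains the queue and applies one atomic add per batch:

```csharp
using System.Collections.Concurrent;
using System.Threading;
using System.Threading.Tasks;

class BatchedSizeTracker // hypothetical helper sketching the proposed batching
{
    private readonly ConcurrentQueue<long> _pendingDeltas = new();
    private long _cacheSize;

    // Hot path: writers only enqueue; no CAS retry loop on a shared counter.
    public void Add(long delta) => _pendingDeltas.Enqueue(delta);

    public long CacheSize => Interlocked.Read(ref _cacheSize);

    // Drain all queued deltas and apply a single atomic add for the batch.
    public void Drain()
    {
        long batch = 0;
        while (_pendingDeltas.TryDequeue(out long delta))
            batch += delta;
        if (batch != 0)
            Interlocked.Add(ref _cacheSize, batch);
    }

    // Background loop; the 10 ms batching interval is an arbitrary choice
    // and would need tuning against real workloads.
    public async Task DrainLoopAsync(CancellationToken ct)
    {
        while (!ct.IsCancellationRequested)
        {
            Drain();
            await Task.Delay(10, ct);
        }
    }
}
```

The trade-off to call out: size accounting becomes eventually consistent, so SizeLimit enforcement can briefly overshoot between drains. Whether that is acceptable for MemoryCache's compaction semantics would need discussion.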