Hi,
I wasn't sure whether I should open an issue for this, so I've started this discussion to ask. I have seen that many issues have already been opened on this topic.
Our applications use 3-4 GB at most in Windows environments, and in the first few minutes the heap size is 300-400 MB at most. But when we deploy the same applications to Kubernetes as Linux containers, the working set and private memory grow up to whatever memory limit we set in the Kubernetes workload YAML, while the heap size stays the same as on Windows, reaching 300-400 MB at most. For instance, if we set a 22 GB memory limit in the Kubernetes YAML, the working set grows to 22 GB while the heap stays small after GC collections, and the memory (or the segments, I suppose) is not released back easily in Linux containers. On Windows the process does not reach the machine's memory limit, and even when it does, memory usage drops again once the load is gone and a GC collection occurs. On Windows the ratio of heap size to working set is about 60-70% (I believe), but in Linux containers it is as low as 2-3%. In our current production environment, which is Windows, when an application approaches the machine's memory limit I see the heap size get close to that limit as well; in Linux containers, even when the application hits the memory limit, the heap size stays at around 2-3% of it.
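To make the ratio above concrete, here is a minimal sketch of how the heap/working-set comparison can be logged from inside the process (this is not our production code; names and formatting are illustrative):

```csharp
// Minimal sketch: compare managed heap size to the OS working set.
// GC.GetTotalMemory(false) returns the approximate managed heap size;
// Environment.WorkingSet returns the process working set in bytes.
long heapBytes = GC.GetTotalMemory(forceFullCollection: false);
long workingSetBytes = Environment.WorkingSet;

Console.WriteLine(
    $"heap={heapBytes / 1024 / 1024} MB, " +
    $"workingSet={workingSetBytes / 1024 / 1024} MB, " +
    $"heap/workingSet={(double)heapBytes / workingSetBytes:P1}");
```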
We're planning to use a Kubernetes HPA to scale the applications by memory (sketched below). In our Windows environments, when the applications receive a lot of requests, memory usage increases; when an application uses 80% of the machine's memory, we scale out so the load is shared and the per-machine request count drops. Because the per-machine request count drops, memory usage drops too, and when it falls to 60% we scale back in. But in the Kubernetes environment the GC does not release memory to the OS, so we can't scale up/down with this scenario.
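For reference, this is roughly the memory-based HPA we had in mind (names and replica counts are hypothetical); it is exactly this kind of manifest that misbehaves for us, because the working set sits pinned near the limit regardless of actual heap usage:

```yaml
# Hypothetical memory-utilization HPA (the approach that breaks down
# when the GC keeps the working set near the pod limit):
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: corewcf-service        # hypothetical name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: corewcf-service      # hypothetical name
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: memory
        target:
          type: Utilization
          averageUtilization: 80   # scale out above 80% of requested memory
```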
I've tried many of the workarounds described in the issues listed below. Here's what I've tried so far:
Set DOTNET_GCHeapAffinitizeRanges to small values like 0-2 to reduce the heap size
Set COMPlus_GCName to libclrgc.so
Tried DOTNET_GCConserveMemory with several values such as 6, 7, 8, and 9
Tried both reducing and increasing DOTNET_GCHighMemPercent
Tried both Server and Workstation GC modes (Workstation mode helps a little, but not much)
Tried native memory allocators such as jemalloc and mimalloc, e.g. LD_PRELOAD=libmimalloc.so.2.0 (this helps a little)
Analyzed many dumps and GC traces looking for memory leaks, but we didn't find any. The applications also work as expected in Windows environments, and there the working set does shrink when a GC collection occurs.
Called GC.Collect with the Aggressive option every 5 minutes for testing purposes (roughly the sketch after this list); it does not release the memory in Linux containers, but it does on Windows.
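For clarity, the test loop was roughly the following sketch (hosting and timing details simplified; GCCollectionMode.Aggressive requires .NET 7 or later):

```csharp
// Test-only background loop: force a full, compacting, aggressive collection
// every 5 minutes. GCCollectionMode.Aggressive (new in .NET 7) must be used
// with GC.MaxGeneration, blocking: true, and compacting: true.
static async Task ForceAggressiveGcLoopAsync(CancellationToken ct)
{
    using var timer = new PeriodicTimer(TimeSpan.FromMinutes(5));
    while (await timer.WaitForNextTickAsync(ct))
    {
        GC.Collect(GC.MaxGeneration, GCCollectionMode.Aggressive,
                   blocking: true, compacting: true);
    }
}
```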
We are migrating old WCF projects to CoreWCF. We started on .NET 6, but we've since upgraded to .NET 7 on our development servers, so I've faced this issue on both .NET 6 and .NET 7. Actually, this is the only problem left before the migration to CoreWCF is done.
The issues I researched so far:
#49317
#78959
#79633
#79287
#75049
#72067
#58974
As a solution, I'm planning to set up the Kubernetes HPA with a custom metric like this:
```csharp
public static readonly Gauge HeapByMemoryInPercent = Metrics.CreateGauge(
    "heap_by_memory_in_percent",
    "The percent of heap size by physical memory.");

private static readonly long _totalAvailableMemoryBytes =
    GC.GetGCMemoryInfo().TotalAvailableMemoryBytes;

// ...

// Calling this every 1 second:
HeapByMemoryInPercent.Set(
    Math.Round((double)GC.GetTotalMemory(false) * 100 / _totalAvailableMemoryBytes, 2));

// ...
```
So even if the application's physical memory reaches the limit, I'll look at the heap size relative to the pod memory limit to decide whether to scale up or down.
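Concretely, assuming the gauge above is scraped by Prometheus and exposed to the HPA as a pod metric through something like prometheus-adapter (all names below are hypothetical), the HPA would target the custom metric instead of raw memory, mirroring the earlier sketch with only the metrics section swapped:

```yaml
# Hypothetical HPA targeting the custom heap/limit gauge instead of raw memory;
# assumes heap_by_memory_in_percent is exposed as a pod metric (e.g. via
# prometheus-adapter).
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: corewcf-service          # hypothetical name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: corewcf-service        # hypothetical name
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Pods
      pods:
        metric:
          name: heap_by_memory_in_percent
        target:
          type: AverageValue
          averageValue: "60"     # scale out when the average heap/limit percent exceeds 60
```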
Is this a correct approach for scaling up/down by memory in a Kubernetes environment? Or what can I do to make the application behave in Linux containers the way it does on Windows?
Best Regards.