Describe the Bug
Our CAP application running on SAP BTP Cloud Foundry was crashing repeatedly with out-of-memory errors. Process RSS grew continuously (~420 kB/min under normal traffic) until the CF container killed the process. The leak was completely invisible to standard Node.js monitoring — process.memoryUsage() and V8 heap metrics looked normal throughout. The growth was entirely in the C heap (OpenSSL/glibc allocations).
Root cause: executeWithAxios (http-client.ts#L434) calls mergeRequestWithAxiosDefaults() on every request, which internally calls getAxiosConfigWithDefaultsWithoutMethod (http-client.ts#L453), allocating a brand-new https.Agent with keepAlive: false on every single outbound call.
Each agent performs a full TLS handshake, allocates an OpenSSL context in the C heap, then closes the socket. The C heap allocation is never returned to the OS — this is glibc's high-water mark behavior by design. The result is permanent, monotonically growing memory that no Node.js monitor will catch.
How CAP is affected — verified, no alternative path exists:
CAP's cds.connect.to() remote service implementation (@sap/cds/libx/_runtime/remote/Service.js) has a single HTTP dispatch path — every call regardless of service kind (rest, odata-v4, odata-v2, hcql) goes through:
Service.js:292 → run(requestConfig)
client.js:57 → executeHttpRequestWithOrigin(destination, requestConfig, { fetchCsrfToken: false })
http-client.ts → execute(executeWithAxios)(...) ← executeWithAxios hardcoded, no way to override
executeHttpRequestWithOrigin hardcodes executeWithAxios as the executor. Although execute(executeFn) accepts a custom function, CAP has no way to inject one — meaning every CAP remote service call in production hits new https.Agent() on every request with no escape hatch at the framework level.
Another user reported a similar issue on Stack Overflow: https://sap.stackenterprise.co/questions/72090
Steps to Reproduce
- Configure a cds.connect.to() remote service (kind: rest or OData) pointing to any HTTPS endpoint in your CAP application.
- Send requests at moderate frequency (≥ 3 req/s) over several minutes.
- Monitor C heap growth (not V8 heap) — read /proc/self/smaps and sum Private_Dirty for anonymous mappings, or use nstat -az | grep TcpActiveOpens for TCP connection count.
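The smaps-based measurement from the last step can be scripted; a minimal sketch (Linux-only; the helper `sumAnonPrivateDirtyKb` is ours, not part of any tooling):

```javascript
// Hedged sketch: sum Private_Dirty over anonymous mappings in
// /proc/self/smaps-format text. Anonymous mappings are the ones
// with no pathname at the end of the mapping header line.
function sumAnonPrivateDirtyKb(smapsText) {
  let inAnonMapping = false;
  let totalKb = 0;
  for (const line of smapsText.split('\n')) {
    // Mapping header: "addr-addr perms offset dev inode [pathname]"
    const header = line.match(
      /^[0-9a-f]+-[0-9a-f]+\s+\S+\s+\S+\s+\S+\s+\S+\s*(.*)$/
    );
    if (header) {
      inAnonMapping = header[1] === ''; // no pathname → anonymous
      continue;
    }
    const pd = line.match(/^Private_Dirty:\s+(\d+) kB/);
    if (pd && inAnonMapping) totalKb += Number(pd[1]);
  }
  return totalKb;
}

// Usage against the live process:
// const fs = require('fs');
// setInterval(() => console.log(
//   sumAnonPrivateDirtyKb(fs.readFileSync('/proc/self/smaps', 'utf8')), 'kB'
// ), 60_000);
```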
Our findings show that the number of TCP connections increases on every request, which causes the memory leak. When we stop using cds.connect and call the same endpoints with the Fetch API instead, the problem disappears, because fetch reuses keep-alive connections by default.
Measured results over 4 minutes (~4 req/s):

| Metric | cds.connect calls (CAP default) | fetch API calls |
| --- | --- | --- |
| New TCP connections / min | ~234 | ~2–4 |
| C heap growth | +420 kB/min | flat |
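The fetch-based workaround is small; a sketch of what we do instead of cds.connect (the helper name and headers are illustrative, not CAP API):

```javascript
// Node 18+ global fetch is backed by undici, which maintains a
// per-origin pool of keep-alive connections by default — one TLS
// handshake serves many requests, with no per-call Agent allocation.
async function callRemote(baseUrl, path) {
  const res = await fetch(new URL(path, baseUrl), {
    headers: { accept: 'application/json' },
  });
  if (!res.ok) throw new Error(`Remote call failed: ${res.status}`);
  return res.json();
}
```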
Expected Behavior
Expected: executeWithAxios should use a persistent, pooled https.Agent with keepAlive: true by default. A single remote service should not open a new TLS connection per request.
Screenshots
No response
Used Versions
| Package | Version |
| --- | --- |
| @sap/cds-dk (global) | 9.2.1 |
| @sap/cds-dk | 9.8.1 |
| @sap/cds | 9.8.3 |
| @sap/cds-compiler | 6.6.2 |
| @sap-cloud-sdk/http-client | 4.3.1 |
| Node.js | 22.16.0 |
Code Examples
The repo for our CAP service: https://github.wdf.sap.corp/SAC-Ops-Infra/Cloud_Infrastructure_Cockpit/tree/master/cic-cap
Log File
No response
Affected Development Phase
Release
Impact
Inconvenience
Timeline
We have a workaround: using the Fetch API for our external calls instead of cds.connect from CAP. However, we need this fixed ASAP so that we can go back to the native CAP methods.
Additional Context
No response