Jetty version(s)
Jetty v12.0.16
Jetty Environment
org.eclipse.jetty:jetty-server:12.0.16
org.eclipse.jetty.ee10:jetty-ee10-servlet:12.0.16
org.eclipse.jetty.http2:jetty-http2-server:12.0.16
Java version/vendor
(use: java -version)
openjdk version "25-ea" 2025-09-16 OpenJDK Runtime Environment (build 25-ea+2-135) OpenJDK 64-Bit Server VM (build 25-ea+2-135, mixed mode, sharing)
OS type/version
Description
When using Mittens (https://github.com/ExpediaGroup/mittens) as a warmup tool running inside a Kubernetes Pod (configured as a sidecar), which performs a high volume of concurrent HTTP/2 requests, it seems that direct buffers are not released properly by Jetty (or the underlying OS).

After a short time the Kubernetes container exits with status 137 (OOMKilled). Analyzing heap and non-heap memory did not bring any insights. A deeper analysis with several tools (JFR, Eclipse MAT, Native Memory Tracking, ...) indicated that native memory allocations of type=Other were the reason for the OOMKill. These allocations grew continuously and never decreased. After some time I stumbled upon Jetty's direct buffer settings. I also tried to get more details from the ByteBufferPool.Tracking#dump() output, but from that point of view nothing indicated any bigger issue.

I then disabled the direct input buffers, and with the same Mittens warmup configuration the problem was gone.
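For anyone reproducing this, a simple way to watch direct buffer usage from inside the JVM is the standard BufferPoolMXBean. This is not part of the original setup, just a minimal monitoring sketch, and it only covers Buffer API allocations (ByteBuffer.allocateDirect), not every native allocation reported as type=Other:

```java
import java.lang.management.BufferPoolMXBean;
import java.lang.management.ManagementFactory;
import java.util.List;

public class DirectBufferMonitor
{
    public static void main(String[] args) throws InterruptedException
    {
        List<BufferPoolMXBean> pools = ManagementFactory.getPlatformMXBeans(BufferPoolMXBean.class);
        while (true)
        {
            for (BufferPoolMXBean pool : pools)
            {
                // The "direct" pool reflects ByteBuffer.allocateDirect() usage.
                if ("direct".equals(pool.getName()))
                {
                    System.out.printf("direct buffers: count=%d, used=%d bytes, capacity=%d bytes%n",
                        pool.getCount(), pool.getMemoryUsed(), pool.getTotalCapacity());
                }
            }
            Thread.sleep(5_000);
        }
    }
}
```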
A load test with 7k req/s and direct buffers enabled (this time with no Mittens warmup sidecar in place) did not lead to an OOMKill. In that setup an Envoy proxy was in front of the Java application, and the load test tool was a custom NodeJS implementation.
The issue did not occur with Jetty 11.0.24, also configured to use direct buffers for input and output, so it seems something has changed underneath regarding the buffer handling.
During my tests I was able to make the following observations:

Using ZGC instead of G1 makes the problem even worse; the OOMKill comes much faster than with G1.
Using -XX:MaxDirectMemorySize=4g stabilized the Java app, but Jetty no longer accepted all requests and rejected some, which led to EOFs in the Mittens warmup tool.
Using a new ByteBufferPool.NonPooling() pool was better than using the ArrayByteBufferPool one (the OOMKill came later); see the sketch below.
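A minimal sketch of how the buffer pool can be swapped for that comparison, assuming the Server constructor that accepts a ByteBufferPool (the thread pool and scheduler shown here are placeholders, not my exact setup):

```java
import org.eclipse.jetty.io.ArrayByteBufferPool;
import org.eclipse.jetty.io.ByteBufferPool;
import org.eclipse.jetty.server.Server;
import org.eclipse.jetty.util.thread.QueuedThreadPool;
import org.eclipse.jetty.util.thread.ScheduledExecutorScheduler;

public class BufferPoolSetup
{
    static Server newServer(boolean pooled)
    {
        ByteBufferPool bufferPool = pooled
            ? new ArrayByteBufferPool()        // default pooled implementation
            : new ByteBufferPool.NonPooling(); // no pooling; the OOMKill came later with this one
        return new Server(new QueuedThreadPool(), new ScheduledExecutorScheduler(), bufferPool);
    }
}
```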
How to reproduce?
Create an application with the following configuration:
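A stripped-down sketch of the server setup (clear-text HTTP/2 via h2c for brevity; the port and the trivial handler are placeholders, and the real application uses jetty-ee10-servlet on top of this):

```java
import org.eclipse.jetty.http2.server.HTTP2CServerConnectionFactory;
import org.eclipse.jetty.server.Handler;
import org.eclipse.jetty.server.HttpConfiguration;
import org.eclipse.jetty.server.HttpConnectionFactory;
import org.eclipse.jetty.server.Request;
import org.eclipse.jetty.server.Response;
import org.eclipse.jetty.server.Server;
import org.eclipse.jetty.server.ServerConnector;
import org.eclipse.jetty.util.Callback;

public class DirectBufferServer
{
    public static void main(String[] args) throws Exception
    {
        Server server = new Server();

        HttpConfiguration httpConfig = new HttpConfiguration();
        // The setting under suspicion: direct buffers for input and output.
        httpConfig.setUseInputDirectByteBuffers(true);
        httpConfig.setUseOutputDirectByteBuffers(true);

        HTTP2CServerConnectionFactory h2c = new HTTP2CServerConnectionFactory(httpConfig);
        h2c.setUseInputDirectByteBuffers(true);
        h2c.setUseOutputDirectByteBuffers(true);

        ServerConnector connector = new ServerConnector(server, new HttpConnectionFactory(httpConfig), h2c);
        connector.setPort(8080); // placeholder port
        server.addConnector(connector);

        // Trivial handler standing in for the real servlet-based application.
        server.setHandler(new Handler.Abstract()
        {
            @Override
            public boolean handle(Request request, Response response, Callback callback)
            {
                callback.succeeded();
                return true;
            }
        });

        server.start();
        server.join();
    }
}
```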
Example of the warmup.json structure (filled with real content, of course).