Many DirectByteBuffers with high capacity when using the netty shaded client #11314
Comments
Our app runs on a machine with 48 cores. I can provide the full heap dump. We expect there to be a cache for each EpollEventLoop, but this still seems like too much memory. Does that sound reasonable?
I also wonder whether there is a limit on the number of DirectByteBuffers inside each subpage area.
gRPC reduces the subpage size to 2 MiB to reduce memory. It also reduces the number of threads to the number of cores. I think what's hurting here is the number of threads. If we reduced the number of threads by half, would that get into a reasonable state, or are you hoping for even more memory usage reduction?
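One way to experiment with the thread reduction suggested above is via Netty's allocator and event-loop system properties, set before any (shaded) Netty class loads. This is a minimal sketch, not a confirmed fix: the `io.grpc.netty.shaded.` prefix is an assumption based on how grpc-netty-shaded relocates Netty's property names, so verify it against your gRPC version.

```java
// Sketch: capping Netty event-loop threads and direct arenas via system
// properties. Must run before the first (shaded) Netty class is loaded.
// ASSUMPTION: grpc-netty-shaded relocates property names under the
// "io.grpc.netty.shaded." prefix; check this for your gRPC version.
public final class NettyMemoryTuning {
    static final String PREFIX = "io.grpc.netty.shaded.";

    public static void apply(int eventLoopThreads) {
        // Fewer event loops -> fewer per-thread buffer caches.
        System.setProperty(PREFIX + "io.netty.eventLoopThreads",
                Integer.toString(eventLoopThreads));
        // Fewer direct arenas also bounds pooled direct memory.
        System.setProperty(PREFIX + "io.netty.allocator.numDirectArenas",
                Integer.toString(eventLoopThreads));
    }

    public static void main(String[] args) {
        apply(Math.max(1, Runtime.getRuntime().availableProcessors() / 2));
        System.out.println(System.getProperty(PREFIX + "io.netty.eventLoopThreads"));
    }
}
```

An alternative that avoids system properties entirely is passing a custom `EventLoopGroup` to the channel builder, at the cost of managing its lifecycle yourself.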
I mean, while the size of each subpage is only 2 MB, there is still potential memory pressure when there are many of them. Even if my server has only 1 core, one event loop can contain multiple subpages, 2 MB each. @ejona86
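The pressure described above can be put on a back-of-envelope footing: pooled direct memory grows with the number of arenas times the chunk size, since each arena holds at least one chunk once touched. The constants below (8 KiB page size, maxOrder 8 giving 2 MiB chunks, 2 × cores arenas) are assumptions chosen to match the ~2 MB figure in this thread; the real values depend on the `io.netty.allocator.*` settings in effect.

```java
// Back-of-envelope minimum for pooled direct memory if every arena has
// allocated one chunk. ASSUMPTION: pageSize = 8 KiB and maxOrder = 8
// (8 KiB << 8 = 2 MiB chunks), matching the ~2 MB buffers seen in the dump.
public final class DirectMemoryEstimate {
    public static long estimateBytes(int arenas, int pageSize, int maxOrder) {
        long chunkSize = (long) pageSize << maxOrder; // Netty chunk = page << maxOrder
        return arenas * chunkSize;                    // one chunk per arena, minimum
    }

    public static void main(String[] args) {
        int cores = 48;                  // the machine described in this issue
        int arenas = cores * 2;          // Netty's default: 2 * available processors
        long bytes = estimateBytes(arenas, 8192, 8);
        System.out.printf("%d arenas x 2 MiB = %d MiB minimum%n",
                arenas, bytes / (1024 * 1024));
    }
}
```

Under these assumptions a 48-core machine can pin on the order of 192 MiB of direct memory before any buffer is actually in use, which is why reducing threads or arenas helps.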
After diving deeper: given that my gRPC client uses the default gRPC executor, which is in turn a cached thread executor, is native memory occupied by …
@ejona86 Hello, is there any progress on this issue?
You are presenting the state of how memory is handled in gRPC, but that is not necessarily indicative of a problem. There can be optimizations, but each comes with its own trade-offs.
What version of gRPC-Java are you using?
1.60.0
What is your environment?
jdk-18.0.2.1-x64
Linux 3.10.0-1160.76.1.el7.x86_64
Client initialization?
JVM properties?
```
/zserver/java/jdk-18.0.2.1/bin/java --add-opens=java.base/jdk.internal.misc=ALL-UNNAMED --add-opens=java.base/java.nio=ALL-UNNAMED -Dio.netty.tryReflectionSetAccessible=true -Dzappname=kiki-asr-streaming-websocket -Dzappprof=production -Dzconfdir=conf -Dzconffiles=config.ini -Djzcommonx.version=LATEST -Dzicachex.version=LATEST -Dzlogconffile=log4j2.yaml -Dlog4j2.configurationFile=conf/production.log4j2.yaml -Dlog4j2.contextSelector=org.apache.logging.log4j.core.async.AsyncLoggerContextSelector -Dlog4j2.immediateFlush=false -Djava.net.preferIPv4Stack=true -XX:+AlwaysPreTouch -XX:+UseTLAB -XX:+ResizeTLAB -XX:+PerfDisableSharedMem -Xms1G -Xmx2G -XX:+UseG1GC -XX:MaxGCPauseMillis=500 -XX:InitiatingHeapOccupancyPercent=70 -XX:ParallelGCThreads=24 -XX:ConcGCThreads=24 -XX:+ParallelRefProcEnabled -XX:-ResizePLAB -XX:G1RSetUpdatingPauseTimePercent=5 -Dspring.config.location=optional:file:./conf/production.spring.yaml -Dorg.springframework.boot.logging.LoggingSystem=none -jar /zserver/java-projects/kiki-asr-streaming-websocket/dist/kiki-asr-streaming-websocket-1.3.1.jar
```
What did you expect to see?
Stable number of DirectByteBuffer objects
What did you see instead?
Increasing number of DirectByteBuffer objects.
This is my OQL query listing the capacities of about 1,832 objects:
These are the GC root references from a sample FastThreadLocalThread that contains a DirectByteBuffer with a capacity of about 2 MB, and there are many objects like that:
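As a complement to heap-dump OQL, the JDK's standard `BufferPoolMXBean` can report the live count and total capacity of direct buffers at runtime, without a dump. One caveat, relevant to the null-cleaner observation below: Netty's "noCleaner" buffers are allocated outside the JDK's accounting, so they may not show up in this pool.

```java
import java.lang.management.BufferPoolMXBean;
import java.lang.management.ManagementFactory;

// Prints the JVM's own accounting of direct buffers. Note that direct
// buffers Netty creates via its Unsafe/noCleaner path bypass this pool,
// so the numbers here can undercount Netty's actual native usage.
public final class DirectBufferStats {
    public static void print() {
        for (BufferPoolMXBean pool :
                ManagementFactory.getPlatformMXBeans(BufferPoolMXBean.class)) {
            if ("direct".equals(pool.getName())) {
                System.out.printf("direct buffers: count=%d capacity=%d bytes%n",
                        pool.getCount(), pool.getTotalCapacity());
            }
        }
    }

    public static void main(String[] args) {
        // Allocate 1 MiB so the pool has something to report in this demo.
        java.nio.ByteBuffer demo = java.nio.ByteBuffer.allocateDirect(1 << 20);
        print();
        demo.clear(); // keep a live reference until after printing
    }
}
```

The same bean is exposed over JMX (`java.nio:type=BufferPool,name=direct`), so it can be polled from monitoring tooling to watch for the growth reported in this issue.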
Besides, I noticed that there are many DirectByteBuffer instances with a null cleaner. Is that an intentional implementation detail of Netty?
Steps to reproduce the bug