[Node Express] Memory Leak on >=v7.84 #13529
Hi, I don't have your GitHub handle yet, but thanks to whoever reported this. From the memory snapshot it looks like there is something called `beforeExit` holding a bunch of references, most likely to Sentry Hub objects (I can't say that with 100% confidence from looking at the screenshot). We have one place in the SDK, on the version you're using, where we create `beforeExit` handlers, and it matches: sentry-javascript/packages/node/src/sdk.ts, lines 248 to 264 in 4f4bd32.
I don't think that place should leak in that way, unless `Sentry.init()` is being called more than once. Can you verify that you are calling `Sentry.init()` only once?
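To illustrate that hypothesis, here is a hedged sketch (not the actual SDK code; `initLikeSetup` and its state object are made up for the demonstration): if a `beforeExit` listener is registered on every init call, each listener's closure pins its per-init state, so repeated initialization accumulates memory that the GC can never reclaim.

```js
// Sketch of the suspected pattern: every call registers a new `beforeExit`
// listener whose closure keeps per-call state alive indefinitely.
function initLikeSetup() {
  const state = { buffer: new Array(1e4).fill('event') }; // stand-in for SDK objects
  process.on('beforeExit', () => {
    // A flush would happen here; the closure retains `state` either way.
    void state;
  });
}

// Simulate init being called per request instead of once at startup.
// (Node prints a MaxListenersExceededWarning after ~10 listeners.)
for (let i = 0; i < 100; i++) initLikeSetup();

console.log(process.listenerCount('beforeExit')); // 100 — none can be collected
```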
Hi @lforst, I reported this issue to @rodolfoBee. Thanks for looking into the issue and for your analysis of what may be causing the memory leak. I'm currently preparing new tests with v8.27.0 of the SDK, and will also look into how often `Sentry.init()` is called.
I've completed a new round of tests using Sentry v8.27.0, and it seems like the memory leak issues have been resolved. Your analysis of the issue turned out to be spot on, @lforst! During the first test I ran with v8.27.0, and since you had mentioned it, I noticed from the logs that `Sentry.init()` was indeed being called more than once; a middleware in our setup was re-initializing the SDK. (The heap profile didn't show anything noticeable in regards to the Sentry SDK, so I haven't attached it.)

As the upgrade to v8.27.0 together with the middleware fix shows no signs of a memory leak, I've pushed the fix upstream and asked QA to take a look as well. Thanks for taking a closer look and helping out with this issue @lforst! And thanks @rodolfoBee for guiding the process! 🙇‍♂️
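For anyone landing here later, a minimal sketch of the corrected shape, assuming the standard @sentry/node v8 Express integration (the DSN env var, sample rate, and port are illustrative): `Sentry.init()` runs exactly once at process startup, never inside a middleware or request handler.

```js
const Sentry = require('@sentry/node');

// Initialize once, before express is required, so auto-instrumentation
// can hook the framework — and never re-run this per request.
Sentry.init({
  dsn: process.env.SENTRY_DSN, // illustrative
  tracesSampleRate: 0.1,
});

const express = require('express');
const app = express();

app.get('/', (req, res) => res.send('ok'));

// v8 API: register Sentry's Express error handler after all routes.
Sentry.setupExpressErrorHandler(app);

app.listen(3000);
```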
@stefanmirck glad to hear you could figure it out! I will close this issue to keep our stream clean, but please feel free to reach out any time if there are any more questions.
Is there an existing issue for this?
How do you use Sentry?
Sentry Saas (sentry.io)
Which SDK are you using?
@sentry/node
SDK Version
7.84.0
Framework Version
Express v4.19.2, Node.js v20.11.1
Link to Sentry event
No response
Reproduction Example/SDK Setup
A user is reporting a memory leak in the Sentry SDK on an Express project. The issue is observed with v7.84.0, v7.102.0, and v8.17.0.
Steps to Reproduce
Configuration is as shown in the screenshots:
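(The configuration screenshots aren't reproduced here. As a rough, hypothetical reconstruction only, a typical @sentry/node v7 Express setup of the kind being tested looks like the sketch below; the DSN and sample rate are placeholders.)

```js
const Sentry = require('@sentry/node');
const express = require('express');

const app = express();

Sentry.init({
  dsn: process.env.SENTRY_DSN, // placeholder
  tracesSampleRate: 1.0, // placeholder
});

// v7-style handlers: the request handler must be the first middleware,
// the error handler goes after all routes.
app.use(Sentry.Handlers.requestHandler());
app.get('/', (req, res) => res.send('ok'));
app.use(Sentry.Handlers.errorHandler());

app.listen(3000);
```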
Expected Result
No Leaks
Actual Result
The Grafana dashboard shows the memory usage of the test Express server. Shortly before 14:50 the first pod starts handling the incoming requests; memory increases, continues to climb to roughly 1.25 GB, and then the pod stalls (~14:55). As more requests come in, two new pods are spun up until they stall as well (~14:55). More pods are started as others stall, with memory usage increasing all the time; this pattern repeats a third time as pods stall at ~15:02 and ~15:03. After stopping the incoming requests, some garbage collection happens (~150 MB is released), but memory never returns to (or close to) its level at startup.
The exact same code, with only the Sentry config removed from the webserver: as you can see from the screenshot (red and orange lines), memory usage stays at or below that of the server running Sentry, never higher, holding steady at ~470 MB throughout the test. After stopping the incoming requests to the server, some memory is released again (~100 MB).
The tests below are from a local machine using the heap profiler, with snapshot 1 taken right after startup, snapshot 2 about a minute into the test, and snapshot 3 about two minutes in. From snapshots 2 and 3 (screenshots below), it seems the `beforeExit` event handler from sdk.js in the @sentry/core package leaks memory, as it doesn't allow the array to be garbage collected.
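A quick way to corroborate this kind of finding during a load test, without taking full heap snapshots, is to watch the listener count and heap size over time (a small ad-hoc diagnostic, not part of any SDK):

```js
// Log `beforeExit` listener count and heap usage every 10s during the test;
// a steadily growing listener count points at repeated handler registration.
setInterval(() => {
  const listeners = process.listenerCount('beforeExit');
  const heapMb = Math.round(process.memoryUsage().heapUsed / 1024 / 1024);
  console.log(`beforeExit listeners: ${listeners}, heapUsed: ${heapMb} MB`);
}, 10_000).unref(); // unref() so this timer doesn't keep the process alive
```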
All the tests were run using autocannon to generate ~230 requests per 10-second interval, until I stopped the process; the command used was `autocannon -r20 -f .`