Memory leak of a simple test #3955
Comments
What do you think, @joanlopez? Am I missing something here, or is there actually a leak?
FWIW, I've seen something like this before, so I tried recreating it on my end. I was easily able to reproduce what @Zywertos is describing. You can see it especially clearly if you increase the number of VUs and the rate by an order of magnitude. The number of iterations is enormous when you do this, and without looking at the code it makes some sense that memory climbs due to how metrics are collected and retained. I'll try to dig into it a bit more if I get a chance this evening.

I also tested with a simple HTTP call in the function, with no payload in either direction (aside from a 200 response). The rate was lower, by virtue of actually having to transact, and memory grew far more slowly than in the unthrottled case. Another observation is that the non-HTTP test was definitely more CPU-bound than anything else, but again, that makes sense given the number of iterations (millions, in my case) with practically no waiting involved.

I get similar results on Linux on:
Hey @Zywertos, @taylorflatt, I got a couple of Go (memory) profiles from 10-minute runs based on your example, and I see no major memory allocations other than the ones related to #2367.

You can find more details, and a possible workaround, at #2367 (comment). Indeed, I have run your example with the suggested workaround and the memory allocations are effectively much lower. Could you check that and confirm it works for you as well, please? If so, I'll proceed to close this issue.

If you're looking for a solution other than the workaround, I'd suggest keeping an eye on the aforementioned open issue (#2367). Thanks! 🙇🏻
@joanlopez alright, thanks for clearing that up. So, it's not a memory leak, just a ton of metrics being stored in RAM... I'll give your suggestions a shot.
@joanlopez those tips worked, but we really need the summary and metrics in Grafana (not in files), so we couldn't turn them off for the real tests. We managed to work something out for our case, so it's all good. Thanks for clearing things up!
Now I don't remember the details of the specific
I will look into it. Thanks a lot :)
Brief summary
k6 leaks memory even when running a test that doesn't perform any actual actions. I ran a test at 1000 requests per second and noticed that RAM usage slowly increased by about 1 MB every few seconds. To amplify this behaviour, I ramped it up to 100k requests per second; as a result, k6's memory usage went from around 500 MB to 2 GB after 17 minutes of running the test.
I want to run tests against my app that last around 24 hours. The tests are simple, but after a few hours k6 is using 12 GB of RAM, which ends up causing a crash...
Am I missing something? I don’t think this is the intended behavior.
k6 version
k6.exe v0.53.0 (commit/f82a27da8f, go1.22.6, windows/amd64)
OS
Windows 11
Docker version and image (if applicable)
No response
Steps to reproduce the problem
Just run this test for some time:
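The original script isn't included in this excerpt. A minimal sketch that matches the description above would be a constant arrival rate with an iteration body that does no actual work; the executor choice and VU counts below are illustrative assumptions, not taken from the report:

```javascript
// Hypothetical reproduction script (the original is not shown in this thread
// excerpt). It schedules a fixed number of iterations per second while the
// default function does no actual work.
export const options = {
  scenarios: {
    no_op: {
      executor: 'constant-arrival-rate',
      rate: 1000,           // iterations per second; raise towards 100k to amplify the effect
      timeUnit: '1s',
      duration: '24h',
      preAllocatedVUs: 100, // illustrative values, not from the original report
      maxVUs: 1000,
    },
  },
};

// Each (empty) iteration still emits k6's built-in metrics, so the only work
// being done is iteration scheduling and metrics collection.
export default function () {}
```

Every iteration still produces built-in metric samples, which is consistent with the explanation above that the growth comes from metrics storage rather than a leak. Running the same script with the workaround referenced in #2367 applied (typically disabling the end-of-test summary and thresholds, e.g. `k6 run --no-summary --no-thresholds script.js`, though see the linked comment for the exact suggestion) should produce a much flatter memory curve.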
Expected behaviour
RAM should stabilize after a while and stop allocating more.
Actual behaviour
There's a RAM leak: memory usage keeps growing for as long as the test runs.