Memory corruption during CPU intensive work #93
After roughly 10 hours of quite intensive, memory-mostly data crunching (50-60% CPU load, near-zero I/O load) we see a crash and a core dump as below:

It has already happened twice, so this seems like a highly probable scenario, and it happens with no RAM pressure. The offending part seems to be coming from a corrupted packet (offending line)?

Any idea if this could be fixed?

The queries are similar and involve index lookups, a q3c index, and a lateral join with aggregates inside the lateral.
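For reference, the query shape described in the report (index lookups on the outer relation, a q3c index, and aggregates computed inside a LATERAL join) would look roughly like the sketch below. The table and column names (sources, observations, source_id, ra, dec, mag) and the search radius are hypothetical; only the overall pattern comes from the issue.

```sql
-- Hypothetical sketch of the reported query shape: an index lookup on the outer
-- relation feeding a LATERAL subquery that does a q3c cone match and aggregates
-- inside the lateral. Names and constants are illustrative only.
SELECT s.source_id,
       agg.n_obs,
       agg.mean_mag
FROM sources s
JOIN LATERAL (
    SELECT count(*)   AS n_obs,
           avg(o.mag) AS mean_mag
    FROM observations o
    WHERE q3c_join(s.ra, s.dec, o.ra, o.dec, 1.0 / 3600.0)  -- served by the q3c index on (o.ra, o.dec)
) agg ON true
WHERE s.source_id BETWEEN 1000000 AND 2000000;               -- btree index lookup on the outer side
```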
Comments

There are roughly 250 processes running, so around 700-800 active processes per datanode. I should also mention that we use a nonstandard block size (16 KB), and both sender_thread_batch_size and sender_thread_buffer_size are set to 64.
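For completeness, the non-default configuration mentioned above corresponds to something like the following. The 16 KB block size is a compile-time option (PostgreSQL's --with-blocksize configure switch, value in kB), while the two sender-thread settings are shown as postgresql.conf entries on the assumption that they are ordinary GUCs set on each datanode; the exact mechanism is not stated in the report.

```
# Compile time: non-default 16 KB block size (PostgreSQL configure option, value in kB)
./configure --with-blocksize=16

# Run time, per datanode (assumed to be set in postgresql.conf):
sender_thread_batch_size = 64
sender_thread_buffer_size = 64
```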
We had another crash under the same load, but at a different place:

yazun changed the title from "Memory corruption after some CPU intensive work" to "Memory corruption during CPU intensive work" on Jan 21, 2021.