
maxPending limit is enforced with some de-facto leniency under heavy load #123

Open
simonbasle opened this issue Mar 5, 2021 · 4 comments
Labels: type/enhancement (A general enhancement)
Milestone: Backburner

Comments

@simonbasle
Member

Follow-up to #121 and #122.

With #122 the situation has improved: the maxPending cap should now be enforced with some leniency / margin of error. Under heavy load the pool might still accumulate pending acquire calls beyond the configured limit, but within acceptable bounds, whereas previously it would grow past the limit indefinitely.

However, ideally the pool should strictly enforce the maxPending limit.
This issue tracks that goal.
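
For illustration, here is a minimal sketch (not part of the original report) of how the cap in question is configured through reactor-pool's PoolBuilder; the allocator and the numbers are placeholders. The expectation is that acquire() calls beyond the pending cap fail fast with PoolAcquirePendingLimitException, whereas under heavy load the cap is currently only enforced approximately.

```java
import java.time.Duration;

import reactor.core.publisher.Mono;
import reactor.pool.InstrumentedPool;
import reactor.pool.PoolBuilder;

public class MaxPendingSketch {

    public static void main(String[] args) {
        // Pool of at most 10 resources; at most 50 acquire() calls may wait in the pending queue.
        InstrumentedPool<String> pool = PoolBuilder
                .from(Mono.fromCallable(() -> "resource"))   // placeholder allocator
                .sizeBetween(0, 10)
                .maxPendingAcquire(50)
                .buildPool();

        // With 10 resources busy and 50 acquisitions pending, further acquisitions
        // should be rejected with PoolAcquirePendingLimitException. Under heavy
        // concurrent load the pending queue can transiently exceed 50, which is
        // the leniency this issue is about.
        for (int i = 0; i < 70; i++) {
            pool.acquire(Duration.ofSeconds(5)).subscribe(
                    ref -> { /* use the resource, then ref.release().subscribe() */ },
                    error -> System.out.println("acquire rejected: " + error));
        }
    }
}
```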

@simonbasle simonbasle added this to the 0.1.x Backlog milestone Mar 5, 2021
@reactorbot reactorbot added the ❓need-triage This issue needs triage, hasn't been looked at by a team member yet label Mar 5, 2021
@simonbasle simonbasle added type/enhancement A general enhancement and removed ❓need-triage This issue needs triage, hasn't been looked at by a team member yet labels Mar 5, 2021
@simonbasle
Member Author

might be resolved by #155

@NeilOnasch0402

As stated in reactor/reactor-netty#2261, the bug still exists with reactor-pool:0.2.8 / reactor-netty-core:1.0.19.

Is there anything we can provide to help with this issue?

@simonbasle
Member Author

There is no easy way forward that I could see after fixing the case where the queue would grow indefinitely. With the current implementation I thought the compromise would work out well, since the number of pending borrowers going over the configured max should be limited...

In your case it seems to be thousands despite a max of 50, which is definitely not okay. Have you seen any improvement in these numbers when using 0.2.8?
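
If it helps to narrow this down, one way to confirm how far past the limit the queue goes is to sample the live counters exposed by InstrumentedPool (a sketch, reusing the hypothetical pool from the earlier example; pendingAcquireSize() should stay at or below maxPendingAcquire if the cap were strictly enforced):

```java
// Periodically sample the pool counters while the load test is running.
InstrumentedPool.PoolMetrics metrics = pool.metrics();
System.out.println("pending=" + metrics.pendingAcquireSize()
        + " allocated=" + metrics.allocatedSize()
        + " idle=" + metrics.idleSize());
```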

@schreibaer

Hi @simonbasle, I work with @NeilOnasch0402.

Although we upgraded our Spring Boot version to 2.6.8 in an attempt to mitigate the problem, we were on 2.6.6 when the problem first presented itself. Spring Boot 2.6.6 already uses reactor-pool 0.2.8, so unfortunately that version does not seem to fix the issue in our case.

@violetagg violetagg modified the milestones: 0.1.x Backlog, 0.2.x Backlog Nov 9, 2022
@violetagg violetagg modified the milestones: 0.2.x Backlog, 1.0.x Backlog Sep 13, 2023
@violetagg violetagg modified the milestones: 1.0.x Backlog, Backburner Jun 3, 2024