Excessive Memory Usage with Multiple WAF Instances in Coraza Using CRS #975
Comments
Was there any progress here? Is this repeatable? Can we provide a script, or something, to reproduce the claim that 50 CRS implementations use 2GB of memory?
I'm also looking at high memory usage for a coraza-caddy deployment.
I built with this branch https://github.com/corazawaf/coraza-caddy/tree/dependabot/go_modules/main/github.com/corazawaf/coraza/v3-3.2.0 since it was on the latest Coraza. The top of my go.mod looks like this
I'm fairly new to Go, but it seems like the memory is being held by this transformation cache map? If it matters, this is an API app that uses RSA client certificates for authentication. Edit: since the transformation cache is only used when you have transformations, I dropped the
Issue Summary
It is a well-documented issue that Coraza consumes significant memory when multiple coraza.WAF instances are created with the CRS loaded. Previously, we mitigated this by memoizing WAF creation. However, a new challenge has emerged: we need to share Coraza's memory efficiently across Apache/Nginx workers.
Problem Statement
In my opinion, it is unreasonable for 50 CRS-loaded WAF instances to occupy 2GB of memory. This raises a critical need to investigate the source of this overhead and implement strategies to reduce the memory footprint of running the CRS.
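The "50 instances use 2GB" figure is measurable. The sketch below shows one way to quantify per-instance heap cost with `runtime.MemStats`; the `build` callback here is a placeholder allocation, and reproducing the report would mean swapping in real coraza.WAF construction with the CRS directives. Function and variable names are illustrative, not from the Coraza codebase.

```go
package main

import (
	"fmt"
	"runtime"
)

// measureHeapGrowth returns the heap growth, in bytes, caused by building
// and retaining n instances. The build callback is a stand-in: to check the
// report above it would construct one coraza.WAF with the CRS loaded.
func measureHeapGrowth(n int, build func() any) uint64 {
	var before, after runtime.MemStats
	runtime.GC()
	runtime.ReadMemStats(&before)

	instances := make([]any, 0, n)
	for i := 0; i < n; i++ {
		instances = append(instances, build())
	}

	runtime.GC()
	runtime.ReadMemStats(&after)
	runtime.KeepAlive(instances) // keep all instances live through the second read
	return after.HeapAlloc - before.HeapAlloc
}

func main() {
	// Placeholder: each "instance" retains 1 MiB; with real WAF creation
	// the reported claim would predict roughly 40 MiB per instance.
	grew := measureHeapGrowth(50, func() any { return make([]byte, 1<<20) })
	fmt.Printf("heap grew by ~%d MiB for 50 instances\n", grew>>20)
}
```

A measurement like this, combined with a heap profile, would also show whether the cost is per-instance rule data that could in principle be shared.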
Expected Outcome
Identify the root cause(s) of the excessive memory usage.
Explore and implement solutions to minimise memory consumption without compromising functionality.
Additional Context
The use of memoization previously provided a workaround, but the current scenario demands a more scalable solution, especially for environments with Apache/Nginx workers.
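For completeness, the memoization workaround within a single process can be as small as the sketch below: build the expensive CRS-loaded instance once and hand every worker goroutine the same pointer. The `waf` type and `getWAF` name are hypothetical stand-ins for a configured coraza.WAF. Note this only helps threads/goroutines in one process; sharing across separate Apache/Nginx worker processes is the harder problem this issue raises, since Go offers no direct way to share such heap structures between processes.

```go
package main

import (
	"fmt"
	"sync"
)

// waf stands in for a fully configured *coraza.WAF (hypothetical type);
// a WAF instance can serve many concurrent transactions, so one copy
// can back every worker in the process.
type waf struct{ directives string }

// getWAF builds the shared instance exactly once, no matter how many
// workers call it: the expensive CRS parse happens a single time.
var getWAF = sync.OnceValue(func() *waf {
	return &waf{directives: "Include crs-setup.conf"} // placeholder directive
})

func main() {
	var wg sync.WaitGroup
	seen := make(chan *waf, 8)
	for i := 0; i < 8; i++ { // 8 concurrent "workers"
		wg.Add(1)
		go func() {
			defer wg.Done()
			seen <- getWAF()
		}()
	}
	wg.Wait()
	close(seen)

	first := <-seen
	same := true
	for w := range seen {
		same = same && w == first
	}
	fmt.Println("all workers share one instance:", same)
	// prints "all workers share one instance: true"
}
```

`sync.OnceValue` requires Go 1.21+; on older versions the same pattern is a `sync.Once` guarding a package-level variable.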
Your insights and suggestions on addressing this issue would be greatly appreciated.