After our large batch send test, there may be additional configuration we can apply to Celery and the prefork worker pool to help improve stability, but we want to make sure it doesn't come at the expense of performance; and if it does, we want to know what we can do to mitigate that.
We know this is ready to try out, but until we have more memory available to us in cloud.gov, we can't adequately test this in staging just yet.
I tested locally with 1,000 message sends and saw no difference in performance. It's hard to monitor CPU and memory meaningfully on a local machine, since the app has access to my whole laptop, but I think this is worth trying on staging. The goal would be a significant drop in CPU usage: right now, during a load test on staging, CPU usage easily reaches 5000% of what is allocated to us. cloud.gov is letting us get away with that at the moment, but at some point in the future we might get throttled if we don't behave better.
In addition to the Celery documentation itself, there are some other guides (e.g., Celery School's "The Prefork Worker Pool") that could help in figuring these pieces out.
Implementation Sketch and Acceptance Criteria
Security Considerations