Accommodate slower NewRelic initialization (+goodies) #2351
Conversation
LGTM
@@ -5,7 +5,7 @@
 import gunicorn  # type: ignore
 import newrelic.agent  # See https://bit.ly/2xBVKBH

-newrelic.agent.initialize()  # noqa: E402
+newrelic.agent.initialize(environment=os.getenv("NOTIFY_ENVIRONMENT"))  # noqa: E402
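For reference, a minimal sketch of what the top of the gunicorn config module might look like with this change applied (the file name is an assumption based on the diff context; os must be imported for os.getenv to resolve):

# gunicorn_config.py (assumed name) -- top-of-module New Relic initialization
import os

import gunicorn  # type: ignore
import newrelic.agent  # See https://bit.ly/2xBVKBH

# Pass the current environment so the agent can apply any matching
# [newrelic:<environment>] override section from newrelic.ini.
newrelic.agent.initialize(environment=os.getenv("NOTIFY_ENVIRONMENT"))  # noqa: E402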
I wonder if this will solve our issue with the lambda in dev
I would guess no, unless we specify some special configurations in the newrelic.ini file meant for the dev env.
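For illustration only: the New Relic Python agent supports per-environment override sections in newrelic.ini, selected by the environment name passed to initialize(). A hypothetical dev override could look like the following; the section name and values are assumptions, not the repo's actual configuration:

[newrelic]
app_name = notification-api
monitor_mode = true

[newrelic:dev]
# Section name must match the value of NOTIFY_ENVIRONMENT (e.g. "dev").
monitor_mode = false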
ya, this looks ok @jimleroyer
Summary | Résumé
New Relic initialization slowed down starting with version 8.10.1 due to additional package metadata retrieval, adding an extra 10 to 30 seconds (a regression that should not have happened in a minor release).
What exacerbates this during Kubernetes pod initialization is that the pod timeout is set to 30 seconds by default. The New Relic initialization therefore exceeds the default pod timeout, the workers get restarted, the whole initialization starts over, and the pod can enter a self-triggered crash loop.
Hence a first step to smooth this out is to increase the pod timeout as well as the gunicorn timeouts. To begin with, we are increasing the graceful period by 5 seconds and the gunicorn default timeout by 10 seconds (the default was 20 seconds for both gunicorn settings), as sketched below.
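For illustration, assuming the graceful period refers to gunicorn's graceful_timeout and the previous values were the 20 seconds mentioned above, the change amounts to something like the following in the gunicorn config (names and placement are assumptions):

# Give workers more headroom so the slower New Relic start-up does not
# hit the worker timeout and trigger a restart loop.
timeout = 30            # was 20; +10 seconds
graceful_timeout = 25   # was 20; +5 seconds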
The newrelic.ini file should also pick up the proper environment now, as it was not passed in properly before; that is now fixed.
Related Issues | Cartes liées
Test instructions | Instructions pour tester la modification
TODO: Fill in test instructions for the reviewer.
Release Instructions | Instructions pour le déploiement
None.
Reviewer checklist | Liste de vérification du réviseur