copy_maintainers script fails syncing the big teams #615
Comments
Is this still in use? @etobella, is what you did complementary to this?
Well, this is used for the OCA Contributors team; it is not maintained there 🤔
@pedrobaeza, what did @etobella do that might be complementary? I've never been much involved in the housekeeping, but it seems that if this fails, new contributors aren't added to the right GitHub team?
I'm talking about https://github.com/OCA/repo-maintainer-conf
I just tried to reproduce the team sync error, and... it succeeded, except for a bunch of problems related to invalid GitHub handles in our database. I'll check whether the error comes up again.
Last night it failed because of the rate limit; I assume that's because you used the gitbot user's token for your tests? You receive the error messages by mail too, right? I'm not sure how to continue here. Should we just accept that this fails sometimes? Is this a timing issue, and do we possibly just need to spread out the heavy scripts more over time?
We probably need to add a retry mechanism to the GitHub operations in copy_maintainers.py. This is low priority for me, but if I were to do it, I'd try tenacity or stamina.
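A minimal stdlib sketch of such a retry wrapper with exponential backoff (tenacity and stamina, mentioned above, provide this out of the box; the exception types, attempt count, and delays here are illustrative assumptions, not the actual copy_maintainers.py code):

```python
import functools
import time


def retry(attempts=5, base_delay=1.0, exceptions=(Exception,)):
    """Retry the wrapped function with exponential backoff on `exceptions`."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            delay = base_delay
            for attempt in range(1, attempts + 1):
                try:
                    return func(*args, **kwargs)
                except exceptions:
                    if attempt == attempts:
                        raise  # out of retries, propagate the last error
                    time.sleep(delay)
                    delay *= 2  # double the wait between attempts
        return wrapper
    return decorator


# Hypothetical usage: wrap a call that may hit a 503 or a rate limit.
@retry(attempts=3, base_delay=0.1, exceptions=(OSError,))
def add_team_member(team, login):
    ...  # GitHub API call would go here
```

With tenacity this would collapse to a `@retry(stop=stop_after_attempt(5), wait=wait_exponential())` decorator; the point is only that transient GitHub failures get a few spaced-out attempts before the script gives up.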
We call `copy_maintainers --team "OCA Contributors"` and `copy_maintainers --team "OCA Members"` daily to update the appropriate teams on GitHub. For a while now I have received nearly daily mails about one of those failing with a bad gateway error (503). Can just reading this char field for a few hundred users really take 10 minutes? (That's the proxy_read_timeout value of the OCA nginx server.) Or does nginx return 503 for some other reason? Buffer size, possibly? I can't check myself because I can't see nginx's error log on the OCA server.
Chunking should help, I guess? And if I touch this anyway, are there reservations against using OCA's own odoorpc instead of the unmaintained erppeek?
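The chunking idea could look like this: instead of one RPC read over all user ids, split the id list into fixed-size batches so each request stays well under the proxy timeout. A small sketch (the model, field, and batch size below are assumptions for illustration, not the script's actual names):

```python
def chunked(seq, size):
    """Yield successive slices of `seq` with at most `size` elements each."""
    for i in range(0, len(seq), size):
        yield seq[i:i + size]


# Hypothetical usage against an Odoo backend via odoorpc
# (model and field names are assumptions):
#
#   user_ids = odoo.env["res.users"].search([])
#   for batch in chunked(user_ids, 100):
#       records = odoo.execute("res.users", "read", batch, ["github_login"])
#       ...  # sync this batch to the GitHub team
```

Each batch is an independent request, so a single slow or failed read no longer takes the whole daily run down with it.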