fix_: limit the maximum number of message hashes by query hash #5688
Conversation
LGTM
I think we can add this to the release if we dogfood the changes and see no issues, especially if it helps improve store node performance.
Sounds good, I'll create a dogfooding PR, and hopefully we can do a quick dogfooding session early next week to get this merged ASAP! 🚀
LGTM
Team, do you think this could be added to the release or is it too risky at this point?
@richard-ramos, code looks good, but before merging directly into the release branch we can ask the mobile QA team @status-im/mobile-qa to have a look (we just need to quickly create a mobile PR pointing to this branch).
It would be less risky if `FetchHistory` were covered by tests.
I have created the following test PRs:
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
But isn't pagination already setting this limit? 🤔
And why is it better to do a few parallel requests (to the same store node) rather than sequential ones?
The query we do kinda looks like this:
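(A minimal illustrative sketch; the table and column names below are assumptions, not nwaku's actual schema.)

```go
// Illustrative only: on the server side, every requested message hash ends
// up in a single IN clause of one SELECT. Names here are assumptions.
const storeQuery = `
    SELECT messageHash, payload
    FROM messages
    WHERE messageHash IN (?, ?, ?, ...)  -- one placeholder per requested hash
`
```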
Regarding sequential vs. parallel: I believe sending smaller requests concurrently is the better approach, as it can significantly reduce the overall time needed to retrieve all the data. I'm not 100% sure nwaku will really benefit from this change, but smaller concurrent requests have several advantages over one large sequential query (see the sketch below).
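A minimal sketch of the idea, not the actual status-go implementation: `fetchInBatches` and `queryStore` are hypothetical names standing in for the real store-node request. The hash list is split into batches of at most 50 (the cap this PR introduces), and each batch is queried concurrently.

```go
import (
	"context"
	"sync"
)

const maxHashesPerQuery = 50 // the cap this PR introduces

// fetchInBatches splits hashes into batches of at most maxHashesPerQuery
// and issues one concurrent store query per batch. queryStore is a
// hypothetical stand-in for the real store-node request.
func fetchInBatches(ctx context.Context, hashes []string,
	queryStore func(context.Context, []string) error) error {
	numBatches := (len(hashes) + maxHashesPerQuery - 1) / maxHashesPerQuery
	errs := make(chan error, numBatches)
	var wg sync.WaitGroup

	for start := 0; start < len(hashes); start += maxHashesPerQuery {
		end := start + maxHashesPerQuery
		if end > len(hashes) {
			end = len(hashes)
		}
		batch := hashes[start:end]
		wg.Add(1)
		go func() { // each batch becomes its own concurrent store query
			defer wg.Done()
			errs <- queryStore(ctx, batch)
		}()
	}

	wg.Wait()
	close(errs)
	for err := range errs {
		if err != nil {
			return err
		}
	}
	return nil
}
```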
Co-authored-by: Igor Sirotin <[email protected]>
But... shouldn't we change the query on the server (nwaku) side then, rather than modifying clients so they don't overload the servers? 🤔
Ok, I guess it makes sense in this case 👍 Though, again, some of these points sound like we're trying to make clients use servers more efficiently, when we could instead make the servers smarter so they work efficiently even with dumb clients 😄 (e.g., to utilize the PostgreSQL connection pool, the SQL request could be split into multiple requests on the server side, as sketched below).
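A rough sketch of that server-side alternative, purely illustrative and not nwaku code: the store node itself chunks the large IN clause and runs the chunks concurrently, so each chunk can use its own connection from the pool that `*sql.DB` maintains. The function name, chunk size, and table/column names are all assumptions.

```go
import (
	"context"
	"database/sql"
	"fmt"
	"strings"

	"golang.org/x/sync/errgroup"
)

// queryHashesChunked is a hypothetical server-side helper: it splits one
// big IN query into chunks and runs them concurrently over the pooled
// connections of *sql.DB. Table/column names are assumptions.
func queryHashesChunked(ctx context.Context, db *sql.DB, hashes []string, chunkSize int) error {
	g, gctx := errgroup.WithContext(ctx)
	for start := 0; start < len(hashes); start += chunkSize {
		end := start + chunkSize
		if end > len(hashes) {
			end = len(hashes)
		}
		batch := hashes[start:end]
		g.Go(func() error {
			ph := make([]string, len(batch))
			args := make([]any, len(batch))
			for i, h := range batch {
				ph[i] = fmt.Sprintf("$%d", i+1) // PostgreSQL-style placeholders
				args[i] = h
			}
			rows, err := db.QueryContext(gctx,
				"SELECT messageHash, payload FROM messages WHERE messageHash IN ("+
					strings.Join(ph, ",")+")", args...)
			if err != nil {
				return err
			}
			defer rows.Close()
			// A real implementation would scan and merge the rows here.
			return rows.Err()
		})
	}
	return g.Wait()
}
```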
Closing: we decided not to go with this PR for 2.30, as described in status-im/status-mobile#21021 (comment).
> Team, do you think this could be added to the release or is it too risky at this point?
The problem with not adding this code is that, if missing message verification is enabled, there is no limit on the number of missing message hashes to request, which can increase the load on the store nodes. With this change we cap the number of message hashes per query at 50.
The equivalent PR for the `develop` branch will be created later, once waku-org/go-waku#1190 is merged.