Restructures & Improvements for Milk #8
base: main
Conversation
…files. This also restructures things a bit to make the code more reusable, such as moving database queries to their respective files along with related concerns.
…ore info. In this commit, we introduce pagination for the JSON API and a limit of 2,000 displayed users for the Text API. In doing so, we made additional changes to our queries and our schema, adding a `created_at` property to `Raid` and indexing `joined_at` for `RaidUser`. Furthermore, we now fetch the `RaidUser` rows and the total count of the `Raid` at the same time. Documentation has also been updated accordingly to accommodate these changes, with a new document introduced to discuss the JSON API.
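As a rough illustration of the pagination described above, here is a minimal sketch of the page/offset math a JSON route might use. The function name `paginate` and the `MAX_PAGE_SIZE` cap are invented for this example; the actual route and limits are defined in the PR itself.

```javascript
// Hypothetical sketch: translate a 1-based page number and requested
// page size into LIMIT/OFFSET values for a paginated JSON route.
// MAX_PAGE_SIZE is an assumed cap (not from the PR) so one request
// can never pull an entire 200,000-user log.
const MAX_PAGE_SIZE = 500;

function paginate(page, pageSize) {
  // Clamp the page size to [1, MAX_PAGE_SIZE] and the page to >= 1.
  const size = Math.min(Math.max(1, pageSize | 0), MAX_PAGE_SIZE);
  const p = Math.max(1, page | 0);
  return { limit: size, offset: (p - 1) * size };
}

// Example: page 3 at 100 users per page skips the first 200 rows.
console.log(paginate(3, 100)); // { limit: 100, offset: 200 }
```

The indexed `joined_at` column on `RaidUser` mentioned above is what would keep an ordered, offset-based query like this cheap.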
Pagination of …
strawberry milk 😍 your pr descriptions are so lovely to read, thank you for that miu ❤️ TIL you can use footnotes in them
Milk was created back when I wasn't very familiar with JavaScript and its ecosystem. As such, it suffers from some design flaws that were pretty bad for maintainability. This PR intends to resolve that and adds further changes, such as JSON support and documentation for both third parties and Beemo developers.
TODO
- bigint

Additional Notes
Following footnote 3, a little testing reveals that this is a real problem for our current plans: a sample run with 500,000 users took 5.83 s of computation time (likely faster on the dedicated machine) and produced over 36.66 MB of output. That doesn't look good, given that our largest log to date is Midjourney's 450,000-user log.
A key solution, still to be discussed, is to limit the number of users fetched to a fixed amount. The reasoning is that, now that we have a proper JSON route, third-party developers who want to use the logging service no longer need to be served by the text route (JSON will use pagination). Regular users who simply want a quick view of their raid would generally only need to see at most 100–500 users.
This solution remains up for internal debate.
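To make the proposed cap concrete, here is a minimal sketch of what truncating the text route might look like. The function name `renderTextLog` and the "… and N more" suffix are invented for illustration; the 2,000 figure matches the Text API limit mentioned earlier.

```javascript
// Hypothetical sketch: render at most `cap` users on the text route
// and note how many were omitted, pointing readers at the JSON API
// (which paginates) for the full list.
function renderTextLog(userIds, cap = 2000) {
  const shown = userIds.slice(0, cap);
  const omitted = userIds.length - shown.length;
  let out = shown.join("\n");
  if (omitted > 0) {
    out += `\n… and ${omitted} more users (use the JSON API for the full list)`;
  }
  return out;
}
```

With a cap like this, even a 450,000-user log only ever materializes 2,000 lines of text, sidestepping the multi-second, multi-megabyte responses measured above.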
Footnotes
Code simplification still needs to be looked into, and the documentation may change as a result, since it would likely affect our Kafka client's implementation, or even the API. ↩
Third-party clients may want to integrate with our logs, so we can simplify this process for them by letting them receive our logs in JSON format. This will especially help with plans for a web-based log viewer. ↩
We've had raid logs exceeding 200,000 users, and since we no longer serve static files in this version, this becomes a significant issue: we would have to fetch and cache 200,000 users. Our caching mechanism stores only the string content, which may save us some pain, but this still calls for pagination and likely better caching to go with it. ↩ ↩2
Testing of the Kafka client is currently needed, but is blocked by the pending merge of the new antiraid refactor. As such, this will remain a draft until testing is performed (this Pull Request should not be merged until we've completed testing the Kafka client). ↩