
feat: Cache Streamer Message from Block Streamer #582

Merged: 1 commit into main on Feb 26, 2024

Conversation

morgsmccauley (Collaborator) opened this pull request.

No description provided.

```diff
@@ -20,7 +20,7 @@ export default class RedisClient {
   SMALLEST_STREAM_ID = '0';
   LARGEST_STREAM_ID = '+';
   STREAMS_SET_KEY = 'streams';
-  STREAMER_MESSAGE_HASH_KEY_BASE = 'streamer:message:';
+  STREAMER_MESSAGE_HASH_KEY_BASE = 'streamer_message:';
```
morgsmccauley (Collaborator, Author) commented on the changed line:

`:` signifies parent, but we don't need that sort of nesting here.
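
For illustration, a minimal sketch of how keys built from this base might look; the `getStreamerMessageKey` helper and the example block height are assumptions, not part of the diff:

```typescript
// Hypothetical helper illustrating the renamed key base. The old ':'-delimited
// base ('streamer:message:') read as a nested namespace ('streamer' ->
// 'message'); the underscore keeps the prefix flat.
const STREAMER_MESSAGE_HASH_KEY_BASE = 'streamer_message:';

function getStreamerMessageKey(blockHeight: number): string {
  return `${STREAMER_MESSAGE_HASH_KEY_BASE}${blockHeight}`; // e.g. 'streamer_message:112205773'
}
```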

morgsmccauley self-assigned this on Feb 26, 2024.
darunrs (Collaborator) left a comment:

LGTM! I think the Redis SET command has never even taken a single millisecond to complete, so we should have no problems with all of our streamers writing on each block.

The only issue is that we're now setting a block even during historical processing (index files done, or a `*` indexer), but I think a 60-second expiry would prevent that from being a problem. We can loop back to this after observing any changes to memory usage.
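
For context, a minimal sketch of the write being discussed, assuming the node-redis v4 client; the function name, the payload shape, and the connection details are illustrative assumptions, not taken from the PR:

```typescript
import { createClient } from 'redis';

const client = createClient({ url: 'redis://127.0.0.1:6379' });

// Hypothetical write path: cache the streamer message for a block under the
// renamed key base, with a 60-second expiry so entries written during
// historical processing age out rather than accumulating in memory.
async function cacheStreamerMessage(blockHeight: number, message: unknown): Promise<void> {
  await client.set(`streamer_message:${blockHeight}`, JSON.stringify(message), {
    EX: 60, // TTL in seconds, matching the expiry discussed above
  });
}

async function main(): Promise<void> {
  await client.connect();
  await cacheStreamerMessage(112205773, { block: {}, shards: [] }); // placeholder payload
  await client.quit();
}

main().catch(console.error);
```

SET with a TTL is O(1), so writing on every block stays cheap, and the expiry bounds memory even when a historical backfill writes many blocks in quick succession.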

morgsmccauley merged commit 78a83ae into main on Feb 26, 2024; 7 checks passed.
morgsmccauley deleted the feat/cache-streamer-message branch on February 26, 2024 at 22:15.
darunrs mentioned this pull request on Mar 8, 2024.