
discussion: start_block parameter in decorator [SBK-360] #44

Open
mikeshultz opened this issue Nov 15, 2023 · 2 comments

Comments

@mikeshultz
Contributor

Problem

Currently the `start_block` param of the `on_()` decorator only has an effect on the `PollingRunner`.

```python
start_block: Optional[int] = None,
```

I think this may be a point of confusion since it behaves differently depending on the runner; it certainly surprised me to see blocks from ~1M blocks ago during testing. I think it's worth considering either documenting it, removing it, or somehow supporting it in the `WebsocketRunner`.

Possible solutions

I think there are a few options here.

  • Document the current behavior, and maybe emit a warning when using websockets.
  • Remove it completely. This would be easy, and we wouldn't need to consider historical events. For instance, the ApePay use case does event backlog processing during the Silverback startup event, so maybe historical processing is unnecessary in Silverback.
  • Use it as a filter in `WebsocketRunner`: don't process events from before `start_block`. This could serve as a kind of "deploy code before it's available on chain" mechanism.
  • Backfill events and blocks by polling the missed block range even when using websockets. This would be RPC-expensive, but it would be more akin to a "traditional" listener that can pick up where it left off. The added complexity, however, would be an ongoing burden for this implementation.
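The third option above can be sketched in plain Python. This is a minimal illustration, not Silverback's actual API: the function name `filter_by_start_block` and the event dict shape are hypothetical, chosen only to show the "drop anything mined before `start_block`" behavior a `WebsocketRunner` could apply.

```python
from typing import Iterable, Iterator, Optional


def filter_by_start_block(
    events: Iterable[dict],
    start_block: Optional[int] = None,
) -> Iterator[dict]:
    """Yield only events mined at or after `start_block`.

    With start_block=None (the current default), everything passes through,
    which matches today's websocket behavior of streaming all new events.
    """
    for event in events:
        if start_block is not None and event["block_number"] < start_block:
            continue  # event predates start_block; silently skip it
        yield event


# Hypothetical stream: one event before the cutoff, two at/after it.
stream = [
    {"block_number": 99, "name": "Transfer"},
    {"block_number": 100, "name": "Transfer"},
    {"block_number": 101, "name": "Approval"},
]
filtered = list(filter_by_start_block(stream, start_block=100))
```

With `start_block=100`, only the events at blocks 100 and 101 survive; the block-99 event is dropped before any handler runs.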
@vany365 vany365 changed the title discussion: start_block parameter in decorator discussion: start_block parameter in decorator [SBK-360] Nov 15, 2023
@fubuloubu
Member

In general, guidance should be that it only supports streaming new data events.

The historical handling is useful for restarts, but I think when we finally utilize the task queue correctly, it will just start collecting unhandled items in the queue that a restarted worker would begin processing on restart, and we wouldn't need to backtrack anything.

@mikeshultz
Contributor Author

> In general, guidance should be that it only supports streaming new data events.
>
> The historical handling is useful for restarts, but I think when we finally utilize the task queue correctly, it will just start collecting unhandled items in the queue that a restarted worker would begin processing on restart, and we wouldn't need to backtrack anything.

Unhandled jobs aren't really the same thing as missed blocks/events, though. `start_block` causes the runner to queue up jobs for historical blocks/events during startup when polling.
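That startup backfill can be sketched as follows. This is an illustrative model, not the `PollingRunner` implementation: `backfill_block_numbers` is a hypothetical helper showing which historical block jobs would be enqueued before live polling begins, which is exactly the work a task queue of unhandled jobs would never contain.

```python
from typing import List, Optional


def backfill_block_numbers(
    start_block: Optional[int], head_block: int
) -> List[int]:
    """Block numbers to enqueue as historical jobs at startup.

    With start_block=None there is no backfill, mirroring the decorator's
    default; otherwise every block from start_block through the current
    head gets a job before new blocks start streaming in.
    """
    if start_block is None or start_block > head_block:
        return []  # nothing historical to process
    return list(range(start_block, head_block + 1))


# At startup with the chain head at block 105 and start_block=100,
# six historical jobs (blocks 100..105) are queued up front.
jobs = backfill_block_numbers(100, 105)
```

A restarted worker draining a task queue would only see jobs that were enqueued while it was running; the blocks in `jobs` above predate the queue entirely, which is the distinction being made here.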


2 participants