Problem

Currently, the `start_block` param of the `on_()` decorator only has an effect on the `PollingRunner` (silverback/silverback/application.py, line 128 at 8b313b1).

I think this may be a point of confusion, since it behaves differently depending on the runner. It sure surprised me seeing blocks from 1M blocks ago during testing. I think it's worth considering either documenting it, removing it, or somehow supporting it in the `WebsocketRunner`.
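For reference, usage looks roughly like the sketch below. This is based on my reading of the decorator; the block height is arbitrary and `chain.blocks` is just one example of a subscribable container.

```python
from ape import chain
from silverback import SilverbackApp

app = SilverbackApp()

# start_block requests historical blocks too -- but only the
# PollingRunner honors it; the WebsocketRunner ignores it entirely.
@app.on_(chain.blocks, start_block=18_000_000)
def handle_block(block):
    print(block.number)
```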
Possible solutions

I think there are a few options here:

1. Document the current behavior, and maybe show a warning when using websockets.
2. Remove it completely. This would be easy, and we wouldn't need to consider historical events. For instance, the ApePay use case does event backlog processing during the Silverback startup event, so maybe historical processing is unnecessary in Silverback.
3. Use it as a filter in the `WebsocketRunner`: don't process events from before `start_block`. This could serve as a kind of "deploy code before it's available on chain" mechanism (see the sketch after this list).
4. Backfill events and blocks by polling the missed block range even when using websockets. This would be RPC-expensive, but would be more akin to a "traditional" listener that can pick up where it left off. The complexity is an ongoing issue with this implementation, however.
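To make option 3 concrete, here is a minimal sketch of what the filter could look like in a websocket-based runner. Everything here (the function shape, `task.start_block`, `event.block_number`, `runner.submit_job`) is hypothetical, not Silverback's actual internals:

```python
# Hypothetical sketch of option 3: drop anything older than start_block.
# `runner`, `task`, and `event` are stand-ins, not real Silverback types.
async def dispatch_event(runner, task, event):
    # Ignore events emitted before the task's configured start_block,
    # so handlers can ship before the contract they watch is deployed.
    if task.start_block is not None and event.block_number < task.start_block:
        return
    await runner.submit_job(task, event)
```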
vany365 changed the title from "discussion: start_block parameter in decorator" to "discussion: start_block parameter in decorator [SBK-360]" on Nov 15, 2023.
In general, the guidance should be that it only supports streaming new data events.

The historical handling is useful for restarts, but I think once we finally utilize the task queue correctly, it will just collect unhandled items that a restarted worker picks up where it left off, and we wouldn't need to backtrack anything.
Unhandled jobs aren't really the same thing as missed blocks/events, though. `start_block` causes the runner to queue up jobs for historical blocks/events during startup when polling.
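To illustrate the distinction, the polling-side behavior amounts to roughly the sketch below (all names illustrative, not Silverback's actual code). The backlog is synthesized at startup from the `start_block`-to-head range, which is different from a queue that merely persists jobs a dead worker never finished:

```python
# Illustrative sketch of the polling-startup behavior described above.
# `queue` is an asyncio.Queue; `chain_head` and the job tuples are made up.
def backfill_jobs(queue, chain_head: int, start_block: int):
    # Every historical block becomes a job, so a start_block set
    # 1M blocks behind head enqueues a million jobs at startup.
    for number in range(start_block, chain_head + 1):
        queue.put_nowait(("block", number))
```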