I start wolff producers during app initialization with wolff:ensure_supervised_producers/3, but the call fails if my Kafka server is unavailable. I need the producers to start anyway, so they can queue messages via replayq and replay them once the connection is established, just like when the connection to Kafka is lost after the app has started.
Is there any way to achieve this with wolff?
I see that in previous versions producers were kept running after initial connection failure, but this commit e45b4ed changed this behavior, and I don't fully understand why.
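For reference, a minimal sketch of the kind of startup code described above; the client id, hosts, topic, and replayq directory are placeholders, and the exact config values are assumptions about a typical wolff setup, not taken from the real application:

```erlang
%% Sketch only: names and config values here are illustrative placeholders.
start_producers() ->
    ClientId = my_client,
    {ok, _Pid} = wolff:ensure_supervised_client(ClientId,
                                                [{"localhost", 9092}],
                                                #{}),
    %% If Kafka is unreachable at this point, the call returns an error
    %% instead of starting producers that buffer into replayq.
    wolff:ensure_supervised_producers(ClientId, <<"my_topic">>,
                                      #{replayq_dir => "/tmp/wolff_replayq"}).
```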
Hi @SukhikhN
The challenge is that if Kafka is not available, there is no way to know the number of partitions etc., so the producers cannot be started.
Commit e45b4ed made the producers get deleted if they fail to initialize. Before this commit, the producer processes would keep waiting for initialization to complete and then hopefully start working once Kafka came back. However, the old behavior does not solve your problem either, because replayq would not be ready until Kafka is back and the producers are initialized.
To make this more resilient to Kafka failures, we would need to detach the replayq buffer from the producer processes (so it does not depend on the number of partitions etc.), which seems to be quite a big refactoring.
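One possible workaround under the current behavior is to retry producer creation from a helper process until Kafka becomes reachable, so app startup does not fail. This is only a sketch under that assumption; the function names, retry interval, and logging are illustrative and not part of wolff. Note that, as explained above, messages produced before the producers come up still have nowhere to be buffered, because replayq is not ready until the producers are initialized:

```erlang
%% Illustrative retry wrapper (not a wolff API): keep trying to create the
%% supervised producers until Kafka is reachable again.
ensure_producers_eventually(ClientId, Topic, ProducerCfg) ->
    spawn_link(fun() -> retry_loop(ClientId, Topic, ProducerCfg) end).

retry_loop(ClientId, Topic, ProducerCfg) ->
    case wolff:ensure_supervised_producers(ClientId, Topic, ProducerCfg) of
        {ok, _Producers} ->
            ok;
        {error, Reason} ->
            logger:warning("wolff producers not started: ~p; retrying", [Reason]),
            timer:sleep(5000),
            retry_loop(ClientId, Topic, ProducerCfg)
    end.
```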