Add trigger ability for batcher #833
Conversation
Add triggering ability to batcher so it can evaluate deadlines and thresholds on demand.

The approach is simple: any activity on sent or received data on any peer will trigger the batcher. This is built on the assumption that triggering on incoming data would sync the batchers between two devices. However, triggering only between two devices would leave other devices unsynced, thus the simplified approach works even better to "sync the clocks" across all the nodes.

Side effects: consider having 3 interconnected peers: A, B and C. Peer C is idling and A streams data to B. Now A and B are each triggered on every packet and in turn send premature keepalives to C at T_new = T_orig - threshold. Triggering will happen mostly on wg_consolidate(), which happens every second.

Signed-off-by: Lukas Pukenis <[email protected]>
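The premature-keepalive side effect (T_new = T_orig - threshold) can be sketched as a plain deadline check. This is an illustrative model only; `should_send_keepalive` and its parameters are hypothetical names, not the actual batcher API:

```rust
// Hypothetical sketch of a triggered deadline check with a threshold.
// `elapsed` is seconds since the last keepalive, `interval` is the
// nominal keepalive period, and `threshold` allows firing early,
// which is why a trigger caused by unrelated traffic can send a
// premature keepalive to an idle peer.
fn should_send_keepalive(elapsed: u64, interval: u64, threshold: u64) -> bool {
    elapsed + threshold >= interval
}

fn main() {
    // Nominal 25s keepalive with a 5s threshold: a trigger at 20s
    // already fires (the premature keepalive to idling peer C).
    assert!(should_send_keepalive(20, 25, 5));
    // A trigger at 19s is still outside the threshold window.
    assert!(!should_send_keepalive(19, 25, 5));
    println!("ok");
}
```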
@@ -37,56 +50,76 @@ where
Self {
    actions: HashMap::new(),
    notify_add: tokio::sync::Notify::new(),
Maybe you could import it? The code seems a bit cluttered with the fully qualified `tokio::sync::Notify`.
I personally prefer not to import things that are only used once in a file; it simplifies things, since there is less to keep track of regarding what we import and what we use from the imports. In this case there is no import of `Notify`, so it's clear it is only used occasionally. Of course, this is more a matter of discipline; I could use `tokio::sync::Notify` multiple times, no problem :)
@@ -15,7 +15,7 @@ use telio_task::{task_exec, BoxAction, Runtime, Task};
use telio_utils::{
    dual_target, repeated_actions, telio_log_debug, telio_log_warn, DualTarget, RepeatedActions,
};

use telio_wg::NetworkActivityGetter;
Some newlines around it would be nice.
I believe `cargo fmt` should deal with it.
}
}
}
Why are you removing it? Comment just after a bracket looks weird imho.
if let (Some(tx_ts), Some(rx_ts)) = (s.get_tx_ts(), s.get_rx_ts()) {
    self.network_activity_ts = Some(TxRxTimestampPair { tx_ts, rx_ts });
}
Hmm, why do you need to store it? Couldn't you just run these methods in `get_ts`?
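To make the trade-off concrete, here is a self-contained sketch of the two options being discussed: caching the tx/rx timestamp pair on each poll (as in the diff) versus computing it on demand in `get_ts`. `TxRxTimestampPair` mirrors the name in the diff, but the `Stats` getters and `Keeper` struct here are stand-ins, not the real telio API:

```rust
// Illustrative comparison, not the real telio types.
#[derive(Clone, Copy, Debug, PartialEq)]
struct TxRxTimestampPair {
    tx_ts: u64,
    rx_ts: u64,
}

// Stand-in for the WireGuard stats source.
struct Stats {
    tx: Option<u64>,
    rx: Option<u64>,
}

impl Stats {
    fn get_tx_ts(&self) -> Option<u64> { self.tx }
    fn get_rx_ts(&self) -> Option<u64> { self.rx }
}

struct Keeper {
    network_activity_ts: Option<TxRxTimestampPair>,
}

impl Keeper {
    // Cached variant (as in the PR): store the pair when both
    // timestamps are present, keeping the last known activity.
    fn poll(&mut self, s: &Stats) {
        if let (Some(tx_ts), Some(rx_ts)) = (s.get_tx_ts(), s.get_rx_ts()) {
            self.network_activity_ts = Some(TxRxTimestampPair { tx_ts, rx_ts });
        }
    }

    // On-demand variant (the reviewer's suggestion): compute the
    // pair directly from the getters each time it is requested.
    fn get_ts(&self, s: &Stats) -> Option<TxRxTimestampPair> {
        match (s.get_tx_ts(), s.get_rx_ts()) {
            (Some(tx_ts), Some(rx_ts)) => Some(TxRxTimestampPair { tx_ts, rx_ts }),
            _ => None,
        }
    }
}

fn main() {
    let mut k = Keeper { network_activity_ts: None };
    let live = Stats { tx: Some(100), rx: Some(120) };
    k.poll(&live);
    assert_eq!(
        k.network_activity_ts,
        Some(TxRxTimestampPair { tx_ts: 100, rx_ts: 120 })
    );

    // When the stats later report nothing, the cached value survives,
    // while the on-demand variant returns None.
    let idle = Stats { tx: None, rx: None };
    k.poll(&idle);
    assert_eq!(
        k.network_activity_ts,
        Some(TxRxTimestampPair { tx_ts: 100, rx_ts: 120 })
    );
    assert_eq!(k.get_ts(&idle), None);
    println!("ok");
}
```

One observable difference: the cached variant remembers the last activity across polls where the getters return `None`, which may be the reason for storing it.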
}
}

if tx_has_changed || rx_has_changed {
Hmm, shouldn't it be only `rx_has_changed`? And is `SessionKeeper` the best entity to handle the triggering? I thought it should also be done when the connection is in `Relayed` state?
From real-life experiments, both sending and receiving data cause drainage for a while, thus we care about both of those things happening.

Now about `SessionKeeper`: it is the best place, since it was the one used for keepalives (both batched and non-batched). The batcher is only used inside of it, for now only because we dealt with direct keepalives that were handled in `SessionKeeper`.

I would say that `SessionKeeper` has outlived its purpose and probably warrants a rename at this point, though :)
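The `tx_has_changed || rx_has_changed` condition under discussion amounts to simple change detection over the last seen timestamps. A minimal sketch, with illustrative names rather than the real `SessionKeeper` fields:

```rust
// Minimal sketch of "any activity on sent or received data triggers
// the batcher": remember the last seen tx/rx timestamps and report
// a trigger when either one changed. Illustrative only.
#[derive(Default)]
struct ActivityMonitor {
    last_tx: Option<u64>,
    last_rx: Option<u64>,
}

impl ActivityMonitor {
    /// Returns true if either direction saw new traffic since the
    /// last call. Both directions are checked because, per the
    /// author, both sending and receiving cause drainage.
    fn check(&mut self, tx: Option<u64>, rx: Option<u64>) -> bool {
        let tx_has_changed = tx != self.last_tx;
        let rx_has_changed = rx != self.last_rx;
        self.last_tx = tx;
        self.last_rx = rx;
        tx_has_changed || rx_has_changed
    }
}

fn main() {
    let mut m = ActivityMonitor::default();
    // First observed tx traffic: trigger.
    assert!(m.check(Some(1), None));
    // Nothing new since the last poll: no trigger.
    assert!(!m.check(Some(1), None));
    // New rx traffic alone also triggers.
    assert!(m.check(Some(1), Some(1)));
    println!("ok");
}
```

In the PR this check would run on each `wg_consolidate()` pass, i.e. roughly once per second.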
☑️ Definition of Done checklist