Pub/sub: ~300 ms latency #434
@nodakai I think if you change the code in the subscriber to:

```rust
// while let NodeEvent::Tick = node.wait(CYCLE_TIME) {
//     while let Some(sample) = subscriber.receive()? {
while let NodeEvent::Tick = node.wait(Duration::ZERO) {
    if let Some(sample) = subscriber.receive()? {
        let tr = current_time();
        let dt = tr - sample.i;
        println!("received: {:?} delay: {:.1} us", *sample, dt as f64 * 1e-3);
    }
}
```

it would fix the issue. If you do not want to perform a busy wait, you can combine the publish-subscribe service with an event service and fire an event after the publisher has sent the message. On the subscriber side you wait on a listener until you have received the event and then receive your sample on the subscriber. For the event you can find an example here: https://github.com/eclipse-iceoryx/iceoryx2/tree/main/examples/rust/event
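The wait/notify pattern the event service enables (block until notified instead of polling) can be illustrated with plain standard-library primitives. This is a sketch of the pattern only, not iceoryx2's Listener/Notifier API; the value `42` and the helper name are illustrative:

```rust
use std::sync::{Arc, Condvar, Mutex};
use std::thread;

// Wait/notify pattern: the consumer blocks on a condition variable
// instead of busy-polling; the producer notifies after writing.
fn produce_and_wait() -> u32 {
    let pair = Arc::new((Mutex::new(0u32), Condvar::new()));
    let pair2 = Arc::clone(&pair);

    // "Publisher" side: write the sample, then fire the event.
    let producer = thread::spawn(move || {
        let (lock, cvar) = &*pair2;
        *lock.lock().unwrap() = 42;
        cvar.notify_one();
    });

    // "Subscriber" side: sleep until notified; no CPU cycles burned.
    // The loop guard also handles the notify-before-wait race.
    let (lock, cvar) = &*pair;
    let mut value = lock.lock().unwrap();
    while *value == 0 {
        value = cvar.wait(value).unwrap();
    }
    let received = *value;
    drop(value);
    producer.join().unwrap();
    received
}

fn main() {
    println!("received: {}", produce_and_wait());
}
```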
Hmm, applying the suggested change, it's still much slower than these numbers, so I'll have to continue investigating.
@nodakai Another problem can be this call. This is a common method called a ping-pong benchmark.
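The ping-pong idea can be sketched with plain std channels: bounce a message between two threads, time the round trips, and halve the result for a one-way estimate. This sketch uses std::sync::mpsc rather than iceoryx2 services, so the absolute numbers will differ from shared-memory transport:

```rust
use std::sync::mpsc;
use std::thread;
use std::time::Instant;

// Ping-pong benchmark: time `rounds` round trips between two threads
// and halve the mean to estimate one-way latency.
fn mean_one_way_latency_ns(rounds: u32) -> f64 {
    let (ping_tx, ping_rx) = mpsc::channel::<u32>();
    let (pong_tx, pong_rx) = mpsc::channel::<u32>();

    // The "pong" side echoes every message straight back.
    let echo = thread::spawn(move || {
        for msg in ping_rx {
            pong_tx.send(msg).unwrap();
        }
    });

    let start = Instant::now();
    for i in 0..rounds {
        ping_tx.send(i).unwrap(); // ping
        pong_rx.recv().unwrap();  // pong
    }
    let elapsed = start.elapsed();

    drop(ping_tx); // close the channel so the echo thread exits
    echo.join().unwrap();

    // One round trip = two one-way hops.
    elapsed.as_nanos() as f64 / rounds as f64 / 2.0
}

fn main() {
    println!("~{:.0} ns one-way", mean_one_way_latency_ns(10_000));
}
```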
@nodakai Oh, could you adjust your code to:

```rust
// while let NodeEvent::Tick = node.wait(CYCLE_TIME) {
//     while let Some(sample) = subscriber.receive()? {
loop {
    if let Some(sample) = subscriber.receive()? {
        let tr = current_time();
        let dt = tr - sample.i;
        println!("received: {:?} delay: {:.1} us", *sample, dt as f64 * 1e-3);
    }
}
```
Since you are working on an aarch64 target, you should be able to expect single-digit-microsecond latency, see: https://raw.githubusercontent.com/eclipse-iceoryx/iceoryx2/refs/heads/main/internal/plots/benchmark_architecture.svg For the Raspberry Pi 4B we achieved a latency of ~800 ns.
Removing that, it still lags noticeably behind your Rasp Pi result. I know I shouldn't expect ultimate performance from a shared, HT-enabled cloud machine, but 0.8 us vs. 5 us seems weird. Was your Rasp Pi result obtained with busy looping (100 % CPU core consumption)? I already verified that the following test yields ~50 ns on the target Linux machine, so I'm not worried about the accuracy/overhead of the timer:

```rust
loop {
    let t0 = current_time();
    let t1 = current_time();
    println!("t1 - t0 = {}", t1 - t0);
    std::thread::sleep(CYCLE_TIME);
}
```
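For reference, that overhead check can be made self-contained; this sketch stands std::time::Instant in for the thread's (undefined here) current_time() helper and drops the sleep so it runs in one shot:

```rust
use std::time::Instant;

// Measure the cost of two back-to-back monotonic clock reads.
fn clock_read_overhead_ns() -> u128 {
    let t0 = Instant::now();
    let t1 = Instant::now();
    (t1 - t0).as_nanos()
}

fn main() {
    // A few iterations, since the first read may be cold.
    for _ in 0..5 {
        println!("t1 - t0 = {} ns", clock_read_overhead_ns());
    }
}
```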
@nodakai I hesitate to ask, but are you building in release mode? On my system, debug builds are ~6.5 times slower than release builds. |
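For completeness, a latency benchmark should be built with `cargo build --release` (debug is Cargo's default). The profile below shows the relevant Cargo.toml knobs; `opt-level = 3` is already the release default, and the LTO settings are illustrative tuning, not a project requirement:

```toml
# Cargo.toml -- release profile (illustrative tuning)
[profile.release]
opt-level = 3     # default for release builds; shown for clarity
lto = true        # cross-crate inlining; can help hot paths
codegen-units = 1 # trades compile time for better optimization
```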
Yes, our benchmarks are part of the repo and you can run them yourself: https://github.com/eclipse-iceoryx/iceoryx2/tree/main/benchmarks
We have never run our benchmark suite on such a machine, but it would be good to add it. One idea is that the hardware is emulated and you pay for it with latency? But this is just a wild guess.
Surely using
Thanks, I'll give it a try. However, it seems that there’s quite a bit of parameter tuning involved. If the examples in the README aren’t reflective of typical usage, there might be room for improvement in the API design. |
You could divide the parameter tuning into two fields: deployment and iceoryx2.
Required information
Operating system:
Linux db 6.1.0-25-cloud-arm64 #1 SMP Debian 6.1.106-3 (2024-08-26) aarch64 GNU/Linux
Rust version:
rustc 1.81.0 (eeb90cda1 2024-09-04)
Cargo version:
cargo 1.81.0 (2dbb1af80 2024-08-20)
iceoryx2 version:
Detailed log output:
Observed result or behaviour:
received: Msg { data: 1234, i: 1728059776063045068 } delay: 289894.3 us
Expected result or behaviour:
single digit us latency
Conditions where it occurred / Performed steps: