
Fix data race in asio service #556

Merged 2 commits on Dec 4, 2024

Conversation

antonio2368
Contributor

We detected a data race with one of our tests
https://pastila.nl/?0003c12b/68530aca8742766996176fdebcede8dc#KBCq01QCyL4Qz2lDbsNzrA==

It's hard to parse all of that asio stack trace, but the race happens on operation_timer_ inside the async_resolve callback.
My theory is that async_connect resolved and had its callback called before we set up and started the timer.
So the problem should be solved by moving the timer setup before we start the async_connect.

Now for the general issue.
Asio in NuRaft runs a single io_context on multiple threads. This can lead to data races in general (e.g. the timer calling socket.cancel()). I'm not sure whether cancel on a socket is thread-safe (I assume it's not), but to avoid similar problems, shouldn't a strand be used for a single asio_rpc_client?
If you think I misunderstood something or there is a simpler solution, I'm open to discussion.
If not, I can introduce a strand for the client.

@greensky00
Contributor

@antonio2368
From the TSAN report you shared, the race was between

  1. asio_rpc_client::execute_resolve -> operation_timer_ access
  2. asio_rpc_client::execute_resolve -> asio_rpc_client::connected -> asio_rpc_client::send -> operation_timer_ access

and it happened because connected was super fast, so operation_timer_ was accessed by two threads concurrently, right?

If you are going to use a strand, are you going to use it for timers only, or for all the async executions of the same client?

@greensky00 greensky00 merged commit 50e2f94 into eBay:master Dec 4, 2024
1 check passed
@antonio2368
Contributor Author

antonio2368 commented Dec 4, 2024

it happened because connected was super fast so operation_timer_ was accessed by two threads concurrently, right

Yes exactly.

if you are going to use strand, are you going to use it for timers only? or all the async executions for the same client?

I think the safest option would be to use it for all async executions for the same client, e.g. I'm not sure whether cancel is thread-safe either.

@greensky00
Contributor

@antonio2368
My concern is that we recently developed a streaming mode to maximize performance in terms of both throughput and latency. In this mode, sending requests and receiving responses are designed to run in parallel, unlike the existing ping-pong mode. Thus, using a strand for the entire client might impact this performance.

If the concern is about asynchronous concurrent cancel() calls, why not use a mutex around the operation_timer_ APIs, including the cancel_socket() calls?

@antonio2368
Contributor Author

@greensky00 I didn't know about the streaming mode, looks interesting 👀
I was thinking about a strand for simplicity's sake because the communication wasn't complex, but a mutex should work equally well.
