Async api, take 2 #65
Conversation
@kevinmehall what are your thoughts?
src/device_handle/async_api.rs
Outdated
return Self::handle_completed_transfer(transfer, new_buf);
}
// No completed transfers, so poll for some new ones
poll_transfers(self.device.context(), timeout);
There's a race condition (not a UB data race, just a plain old race condition) here: if the transfer you're waiting for completes on another thread after `self.completed` is unlocked above, but before `libusb_handle_events_timeout_completed` takes an internal libusb event handler lock, the wakeup is lost, and it blocks for an unknown amount of time until a different transfer completes, or until timeout. This is what the `completed` arg of `libusb_handle_events_timeout_completed` is meant to prevent. See this section of the libusb documentation. I believe that if structured correctly, the events lock inside libusb could be the only locking needed.
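
For reference, a minimal sketch of the wait pattern that documentation section describes, not this PR's code: the `extern` declaration mirrors libusb's C prototype (in rusb this would come from the `libusb1-sys` bindings), and `wait_for_completion` is a hypothetical helper.

```rust
use std::os::raw::{c_int, c_void};
use std::time::Duration;

extern "C" {
    // C prototype: int libusb_handle_events_timeout_completed(
    //     libusb_context *ctx, struct timeval *tv, int *completed);
    // The context is treated as an opaque pointer in this sketch.
    fn libusb_handle_events_timeout_completed(
        ctx: *mut c_void,
        tv: *mut libc::timeval,
        completed: *mut c_int,
    ) -> c_int;
}

/// `completed` is a per-wait flag that the transfer callback sets to 1.
/// libusb re-checks that flag under its own event lock, so a completion that
/// lands between our check and the call below is not lost.
unsafe fn wait_for_completion(ctx: *mut c_void, completed: *mut c_int, timeout: Duration) {
    let mut tv = libc::timeval {
        tv_sec: timeout.as_secs() as libc::time_t,
        tv_usec: timeout.subsec_micros() as libc::suseconds_t,
    };
    while *completed == 0 {
        // Returns as soon as *completed becomes non-zero instead of blocking
        // until some unrelated transfer (or the timeout) fires.
        if libusb_handle_events_timeout_completed(ctx, &mut tv, completed) != 0 {
            break; // real code would map the libusb error code here
        }
    }
}
```

The key point is that the flag is handed to libusb rather than checked outside it, which is exactly the window described above.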
I'm not sure I understand this (I am new to libusb 👀) - what is the exact ordering that causes the bug?
1. Thread 1, here: `let pop_result = { self.completed.lock().unwrap().pop_front() };` (transfer not ready yet, so `pop_result = None`)
2. Thread 2, in the callback: `transfer.completed_queue.lock().unwrap().push_back(transfer.pool_id)`
3. Thread 1: `poll_transfers(self.device.context(), timeout);` (which blocks for an arbitrary amount of time waiting for the next transfer)

This is known as the "lost wakeup" problem, also seen in queue implementations, etc.
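
To make the window concrete, here is a self-contained paraphrase of the loop under discussion with the three steps above marked in comments; `poll_transfers` is stubbed out and the names are simplified from the snippets in this thread.

```rust
use std::collections::VecDeque;
use std::sync::{Arc, Mutex};
use std::time::Duration;

// Stand-in for poll_transfers(self.device.context(), timeout): the real call
// blocks inside libusb until the next event or the timeout, whichever is first.
fn poll_transfers_stub(_timeout: Duration) {}

fn wait_next(completed: &Arc<Mutex<VecDeque<usize>>>, timeout: Duration) -> usize {
    loop {
        // Step 1 -- Thread 1 checks the queue; the transfer is not ready yet,
        // so pop_result is None.
        let pop_result = { completed.lock().unwrap().pop_front() };
        if let Some(id) = pop_result {
            return id;
        }
        // Step 2 -- the window: Thread 2's transfer callback runs right here
        // and pushes the pool id onto `completed`. Nothing records that a
        // wakeup happened.
        // Step 3 -- Thread 1 blocks waiting for the *next* libusb event, even
        // though the transfer it wants is already sitting in the queue.
        poll_transfers_stub(timeout);
    }
}
```

Nothing in this loop re-checks the queue while libusb is waiting, which is why the `completed` flag (or an equivalent handshake) has to be visible to libusb itself.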
src/device_handle/async_api.rs
Outdated
buf,
read_timeout,
);
transfer.submit()?;
Automatically submitting the transfers here makes for a somewhat awkward API for writes, because the first N transfers' data must be passed in to this function by iterator, and then the subsequent transfers are passed to `poll`.
Is there any way to avoid the immediate submission without making things more complicated? To do otherwise would mean having either a `first_time_polling` bool in the `AsyncPool` or a list of transfers that must initially be submitted. Maybe that's an acceptable level of complexity?

I haven't thought at all about a write API - I have never used libusb's async API to do writes (or anything other than bulk reads - this is my first time with libusb).
This is why I think submitting transfers should be separate from initialization, especially for writes. For reads I think it may also be a good idea to make submitting transfers explicit; for example, you may want to stop submitting new transfers but let the existing ones complete if you know ahead of time how much data you're trying to read.
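
One hypothetical shape for that split, purely illustrative (the type and method names below are not this PR's API or the later rework): construction only allocates transfers, and the caller submits reads and writes explicitly, which also gives a natural place to stop queueing new reads while letting outstanding ones drain.

```rust
use std::time::Duration;

struct Error; // stand-in for the library's error type

/// Hypothetical pool: owns the endpoint and a fixed set of pre-allocated,
/// not-yet-submitted transfers.
struct TransferPool {
    // device handle, endpoint, allocated transfers, completion queue, ...
}

struct Completed {
    data: Vec<u8>,
}

impl TransferPool {
    /// Allocate `num_transfers` transfers; nothing is submitted yet.
    fn new(num_transfers: usize) -> Self {
        unimplemented!()
    }

    /// Explicitly queue one IN transfer for a read of up to `len` bytes.
    fn submit_read(&mut self, len: usize) -> Result<(), Error> {
        unimplemented!()
    }

    /// Explicitly queue one OUT transfer carrying caller-provided data.
    fn submit_write(&mut self, data: Vec<u8>) -> Result<(), Error> {
        unimplemented!()
    }

    /// Block until the oldest outstanding transfer completes.
    fn wait(&mut self, timeout: Duration) -> Result<Completed, Error> {
        unimplemented!()
    }
}
```

With a split like this, knowing the total read size ahead of time just means not calling `submit_read` again while continuing to call `wait` until the outstanding transfers complete, and writes hand their data to `submit_write` at the moment of submission instead of through the constructor's iterator.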
I used your code as a starting point for some experimentation this weekend. This simplifies the synchronization and queuing substantially -- the only place that actually needs explicit synchronization beyond what libusb does internally is waiting for a transfer completion. I chose an approach that works like …

Also ended up reworking the API towards what I proposed in the other thread to better handle OUT transfers, and added a demo that has one thread writing to one endpoint and another thread reading from a second endpoint. I tested this using a device firmware that echoes back packets.
Thanks! I'll take a look later today or tomorrow.
Any progress on this? Async …
It seemed like the API ergonomics were still being worked out - I haven't had a chance to look at this in recent history. Shouldn't be hard to finish a PR but I just don't have the bandwidth rn.
Closed in favor of #143
This consolidates #64 and the ideas put forth in #62 to provide a pool-like api for async bulk reads. The implementation is marked as a draft because it is not complete.