Async api, take 2 #65
Closed · wants to merge 12 commits

Conversation

@TheButlah

This consolidates #64 and the ideas put forth in #62 to provide a pool-like api for async bulk reads. The implementation is marked as a draft because it is not complete.

@TheButlah marked this pull request as draft May 18, 2021 23:23
@TheButlah (Author)

@kevinmehall what are your thoughts?

return Self::handle_completed_transfer(transfer, new_buf);
}
// No completed transfers, so poll for some new ones
poll_transfers(self.device.context(), timeout);
Contributor

There's a race condition (not an UB data race, just a plain old race condition) here: if the transfer you're waiting for completes on another thread after self.completed is unlocked above, but before libusb_handle_events_timeout_completed takes libusb's internal event handling lock, the wakeup is lost, and this call blocks for an unknown amount of time until a different transfer completes or until the timeout expires. This is exactly what the completed arg of libusb_handle_events_timeout_completed is meant to prevent. See this section of the libusb documentation. I believe that, if structured correctly, the events lock inside libusb could be the only locking needed.

@TheButlah (Author) commented May 24, 2021

I'm not sure I understand this (I am new to libusb 👀): what is the exact ordering that causes the bug?

Contributor

1. Thread 1, here: let pop_result = { self.completed.lock().unwrap().pop_front() }; (the transfer is not ready yet, so pop_result = None)
2. Thread 2, in the callback: transfer.completed_queue.lock().unwrap().push_back(transfer.pool_id)
3. Thread 1: poll_transfers(self.device.context(), timeout); (which then blocks for an arbitrary amount of time waiting for the next transfer, even though the one it wants has already completed)

This is known as the "lost wakeup" problem, also seen in queue implementations and similar synchronization code.
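For readers new to this corner of libusb, here is a minimal sketch (not code from this PR) of the pattern the completed argument enables. The extern declaration mirrors libusb's C signature; wait_until_completed is a name invented for this example, and the transfer callback is assumed to set the flag to 1.

```rust
// Sketch only: how the `completed` flag closes the window described above.
// libusb re-checks the flag after taking its internal event lock, so a
// completion that lands between the queue check and the event wait is not
// missed.
use libc::timeval;
use std::os::raw::c_int;

#[repr(C)]
pub struct libusb_context {
    _private: [u8; 0],
}

extern "C" {
    fn libusb_handle_events_timeout_completed(
        ctx: *mut libusb_context,
        tv: *mut timeval,
        completed: *mut c_int,
    ) -> c_int;
}

// `completed` is shared with the transfer callback, which sets it to 1.
unsafe fn wait_until_completed(ctx: *mut libusb_context, completed: *mut c_int) {
    let mut tv = timeval { tv_sec: 1, tv_usec: 0 };
    while *completed == 0 {
        // Returns as soon as `*completed` becomes non-zero, instead of
        // blocking until some unrelated transfer completes or `tv` expires.
        libusb_handle_events_timeout_completed(ctx, &mut tv, completed);
    }
}
```

Note that the flag here is a plain c_int, matching libusb's C convention; the later comment in this thread switches to an AtomicBool for exactly that reason.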

buf,
read_timeout,
);
transfer.submit()?;
Contributor

Automatically submitting the transfers here makes for a somewhat awkward API for writes, because the first N transfers' data must be passed to this function via an iterator, while the subsequent transfers are passed to poll.

@TheButlah (Author) commented May 19, 2021

Is there any way to avoid the immediate submission without making things more complicated? Doing otherwise would require either a first_time_polling bool in the AsyncPool or a list of transfers that must initially be submitted. Maybe that's an acceptable level of complexity?

I haven't thought at all about a write API; I have never used libusb's async API to do writes (or anything other than bulk reads, as this is my first time with libusb).

Contributor

This is why I think submitting transfers should be separate from initialization, especially for writes. For reads I think it may also be a good idea to make submitting transfers explicit; for example, you may want to stop submitting new transfers but let the existing ones complete if you know ahead of time how much data you're trying to read.
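Purely as an illustration of that separation (every name below is invented for this sketch; it is not the API of this PR or of any branch referenced here), construction, submission, and polling could be three distinct steps, which lets a reader stop re-arming transfers once it knows how much data remains:

```rust
// Hypothetical interface sketch. Transfers are created up front but only
// queued when the caller explicitly submits them, which also gives writes a
// natural place to hand over their data.
use std::time::Duration;

type Result<T> = std::result::Result<T, Box<dyn std::error::Error>>;

pub struct AsyncPool {
    // device handle, allocated transfers, completion queue, ...
}

impl AsyncPool {
    /// Create the pool without submitting anything to the device.
    pub fn new(/* device handle, endpoint, transfer count */) -> Self {
        unimplemented!("interface sketch only")
    }

    /// Explicitly arm one transfer. For an IN endpoint `buf` is just capacity
    /// to read into; for an OUT endpoint it carries the data to write.
    pub fn submit(&mut self, buf: Vec<u8>) -> Result<()> {
        unimplemented!("interface sketch only")
    }

    /// Block until the next submitted transfer completes and return its
    /// buffer, letting the caller decide whether to resubmit it.
    pub fn poll(&mut self, timeout: Duration) -> Result<Vec<u8>> {
        unimplemented!("interface sketch only")
    }
}

/// Example use: keep re-arming buffers only while more data is still needed.
pub fn read_exactly(pool: &mut AsyncPool, total: usize) -> Result<Vec<u8>> {
    let mut out = Vec::with_capacity(total);
    while out.len() < total {
        let data = pool.poll(Duration::from_secs(1))?;
        out.extend_from_slice(&data);
        if out.len() < total {
            pool.submit(data)?;
        }
    }
    Ok(out)
}
```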

src/device_handle/async_api.rs (outdated, resolved)
src/device_handle/async_api.rs (outdated, resolved)
examples/read_async.rs (resolved)
src/device_handle/async_api.rs (outdated, resolved)
@TheButlah mentioned this pull request May 21, 2021
@kevinmehall (Contributor) commented May 24, 2021

I used your code as a starting point for some experimentation this weekend:
master...kevinmehall:km-pipe-approach

This simplifies the synchronization and queuing substantially: the only place that actually needs explicit synchronization beyond what libusb does internally is waiting for a transfer completion. I chose an approach that works like libusb_handle_events_timeout_completed, re-checking a flag after taking the events lock to avoid the lost wakeup, but implemented it from lower-level primitives as described in the libusb documentation. That lets the completed flag be an AtomicBool rather than a c_int, since I'm not convinced libusb's own usage of a plain int is free of data races. In this design, the only state shared with the callback is an AtomicBool, and the queue is just a VecDeque that needs no locking because it is only accessed through &mut self.
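For context, the lower-level pattern being described looks roughly like the sketch below. It is not the code from the linked branch: the extern declarations mirror libusb's public event-handling API, wait_for_completion is an invented name, and the transfer callback is assumed to do completed.store(true, Ordering::SeqCst).

```rust
// Rough sketch of waiting on an AtomicBool using libusb's event-handling
// primitives, re-checking the flag after taking either libusb lock so a
// completion that fires in between cannot be lost.
use libc::timeval;
use std::os::raw::c_int;
use std::sync::atomic::{AtomicBool, Ordering};

#[repr(C)]
pub struct libusb_context {
    _private: [u8; 0],
}

extern "C" {
    fn libusb_try_lock_events(ctx: *mut libusb_context) -> c_int;
    fn libusb_unlock_events(ctx: *mut libusb_context);
    fn libusb_event_handling_ok(ctx: *mut libusb_context) -> c_int;
    fn libusb_handle_events_locked(ctx: *mut libusb_context, tv: *mut timeval) -> c_int;
    fn libusb_lock_event_waiters(ctx: *mut libusb_context);
    fn libusb_unlock_event_waiters(ctx: *mut libusb_context);
    fn libusb_wait_for_event(ctx: *mut libusb_context, tv: *mut timeval) -> c_int;
}

unsafe fn wait_for_completion(ctx: *mut libusb_context, completed: &AtomicBool) {
    // A finite timeout keeps each blocking call bounded, so the outer loop
    // periodically re-checks the flag even if no event arrives.
    let mut tv = timeval { tv_sec: 1, tv_usec: 0 };
    while !completed.load(Ordering::SeqCst) {
        if libusb_try_lock_events(ctx) == 0 {
            // This thread is now the event handler. Re-check the flag *after*
            // taking the events lock so a completion that fired on another
            // thread in the meantime is still observed.
            if !completed.load(Ordering::SeqCst) && libusb_event_handling_ok(ctx) != 0 {
                libusb_handle_events_locked(ctx, &mut tv);
            }
            libusb_unlock_events(ctx);
        } else {
            // Another thread is handling events; wait on libusb's event
            // waiters condition variable, which libusb signals when event
            // handling completes.
            libusb_lock_event_waiters(ctx);
            if !completed.load(Ordering::SeqCst) {
                libusb_wait_for_event(ctx, &mut tv);
            }
            libusb_unlock_event_waiters(ctx);
        }
    }
}
```

The key point is that the flag is re-checked after acquiring either of libusb's locks, and the bounded timeout means a missed signal only costs one retry rather than an indefinite block.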

I also ended up reworking the API towards what I proposed in the other thread to better handle OUT transfers, and added a demo that has one thread writing to one endpoint and another thread reading from a second endpoint. I tested this using device firmware that echoes packets back.

@TheButlah (Author)

Thanks! I'll take a look later today or tomorrow.

@lachlansneff

Any progress on this? Async rusb would enable using WebUSB as a backend.

@TheButlah (Author) commented Jul 23, 2021

It seemed like the API ergonomics were still being worked out, and I haven't had a chance to look at this recently. It shouldn't be hard to finish the PR, but I just don't have the bandwidth right now.

@a1ien (Owner) commented Sep 4, 2022

Closed in favor of #143

@a1ien closed this Sep 4, 2022