Implement IntoFuture for WinRT async types #3177

Closed · wants to merge 3 commits

Conversation

@goffrie (Contributor) commented Jul 24, 2024

This reverts #3142 while also addressing #342.

I implement IntoFuture by wrapping each IAsyncOperation (and the other async interfaces) in a windows_core::FutureWrapper that holds a mutable Waker slot (an Arc<Mutex<Waker>>), so that we only need to call SetCompleted once. To deduplicate the implementation across the various async interfaces, I've added a trait, windows_core::AsyncOperation, that abstracts over the SetCompleted/etc. methods.

I believe the Arc is necessary as there is no way to cancel a SetCompleted callback once it is registered. The Mutex<Waker> could be an AtomicWaker instead, but that's a fairly complex piece of code that I don't want to pull in.

Additionally, the wrapper future automatically calls Cancel() when dropped, as is conventional for Rust futures.
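
In outline, the wrapper could look like the following minimal sketch, reconstructed from the snippets quoted in the review threads below. The is_complete/get_results methods and the Unpin bound are assumptions for illustration, not necessarily the PR's exact code:

```rust
use std::future::Future;
use std::pin::Pin;
use std::sync::{Arc, Mutex};
use std::task::{Context, Poll, Waker};

// Sketch of the trait abstracting over IAsyncAction/IAsyncOperation/etc.
// Only set_completed and cancel appear in the review snippets below;
// is_complete and get_results are assumptions for illustration.
trait AsyncOperation {
    type Output;
    fn set_completed(&self, callback: impl Fn() + Send + 'static);
    fn cancel(&self);
    fn is_complete(&self) -> bool;          // assumption
    fn get_results(&self) -> Self::Output;  // assumption
}

// Mirrors the FutureWrapper shown in the review below.
struct FutureWrapper<T: AsyncOperation> {
    inner: T,
    waker: Option<Arc<Mutex<Waker>>>,
}

impl<T: AsyncOperation + Unpin> Future for FutureWrapper<T> {
    type Output = T::Output;

    fn poll(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Self::Output> {
        let this = self.get_mut();
        if this.inner.is_complete() {
            return Poll::Ready(this.inner.get_results());
        }
        if let Some(saved) = &this.waker {
            // SetCompleted can only be registered once, so later polls just
            // swap the stored Waker instead of re-registering the callback.
            *saved.lock().unwrap() = cx.waker().clone();
        } else {
            let saved_waker = Arc::new(Mutex::new(cx.waker().clone()));
            this.waker = Some(saved_waker.clone());
            this.inner.set_completed(move || {
                saved_waker.lock().unwrap().wake_by_ref();
            });
        }
        Poll::Pending
    }
}

// Cancel the underlying WinRT operation when the future is dropped
// (discussed in the review below).
impl<T: AsyncOperation> Drop for FutureWrapper<T> {
    fn drop(&mut self) {
        self.inner.cancel();
    }
}
```

The key point is that SetCompleted is registered exactly once; subsequent polls only replace the Waker inside the shared slot.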

@goffrie (Contributor, Author) commented Jul 25, 2024

@microsoft-github-policy-service agree

@seritools (Contributor)
Yes please, we definitely use this, so we're stuck on an old windows-rs version for now 💀

@kennykerr (Collaborator)
Thanks, been away for a few days. Will review when I dig out from under this backlog.

@kennykerr (Collaborator) commented Aug 2, 2024

Adding an explanation to the readme of how and why IntoFuture is needed to support WinRT async in Rust would be helpful. By readme I mean the original PR description above. 🥲

@kennykerr (Collaborator)
The clippy build failure has already been addressed - you likely just need to merge master.


impl<T: AsyncOperation> Drop for FutureWrapper<T> {
    fn drop(&mut self) {
        self.inner.cancel();
    }
}
Collaborator:

WinRT cancellation isn't very widely supported, and it requires a fair amount of interlocking over and above the virtual function dispatch cost. I don't think we should call Cancel here just because it is "conventional in Rust".

/// This is used by generated `IntoFuture` impls. It shouldn't be necessary to use this type manually.
pub struct FutureWrapper<T: AsyncOperation> {
    inner: T,
    waker: Option<Arc<Mutex<Waker>>>,
}
Collaborator:

This seems like an overly expensive combination for synchronization. I'm not sure how to simplify but I'll play around with it a bit.

Collaborator:

Haven't found a simpler approach. 🤷

Contributor:

How does this compare to the old implementation? Is this incurring an extra allocation for each Future call? :o

Collaborator:

Yes, previously (#3142) I didn't store the Waker at all. Now the Waker is stored in an Arc, which adds a heap allocation, and requires a bunch of interlocking to retrieve the Waker.

Collaborator:

This is why I'm not excited about this in general: the WinRT async model and the Rust futures library don't seem to be a natural fit.

Contributor (Author):

I don't feel like an extra heap allocation is a huge problem, since e.g. calling SetCompleted already needs to allocate.
I think it is possible to combine those into one allocation, though at the cost of some unsafe code. Would you prefer that?
Also, would you prefer to replace the Mutex with lighter-weight synchronization using hand-rolled atomics?
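
For reference, a rough sketch of what a lighter-weight variant could look like, using the futures crate's AtomicWaker (the ready-made form of the alternative mentioned in the PR description) in place of Arc<Mutex<Waker>>. This is not what the PR implements, and the AsyncOperation trait is the same assumed shape as in the sketch above:

```rust
use std::future::Future;
use std::pin::Pin;
use std::sync::Arc;
use std::task::{Context, Poll};

use futures::task::AtomicWaker; // extra dependency from the `futures` crate

// Same assumed abstraction over IAsyncOperation/etc. as in the sketch above.
trait AsyncOperation {
    type Output;
    fn set_completed(&self, callback: impl Fn() + Send + 'static);
    fn is_complete(&self) -> bool;
    fn get_results(&self) -> Self::Output;
}

// Hypothetical variant of the wrapper using AtomicWaker instead of Arc<Mutex<Waker>>.
struct AtomicFutureWrapper<T: AsyncOperation> {
    inner: T,
    waker: Option<Arc<AtomicWaker>>,
}

impl<T: AsyncOperation + Unpin> Future for AtomicFutureWrapper<T> {
    type Output = T::Output;

    fn poll(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Self::Output> {
        let this = self.get_mut();
        if this.inner.is_complete() {
            return Poll::Ready(this.inner.get_results());
        }
        if let Some(saved) = &this.waker {
            // Atomically swap in the latest Waker on re-polls.
            saved.register(cx.waker());
        } else {
            let saved = Arc::new(AtomicWaker::new());
            saved.register(cx.waker());
            this.waker = Some(saved.clone());
            // Still registered exactly once, since SetCompleted cannot be re-set.
            this.inner.set_completed(move || saved.wake());
        }
        Poll::Pending
    }
}
```

AtomicWaker handles the waker swap with atomics internally, but it is exactly the extra piece of code the PR description says it doesn't want to pull in.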

Collaborator:

I'd prefer to keep it as simple as possible. We can always optimize later as needed.

@kennykerr (Collaborator), Aug 9, 2024:

Perhaps at least add a comment here somewhere noting that Completed can only be set once, hence the need for a shared Waker.

@kennykerr changed the title from "Implement IntoFuture for WinRT futures" to "Implement IntoFuture for WinRT async types" on Aug 9, 2024
@kennykerr (Collaborator)
Are you planning on clearing up the build issues?

@goffrie (Contributor, Author) commented Aug 9, 2024 via email

@kennykerr (Collaborator)
I'd rather not keep PRs open indefinitely. Feel free to revisit when you've had time to work through the feedback and are ready for review.

@kennykerr closed this on Aug 19, 2024
let saved_waker = Arc::new(Mutex::new(cx.waker().clone()));
self.waker = Some(saved_waker.clone());
self.inner.set_completed(move || {
    saved_waker.lock().unwrap().wake_by_ref();
});
@kennykerr (Collaborator), Aug 19, 2024:

For future reference, nothing appears to prevent this callback from racing ahead and running against a previous waker while the saved waker is waiting to be updated above.

@kennykerr (Collaborator)
I added a sketch of what I think is needed here: #342
