TUI crashes when resetting while running #2
This is because of our impl of `EventFutureSharedState` (see the note from the code below).
If we want to properly resolve this we need to change … Ultimately I think this means we have to abandon the "single batch" model that we have now.

Right now, we assume that all the futures registered with us are waiting on the next event and that, when the event occurs (when the batch is sealed), no more futures will be registered until a new batch starts (which will happen once all the futures poll and acknowledge that an event has happened). This is a decent assumption to make if the …

One simple fix is to have the reset function trigger a … Spinning in reset until this condition is true also does not work, for a couple of reasons: …
One way to fix this is to try to detect cases like the above where the future is not actually being polled. A decent proxy for this is adding hooks to `Drop`.
On the other hand, the above solution could still suffer from weird race conditions that happen because of the way events get re-sequenced once they go through RPC. More importantly, it would still leave some edge cases, like the one above, unhandled. Another approach is to stop trying to stick to the single-batch model and to support multiple batches. The downside is complexity (particularly while maintaining no-std support for the …).
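For reference, here's a minimal sketch of what the single-batch model described above might look like. Everything here (`Event`, `Batch`, `register`) is hypothetical and simplified; the real shared state is more involved:

```rust
use std::task::Waker;

/// Placeholder for whatever event type the real crate uses.
#[derive(Clone, Debug)]
pub struct Event;

/// Hypothetical single-batch shared state: every registered future is
/// assumed to be waiting on the *next* event.
pub enum Batch {
    /// Accepting registrations; the wakers belong to the current batch.
    Open { wakers: Vec<Waker> },
    /// An event happened (the batch is "sealed"): no new registrations
    /// are allowed until all `remaining` futures poll and acknowledge
    /// the event.
    Sealed { event: Event, remaining: usize },
}

impl Batch {
    pub fn register(&mut self, waker: Waker) {
        match self {
            Batch::Open { wakers } => wakers.push(waker),
            // This is the assumption that a reset-while-running breaks:
            // reset wants to hand out a new future before the old batch
            // has fully drained.
            Batch::Sealed { .. } => panic!("registered during a sealed batch"),
        }
    }
}
```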
Here's a note from the code:

```rust
// TODO: This is problematic since the way to 'drop' a Future from the
// client side is to simply stop polling it: when this happens we will not
// be notified. Instead, users will either be unable to call reset or will
// encounter a panic when they try to call reset after dropping a
// RunUntilEvent future.
//
// A better system would be to have every batch have a number. Each future
// would hold this number in addition to a reference to the
// EventFutureSharedState instance. When futures that aren't in the current
// batch try to poll, they should error out. The edge case that exists is
// that if the batch number loops around, it's possible that a very old
// future comes along and takes the place of a new future.
//
// An alternative solution is to use `Drop` to have futures that aren't
// being used anymore signal to us that they're dead by calling a function
// on us decrementing the count. A side-effect of this approach is that we
// won't know whether the waker we currently have a reference to belongs
// to the future that just told us that it's dead. So, we can't unregister
// the current (if we have one) waker that we know about.
//
// As a result any waker that gets used with an `EventFutureSharedState`
// backed Future must be able to handle being called on a future that is no
// longer around.
//
// Actually, this has an issue: `Drop` is called eventually even when the
// future resolves. In these cases, the count will be decremented twice:
// once through the `get_event` function and once through the `decrement`
// function (when `Drop` occurs).
//
// We could just not decrement the count in `get_event` but then we'd have
// to assume that executors go and `Drop` their futures as soon as they
// resolve (a reasonable assumption but not required -- afaik the invariant
// they must uphold is not _polling_ Futures once they've resolved).
//
// So, this won't work. Eventually we should do something that looks like
// the first thing (batches with numbers).
```
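A rough sketch of what the numbered-batch idea from that note could look like. The names (`SharedState`, `EventFuture`, `StaleBatch`) are made up for illustration; the wrap-around edge case the note mentions is what the counter width guards against:

```rust
use std::sync::atomic::{AtomicU64, Ordering};

/// Hypothetical shared state with numbered batches.
pub struct SharedState {
    current_batch: AtomicU64,
    // ... wakers, event storage, counts, etc.
}

/// Each future remembers the batch it was registered in.
pub struct EventFuture<'a> {
    state: &'a SharedState,
    batch: u64,
}

#[derive(Debug)]
pub struct StaleBatch;

impl SharedState {
    /// Starting a new batch (e.g. on reset) invalidates every future
    /// from older batches.
    pub fn new_batch(&self) -> u64 {
        self.current_batch.fetch_add(1, Ordering::SeqCst) + 1
    }
}

impl EventFuture<'_> {
    /// Futures from stale batches error out when polled instead of
    /// waiting forever. With a u64, the wrap-around edge case where a
    /// very old future collides with a new one is practically unreachable.
    pub fn check_batch(&self) -> Result<(), StaleBatch> {
        if self.batch == self.state.current_batch.load(Ordering::SeqCst) {
            Ok(())
        } else {
            Err(StaleBatch)
        }
    }
}
```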
For now let's do this: …

This leaves the user's end: there we still don't have any way of ensuring that the Future we produce is polled and that the Controller's batch is empty (we can, and do, ensure that it is sealed) before the next reset. When the user tries to reset in that situation, the program will crash. I think this is acceptable for now. I'll update the TUI to poll the future it gets within its inner loop.
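Polling within the inner loop could look something like this. This is only a sketch: `tick`, the `Option`-wrapped future, and the use of `futures`' no-op waker are all illustrative assumptions, not the TUI's actual structure:

```rust
use std::future::Future;
use std::pin::Pin;
use std::task::{Context, Poll};

use futures::task::noop_waker;

/// Hypothetical per-iteration step for the TUI's inner loop: poll the
/// pending event future once so the Controller's batch can drain after
/// an event fires, instead of leaving the future untouched until the
/// next reset.
fn tick<F: Future + Unpin>(event_future: &mut Option<F>) {
    if let Some(fut) = event_future {
        let waker = noop_waker();
        let mut cx = Context::from_waker(&waker);
        if let Poll::Ready(_event) = Pin::new(fut).poll(&mut cx) {
            // Resolved: discard the future so a subsequent reset starts
            // from an empty, drained batch.
            *event_future = None;
        }
    }
}
```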
It's worth noting that another approach would be to just not resolve any existing futures when a reset occurs, but to have them carry over. Calling …

The problem is that if a user were to block on a future that they got before a reset without starting the machine again (i.e. calling …), their program would hang: the future would never resolve.

But is this a problem? I think it depends on your answer to this: does the existence of an unresolved future guarantee that it will eventually resolve? In crafting the API I definitely thought it did. I still do think the above would be surprising behaviour for a user, but it's a decidedly better failure mode than just crashing.

I think I'm still going to go with the stopgap outlined in the last comment, but I'm now less sure what a proper solution to this looks like.

@AmrEl-Azizi @pranav12321 @jer-zhang @gipsond Thoughts?
(Notes for when we eventually have tests for this.) These are some of the things to test: …

Also that last one, but with breakpoints/watchpoints instead of pauses, maybe.
Kinda hokey, but it works. And look! We found a use for `Control::get_state()`! re: #48
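Presumably the fix gates the reset on the machine's state; a hypothetical sketch of that shape (the `State` variants and `try_reset` are made up for illustration, not this crate's actual API):

```rust
/// Hypothetical state enum; the real one is whatever sits behind
/// `Control::get_state()`.
#[derive(PartialEq, Eq)]
pub enum State {
    Paused,
    RunningUntilEvent,
    Halted,
}

pub trait Control {
    fn get_state(&self) -> State;
    fn reset(&mut self);
}

/// Refuse to reset while the machine is still running until an event;
/// the caller keeps polling (see the loop sketch above) and retries.
pub fn try_reset<C: Control>(sim: &mut C) -> bool {
    if sim.get_state() == State::RunningUntilEvent {
        return false; // not safe yet: the event future is still pending
    }
    sim.reset();
    true
}
```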
To reproduce:

1. `r`
2. `alt` + `r`