withProgress Cancellable doesn't seem to work? #538

Comments
Yes, I think this is fundamentally broken if you're not running your handlers concurrently, as HLS does (or see the "reactor" example). I also think the progress handling is broken for this reason. Part of the problem is that you can't do this without concurrent handling of requests: if you have only one thread executing an action, how can it possibly notice that a cancel request has arrived? So in that sense a non-concurrent LSP server is just broken (or can't support some features). I think we should make concurrent handling the default, but I haven't yet had time to figure out how to do that nicely while still giving the user a way to control how the scheduling happens. See also #409.
OK, I can refactor my server to run the handlers concurrently. In that case, perhaps it would make sense to change
I'm not sure how that would help. We have to have a way to cancel the running computation when a cancellation request comes in, and killing the thread that is running it, as `async` does, is a simple way to do that.
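For illustration, the pattern being described might look like this (a minimal sketch using `Control.Concurrent.Async`; `runCancellable` is an illustrative name, not lsp's actual internals):

```haskell
import Control.Concurrent.Async (async, cancel, wait)

-- Fork the handler's work so that a later $/cancelRequest can kill it.
-- 'cancel' throws an asynchronous exception to the worker thread.
runCancellable :: IO a -> IO (IO (), IO a)
runCancellable work = do
  a <- async work
  pure (cancel a, wait a)  -- (how to cancel it, how to await the result)
```

The server can then stash the cancel action in a map keyed by request id, and the handler for `$/cancelRequest` looks it up and runs it.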
I am thinking that it'd be good if either (A) the user of the library is in charge of concurrency (they fork and kill), or (B) the library is in charge (i.e., the library forks and kills). The current approach, where the user has to fork some handlers and the library does another fork so it can kill things, seems confusing. I think in #540 you are essentially suggesting design (B), which I think is a good idea; and if I understand correctly, once the library starts forking off the handlers, there would be no need to do another fork in the handler, as we currently do. Or am I misunderstanding the situation?
I'm not really sure what to do. The thing is, it probably is reasonable for the user to want some kind of control; e.g. you might want to throttle certain kinds of request. Just forcing everyone into unbounded concurrency seems maybe bad? But maybe it's fine and I should just stop worrying about it: if you really care, you can probably get the same effect by taking a semaphore inside the handlers or something.
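For example, the semaphore approach might look like the following (a sketch; `mkThrottle` is an illustrative name, not part of lsp):

```haskell
import Control.Concurrent.QSem (newQSem, signalQSem, waitQSem)
import Control.Exception (bracket_)

-- Build a wrapper that allows at most n actions of some expensive kind
-- to run at once, even if every handler is forked unconditionally.
mkThrottle :: Int -> IO (IO a -> IO a)
mkThrottle n = do
  sem <- newQSem n
  pure (bracket_ (waitQSem sem) (signalQSem sem))
```

A handler that should be throttled is then wrapped in the returned function, e.g. `limit <- mkThrottle 4` once at startup, and `limit expensiveWork` inside the handler.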
We don't want to fork handlers in `lsp`, because then clients of the library have no way to determine the order of messages, which is essential for correct processing. For example, in HLS we process all notifications sequentially, as we must for anything to make sense. Perhaps we can fork a single thread so that messages can at least be decoded asynchronously, essentially integrating the reactor example into the library (see the sketch below). That doesn't suffice for cancellation, but I can think of a few ways to handle it.
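Modeled loosely on the repository's reactor example, the single dispatch thread might look like this (a sketch; `ReactorInput` and `startReactor` are illustrative names, not library API):

```haskell
import Control.Concurrent (forkIO)
import Control.Concurrent.STM (atomically)
import Control.Concurrent.STM.TChan (newTChanIO, readTChan, writeTChan)
import Control.Monad (forever)

newtype ReactorInput = ReactorAct (IO ())

-- One reactor thread runs queued actions strictly in arrival order, so
-- clients still see deterministic message ordering, while the main loop
-- stays free to notice new input such as $/cancelRequest.
startReactor :: IO (ReactorInput -> IO ())
startReactor = do
  chan <- newTChanIO
  _ <- forkIO $ forever $ do
    ReactorAct act <- atomically (readTChan chan)
    act
  pure (\i -> atomically (writeTChan chan i))
```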
Well no, we absolutely can do that; indeed, what I was thinking about is essentially to port the HLS scheduler to `lsp`. Some care may be needed to handle cancellations that come in before you've started things, etc., but it all seems possible to handle. But the fact that this is complicated suggests to me that maybe we just want one scheduler that does it right. If what it does is wrong for HLS, it's probably just wrong.
Oh, right, in fact we also don't even need to fork off notification handlers, since notifications aren't cancellable. |
HLS currently doesn't particularly care about request order, but it might want to do so in the future. I'm pretty sure the spec explicitly states that request processing order matters. |
Well, the spec gives an extremely lukewarm take on this: "Responses to requests should be sent in roughly the same order as the requests appear on the server or client side."
I think there is at least one level of necessary concurrency: at a minimum, a cancellation request has to be processed while the request it targets is still running, otherwise cancellation can never take effect.
Yes, there are some ordering constraints that do matter (e.g. you have to process notifications in order). I think there are a few others (the spec suggests that requests that cause modifications should be done in order), but most of those are generic, i.e. we could handle any such ordering constraints in `lsp` itself. Plus, this all sounds very fiddly, so it's probably best to at least have a sensible default in the library.
For reference, this is how we are currently doing cancellation in Koka. (Sidenote: adding Koka's language server was my first time working in Haskell, and especially my first time working with any asynchronous Haskell, so if there are errors in the logic, please point them out!) It was inspired by HLS's implementation, but I tried to simplify it. I also decided to discard all file change notifications except the most recent for any given file, to avoid processing outdated notifications. Edit: this comment probably belongs in #540.
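The "keep only the most recent change per file" part can be expressed as a small STM map (a sketch with illustrative names and types, not Koka's actual code):

```haskell
import Control.Concurrent.STM (STM, TVar, modifyTVar', swapTVar)
import qualified Data.Map.Strict as Map

type Pending doc = TVar (Map.Map FilePath doc)

-- Overwrite any older pending change for the same file, so a slow worker
-- never wastes time on an outdated notification.
recordChange :: Pending doc -> FilePath -> doc -> STM ()
recordChange pending path doc = modifyTVar' pending (Map.insert path doc)

-- The worker drains everything at once, getting at most one entry
-- (the newest) per file.
drainChanges :: Pending doc -> STM (Map.Map FilePath doc)
drainChanges pending = swapTVar pending Map.empty
```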
Sorry this is a pain; it's next on my list of stuff to improve!
If this is just for noticing that a file has changed, I think that's probably safe. Generally you do indeed want to debounce inputs that require you to redo work; that's one place to do it. (Personally I would probably have an internal queue of work such as "recompile this file" and then debounce that, as sketched below, but it's up to you.) Also, would #555 be helpful for you?
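Such a debounced work queue might look like this (a sketch; `newDebouncer` is an illustrative name):

```haskell
import Control.Concurrent (ThreadId, forkIO, killThread, threadDelay)
import Control.Concurrent.MVar (modifyMVar_, newMVar)

-- Returns a trigger action: each call cancels the previously scheduled
-- run and schedules 'action' to fire after 'quietUs' of no new triggers.
newDebouncer :: Int -> IO () -> IO (IO ())
newDebouncer quietUs action = do
  slot <- newMVar (Nothing :: Maybe ThreadId)
  pure $ modifyMVar_ slot $ \old -> do
    mapM_ killThread old
    tid <- forkIO (threadDelay quietUs >> action)
    pure (Just tid)
```

Each call to the returned trigger resets the timer, so work like "recompile this file" only fires once the edits go quiet.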
Hello,

I've just started using this package recently, so possibly I am using it incorrectly, or misunderstand something. The issue is that I can't seem to get `withProgress` to work correctly when an action is cancelled. To test it, I've made a server function like the sketch below. Basically the idea is that we sleep for a second a bunch of times to simulate work. This works great as long as you don't cancel the action. If the action is cancelled, the work seems to continue executing, and when it finishes I get an error.
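A minimal sketch of the kind of handler described, assuming a recent `lsp` API (the exact `withProgress` signature varies between lsp versions, and `doWork` is an illustrative name):

```haskell
{-# LANGUAGE OverloadedStrings #-}

import Control.Concurrent (threadDelay)
import Control.Monad (forM_)
import Control.Monad.IO.Class (liftIO)
import Language.LSP.Server

-- Sleep for a second ten times to simulate long-running, cancellable work,
-- reporting progress after each step.
doWork :: LspM config ()
doWork =
  withProgress "Working" Cancellable $ \update ->
    forM_ [1 .. 10 :: Int] $ \i -> do
      liftIO (threadDelay 1000000)  -- one second of "work"
      update (ProgressAmount (Just (fromIntegral i * 10)) (Just "still working"))
```

With `Cancellable`, the client shows a cancel button; pressing it sends `window/workDoneProgress/cancel`, which is the message that appears to have no effect here.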
Looking at `lsp`'s source code, it looks like there is a built-in handler for this, which makes sense. However, this handler never seems to fire.

I wonder if the problem is that in the implementation of `withProgress`, we fork off the work with `async`, but then we `wait` for it straight away, so we are blocked and can't handle the `cancel` message from the client?

Thoughts?