
Handle client disconnect in server #229

Open
mihai1voicescu opened this issue Sep 27, 2022 · 9 comments
@mihai1voicescu
Each time a client disconnects we get an unhandled error in our server logs. How can we handle them since they are flooding our server logs?

We are using ktor_version=2.1.0 with WebSockets and the Netty engine. We also tried CIO, but got the same result.

Caused by: kotlinx.coroutines.channels.ClosedReceiveChannelException: Channel was closed
	at kotlinx.coroutines.channels.Closed.getReceiveException(AbstractChannel.kt:1108)
	at kotlinx.coroutines.channels.AbstractChannel$ReceiveElement.resumeReceiveClosed(AbstractChannel.kt:913)
	at kotlinx.coroutines.channels.AbstractSendChannel.helpClose(AbstractChannel.kt:342)
	at kotlinx.coroutines.channels.AbstractSendChannel.close(AbstractChannel.kt:271)
	at kotlinx.coroutines.channels.SendChannel$DefaultImpls.close$default(Channel.kt:93)
	at io.ktor.websocket.DefaultWebSocketSessionImpl$runIncomingProcessor$1.invokeSuspend(DefaultWebSocketSession.kt:204)
	at kotlin.coroutines.jvm.internal.BaseContinuationImpl.resumeWith(ContinuationImpl.kt:33)
	at kotlinx.coroutines.DispatchedTaskKt.resume(DispatchedTask.kt:234)
	at kotlinx.coroutines.DispatchedTaskKt.resumeUnconfined(DispatchedTask.kt:190)
	at kotlinx.coroutines.DispatchedTaskKt.dispatch(DispatchedTask.kt:161)
	at kotlinx.coroutines.CancellableContinuationImpl.dispatchResume(CancellableContinuationImpl.kt:397)
	at kotlinx.coroutines.CancellableContinuationImpl.completeResume(CancellableContinuationImpl.kt:513)
	at kotlinx.coroutines.channels.AbstractChannel$ReceiveHasNext.completeResumeReceive(AbstractChannel.kt:947)
	at kotlinx.coroutines.channels.AbstractSendChannel.offerInternal(AbstractChannel.kt:56)
	at kotlinx.coroutines.channels.AbstractSendChannel.send(AbstractChannel.kt:134)
	at io.ktor.websocket.RawWebSocketJvm$1.invokeSuspend(RawWebSocketJvm.kt:68)
	at kotlin.coroutines.jvm.internal.BaseContinuationImpl.resumeWith(ContinuationImpl.kt:33)
	at kotlinx.coroutines.DispatchedTask.run(DispatchedTask.kt:106)
	at kotlinx.coroutines.internal.LimitedDispatcher.run(LimitedDispatcher.kt:42)
	at kotlinx.coroutines.scheduling.TaskImpl.run(Tasks.kt:95)
	at kotlinx.coroutines.scheduling.CoroutineScheduler.runSafely(CoroutineScheduler.kt:570)
	at kotlinx.coroutines.scheduling.CoroutineScheduler$Worker.executeTask(CoroutineScheduler.kt:750)
	at kotlinx.coroutines.scheduling.CoroutineScheduler$Worker.runWorker(CoroutineScheduler.kt:677)
	at kotlinx.coroutines.scheduling.CoroutineScheduler$Worker.run(CoroutineScheduler.kt:664)
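The exception in the trace comes from reading a coroutine channel whose sending side has gone away. A minimal, Ktor-free reproduction (plain kotlinx.coroutines; the function name `drainUntilClosed` is illustrative) shows why an uncaught receive on a closed channel throws, and that catching `ClosedReceiveChannelException` turns the disconnect into a normal code path:

```kotlin
import kotlinx.coroutines.channels.Channel
import kotlinx.coroutines.channels.ClosedReceiveChannelException
import kotlinx.coroutines.launch
import kotlinx.coroutines.runBlocking

// Minimal reproduction of the failure mode in the trace above:
// receiving from a channel whose sender has closed it throws
// ClosedReceiveChannelException unless the receiver catches it.
fun drainUntilClosed(): List<String> = runBlocking {
    val incoming = Channel<String>()
    launch {
        incoming.send("hello")
        incoming.close() // models the client disconnecting
    }
    val received = mutableListOf<String>()
    try {
        while (true) {
            received += incoming.receive()
        }
    } catch (e: ClosedReceiveChannelException) {
        received += "closed" // disconnect handled as a normal code path
    }
    received
}

fun main() {
    println(drainUntilClosed()) // [hello, closed]
}
```

The catch is straightforward in application code; the difficulty discussed in this issue is that the receive in the trace happens inside library internals, out of the application's reach.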
@whyoleg
Member

whyoleg commented Sep 27, 2022

Looks like the same issue as #211, so I think it's on the ktor side.
As a workaround, you can provide a CoroutineExceptionHandler to the server instance via the scope or parentCoroutineContext, which goes directly into applicationEngineEnvironment.

@whyoleg
Member

whyoleg commented Sep 27, 2022

Related: #226
Maybe it will be possible to fix this in rsocket-kotlin in a similar way during the refactoring of the transports.

@mihai1voicescu
Author

@whyoleg thank you for your quick response!

That is also what we have been trying, but with no success yet. Maybe we are doing something wrong, since the handler never gets called.

We tried with the applicationEngineEnvironment like this:

private val handler = CoroutineExceptionHandler { coroutineContext, throwable ->
    println("CALLED")
    when (throwable) {
        is kotlinx.coroutines.channels.ClosedReceiveChannelException -> Unit
        else -> throw throwable
    }
}

fun applicationEngine(): NettyApplicationEngine {
    val port = config.deployment("port", "80").toInt()

    val host = config.deployment("host", "0.0.0.0")

    return embeddedServer(
        Netty,
        applicationEngineEnvironment {
            module {
                mount()
                printRoutes()
            }
            connector {
                this.port = port
                this.host = host
            }

            this.parentCoroutineContext = this.parentCoroutineContext + handler

        }) {

    }
}

We also tried passing a scope with a handler to RSocketRequestHandler(EmptyCoroutineContext + handler), and it still does not get called.

@whyoleg
Member

whyoleg commented Sep 27, 2022

From what I've checked, it looks like it's not possible to add an exception handler for a websocket session... We may need to create an issue on the ktor side for this.
It's still possible to hide this exception on the rsocket side, but I'm not sure that's the best approach, as then it would not be possible to distinguish between a normal close and a connection failure.
I need more time to investigate whether the POC of the new transport API will have this possibility...

@whyoleg whyoleg added this to the 0.17.0 milestone Oct 5, 2022
@whyoleg
Member

whyoleg commented Oct 18, 2022

@mihai1voicescu
Author

We did some digging; for the moment, adding a simple try/catch at

will catch the error.

Maybe there is a way to propagate it and notify that the RSocket was closed (since this is very common when dealing with mobile and web clients)?

Maybe you can point us in the right direction and we can make a PR?
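For reference, a handler-level version of the try/catch described here, sketched against Ktor's documented WebSocket API (the route path and echo body are illustrative, not from this issue), looks roughly like this:

```kotlin
import io.ktor.server.routing.*
import io.ktor.server.websocket.*
import io.ktor.websocket.*
import kotlinx.coroutines.channels.ClosedReceiveChannelException

// Sketch: treat an abrupt client disconnect as a normal session end
// instead of letting ClosedReceiveChannelException escape to the logs.
fun Route.echoSocket() {
    webSocket("/echo") {                  // illustrative path
        try {
            for (frame in incoming) {     // throws when the peer drops
                if (frame is Frame.Text) {
                    send(Frame.Text(frame.readText()))
                }
            }
        } catch (e: ClosedReceiveChannelException) {
            // Client disconnected; nothing to do.
        } finally {
            // Per-session cleanup, if any, goes here.
        }
    }
}
```

This only covers code running inside the application's own handler, though; the stack trace in this issue is raised from library internals, which is why the comment asks about catching it inside rsocket-kotlin instead.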

@whyoleg
Member

whyoleg commented Oct 21, 2022

The change should be a little bigger. We need to make the receive result nullable and handle it where it's called. This will affect all transport implementations, not only websocket. Handling of the error in send should also be there, since send can be called while a receive is in progress (those operations can be performed in parallel). I will try to prepare a quick fix (and a new patch release) this weekend, until the new transport API is available.
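The shape of the nullable-receive idea might look roughly like this (a hypothetical sketch; `Frame`, `Connection`, and `receiveOrNull` are illustrative names, not rsocket-kotlin's actual API): the transport returns null on an orderly close instead of throwing, so callers can tell a normal close apart from a transport failure.

```kotlin
import kotlinx.coroutines.runBlocking

// Hypothetical sketch, not rsocket-kotlin's actual API.
class Frame(val data: String)

interface Connection {
    // null = the peer closed the connection; exceptions are reserved
    // for genuine transport failures.
    suspend fun receiveOrNull(): Frame?
}

suspend fun readAll(connection: Connection): List<String> {
    val seen = mutableListOf<String>()
    while (true) {
        val frame = connection.receiveOrNull() ?: break // orderly close
        seen += frame.data
    }
    return seen
}

fun main() = runBlocking {
    // Fake connection: yields two frames, then reports a close via null.
    val frames = ArrayDeque(listOf(Frame("setup"), Frame("payload")))
    val conn = object : Connection {
        override suspend fun receiveOrNull(): Frame? = frames.removeFirstOrNull()
    }
    println(readAll(conn)) // [setup, payload]
}
```

The cost noted in the comment is visible here: every caller of receive in every transport has to handle the null case, which is why the change touches more than the websocket transport.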

@mihai1voicescu
Author

Thank you! If there is something we can help with let us know.

@whyoleg
Member

whyoleg commented Nov 17, 2022

Sorry for the delay; focus switched to the new transport API. I tried to create a quick fix, but it really degrades the stability and expected behavior of the current transport implementation, and I need to invest more time than I expected. At the same time, I still think this specific issue should be fixed on the ktor side. I will try to ensure that it is fixed in ktor 2.2.0 if the ktor team does not do it.

@whyoleg whyoleg modified the milestone: 0.16.0 - Transport API/Internals rework, QUIC, Netty Nov 24, 2022
@whyoleg whyoleg added the bug label May 3, 2023
@whyoleg whyoleg removed this from the 0.16.0 - Transport API/Internals rework, QUIC, Netty milestone May 3, 2023
@whyoleg whyoleg added this to the 0.17.0 milestone Nov 11, 2024