Unexpected error when server expands a compressed message to learn it is too large #11246

Open · jhump opened this issue May 30, 2024 · 4 comments

@jhump (Member) commented May 30, 2024

When the client uses “gzip” compression to compress requests, if it sends a message where the compressed size is acceptable (less than or equal to the server’s configured max) but the uncompressed size is too large, the client receives an unhelpful UNKNOWN error with a message of “Application error processing RPC”.
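A minimal sketch that reproduces the symptom. It assumes a hypothetical testing.proto whose generated TestServiceGrpc/TestRequest classes match the names in the stack trace below; the unary test method, the bytes payload field, and the port are placeholders of mine, not from the report:

```java
import com.google.protobuf.ByteString;
import io.grpc.ManagedChannel;
import io.grpc.ManagedChannelBuilder;
import io.grpc.Server;
import io.grpc.ServerBuilder;
import io.grpc.StatusRuntimeException;
import testing.TestRequest;       // hypothetical generated message
import testing.TestServiceGrpc;   // hypothetical generated service

public class OversizedGzipRepro {
  public static void main(String[] args) throws Exception {
    // The server enforces a 200 KiB limit; the limit applies to the
    // message size *after* decompression (204800 bytes, as in the log).
    Server server = ServerBuilder.forPort(50051)
        .maxInboundMessageSize(200 * 1024)
        .addService(new TestServiceGrpc.TestServiceImplBase() {})
        .build()
        .start();

    ManagedChannel channel = ManagedChannelBuilder
        .forAddress("localhost", 50051).usePlaintext().build();

    // 300 KiB of zeros gzips down to a few hundred bytes, so the
    // compressed request is well under the limit while the
    // decompressed size exceeds it.
    TestRequest request = TestRequest.newBuilder()
        .setPayload(ByteString.copyFrom(new byte[300 * 1024]))
        .build();
    try {
      TestServiceGrpc.newBlockingStub(channel)
          .withCompression("gzip")
          .test(request); // handler is never reached; parsing fails first
    } catch (StatusRuntimeException e) {
      // Observed: UNKNOWN "Application error processing RPC"
      // Expected: RESOURCE_EXHAUSTED "... exceeds maximum size 204800"
      System.out.println(e.getStatus());
    } finally {
      channel.shutdownNow();
      server.shutdownNow();
    }
  }
}
```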

Interestingly, an exception carrying the expected/correct error code is logged locally. It is logged not as a warning but as an error, with a message that implies the exception was never caught:

SEVERE: Exception while executing runnable io.grpc.internal.ServerImpl$JumpToApplicationThreadServerStreamListener$1MessagesAvailable@2339b6a8
io.grpc.StatusRuntimeException: RESOURCE_EXHAUSTED: Decompressed gRPC message exceeds maximum size 204800
	at io.grpc.Status.asRuntimeException(Status.java:525)
	at io.grpc.internal.MessageDeframer$SizeEnforcingInputStream.verifySize(MessageDeframer.java:523)
	at io.grpc.internal.MessageDeframer$SizeEnforcingInputStream.read(MessageDeframer.java:478)
	at com.google.protobuf.CodedInputStream$StreamDecoder.readRawBytesSlowPathRemainingChunks(CodedInputStream.java:2967)
	at com.google.protobuf.CodedInputStream$StreamDecoder.readBytesSlowPath(CodedInputStream.java:3006)
	at com.google.protobuf.CodedInputStream$StreamDecoder.readBytes(CodedInputStream.java:2397)
	at testing.TestRequest$Builder.mergeFrom(TestRequest.java:501)
	at testing.TestRequest$1.parsePartialFrom(TestRequest.java:849)
	at testing.TestRequest$1.parsePartialFrom(TestRequest.java:841)
	at com.google.protobuf.AbstractParser.parseFrom(AbstractParser.java:63)
	at com.google.protobuf.AbstractParser.parseFrom(AbstractParser.java:25)
	at io.grpc.protobuf.lite.ProtoLiteUtils$MessageMarshaller.parseFrom(ProtoLiteUtils.java:245)
	at io.grpc.protobuf.lite.ProtoLiteUtils$MessageMarshaller.parse(ProtoLiteUtils.java:237)
	at io.grpc.protobuf.lite.ProtoLiteUtils$MessageMarshaller.parse(ProtoLiteUtils.java:134)
	at io.grpc.MethodDescriptor.parseRequest(MethodDescriptor.java:307)
	at io.grpc.internal.ServerCallImpl$ServerStreamListenerImpl.messagesAvailableInternal(ServerCallImpl.java:334)
	at io.grpc.internal.ServerCallImpl$ServerStreamListenerImpl.messagesAvailable(ServerCallImpl.java:319)
	at io.grpc.internal.ServerImpl$JumpToApplicationThreadServerStreamListener$1MessagesAvailable.runInContext(ServerImpl.java:834)
	at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
	at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:133)
	at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1144)
	at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:642)
	at java.base/java.lang.Thread.run(Thread.java:1583)
@ejona86 (Member) commented May 30, 2024

Errors during deserialization (which includes the decompression here) are currently treated as application errors. We don't want to expose StatusRuntimeExceptions thrown by the client to the server's own clients, as that could leak server implementation details. So we close the call with a unique-but-not-very-helpful error and then log locally (via the thread's UncaughtExceptionHandler) for the full details.

It does seem it should be safe to treat errors from the marshaller specially and report those errors back to the client (without logging). There is some difficulty here: ServerCallImpl knows the source of the exception, but ServerImpl.ServerStreamListenerImpl is the one that handles it. Still, it is probably not too bad.

ejona86 added this to the Next milestone on Jun 5, 2024
@joybestourous (Contributor) commented

Any update on when we can expect this change?

@ejona86 (Member) commented Oct 24, 2024

Is it causing you a problem? Nobody is working on this.

@joybestourous (Contributor) commented

One of our users reached out to us about this causing some frustration for them. It's not an urgent problem, but it bothered them enough that they bumped their message size limit extremely high so they could enforce the limit themselves, report the error with the correct status, and avoid the logs. So I figured I'd check whether work was underway.
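For reference, a rough sketch of that workaround, reusing the hypothetical TestService names from the repro above (the real limit value and TestResponse are likewise placeholders): the server is built with a very large maxInboundMessageSize so gRPC's own check never trips, and the handler enforces the real limit on the decompressed message itself.

```java
import io.grpc.Status;
import io.grpc.stub.StreamObserver;
import testing.TestRequest;    // hypothetical generated types, as above
import testing.TestResponse;
import testing.TestServiceGrpc;

class TestServiceImpl extends TestServiceGrpc.TestServiceImplBase {
  // The real limit, enforced by hand; ServerBuilder.maxInboundMessageSize
  // is set far above this value so the deframer never throws.
  private static final int REAL_LIMIT_BYTES = 200 * 1024;

  @Override
  public void test(TestRequest request, StreamObserver<TestResponse> responseObserver) {
    // getSerializedSize() measures the already-decompressed message.
    if (request.getSerializedSize() > REAL_LIMIT_BYTES) {
      responseObserver.onError(
          Status.RESOURCE_EXHAUSTED
              .withDescription("message exceeds maximum size " + REAL_LIMIT_BYTES)
              .asRuntimeException());
      return;
    }
    // ... normal request handling ...
  }
}
```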
