
Re-architecture for Stream style interceptors (WIP) #86

Closed

Conversation

emcfarlane
Collaborator

WIP

Large refactor of handler.go to convert the protocol interfaces to act on headers, messages, trailers and errors. From this we can divide the stream into client and server halves and implement interceptors that affect control flow. This makes it possible to implement custom backends and to adopt connect's interceptor library to run through vanguard. Another use case is gaining direct access to gRPC server methods for direct dispatch, avoiding the re-encoding of messages and replacing gRPC's ServeHTTP method entirely.

Protocol Interface

The private Client protocol interface is now:

	// Protocol returns the protocol used by the client.
	Protocol() Protocol
	// DecodeRequestHeader extracts request headers into the given requestMeta.
	DecodeRequestHeader(*requestMeta) error
	// PrepareRequestMessage prepares the given messageBuffer to receive the
	// message data from the reader.
	PrepareRequestMessage(*messageBuffer, *requestMeta) error
	// EncodeRequestHeader encodes the responseMeta for the client.
	EncodeRequestHeader(*responseMeta) error
	// PrepareResponseMessage prepares the given messageBuffer to receive the
	// message data from the writer.
	PrepareResponseMessage(*messageBuffer, *responseMeta) error
	// EncodeResponseTrailer encodes the given trailers into the given buffer and headers.
	EncodeResponseTrailer(*bytes.Buffer, *responseMeta) error
	// EncodeError encodes the error into the given buffer.
	EncodeError(*bytes.Buffer, *responseMeta, error)

Errors are handled by the client protocol; servers raise errors from their trailer messages. The flow is simplified by making some protocols stateful, which avoids needing to share data in *operation.

Codecs and compressors are set by the protocols. This allows better detection of when a protocol conversion is actually required. For example, the REST protocol sets a codec named rest-json, which is compared against the server's codec, and the message is only converted if they differ. Setting compressors also lets protocols enable full-stream compression, so REST HttpBody requests can now compress and decompress streams of data.
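
A minimal sketch of that decision, using stand-in names (Codec here is not necessarily the package's real interface, and proto.Message is from google.golang.org/protobuf/proto): conversion is skipped entirely when the client and server codec names already match.

// Codec is a stand-in for the named codecs that protocols register.
type Codec interface {
	Name() string
	Marshal(msg proto.Message) ([]byte, error)
	Unmarshal(data []byte, msg proto.Message) error
}

// recodeIfNeeded converts buffered message bytes from the client codec to the
// server codec, returning the bytes untouched when the names already match
// (e.g. "proto" on both sides), so no unmarshal/marshal cost is paid.
func recodeIfNeeded(data []byte, msg proto.Message, client, server Codec) ([]byte, error) {
	if client.Name() == server.Name() {
		return data, nil // pass-through
	}
	if err := client.Unmarshal(data, msg); err != nil {
		return nil, err
	}
	return server.Marshal(msg)
}

The same name comparison applies to compression: if both sides use gzip, the compressed stream can be forwarded without touching it.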

Buffers

Message conversion is done using the buffering techniques proposed here: #29

All messages are currently fully buffered, which is a regression, but streaming can easily be added as a message stage to avoid full buffering. Benchmarks suggest the impact is minimal, since unmarshalling and compressing are already only invoked when necessary; large messages might make streaming worthwhile.
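
To make the "message stage" idea concrete, here is a small hypothetical sketch (not the PR's implementation, built only on io and encoding/binary) of reading one length-prefixed envelope at a time, so a single message is held in memory instead of the whole stream body:

// readEnvelope reads one gRPC/Connect-style envelope: a 1-byte flag, a 4-byte
// big-endian length, then the payload. Processing envelopes one at a time is
// what would let a streaming stage avoid buffering the full request body.
func readEnvelope(r io.Reader) (flags byte, payload []byte, err error) {
	var prefix [5]byte
	if _, err = io.ReadFull(r, prefix[:]); err != nil {
		return 0, nil, err
	}
	payload = make([]byte, binary.BigEndian.Uint32(prefix[1:]))
	if _, err = io.ReadFull(r, payload); err != nil {
		return 0, nil, err
	}
	return prefix[0], payload, nil
}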

Interceptor

Interceptors could be built on a Stream interface like the one below. For example, on creation the headers are decoded from the client, which provides ReceiveHeader. When ReceiveMessage(..) is invoked, the client prepares the message buffer and then decodes into the message. On SendMessage(..), the client prepares the message buffer, then marshals and writes the response.

// WithInterceptor installs a StreamInterceptor that will be invoked on each 
// method invocation. Multiple interceptors will be chained in the order they were 
// applied.
func WithInterceptor(interceptor StreamInterceptor) Options

// Stream defines the RPC interface for both unary and streaming methods.
type Stream interface {
	MethodDesc() protoreflect.MethodDescriptor
	ReceiveHeader() http.Header
	ReceiveMessage(proto.Message) error
	SendHeader(http.Header) error
	SendMessage(proto.Message) error
	SendTrailer(http.Header) error
}
type StreamHandler func(context.Context, Stream) error
type StreamInterceptor func(ctx context.Context, stream Stream, handler StreamHandler) error
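
For illustration, an interceptor written against this interface could be as simple as the following (only the Stream, StreamHandler and StreamInterceptor shapes are from this proposal; the logging is just an example):

// loggingInterceptor logs the RPC method before and after invoking the rest
// of the chain, matching the StreamInterceptor signature above.
func loggingInterceptor(ctx context.Context, stream Stream, handler StreamHandler) error {
	name := stream.MethodDesc().FullName()
	log.Printf("rpc %s: start", name)
	err := handler(ctx, stream)
	log.Printf("rpc %s: done, err=%v", name, err)
	return err
}

It would then be installed with WithInterceptor(loggingInterceptor), and chained interceptors would run in the order they were applied.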

Testing

All current TestMux_ tests are passing. Some HTTP error-code passthrough is still WIP. Content-length and max-message-size testing also need to be re-added.

Splitting the message control out of the operation will make it much easier to test individual parts.

Performance

Current benchmarks show roughly a 16% (geomean) improvement in time per operation, with less memory overhead and fewer allocations. The PassThrough benchmark is the only one that regresses, because the pass-through optimization hasn't been re-added yet, so it compares doing nothing against going through the full transcoder.

goos: darwin
goarch: arm64
pkg: connectrpc.com/vanguard
                                                         │   old.txt    │                new.txt                │
                                                         │    sec/op    │    sec/op     vs base                 │
ServeHTTP/gRPC_proto_gzip/gRPC_proto_gzip/PassThrough-8    1.931µ ± ∞ ¹   2.146µ ± ∞ ¹        ~ (p=1.000 n=1) ²
ServeHTTP/REST_json/gRPC_proto/Convert-8                   19.05µ ± ∞ ¹   15.17µ ± ∞ ¹        ~ (p=1.000 n=1) ²
ServeHTTP/gRPC_proto/REST_json/Convert-8                   13.86µ ± ∞ ¹   11.23µ ± ∞ ¹        ~ (p=1.000 n=1) ²
ServeHTTP/connect_json/gRPC_proto/Convert-8                44.29µ ± ∞ ¹   40.69µ ± ∞ ¹        ~ (p=1.000 n=1) ²
ServeHTTP/connect_proto_gzip/gRPC_proto_gzip/Translate-8   2.775µ ± ∞ ¹   2.182µ ± ∞ ¹        ~ (p=1.000 n=1) ²
ServeHTTP/REST_json/gRPC_proto/LargeHttpBody-8             2.837m ± ∞ ¹   1.928m ± ∞ ¹        ~ (p=1.000 n=1) ²
geomean                                                    23.71µ         19.92µ        -15.98%
¹ need >= 6 samples for confidence interval at level 0.95
² need >= 4 samples to detect a difference at alpha level 0.05

                                                         │    old.txt    │                new.txt                 │
                                                         │     B/op      │     B/op       vs base                 │
ServeHTTP/gRPC_proto_gzip/gRPC_proto_gzip/PassThrough-8    3.531Ki ± ∞ ¹   3.428Ki ± ∞ ¹        ~ (p=1.000 n=1) ²
ServeHTTP/REST_json/gRPC_proto/Convert-8                   57.82Ki ± ∞ ¹   25.52Ki ± ∞ ¹        ~ (p=1.000 n=1) ²
ServeHTTP/gRPC_proto/REST_json/Convert-8                   44.91Ki ± ∞ ¹   25.07Ki ± ∞ ¹        ~ (p=1.000 n=1) ²
ServeHTTP/connect_json/gRPC_proto/Convert-8                44.03Ki ± ∞ ¹   23.47Ki ± ∞ ¹        ~ (p=1.000 n=1) ²
ServeHTTP/connect_proto_gzip/gRPC_proto_gzip/Translate-8   5.062Ki ± ∞ ¹   3.371Ki ± ∞ ¹        ~ (p=1.000 n=1) ²
ServeHTTP/REST_json/gRPC_proto/LargeHttpBody-8             64.06Mi ± ∞ ¹   16.53Mi ± ∞ ¹        ~ (p=1.000 n=1) ²
geomean                                                    71.54Ki         37.84Ki        -47.10%
¹ need >= 6 samples for confidence interval at level 0.95
² need >= 4 samples to detect a difference at alpha level 0.05

                                                         │   old.txt   │               new.txt               │
                                                         │  allocs/op  │  allocs/op   vs base                │
ServeHTTP/gRPC_proto_gzip/gRPC_proto_gzip/PassThrough-8    27.00 ± ∞ ¹   42.00 ± ∞ ¹       ~ (p=1.000 n=1) ²
ServeHTTP/REST_json/gRPC_proto/Convert-8                   153.0 ± ∞ ¹   134.0 ± ∞ ¹       ~ (p=1.000 n=1) ²
ServeHTTP/gRPC_proto/REST_json/Convert-8                   179.0 ± ∞ ¹   156.0 ± ∞ ¹       ~ (p=1.000 n=1) ²
ServeHTTP/connect_json/gRPC_proto/Convert-8                138.0 ± ∞ ¹   112.0 ± ∞ ¹       ~ (p=1.000 n=1) ²
ServeHTTP/connect_proto_gzip/gRPC_proto_gzip/Translate-8   56.00 ± ∞ ¹   36.00 ± ∞ ¹       ~ (p=1.000 n=1) ²
ServeHTTP/REST_json/gRPC_proto/LargeHttpBody-8             75.00 ± ∞ ¹   71.00 ± ∞ ¹       ~ (p=1.000 n=1) ²
geomean                                                    86.83         79.44        -8.51%
¹ need >= 6 samples for confidence interval at level 0.95
² need >= 4 samples to detect a difference at alpha level 0.05

@emcfarlane emcfarlane self-assigned this Oct 10, 2023
@emcfarlane
Collaborator Author

Closing, can pick up later after release if required.

@emcfarlane emcfarlane closed this Oct 16, 2023
@emcfarlane emcfarlane deleted the ed/interceptor branch October 26, 2023 21:32
@emcfarlane
Collaborator Author

Part of the implementation for #97
