Motivation
This library enables the use of gRPC over an MQTT connection. This can be particularly useful when you have a distributed fleet of gRPC servers behind firewalls, as the servers can be accessible over MQTT without needing to accept incoming connections.
Highlights
- Makes gRPC calls over MQTT!
- Client and RemoteClient code can be generated from .proto files
- MQTT sessions can properly handle out-of-order messages
- MQTT sessions will avoid re-processing duplicate requests
Overview
This library attempts to closely mirror the API of gRPC-haskell so that it can be easily swapped in and out with existing gRPC infrastructure.
The basic flow of a request through this system: the Client publishes the gRPC request to the MQTT broker, the RemoteClient receives it, performs the actual gRPC call against the local server, and publishes the response back over MQTT to the Client.
The two main components of this library are the Client and RemoteClient modules.
Client
A connection to the MQTT broker can be created and used via withMQTTGRPCClient by providing an MQTTGRPCConfig.
Client functions for calling gRPC services over MQTT can be generated from your existing proto files with Template Haskell using mqttClientFuncs. The generated code requires the corresponding proto file to have already been compiled with proto3-suite. See Test/ProtoClients.hs for an example.
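In practice this is a single splice in a dedicated module. A minimal sketch (the import path and proto path here are assumptions, and depending on the version the function may also take a batching parameter, covered under Batching below; Test/ProtoClients.hs shows the real usage):

{-# LANGUAGE TemplateHaskell #-}

module ProtoClients where

-- Import path is an assumption; check Test/ProtoClients.hs for the real one.
import Network.GRPC.MQTT.TH.Client (mqttClientFuncs)

-- Generates MQTT client functions (e.g. addHelloMqttClient) for every
-- service in the (hypothetical) proto file below.
$(mqttClientFuncs "proto/addhello.proto")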
General usage:
withMQTTGRPCClient logger myMQTTConfig $ \client -> do
  let AddHello mqttAdd mqttHelloSS = addHelloMqttClient client baseTopic
  result <- mqttAdd (MQTTNormalRequest (TwoInts 4 6) 2 [])
  ...
Here AddHello is a type that was generated by proto3-suite, and addHelloMqttClient is generated with mqttClientFuncs.
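The result of a normal call can be inspected much like a gRPC-haskell ClientResult. A minimal sketch, assuming the library wraps gRPC-haskell's result in an MQTT-aware sum type; the constructor names MQTTError and GRPCResult and the OneInt response type are assumptions here:

case result of
  -- The gRPC response relayed back over MQTT, mirroring gRPC-haskell's
  -- ClientNormalResponse (body, initial/trailing metadata, status, details).
  GRPCResult (ClientNormalResponse (OneInt answer) _initMD _trailMD _status _details) ->
    print answer
  GRPCResult (ClientErrorResponse err) ->
    putStrLn ("gRPC error: " ++ show err)
  -- A failure in the MQTT transport itself, before any gRPC response.
  MQTTError msg ->
    putStrLn ("MQTT error: " ++ show msg)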
RemoteClient
The RemoteClient performs the actual gRPC requests on behalf of the Client. Similarly to the Client, the RemoteClient code can be generated using mqttRemoteClientMethodMap. See Test/ProtoRemoteClients.hs for an example. The resulting MethodMap is a mapping from gRPC method names to functions for making those requests. These maps can be combined if you have multiple gRPC servers running on the machine.
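As on the client side, generation is a one-line Template Haskell splice (a sketch; the import path and proto path are assumptions, and see Test/ProtoRemoteClients.hs for the real usage):

-- In a module with TemplateHaskell enabled. Import path is an assumption.
import Network.GRPC.MQTT.TH.RemoteClient (mqttRemoteClientMethodMap)

-- Generates e.g. addHelloRemoteClientMethodMap, which builds a MethodMap
-- from a connected gRPC client. The proto path is hypothetical.
$(mqttRemoteClientMethodMap "proto/addhello.proto")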
General usage:
withGRPCClient myGRPCClientConfig $ \grpcClient -> do
  methodMap <- addHelloRemoteClientMethodMap grpcClient
  runRemoteClient logger myMQTTConfig baseTopic methodMap
Using multiple servers:
methodMapAH <- addHelloRemoteClientMethodMap grpcClient1
methodMapMG <- multGoodbyeRemoteClientMethodMap grpcClient2
let methodMap = methodMapAH <> methodMapMG
runRemoteClient logger myMQTTConfig baseTopic methodMap
Batching
Typically, each message transmitted through a gRPC method call results in one or more packets published over MQTT. A packet size limit is configured in MQTTGRPCConfig.mqttMsgSizeLimit. If a message is larger than this limit, it is split into multiple packets, and each packet is published separately.
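For example, to raise the limit so that fewer messages are split (a sketch using record-update syntax; only the mqttMsgSizeLimit field name comes from this package, while defaultConfig stands in for however you construct your MQTTGRPCConfig):

-- Split only messages larger than 128 KiB into multiple MQTT packets.
-- defaultConfig is a placeholder, not a known library export.
myMQTTConfig :: MQTTGRPCConfig
myMQTTConfig = defaultConfig { mqttMsgSizeLimit = 128 * 1024 }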
The performance of streaming RPCs that transmit many small messages in a short time window can be improved dramatically by enabling batching. When batching is enabled, the sender accumulates multiple messages into a single packet and flushes them in one publish operation, reducing the per-message MQTT protocol overhead.
Batching can be enabled in one of the following ways:
- The mqttClientFuncs and mqttRemoteClientMethodMap Template Haskell functions accept a parameter that specifies whether batching should be enabled for the generated RPC methods. This is the recommended approach if you do not want to modify .proto files; a sketch of this appears after the proto examples below.
- A protocol buffer option hs_grpc_mqtt_batched_stream is available for use at the service or method level. Setting this to true/false will enable/disable batching respectively. The method-level option has higher precedence than the service-level option, and the service-level option has higher precedence than the Template Haskell parameter.
Example usage:
service AddHello {
  /* Enables batching for all methods in this service */
  option hs_grpc_mqtt_batched_stream = true;
  ...
}

service AddHello {
  /* Server Streaming method with batching */
  rpc HelloSSBatch(SSRqt) returns (stream SSRpy) {
    option hs_grpc_mqtt_batched_stream = true;
  }
}
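The same effect without touching the proto file is to pass the batching parameter to the Template Haskell functions, as mentioned above (a sketch; the parameter's type and constructor, Batched here, and the argument order are assumptions, so consult the Haddocks for the exact signature):

-- Enable batching by default for the streaming methods generated from
-- this (hypothetical) proto file; a service- or method-level proto
-- option still takes precedence.
$(mqttClientFuncs Batched "proto/addhello.proto")
$(mqttRemoteClientMethodMap Batched "proto/addhello.proto")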
Note that batching introduces an additional step between the client code triggering a send operation and the actual MQTT publish. Messages accumulated in memory can be lost if the sender encounters an error or crashes before they are flushed. Also, if messages are produced at a very low rate, the sender will hold the accumulated messages in memory for a long time, and the receiver will see added latency because nothing is published until the packet size limit is reached. Batching is not recommended in such cases.
Building
This package uses Nix flakes to manage dependencies and provide a reproducible build environment.
To build the package:
nix build
To start a development environment:
nix develop
This starts a shell with the required development tools (such as ghc and cabal) on the PATH. You can then build and test the code with cabal.
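For example, inside the development shell:

cabal build
cabal test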