
Pipelining and command flushing


Redis is a TCP server using the client-server model and what is called a Request/Response protocol. This means that usually a request is accomplished with the following steps:

  • The client sends a query to the server and reads from the socket, usually in a blocking way, for the server response.

  • The server processes the command and sends the response back to the client.

A Request/Response server can be implemented so that it is able to process new requests even if the client did not already read the old responses. This way it is possible to send multiple commands to the server without waiting for the replies at all, and finally read the replies in a single step.

When using a synchronous API, in general, the program flow and the underlying connection are blocked until the response arrives.

lettuce works differently. Although it provides a synchronous API that blocks on a per-thread basis, lettuce is designed to operate in a pipelining way: multiple threads can share one connection. While one thread waits for its command to complete, another thread can send a new command over the same connection. As soon as the first response arrives, the first thread’s program flow continues, while the second request is still being processed by Redis and returns at a later point in time.
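For illustration, a minimal sketch of two threads sharing one connection through the sync API (the Redis URI, thread names and keys are placeholders):

RedisClient client = RedisClient.create("redis://localhost");
StatefulRedisConnection<String, String> connection = client.connect();

Runnable worker = () -> {
    RedisCommands<String, String> sync = connection.sync();

    // each call blocks only the calling thread until its own response arrives
    sync.set("key-" + Thread.currentThread().getName(), "value");
    System.out.println(sync.get("key-" + Thread.currentThread().getName()));
};

new Thread(worker, "worker-1").start();
new Thread(worker, "worker-2").start();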

lettuce is built on top of netty to decouple reading from writing and to provide thread-safe connections. As a result, reading and writing can be handled by different threads, and commands are written and read independently of each other, but in sequence. You can find more details about message ordering in the Wiki. The transport and command execution layer does not block processing while a command is written, processed, or while its response is read. lettuce sends commands at the moment they are invoked.

A good example is the async API. Every invocation on the async API returns a Future (a response handle) after the command has been written to the netty pipeline. A write to the pipeline does not mean the command has been written to the underlying transport. Multiple commands can be written without awaiting their responses. Invocations to the API (sync, async, and, starting with 4.0, also the reactive API) can be performed by multiple threads.
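A brief sketch of this behavior (key and value are placeholders): each async call returns a RedisFuture immediately, and the result can be consumed later without blocking the calling thread.

RedisAsyncCommands<String, String> async = connection.async();

// the command is handed to the netty pipeline and a response handle is returned immediately
RedisFuture<String> set = async.set("key", "value");
RedisFuture<String> get = async.get("key");

// the calling thread stays free; the result is consumed when it arrives
get.thenAccept(value -> System.out.println("Got: " + value));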

Sharing a connection between threads is possible but keep in mind:

The longer commands take to process, the longer other callers wait for their results.

You should not use transactional commands (MULTI) on a shared connection. If you use Redis-blocking commands (e.g. BLPOP), all callers of the shared connection are blocked until the blocking command returns, which impacts the performance of the other threads. Blocking commands can be a reason to use multiple connections, as sketched below.
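A minimal sketch of isolating a blocking command on its own connection (the key name and timeout are placeholders):

// dedicated connection, so the shared connection is not stalled by BLPOP
StatefulRedisConnection<String, String> dedicated = client.connect();

// blocks this connection for up to 10 seconds until an element is available
KeyValue<String, String> element = dedicated.sync().blpop(10, "job-queue");

dedicated.close();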

Command flushing

The normal operation mode of lettuce is to flush every command, which means that every command is written to the transport right after it is issued. This is the behavior most users want. Command flushing can be controlled since version 3.3.

Why would you want to disable auto-flushing? A flush is an expensive system call and impacts performance. Batching, i.e. disabling auto-flushing, can be used under certain conditions and is recommended if:

  • You perform multiple calls to Redis and you’re not depending immediately on the result of the call

  • You’re bulk-importing

Controlling the flush behavior is only available on the async API. The sync API emulates blocking calls, and as soon as you invoke a command, you can no longer interact with the connection until the blocking call ends.

The AutoFlushCommands state is set per connection and therefore affects all threads using the shared connection. If you want to avoid this effect, use dedicated connections. The AutoFlushCommands state cannot be set on connections pooled by the lettuce connection pooling.

Example 1. Asynchronous Pipelining
StatefulRedisConnection<String, String> connection = client.connect();
RedisAsyncCommands<String, String> commands = connection.async();

// disable auto-flushing
commands.setAutoFlushCommands(false);

// perform a series of independent calls
List<RedisFuture<?>> futures = new ArrayList<>();
for (int i = 0; i < iterations; i++) {
    futures.add(commands.set("key-" + i, "value-" + i));
}

// write all commands to the transport layer
commands.flushCommands();

// synchronization example: Wait until all futures complete
boolean result = LettuceFutures.awaitAll(5, TimeUnit.SECONDS,
                   futures.toArray(new RedisFuture[futures.size()]));

// later
connection.close();

Performance impact

Commands invoked in the default flush-after-write mode perform on the order of about 100Kops/sec (async/multithreaded execution). Grouping multiple commands in a batch (the size depends on your environment, but batches between 50 and 1000 worked nicely during performance tests) can increase the throughput up to a factor of 5x.

Pipelining within the Redis docs: http://redis.io/topics/pipelining
