db: pipeline WAL rotation #2540

Open
jbowens opened this issue May 19, 2023 · 1 comment
jbowens commented May 19, 2023

Let L be the fsync latency of the WAL storage medium.

When the memtable and WAL are rotated, the first batch application to the new WAL may, in the worst case, need to wait for:

  1. An in-flight fsync of entries to the previous WAL to complete. ( L )
  2. A final fsync of entries to the previous WAL that did not make the in-flight fsync. ( L )
  3. A final fsync in LogWriter.Close to ensure the EOF trailer is synced. ( L )
  4. An fsync of the WAL directory to ensure the new WAL is durably linked under its new name. ( L )
  5. The fsync of the new batch itself. ( L )

Cumulatively, these can increase commit tail latencies by up to 5x (5 L in total). There are a few ways this could be reduced.
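
As a toy illustration of that worst case (invented names; not Pebble code), the five serialized fsync-bounded waits sum to 5 L:

```go
package main

import (
	"fmt"
	"time"
)

// L stands in for the fsync latency of the WAL storage medium.
const L = 5 * time.Millisecond

// fsync is a stand-in for a real fsync call; it simply costs L.
func fsync(step string) time.Duration {
	time.Sleep(L)
	fmt.Println("synced:", step)
	return L
}

func main() {
	// The five serialized waits a batch may hit across a rotation.
	var total time.Duration
	for _, step := range []string{
		"(1) in-flight fsync of entries to the previous WAL",
		"(2) final fsync of straggler entries to the previous WAL",
		"(3) fsync of the EOF trailer in LogWriter.Close",
		"(4) fsync of the WAL directory linking the new WAL",
		"(5) fsync of the new batch to the new WAL",
	} {
		total += fsync(step)
	}
	fmt.Printf("worst case: %v = 5 x L (%v)\n", total, L)
}
```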

(2) & (3) could together be bounded by 1 L through more coordination between LogWriter.Close and the LogWriter's flush loop. The final flush of log entries (2) can include the EOF trailer and its sync (3):

pebble/record/log_writer.go, lines 638 to 645 at f6eaf9a:

```go
// Sync any flushed data to disk. NB: flushLoop will sync after flushing the
// last buffered data only if it was requested via syncQ, so we need to sync
// here to ensure that all the data is synced.
err := w.flusher.err
var syncLatency time.Duration
if err == nil && w.s != nil {
	syncLatency, err = w.syncWithLatency()
}
```
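
A minimal sketch of that coordination, assuming hypothetical internals (the logWriter type and flushAndSync are invented, not Pebble's actual API): Close folds the EOF trailer into the final flush so that straggler entries and the trailer share a single fsync:

```go
package main

import (
	"os"
	"sync"
)

// logWriter is a toy stand-in for Pebble's record.LogWriter; all names
// and structure here are illustrative, not the actual implementation.
type logWriter struct {
	mu      sync.Mutex
	pending []byte   // buffered log entries not yet flushed
	f       *os.File // the WAL file
}

// flushAndSync writes everything buffered plus the given EOF trailer,
// then issues a single fsync covering all of it. Folding the trailer
// into the final flush bounds steps (2) and (3) by one L.
func (w *logWriter) flushAndSync(eofTrailer []byte) error {
	w.mu.Lock()
	buf := append(w.pending, eofTrailer...)
	w.pending = nil
	w.mu.Unlock()
	if _, err := w.f.Write(buf); err != nil {
		return err
	}
	return w.f.Sync() // one fsync for stragglers + trailer
}

func (w *logWriter) Close() error {
	if err := w.flushAndSync([]byte("EOF")); err != nil {
		return err
	}
	return w.f.Close()
}

func main() {
	f, err := os.CreateTemp("", "wal")
	if err != nil {
		panic(err)
	}
	defer os.Remove(f.Name())
	w := &logWriter{pending: []byte("entries"), f: f}
	if err := w.Close(); err != nil {
		panic(err)
	}
}
```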

(4) & (5) could happen in parallel, but it would require some additional, delicate synchronization.
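
One possible shape of that synchronization, as a hedged sketch (syncDirAndBatch and its signature are invented): the directory fsync runs concurrently with the batch's write and fsync, and the commit is acknowledged only after both succeed:

```go
package walpipeline

import "os"

// syncDirAndBatch overlaps the WAL directory fsync (4) with the first
// batch's write and fsync to the new WAL (5). This is an illustrative
// sketch, not Pebble's implementation; the delicate part in practice is
// that the commit must not be acknowledged until both syncs complete.
func syncDirAndBatch(dir, wal *os.File, batch []byte) error {
	dirErr := make(chan error, 1)
	go func() {
		dirErr <- dir.Sync() // (4) durably link the new WAL
	}()

	// (5) write and sync the first batch concurrently with (4).
	_, err := wal.Write(batch)
	if err == nil {
		err = wal.Sync()
	}

	// Wait for the directory sync regardless of the batch's outcome.
	if derr := <-dirErr; err == nil {
		err = derr
	}
	return err
}
```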

Alternatively, we could prepare the next WAL ahead of time. In steady state, Pebble would have two open WALs with log numbers >= minUnflushedLogNum: current and next. The next LogWriter's flushLoop would synchronize with current's Close, refusing to signal waiters in its sync queue until current's Close has completed. By addressing (2) & (3) as well, this would eliminate any additional worst-case fsync latency from the WAL rotation itself, bringing it in line with ordinary WAL fsyncs.
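
A sketch of that gating, with invented names (nextLogWriter, prevClosed, syncQ; not Pebble's types): next's flush loop may sync its own data immediately but withholds acknowledgement until the previous WAL's Close completes:

```go
package walpipeline

import "os"

// nextLogWriter is a toy model of the "next" WAL's writer; all names
// are illustrative. prevClosed is closed by the previous LogWriter's
// Close once its final sync (including the EOF trailer) completes.
type nextLogWriter struct {
	prevClosed <-chan struct{}
	syncQ      chan chan error // each element is a waiter to signal
	f          *os.File
}

func (w *nextLogWriter) flushLoop() {
	for waiter := range w.syncQ {
		err := w.f.Sync()
		// This WAL's data is now durable, but a commit cannot be
		// acknowledged until the previous WAL's Close has completed:
		// otherwise earlier log entries might still be unsynced.
		<-w.prevClosed
		waiter <- err
	}
}
```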

In Open, we would need to relax or rework the strictWALTail option. Currently, all replayed WALs besides the most recent one are required to have clean tails indicating that they were deliberately closed; anything else is interpreted as corruption. With this change, the second most recent WAL could have an unclean tail for some window of time. We could include a marker entry in the next WAL, written only once the next WAL has observed that current's Close completed; if recovery sees this marker, an unclean tail in the previous WAL should still be treated as corruption.
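
A sketch of the relaxed tail check during replay (validateTail and its parameters are hypothetical):

```go
package walpipeline

import "errors"

// validateTail sketches the relaxed strictWALTail check. walIndex and
// mostRecent identify a replayed WAL's position; nextHasCloseMarker
// reports whether the following WAL contains the marker entry written
// after this WAL's Close completed. All names are illustrative.
func validateTail(walIndex, mostRecent int, tailClean, nextHasCloseMarker bool) error {
	switch {
	case tailClean:
		return nil
	case walIndex == mostRecent:
		// The most recent WAL may always end mid-write (crash).
		return nil
	case walIndex == mostRecent-1 && !nextHasCloseMarker:
		// Pipelined rotation: the second most recent WAL may have an
		// unclean tail until the next WAL records the close marker.
		return nil
	default:
		return errors.New("pebble: corrupt WAL tail")
	}
}
```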

Jira issue: PEBBLE-192

jbowens commented Jul 24, 2023

In #2762 we've unbounded the amount of data that may be queued for flushing within a single WAL. Today, the 1:1 relationship between WALs and memtables means that the amount of data queued for flushing is bounded by the size of the mutable memtable. If we begin pipelining WALs, allowing more than one WAL to queue writes, this bound will effectively be lifted to opts.MemTableStopWritesThreshold * opts.MemtableSize. If/when we make this change, we should reevaluate what, if any, additional bound we want to impose on blocks queued for flushing.
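
For concreteness, a toy calculation of the lifted bound, using example values (option names as in the comment above):

```go
package main

import "fmt"

func main() {
	memTableSize := 64 << 20 // opts.MemtableSize: 64 MiB, for example
	stopWritesThreshold := 4 // opts.MemTableStopWritesThreshold

	// Today: data queued for flushing <= one mutable memtable.
	fmt.Printf("current bound: %d MiB\n", memTableSize>>20)
	// With pipelined WALs: up to the entire memtable queue.
	fmt.Printf("pipelined bound: %d MiB\n", (stopWritesThreshold*memTableSize)>>20)
}
```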
