update summary
Signed-off-by: Alex Chi Z <[email protected]>
skyzh committed Jan 30, 2024
1 parent acc3c95 commit 417e81e
Showing 3 changed files with 16 additions and 4 deletions.
mini-lsm-book/src/SUMMARY.md (2 additions, 2 deletions)

@@ -27,8 +27,8 @@
- [Snapshots - Memtables and Timestamps](./week3-02-snapshot-read-part-1.md)
- [Snapshots - Transaction API](./week3-03-snapshot-read-part-2.md)
- [Watermark and GC](./week3-04-watermark.md)
- - [Transaction and OCC (WIP)](./week3-05-txn-occ.md)
- - [Serializable Snapshot Isolation (WIP)](./week3-06-serializable.md)
+ - [Transaction and OCC](./week3-05-txn-occ.md)
+ - [Serializable Snapshot Isolation](./week3-06-serializable.md)
- [Snack Time: Compaction Filter (WIP)](./week3-07-compaction-filter.md)
- [The Rest of Your Life (TBD)](./week4-overview.md)

mini-lsm-book/src/week3-06-serializable.md (11 additions, 1 deletion)

@@ -102,11 +102,21 @@
You can skip the check if `write_set` is empty. A read-only transaction can always be committed.

You should also modify the `put`, `delete`, and `write_batch` interfaces in `LsmStorageInner`. We recommend defining a helper function `write_batch_inner` that processes a write batch. If `options.serializable = true`, then `put`, `delete`, and the user-facing `write_batch` should create a transaction instead of directly creating a write batch. Your write-batch helper function should also return a `u64` commit timestamp so that `Transaction::commit` can correctly store the committed transaction data into the MVCC structure.
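
To make the flow concrete, below is a minimal, self-contained sketch of such a helper. The `WriteBatchRecord` enum, the in-memory `BTreeMap` store, and the single-lock scheme are illustrative assumptions rather than the actual mini-lsm definitions:

```rust
use std::collections::BTreeMap;
use std::sync::Mutex;

/// Illustrative batch record type (an assumption, not mini-lsm's actual type).
pub enum WriteBatchRecord {
    Put(Vec<u8>, Vec<u8>),
    Del(Vec<u8>),
}

pub struct LsmStorageInner {
    /// (user_key, commit timestamp) -> value; an empty value encodes a delete.
    store: Mutex<BTreeMap<(Vec<u8>, u64), Vec<u8>>>,
    /// Latest committed timestamp, bumped once per batch.
    latest_commit_ts: Mutex<u64>,
}

impl LsmStorageInner {
    /// Applies a batch atomically at a single commit timestamp and returns
    /// that timestamp, so a committing transaction can record its writes
    /// in the MVCC structures under the correct timestamp.
    pub fn write_batch_inner(&self, batch: &[WriteBatchRecord]) -> u64 {
        // Hold the timestamp lock across the whole batch so no other
        // writer can interleave with a partially applied batch.
        let mut ts_guard = self.latest_commit_ts.lock().unwrap();
        let commit_ts = *ts_guard + 1;
        let mut store = self.store.lock().unwrap();
        for record in batch {
            match record {
                WriteBatchRecord::Put(key, value) => {
                    store.insert((key.clone(), commit_ts), value.clone());
                }
                WriteBatchRecord::Del(key) => {
                    store.insert((key.clone(), commit_ts), Vec::new());
                }
            }
        }
        *ts_guard = commit_ts;
        commit_ts
    }
}
```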

## Task 4: Garbage Collection

In this task, you will need to modify:

```
src/mvcc/txn.rs
```

When you commit a transaction, you can also clean up the committed-transaction map by removing all transactions below the watermark, as they will never be involved in any future serializability validation.
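
A rough sketch of that cleanup is below; the `CommittedTxnData` shape and the `BTreeMap` keyed by commit timestamp are assumptions for illustration, not necessarily the exact mini-lsm definitions:

```rust
use std::collections::{BTreeMap, HashSet};

/// Assumed shape of the per-transaction data kept for serializability
/// checks (illustrative, not necessarily mini-lsm's exact struct).
pub struct CommittedTxnData {
    pub key_hashes: HashSet<u32>,
    pub read_ts: u64,
    pub commit_ts: u64,
}

/// Drop every committed transaction whose commit timestamp is below the
/// watermark: no live transaction has a read timestamp smaller than the
/// watermark, so these entries can never conflict with a future commit.
pub fn vacuum_committed_txns(
    committed_txns: &mut BTreeMap<u64, CommittedTxnData>,
    watermark: u64,
) {
    // `split_off` returns the entries with keys >= watermark; keep those
    // and let everything below the watermark be dropped.
    let live = committed_txns.split_off(&watermark);
    *committed_txns = live;
}
```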

## Test Your Understanding

* If you have some experience with building a relational database, consider the following question: assume we build a database on top of Mini-LSM that stores each row of a relational table as a key-value pair (key: primary key, value: serialized row) and enables serializable verification. Does the database system directly gain ANSI serializable isolation capability? Why or why not?
* What we implement here is actually write snapshot isolation (see [A critique of snapshot isolation](https://dl.acm.org/doi/abs/10.1145/2168836.2168853)), which guarantees serializability. Are there any cases where the execution is serializable but would be rejected by the write snapshot isolation validation?
-* There are databases that claim they have serializable snapshot isolation support by only tracking the keys accessed in gets and scans. Do they really prevent write skews caused by phantoms? (Okay... Actually, I'm talking about [BadgerDB](https://dgraph.io/blog/post/badger-txn/).)
+* There are databases that claim to support serializable snapshot isolation by tracking only the keys accessed in gets and scans (instead of key ranges). Do they really prevent write skew caused by phantoms? (Okay... Actually, I'm talking about [BadgerDB](https://dgraph.io/blog/post/badger-txn/).)

We do not provide reference answers to the questions; feel free to discuss them in the Discord community.

mini-lsm-book/src/week3-overview.md (3 additions, 1 deletion)

@@ -2,6 +2,8 @@

In this part, you will implement MVCC over the LSM engine that you built in the previous two weeks. We will add timestamp encoding in the keys to maintain multiple versions of a key, and change parts of the engine to ensure that old data is either retained or garbage-collected based on whether any user is still reading an old version.

The general approach of the MVCC part in this tutorial is inspired by and partially based on [BadgerDB](https://github.com/dgraph-io/badger).

The core of MVCC is to store and access multiple versions of a key in the storage engine. Therefore, we will need to change the key format to `user_key + timestamp (u64)`. On the user interface side, we will need new APIs that give users access to historical versions. In summary, we will add a monotonically increasing timestamp to the key.
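
As an illustration, here is one possible encoding — an assumption for this sketch, not necessarily the exact on-disk format used in the tutorial — in which plain byte-wise comparison orders entries by user key ascending and timestamp descending:

```rust
/// Append the timestamp to the user key so that byte-wise comparison
/// orders entries by (user_key ascending, timestamp descending).
/// Storing the bitwise NOT of the big-endian timestamp makes larger
/// (newer) timestamps sort first.
fn encode_key(user_key: &[u8], ts: u64) -> Vec<u8> {
    let mut key = Vec::with_capacity(user_key.len() + 8);
    key.extend_from_slice(user_key);
    key.extend_from_slice(&(!ts).to_be_bytes());
    key
}

fn main() {
    let newer = encode_key(b"a", 10);
    let older = encode_key(b"a", 5);
    // The newer version of the same user key compares smaller, so an
    // ascending scan sees the latest version first.
    assert!(newer < older);
    // Different user keys are still ordered by the user key itself.
    assert!(encode_key(b"a", 1) < encode_key(b"b", 100));
    println!("ordering checks passed");
}
```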

In previous parts, we assumed that newer keys were in the upper levels of the LSM tree and older keys in the lower levels. During compaction, we kept only the latest version of a key if multiple versions were found across levels, and the compaction process ensured that newer keys stayed in the upper levels by merging only adjacent levels/tiers. In the MVCC implementation, the key with the largest timestamp is the newest key, and during compaction we can only remove a key if no user is accessing an older version of the database. Although not keeping the latest version of a key in the upper level may still yield a correct MVCC LSM implementation, in this tutorial we choose to keep the invariant: if there are multiple versions of a key, a later version always appears in an upper level. A sketch of the resulting version-retention rule follows.
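
To make the garbage-collection rule concrete, here is a small sketch — under simplified assumed types rather than mini-lsm's real iterators — of which versions of a single user key survive a compaction, given the watermark:

```rust
/// For one user key, keep every version newer than the watermark, plus
/// the single newest version at or below it (a reader pinned at the
/// watermark may still need that one); everything older is garbage.
/// Each `(u64, Vec<u8>)` pair is (timestamp, value), sorted newest first.
fn retain_versions(versions: &[(u64, Vec<u8>)], watermark: u64) -> Vec<(u64, Vec<u8>)> {
    let mut kept = Vec::new();
    for (ts, value) in versions {
        kept.push((*ts, value.clone()));
        if *ts <= watermark {
            // Newest version visible at the watermark: stop here and
            // drop all older versions.
            break;
        }
    }
    kept
}

fn main() {
    let versions = vec![
        (7, b"v3".to_vec()),
        (5, b"v2".to_vec()),
        (3, b"v1".to_vec()),
    ];
    // With a watermark of 5, the version at timestamp 3 is unreachable
    // by any reader and is dropped.
    let kept = retain_versions(&versions, 5);
    assert_eq!(kept.len(), 2);
    println!("kept {} versions", kept.len());
}
```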

@@ -16,7 +16,7 @@

```
put/delete/write_batch(key, timestamp)
set_watermark(timestamp) # we will talk about watermarks soon!
```

-**Un-managed Mode APIs**
+**Un-managed/Normal Mode APIs**
```
get(key) -> value
scan(key_range) -> iterator<key, value>
```
