docs: config and changelog fixes
links need to be absolute because we reuse markdown in github releases
lidel committed Dec 19, 2024
1 parent 33b9f74 commit 3b3bfb4
Showing 2 changed files with 9 additions and 5 deletions.
6 changes: 3 additions & 3 deletions docs/changelogs/v0.33.md
@@ -35,7 +35,7 @@ Onboarding files and directories with `ipfs add --to-files` now requires non-empty

#### New options for faster writes: `WriteThrough`, `BlockKeyCacheSize`, `BatchMaxNodes`, `BatchMaxSize`

Now that Kubo supports [`pebble`](../datastores.md#pebbleds) as a datastore backend, it becomes very useful to expose some additional configuration options for how the blockservice/blockstore/datastore combo behaves.
Now that Kubo supports [`pebble`](https://github.com/ipfs/kubo/blob/master/docs/datastores.md#pebbleds) as a datastore backend, it becomes very useful to expose some additional configuration options for how the blockservice/blockstore/datastore combo behaves.

Usually, LSM-tree based datastores like Pebble or Badger have very fast write performance (blocks are streamed to disk) while incurring read-amplification penalties (blocks need to be looked up in the index to know where they are on disk), which is especially noticeable on spinning disks.
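A minimal sketch of what enabling the new option looks like in the Kubo config file (JSON excerpt; the `true` value here is illustrative, not necessarily the shipped default):

```json
{
  "Datastore": {
    "WriteThrough": true
  }
}
```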

@@ -47,9 +47,9 @@ We have also made the size of the two-queue blockstore cache configurable with a

Finally, we have added two new options to the `Import` section to control the maximum size of write-batches: `BatchMaxNodes` and `BatchMaxSize`. These are set by default to `128` nodes and `20MiB`. Increasing them will batch more items together when importing data with `ipfs dag import`, which can speed things up. It is important to find a balance between available memory (used to hold the batch), disk latency (when writing the batch) and processing power (when preparing the batch, as nodes are sorted and duplicates removed).
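Spelled out as a config excerpt, the two `Import` options and their documented defaults look like this (a sketch; tune the values against your memory and disk budget):

```json
{
  "Import": {
    "BatchMaxNodes": 128,
    "BatchMaxSize": 20971520
  }
}
```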

As a reminder, details from all the options are explained in the [configuration documentation](../config.md).
As a reminder, details from all the options are explained in the [configuration documentation](https://github.com/ipfs/kubo/blob/master/docs/config.md).

We recommend that users trying Pebble as a datastore backend disable both the blockstore bloom-filter and key-caching layers and enable write-through, as a way to evaluate the raw performance of the underlying datastore, which includes its own bloom-filter and caching layers (the default cache size is `8MiB` and can be configured in the [options](../datastores.md#pebbleds)).
We recommend that users trying Pebble as a datastore backend disable both the blockstore bloom-filter and key-caching layers and enable write-through, as a way to evaluate the raw performance of the underlying datastore, which includes its own bloom-filter and caching layers (the default cache size is `8MiB` and can be configured in the [options](https://github.com/ipfs/kubo/blob/master/docs/datastores.md#pebbleds)).
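That recommendation can be sketched as a config excerpt (a `BloomFilterSize` of `0` disables the bloom filter; a `BlockKeyCacheSize` of `0` here is used to disable the key cache for the evaluation):

```json
{
  "Datastore": {
    "BloomFilterSize": 0,
    "BlockKeyCacheSize": 0,
    "WriteThrough": true
  }
}
```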

#### MFS stability with large number of writes

8 changes: 6 additions & 2 deletions docs/config.md
@@ -40,7 +40,7 @@ config file at runtime.
- [`Datastore.GCPeriod`](#datastoregcperiod)
- [`Datastore.HashOnRead`](#datastorehashonread)
- [`Datastore.BloomFilterSize`](#datastorebloomfiltersize)
- [`Datastore.WriteTrhough`](#datastorewritethrough)
- [`Datastore.WriteThrough`](#datastorewritethrough)
- [`Datastore.BlockKeyCacheSize`](#datastoreblockkeycachesize)
- [`Datastore.Spec`](#datastorespec)
- [`Discovery`](#discovery)
@@ -2463,10 +2463,12 @@ Default: `sha2-256`

Type: `optionalString`

### `Import.BatchMaxNodes
### `Import.BatchMaxNodes`

The maximum number of nodes in a write-batch. The total size of the batch is limited by `BatchMaxNodes` and `BatchMaxSize`.

Increasing this will batch more items together when importing data with `ipfs dag import`, which can speed things up.

Default: `128`

Type: `optionalInteger`
@@ -2475,6 +2477,8 @@ Type: `optionalInteger`

The maximum size of a single write-batch (computed as the sum of the sizes of the blocks). The total size of the batch is limited by `BatchMaxNodes` and `BatchMaxSize`.

Increasing this will batch more items together when importing data with `ipfs dag import`, which can speed things up.

Default: `20971520` (20MiB)

Type: `optionalInteger`
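For instance, a node importing very large DAGs on fast storage might raise both limits together so that batches fill up before being flushed (the values below are illustrative, not recommendations):

```json
{
  "Import": {
    "BatchMaxNodes": 512,
    "BatchMaxSize": 104857600
  }
}
```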
