From 3b3bfb4f672cd0f57460d2407267d74d62156c02 Mon Sep 17 00:00:00 2001
From: Marcin Rataj
Date: Thu, 19 Dec 2024 23:41:16 +0100
Subject: [PATCH] docs: config and changelog fixes

links need to be absolute because we reuse markdown in github releases
---
 docs/changelogs/v0.33.md | 6 +++---
 docs/config.md | 8 ++++++--
 2 files changed, 9 insertions(+), 5 deletions(-)

diff --git a/docs/changelogs/v0.33.md b/docs/changelogs/v0.33.md
index 3a6db912fe9..07fabe03295 100644
--- a/docs/changelogs/v0.33.md
+++ b/docs/changelogs/v0.33.md
@@ -35,7 +35,7 @@ Onboarding files and directories with `ipfs add --to-files` now requires non-emp

#### New options for faster writes: `WriteThrough`, `BlockKeyCacheSize`, `BatchMaxNodes`, `BatchMaxSize`

-Now that Kubo supports [`pebble`](../datastores.md#pebbleds) as a datastore backend, it becomes very useful to expose some additional configuration options for how the blockservice/blockstore/datastore combo behaves.
+Now that Kubo supports [`pebble`](https://github.com/ipfs/kubo/blob/master/docs/datastores.md#pebbleds) as a datastore backend, it becomes very useful to expose some additional configuration options for how the blockservice/blockstore/datastore combo behaves.

Usually, LSM-tree based datastores like Pebble or Badger have very fast write performance (blocks are streamed to disk) while incurring read-amplification penalties (blocks need to be looked up in the index to know where they are on disk), especially noticeable on spinning disks.

@@ -47,9 +47,9 @@ We have also made the size of the two-queue blockstore cache configurable with a

Finally, we have added two new options to the `Import` section to control the maximum size of write-batches: `BatchMaxNodes` and `BatchMaxSize`. These are set by default to `128` nodes and `20MiB`. Increasing them will batch more items together when importing data with `ipfs dag import`, which can speed things up.
It is important to find a balance between available memory (used to hold the batch), disk latencies (when writing the batch), and processing power (when preparing the batch, as nodes are sorted and duplicates removed).

-As a reminder, details from all the options are explained in the [configuration documentation](../config.md).
+As a reminder, details on all the options are explained in the [configuration documentation](https://github.com/ipfs/kubo/blob/master/docs/config.md).

-We recommend users trying Pebble as a datastore backend to disable both blockstore bloom-filter and key caching layers and enable write through as a way to evaluate the raw performance of the underlying datastore, which includes its own bloom-filter and caching layers (default cache size is `8MiB` and can be configured in the [options](../datastores.md#pebbleds).
+We recommend that users trying Pebble as a datastore backend disable both the blockstore bloom-filter and key-caching layers and enable write-through, as a way to evaluate the raw performance of the underlying datastore, which includes its own bloom-filter and caching layers (default cache size is `8MiB` and can be configured in the [options](https://github.com/ipfs/kubo/blob/master/docs/datastores.md#pebbleds)).

#### MFS stability with large number of writes

diff --git a/docs/config.md b/docs/config.md
index f4d333b9034..db305e40977 100644
--- a/docs/config.md
+++ b/docs/config.md
@@ -40,7 +40,7 @@ config file at runtime.
- [`Datastore.GCPeriod`](#datastoregcperiod)
- [`Datastore.HashOnRead`](#datastorehashonread)
- [`Datastore.BloomFilterSize`](#datastorebloomfiltersize)
-- [`Datastore.WriteTrhough`](#datastorewritethrough)
+- [`Datastore.WriteThrough`](#datastorewritethrough)
- [`Datastore.BlockKeyCacheSize`](#datastoreblockkeycachesize)
- [`Datastore.Spec`](#datastorespec)
- [`Discovery`](#discovery)

@@ -2463,10 +2463,12 @@ Default: `sha2-256`

Type: `optionalString`

-### `Import.BatchMaxNodes
+### `Import.BatchMaxNodes`

The maximum number of nodes in a write-batch. The total size of the batch is limited by `BatchMaxNodes` and `BatchMaxSize`.

+Increasing this will batch more items together when importing data with `ipfs dag import`, which can speed things up.
+
Default: `128`

Type: `optionalInteger`

@@ -2475,6 +2477,8 @@ Type: `optionalInteger`

The maximum size of a single write-batch (computed as the sum of the sizes of the blocks). The total size of the batch is limited by `BatchMaxNodes` and `BatchMaxSize`.

+Increasing this will batch more items together when importing data with `ipfs dag import`, which can speed things up.
+
Default: `20971520` (20MiB)

Type: `optionalInteger`
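
For reference, the options this patch documents live in two sections of the Kubo config file. A sketch of what an excerpt might look like when evaluating Pebble as described in the changelog entry (example values, not recommendations; `BloomFilterSize: 0` and `BlockKeyCacheSize: 0` disable the blockstore's own filtering/caching layers, and the `Import` values shown are the documented defaults):

```json
{
  "Datastore": {
    "BloomFilterSize": 0,
    "BlockKeyCacheSize": 0,
    "WriteThrough": true
  },
  "Import": {
    "BatchMaxNodes": 128,
    "BatchMaxSize": 20971520
  }
}
```

Raising `BatchMaxNodes`/`BatchMaxSize` trades memory (held batches) for fewer, larger writes during `ipfs dag import`, per the balance discussed above.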