From c77ca4ad845d19ddf4b12847fdb2a54165716cd6 Mon Sep 17 00:00:00 2001 From: Anton Patsev Date: Sun, 25 Feb 2024 15:21:08 +0600 Subject: [PATCH 1/2] Add missing commas, correction of spelling errors --- examples/dbnode/proto_client/README.md | 2 +- scripts/development/m3_stack/README.md | 2 +- site/content/architecture/m3aggregator/flushing.md | 4 ++-- site/content/architecture/m3coordinator.md | 2 +- site/content/architecture/m3db/caching.md | 2 +- site/content/architecture/m3db/overview.md | 2 +- site/content/architecture/m3db/sharding.md | 10 +++++----- site/content/architecture/m3db/storage.md | 2 +- site/content/architecture/m3query/blocks.md | 2 +- site/content/cluster/binaries_cluster.md | 2 +- site/content/cluster/kubernetes_cluster.md | 4 ++-- site/content/faqs/troubleshooting.md | 2 +- site/content/how_to/any_remote_storage.md | 4 ++-- site/content/how_to/grafana.md | 2 +- site/content/how_to/monitoring_m3/tracing.md | 6 +++--- site/content/includes/headers_optional_read_write.md | 2 +- .../includes/headers_optional_read_write_all.md | 2 +- site/content/integrations/graphite.md | 2 +- 18 files changed, 27 insertions(+), 27 deletions(-) diff --git a/examples/dbnode/proto_client/README.md b/examples/dbnode/proto_client/README.md index 9d3fa1db4e..93753fba9d 100644 --- a/examples/dbnode/proto_client/README.md +++ b/examples/dbnode/proto_client/README.md @@ -1,5 +1,5 @@ # Protobuf Client Example 1. Setup an M3DB container as described in the [using M3DB as a general purpose time series database guide](https://docs.m3db.io/how_to/use_as_tsdb). -2. Modify `config.yaml` with any changes you've made to the default configuration. Also if you make any changes to M3DB's configuration make sure to do so before restarting the container as M3DB does not reload YAML configuration dynamically. +2. Modify `config.yaml` with any changes you've made to the default configuration. Also, if you make any changes to M3DB's configuration, make sure to do so before restarting the container as M3DB does not reload YAML configuration dynamically. 3. Execute `go run main.go -f config.yaml` \ No newline at end of file diff --git a/scripts/development/m3_stack/README.md b/scripts/development/m3_stack/README.md index 689766e41d..0d2e24ff7c 100644 --- a/scripts/development/m3_stack/README.md +++ b/scripts/development/m3_stack/README.md @@ -56,4 +56,4 @@ Load can easily be increased by modifying the `prometheus.yml` file to reduce th ## Containers Hanging / Unresponsive -Running the entire stack can be resource intensive. If the containers are unresponsive try increasing the amount of cores and memory that the docker daemon is allowed to use. +Running the entire stack can be resource intensive. If the containers are unresponsive, try increasing the amount of cores and memory that the docker daemon is allowed to use. diff --git a/site/content/architecture/m3aggregator/flushing.md b/site/content/architecture/m3aggregator/flushing.md index 95fffc77df..8cd9bcdd7b 100644 --- a/site/content/architecture/m3aggregator/flushing.md +++ b/site/content/architecture/m3aggregator/flushing.md @@ -56,6 +56,6 @@ data gets discarded. Similarly, if the shard has its cutoff field set to some value, the shard will [stop flushing](https://github.com/m3db/m3/blob/0865ebc80e85234b00532f93521438856883da9c/src/aggregator/aggregator/list.go#L323-L330) -once the wall clock will go past the given cutoff timestamp. +once the wall clock goes past the given cutoff timestamp. 
-If the shard does not have cutover/cutoff fields it will flush indefinitely. +If the shard does not have cutover/cutoff fields, it will flush indefinitely. diff --git a/site/content/architecture/m3coordinator.md b/site/content/architecture/m3coordinator.md index dce7f746a2..a7f4b0b090 100644 --- a/site/content/architecture/m3coordinator.md +++ b/site/content/architecture/m3coordinator.md @@ -7,6 +7,6 @@ M3 Coordinator is a service that coordinates reads and writes between upstream s It also provides management APIs to setup and configure different parts of M3. -The coordinator is generally a bridge for read and writing different types of metrics formats and a management layer for M3. +The coordinator is generally a bridge for reading and writing different types of metrics formats and a management layer for M3. **Note**: M3DB by default includes the M3 Coordinator accessible on port 7201. For production deployments it is recommended to deploy it as a dedicated service to ensure you can scale the write coordination role separately and independently to database nodes as an isolated application separate from the M3DB database role. diff --git a/site/content/architecture/m3db/caching.md b/site/content/architecture/m3db/caching.md index 1d3451fbe5..db369dedaf 100644 --- a/site/content/architecture/m3db/caching.md +++ b/site/content/architecture/m3db/caching.md @@ -7,7 +7,7 @@ weight: 7 Blocks that are still being actively compressed / M3TSZ encoded must be kept in memory until they are sealed and flushed to disk. Blocks that have already been sealed, however, don't need to remain in-memory. In order to support efficient reads, M3DB implements various caching policies which determine which flushed blocks are kept in memory, and which are not. The "cache" itself is not a separate datastructure in memory, cached blocks are simply stored in their respective [in-memory objects](/docs/architecture/m3db/engine#in-memory-object-layout) with various different mechanisms (depending on the chosen cache policy) determining which series / blocks are evicted and which are retained. -For general purpose workloads, the `lru` caching policy is reccommended. +For general purpose workloads, the `lru` caching policy is recommended. ## None Cache Policy diff --git a/site/content/architecture/m3db/overview.md b/site/content/architecture/m3db/overview.md index d146958c41..e2aead5a64 100644 --- a/site/content/architecture/m3db/overview.md +++ b/site/content/architecture/m3db/overview.md @@ -30,7 +30,7 @@ Here are some attributes of the project: Due to the nature of the requirements for the project, which are primarily to reduce the cost of ingesting and storing billions of timeseries and providing fast scalable reads, there are a few limitations currently that make M3DB not suitable for use as a general purpose time series database. -The project has aimed to avoid compactions when at all possible, currently the only compactions M3DB performs are in-memory for the mutable compressed time series window (default configured at 2 hours). As such out of order writes are limited to the size of a single compressed time series window. Consequently backfilling large amounts of data is not currently possible. +The project has aimed to avoid compactions when at all possible, currently the only compactions M3DB performs are in-memory for the mutable compressed time series window (default configured at 2 hours). As such out of order writes are limited to the size of a single compressed time series window. 
Consequently backfilling large amounts of data is not currently possible. The project has also optimized the storage and retrieval of float64 values, as such there is no way to use it as a general time series database of arbitrary data structures just yet. diff --git a/site/content/architecture/m3db/sharding.md b/site/content/architecture/m3db/sharding.md index b7e2e39ee5..406e74a1f4 100644 --- a/site/content/architecture/m3db/sharding.md +++ b/site/content/architecture/m3db/sharding.md @@ -3,7 +3,7 @@ title: "Sharding" weight: 2 --- -Timeseries keys are hashed to a fixed set of virtual shards. Virtual shards are then assigned to physical nodes. M3DB can be configured to use any hashing function and a configured number of shards. By default [murmur3](https://en.wikipedia.org/wiki/MurmurHash) is used as the hashing function and 4096 virtual shards are configured. +Timeseries keys are hashed to a fixed set of virtual shards. Virtual shards are then assigned to physical nodes. M3DB can be configured to use any hashing function and a configured number of shards. By default, [murmur3](https://en.wikipedia.org/wiki/MurmurHash) is used as the hashing function and 4096 virtual shards are configured. ## Benefits @@ -23,7 +23,7 @@ Replication is synchronization during a write and depending on the consistency l Each replica has its own assignment of a single logical shard per virtual shard. -Conceptually it can be defined as: +Conceptually, it can be defined as: ```golang Replica { @@ -58,7 +58,7 @@ enum ShardState { The assignment of shards is stored in etcd. When adding, removing or replacing a node shard goal states are assigned for each shard assigned. -For a write to appear as successful for a given replica it must succeed against all assigned hosts for that shard. That means if there is a given shard with a host assigned as _LEAVING_ and another host assigned as _INITIALIZING_ for a given replica writes to both these hosts must appear as successful to return success for a write to that given replica. Currently however only _AVAILABLE_ shards count towards consistency, the work to group the _LEAVING_ and _INITIALIZING_ shards together when calculating a write success/error is not complete, see [issue 417](https://github.com/m3db/m3/issues/417). +For a write to appear as successful for a given replica it must succeed against all assigned hosts for that shard. That means if there is a given shard with a host assigned as _LEAVING_ and another host assigned as _INITIALIZING_ for a given replica writes to both these hosts must appear as successful to return success for a write to that given replica. Currently however only _AVAILABLE_ shards count towards consistency, the work to group the _LEAVING_ and _INITIALIZING_ shards together when calculating a write success/error is not complete, see [issue 417](https://github.com/m3db/m3/issues/417). It is up to the nodes themselves to bootstrap shards when the assignment of new shards to it are discovered in the _INITIALIZING_ state and to transition the state to _AVAILABLE_ once bootstrapped by calling the cluster management APIs when done. Using a compare and set this atomically removes the _LEAVING_ shard still assigned to the node that previously owned it and transitions the shard state on the new node from _INITIALIZING_ state to _AVAILABLE_. @@ -72,8 +72,8 @@ When a node is added to the cluster it is assigned shards that relieves load fai ### Node down -A node needs to be explicitly taken out of the cluster. 
If a node goes down and is unavailable the clients performing reads will be served an error from the replica for the shard range that the node owns. During this time it will rely on reads from other replicas to continue uninterrupted operation.
+A node needs to be explicitly taken out of the cluster. If a node goes down and is unavailable, the clients performing reads will be served an error from the replica for the shard range that the node owns. During this time, they will rely on reads from other replicas to continue uninterrupted operation.
 
 ### Node remove
 
-When a node is removed the shards it owns are assigned to existing nodes in the cluster. Remaining servers discover they are now in possession of shards that are _INITIALIZING_ and need to be bootstrapped and will begin bootstrapping the data using all replicas available.
+When a node is removed, the shards it owns are assigned to existing nodes in the cluster. Remaining servers discover they are now in possession of shards that are _INITIALIZING_ and need to be bootstrapped and will begin bootstrapping the data using all replicas available.
diff --git a/site/content/architecture/m3db/storage.md b/site/content/architecture/m3db/storage.md
index 541a71982b..32b8c59e08 100644
--- a/site/content/architecture/m3db/storage.md
+++ b/site/content/architecture/m3db/storage.md
@@ -19,7 +19,7 @@ A fileset has the following files:
 * **Data file:** Stores the series compressed data streams.
 * **Bloom filter file:** Stores a bloom filter bitset of all series contained in this fileset for quick knowledge of whether to attempt retrieving a series for this fileset volume.
 * **Digests file:** Stores the digest checksums of the info file, summaries file, index file, data file and bloom filter file in the fileset volume for integrity verification.
-* **Checkpoint file:** Stores a digest of the digests file and written at the succesful completion of a fileset volume being persisted, allows for quickly checking if a volume was completed.
+* **Checkpoint file:** Stores a digest of the digests file and is written at the successful completion of a fileset volume being persisted; it allows for quickly checking if a volume was completed.
 
 ```
 ┌───────────────────────┐
diff --git a/site/content/architecture/m3query/blocks.md b/site/content/architecture/m3query/blocks.md
index 92cbeece79..45fa103f48 100644
--- a/site/content/architecture/m3query/blocks.md
+++ b/site/content/architecture/m3query/blocks.md
@@ -59,6 +59,6 @@ In order to convert M3DB blocks into M3 Query blocks, we need to consolidate acr
 At a high level, M3DB returns to M3 Query `SeriesBlocks` that contain a list of `SeriesIterators` for a given timeseries per namespace. M3 Query then aligns the blocks across common time bounds before applying consolidation.
 
-For example, let's say we have a query that returns two timeseries from two different namespaces- 1min and 10min. When we create the M3 Query `Block`, in order to accurately consolidate results from these two namespaces, we need to convert everything to have a 10min resolution. Otherwise it will not be possible to perform correctly apply functions.
+For example, let's say we have a query that returns two timeseries from two different namespaces, 1min and 10min. When we create the M3 Query `Block`, in order to accurately consolidate results from these two namespaces, we need to convert everything to have a 10min resolution. Otherwise, it will not be possible to correctly apply functions.
> Coming Soon: More documentation on how M3 Query applies consolidation. diff --git a/site/content/cluster/binaries_cluster.md b/site/content/cluster/binaries_cluster.md index 961a793c1c..3ea7149fc0 100644 --- a/site/content/cluster/binaries_cluster.md +++ b/site/content/cluster/binaries_cluster.md @@ -34,7 +34,7 @@ M3 in production can run on local or cloud-based VMs, or bare-metal servers. M3 ### Network {{% notice tip %}} -If you use AWS or GCP, we recommend you use static IPs so that if you need to replace a host, you don't have to update configuration files on all the hosts, but decommission the old seed node and provision a new seed node with the same host ID and static IP that the old seed node had. If you're using AWS you can use an [Elastic Network Interface](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-eni.html) on a Virtual Private Cloud (VPC) and for GCP you can use an [internal static IP address](https://cloud.google.com/compute/docs/ip-addresses/reserve-static-internal-ip-address). +If you use AWS or GCP, we recommend you use static IPs so that if you need to replace a host, you don't have to update configuration files on all the hosts, but decommission the old seed node and provision a new seed node with the same host ID and static IP that the old seed node had. If you're using AWS, you can use an [Elastic Network Interface](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-eni.html) on a Virtual Private Cloud (VPC) and for GCP you can use an [internal static IP address](https://cloud.google.com/compute/docs/ip-addresses/reserve-static-internal-ip-address). {{% /notice %}} This example creates three static IP addresses for three storage nodes, using the embedded coordinator. diff --git a/site/content/cluster/kubernetes_cluster.md b/site/content/cluster/kubernetes_cluster.md index 36d48f541e..d9707a2e9c 100644 --- a/site/content/cluster/kubernetes_cluster.md +++ b/site/content/cluster/kubernetes_cluster.md @@ -37,7 +37,7 @@ kubectl apply -f https://raw.githubusercontent.com/m3db/m3db-operator/{{% operat ``` {{% notice tip %}} -Depending on what you use to run a cluster on your local machine, you may need to update your _/etc/hosts_ file to match the domains specified in the `etcd` `--initial-cluster` argument. For example to match the `StatefulSet` declaration in the _etcd-minikube.yaml_ above, these are `etcd-0.etcd`, `etcd-1.etcd`, and `etcd-2.etcd`. +Depending on what you use to run a cluster on your local machine, you may need to update your _/etc/hosts_ file to match the domains specified in the `etcd` `--initial-cluster` argument. For example, to match the `StatefulSet` declaration in the _etcd-minikube.yaml_ above, these are `etcd-0.etcd`, `etcd-1.etcd`, and `etcd-2.etcd`. {{% /notice %}} Verify that the cluster is running with something like the Kubernetes dashboard, or the command below: @@ -58,7 +58,7 @@ kubectl apply -f https://raw.githubusercontent.com/m3db/m3db-operator/{{% operat The following creates an M3 cluster with 3 replicas of data across 256 shards that connects to the 3 available etcd endpoints. -It creates three isolated groups for nodes, each with one node instance. In a production environment you can use a variety of different options to define how nodes are spread across groups based on factors such as resource capacity, or location. +It creates three isolated groups for nodes, each with one node instance. 
In a production environment, you can use a variety of different options to define how nodes are spread across groups based on factors such as resource capacity, or location. It creates namespaces in the cluster with the `namespaces` parameter. You can use M3-provided presets, or define your own. This example creates a namespace with the `10s:2d` preset. diff --git a/site/content/faqs/troubleshooting.md b/site/content/faqs/troubleshooting.md index 60f378d6ab..ba4987f9e4 100644 --- a/site/content/faqs/troubleshooting.md +++ b/site/content/faqs/troubleshooting.md @@ -16,7 +16,7 @@ Double check your configuration against the [bootstrapping guide](/docs/operatio If you're using the commitlog bootstrapper, and it seems to be slow, ensure that snapshotting is enabled for your namespace. Enabling snapshotting will require a node restart to take effect. -If an m3db node hasn't been able to snapshot for awhile, or is stuck in the commitlog bootstrapping phase for a long time due to accumulating a large number of commitlogs, consider using the peers bootstrapper. In situations where a large number of commitlogs need to be read, the peers bootstrapper will outperform the commitlog bootstrapper (faster and less memory usage) due to the fact that it will receive already-compressed data from its peers. Keep in mind that this will only work with a replication factor of 3 or larger and if the nodes peers are healthy and bootstrapped. Review the [bootstrapping guide](/docs/operational_guide/bootstrapping_crash_recovery) for more information. +If an m3db node hasn't been able to snapshot for a while, or is stuck in the commitlog bootstrapping phase for a long time due to accumulating a large number of commitlogs, consider using the peers bootstrapper. In situations where a large number of commitlogs need to be read, the peers bootstrapper will outperform the commitlog bootstrapper (faster and less memory usage) due to the fact that it will receive already-compressed data from its peers. Keep in mind that this will only work with a replication factor of 3 or larger and if the nodes peers are healthy and bootstrapped. Review the [bootstrapping guide](/docs/operational_guide/bootstrapping_crash_recovery) for more information. ## Nodes a crashing with memory allocation errors, but there's plenty of available memory diff --git a/site/content/how_to/any_remote_storage.md b/site/content/how_to/any_remote_storage.md index 74305d2164..913bcca759 100644 --- a/site/content/how_to/any_remote_storage.md +++ b/site/content/how_to/any_remote_storage.md @@ -22,7 +22,7 @@ We are going to setup: - 1 M3 Coordinator with in process M3 Aggregator that is aggregating and downsampling metrics. - Finally, we are going define some aggregation and downsampling rules as an example. 
-For simplicity lets put all config files in one directory and export env variable: +For simplicity, lets put all config files in one directory and export env variable: ```shell export CONFIG_DIR="" ``` @@ -45,7 +45,7 @@ docker run -p 9090:9090 --name prometheus \ --enable-feature=remote-write-receiver ``` -Next we configure and run M3 Coordinator: +Next, we configure and run M3 Coordinator: `m3_coord_simple.yml` {{< codeinclude file="docs/includes/integrations/prometheus/m3_coord_simple.yml" language="yaml" >}} diff --git a/site/content/how_to/grafana.md b/site/content/how_to/grafana.md index 2a9a205f94..95a714ddbc 100644 --- a/site/content/how_to/grafana.md +++ b/site/content/how_to/grafana.md @@ -12,7 +12,7 @@ When using the Prometheus integration with Grafana, there are two different ways Alternatively, you can configure Grafana to read metrics directly from M3Coordinator in which case you will bypass Prometheus entirely and use M3's PromQL engine instead. To set this up, follow the same instructions from the previous step, but set the url to: http://:7201. ### Querying -M3 supports the the majority of graphite query functions and can be used to query metrics that were ingested via the ingestion pathway described above. +M3 supports the majority of graphite query functions and can be used to query metrics that were ingested via the ingestion pathway described above. ### Grafana M3Coordinator implements the Graphite source interface, so you can add it as a graphite source in Grafana by following these instructions. diff --git a/site/content/how_to/monitoring_m3/tracing.md b/site/content/how_to/monitoring_m3/tracing.md index 65303127ff..81339439fd 100644 --- a/site/content/how_to/monitoring_m3/tracing.md +++ b/site/content/how_to/monitoring_m3/tracing.md @@ -8,9 +8,9 @@ draft: true M3DB is integrated with opentracing to provide insight into query performance and errors. #### Jaeger -To enable Jaeger as the tracing backend, set tracing.backend to "jaeger" (see also our sample local config: +To enable Jaeger as the tracing backend, set tracing.backend to "jaeger" (see also our sample local config): tracing: - backend: jaeger # enables jaeger with default configs + backend: jaeger # enables jaeger with default configs jaeger: # optional configuration for jaeger -- see # https://github.com/jaegertracing/jaeger-client-go/blob/master/config/config.go#L37 @@ -44,7 +44,7 @@ File an issue against M3 and we can work with you on how best to add the backend Note: all URLs assume a local jaeger setup as described in Jaeger's docs. **Finding slow queries** -To find prom queries longer than , filter for minDuration >= on operation="GET /api/v1/query_range". +To find prom queries longer than, filter for minDuration >= on operation="GET /api/v1/query_range". Sample query: http://localhost:16686/search?end=1548876672544000&limit=20&lookback=1h&maxDuration&minDuration=1ms&operation=GET%20%2Fapi%2Fv1%2Fquery_range&service=m3query&start=1548873072544000 **Finding queries with errors** diff --git a/site/content/includes/headers_optional_read_write.md b/site/content/includes/headers_optional_read_write.md index 82f39b09fb..f6664040ef 100644 --- a/site/content/includes/headers_optional_read_write.md +++ b/site/content/includes/headers_optional_read_write.md @@ -1,5 +1,5 @@ * `M3-Metrics-Type`: - If this header is set, it determines what type of metric to store this metric value as. Otherwise by default, metrics will be stored in all namespaces that are configured. 
You can also disable this default behavior by setting `downsample` options to `all: false` for a namespace in the coordinator config, for more see [disabling automatic aggregation](/docs/how_to/m3query.md#disabling-automatic-aggregation). + If this header is set, it determines what type of metric to store this metric value as. Otherwise, by default, metrics will be stored in all namespaces that are configured. You can also disable this default behavior by setting `downsample` options to `all: false` for a namespace in the coordinator config, for more see [disabling automatic aggregation](/docs/how_to/m3query.md#disabling-automatic-aggregation). Must be one of: `unaggregated`: Write metrics directly to configured unaggregated namespace. diff --git a/site/content/includes/headers_optional_read_write_all.md b/site/content/includes/headers_optional_read_write_all.md index 713eaaec81..3cfeb5d96e 100644 --- a/site/content/includes/headers_optional_read_write_all.md +++ b/site/content/includes/headers_optional_read_write_all.md @@ -1,5 +1,5 @@ * `M3-Metrics-Type`: - If this header is set, it determines what type of metric to store this metric value as. Otherwise by default, metrics will be stored in all namespaces that are configured. You can also disable this default behavior by setting `downsample` options to `all: false` for a namespace in the coordinator config, for more see [disabling automatic aggregation](/docs/how_to/m3query#disabling-automatic-aggregation). + If this header is set, it determines what type of metric to store this metric value as. Otherwise, by default, metrics will be stored in all namespaces that are configured. You can also disable this default behavior by setting `downsample` options to `all: false` for a namespace in the coordinator config, for more see [disabling automatic aggregation](/docs/how_to/m3query#disabling-automatic-aggregation). Must be one of: `unaggregated`: Write metrics directly to configured unaggregated namespace. `aggregated`: Write metrics directly to a configured aggregated namespace (bypassing any aggregation), this requires the `M3-Storage-Policy` header to be set to resolve which namespace to write metrics to. diff --git a/site/content/integrations/graphite.md b/site/content/integrations/graphite.md index 47aa6ac5a0..c44186fe1f 100644 --- a/site/content/integrations/graphite.md +++ b/site/content/integrations/graphite.md @@ -142,7 +142,7 @@ This will make the carbon ingestion emit logs for every step that is taking. *No ## Querying -M3 supports the the majority of [graphite query functions](https://graphite.readthedocs.io/en/latest/functions.html) and can be used to query metrics that were ingested via the ingestion pathway described above. +M3 supports the majority of [graphite query functions](https://graphite.readthedocs.io/en/latest/functions.html) and can be used to query metrics that were ingested via the ingestion pathway described above. 
### Grafana

From 560ca37ef7067040c07a85db8967d85e47741818 Mon Sep 17 00:00:00 2001
From: Anton Patsev
Date: Sun, 25 Feb 2024 15:27:37 +0600
Subject: [PATCH 2/2] Add missing commas, correction of spelling errors

---
 site/content/faqs/troubleshooting.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/site/content/faqs/troubleshooting.md b/site/content/faqs/troubleshooting.md
index ba4987f9e4..60f378d6ab 100644
--- a/site/content/faqs/troubleshooting.md
+++ b/site/content/faqs/troubleshooting.md
@@ -16,7 +16,7 @@ Double check your configuration against the [bootstrapping guide](/docs/operatio
 
 If you're using the commitlog bootstrapper, and it seems to be slow, ensure that snapshotting is enabled for your namespace. Enabling snapshotting will require a node restart to take effect.
 
-If an m3db node hasn't been able to snapshot for a while, or is stuck in the commitlog bootstrapping phase for a long time due to accumulating a large number of commitlogs, consider using the peers bootstrapper. In situations where a large number of commitlogs need to be read, the peers bootstrapper will outperform the commitlog bootstrapper (faster and less memory usage) due to the fact that it will receive already-compressed data from its peers. Keep in mind that this will only work with a replication factor of 3 or larger and if the nodes peers are healthy and bootstrapped. Review the [bootstrapping guide](/docs/operational_guide/bootstrapping_crash_recovery) for more information.
+If an m3db node hasn't been able to snapshot for a while, or is stuck in the commitlog bootstrapping phase for a long time due to accumulating a large number of commitlogs, consider using the peers bootstrapper. In situations where a large number of commitlogs need to be read, the peers bootstrapper will outperform the commitlog bootstrapper (faster and less memory usage) due to the fact that it will receive already-compressed data from its peers. Keep in mind that this will only work with a replication factor of 3 or larger and if the node's peers are healthy and bootstrapped. Review the [bootstrapping guide](/docs/operational_guide/bootstrapping_crash_recovery) for more information.
 
 ## Nodes a crashing with memory allocation errors, but there's plenty of available memory