Fix typos identified by IntelliJ (#812)

Co-authored-by: Serdar Ozmen <[email protected]>
JackPGreen and Serdaro authored Jul 31, 2023
1 parent 2d18e52 commit 5ac208b
Showing 113 changed files with 211 additions and 212 deletions.
2 changes: 1 addition & 1 deletion README.adoc
@@ -147,7 +147,7 @@ For example, if you are releasing version 5.1, create a new release branch named
+
IMPORTANT: If you are creating a branch for a beta release, do not remove this field.

. When you are ready to release, create a maintentance branch from the release branch.
. When you are ready to release, create a maintenance branch from the release branch.
+
NOTE: As soon as you push the maintenance branch to the repository, GitHub will trigger a new build of the site, which will include your new content.

8 changes: 4 additions & 4 deletions docs/modules/ROOT/pages/capacity-planning.adoc
@@ -16,7 +16,7 @@ Hazelcast's architecture and concepts. Here, we introduce some basic guidelines
that help to properly size a cluster.

We recommend always benchmarking your setup before deploying it to
production. We also recommend that bechmarking systems resemble the
production. We also recommend that benchmarking systems resemble the
production system as much as possible to avoid unexpected results.
We provide a <<benchmarking-and-sizing-example, bechmarking example>>
that you can use as a starting point.
@@ -134,7 +134,7 @@ Memory consumption is affected by:
files such as models for ML inference pipelines can consume significant resources.
* **State of the running jobs:** This varies, as it's affected by the shape of
your pipeline and by the data being processed. Most of the memory is
consumed by operations that aggregate and buffer data. Typically the
consumed by operations that aggregate and buffer data. Typically, the
state also scales with the number of distinct keys seen within the
time window. Learn how the operations in the pipeline store its state.
Operators coming with Jet provide this information in the javadoc.
@@ -166,7 +166,7 @@ for information about lite members.
Hazelcast's default partition count is 271. This is a good choice for clusters of
up to 50 members and ~25–30 GB of data. Up to this threshold,
partitions are small enough that any rebalancing of the partition map
when members join or leave the cluster doesnt disturb the smooth operation of the cluster.
when members join or leave the cluster doesn't disturb the smooth operation of the cluster.
With larger clusters and/or bigger data sets, a larger partition count helps to
maintain an efficient rebalancing of data across members.

@@ -181,7 +181,7 @@ is under 100MB. Remember to factor in headroom for projected data growth.
To change the partition count, use the system property `hazelcast.partition.count`.

NOTE: If you change the partition count from the default of 271,
be sure to use a prime number of partitions. This helps minimizing
be sure to use a prime number of partitions. This helps minimize
the collision of keys across partitions, ensuring more consistent lookup
times.

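To put the partition-count guidance above into practice, a minimal sketch of overriding `hazelcast.partition.count` programmatically might look as follows; the value 1999 is simply an illustrative prime, not a recommendation, and every member of a cluster must use the same count:

[source,java]
----
import com.hazelcast.config.Config;
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;

public class PartitionCountExample {
    public static void main(String[] args) {
        Config config = new Config();
        // Override the default of 271; a prime count helps minimize key
        // collisions across partitions. Every member must use the same value.
        config.setProperty("hazelcast.partition.count", "1999");
        HazelcastInstance hz = Hazelcast.newHazelcastInstance(config);
        System.out.println("Partition count: "
                + hz.getPartitionService().getPartitions().size());
        hz.shutdown();
    }
}
----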
4 changes: 2 additions & 2 deletions docs/modules/ROOT/pages/faq.adoc
@@ -39,7 +39,7 @@ maintain linearizability and prefer consistency over availability during netwo

The partition count of 271, being a prime number, is a good choice because
it is distributed to the members almost evenly.
For a small to medium sized cluster, the count of 271 gives an almost even partition distribution and optimal-sized partitions.
For a small to medium-sized cluster, the count of 271 gives an almost even partition distribution and optimal-sized partitions.
As your cluster becomes bigger, you should make this count bigger to have evenly distributed partitions.


@@ -166,7 +166,7 @@ public void testTwoMemberMapSizes() {
----

In the test above, everything happens in the same thread.
When developing a multi-threaded test, you need to carefully handle coordination of the thread executions.
When developing a multithreaded test, you need to carefully handle coordination of the thread executions.
It is highly recommended that you use `CountDownLatch` for thread coordination (you can certainly use other ways).
Here is an example where we need to listen for messages and make sure that we got these messages.

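The message-listening example referenced above is not shown in this diff; a rough, hypothetical sketch of the `CountDownLatch` coordination it describes could look like this (the topic name, message count, and JUnit 4 usage are assumptions for illustration):

[source,java]
----
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.topic.ITopic;
import org.junit.Test;

import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;

import static org.junit.Assert.assertTrue;

public class TopicMessageTest {

    @Test
    public void testMessagesAreReceived() throws Exception {
        HazelcastInstance hz = Hazelcast.newHazelcastInstance();
        try {
            ITopic<String> topic = hz.getTopic("news"); // hypothetical topic name
            CountDownLatch latch = new CountDownLatch(3);

            // The listener thread counts down once per received message.
            topic.addMessageListener(message -> latch.countDown());

            topic.publish("msg-1");
            topic.publish("msg-2");
            topic.publish("msg-3");

            // The test thread blocks here until all three messages arrive,
            // or the assertion fails after the timeout.
            assertTrue(latch.await(30, TimeUnit.SECONDS));
        } finally {
            hz.shutdown();
        }
    }
}
----

The latch lets the test thread wait until the listener thread has seen every expected message, instead of sleeping for an arbitrary amount of time.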
4 changes: 2 additions & 2 deletions docs/modules/ROOT/pages/glossary.adoc
@@ -1,7 +1,7 @@
= Glossary

[glossary]
2-phase commit:: An atomic commitment protocol for distributed systems. It consists of two phases: commit-request and commit. In commit-request phase, transaction manager coordinates all the transaction resources to commit or abort. In commit-phase, transaction manager decides to finalize operation by committing or aborting according to the votes of the each transaction resource.
2-phase commit:: An atomic commitment protocol for distributed systems. It consists of two phases: commit-request and commit. In commit-request phase, transaction manager coordinates all the transaction resources to commit or abort. In commit-phase, transaction manager decides to finalize operation by committing or aborting according to the votes of each transaction resource.
ACID:: A set of properties (Atomicity, Consistency, Isolation, Durability) guaranteeing that transactions are processed reliably. Atomicity requires that each transaction be all or nothing, i.e., if one part of the transaction fails, the entire transaction fails). Consistency ensures that only valid data following all rules and constraints is written. Isolation ensures that transactions are securely and independently processed at the same time without interference (and without transaction ordering). Durability means that once a transaction has been committed, it will remain so, no matter if there is a power loss, crash, or error.
Cache:: A high-speed access area that can be either a reserved section of main memory or a storage device.
Change data capture (CDC):: A <<data-pipeline, data pipeline>> pattern for observing changes made to a database and extracting them in a form usable by other systems, for the purposes of replication, analysis and more.
@@ -24,7 +24,7 @@ Lite member:: A member that does not store data and has no partitions. These mem
Member:: A Hazelcast instance. Depending on your Hazelcast topology, it can refer to a server or a Java virtual machine (JVM). Members belong to a Hazelcast cluster. Members may also be referred as member nodes, cluster members, Hazelcast members, or data members.
Multicast:: A type of communication where data is addressed to a group of destination members simultaneously.
Near cache:: A caching model where an object retrieved from a remote member is put into the local cache and the future requests made to this object will be handled by this local member.
NoSQL:: "Not Only SQL". A database model that provides a mechanism for storage and retrieval of data that is tailored in means other than the tabular relations used in relational databases. It is a type of database which does not adhering to the traditional relational database management system (RDMS) structure. It is not built on tables and does not employ SQL to manipulate data. It also may not provide full ACID guarantees, but still has a distributed and fault tolerant architecture.
NoSQL:: "Not Only SQL". A database model that provides a mechanism for storage and retrieval of data that is tailored in means other than the tabular relations used in relational databases. It is a type of database which does not adhering to the traditional relational database management system (RDMS) structure. It is not built on tables and does not employ SQL to manipulate data. It also may not provide full ACID guarantees, but still has a distributed and fault-tolerant architecture.
OSGI:: Formerly known as the Open Services Gateway initiative, it describes a modular system and a service platform for the Java programming language that implements a complete and dynamic component model.
Partition table:: Table containing all members in the cluster, mappings of partitions to members and further metadata.
Race condition:: This condition occurs when two or more threads can access shared data and they try to change it at the same time.
8 changes: 4 additions & 4 deletions docs/modules/ROOT/pages/list-of-metrics.adoc
@@ -192,7 +192,7 @@ with the master member.

|`cp.session.endpoint`
|
|Address of the endpoint which the CP session session belongs to
|Address of the endpoint which the CP session belongs to

|`cp.session.endpointType`
|
@@ -753,7 +753,7 @@ with the master member.

|`nearcache.persistenceCount`
|count
|Number of Near Cache key persistences (when the pre-load feature is enabled)
|Number of Near Cache key persistences (when the preload feature is enabled)

|`operation.adhoc.executedOperationsCount`
|count
@@ -1975,8 +1975,8 @@ vertex.
Each Hazelcast member will have an instance of these metrics for each
ordinal of each vertex of each job execution.

Note: These metrics are only present for distributed edges (ie.
edges producing network traffic).
Note: These metrics are only present for distributed edges, i.e.,
edges producing network traffic.

|distributedBytesOut
|Total number of bytes sent to remote members.
2 changes: 1 addition & 1 deletion docs/modules/ROOT/pages/production-checklist.adoc
@@ -32,7 +32,7 @@ However, the following modules are not supported for the Solaris operating syste
=== VMWare ESX

Hazelcast is certified on VMWare VSphere 5.5/ESXi 6.0.
Generally speaking, Hazelcast can use all of the resources on a full machine.
Generally speaking, Hazelcast can use all the resources on a full machine.
Splitting a single physical machine into multiple virtual machines and
thereby dividing resources are not required.

2 changes: 1 addition & 1 deletion docs/modules/ROOT/pages/system-properties.adoc
@@ -1162,7 +1162,7 @@ master starts forming the cluster.
|50000
|int
|Defines the pending invocation threshold for the WAN replication implementation. Exceeding this threshold on a WAN consumer member makes the member to delay the WAN acknowledgment,
thus slowing down the WAN publishers on the source side that send WAN events to the given WAN consumer. Setting this value to negative disables the acknowledgement delaying feature.
thus slowing down the WAN publishers on the source side that send WAN events to the given WAN consumer. Setting this value to negative disables the acknowledgment delaying feature.

|`tcp.channels.per.connection`
| 1
2 changes: 1 addition & 1 deletion docs/modules/architecture/pages/architecture.adoc
@@ -5,7 +5,7 @@
{description}

In Hazelcast, data is load-balanced in-memory across a cluster.
This cluster is a network of members each of which runs Hazelcast. A cluster of Hazelcast members share both the data storage and computational
This cluster is a network of members, each of which runs Hazelcast. A cluster of Hazelcast members share both the data storage and computational
load which can dynamically scale up and down. When you add new members to the cluster, both the data and computations are automatically rebalanced across the cluster.

image:ROOT:HighLevelArch.png[Hazelcast High-Level Architecture]
2 changes: 1 addition & 1 deletion docs/modules/architecture/pages/data-partitioning.adoc
@@ -135,7 +135,7 @@ a stale partition table information about a backup replica member,
network interruption, or a member crash. That's why sync backup acks require
a timeout to give up. Regardless of being a sync or async backup, if a backup update is missed,
the periodically running anti-entropy mechanism detects the inconsistency and
synchronizes backup replicas with the primary. Also the graceful shutdown procedure ensures
synchronizes backup replicas with the primary. Also, the graceful shutdown procedure ensures
that all backup replicas for partitions whose primary replicas are assigned to
the shutting down member will be consistent.

4 changes: 2 additions & 2 deletions docs/modules/architecture/pages/distributed-computing.adoc
@@ -219,7 +219,7 @@ digraph DAG {
=== Tasks Concurrency is Cooperative

Hazelcast avoids starting a heavyweight system thread for each
concurrent task of the DAG. Instead it uses a xref:execution-engine.adoc[cooperative multithreading model]. This has high-level implications as well: all the
concurrent task of the DAG. Instead, it uses a xref:execution-engine.adoc[cooperative multithreading model]. This has high-level implications as well: all the
lambdas you write in the Jet API must cooperate by not calling
blocking methods that may take unpredictably long to complete. If that
happens, all the tasklets scheduled on the same thread will be blocked
@@ -238,7 +238,7 @@ asynchronous calls and use `mapUsingServiceAsync`.
When you split the stream by, for example, user ID and aggregate every
user's events independently, you should send all the events with the
same user ID to the same task, the one holding that user's state.
Otherwise all the tasks will end up with storage for all the IDs and no
Otherwise, all the tasks will end up with storage for all the IDs and no
task will have the full picture. The technique to achieve this
separation is *data partitioning*: Hazelcast uses a function that maps any
user ID to an integer from a predefined range and then assigns the
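A hedged sketch of the key-based separation described above — grouping events by user ID so that each key's state lives in exactly one task — using the Jet Pipeline API; the map names and entry types here are invented for illustration:

[source,java]
----
import com.hazelcast.jet.aggregate.AggregateOperations;
import com.hazelcast.jet.pipeline.Pipeline;
import com.hazelcast.jet.pipeline.Sinks;
import com.hazelcast.jet.pipeline.Sources;

import java.util.Map;

public class UserEventCounts {
    public static Pipeline build() {
        Pipeline p = Pipeline.create();
        p.readFrom(Sources.<Long, String>map("userEvents")) // hypothetical IMap: userId -> event
         // groupingKey partitions the stream: every entry with the same
         // user ID is routed to the same task, which holds that key's state.
         .groupingKey(Map.Entry::getKey)
         .aggregate(AggregateOperations.counting())
         .writeTo(Sinks.map("eventCountsPerUser"));
        return p;
    }
}
----

This mirrors the behaviour the text describes: a function of the key decides which task receives each item, so no two tasks end up holding state for the same user ID.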
2 changes: 1 addition & 1 deletion docs/modules/cache/pages/overview.adoc
@@ -132,7 +132,7 @@ To use Hazelcast for web session replication, see the following resources:

== Caching with JDBC Data Stores

To configure a cache with a connection to to JDBC data store, see xref:mapstore:configuring-a-generic-mapstore.adoc[].
To configure a cache with a connection to JDBC data store, see xref:mapstore:configuring-a-generic-mapstore.adoc[].

== Building a Custom Database Cache

4 changes: 2 additions & 2 deletions docs/modules/clients/pages/java.adoc
@@ -28,7 +28,7 @@ You can find Hazelcast Java client's code samples https://github.com/hazelcast/h

== Client API

The client API is your gateway to access your Hazelcast cluster, incuding distributed objects and data pipelines (jobs).
The client API is your gateway to access your Hazelcast cluster, including distributed objects and data pipelines (jobs).

The first step is the configuration. You can configure the Java client xref:configuration:understanding-configuration.adoc[declaratively or
programmatically]. We use the programmatic approach for this section.
@@ -2050,7 +2050,7 @@ chooses to rely on ICMP Echo requests. This is preferred.
If there are not enough permissions, it can be configured to fallback on attempting
a TCP Echo on port 7. In the latter case, both a successful connection or an explicit rejection
is treated as "Host is Reachable". Or, it can be forced to use only RAW sockets.
This is not preferred as each call creates a heavy weight socket and moreover the Echo service is typically disabled.
This is not preferred as each call creates a heavyweight socket and moreover the Echo service is typically disabled.

For the Ping Failure Detector to rely **only** on the ICMP Echo requests,
the following criteria need to be met:
20 changes: 10 additions & 10 deletions docs/modules/cluster-performance/pages/best-practices.adoc
@@ -36,21 +36,21 @@ performance/behavior in your particular setup, it is best to run a single member
=== Using Operation Threads Efficiently

By default, Hazelcast uses the machine's core count to determine the number of operation threads. Creating more
operation threads than this core count is highly unlikely leads to an improved performance since there will be more context
operation threads than this core count is highly unlikely to lead to an improved performance since there will be more context
switching, more thread notification, etc.

Especially if you have a system that does simple operations like put and get,
it is better to use a lower thread count than the number of cores.
The reason behind the increased performance
by reducing the core count is that the operations executed on the operation threads normally execute very fast and there can
be a very significant amount of overhead caused by thread parking and unparking. If there are less threads, a thread needs
be a very significant amount of overhead caused by thread parking and unparking. If there are fewer threads, a thread needs
to do more work, will block less and therefore needs to be notified less.

=== Avoiding Random Changes

Tweaking can be very rewarding because significant performance improvements are possible. By default, Hazelcast tries
to behave at its best for all situations, but this doesn't always lead to the best performance. So if you know what
you are doing and what to look for, it can be very rewarding to tweak. However it is also important that tweaking should
you are doing and what to look for, it can be very rewarding to tweak. However, it is also important that tweaking should
be done with proper testing to see if there is actually an improvement. Tweaking without proper benchmarking
is likely going to lead to confusion and could cause all kinds of problems. In case of doubt, we recommend not to tweak.

@@ -65,7 +65,7 @@ possible. Otherwise, you are at risk of spotting the issues too late or focusing
== AWS Deployments

When you deploy Hazelcast clusters on AWS EC2 instances, you can consider to place the
cluster members on the same https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/placement-groups.html#placement-groups-cluster[Cluster Placement Group]. This helps reducing the latency among members drastically.
cluster members on the same https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/placement-groups.html#placement-groups-cluster[Cluster Placement Group]. This helps to reduce the latency among members drastically.
Additionally, you can also consider using private IPs
instead of public ones to increase the throughput when the cluster
members are placed in the same VPC.
Expand All @@ -81,7 +81,7 @@ EC2 instances, please check https://www.ec2instances.info[this web page^].
[[pipelining]]
== Pipelining

With the pipelining, you can send multiple
With pipelining, you can send multiple
requests in parallel using a single thread and therefore can increase throughput.
As an example, suppose that the round trip time for a request/response
is 1 millisecond. If synchronous requests are used, e.g., `IMap.get()`, then the maximum throughput out of these requests from
@@ -463,7 +463,7 @@ But before the call completes the Near Cache entry is updated. Any threads readi
If the mutative operation was a remove, the key will no longer exist in the cache, both the Near Cache and the original copy in the member.
The member initiates an invalidate event to any other Near Caches, however the caller Near Cache is
not invalidated as it already has the new value. This setting also provides read-your-writes consistency.
* `preloader`: Specifies if the Near Cache should store and pre-load its keys for a faster re-population after
* `preloader`: Specifies if the Near Cache should store and preload its keys for a faster re-population after
a Hazelcast client restart. Is just available on IMap and JCache clients. It has the following attributes:
** `enabled`: Specifies whether the preloader for this Near Cache is enabled or not, `true` or `false`.
** `directory`: Specifies the parent directory for the preloader of this Near Cache. The filenames for
@@ -806,7 +806,7 @@ the batch size should be configured with the `hazelcast.map.invalidation.batch.s

==== Eventual Consistency

Near Caches are invalidated by invalidation events. Invalidation events can be lost due to the fire-and-forget fashion of eventing system.
Near Caches are invalidated by invalidation events. Invalidation events can be lost due to the fire-and-forget fashion of the eventing system.
If an event is lost, reads from Near Cache can indefinitely be stale.

To solve this problem, Hazelcast provides
@@ -951,7 +951,7 @@ public static int removeOrder( long customerId, long orderId ) throws Exception
}
----

There are couple of things you should consider.
There are a couple of things you should consider.

* There are four distributed operations there: lock, remove, keySet, unlock. Can you reduce
the number of distributed operations?
@@ -1351,9 +1351,9 @@ If independent data structures share the same partition, a slow operation on one
1, 11, 21, etc. If an operation for partition 1 takes a lot of time, it blocks the execution of an operation for partition
11 because both of them are mapped to the same operation thread.

You need to be careful with long running operations because you could starve operations of a thread.
You need to be careful with long-running operations because you could starve operations of a thread.
As a general rule, the partition thread should be released as soon as possible because operations are not designed
as long running operations. That is why, for example, it is very dangerous to execute a long running operation
as long-running operations. That is why, for example, it is very dangerous to execute a long-running operation
using `AtomicReference.alter()` or an `IMap.executeOnKey()`, because these operations block other operations to be executed.

Currently, there is no support for work stealing. Different partitions that map to the same thread may need to wait
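The "1, 11, 21" example in the last hunk implies a simple modulo mapping from partition ID to operation thread; a toy sketch of that idea, assuming a thread count of 10 purely for illustration:

[source,java]
----
public class PartitionThreadMapping {

    /** Modulo-style mapping implied above: with 10 partition operation
     *  threads, partitions 1, 11, 21, ... all land on the same thread. */
    static int operationThreadFor(int partitionId, int partitionThreadCount) {
        return partitionId % partitionThreadCount;
    }

    public static void main(String[] args) {
        int threads = 10; // assumed thread count, for illustration only
        for (int partitionId : new int[]{1, 11, 21, 2, 12}) {
            System.out.printf("partition %d -> operation thread %d%n",
                    partitionId, operationThreadFor(partitionId, threads));
        }
    }
}
----

On such a mapping, a long-running `IMap.executeOnKey()` call for a key in partition 1 would also stall operations queued for partitions 11, 21, and so on — which is why the hunk above warns against long-running operations on partition threads.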