hazelcast-spring-4.0.xsd
Adds the Split Brain Protection for this data-structure which you configure using the split-brain-protection element.
You should set the split-brain-protection-ref's value as the split brain protection's name.
The number of executor threads per Member for the Executor.
Executor's task queue capacity. 0 means Integer.MAX_VALUE.
Enable/disable statistics
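As an illustration, a minimal executor-service declaration in the hazelcast-spring namespace might look like the sketch below; the hz prefix and the pool-size/queue-capacity attribute names are assumptions based on the descriptions above and should be verified against this schema:

  <hz:config>
    <hz:executor-service name="default" pool-size="8" queue-capacity="0"
                         statistics-enabled="true"/>
  </hz:config>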
Adds the Split Brain Protection for this data-structure which you configure using the split-brain-protection element.
You should set the split-brain-protection-ref's value as the split brain protection's name.
The number of executor threads per Member for the Executor.
The durability of the executor
Executor's task capacity (per partition)
Adds the Split Brain Protection for this data-structure which you configure using the split-brain-protection element.
You should set the split-brain-protection-ref's value as the split brain protection's name.
The number of executor threads per member for the executor.
The durability of the scheduled executor.
The maximum number of tasks that a scheduler can have at any given point
in time as per capacity-policy.
Once the capacity is reached, new tasks will be rejected.
Capacity is ignored upon migrations to prevent any undesirable data-loss.
The active policy for the capacity setting
capacity-policy has these valid values:
PER_NODE: Maximum number of tasks in each Hazelcast instance.
This is the default policy.
PER_PARTITION: Maximum number of tasks within each partition. Storage size
depends on the partition count in a Hazelcast instance.
This attribute should not be used often.
Avoid using this attribute with a small cluster: if the cluster is small, each member will
be hosting more partitions, and therefore more tasks, than it would in a larger
cluster.
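A hedged sketch of a scheduled executor declaration using the attributes described above (element and attribute names are assumed from this schema's naming conventions):

  <hz:scheduled-executor-service name="scheduledExec" pool-size="16"
                                 durability="1" capacity="100"
                                 capacity-policy="PER_NODE"/>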
Number of synchronous backups. For example, if 1 is set as the backup-count,
then the cardinality estimation will be copied to one other JVM for
fail-safety. Valid numbers are 0 (no backup), 1, 2 ... 6.
Number of asynchronous backups. For example, if 1 is set as the
async-backup-count,
then cardinality estimation will be copied to one other JVM (asynchronously) for
fail-safety. Valid numbers are 0 (no backup), 1, 2 ... 6.
Adds the Split Brain Protection for this data-structure which you configure using the split-brain-protection element.
You should set the split-brain-protection-ref's value as the split brain protection's name.
Name of the cardinality estimator.
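For example, a cardinality estimator with one synchronous backup could be declared roughly as follows; attribute names are assumed from the descriptions above and the split-brain-protection-ref value is a hypothetical protection name:

  <hz:cardinality-estimator name="uniqueVisitors" backup-count="1" async-backup-count="0"
                            split-brain-protection-ref="threeMemberRule"/>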
Adds the Split Brain Protection for this data-structure which you configure using the split-brain-protection element.
You should set the split-brain-protection-ref's value as the split brain protection's name.
Value of maximum size of items in the Queue.
Count of synchronous backups. Remember that Queue is a non-partitioned
data structure, i.e. all entries of a Queue reside in one partition. When
this parameter is '1', it means there will be a backup of that Queue on
another node in the cluster. When it is '2', 2 nodes will have the backup.
Count of asynchronous backups.
Enable/disable statistics.
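A minimal queue declaration combining the attributes above might look like this sketch (names assumed from the descriptions above):

  <hz:queue name="tasks" max-size="0" backup-count="1" async-backup-count="0"
            statistics-enabled="true"/>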
Includes the ring buffer store factory class name. The store format is the same as
the in-memory-format for the ringbuffer.
Name of the class or bean implementing MapLoader and/or MapStore.
Adds the Split Brain Protection for this data-structure which you configure using the split-brain-protection element.
You should set the split-brain-protection-ref's value as the split brain protection's name.
Number of items in the ringbuffer. If no time-to-live-seconds is set, the size will always
be equal to the capacity after the head has completed the first loop around the ring. This is
because no items get expired. The default value is 10000.
Number of synchronous backups. For example, if 1 is set as the backup-count,
then the items in the ringbuffer are copied to one other JVM for fail-safety.
`backup-count` + `async-backup-count` cannot be bigger than maximum
backup count which is `6`. Valid numbers are 0 (no backup), 1, 2 ... 6.
Number of asynchronous backups. For example, if 1 is set as the async-backup-count,
then the items in the ringbuffer are copied (asynchronously) to one other JVM for fail-safety.
`backup-count` + `async-backup-count` cannot be bigger than maximum
backup count which is `6`. Valid numbers are 0 (no backup), 1, 2 ... 6.
Sets the time to live in seconds, which is the maximum number of seconds
for each item to stay in the ringbuffer before being removed.
Entries that are older than time-to-live-seconds are removed from the
ringbuffer on the next ringbuffer operation (read or write).
Time to live can be disabled by setting time-to-live-seconds to 0.
This means that items won't get removed because they expire; they may only
be overwritten.
When time-to-live-seconds is disabled and the tail has done a full
loop in the ring, the ringbuffer size will always be equal to the capacity.
The time-to-live-seconds can be any integer between 0 and Integer#MAX_VALUE.
0 means infinite. The default is 0.
Sets the in-memory format.
Setting the in-memory format controls the format of the stored item in the
ringbuffer:
- OBJECT: the item is stored in deserialized format (a regular object)
- BINARY (default): the item is stored in serialized format (a binary blob)
The object in-memory format is useful when:
- the object stored in object format has a smaller footprint than in
binary format
- there are readers using a filter, since for every filter
invocation the object needs to be available in object format.
Enable/disable statistics.
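Putting the ringbuffer attributes above together, a hedged example declaration:

  <hz:ringbuffer name="events" capacity="10000" backup-count="1" async-backup-count="0"
                 time-to-live-seconds="0" in-memory-format="BINARY"/>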
The maximum number of items to read in a batch.
Policy to handle an overloaded topic. Available values are `DISCARD_OLDEST`,
`DISCARD_NEWEST`,
`BLOCK` and `ERROR`. The default value is `BLOCK`.
When maximum size is reached, map is evicted based on the eviction policy.
IMap has no eviction by default.
size:
maximum size can be any integer between 0 and Integer.MAX_VALUE.
For max-size to work, set the eviction-policy property to a value other than NONE.
Default value is 0.
max-size-policy:
max-size-policy has these valid values:
PER_NODE: Maximum number of map entries in each Hazelcast instance.
This is the default policy.
PER_PARTITION: Maximum number of map entries within each partition. Storage size
depends on the partition count in a Hazelcast instance.
This attribute should not be used often.
Avoid using this attribute with a small cluster: if the cluster is small, each member will
be hosting more partitions, and therefore more map entries, than it would in a larger
cluster. Thus, for a small cluster, eviction of the entries will decrease
performance (the number of entries is large).
USED_HEAP_SIZE: Maximum used heap size in megabytes per map for each Hazelcast instance.
USED_HEAP_PERCENTAGE: Maximum used heap size percentage per map for each Hazelcast
instance.
If, for example, JVM is configured to have 1000 MB and this value is 10, then the map
entries will be evicted when used heap size exceeds 100 MB.
FREE_HEAP_SIZE: Minimum free heap size in megabytes for each JVM.
FREE_HEAP_PERCENTAGE: Minimum free heap size percentage for each JVM.
For example, if JVM is configured to have 1000 MB and this value is 10,
then the map entries will be evicted when free heap size is below 100 MB.
USED_NATIVE_MEMORY_SIZE: Maximum used native memory size in megabytes per map
for each Hazelcast instance.
USED_NATIVE_MEMORY_PERCENTAGE: Maximum used native memory size percentage per map
for each Hazelcast instance.
FREE_NATIVE_MEMORY_SIZE: Minimum free native memory size in megabytes
for each Hazelcast instance.
FREE_NATIVE_MEMORY_PERCENTAGE: Minimum free native memory size percentage
for each Hazelcast instance.
eviction-policy:
Eviction policy has these valid values:
LRU (Least Recently Used),
LFU (Least Frequently Used),
RANDOM,
NONE.
Default value is "NONE".
Name of the class or bean implementing MapLoader and/or MapStore.
Number of seconds to delay the call to MapStore.store(key,
value).
If the value is zero then it is write-through so
MapStore.store(key, value) will be called as soon as the
entry is updated. Otherwise it is write-behind so updates will
be stored after write-delay-seconds value by calling
Hazelcast.storeAll(map). Default value is 0.
Used to create batch chunks when writing to the map store. In the default
mode, Hazelcast tries to persist all entries in one go. To create
batch chunks, the minimum meaningful value for write-batch-size is
2.
For values smaller than 2, it works as in the default mode.
Setting write-coalescing is meaningful if you are using
write-behind map-store. Otherwise it has no effect.
When write-coalescing is true, only the latest
store operation on a key in the write-delay-seconds
time-window will be reflected to the map-store.
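A hedged map-store sketch using the attributes described above; com.example.OrderStore is a hypothetical MapStore implementation:

  <hz:map name="orders">
    <hz:map-store enabled="true" class-name="com.example.OrderStore"
                  write-delay-seconds="60" write-batch-size="500" write-coalescing="true"/>
  </hz:map>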
Hazelcast can replicate some or all of the cluster data. For example,
you can have 5 different maps but you want only one of these maps
replicating across clusters. To achieve this you mark the maps
to be replicated by adding this element.
This configuration lets you define map indexes.
This configuration lets you define extractors for custom attributes.
This configuration lets you add listeners (listener classes) for the
map entries.
Adds the Split Brain Protection for this data-structure which you configure using the split-brain-protection element.
You should set the split-brain-protection-ref's value as the split brain protection's name.
List of partition lost listeners
Data type used to store entries.
Possible values:
BINARY (default): keys and values are stored as binary data.
OBJECT: values are stored in their object forms.
NATIVE: keys and values are stored in native memory. Only available on Hazelcast
Enterprise.
Metadata policy for this map. Hazelcast may process objects of supported types ahead of time to
create additional metadata about them. This metadata then is used to make querying and indexing faster.
Turning on preprocessing may decrease put throughput.
Valid values are:
CREATE_ON_UPDATE (default): Objects of supported types are pre-processed when they are created and updated.
OFF: No metadata is created.
You can retrieve some statistics like owned entry count, backup entry count,
last update time, locked entry count by setting this parameter's value
as "true". The method for retrieving the statistics is `getLocalMapStats()`.
Control caching of de-serialized values. Caching makes query evaluation faster, but it
costs memory.
Possible Values:
NEVER: Never cache de-serialized object
INDEX-ONLY: Cache values only when they are inserted into an index.
ALWAYS: Always cache de-serialized values.
Number of sync backups. If 1 is set as the backup-count for example, then
all
entries of the map will be copied to another JVM for fail-safety. Valid
numbers are 0 (no backup), 1, 2 ... 6.
Number of async backups. If 1 is set as the async-backup-count for example,
then all
entries of the map will be copied to another JVM (asynchronously) for fail-safety. Valid
numbers are 0 (no backup), 1, 2 ... 6.
Maximum number of seconds for each entry to stay in the map. Entries that
are older than `time-to-live-seconds` and not updated for
`time-to-live-seconds` will get automatically evicted from the map. Any
integer between 0 and Integer.MAX_VALUE. 0 means infinite. Default is 0.
Maximum number of seconds for each entry to stay idle in the map. Entries
that are idle(not touched) for more than max-idle-seconds will get
automatically evicted from the map. Entry is touched if get, put or
containsKey is called. Any integer between 0 and Integer.MAX_VALUE. 0 means
infinite. Default is 0.
This boolean parameter enables reading local backup entries when set as
`true`.
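Combining the map attributes above, a hedged example with one synchronous backup, a 30-minute idle timeout and local backup reads enabled:

  <hz:map name="sessions" backup-count="1" async-backup-count="0"
          time-to-live-seconds="0" max-idle-seconds="1800"
          read-backup-data="true" statistics-enabled="true"/>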
When maximum size is reached, cache is evicted based on the eviction policy.
size:
maximum size can be any integer between 0 and Integer.MAX_VALUE.
Default value is 0.
max-size-policy:
max-size-policy has these valid values:
ENTRY_COUNT (Maximum number of cache entries in the cache),
USED_NATIVE_MEMORY_SIZE (Maximum used native memory size in megabytes per cache
for each Hazelcast instance),
USED_NATIVE_MEMORY_PERCENTAGE (Maximum used native memory size percentage per
cache
for each Hazelcast instance),
FREE_NATIVE_MEMORY_SIZE (Minimum free native memory size in megabytes for each
Hazelcast instance),
FREE_NATIVE_MEMORY_PERCENTAGE (Minimum free native memory size percentage for each
Hazelcast instance).
Default value is "ENTRY_COUNT".
eviction-policy:
Eviction policy has these valid values:
LRU (Least Recently Used),
LFU (Least Frequently Used).
Default value is "LRU".
List of cache entry listeners
List of partition lost listeners
Defines the expiry policy factory class name or
defines the expiry policy factory from predefined ones with duration
configuration.
Hazelcast can replicate some or all of the cluster data. For example,
you can have 5 different caches but you want only one of these caches
replicating across clusters. To achieve this you mark the caches
to be replicated by adding this element.
Adds the Split Brain Protection for this data-structure which you configure using the split-brain-protection element.
You should set the split-brain-protection-ref's value as the split brain protection's name.
Name of the cache.
the type of keys provided as full class name
the type of values provided as full class name
Defines whether statistics gathering is enabled on a cache.
Defines whether management is enabled on a cache.
Set if read-through caching should be used.
Disables per-entry invalidation events, but full-flush invalidation events are
still enabled.
A full-flush invalidation event means invalidation events are sent for all entries on clear.
Set if write-through caching should be used.
Data type that will be used for storing records. Possible values:
BINARY (default): keys and values will be stored as binary data
OBJECT : values will be stored in their object forms
NATIVE : keys and values will be stored in native memory.
Defines the cache loader factory class name.
Defines the cache loader class name.
Defines the cache writer factory class name.
Defines the cache writer class name.
Defines the expiry policy factory class name.
Number of synchronous backups. For example, if `1` is set as the `backup-count`,
then all entries of the cache are copied to one other instance as synchronous for
fail-safety.
`backup-count` + `async-backup-count` cannot be bigger than maximum backup count which
is `6`.
Valid numbers are 0 (no backup), 1, 2 ... 6.
Number of asynchronous backups. For example, if `1` is set as the
`async-backup-count`,
then all entries of the cache are copied to one other instance as asynchronous for
fail-safety.
`backup-count` + `async-backup-count` cannot be bigger than maximum backup count which
is `6`.
Valid numbers are 0 (no backup), 1, 2 ... 6.
This boolean parameter enables hot-restart feature when set as true.
Only available on Hazelcast Enterprise.
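A hedged cache sketch combining the attributes above; whether eviction is expressed as a nested element or as attributes should be verified against this schema:

  <hz:cache name="products" statistics-enabled="true" in-memory-format="BINARY"
            backup-count="1" async-backup-count="0">
    <hz:eviction size="10000" max-size-policy="ENTRY_COUNT" eviction-policy="LRU"/>
  </hz:cache>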
Adds the Split Brain Protection for this data-structure which you configure using the split-brain-protection element.
You should set the split-brain-protection-ref's value as the split brain protection's name.
Number of sync backups. If 1 is set as the backup-count for example, then
all
entries of the map will be copied to another JVM for fail-safety. Valid
numbers are 0 (no backup), 1, 2 ... 6.
Number of async backups. If 1 is set as the async-backup-count for example,
then all
entries of the map will be copied to another JVM (asynchronously) for fail-safety. Valid
numbers are 0 (no backup), 1, 2 ... 6.
Type of value collection. It can be Set or List.
You can retrieve some statistics like owned entry count, backup entry count,
last update time, locked entry count by setting this parameter's value
as "true". The method for retrieving the statistics is `getLocalMultiMapStats()`.
By default, the BINARY in-memory format is used, meaning that the object is stored
in a serialized form. You can set this to false so that the OBJECT in-memory format
is used instead, which is useful when the OBJECT in-memory format has a smaller memory
footprint than BINARY.
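A hedged multimap sketch using the attributes above:

  <hz:multimap name="tags" value-collection-type="SET" binary="true"
               backup-count="1" async-backup-count="0" statistics-enabled="true"/>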
Adds the Split Brain Protection for this data-structure which you configure using the split-brain-protection element.
You should set the split-brain-protection-ref's value as the split brain protection's name.
Maximum size. Any integer between 0 and Integer.MAX_VALUE. 0 means
Integer.MAX_VALUE. Default is 0.
Count of synchronous backups. Remember that List is a non-partitioned data
structure, i.e. all entries of a List reside in one partition. When this
parameter is '1', it means there will be a backup of that List on another
node in the cluster. When it is '2', 2 nodes will have the backup.
Count of asynchronous backups.
Enable/disable statistics
Adds the Split Brain Protection for this data-structure which you configure using the split-brain-protection element.
You should set the split-brain-protection-ref's value as the split-brain-protection's name.
Maximum size. Any integer between 0 and Integer.MAX_VALUE. 0 means
Integer.MAX_VALUE. Default is 0.
Count of synchronous backups. Remember that Set is a non-partitioned data
structure, i.e. all entries of a Set reside in one partition. When this
parameter is '1', it means there will be a backup of that Set on another
node in the cluster. When it is '2', 2 nodes will have the backup.
Count of asynchronous backups.
Enable/disable statistics
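For illustration, minimal list and set declarations using the attributes above (names assumed):

  <hz:list name="recentItems" max-size="0" backup-count="1" async-backup-count="0"
           statistics-enabled="true"/>
  <hz:set name="uniqueItems" max-size="0" backup-count="1" async-backup-count="0"
          statistics-enabled="true"/>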
If set as `true`, you can retrieve statistics for the topic using the
method `getLocalTopicStats()`.
By default, it is false, meaning there is no global order
guarantee.
Default is `false`, meaning only one dedicated thread will handle topic messages.
When multi-threading is enabled (true), all threads from the event thread pool can be used
for message handling.
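A hedged topic sketch with the attributes above:

  <hz:topic name="news" global-ordering-enabled="false" multi-threading-enabled="false"
            statistics-enabled="true"/>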
A replicated map is an implementation
of the map
interface which is not
partitioned but fully replicates all data to all members.
Due to the nature of weak consistency there is a chance of reading stale data
and no
guarantee is given to retrieve the same value on multiple get calls.
ReplicatedMap was added in Hazelcast 3.2.
Adds the Split Brain Protection for this data-structure which you configure using the split-brain-protection element.
You should set the split-brain-protection-ref's value as the split-brain-protection's name.
This value defines whether the replicated map is available for reads before the
initial
replication is completed. Default is true. If set to false, no Exception will
be
thrown when the replicated map is not yet ready, but the call will block until
it has finished.
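A hedged replicated-map sketch; the async-fillup attribute name is assumed from the description above:

  <hz:replicated-map name="settings" in-memory-format="OBJECT" async-fillup="true"
                     statistics-enabled="true"/>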
Specify the name, type and value of your attribute here.
True to set the node as a lite member, false otherwise.
Number of replicas on which the CRDT state will be kept. The updates are replicated
asynchronously between replicas.
The number must be greater than 1 and up to 2147483647 (Integer.MAX_VALUE).
The default value is 2147483647 (Integer.MAX_VALUE).
Adds the Split Brain Protection for this data-structure which you configure using the split-brain-protection element.
You should set the split-brain-protection-ref's value as the split-brain-protection's name.
Name of the PN counter.
Enable/disable statistics for this PN counter.
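A hedged PN counter sketch using the attributes above:

  <hz:pn-counter name="pageViews" replica-count="2147483647" statistics-enabled="true"/>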
Custom classes implementing com.hazelcast.nio.serialization.DataSerializableFactory to be registered.
These can be used to speed up serialization/deserialization of objects.
PortableFactory class to be registered.
Global serializer class to be registered if no other serializer is applicable.
Defines the class name of the serializer implementation.
Blacklist used for deserialization class filtering.
Blacklist used for deserialization class filtering.
Disables including default list entries (hardcoded in Hazelcast source code).
Name of a class to be included in the list.
Name of a package to be included in the list.
Class name prefix to be included in the list.
Configure the hazelcast instance
Configure the hazelcast client with multiple configs. The client will connect to the first one and, when disconnected
from it, will connect to the next available cluster via the given alternative configs.
Configure the hazelcast client
Retrieve a Hazelcast IMap instance
Retrieve a JCache cache manager from specified Hazelcast instance
Retrieve a Hazelcast MultiMap instance
Retrieve a Hazelcast ReplicatedMap instance
Retrieve a Hazelcast IQueue instance
Retrieve a Hazelcast Ringbuffer instance
Retrieve a Hazelcast ITopic instance
Retrieve a Hazelcast ITopic instance
Retrieve a Hazelcast ISet instance
Retrieve a Hazelcast IList instance
Retrieve a Hazelcast IExecutorService instance
Retrieve a Hazelcast IExecutorService instance
Retrieve a Hazelcast IScheduledExecutorService instance
Retrieve a Hazelcast FlakeIdGenerator instance
Retrieve a Hazelcast CardinalityEstimator instance
Retrieve a Hazelcast IAtomicLong instance
Retrieve a Hazelcast IAtomicReference instance
Retrieve a Hazelcast ICountDownLatch instance
Retrieve a Hazelcast ISemaphore instance
Retrieve a Hazelcast FencedLock instance
Retrieve a Hazelcast PNCounter instance
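The 'Retrieve a Hazelcast ... instance' elements above expose distributed objects as Spring beans. A hedged sketch of declaring an instance and injecting one of its maps as a bean; the id, instance-ref and name attributes are assumptions based on common hazelcast-spring usage:

  <hz:hazelcast id="instance">
    <hz:config>
      <hz:cluster-name>dev</hz:cluster-name>
    </hz:config>
  </hz:hazelcast>
  <hz:map id="sessionsMap" instance-ref="instance" name="sessions"/>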
Encryption algorithm such as DES/ECB/PKCS5Padding, PBEWithMD5AndDES, AES/CBC/PKCS5Padding,
Blowfish, DESede.
This configuration is not intended to provide addresses of other cluster members with
which the hazelcast instance will form a cluster. This is an SPI for advanced use in
cases where the DefaultAddressPicker does not pick suitable addresses to bind to
and publish to other cluster members. For instance, this could allow easier
deployment in some cases when running on Docker, AWS or other cloud environments.
That said, if you are just starting with Hazelcast, you will probably want to
set the member addresses by using the tcp-ip or multicast configuration
or adding a discovery strategy.
The member address provider allows you to plug in your own strategy to customize:
1. What address Hazelcast will bind to
2. What address Hazelcast will advertise to other members, which they can connect to
In most environments you don't need to customize this and the default strategy will work just
fine. However, in some cloud environments the default strategy does not make the right choice and
the member address provider delegates the process of address picking to external code.
Includes IP addresses of trusted members. When a node wants to join the cluster,
its join request will be rejected if it is not a trusted member. You can define an IP
address range using the wildcard (*) on the last digit of the IP address
(e.g. 192.168.1.* or 192.168.1.100-110).
Sets the strategy for checking consistency of data between source and
target cluster. Any inconsistency will not be reconciled, it will be
merely reported via the usual mechanisms (e.g. statistics, diagnostics).
The user must initiate WAN sync to reconcile those differences. For the
check procedure to work properly, the target cluster should support the
chosen strategy.
Default value is NONE, which means the check is disabled.
While recovering from split-brain (network partitioning), data structure entries in the small cluster
merge into the bigger cluster based on the policy set here. When an entry merges into the cluster,
an entry with the same key (or value) might already exist in the cluster.
The merge policy resolves these conflicts with different out-of-the-box or custom strategies.
The out-of-the-box merge policies can be referenced by their simple class names.
For custom merge policies you have to provide a fully qualified class name.
The out-of-the-box policies are:
DiscardMergePolicy: the entry from the smaller cluster will be discarded.
HigherHitsMergePolicy: the entry with the higher number of hits wins.
LatestAccessMergePolicy: the entry with the latest access wins.
LatestUpdateMergePolicy: the entry with the latest update wins.
PassThroughMergePolicy: the entry from the smaller cluster wins.
PutIfAbsentMergePolicy: the entry from the smaller cluster wins if it doesn't exist in the cluster.
The default policy is: PutIfAbsentMergePolicy
The name of the class implementing the com.hazelcast.spi.MemberAddressProvider interface.
If both the implementation and the class name are provided, the implementation is used
and the class name is ignored.
Specifies whether the member address provider SPI is enabled or not. Values can be true or false.
ICMP can be used in addition to the other detectors. It operates at layer 3 and detects network
and hardware issues more quickly.
Timeout in Milliseconds before declaring a failed ping
Maximum number of times the IP Datagram (ping) can be forwarded; in most cases
all Hazelcast cluster members would be within one network switch/router, therefore
the default of 0 is usually sufficient.
Run ICMP detection in parallel with the Heartbeat failure detector
The cluster member will fail to start if it is unable to execute an ICMP ping command when ICMP is enabled.
Failure is usually due to OS-level restrictions.
Maximum number of consecutive failed attempts before declaring a member suspect
Time in milliseconds between each ICMP ping
Enables ICMP Pings to detect and suspect dead members
Configuration object for the built-in WAN publisher (available in
Hazelcast Enterprise). The publisher sends events to another Hazelcast
cluster in batches, sending either when enough events are enqueued or when
enqueued events have waited long enough.
The endpoint can be a different cluster defined by static IPs or
discovered using a cloud discovery mechanism.
Configuration object for a custom WAN publisher. A single publisher defines how
WAN events are sent to a specific endpoint.
The endpoint can be some other external system which is
not a Hazelcast cluster (e.g. JMS queue).
Sets the cluster name used as an endpoint cluster name for authentication
on the target endpoint.
If there is no separate publisher ID property defined, this cluster name
will also be used as a WAN publisher ID. This ID is then used for
identifying the publisher in a WanReplicationConfig.
Sets if key-based coalescing is configured for this WAN publisher.
When enabled, only the latest WanReplicationEvent
of a key is sent to target.
Defines the initial state in which a WAN publisher is started.
- REPLICATING (default):
State where both enqueuing new events is allowed, enqueued events are replicated to the target cluster
and WAN sync is enabled.
- PAUSED:
State where new events are enqueued but they are not dequeued. Some events which have been dequeued before
the state was switched may still be replicated to the target cluster but further events will not be
replicated. WAN sync is enabled.
- STOPPED:
State where neither new events are enqueued nor dequeued. As with the PAUSED state, some events might
still be replicated after the publisher has switched to this state. WAN sync is enabled.
Sets the capacity of the primary and backup queue for WAN replication events.
One hazelcast instance can have up to 2*queueCapacity events since
we keep up to queueCapacity primary events (events with keys for
which the instance is the owner) and queueCapacity backup events
(events with keys for which the instance is the backup).
Events for IMap and ICache count against this limit collectively.
When the queue capacity is reached, backup events are dropped while normal
replication events behave as determined by the queue-full-behavior.
Sets the maximum batch size that can be sent to target cluster.
Sets the maximum amount of time in milliseconds to wait before sending a
batch of events to target cluster, if batch-size of events
have not arrived within this duration.
Sets the duration in milliseconds for the waiting time before retrying to
send the events to the target cluster again in case the acknowledgement
has not arrived.
Sets the configured behaviour of this WAN publisher when the WAN queue is
full.
Sets the strategy for when the target cluster should acknowledge that
a WAN event batch has been processed.
Sets the period in seconds in which WAN tries to discover new target
endpoints and reestablish connections to failed endpoints.
Sets the maximum number of endpoints that WAN will connect to when
using a discovery mechanism to define endpoints.
This property has no effect when static endpoint addresses are defined
using target-endpoints.
Sets the maximum number of WAN event batches being sent to the target
cluster concurrently.
Setting this property to anything less than 2 will only allow a
single batch of events to be sent to each target endpoint and will
maintain causality of events for a single partition.
Setting this property to 2 or higher will allow multiple batches
of WAN events to be sent to each target endpoint. Since this allows
reordering of batches due to network conditions, causality and ordering
of events for a single partition is lost and batches for a single
partition are now sent randomly to any available target endpoint.
This, however, can provide faster WAN replication for certain scenarios
such as replicating immutable, independent map entries which are only
added once and where ordering of when these entries are added is not
necessary.
Keep in mind that if you set this property to a value which is less than
the target endpoint count, you will lose performance as not all target
endpoints will be used at any point in time to process WAN event batches.
So, for instance, if you have a target cluster with 3 members (target
endpoints) and you want to use this property, it makes sense to set it
to a value higher than 3. Otherwise, you can simply disable it
by setting it to less than 2 in which case WAN will use the
default replication strategy and adapt to the target endpoint count
while maintaining causality.
Sets whether the WAN connection manager should connect to the
endpoint on the private address returned by the discovery SPI.
By default this property is false which means the WAN connection
manager will always use the public address.
Sets the minimum duration in nanoseconds that the WAN replication thread
will be parked if there are no events to replicate.
Sets the maximum duration in nanoseconds that the WAN replication thread
will be parked if there are no events to replicate.
Sets the publisher ID used for identifying the publisher in a
WanReplicationConfig.
If there is no publisher ID defined (it is empty), the cluster name will
be used as a publisher ID.
Comma separated list of target cluster members,
e.g. 127.0.0.1:5701, 127.0.0.1:5702.
Reference to the name of a WAN endpoint config or WAN server socket endpoint config.
The network settings from the referenced endpoint configuration will set up the network
configuration of connections to the target WAN cluster members.
Sets the publisher ID used for identifying the publisher in a
WanReplicationConfig.
If there is no publisher ID defined (it is empty), the cluster name will
be used as a publisher ID.
Fully qualified class name of WAN Replication implementation WanPublisher.
Properties for the custom WAN consumer. These properties are
accessible when initializing the WAN consumer.
Defines a custom WAN consumer (WanConsumer).
If you don't define a class name or implementation, the default processing
logic for incoming WAN events will be used.
When true, an incoming event over WAN replication can be persisted to a
database for example, otherwise it will not be persisted. Default value
is false.
Block or allow actions that are submitted as tasks to an Executor from clients and have no permission mappings.
true: Blocks all actions that have no permission mapping
false: Allows all actions that have no permission mapping
Name of the principal. Wildcards(*) can be used.
Name of the permission. Wildcards(*) can be used.
Endpoint address of principal. Wildcards(*) can be used.
Permission actions that are permitted on Hazelcast instance objects.
This configuration lets you add listeners (listener classes) for the
map entries.
This configuration lets you index the attributes and also order them.
This configuration lets you index the attributes and also order them.
True to enable User Code Deployment on this client, false otherwise.
Name of the Flake ID Generator.
Sets how many IDs are pre-fetched in the background when one call to
FlakeIdGenerator.newId() is made. Value must be in the range 1..100,000, default
is 100.
This setting pertains only to newId() calls made on the member that configured it.
Sets for how long the pre-fetched IDs can be used. If this time elapses, a new batch of IDs
will be fetched. Time unit is milliseconds, default is 600,000 (10 minutes).
The IDs contain a timestamp component, which ensures rough global ordering of IDs. If an
ID is assigned to an object that was created much later, it will be much out of order. If you
don't care about ordering, set this value to 0.
This setting pertains only to newId() calls made on the member that configured it.
Sets the offset of timestamp component. Time unit is milliseconds, default is 1514764800000
(1.1.2018 0:00 UTC).
Sets the offset that will be added to the node ID assigned to cluster member for this generator.
Might be useful in A/B deployment scenarios where you have cluster A which you want to upgrade.
You create cluster B and for some time both will generate IDs and you want to have them unique.
In this case, configure node ID offset for generators on cluster B.
Sets the bit-length of the sequence component, default is 6 bits.
Sets the bit-length of node id component. Default value is 16 bits.
Sets how far into the future the generator is allowed to go to generate IDs without blocking; default is 15 seconds.
Enable/disable statistics.
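A hedged flake ID generator sketch; the attribute names below mirror the member XML configuration and the descriptions above, so verify them against this schema:

  <hz:flake-id-generator name="orderIds" prefetch-count="100"
                         prefetch-validity-millis="600000" epoch-start="1514764800000"
                         node-id-offset="0" bits-sequence="6" bits-node-id="16"
                         allowed-future-millis="15000" statistics-enabled="true"/>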
Name of the Flake ID Generator.
Sets how many IDs are pre-fetched in the background when one call to
FlakeIdGenerator.newId() is made. Value must be in the range 1..100,000, default
is 100.
This setting pertains only to newId() calls made on the member that configured it.
Sets for how long the pre-fetched IDs can be used. If this time elapses, a new batch of IDs
will be fetched. Time unit is milliseconds, default is 600,000 (10 minutes).
The IDs contain a timestamp component, which ensures rough global ordering of IDs. If an
ID is assigned to an object that was created much later, it will be much out of order. If you
don't care about ordering, set this value to 0.
This setting pertains only to newId() calls made on the member that configured it.
A policy to deal with an overloaded topic; that is, a topic where there is no place to store new messages.
This policy can only be used in combination with the
com.hazelcast.core.HazelcastInstance#getReliableTopic(String).
The reliable topic uses a com.hazelcast.ringbuffer.Ringbuffer to
store the messages. A ringbuffer doesn't track where readers are, so
it has no concept of slow consumers. This provides many advantages like
high-performance reads, but it also gives the reader the ability to
re-read the same message multiple times in case of an error.
A ringbuffer has a limited, fixed capacity. A fast producer may overwrite
old messages that are still being read by a slow consumer. To prevent
this, we may configure a time-to-live on the ringbuffer (see
com.hazelcast.config.RingbufferConfig#setTimeToLiveSeconds(int)).
Once the time-to-live is configured, the TopicOverloadPolicy
controls how the publisher is going to deal with the situation that a
ringbuffer is full and the oldest item in the ringbuffer is not old
enough to get overwritten.
Keep in mind that this retention period (time-to-live) can keep messages
from being overwritten, even though all readers might have already completed reading.
Its default value is BLOCK. Available values are as follows:
- DISCARD_OLDEST:
Using this policy, a message that has not expired can be overwritten.
No matter the retention period set, the overwrite will just overwrite
the item.
This can be a problem for slow consumers because they were promised a
certain time window to process messages. But it will benefit producers
and fast consumers since they are able to continue. This policy sacrifices
the slow consumer in favor of fast producers/consumers.
- DISCARD_NEWEST:
Message that was to be published is discarded.
- BLOCK:
The caller will wait until there is space in the Ringbuffer.
- ERROR:
The publish call fails immediately.
Sets the read batch size.
The ReliableTopic tries to read a batch of messages from the ringbuffer.
It will get at least one, but if there are more available, then it will
try to get more to increase throughput. The maximum read batch size can
be influenced using the read batch size.
Apart from influencing the number of messages to retrieve, the
readBatchSize also determines how many messages will be processed
by the thread running the MessageListener before it returns
to the pool to look for other MessageListeners that need to be
processed. The problem with returning to the pool and looking for new work
is that interacting with an executor is quite expensive due to contention
on the work-queue. The more work that can be done without returning to the
pool, the smaller the overhead.
If the readBatchSize is 10 and there are 50 messages available,
10 items are retrieved and processed consecutively before the thread goes
back to the pool and helps out with the processing of other messages.
If the readBatchSize is 10 and there are 2 items available,
2 items are retrieved and processed consecutively.
If the readBatchSize is an issue because a thread will be busy
too long with processing a single MessageListener and it can't
help out other MessageListeners, increase the size of the
threadpool so the other MessageListeners don't need to wait for
a thread, but can be processed in parallel.
Name of the Reliable Topic.
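A hedged reliable topic sketch using the read batch size and overload policy described above:

  <hz:reliable-topic name="alerts" read-batch-size="10"
                     topic-overload-policy="BLOCK" statistics-enabled="true"/>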
Sets the metrics collection frequency in seconds.
By default, metrics are collected every 5 seconds.
May be overridden by 'hazelcast.metrics.collection.frequency'
system property.
Master-switch for the metrics system. Controls whether
the metrics are collected and publishers are enabled.
May be overridden by 'hazelcast.metrics.enabled'
system property.
One of membership-listener, instance-listener or migration-listener
Sets the list of class names implementing the CacheWanEventFilter or
MapWanEventFilter for filtering outbound WAN replication events.
Name of the wan-replication configuration. IMap or ICache instance uses this wan-replication config.
Sets the merge policy sent to the WAN replication target to merge
replicated entries with existing target entries.
The default merge policy is com.hazelcast.spi.merge.PassThroughMergePolicy.
When enabled, an incoming event to a member is forwarded to the target cluster of that member.
A probabilistic split brain protection function based on Phi Accrual failure detector. See
com.hazelcast.internal.cluster.fd.PhiAccrualClusterFailureDetector for implementation
details. Configuration:
- acceptable-heartbeat-pause: duration in milliseconds corresponding to the number
of potentially lost/delayed heartbeats that will be accepted before considering it to be an anomaly.
This margin is important to be able to survive sudden, occasional pauses in heartbeat arrivals,
due to for example garbage collection or network drops.
- threshold: threshold for suspicion level. A low threshold is prone to generate
many wrong suspicions but ensures a quick detection in the event of a real crash. Conversely, a high
threshold generates fewer mistakes but needs more time to detect actual crashes.
- max-sample-size: number of samples to use for calculation of mean and standard
deviation of inter-arrival times.
- first-heartbeat-estimate: bootstrap the stats with heartbeats that correspond to
this duration in milliseconds, with a rather high standard deviation (since the environment is unknown
at the beginning).
- min-std-deviation: minimum standard deviation (in milliseconds) to use for the normal
distribution used when calculating phi. Too low a standard deviation might result in too much
sensitivity for sudden, but normal, deviations in heartbeat inter-arrival times.
A split brain protection function that keeps track of the last heartbeat timestamp per each member.
For a member to be considered live (for the purpose of concluding whether the minimum cluster size
property is satisfied), a heartbeat must have been received at most heartbeat-tolerance
milliseconds before current time.
The period between two replications of CRDT states in milliseconds.
A lower value will increase the speed at which changes are disseminated
to other cluster members at the expense of burst-like behaviour - fewer
updates will be batched together in one replication message and one
update to a CRDT may cause a sudden burst of replication messages in a
short time interval.
The value must be a positive non-null integer.
The maximum number of target members that we replicate the CRDT states
to in one period. A higher count will lead to states being disseminated
more rapidly at the expense of burst-like behaviour - one update to a
CRDT will lead to a sudden burst in the number of replication messages
in a short time interval.
The path of the KeyStore file.
The type of the KeyStore (PKCS12, JCEKS, etc.).
The password to access the KeyStore (JCEKS, PKCS12, etc.).
The alias for the current encryption key entry (optional).
The polling interval for checking for changes in the KeyStore.
Java KeyStore Secure Store configuration.
The address of the Vault REST API server.
The Vault secret path.
The Vault access token.
The polling interval for checking for changes in Vault.
SSL/TLS configuration for HTTPS connections.
HashiCorp Vault Secure Store configuration.
Configuration for hot restart (symmetric) encryption at rest. Encryption is based
on Java Cryptography Architecture.
Encryption algorithm such as AES/CBC/PKCS5Padding, DES/ECB/PKCS5Padding, Blowfish,
or DESede.
Salt value to use when generating the secret key.
The key size in bits used when generating encryption keys (optional).
The default value of 0 implies falling back to the cipher-specific
default key size.
The Secure Store configuration. Not required when encryption at rest is disabled.
True to enable symmetric encryption, false to disable.
Base directory for all hot-restart data. Can be an absolute or relative path to the node startup
directory.
Base directory for hot backups. Each new backup will be created in a separate directory inside this one.
Can be an absolute or relative path to the node startup directory.
Specifies parameters for encryption of Hot Restart data. This includes the encryption algorithm
to be used (such as AES, DESede etc.) and the Secure Store configuration for retrieving the
encryption keys.
True to enable hot-restart, false otherwise.
Validation timeout for the hot-restart process; includes validating
cluster members expected to join and the partition table across the cluster.
Data load timeout for the hot-restart process;
all members in the cluster should complete restoring their local data
before this timeout.
Specifies the policy that will be respected during hot restart cluster start. Valid values are :
FULL_RECOVERY_ONLY : Starts the cluster only when all expected nodes are present and correct.
Otherwise, it fails.
PARTIAL_RECOVERY_MOST_RECENT : Starts the cluster with the members which have most up-to-date
partition table and successfully restored their data. All other members will leave the cluster and
force-start themselves. If no member restores its data successfully, cluster start fails.
PARTIAL_RECOVERY_MOST_COMPLETE : Starts the cluster with the largest group of members which have the
same partition table version and successfully restored their data. All other members will leave the
cluster and force-start themselves. If no member restores its data successfully, cluster start fails.
Sets whether or not automatic removal of stale Hot Restart data is enabled.
When a member terminates or crashes while the cluster state is ACTIVE, the remaining members
redistribute data among themselves, and the data persisted on the terminated member's storage becomes
stale. That terminated member cannot rejoin the cluster without removing its Hot Restart data.
When auto-removal of stale Hot Restart data is enabled, while restarting that member,
Hot Restart data is automatically removed and it joins the cluster as a completely new member.
Otherwise, Hot Restart data should be removed manually.
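A hedged hot-restart persistence sketch (Hazelcast Enterprise); attribute names are assumed from the descriptions above and the member XML configuration:

  <hz:hot-restart-persistence enabled="true" base-dir="/mnt/hot-restart"
                              backup-dir="/mnt/hot-backup"
                              validation-timeout-seconds="120" data-load-timeout-seconds="900"
                              cluster-data-recovery-policy="FULL_RECOVERY_ONLY"
                              auto-remove-stale-data="true"/>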
Configuration for an event journal. The event journal keeps events related
to a specific partition and data structure. For instance, it could keep
map add, update, remove, merge events along with the key, old value, new value and so on.
True if the event journal is enabled, false otherwise.
Number of items in the event journal. If no time-to-live-seconds
is set, the size will always be equal to capacity after the event
journal has been filled. This is because no items are getting retired.
The default value is 10000.
Maximum number of seconds for each entry to stay in the event journal.
Entries that are older than <time-to-live-seconds> are evicted from the journal.
Any integer between 0 and Integer.MAX_VALUE. 0 means infinite. Default is 0.
True if disk write should be followed by an fsync() system call,
false otherwise.
True if hot-restart is enabled, false otherwise
Only available on Hazelcast Enterprise.
Configuration for a merkle tree.
The merkle tree is a data structure used for efficient comparison of the
difference in the contents of large data structures. The precision of
such a comparison mechanism is defined by the depth of the merkle tree.
A larger depth means that a data synchronization mechanism will be able
to pinpoint a smaller subset of the data structure contents in which a
change occurred. This causes the synchronization mechanism to be more
efficient. On the other hand, a larger tree depth means the merkle tree
will consume more memory.
A smaller depth means the data synchronization mechanism will have to
transfer larger chunks of the data structure in which a possible change
happened. On the other hand, a shallower tree consumes less memory.
The depth must be between 2 and 27 (exclusive).
As the comparison mechanism is iterative, a larger depth will also prolong
the duration of the comparison mechanism. Care must be taken to not have
large tree depths if the latency of the comparison operation is high.
The default depth is 10.
See https://en.wikipedia.org/wiki/Merkle_tree.
True if the merkle tree is enabled, false otherwise.
The depth of the merkle tree.
A larger depth means that a data synchronization mechanism will be able
to pinpoint a smaller subset of the data structure contents in which a
change occurred. This causes the synchronization mechanism to be more
efficient. On the other hand, a larger tree depth means the merkle tree
will consume more memory.
A smaller depth means the data synchronization mechanism will have to
transfer larger chunks of the data structure in which a possible change
happened. On the other hand, a shallower tree consumes less memory.
The depth must be between 2 and 27 (exclusive). The default depth is 10.
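As a sketch, the event journal and merkle tree described above are typically enabled per data structure; whether they nest under the map element in this schema, and under which names, should be verified:

  <hz:map name="auditedMap">
    <hz:event-journal enabled="true" capacity="10000" time-to-live-seconds="0"/>
    <hz:merkle-tree enabled="true" depth="10"/>
  </hz:map>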
This configuration is not intended to provide addresses of other cluster members with
which the hazelcast instance will form a cluster. This is an SPI for advanced use in
cases where the DefaultAddressPicker does not pick suitable addresses to bind to
and publish to other cluster members. For instance, this could allow easier
deployment in some cases when running on Docker, AWS or other cloud environments.
That said, if you are just starting with Hazelcast, you will probably want to
set the member addresses by using the tcp-ip or multicast configuration
or adding a discovery strategy.
The member address provider allows you to plug in your own strategy to customize:
1. What address Hazelcast will bind to
2. What address Hazelcast will advertise to other members, which they can connect to
In most environments you don't need to customize this and the default strategy will work just
fine. However, in some cloud environments the default strategy does not make the right choice and
the member address provider delegates the process of address picking to external code.
Indicates whether the advanced network configuration is enabled or not. Default is false.
Encryption algorithm such as DES/ECB/PKCS5Padding, PBEWithMD5AndDES, AES/CBC/PKCS5Padding,
Blowfish, DESede.
Encryption algorithm such as DES/ECB/PKCS5Padding, PBEWithMD5AndDES, AES/CBC/PKCS5Padding,
Blowfish, DESede.
Encryption algorithm such as DES/ECB/PKCS5Padding, PBEWithMD5AndDES, AES/CBC/PKCS5Padding,
Blowfish, DESede.
Enables or disables named REST endpoint group.
If a group is not listed within the rest-api configuration, then its 'enabledByDefault' flag is used
to control the behavior of the group.
Enables or disables named REST endpoint group.
If a group is not listed within the rest-api configuration, then its 'enabledByDefault' flag is used
to control the behavior of the group.
Number of CP members to initialize CP Subsystem. It is 0 by default,
meaning that CP Subsystem is disabled. CP Subsystem is enabled when
a positive value is set. After CP Subsystem is initialized successfully,
more CP members can be added at run-time and the number of active CP
members can go beyond the configured CP member count. The number of CP
members can be smaller than total member count of the Hazelcast cluster.
For instance, you can run 5 CP members in a Hazelcast cluster of
20 members. If set, must be greater than or equal to group-size.
Number of CP members to form CP groups. If set, it must be an odd
number between 3 and 7.
Otherwise, cp-member-count is respected while forming CP groups.
If set, must be smaller than or equal to cp-member-count.
Duration for a CP session to be kept alive after the last received
session heartbeat. A CP session is closed if no session heartbeat is
received during this duration. Session TTL must be decided wisely. If
a very small value is set, a CP session can be closed prematurely if
its owner Hazelcast instance temporarily loses connectivity to CP
Subsystem because of a network partition or a GC pause. In such an
occasion, all CP resources of this Hazelcast instance, such as
FencedLock or ISemaphore, are released. On the other
hand, if a very large value is set, CP resources can remain assigned to
an actually crashed Hazelcast instance for too long and liveliness
problems can occur. CP Subsystem offers an API in
CPSessionManagementService to deal with liveliness issues
related to CP sessions. In order to prevent premature session expiry,
the session TTL can be set to a relatively large value and
CPSessionManagementService#forceCloseSession(String, long)
can be manually called to close CP session of a crashed Hazelcast
instance.
Must be greater than session-heartbeat-interval-seconds, and
smaller than or equal to missing-cp-member-auto-removal-seconds.
Interval for the periodically-committed CP session heartbeats.
A CP session is started on a CP group with the first session-based
request of a Hazelcast instance. After that moment, heartbeats are
periodically committed to the CP group.
Must be smaller than session-time-to-live-seconds.
Duration to wait before automatically removing a missing CP member from
CP Subsystem. When a CP member leaves the Hazelcast cluster, it is not
automatically removed from CP Subsystem, since it could be still alive
and left the cluster because of a network problem. On the other hand,
if a missing CP member actually crashed, it creates a danger for CP
groups, because it is still part of majority calculations. This
situation could lead to losing majority of CP groups if multiple CP
members leave the cluster over time.
With the default configuration, missing CP members are automatically
removed from CP Subsystem after 4 hours. This feature is very useful
in terms of fault tolerance when CP member count is also configured
to be larger than group size. In this case, a missing CP member is
safely replaced in its CP groups with other available CP members
in CP Subsystem. This configuration also implies that no network
partition is expected to be longer than the configured duration.
If a missing CP member comes back alive after it is removed from CP
Subsystem with this feature, that CP member must be terminated manually.
Must be greater than or equal to session-time-to-live-seconds.
Offers a choice between at-least-once and at-most-once execution
of operations on top of the Raft consensus algorithm. It is disabled by
default and offers at-least-once execution guarantee. If enabled, it
switches to at-most-once execution guarantee. When you invoke an API
method on a CP data structure proxy, it sends an internal operation
to the corresponding CP group. After this operation is committed on
the majority of this CP group by the Raft leader node, it sends
a response for the public API call. If a failure causes loss of
the response, then the calling side cannot determine if the operation is
committed on the CP group or not. In this case, if this configuration is
disabled, the operation is replicated again to the CP group, and hence
could be committed multiple times. If it is enabled, the public API call
fails with com.hazelcast.core.IndeterminateOperationStateException.
Flag to denote whether or not CP Subsystem Persistence is enabled.
If enabled, CP members persist their local CP data to stable storage and
can recover from crashes.
Base directory to store all CP data when persistence-enabled
is true. This directory can be shared between multiple CP members.
Each CP member creates a unique directory for itself under the base
directory. This is especially useful for cloud environments where CP
members generally use a shared filesystem.
Timeout duration for CP members to restore their data from disk.
A CP member fails its startup if it cannot complete its CP data restore
process in the configured duration.
Contains configuration options for Hazelcast's Raft consensus algorithm implementation
Configurations for CP semaphore instances
Configurations for FencedLock instances
Leader election timeout in milliseconds. If a candidate cannot win
majority of the votes in time, a new leader election round is initiated.
Duration in milliseconds for a Raft leader node to send periodic
heartbeat messages to its followers in order to denote its liveliness.
Periodic heartbeat messages are actually append entries requests and
can contain log entries for the lagging followers. If a too small value
is set, heartbeat messages are sent from Raft leaders to followers too
frequently and it can cause an unnecessary usage of CPU and network.
Maximum number of missed Raft leader heartbeats for a follower to
trigger a new leader election round. For instance, if
leader-heartbeat-period-in-millis is 1 second and this value is set
to 5, then a follower triggers a new leader election round if 5 seconds
pass after the last heartbeat message of the current Raft leader node.
If this duration is too small, new leader election rounds can be
triggered unnecessarily if the current Raft leader temporarily slows
down or a network congestion occurs. If it is too large, it takes longer
to detect failures of Raft leaders.
Maximum number of Raft log entries that can be sent as a batch
in a single append entries request. In Hazelcast's Raft consensus
algorithm implementation, a Raft leader maintains a separate replication
pipeline for each follower. It sends a new batch of Raft log entries to
a follower after the follower acknowledges the last append entries
request sent by the leader.
Number of new commits to initiate a new snapshot after the last snapshot
taken by the local Raft node. This value must be configured wisely as it
affects performance of the system in multiple ways. If a small value is
set, it means that snapshots are taken too frequently and Raft nodes keep
a very short Raft log. If snapshots are large and CP Subsystem
Persistence is enabled, this can create an unnecessary overhead on IO
performance. Moreover, a Raft leader can send too many snapshots to
followers and this can create an unnecessary overhead on network.
On the other hand, if a very large value is set, it can create a memory
overhead since Raft log entries are going to be kept in memory until
the next snapshot.
Maximum number of uncommitted log entries in the leader's Raft log
before temporarily rejecting new requests of callers. Since Raft leaders
send log entries to followers in batches, they accumulate incoming
requests in order to improve the throughput. You can configure this
field by considering your degree of concurrency in your callers.
For instance, if you have at most 1000 threads sending requests to
a Raft leader, you can set this field to 1000 so that callers do not
get retry responses unnecessarily.
Timeout duration in milliseconds to apply backoff on append entries
requests. After a Raft leader sends an append entries request to
a follower, it will not send a subsequent append entries request either
until the follower responds or this timeout occurs. Backoff durations
are increased exponentially if followers remain unresponsive.
Name of the CP semaphore
Enables / disables JDK compatibility of CP ISemaphore.
When it is JDK compatible, just as in the Semaphore#release()
method, a permit can be released without acquiring it first, because
acquired permits are not bound to threads. However, there is no
auto-cleanup mechanism for acquired permits upon Hazelcast
server / client failures. If a permit holder fails, its permits must be
released manually. When JDK compatibility is disabled,
a HazelcastInstance must acquire permits before releasing them
and it cannot release a permit that it has not acquired. It means, you
can acquire a permit from one thread and release it from another thread
using the same HazelcastInstance, but not different
HazelcastInstances. In this mode, acquired permits are
automatically released upon failure of the holder
HazelcastInstance. So there is a minor behavioral difference
to the Semaphore#release() method.
JDK compatibility is disabled by default.
Number of permits to initialize the Semaphore. If a positive value is
set, the Semaphore is initialized with the given number of permits.
Name of the FencedLock
Maximum number of reentrant lock acquires. Once a caller acquires
the lock this many times, it will not be able to acquire the lock again,
until it makes at least one unlock() call.
By default, no upper bound is set for the number of reentrant lock
acquires, which means that once a caller acquires a FencedLock,
all of its further lock() calls will succeed. However, for instance,
if you set lock-acquire-limit to 2, once a caller acquires
the lock, it will be able to acquire it once more, but its third lock()
call will not succeed.
If lock-acquire-limit is set to 1, then the lock becomes non-reentrant.
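A heavily hedged CP Subsystem sketch; whether these settings are attributes or nested elements in this schema (and the exact semaphore/lock sub-elements) must be checked against the schema, and the values simply echo the defaults and constraints described above:

  <hz:cp-subsystem cp-member-count="3" group-size="3"
                   session-time-to-live-seconds="300"
                   session-heartbeat-interval-seconds="5"
                   missing-cp-member-auto-removal-seconds="14400"
                   fail-on-indeterminate-operation-state="false"/>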
Sets the metrics collection frequency in seconds.
By default, metrics are collected every 5 seconds.
May be overridden by 'hazelcast.metrics.collection.frequency'
system property.
Master-switch for the metrics system. Controls whether
the metrics are collected and publishers are enabled.
May be overridden by 'hazelcast.metrics.enabled'
system property.
Sets the number of seconds the metrics will be retained on the
instance. By default, metrics are retained for 5 seconds (that is for
one collection of metrics values, if default "collection-frequency-seconds"
collection frequency is used). More retention means more heap memory, but
allows for longer client hiccups without losing a value (for example to
restart the Management Center).
May be overridden by 'hazelcast.metrics.mc.retention'
system property.
Controls whether the metrics collected are exposed to
Hazelcast Management Center. It is enabled by default.
Please note that the metrics are polled by the
Hazelcast Management Center, hence the members need to
buffer the collected metrics between two polls. The aim
for this switch is to reduce memory consumption of the
metrics system if the Hazelcast Management Center is not
used.
In order to expose the metrics, the metrics system needs
to be enabled via the enabled master-switch attribute.
May be overridden by 'hazelcast.metrics.mc.enabled'
system property.
Controls whether the metrics collected are exposed
through JMX. It is enabled by default.
In order to expose the metrics, the metrics system needs
to be enabled via the enabled master-switch attribute.
May be overridden by 'hazelcast.metrics.jmx.enabled'
system property.
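A hedged metrics configuration sketch reflecting the switches described above; the management-center and jmx sub-elements mirror the member XML configuration and should be verified against this schema:

  <hz:metrics enabled="true" collection-frequency-seconds="5">
    <hz:management-center enabled="true" retention-seconds="5"/>
    <hz:jmx enabled="true"/>
  </hz:metrics>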