schema.infinispan-config-9.4.xsd
Defines the configuration for Infinispan: the cache manager configuration, the default cache, and named caches.
Defines JGroups stacks.
Defines the threading subsystem.
Defines an embedded cache container.
Defines JGroups transport stacks.
Defines an individual JGroups stack, pointing to the file containing its definition.
Name of the stack, to be referenced by the transport's stack attribute.
Path of JGroups configuration file containing stack definition.
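For orientation, a minimal sketch of how these pieces fit together; the stack name "tcp", the file path, and the cluster name are illustrative assumptions, not schema defaults:

```xml
<infinispan>
    <jgroups>
        <!-- Declare a stack by pointing at an external JGroups XML file.
             Name and path here are hypothetical. -->
        <stack-file name="tcp" path="jgroups-tcp.xml"/>
    </jgroups>
    <cache-container name="clustered">
        <!-- Reference the declared stack via the transport's stack attribute. -->
        <transport stack="tcp" cluster="demo-cluster"/>
    </cache-container>
</infinispan>
```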
Class that represents a network transport. Must implement org.infinispan.remoting.transport.Transport.
The threading subsystem, used to declare manageable thread pools and resources.
Overrides the transport characteristics for this cache container.
Configures security for this cache container.
Specifies how data serialization will be performed by the cache container.
Defines JMX management details.
Defines the global state persistence configuration. If this element is not present, global state persistence will be disabled.
Defines a LOCAL mode cache.
Defines a LOCAL mode cache configuration.
Defines a REPL_* mode cache.
Defines a REPL_* mode cache configuration.
Defines an INVALIDATION_* mode cache.
Defines an INVALIDATION_* mode cache configuration.
Defines a DIST_* mode cache.
Defines a DIST_* mode cache configuration.
Defines a SCATTERED_* mode cache. Since 9.0.
Defines a SCATTERED_* mode cache configuration. Since 9.0.
Uniquely identifies this cache container.
Unused XML attribute
Unused XML attribute
Indicates the default cache for this cache container
If 'true' then no data is stored in this node. Defaults to 'false'.
Unused XML attribute
Defines the executor used for asynchronous cache operations.
Defines the executor used for asynchronous cache listener notifications.
DEPRECATED: Defines the scheduled executor used for evictions. Use the expiration-executor type instead.
Defines the scheduled executor used for expirations.
Configuration for the executor service used when interacting with the persistent store.
Configuration for the executor service used when applying state from other nodes during the state transfer.
Unused XML attribute
Determines whether or not the cache container should collect statistics. Keep disabled for optimal performance.
Behavior of the JVM shutdown hook registered by the cache
Defines the JGroups stack used by the transport.
Defines the name for the underlying group communication cluster.
Defines the executor used for asynchronous transport communication.
Configuration for the executor service used to execute remote commands. Use org.infinispan.executors.WithinThreadExecutorFactory to disable.
Infinispan uses a distributed lock to maintain a coherent transaction log during state transfer or rehashing, which means that only one cache can be doing state transfer or rehashing at the same time.
This constraint is in place because more than one cache could be involved in a transaction.
This timeout controls the time to wait to acquire a distributed lock.
Name of the current node. This is a friendly name that makes logs and other output easier to read. Defaults to a combination of host name and a random number (to differentiate multiple nodes on the same host).
The id of the machine where this node runs.
The id of the rack where this node runs.
The id of the site where this node runs.
The minimum number of nodes that must join the cluster for the cache manager to start
The amount of time in milliseconds to wait for a cluster with sufficient nodes to form. Defaults to 60000
Configures the global authorization role to permission mapping. The presence of this element in the configuration implicitly enables authorization.
Uses the identity role mapper where principal names are converted as-is into role names.
Uses the common name role mapper which assumes principal names are in Distinguished Name format and extracts the Common Name to use as a role
Uses the cluster role mapper which stores the principal to role mappings within the cluster registry.
Uses a custom role mapper.
Class of the custom principal to role mapper
Defines a new role name and assigns permissions to it.
Defines the name of the role.
Defines the list of permissions for the role.
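A hedged sketch of a role-to-permission mapping; the role names and the choice of identity-role-mapper are illustrative, and the permission values correspond to the permission set enumerated near the end of this schema:

```xml
<cache-container name="secured">
    <security>
        <authorization>
            <!-- Map principal names to role names as-is. -->
            <identity-role-mapper/>
            <!-- Hypothetical roles with schema-defined permissions. -->
            <role name="admin" permissions="ALL"/>
            <role name="reader" permissions="READ BULK_READ"/>
        </authorization>
    </security>
</cache-container>
```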
Class of the audit logger.
AdvancedExternalizer provides an alternative way to supply externalizers for marshalling/unmarshalling user-defined classes; it overcomes the deficiencies of the more user-friendly externalizer definition model explained in Externalizer.
Class of the custom externalizer
Id of the custom externalizer
Fully qualified name of the marshaller to use. It must implement org.infinispan.marshall.StreamingMarshaller
Largest allowable version to use when marshalling internal state. Set this to the lowest version cache instance in your cluster to ensure compatibility of communications. However, setting this too low will mean you lose out on the benefit of improvements in newer versions of the marshaller.
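A minimal sketch of the serialization element; the externalizer class, its id, and the version value shown are hypothetical placeholders:

```xml
<cache-container>
    <!-- version pins the marshalling format to the oldest node's version. -->
    <serialization version="9.4">
        <!-- Class and id below are illustrative only. -->
        <advanced-externalizer class="com.example.PersonExternalizer" id="1000"/>
    </serialization>
</cache-container>
```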
A JMX property with name and value to be passed to the MBean Server lookup instance.
If JMX statistics are enabled then all 'published' JMX objects will appear under this name. This is optional; if not specified, an object name is created by default.
Class that will attempt to locate a JMX MBean server to bind to. Defaults to using the platform MBean server.
If true, multiple cache manager instances could be configured under the same configured JMX domain. Each cache manager will in practice use a different JMX domain that has been calculated based on the configured one by adding an incrementing index to it.
Defines the filesystem path where persistent state data that needs to survive container restarts is stored. The data stored at this location is required for graceful shutdown and restore. This path must NOT be shared among multiple instances. Defaults to the user.dir system property, which is usually the directory where the application was started. Override this value with a more appropriate location.
Defines the filesystem path where shared persistent state data that needs to survive container restarts is stored. This path can be safely shared among multiple instances. Defaults to the user.dir system property, which is usually the directory where the application was started. Override this value with a more appropriate location.
Defines the filesystem path where temporary state is stored. Defaults to the value of the java.io.tmpdir system property.
An immutable configuration storage.
A non-persistent configuration storage.
A persistent configuration storage which saves runtime configurations to the persistent-location.
A persistent configuration storage for managed environments. This doesn't work in embedded mode.
Uses a custom configuration storage implementation.
Class of the custom configuration storage implementation.
Ignored in embedded mode.
Defines the path where global state for this cache-container will be stored.
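Putting the global-state entries above together, a minimal sketch; the filesystem paths are illustrative assumptions and should be adapted:

```xml
<cache-container>
    <global-state>
        <!-- Node-private state; must NOT be shared between instances. -->
        <persistent-location path="/var/lib/infinispan/state"/>
        <!-- State that may safely be shared among instances. -->
        <shared-persistent-location path="/mnt/shared/infinispan"/>
        <temporary-location path="/tmp/infinispan"/>
    </global-state>
</cache-container>
```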
The state transfer configuration for distribution and replicated caches.
Configures this cache as a backup for a remote cache.
The name of the remote cache that backs up data here.
The name of the remote site containing the cache that backs up data here.
The cache encoding configuration.
The locking configuration of the cache.
The cache transaction configuration.
The cache eviction configuration.
The cache expiration configuration.
Deprecated. The cache compatibility mode configuration.
Configures the cache to store data in binary format.
Configures the cache's persistence layer.
Controls whether entries are versioned. Versioning is necessary, for example, when using optimistic transactions in a clustered environment, to be able to perform write-skew checks.
Controls the data container for the cache.
Controls how the entries are stored in memory
Defines indexing options for the cache.
Defines the indexed entity classes
Indexed entity class name
Property to pass on to the indexing system
The indexing mode of the cache. Defaults to NONE.
Whether or not to apply automatic index configuration based on cache type
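A hedged sketch of an indexing configuration; the entity class com.example.Book and the directory_provider property value are illustrative assumptions:

```xml
<local-cache name="books">
    <indexing index="LOCAL" auto-config="true">
        <indexed-entities>
            <!-- Hypothetical indexed entity class. -->
            <indexed-entity>com.example.Book</indexed-entity>
        </indexed-entities>
        <!-- Properties are passed through to the indexing engine. -->
        <property name="default.directory_provider">local-heap</property>
    </indexing>
</local-cache>
```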
Configures custom interceptors to be added to the cache.
Configures cache-level security.
Uniquely identifies this cache within its cache container.
The name of the cache configuration which this configuration inherits from.
Unused XML attribute
Unused XML attribute
Unused XML attribute
Determines whether or not the cache should collect statistics. Keep disabled for optimal performance.
If set to false, statistics gathering cannot be enabled during runtime. Keep disabled for optimal performance.
Deprecated since 9.0, deadlock detection is always disabled.
Specifies whether Infinispan is allowed to disregard the Map contract when providing return values for org.infinispan.Cache#put(Object, Object) and org.infinispan.Cache#remove(Object) methods.
This cache will use an optimized (faster) implementation that does not support transactions/invocation batching, persistence, custom interceptors, indexing, store-as-binary or compatibility. This type of cache also does not support Map-Reduce jobs or the Distributed Executor framework.
Sets the cache locking isolation level. Infinispan only supports READ_COMMITTED or REPEATABLE_READ isolation level.
If true, a pool of shared locks is maintained for all entries that need to be locked. Otherwise, a lock is created per entry in the cache. Lock striping helps control memory footprint but may reduce concurrency in the system.
Maximum time to attempt a particular lock acquisition.
Concurrency level for lock containers. Adjust this value according to the number of concurrent threads interacting with Infinispan.
(Deprecated) This setting is only applicable in the case of REPEATABLE_READ. When the write skew check is set to false, if the writer at commit time discovers that the working entry and the underlying entry have different versions, the working entry will overwrite the underlying entry. If true, such a version conflict, known as a write skew, will throw an Exception. Defaults to false.
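Combining the locking attributes above, a minimal sketch; the timeout and concurrency values are illustrative, not defaults:

```xml
<local-cache name="hot-keys">
    <!-- acquire-timeout is in milliseconds; striping="false" creates
         one lock per entry instead of a shared lock pool. -->
    <locking isolation="REPEATABLE_READ"
             striping="false"
             acquire-timeout="15000"
             concurrency-level="100"/>
</local-cache>
```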
Sets the cache transaction mode to one of NONE, BATCH, NON_XA, NON_DURABLE_XA, FULL_XA.
If there are any ongoing transactions when a cache is stopped, Infinispan waits for ongoing remote and local transactions to finish. The amount of time to wait for is defined by the cache stop timeout.
The locking mode for this cache, one of OPTIMISTIC or PESSIMISTIC.
Configures the transaction manager lookup directly using an instance of TransactionManagerLookup. Setting this marks the cache as transactional.
The duration (millis) in which to keep information about the completion of a transaction. Defaults to 60000.
The time interval (millis) at which the thread that cleans up transaction completion information kicks in. Defaults to 30000.
If the cache is transactional and transactionAutoCommit is enabled, then for single-operation transactions the user doesn't need to manually start a transaction; one is injected by the system. Defaults to true.
Configures the commit protocol to use.
Sets the name of the cache where recovery related information is held. The cache's default name is "__recoveryInfoCacheName__"
Enables or disables triggering transactional notifications on cache listeners. Enabled by default.
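A sketch of a transactional cache using the attributes above; the chosen mode and timeout values are illustrative:

```xml
<local-cache name="tx-cache">
    <transaction mode="NON_XA"
                 locking="OPTIMISTIC"
                 stop-timeout="30000"
                 auto-commit="true"
                 notifications="true"/>
</local-cache>
```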
Describes the content-type and encoding.
Defines content type and encoding for keys and values of the cache.
DEPRECATED: please use the memory element instead
Sets the cache eviction strategy. Available options are 'UNORDERED', 'FIFO', 'LRU', 'LIRS' and 'NONE' (to disable eviction).
Deprecated since 8.1. Use the size attribute instead.
Threading policy for eviction. Defaults to using the DEFAULT eviction policy.
Specifies whether to use entry count or memory-based approximation to decide when to evict entries.
Maximum size to use for eviction. When using the COUNT type, this is the maximum number of entries in a cache instance. When using the MEMORY threshold policy, this is the maximum number of allocated bytes used by a cache's data container. A value of -1 means no limit. This is currently limited to 2^48 - 1 in size.
Maximum idle time a cache entry will be maintained in the cache, in milliseconds. If the idle time is exceeded, the entry will be expired cluster-wide. -1 means the entries never expire.
Maximum lifespan of a cache entry, after which the entry is expired cluster-wide, in milliseconds. -1 means the entries never expire.
Interval (in milliseconds) between subsequent runs to purge expired entries from memory and any cache stores. If you wish to disable the periodic eviction process altogether, set interval to -1.
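For example, a cache whose entries expire 30 minutes after creation or after 10 minutes of inactivity, with the expiration reaper running once per minute (all values in milliseconds, chosen for illustration):

```xml
<local-cache name="session-data">
    <expiration lifespan="1800000" max-idle="600000" interval="60000"/>
</local-cache>
```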
A marshaller to use for compatibility conversions.
Controls whether when stored in memory, keys and values are stored as references to their original objects, or in a serialized, binary format. There are benefits to both approaches, but often if used in a clustered mode, storing objects as binary means that the cost of serialization happens early on, and can be amortized. Further, deserialization costs are incurred lazily which improves throughput. It is possible to control this on a fine-grained basis: you can choose to just store keys or values as binary, or both.
DEPRECATED: please use the memory element instead
Specify whether keys are stored as binary or not. Enabled by default if the "enabled" attribute is set to true.
Specify whether values are stored as binary or not. Enabled by default if the "enabled" attribute is set to true.
Defines a cluster cache loader.
Defines a custom cache store.
Defines a file-based cache store.
If true, data is only written to the cache store when it is evicted from memory, a phenomenon known as 'passivation'. Next time the data is requested, it will be 'activated' which means that data will be brought back to memory and removed from the persistent store. This gives you the ability to 'overflow' to disk, similar to swapping in an operating system. If false, the cache store contains a copy of the contents in memory, so writes to cache result in cache store writes. This essentially gives you a 'write-through' configuration. Defaults to false.
The maximum number of unsuccessful attempts to start each of the configured CacheWriters/CacheLoaders before an exception is thrown and the cache fails to start.
The time, in milliseconds, to wait between subsequent connection
attempts on startup. A negative or zero value means no wait between
connection attempts.
The time, in milliseconds, between availability checks to determine
if the PersistenceManager is available. In other words, this interval
sets how often stores/loaders are polled via their
`org.infinispan.persistence.spi.CacheWriter#isAvailable`
or `org.infinispan.persistence.spi.CacheLoader#isAvailable`
implementation. If a single store/loader is not available,
an exception is thrown during cache operations.
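A hedged sketch tying the persistence attributes above to a file store; the store path and numeric values are illustrative assumptions:

```xml
<local-cache name="overflow-to-disk">
    <!-- passivation="true": entries are written to the store only when
         evicted from memory, and re-activated on the next read. -->
    <persistence passivation="true"
                 connection-attempts="10"
                 connection-interval="50"
                 availability-interval="1000">
        <file-store path="/var/lib/infinispan/store" preload="false" purge="false"/>
    </persistence>
</local-cache>
```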
Dictates that the custom interceptor appears immediately after the specified interceptor. If the specified interceptor is not found in the interceptor chain, a ConfigurationException will be thrown when the cache starts.
Dictates that the custom interceptor appears immediately before the specified interceptor. If the specified interceptor is not found in the interceptor chain, a ConfigurationException will be thrown when the cache starts.
A fully qualified class name of the new custom interceptor to add to the configuration.
Specifies a position in the interceptor chain to place the new interceptor. The index starts at 0 and goes up to the number of interceptors in a given configuration. A ConfigurationException is thrown if the index is less than 0 or greater than the maximum number of interceptors in the chain.
Specifies a position where to place the new interceptor. Allowed values are FIRST, LAST, and OTHER_THAN_FIRST_OR_LAST
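A minimal sketch of interceptor placement; com.example.AuditInterceptor is a hypothetical custom interceptor class:

```xml
<local-cache name="audited">
    <custom-interceptors>
        <!-- Place the hypothetical interceptor first in the chain. -->
        <interceptor class="com.example.AuditInterceptor" position="FIRST"/>
    </custom-interceptors>
</local-cache>
```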
Configures authorization for this cache.
Enables authorization checks for this cache. Defaults to true if the authorization element is present.
Sets the valid roles required to access this cache.
(Deprecated) The scheme to use when versioning entries. Can be either SIMPLE or NONE. Defaults to NONE
Properties passed to the data container
DEPRECATED: this is to be removed in a future release
Fully qualified class name of the data container to use
DEPRECATED: this is to be removed in a future release
Fully qualified class name of the Equivalence class to use for keys stored in the cache, which provides custom ways to compare cached keys.
DEPRECATED: this is to be removed in a future release
Fully qualified class name of the Equivalence class to use for values stored in the cache, which provides custom ways to compare cached values.
Store keys and values as instance variables. Instances of byte[] will be wrapped to ensure equality.
Store keys and values as byte[] instances. Key and value will be serialized to binary representations.
Store keys and values as byte[] off of the Java heap. Keys and values will be serialized to binary representations and stored in native memory so as not to take up Java heap. Temporary objects are put onto the Java heap until processing is completed.
The size of the eviction cache as a long. Limits the cache by the number of entries it contains.
The eviction strategy to use. This determines whether eviction is enabled and which variant is applied.
The size of the eviction cache as a long. If the configured type is COUNT, this is how many entries can be stored. If the configured type is MEMORY, this is how much memory in bytes can be stored.
The eviction type to use, either COUNT or MEMORY. COUNT limits the cache by the number of entries. MEMORY limits the cache by how much memory the entries use.
The eviction strategy to use. This determines whether eviction is enabled and which variant is applied.
The size of the eviction cache as a long. If the configured type is COUNT, this is how many entries can be stored. If the configured type is MEMORY, this is how much memory in bytes can be stored.
The eviction type to use, either COUNT or MEMORY. COUNT limits the cache by the number of entries. MEMORY limits the cache by how much memory the entries use.
The eviction strategy to use. This determines whether eviction is enabled and which variant is applied.
How many address pointers to use. This number will be rounded up to a power of two. For optimal performance you will want more address pointers than you expect to have entries; this is similar to the size of the array backing a hash map. Without collisions, lookups and writes will be constant time. Each pointer takes up 8 bytes of memory, so the default will use 8 MB of off-heap memory.
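A sketch of an off-heap bounded cache using the attributes above; the size and address-count values are illustrative:

```xml
<local-cache name="native-cache">
    <memory>
        <!-- Bound the data container to roughly 100 MB of native memory;
             2^20 address pointers consume a further 8 MB off-heap. -->
        <off-heap size="104857600" eviction="MEMORY" address-count="1048576"/>
    </memory>
</local-cache>
```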
Configures the way this cache reacts to node crashes and split brains.
Deprecated, use type instead. Enable/disable the partition handling functionality. Defaults to false.
The type of actions that are possible when a split brain scenario is encountered.
The entry merge policy which should be applied on partition merges.
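A minimal sketch of partition handling on a distributed cache; the chosen values favour consistency over availability and are illustrative:

```xml
<distributed-cache name="consistent-data" owners="2">
    <!-- Deny reads and writes for segments that lost an owner; drop
         conflicting entries when the partitions merge. -->
    <partition-handling when-split="DENY_READ_WRITES" merge-policy="REMOVE_ALL"/>
</distributed-cache>
```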
Sets the clustered cache mode, ASYNC for asynchronous operation, or SYNC for synchronous operation.
In SYNC mode, the timeout (in ms) used to wait for an acknowledgment when making a remote call, after which the call is aborted and an exception is thrown.
The state transfer configuration for distribution and replicated caches.
Sets the number of hash space segments per cluster. The default value is 256. The value should be at least 20 * the cluster size.
The factory to use for generating the consistent hash.
Must implement `org.infinispan.distribution.ch.ConsistentHashFactory`.
E.g. `org.infinispan.distribution.ch.impl.SyncConsistentHashFactory` can be used to guarantee
that multiple distributed caches use exactly the same consistent hash, which for performance
reasons is not guaranteed by the default consistent hash factory instance used.
The name of the key partitioner class.
Must implement `org.infinispan.distribution.ch.KeyPartitioner`.
A custom key partitioner can be used as an alternative to grouping, to guarantee that some keys
are located in the same segment (and thus their primary owner is the same node).
Since 8.2.
The state transfer configuration for distribution and replicated caches.
Configures grouping of data.
Number of cluster-wide replicas for each cache entry.
Sets the number of hash space segments per cluster. The default value is 256. The value should be at least 20 * the cluster size.
Controls the proportion of entries that will reside on the local node,
compared to the other nodes in the cluster. Value must be positive. The default is 1
Maximum lifespan in milliseconds of an entry placed in the L1 cache.
By default L1 is disabled unless a positive value is configured for this attribute.
If the attribute is not present, L1 is disabled.
Controls how often a cleanup task to prune L1 tracking data is run. Defaults to 10 minutes.
Controls the proportion of entries that will reside on the local node, compared to the other nodes
in the cluster. This is just a suggestion, there is no guarantee that a node with a capacity
factor of 2 will have twice as many entries as a node with a capacity factor of 1.
The factory to use for generating the consistent hash.
Must implement `org.infinispan.distribution.ch.ConsistentHashFactory`.
E.g. `org.infinispan.distribution.ch.impl.SyncConsistentHashFactory` can be used to guarantee
that multiple distributed caches use exactly the same consistent hash, which for performance
reasons is not guaranteed by the default consistent hash factory instance used.
The name of the key partitioner class.
Must implement `org.infinispan.distribution.ch.KeyPartitioner`.
A custom key partitioner can be used as an alternative to grouping, to guarantee that some keys
are located in the same segment (and thus their primary owner is the same node).
Since 8.2.
The state transfer configuration for distribution and replicated caches.
Configures grouping of data.
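Drawing the distributed-cache attributes above together, a hedged sketch; every value shown is illustrative rather than a schema default:

```xml
<distributed-cache name="dist"
                   mode="SYNC"
                   owners="2"
                   segments="256"
                   capacity-factor="1.0"
                   l1-lifespan="60000"
                   remote-timeout="17500"/>
```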
Number of hash space segments (per cluster). The default value is 256, and should be at least 20 * cluster size.
Controls the proportion of entries that will reside on the local node, compared to the other nodes
in the cluster. This is just a suggestion, there is no guarantee that a node with a capacity
factor of 2 will have twice as many entries as a node with a capacity factor of 1.
The factory to use for generating the consistent hash.
Must implement `org.infinispan.distribution.ch.ConsistentHashFactory`.
E.g. `org.infinispan.distribution.ch.impl.SyncConsistentHashFactory` can be used to guarantee
that multiple distributed caches use exactly the same consistent hash, which for performance
reasons is not guaranteed by the default consistent hash factory instance used.
The name of the key partitioner class.
Must implement `org.infinispan.distribution.ch.KeyPartitioner`.
A custom key partitioner can be used as an alternative to grouping, to guarantee that some keys
are located in the same segment (and thus their primary owner is the same node).
Since 8.2.
Threshold for sending batch invalidations. Once a node registers more updated keys than this threshold, it sends a batch invalidation to all nodes requesting removal of old versions of the entries. The threshold is also used for the second batch invalidation of tombstones for removed entries.
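A one-line sketch of a scattered cache using this threshold; the attribute values are illustrative assumptions:

```xml
<!-- After 128 updated keys are registered, a batch invalidation is sent. -->
<scattered-cache name="scattered" invalidation-batch-size="128" segments="256"/>
```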
A cache loader property with name and value.
Unused XML attribute.
This setting should be set to true when multiple cache instances share the same cache store (e.g., multiple nodes in a cluster using a JDBC-based CacheStore pointing to the same, shared database.) Setting this to true avoids multiple cache instances writing the same modification multiple times. If enabled, only the node where the modification originated will write to the cache store. If disabled, each individual cache reacts to a potential remote update by storing the data to the cache store.
If true, when the cache starts, data stored in the cache store will be pre-loaded into memory. This is particularly useful when data in the cache store will be needed immediately after startup and you want to avoid cache operations being delayed as a result of loading this data lazily. Can be used to provide a 'warm cache' on startup; however, there is a performance penalty, as startup time is affected by this process.
The timeout when performing remote calls.
Configures a cache store as write-behind instead of write-through.
A cache store property with name and value.
This setting should be set to true when multiple cache instances share the same cache store (e.g., multiple nodes in a cluster using a JDBC-based CacheStore pointing to the same, shared database.) Setting this to true avoids multiple cache instances writing the same modification multiple times. If enabled, only the node where the modification originated will write to the cache store. If disabled, each individual cache reacts to a potential remote update by storing the data to the cache store.
This setting should be set to true when the underlying cache store supports transactions and it is desirable for the underlying store and the cache to remain synchronized. With this enabled, any Exceptions thrown whilst writing to the underlying store will result in both the store's and the cache's transactions rolling back.
If true, when the cache starts, data stored in the cache store will be pre-loaded into memory. This is particularly useful when data in the cache store will be needed immediately after startup and you want to avoid cache operations being delayed as a result of loading this data lazily. Can be used to provide a 'warm cache' on startup; however, there is a performance penalty, as startup time is affected by this process.
If true, fetch persistent state when joining a cluster. If multiple cache stores are chained, only one of them can have this property enabled.
If true, purges this cache store when it starts up.
If true, the singleton store cache store is enabled. SingletonStore is a delegating cache store used for situations when only one instance in a cluster should interact with the underlying store. Deprecated: A shared store should be used instead, as this limits store writes to the primary owner of a key
If true, the cache store will only be used to load entries. Any modifications made to the caches will not be applied to the store.
The maximum size of a batch to be inserted/deleted from the store. If the value is less than one, then no upper limit is placed on the number of operations in a batch.
Maximum number of entries in the asynchronous queue. When the queue is full, the store becomes write-through until it can accept new entries.
Size of the thread pool whose threads are responsible for applying the modifications to the cache store.
If true, the async store attempts to perform write operations only
as many times as configured with `connection-attempts` in the
PersistenceConfiguration. If all attempts fail, the errors are
ignored and the write operations are not executed on the store.
If false, write operations that fail are attempted again when the
underlying store becomes available. If the modification queue becomes
full before the underlying store becomes available, an error is
thrown on all future write operations to the store until the
modification queue is flushed. The modification queue is not
persisted. If the underlying store does not become available before
the Async store is stopped, queued modifications are lost.
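A hedged sketch of a write-behind store combining the attributes above; the path, queue and pool sizes are illustrative:

```xml
<persistence>
    <file-store path="/var/lib/infinispan/store">
        <!-- Queue up to 1024 pending writes; when full, the store degrades
             to write-through until the queue drains. fail-silently="false"
             retries failed writes when the store becomes available again. -->
        <write-behind modification-queue-size="1024"
                      thread-pool-size="1"
                      fail-silently="false"/>
    </file-store>
</persistence>
```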
The class name of the cache store implementation.
Sets the maximum number of in-memory mappings between keys and their position in the store. Normally this is unlimited, but to avoid excess memory usage, an upper bound can be configured. If this limit is exceeded, entries are removed permanently using the LRU algorithm from both the in-memory index and the underlying file-based cache store. Warning: setting this value may cause data loss.
Unused XML attribute
The path within "relative-to" in which to store the cache state.
If undefined, the path defaults to the cache container name.
The hostname or IP address of a remote Hot Rod server.
The port on which the server is listening (default 11222)
Unused XML attribute.
If enabled, this will cause the cache to ask neighboring caches for state when it starts up, so the cache starts 'warm', although it will impact startup time.
The maximum amount of time (ms) to wait for state from neighboring caches, before throwing an exception and aborting startup.
The number of cache entries to batch in each transfer.
If enabled, this will cause the cache to wait for initial state transfer to complete before responding to requests.
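Combining the state-transfer attributes above, a minimal sketch with illustrative values:

```xml
<replicated-cache name="warm">
    <!-- Fetch state on join in 512-entry chunks and block availability
         until the initial transfer completes. -->
    <state-transfer enabled="true"
                    timeout="240000"
                    chunk-size="512"
                    await-initial-transfer="true"/>
</replicated-cache>
```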
The class to use to group keys. Must implement org.infinispan.distribution.group.Grouper.
Enables or disables grouping.
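A short sketch of grouping; com.example.OrderGrouper is a hypothetical Grouper implementation:

```xml
<distributed-cache name="grouped">
    <groups enabled="true">
        <!-- Hypothetical class implementing
             org.infinispan.distribution.group.Grouper. -->
        <grouper class="com.example.OrderGrouper"/>
    </groups>
</distributed-cache>
```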
Configures a specific site where this cache backs up data.
Determines whether this backup is taken offline (ignored) after a certain number of tries.
Configures the properties needed to transfer the state for this site.
If > 0, the state will be transferred in batches of chunk-size cache entries. If <= 0, the state will be transferred all at once, which is not recommended. Defaults to 512.
The time (in milliseconds) to wait for the backup site to acknowledge that the state chunk was received and applied. Default value is 20 minutes.
The maximum number of retries when a push state command fails. A value <= 0 (zero) means that the command will not retry. Default value is 30.
The waiting time (in milliseconds) between each retry. The value should be > 0 (zero). Default value is 2 seconds.
Name of the remote site where this cache backs up data.
The strategy used for backing up data: "SYNC" or "ASYNC". Defaults to "ASYNC"
Determines what the system does in case of failure during backup. Defaults to "WARN"
The timeout (millis) to be used when backing up data remotely. Defaults to 10 secs.
If 'false' then no data is backed up to this site. Defaults to 'true'.
Configures whether the replication happens in a 1PC or 2PC when using SYNC backup strategy. Defaults to "${Backup.useTwoPhaseCommit}".
CacheConfigurationException is thrown when used with ASYNC backup strategy.
If the 'backupFailurePolicy' is set to 'CUSTOM' then this attribute is required and should contain the fully qualified name of a class implementing org.infinispan.xsite.CustomFailurePolicy.
The number of failed request operations after which this site should be taken offline. Defaults to 0 (never). A negative value would mean that the site will be taken offline after 'min-wait'.
The minimum number of millis to wait before taking this site offline, even when 'after-failures' is reached. If less than or equal to 0, then only 'after-failures' is considered.
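Pulling the cross-site attributes above into one hedged sketch; the site name "LON" and all numeric values are illustrative, and the site must match one defined in the JGroups RELAY2 configuration:

```xml
<distributed-cache name="geo-replicated">
    <backups>
        <backup site="LON" strategy="SYNC" failure-policy="WARN" timeout="12000">
            <!-- Take the site offline after 100 failures, but wait at
                 least 60 seconds before doing so. -->
            <take-offline after-failures="100" min-wait="60000"/>
            <!-- Cross-site state transfer tuning. -->
            <state-transfer chunk-size="512"
                            timeout="1200000"
                            max-retries="30"
                            wait-time="2000"/>
        </backup>
    </backups>
</distributed-cache>
```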
Defines the name of a property.
No locking isolation will be performed. This is only valid in local mode. In clustered mode, READ_COMMITTED will be used instead.
Unsupported. Actually configures READ_COMMITTED
Read committed is an isolation level that guarantees that any data read is committed at the moment it is read. However, depending on the outcome of other transactions, successive reads may return different results
Repeatable read is an isolation level that guarantees that any data read is committed at the moment it is read and that, within a transaction, successive reads will always return the same data.
Unsupported. Actually configures REPEATABLE_READ
Cache will not enlist within transactions.
Uses batching to group cache operations together.
Cache will enlist within transactions as a javax.transaction.Synchronization
Cache will enlist within transactions as a javax.transaction.xa.XAResource, without recovery.
Cache will enlist within transactions as a javax.transaction.xa.XAResource, with recovery.
Do not index data. This is the default.
Only index changes made locally, ignoring remote changes. This is useful if indexes are shared across a cluster to prevent redundant indexing of updates.
Index all data
Only index changes on the primary owner, regardless of whether it is local or remote.
Never evict entries. This is the default.
Eviction will be performed manually. Equivalent internally to NONE.
Eviction will be performed automatically to ensure that "older" entries are removed to make room
for new entries.
Eviction is not performed; instead, when the container is full, exceptions prevent new entries from being written to the container. This cache must be transactional and must not allow commit optimizations that would prevent it from performing a two-phase commit.
Deprecated. Activates REMOVE policy.
Deprecated. Activates REMOVE policy.
Deprecated. Activates REMOVE policy.
Deprecated. Activates REMOVE policy.
Enables asynchronous mode.
Enables synchronous mode.
Enables Optimistic locking.
Enables Pessimistic locking.
A list of aliases.
Ignore backup failures.
Warn of backup failures.
Fail local operations when a backup failure occurs.
Invoke a user-specified failure policy (set via the failure-policy-class attribute)
Use the default shutdown hook behaviour (REGISTER)
Register a shutdown hook
Don't register a shutdown hook
Enumeration containing the available commit protocols
Fires the eviction events from the same thread which is performing the eviction
Use the default eviction listener thread policy (PIGGYBACK)
Evicts entries from the cache when a specified count has been reached.
Evicts entries from the cache when a specified memory usage has been reached. Memory usage is computed using an approximation which is tailored for the HotSpot VM. This can only be used when both keys and values are stored as byte arrays. To guarantee that only byte arrays are stored, it is recommended to run with store-as-binary for both keys and values.
A simple versioning scheme that is cluster-aware
Don't version entries
Allows control of a cache's lifecycle (i.e. starting and stopping a cache)
Allows reading data from a cache
Allows writing data to a cache
Allows performing task execution (e.g. distributed executors, map/reduce) on a cache
Allows attaching listeners to a cache
Allows bulk-read operations (e.g. obtaining all the keys in a cache)
Allows bulk-write operations (e.g. clearing a cache)
Allows performing "administrative" operations on a cache
Aggregate permission which implies all of the others
Aggregate permission which implies all read permissions (READ and BULK_READ)
Aggregate permission which implies all write permissions (WRITE and BULK_WRITE)
Permission which means no permissions
If the partition does not have all owners for a given segment, both reads and writes are denied for all keys in that segment.
Allows reads for a given key if it exists in this partition, but only allows writes if this partition contains all owners of a segment.
Allow entries on each partition to diverge, with conflicts resolved during merge.
Do not attempt to resolve conflicts on merge.
Always utilise the entry located in the preferred partition.
Utilise entries from the preferred partition if non-null, otherwise utilise entries from the other partition.
If a conflict is encountered for a given key, remove all versions of that key.