infinispan-config-15.1.xsd
Defines the configuration for Infinispan, for the cache manager configuration, for the default cache, and for named caches.
Defines JGroups stacks.
Defines the threading subsystem.
Defines an embedded cache container.
Defines JGroups transport stacks.
Defines an individual JGroups stack, pointing to the file containing its definition.
Name of the stack, to be referenced by transport's stack attribute.
Path of JGroups configuration file containing stack definition.
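A stack declared via stack-file and referenced from a transport might look like the following sketch (the stack name, file path, and cluster name are illustrative):

```xml
<infinispan>
    <jgroups>
        <!-- path points to a standard JGroups XML file; both values are examples -->
        <stack-file name="tcp-custom" path="jgroups-tcp.xml"/>
    </jgroups>
    <cache-container name="default">
        <!-- the transport references the stack by name -->
        <transport stack="tcp-custom" cluster="example-cluster"/>
    </cache-container>
</infinispan>
```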
Class that represents a network transport. Must implement org.infinispan.remoting.transport.Transport.
Defines the relay configuration.
Name of the stack, to be referenced by transport's stack attribute.
The base stack to extend.
Defines the name of the JGroups stack to be used by default when connecting to remote sites.
Defines the default cluster name for remote clusters.
Defines the name of the remote site.
Defines the name of the JGroups stack to use to connect to the remote site. If unspecified, the default-stack will be used.
Defines the name for the underlying group communication cluster. If unspecified, the remote-sites.cluster name will be used.
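Putting the relay elements together, a cross-site stack might be sketched as follows (site names LON and NYC, and the base stacks, are illustrative):

```xml
<jgroups>
    <stack name="xsite" extends="udp">
        <!-- the inlined JGroups RELAY2 protocol names the local site -->
        <relay.RELAY2 site="LON" xmlns="urn:org:jgroups"/>
        <remote-sites default-stack="tcp">
            <remote-site name="NYC"/>
        </remote-sites>
    </stack>
</jgroups>
```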
The threading subsystem, used to declare manageable thread pools and resources.
Overrides the transport characteristics for this cache container.
Configures security for this cache container.
Specifies how data serialization will be performed by the cache container.
Configures metrics.
Configures tracing; spans can be collected by an OpenTelemetry collector.
Enables and configures JMX monitoring and management.
Defines the global state persistence configuration. If this element is not present, global state persistence will be disabled.
Defines a LOCAL mode cache.
Defines a LOCAL mode cache configuration.
Defines a REPL_* mode cache.
Defines a REPL_* mode cache configuration.
Defines an INVALIDATION_* mode cache.
Defines an INVALIDATION_* mode cache configuration.
Defines a DIST_* mode cache.
Defines a DIST_* mode cache configuration.
Uniquely identifies this cache container.
Unused XML attribute
Indicates the default cache for this cache container
If 'true' then no data is stored in this node. Defaults to 'false'.
Unused XML attribute
Defines the executor used for asynchronous cache listener notifications.
Defines the scheduled executor used for expirations.
The name of the executor used for non-blocking operations. Must be non-blocking and must have a queue.
The name of the executor used for blocking operations. Must be blocking and must have a queue.
Unused XML attribute
Determines whether or not the cache container should collect statistics. Keep disabled for optimal performance.
Behavior of the JVM shutdown hook registered by the cache
A transport property with name and value to be passed to the Transport instance.
Defines the jgroups stack used by the transport.
Defines the name for the underlying group communication cluster.
Infinispan uses a distributed lock to maintain a coherent transaction log during state transfer or rehashing, which means that only one cache can be doing state transfer or rehashing at the same time.
This constraint is in place because more than one cache could be involved in a transaction.
This timeout controls the time to wait to acquire a distributed lock.
Specifies the name of the current node. Defaults to a combination of the host name and a random number to differentiate between multiple nodes on the same host.
Specifies the ID of the machine where the node runs. When a single physical machine hosts multiple JVM instances, the machine ID ensures that data is distributed across different physical hosts.
Specifies the ID of the rack where the node runs. The rack ID ensures that the copies of data are stored on different rack than the original data.
Specifies the ID of the site where the node runs. In clusters with nodes in multiple physical locations, the site ID ensures that backup copies of data are stored across different locations.
The minimum number of nodes that must join the cluster for the cache manager to start.
The amount of time, in milliseconds, to wait for a cluster with sufficient nodes to form. Defaults to 60000.
The space-separated list of RAFT members.
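The transport attributes described above might be combined as in this sketch (all names and values are illustrative, and raft-members only applies when RAFT-based features are in use):

```xml
<cache-container>
    <transport cluster="prod-cluster"
               stack="tcp"
               node-name="node-a"
               machine="machine-1"
               rack="rack-1"
               site="LON"
               initial-cluster-size="3"
               initial-cluster-timeout="60000"
               raft-members="node-a node-b node-c"/>
</cache-container>
```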
Configures the global authorization role to permission mapping. The presence of this element in the configuration implicitly enables authorization.
Stores role to permission mappings in the cluster registry.
This role mapper is enabled by default in the server configuration.
Configures a custom permission mapper implementation.
Specifies the class name of a custom implementation that maps roles to permissions.
Uses the identity role mapper where principal names are converted as-is into role names.
Uses the common name role mapper which assumes principal names are in Distinguished Name format and extracts the Common Name to use as a role.
Stores the principal to role mappings in the cluster registry.
This role mapper is enabled by default in the server configuration.
Configures a custom role mapper.
Specifies the class name of a custom principal to role mapper implementation.
Class of the audit logger.
Specifies whether principal-to-role mapping applies only to group principals or also to user principals. Defaults to true.
The ACL cache size. The default is 1000 entries. A value of 0 disables the cache.
The ACL cache timeout in milliseconds. The default is 300000 (5 minutes). A value of 0 disables the cache.
Specifies a name rewriter that transforms names returned by a realm.
Specifies the base type for all PrincipalTransformer definitions.
Deprecated. Will be ignored.
Specifies a PrincipalTransformer definition using regular expressions and Matcher based replacement.
Specifies the regular expression for this PrincipalTransformer.
Specifies the replacement string for the PrincipalTransformer.
Replaces all occurrences instead of the first occurrence.
Defines a new role name and assigns permissions to it.
Defines the name of the role.
Defines the description of the role.
Defines the list of permissions for the role.
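A minimal authorization configuration using a role mapper and explicit role definitions might look like this sketch (role names and permissions are examples):

```xml
<cache-container>
    <security>
        <authorization>
            <!-- principal names map directly to role names -->
            <identity-role-mapper/>
            <role name="admin" permissions="ALL"/>
            <role name="reader" permissions="READ" description="Read-only access"/>
        </authorization>
    </security>
</cache-container>
```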
Deprecated since 10.0. Use ProtoStream-based marshalling for your Java objects by
configuring one or more context-initializer elements. Alternatively, you can configure a custom
org.infinispan.commons.marshall.Marshaller implementation for user types via the "marshaller" attribute.
AdvancedExternalizer implementations give users fine-grained control over how Java objects are
serialized, providing smaller payloads than traditional Serialization or Externalizer implementations by
writing a class ID value instead of the class's FQN.
Class of the custom externalizer
Id of the custom externalizer
SerializationContextInitializer implementation which is used to initialize a ProtoStream based marshaller
for user types. If no <context-initializer> elements are present then the java.util.ServiceLoader
mechanism will be used to automatically discover all SerializationContextInitializer implementations present
in classpath and load them.
Class of the SerializationContextInitializer implementation
Enables individual classes or regular expressions to be added to the EmbeddedCacheManager allow list.
FQN of the class to be added to the allowlist.
Regex pattern used to determine if a class is a member of the allowlist.
Fully qualified name of the marshaller to use. It must implement org.infinispan.marshall.StreamingMarshaller.
Largest allowable version to use when marshalling internal state. Set this to the lowest version cache instance in your cluster to ensure compatibility of communications. However, setting this too low will mean you lose out on the benefit of improvements in newer versions of the marshaller.
The kind of compatibility validation that is performed when updating schemas.
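A serialization section combining a context initializer with an allow list might be sketched as follows (the class names are hypothetical placeholders):

```xml
<cache-container>
    <serialization>
        <!-- registers a ProtoStream SerializationContextInitializer for user types -->
        <context-initializer class="org.example.LibraryInitializerImpl"/>
        <allow-list>
            <class>org.example.Person</class>
            <regex>org.example.model.*</regex>
        </allow-list>
    </serialization>
</cache-container>
```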
Exports gauge metrics. Gauges are enabled by default but you must
enable statistics so that they are exported.
Exports histogram metrics. Histograms are not enabled by default
because they require additional computation. If you enable histograms
you must also enable statistics so that they are exported.
Specifies a global name prefix for metrics.
Puts the cache manager name and cache name in tags rather than including them in the metric name.
Enables accurate size computation for numberOfEntries statistics. Note that this does not affect invocations of
the Cache#size() method.
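These metrics options could be combined as in the following sketch; note that statistics must also be enabled for gauges and histograms to be exported:

```xml
<cache-container statistics="true">
    <metrics gauges="true" histograms="false" names-as-tags="true" accurate-size="false"/>
</cache-container>
```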
Enables tracing collection and defines the collector that receives the spans created by Infinispan.
The value must be a valid URL containing the protocol,
the address, and the port of the remote receiving process.
E.g., http://otlp-collector-host:4317.
Tracing is enabled by default if "collector-endpoint" is defined.
By default, tracing spans are exported using OTLP (OpenTelemetry Protocol).
This protocol can be changed, but in that case an extra exporter dependency must be added.
The service name used by tracing to identify the server process.
If true, security events are traced. Defaults to false.
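A tracing configuration with an OTLP collector endpoint might be sketched as follows (the endpoint and service name are illustrative):

```xml
<cache-container>
    <tracing collector-endpoint="http://otlp-collector-host:4317"
             service-name="infinispan-server"
             security="false"/>
</cache-container>
```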
Specifies a JMX property with a name and value that is passed to
the MBean Server lookup instance.
If JMX statistics are enabled then all 'published' JMX objects appear
under this domain. Optional, if not specified it defaults to
"org.infinispan".
Class that attempts to locate a JMX MBean server to bind to. Defaults
to the platform MBean server.
Enables exporting of JMX MBeans.
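A minimal JMX section enabling MBean export under the default domain might look like:

```xml
<cache-container>
    <jmx enabled="true" domain="org.infinispan"/>
</cache-container>
```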
Defines the filesystem path where persistent state data which needs to survive container restarts
should be stored. The data stored at this location is required for graceful
shutdown and restore. This path must NOT be shared among multiple instances.
Defaults to the user.dir system property which usually is where the
application was started. This value should be overridden to a more appropriate location.
Defines the filesystem path where shared persistent state data which needs to survive container restarts
should be stored. This path can be safely shared among multiple instances.
Defaults to the user.dir system property which usually is where the
application was started. This value should be overridden to a more appropriate location.
Defines the filesystem path where temporary state should be stored. Defaults to the value of the
java.io.tmpdir system property.
An immutable configuration storage.
A non-persistent configuration storage.
A persistent configuration storage which saves runtime configurations to the persistent-location.
A persistent configuration storage for managed environments. This doesn't work in embedded mode.
Uses a custom configuration storage implementation.
Class of the custom configuration storage implementation.
Defines the action taken when a dangling lock file is found in the persistent global state, signifying an
unclean shutdown of the node (usually because of a crash or an external termination).
A property name whose value will be used as the root path for storing global state
Defines the path where global state for this cache-container will be stored.
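A global-state section covering the locations above might be sketched as follows (paths are illustrative and should point to appropriate non-shared or shared directories, as described):

```xml
<cache-container>
    <global-state unclean-shutdown-action="FAIL">
        <persistent-location path="state"/>
        <shared-persistent-location path="shared-state"/>
        <temporary-location path="tmp"/>
        <overlay-configuration-storage/>
    </global-state>
</cache-container>
```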
Defines backup locations for cache data and modifies state transfer properties.
Defines the local cache as a backup for a remote cache with a different name.
Specifies the name of the remote cache that uses the local cache as a backup.
Specifies the name of the remote site that backs up data to the local cache.
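On the backup site, a cache that backs up to a remote site and also acts as a backup for a remote cache might be declared as in this sketch (cache and site names are illustrative):

```xml
<distributed-cache name="customers">
    <backups>
        <!-- data written locally is also backed up to the NYC site -->
        <backup site="NYC" strategy="ASYNC"/>
    </backups>
    <!-- this cache also receives backup data for the "customers" cache in LON -->
    <backup-for remote-cache="customers" remote-site="LON"/>
</distributed-cache>
```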
The cache encoding configuration.
The locking configuration of the cache.
The cache transaction configuration.
The cache expiration configuration.
Configures the cache to store data in binary format.
Configures the persistence layer for caches.
Controls how the entries are stored in memory
Defines query options for cache
Limits the number of results returned by a query. Applies to indexed, non-indexed, and hybrid queries.
Setting the default-max-results significantly improves performance of queries that don't have an explicit limit set.
Limits the required accuracy of the hit count for indexed queries to an upper bound.
Setting hit-count-accuracy optimizes the performance of queries targeting large data sets.
For optimal results, set this value slightly above the expected hit count.
If you do not require accurate hit counts, set it to a low value.
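Both query tuning knobs can be set together, e.g. (values are illustrative):

```xml
<distributed-cache name="books">
    <query default-max-results="100" hit-count-accuracy="1000"/>
</distributed-cache>
```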
Defines indexing options for cache
Controls index reading parameters.
Controls index writing parameters
Sharding consists of splitting index data into multiple "smaller indexes", called shards,
in order to improve performance when dealing with large amounts of data.
By default, sharding is disabled.
Defines the Transformers used to stringify keys for indexing with Lucene
Defines the Transformer to use for the specified key class
Defines the set of indexed type names (fully qualified). If values of types that are not included in this set are put in the cache they will not be indexed.
Indexed entity type name. Must be either a fully qualified Java class name or a protobuf type name.
Enables and disables cache indexing.
Including the indexing element in cache configuration automatically enables indexing without the need to set the value of this attribute to "true".
Specify index storage options.
Triggers an indexing process when the cache starts.
Specifies a filesystem path for the index when storage is 'filesystem'.
The value can be a relative or absolute path. Relative paths are created
relative to the configured global persistent location, or to the current
working directory when global state is disabled.
By default, the cache name is used as a relative path for index path.
When setting a custom value, ensure that there are no conflicts between caches using the same indexed entities.
Affects how cache operations will be propagated to the indexes.
By default, all the changes to the cache will be immediately applied to the indexes.
When true, indexes are defined on the Java indexed entities accessible from the class path of the server VM,
even if the cache is Protobuf encoded and the indexes run in server mode.
This is useful for running embedded queries from a server task when the cache is Protobuf encoded.
Defaults to false.
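An indexing section combining storage, startup behavior, and indexed entities might be sketched as follows (the cache name, index path, and protobuf type name are illustrative):

```xml
<distributed-cache name="books">
    <indexing storage="filesystem" path="books-index" startup-mode="REINDEX">
        <indexed-entities>
            <indexed-entity>book_sample.Book</indexed-entity>
        </indexed-entities>
    </indexing>
</distributed-cache>
```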
Configures tracing for cache.
Enable or disable tracing on the given cache.
This property can be used to enable or disable tracing at runtime.
Deprecated since 10.0, will be removed without a replacement.
Configures custom interceptors to be added to the cache.
Configures cache-level security.
Uniquely identifies this cache within its cache container.
The name of the cache configuration which this configuration inherits from.
Zero or more cache alias names.
Determines whether the cache should collect statistics. Keep disabled for optimal performance.
Specifies whether Infinispan is allowed to disregard the Map contract when providing return values for org.infinispan.Cache#put(Object, Object) and org.infinispan.Cache#remove(Object) methods.
This cache uses an optimized (faster) implementation that does not support transactions/invocation
batching, persistence, custom interceptors, indexing, store-as-binary, or transcoding. This type of
cache also does not support Map-Reduce jobs or the Distributed Executor framework.
Sets the cache locking isolation level. Infinispan only supports READ_COMMITTED or REPEATABLE_READ isolation level.
If true, a pool of shared locks is maintained for all entries that need to be locked. Otherwise, a lock is created per entry in the cache. Lock striping helps control memory footprint but may reduce concurrency in the system.
Maximum time to attempt a particular lock acquisition.
You can optionally set one of the following units: ms (milliseconds), s (seconds), m (minutes), h (hours), d (days).
Concurrency level for lock containers. Adjust this value according to the number of concurrent threads interacting with Infinispan.
Sets the cache transaction mode to one of NONE, BATCH, NON_XA, NON_DURABLE_XA, FULL_XA.
If there are any ongoing transactions when a cache is stopped, Infinispan waits for ongoing remote and local transactions to finish. The amount of time to wait for is defined by the cache stop timeout.
You can optionally set one of the following units: ms (milliseconds), s (seconds), m (minutes), h (hours), d (days).
The locking mode for this cache, one of OPTIMISTIC or PESSIMISTIC.
Configure Transaction manager lookup directly using an instance of TransactionManagerLookup. Calling this method marks the cache as transactional.
The duration (millis) in which to keep information about the completion of a transaction. Defaults to 60000.
You can optionally set one of the following units: ms (milliseconds), s (seconds), m (minutes), h (hours), d (days).
The time interval (millis) at which the thread that cleans up transaction completion information kicks in. Defaults to 30000.
You can optionally set one of the following units: ms (milliseconds), s (seconds), m (minutes), h (hours), d (days).
If the cache is transactional and transactionAutoCommit is enabled, then for single-operation transactions the user does not need to manually start a transaction; a transaction is injected by the system. Defaults to true.
Sets the name of the cache where recovery related information is held. The cache's default name is "__recoveryInfoCacheName__"
Enables or disables triggering transactional notifications on cache listeners. Enabled by default.
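The transaction attributes above might be combined as in this sketch (values are illustrative):

```xml
<distributed-cache name="tx-cache">
    <transaction mode="NON_XA"
                 locking="OPTIMISTIC"
                 stop-timeout="30s"
                 complete-timeout="60s"
                 reaper-interval="30s"
                 auto-commit="true"
                 notifications="true"/>
</distributed-cache>
```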
Describes the content-type and encoding.
Defines content type and encoding for keys and values of the cache.
The media type for both keys and values. When present, takes precedence over the
individual configurations for keys and values.
Specifies the maximum amount of time, in milliseconds, that cache
entries can remain idle. If no operations are performed on entries
within the maximum idle time, the entries expire across the cluster.
A value of 0 or -1 disables expiration.
You can optionally set one of the following units: ms (milliseconds), s (seconds), m (minutes), h (hours), d (days).
Specifies the maximum amount of time, in milliseconds, that cache
entries can exist. After reaching their lifespan, cache entries
expire across the cluster. A value of 0 or -1 disables expiration.
You can optionally set one of the following units: ms (milliseconds), s (seconds), m (minutes), h (hours), d (days).
Specifies the interval, in milliseconds, between expiration runs. A value of 0 or -1 disables the expiration reaper.
You can optionally set one of the following units: ms (milliseconds), s (seconds), m (minutes), h (hours), d (days).
Controls how timestamps get updated for entries in clustered caches with maximum idle expiration.
The default value is SYNC.
This attribute applies only to caches that use synchronous mode.
Timestamps are updated asynchronously for caches that use asynchronous mode.
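An expiration section using the time units mentioned above might look like this sketch (values are illustrative):

```xml
<distributed-cache name="session-cache">
    <expiration lifespan="5m" max-idle="1m" interval="30s" touch="SYNC"/>
</distributed-cache>
```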
Controls whether keys and values are stored in memory as references to their original objects or in a serialized, binary format. There are benefits to both approaches, but when used in clustered mode, storing objects as binary means that the cost of serialization happens early on and can be amortized. Further, deserialization costs are incurred lazily, which improves throughput. It is possible to control this on a fine-grained basis: you can choose to store just keys, just values, or both as binary.
DEPRECATED: please use memory element instead
Specify whether keys are stored as binary or not. Enabled by default if the "enabled" attribute is set to true.
Specify whether values are stored as binary or not. Enabled by default if the "enabled" attribute is set to true.
Defines a cluster cache loader.
Defines a custom cache store.
Defines a filesystem-based cache store.
Deprecated since 13.0. Defines a filesystem-based cache store that stores its elements in a single file.
Enables passivation so that data is written to cache stores only if
it is evicted from memory. Subsequent requests for passivated entries
restore them to memory and remove them from persistent storage.
If you do not enable passivation, writes to entries in memory result
in writes to cache stores.
Sets the maximum number of attempts to start each configured
`CacheWriter` or `CacheLoader`. An exception is thrown and the cache
does not start if the number of connection attempts exceeds the
maximum.
Specifies the time, in milliseconds, between availability checks to
determine if the PersistenceManager is available. In other words,
this interval sets how often stores and loaders are polled via their
`org.infinispan.persistence.spi.CacheWriter#isAvailable` or
`org.infinispan.persistence.spi.CacheLoader#isAvailable`
implementation. If a single store or loader is not available, an
exception is thrown during cache operations.
You can optionally set one of the following units: ms (milliseconds), s (seconds), m (minutes), h (hours), d (days).
Deprecated since 10.0, will be removed without a replacement.
Dictates that the custom interceptor appears immediately after the specified interceptor. If the specified interceptor is not found in the interceptor chain, a ConfigurationException will be thrown when the cache starts.
Dictates that the custom interceptor appears immediately before the specified interceptor. If the specified interceptor is not found in the interceptor chain, a ConfigurationException will be thrown when the cache starts.
A fully qualified class name of the new custom interceptor to add to the configuration.
Specifies a position in the interceptor chain to place the new interceptor. The index starts at 0 and goes up to the number of interceptors in a given configuration. A ConfigurationException is thrown if the index is less than 0 or greater than the maximum number of interceptors in the chain.
Specifies a position where to place the new interceptor. Allowed values are FIRST, LAST, and OTHER_THAN_FIRST_OR_LAST
Configures authorization for this cache.
Enables authorization checks for this cache. Defaults to true if the authorization element is present.
Sets the valid roles required to access this cache.
Defines the size of the data container in bytes. The default unit is
B (bytes). You can optionally set one of the following units: KB
(kilobytes), MB (megabytes), GB (gigabytes), TB (terabytes), KiB
(kibibytes), MiB (mebibytes), GiB (gibibytes) and TiB (tebibytes).
Eviction occurs when the approximate memory usage of the data
container exceeds the maximum size.
Defines the size of the data container by number of entries. Eviction occurs after the container size exceeds the maximum count.
Specifies a strategy for evicting cache entries. Eviction always
takes place when you define either the max-size or the max-count
(but not both) for the data container. If no strategy is defined,
but max-count or max-size is configured, REMOVE is used.
Defines the type of memory that the data container uses as storage.
Stores cache entries in JVM heap memory.
Stores cache entries as bytes in native memory outside the Java
heap.
Deprecated, only added in 11.0 to simplify the transition from the <object/> element.
Please use HEAP instead.
Deprecated, only added in 11.0 to simplify the transition from the <binary/> element.
Please use HEAP and set the encoding media type instead.
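A memory section bounding the data container by size with off-heap storage might be sketched as follows (values are illustrative):

```xml
<distributed-cache name="bounded-cache">
    <!-- either max-size or max-count may be set, not both -->
    <memory storage="OFF_HEAP" max-size="1GB" when-full="REMOVE"/>
</distributed-cache>
```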
Configures the way this cache reacts to node crashes and split brains.
Deprecated, use type instead. Enable/disable the partition handling functionality. Defaults to false.
The type of actions that are possible when a split brain scenario is encountered.
The entry merge policy which should be applied on partition merges.
Sets the clustered cache mode, ASYNC for asynchronous operation, or SYNC for synchronous operation.
In SYNC mode, the timeout (in ms) used to wait for an acknowledgment when making a remote call, after which the call is aborted and an exception is thrown.
You can optionally set one of the following units: ms (milliseconds), s (seconds), m (minutes), h (hours), d (days).
The state transfer configuration for distribution and replicated caches.
Sets the number of hash space segments per cluster. The default
value is 256. The value should be at least 20 * the cluster size.
Deprecated since 11.0. Will be removed in 14.0, the segment allocation will no longer be customizable.
The factory to use for generating the consistent hash.
Must implement `org.infinispan.distribution.ch.ConsistentHashFactory`.
E.g. `org.infinispan.distribution.ch.impl.SyncConsistentHashFactory` can be used to guarantee
that multiple distributed caches use exactly the same consistent hash, which for performance
reasons is not guaranteed by the default consistent hash factory instance used.
The name of the key partitioner class.
Must implement `org.infinispan.distribution.ch.KeyPartitioner`.
A custom key partitioner can be used as an alternative to grouping, to guarantee that some keys
are located in the same segment (and thus their primary owner is the same node).
Since 8.2.
The state transfer configuration for distribution and replicated caches.
Configures grouping of data.
Number of cluster-wide replicas for each cache entry.
Sets the number of hash space segments per cluster. The default value is 256. The value should be at least 20 * the cluster size.
Controls the proportion of entries that reside on the local node in comparison to other nodes in the cluster.
You must specify a positive number as the value.
The value can also be a fraction such as 1.5.
Maximum lifespan in milliseconds of an entry placed in the L1 cache.
By default, L1 is disabled unless a positive value is configured for this attribute.
If the attribute is not present, L1 is disabled.
You can optionally set one of the following units: ms (milliseconds), s (seconds), m (minutes), h (hours), d (days).
Controls how often a cleanup task to prune L1 tracking data is run. Defaults to 10 minutes.
You can optionally set one of the following units: ms (milliseconds), s (seconds), m (minutes), h (hours), d (days).
The factory to use for generating the consistent hash.
Must implement `org.infinispan.distribution.ch.ConsistentHashFactory`.
E.g. `org.infinispan.distribution.ch.impl.SyncConsistentHashFactory` can be used to guarantee
that multiple distributed caches use exactly the same consistent hash, which for performance
reasons is not guaranteed by the default consistent hash factory instance used.
The name of the key partitioner class.
Must implement `org.infinispan.distribution.ch.KeyPartitioner`.
A custom key partitioner can be used as an alternative to grouping, to guarantee that some keys
are located in the same segment (and thus their primary owner is the same node).
Since 8.2.
A cache loader property with name and value.
Unused XML attribute.
Determines if a cache loader is shared between cache instances.
Values are true / false (default). This property prevents duplicate
writes of data to the cache loader by different cache instances. An
example is where all cache instances in a cluster use the same JDBC
settings for the same remote, shared database. If true, only the
nodes where modifications originate write to the cache store. If
false, each cache reacts to potential remote updates by storing the
data to the cache store.
Pre-loads data into memory from the cache loader when the cache
starts. Values are true / false (default). This property is useful
when data in the cache loader is required immediately after startup
to prevent delays with cache operations when the data is loaded
lazily. This property can provide a "warm cache" on startup but it
impacts performance because it affects start time. Pre-loading data
is done locally, so any data loaded is stored locally in the node
only. Pre-loaded data is not replicated or distributed. Likewise,
data is pre-loaded only up to the maximum configured number of
entries in eviction.
The timeout when performing remote calls.
You can optionally set one of the following units: ms (milliseconds), s (seconds), m (minutes), h (hours), d (days).
Configures cache stores as write-behind instead of write-through.
Defines a cache store property with name and value.
This setting should be set to true when multiple cache instances share the same cache store (e.g., multiple nodes in a cluster using a JDBC-based CacheStore pointing to the same shared database). Setting this to true avoids multiple cache instances writing the same modification multiple times. If enabled, only the node where the modification originated will write to the cache store. If disabled, each individual cache reacts to a potential remote update by storing the data to the cache store.
This setting should be set to true when the underlying cache store supports transactions and it is desirable for the underlying store and the cache to remain synchronized. With this enabled, any Exceptions thrown whilst writing to the underlying store will result in both the store's and the cache's transactions rolling back.
If true, when the cache starts, data stored in the cache store will be pre-loaded into memory. This is particularly useful when data in the cache store will be needed immediately after startup and you want to avoid cache operations being delayed as a result of loading this data lazily. Can be used to provide a 'warm cache' on startup, however there is a performance penalty as startup time is affected by this process. Likewise, in some cases you cannot pre-load data in cache stores, such as when using shared remote stores.
Empties the specified cache loader at startup. Values are true /
false (default). This property takes effect only if read-only is
false.
Prevents data from being persisted to cache stores. Values are true /
false (default). If true, cache stores load entries only. Any
modifications to data in the cache do not apply to cache stores.
Prevents data from being loaded from cache stores. Values are true /
false (default). If true, cache stores write entries only. Any
retrievals of data in the cache do not read from the cache store.
Sets the maximum size of a batch to insert or delete from the cache
store. If the value is less than one, no upper limit applies to the
number of operations in a batch.
Configures cache stores to store data in hash space segments, with the cache's
"segments" attribute defining the number of segments.
Specifies the maximum number of entries in the asynchronous
modification queue. When the queue is full, write-through mode is
used until the queue can accept new entries.
Specifies the number of threads to apply modifications to the cache
store.
Controls how asynchronous write operations take place when cache
stores become unavailable. If "true", asynchronous write operations
that fail are re-attempted with the number of times specified in the
"connection-attempts" parameter. If all attempts fail, errors are
ignored and write operations are not executed on the cache store. If
"false", asynchronous write operations that fail are re-attempted
when the underlying store becomes available. If the modification
queue becomes full before the underlying store becomes available, an
error is thrown on all future write operations to the store until the
modification queue is flushed. The modification queue is not
persisted. If the underlying store does not become available before
the asynchronous store is stopped, queued modifications are lost.
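A write-behind store combining the attributes above might be sketched as follows (values are illustrative):

```xml
<distributed-cache name="persistent-cache">
    <persistence passivation="false">
        <file-store shared="false" preload="true" purge="false">
            <!-- fail-silently="false" re-attempts failed writes when the store becomes available -->
            <write-behind modification-queue-size="2048" fail-silently="false"/>
        </file-store>
    </persistence>
</distributed-cache>
```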
Defines the class name of a cache store that
implements either `CacheLoader`, `CacheWriter`, or both.
Specifies the maximum number of entries that the file store can
hold. To increase the speed of lookups, Single File cache stores
index keys and their locations in the file. To avoid excessive
memory usage, you can configure the maximum number of entries so
that entries are removed permanently from both memory and the
cache store when the maximum is exceeded. However, this can lead
to data loss. You should only set a maximum number of entries if
data can be recomputed or retrieved from an authoritative data
store. By default, the value is `-1` which means that there is no
maximum number of entries.
Specifies a filesystem directory for data. The value can be a
relative or absolute path. Relative paths are created relative to
the configured global persistent location. Absolute paths must be
subdirectories of the global persistent location, otherwise an
exception is thrown.
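For example, a store bounded to 5000 entries under a relative path (assuming the single-file-store element; the values are illustrative):

```xml
<persistence>
  <!-- path is resolved against the global persistent location -->
  <single-file-store path="sfs-data" max-entries="5000"/>
</persistence>
```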
Deprecated since 13.0; the property is ignored.
Specifies the maximum number of entries that the file store can
hold. To increase the speed of lookups, Single File cache stores
index keys and their locations in the file. To avoid excessive
memory usage, you can configure the maximum number of entries so
that entries are removed permanently from both memory and the
cache store when the maximum is exceeded. However, this can lead
to data loss. You should only set a maximum number of entries if
data can be recomputed or retrieved from an authoritative data
store. By default, the value is `-1` which means that there is no
maximum number of entries.
Deprecated since 13.0, will be removed in 16.0.
Specifies a filesystem directory for data. The value can be a
relative or absolute path. Relative paths are created relative to
the configured global persistent location. Absolute paths must be
subdirectories of the global persistent location, otherwise an
exception is thrown.
Maximum number of data and index files open for reading (the current log file and compaction output are not included; each always uses one file).
Index files use one tenth of this limit, with a minimum of 1 and a maximum equal to the number of cache segments.
If the amount of unused space in a data file exceeds this threshold, the file is compacted: entries from that file are copied to a new file and the old file is deleted.
Configures where and how entries are stored. This is the persistent location.
A location on disk where the store writes entries. Files are written sequentially; reads are random.
Maximum size, in bytes, of a single file containing entries.
If true, a write is confirmed only after the entry is fsynced to disk.
Configures where and how the index is stored.
A location where the store keeps the index. This does not have to be persistent across restarts; SSD storage is recommended because the index is accessed randomly.
Deprecated since 15.0; the property is ignored.
This value is ignored as we create an index file per cache segment instead.
Max number of entry writes that are waiting to be written to the index, per index segment.
Maximum size, in bytes, of a node (a contiguous block on the filesystem used in the index implementation).
If the size of a node (a contiguous block on the filesystem used in the index implementation) drops below this threshold, the node tries to balance its size with a neighbouring node, possibly causing multiple nodes to be joined.
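Putting the data and index settings together, a soft-index file store might look like this sketch (attribute values are illustrative; verify names against your schema version):

```xml
<persistence>
  <file-store open-files-limit="1000" compaction-threshold="0.5">
    <!-- entries are appended sequentially to data files -->
    <data path="sifs/data" max-file-size="16777216" sync-writes="false"/>
    <!-- index is accessed randomly; SSD storage recommended -->
    <index path="sifs/index" max-queue-length="1000"/>
  </file-store>
</persistence>
```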
The hostname or IP address of a remote Hot Rod server.
The port on which the server is listening (default 11222).
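A remote store pointing at a Hot Rod server might be declared as follows (the cache name and host are hypothetical):

```xml
<persistence>
  <remote-store cache="mycache">
    <remote-server host="server1.example.com" port="11222"/>
  </remote-store>
</persistence>
```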
Unused XML attribute.
If enabled, this will cause the cache to ask neighboring caches for state when it starts up, so the cache starts 'warm', although it will impact startup time.
The maximum amount of time (ms) to wait for state from neighboring caches, before throwing an exception and aborting startup.
Must be greater than or equal to 'remote-timeout' in the clustering configuration.
The number of cache entries to batch in each transfer.
If enabled, this will cause the cache to wait for initial state transfer to complete before responding to requests.
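The state transfer attributes above can be sketched as follows (values are illustrative):

```xml
<distributed-cache name="mycache">
  <!-- timeout must be >= the clustering remote-timeout -->
  <state-transfer enabled="true" timeout="240000"
                  chunk-size="512" await-initial-transfer="true"/>
</distributed-cache>
```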
The class to use to group keys. Must implement org.infinispan.distribution.group.Grouper.
Enables or disables grouping.
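For example, enabling grouping with a custom Grouper (the class name is hypothetical):

```xml
<distributed-cache name="grouped">
  <groups enabled="true">
    <!-- must implement org.infinispan.distribution.group.Grouper -->
    <grouper class="org.example.KeyGrouper"/>
  </groups>
</distributed-cache>
```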
Configures a remote site as a backup location for cache data.
Specifies the fully qualified name of a class that implements the XSiteEntryMergePolicy interface, or one of
its aliases. Use with the ASYNC backup strategy.
Specifies the maximum delay, in milliseconds, between which tombstone cleanup tasks run.
This attribute applies to the asynchronous backup strategy only.
You can optionally set one of the following units: ms (milliseconds), s (seconds), m (minutes), h (hours), d (days).
Specifies the target number of tombstones to store.
If the current size of the tombstone map is greater than the target size, then cleanup tasks run more frequently.
If the current size of the tombstone map is less than the target size, then cleanup tasks run less frequently.
This attribute applies to the asynchronous backup strategy only.
Specifies the number of failures that can occur before backup locations go offline.
Modifies state transfer operations.
Specifies how many cache entries are batched in each transfer request.
The time (in milliseconds) to wait for the backup site to acknowledge that a state chunk was
received and applied. The default value is 20 minutes.
You can optionally set one of the following units: ms (milliseconds), s (seconds), m (minutes), h (hours), d (days).
Sets the maximum number of retry attempts for push state failures. Specify a value of 0 (zero) to disable retry attempts. The default value is 30.
Sets the amount of time, in milliseconds, to wait between retry attempts for push state failures. You must specify a value of 1 or more. The default value is 2000.
You can optionally set one of the following units: ms (milliseconds), s (seconds), m (minutes), h (hours), d (days).
Controls whether cross-site state transfer happens manually on user action, which is the default, or automatically when backup locations come online.
Names the remote site to which the cache backs up data.
Sets the strategy for backing up to a remote site.
Controls how local writes to caches are handled if synchronous backup operations fail.
Specifies timeout, in milliseconds, for synchronous and asynchronous backup operations.
You can optionally set one of the following units: ms (milliseconds), s (seconds), m (minutes), h (hours), d (days).
Enables two-phase commits for optimistic transactional caches with
the synchronous backup strategy only.
Specifies the fully qualified name of a class that implements the
CustomFailurePolicy interface. Use if failure-policy="CUSTOM".
Sets the number of consecutive failures that can occur for backup
operations before sites go offline. Specify a negative or zero value
to ignore this attribute and use minimum wait time only ("min-wait").
Sets the minimum time to wait, in milliseconds, before sites go
offline when backup operations fail. If subsequent operations are
successful, the minimum wait time is reset. If you set
"after-failures", sites go offline when the wait time is reached and
the number of failures occur.
You can optionally set one of the following units: ms (milliseconds), s (seconds), m (minutes), h (hours), d (days).
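A backup definition combining these attributes might look like this sketch (the site name and values are illustrative):

```xml
<distributed-cache name="backed-up">
  <backups>
    <backup site="NYC" strategy="SYNC" failure-policy="WARN" timeout="15000">
      <!-- go offline after 5 consecutive failures, but not before 60 s -->
      <take-offline after-failures="5" min-wait="60000"/>
      <state-transfer chunk-size="512" timeout="1200000"
                      max-retries="30" wait-time="2000"/>
    </backup>
  </backups>
</distributed-cache>
```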
Defines the name of a property.
No locking isolation will be performed. This is only valid in local mode. In clustered mode, READ_COMMITTED will be used instead.
Unsupported. Actually configures READ_COMMITTED
Read committed is an isolation level that guarantees that any data read is committed at the moment it is read. However, depending on the outcome of other transactions, successive reads may return different results
Repeatable read is an isolation level that guarantees that any data read is committed at the moment it is read and that, within a transaction, successive reads will always return the same data.
Unsupported. Actually configures REPEATABLE_READ
Cache will not enlist within transactions.
Uses batching to group cache operations together.
Cache will enlist within transactions as a javax.transaction.Synchronization
Cache will enlist within transactions as a javax.transaction.xa.XAResource, without recovery.
Cache will enlist within transactions as a javax.transaction.xa.XAResource, with recovery.
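The transaction modes above are selected with the transaction element, for example (values are illustrative):

```xml
<replicated-cache name="tx-cache">
  <!-- NON_XA enlists the cache as a Synchronization -->
  <transaction mode="NON_XA" locking="OPTIMISTIC"/>
</replicated-cache>
```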
Do not index data. This is the default.
Only index changes made locally, ignoring remote changes. This is useful if indexes are shared across a cluster to prevent redundant indexing of updates.
Index all data
Only index changes on the primary owner, regardless of whether it is local or remote.
Local filesystem index storage. This is the default.
JVM heap index storage, not persisted between restarts. Only suitable for small datasets with low concurrency.
Clears the index when the cache starts.
Rebuilds the index when the cache starts.
Automatically triggers an indexing operation when the cache starts.
If data is volatile and the index is persistent then the cache is cleared when it starts.
If data is persistent and the index is volatile then the cache is reindexed when it starts.
Cache startup does not trigger an indexing operation. This is the default value.
All the changes to the cache will be immediately applied to the indexes.
Indexes will be only updated when a reindex is explicitly invoked.
Interval, in milliseconds, to reopen the index reader.
By default, the index reader is refreshed on-demand during searches, if new entries were indexed since the last refresh. Configuring a value larger than zero will make some query results stale, but query throughput will increase substantially, especially in write-heavy scenarios.
You can optionally set one of the following units: ms (milliseconds), s (seconds), m (minutes), h (hours), d (days).
Defines the number of shards into which the index data is split.
To enable sharding, set this attribute to a value strictly greater than 1.
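For example, splitting the index into six shards (assuming the index-sharding element):

```xml
<indexing enabled="true">
  <index-sharding shards="6"/>
</indexing>
```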
Maximum number of entries that an index segment can have before merging. Segments with more than this number of entries are not merged. Smaller values perform better on frequently changing indexes, larger values provide better search performance if the index does not change often.
Number of segments that are merged at once. With smaller values, merging happens more often, which uses more resources, but the total number of segments will be lower on average, increasing search performance. Larger values (greater than 10) are best for heavy writing scenarios.
Minimum target size of segments, in MB, for background merges. Segments smaller than this size are merged more aggressively. Setting a value that is too large might result in expensive merge operations, even though they are less frequent.
Maximum size of segments, in MB, for background merges. Segments larger than this size are never merged in the background. Setting this to a lower value helps reduce memory requirements and avoids some merging operations at the cost of optimal search speed. This attribute is ignored when forcefully merging an index and `max-forced-size` applies instead.
Maximum size of segments, in MB, for forced merges; overrides the `max-size` attribute. Set this to the same value as `max-size` or lower. However, setting the value too low degrades search performance because documents are deleted.
Whether the number of deleted entries in an index should be taken into account when counting the entries in the segment. Setting `false` will lead to more frequent merges caused by `max-entries`, but will more aggressively merge segments with many deleted documents, improving search performance.
Defines properties to control the merging of index segments. An index segment is
not related to an Infinispan segment; it represents a section of the index in storage.
Interval, in milliseconds, at which index changes buffered in memory are flushed to the index storage and a commit is performed. Because this operation is costly, avoid small values. The default is 1000 ms (1 second).
You can optionally set one of the following units: ms (milliseconds), s (seconds), m (minutes), h (hours), d (days).
Maximum amount of memory that can be used for buffering added entries and deletions before they are flushed to the index storage. Large values result in faster indexing but use more memory. For faster indexing performance you should set this attribute instead of `max-buffered-entries`. When used in combination with the `max-buffered-entries` attribute, a flush occurs for whichever event happens first.
Maximum number of entries that can be buffered in-memory before they are flushed to the index storage. Large values result in faster indexing but use more memory. When used in combination with the `ram-buffer-size` attribute, a flush occurs for whichever event happens first.
Number of threads that execute write operations to the index.
Number of internal queues to use for each indexed type. Each queue holds a batch of modifications that is applied to the index and queues are processed in parallel. Increasing the number of queues will lead to an increase of indexing throughput, but only if the bottleneck is CPU. For optimum results, do not set a value for `queue-count` that is larger than the value for `thread-pool-size`.
Maximum number of elements each queue can hold. Increasing the `queue-size` value increases the amount of memory that is used during indexing operations. Setting a value that is too small can block indexing operations.
Enables low-level trace information for indexing operations. Enabling this attribute substantially degrades performance. You should use this low-level tracing only as a last resort for troubleshooting.
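The writer and merge attributes above might be combined as in this sketch (values are illustrative):

```xml
<indexing enabled="true" storage="filesystem">
  <index-writer commit-interval="2000" ram-buffer-size="64"
                thread-pool-size="4" queue-count="4" queue-size="10000">
    <!-- factor = number of segments merged at once -->
    <index-merge max-entries="2000" factor="10"
                 min-size="10" max-size="512"/>
  </index-writer>
</indexing>
```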
Do not evict entries. If you define a size for the data container,
you should specify either the REMOVE or EXCEPTION eviction strategy.
Manually evict entries. This strategy is the same as NONE but does
not log errors if you enable passivation without eviction.
Automatically evict older entries to make space for new entries. By
default REMOVE is always used when you define a size for the data
container unless you configure the EXCEPTION strategy.
Do not evict entries. If the data container reaches the maximum
size, exceptions occur for requests to create new entries. You can
use this eviction strategy only with transactional caches that use
two-phase commit.
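For example, bounding a cache to 10,000 entries with automatic removal (values are illustrative):

```xml
<local-cache name="bounded">
  <!-- when-full="EXCEPTION" is only valid for transactional caches
       using two-phase commit -->
  <memory max-count="10000" when-full="REMOVE"/>
</local-cache>
```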
Deprecated. Activates REMOVE policy.
Deprecated. Activates REMOVE policy.
Deprecated. Activates REMOVE policy.
Deprecated. Activates REMOVE policy.
Enables asynchronous mode.
Enables synchronous mode.
Enables Optimistic locking.
Enables Pessimistic locking.
Ignore failed backup operations and write to the local cache.
Log exceptions when backup operations fail and write to the local cache.
Throw exceptions when backup operations fail and attempt to stop writes to the local cache.
Use a custom failure policy. Requires the "failure-policy-class" attribute.
Use the default shutdown hook behaviour (REGISTER)
Register a shutdown hook
Don't register a shutdown hook
Use the OpenTelemetry Protocol (OTLP). This is the default and the recommended option.
This option does not require any extra dependency.
Tracing will be exported using gRPC and the Jaeger format.
This option requires the Jaeger exporter as an extra dependency.
Tracing will be exported in JSON Zipkin format using HTTP.
This option requires the Zipkin exporter as an extra dependency.
Tracing will be exported for a Prometheus collector.
This option requires the Prometheus exporter as an extra dependency.
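A tracing configuration targeting an OTLP collector might look like this sketch (the endpoint is hypothetical; verify attribute names against your schema version):

```xml
<cache-container>
  <tracing collector-endpoint="http://otel-collector:4318"
           exporter-protocol="OTLP"/>
</cache-container>
```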
Evict entries from the cache when the number of entries reaches the
configured size.
Evict entries from the cache when the amount of memory in use
reaches the configured size.
A simple versioning scheme that is cluster-aware
Don't version entries
Allows control of a cache's lifecycle (i.e. starting and stopping a cache)
Allows reading data from a cache
Allows writing data to a cache
Allows performing task execution (e.g. distributed executors, map/reduce) on a cache
Allows attaching listeners to a cache
Allows bulk-read operations (e.g. obtaining all the keys in a cache)
Allows bulk-write operations (e.g. clearing a cache)
Allows performing "administrative" operations on a cache
Aggregate permission which implies all of the others
Aggregate permission which implies all read permissions (READ and BULK_READ)
Aggregate permission which implies all write permissions (WRITE and BULK_WRITE)
Permission which means no permissions
Allow read and write operations only if all replicas of an entry are in the partition.
If a partition does not include all replicas of an entry, do not allow cache operations for that entry.
Allow read operations for entries and deny write operations unless the partition includes all replicas of an entry.
Allow read and write operations on caches while a cluster is split into network partitions.
Caches remain available and conflicts are resolved during merge.
Do not resolve conflicts when merging split clusters.
Use the value that exists on the majority of nodes in the cluster to resolve conflicts.
Use the first non-null value on the cluster to resolve conflicts.
Delete any conflicting entries from the cache.
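For example, denying reads and writes during a split and resolving conflicts on merge (values are illustrative):

```xml
<distributed-cache name="partitioned">
  <partition-handling when-split="DENY_READ_WRITES"
                      merge-policy="PREFERRED_NON_NULL"/>
</distributed-cache>
```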
Combines the protocol attributes, overriding any that have been set in the base stack.
Inserts the protocol after/above an existing protocol in the stack, referenced using the stack.position attribute.
Inserts the protocol after/above an existing protocol in the stack, referenced using the stack.position attribute.
Inserts the protocol before/below an existing protocol in the stack, referenced using the stack.position attribute.
Inserts the protocol before/below an existing protocol in the stack, referenced using the stack.position attribute.
Replaces the protocol in the base stack.
Removes the protocol from the stack.
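These combine modes are set through the stack.combine attribute when extending a base stack, for example (the stack name and attribute values are illustrative):

```xml
<jgroups>
  <stack name="my-tcp" extends="tcp">
    <!-- COMBINE merges these attributes into the inherited TCP protocol -->
    <TCP port_range="0" stack.combine="COMBINE"/>
    <!-- REMOVE deletes MERGE3 from the inherited stack -->
    <MERGE3 stack.combine="REMOVE"/>
  </stack>
</jgroups>
```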
The bias is never acquired.
Bias is acquired by the node that writes the entry.
Delays read operations until the other owners confirm that the timestamp for the entry is updated.
Sends touch commands to other owners without waiting for confirmation.
This mode allows read operations to return the value of a key even if another node has started
expiring it.
When that occurs, the read operation does not extend the lifespan of the key.
In case of conflict, it chooses the entry from the site whose name is lexicographically
lower.
In case of conflict where one of the entries is null, it chooses the non-null entry. If both are
non-null, it uses the DEFAULT policy.
In case of conflict where one of the entries is null, it chooses the null entry. If both are
non-null, it uses the DEFAULT policy.
In case of any conflict, the entry is removed from both sites.
Users must bring backup locations online and initiate state
transfer between remote sites.
Backup locations that use the asynchronous backup strategy can
automatically come back online. State transfer operations begin
when the remote site connections are stable.
Ignores the presence of a dangling lock file in the persistent global state.
Prevents startup of the cache manager if a dangling lock file is found in the persistent global state.
Clears the persistent global state if a dangling lock file is found in the persistent global state.
Default Infinispan span category, which includes all the major put/insertion operations.
Span category for cluster operations causally related by client interactions.
Span category for x-site operations causally related by client interactions.
Span category for persistence operations causally related by client interactions.
Span category for security operations causally related by client interactions.
Replaces existing schemas without performing any backwards-compatibility checks.
Enables lenient compatibility checks when updating an existing schema. The following checks are performed:
No Using Reserved Fields, No Changing Field IDs, No Changing Field Types, No Removing Fields Without Reserve
Enables strict compatibility checks when updating an existing schema. The following checks are performed:
No Using Reserved Fields, No Changing Field IDs, No Changing Field Types, No Removing Fields Without Reserve,
No Removing Reserved Fields, No Changing Field Names