schema.infinispan-config-5.2.xsd
Defines the configuration for Infinispan, for the cache manager configuration, for the default cache, and for named caches.
Defines global settings shared among all cache instances created by a single CacheManager.
Configuration for the executor service used to emit notifications to asynchronous listeners
Configuration for the executor service used for asynchronous work on the Transport, including asynchronous marshalling and Cache 'async operations' such as Cache.putAsync().
Configuration for the scheduled executor service used to periodically run eviction cleanup tasks.
Configuration for the scheduled executor service used to periodically flush replication queues, used if asynchronous clustering is enabled along with useReplQueue being set to true.
Configuration for the x-site replication.
This element specifies whether global statistics are gathered and reported via JMX for all caches under this cache manager.
Sets properties which are then passed to the MBean Server Lookup implementation specified.
If true, multiple cache manager instances can be configured under the same configured JMX domain. In practice, each cache manager will use a different JMX domain, calculated from the configured one by appending an incrementing index to it.
If JMX statistics are enabled, this property represents the name of this cache manager. It offers the possibility for clients to provide a user-defined name to the cache manager which later can be used to identify the cache manager within a JMX based management tool amongst other cache managers that might be running under the same JVM.
Enable Global JMX statistics
If JMX statistics are enabled then all 'published' JMX objects will appear under this name. This is optional, if not specified an object name will be created for you by default.
Class that will attempt to locate a JMX MBean server to bind to
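Pulling these JMX settings together, a global statistics block conforming to this schema might look like the sketch below; the domain, manager name, and lookup class values are illustrative assumptions.

   <globalJmxStatistics enabled="true"
                        jmxDomain="org.infinispan"
                        cacheManagerName="SampleCacheManager"
                        allowDuplicateDomains="true"
                        mBeanServerLookup="org.infinispan.jmx.PlatformMBeanServerLookup"/>
   <!-- allowDuplicateDomains lets several cache managers share the configured
        domain; each gets an incrementing suffix appended at runtime -->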
This element configures the transport used for network communications across the cluster.
Sets transport properties
Defines the name of the cluster. Nodes only connect to clusters sharing the same name.
Infinispan uses a distributed lock to maintain a coherent transaction log during state transfer or rehashing, which means that only one cache can be doing state transfer or rehashing at the same time. This constraint is in place because more than one cache could be involved in a transaction. This timeout controls the time to wait to acquire a lock on the distributed lock.
The id of the machine where this node runs. Visit http://community.jboss.org/wiki/DesigningServerHinting for more information.
Name of the current node. This is a friendly name to make logs, etc. make more sense. Defaults to a combination of host name and a random number (to differentiate multiple nodes on the same host)
The id of the rack where this node runs. Visit http://community.jboss.org/wiki/DesigningServerHinting for more information.
The id of the site where this node runs. Visit http://community.jboss.org/wiki/DesigningServerHinting for more information.
If set to true, RPC operations will fail if the named cache does not exist on remote nodes with a NamedCacheNotFoundException. Otherwise, operations will succeed but it will be logged on the caller that the RPC did not succeed on certain nodes due to the named cache not being available.
Class that represents a network transport. Must implement org.infinispan.remoting.transport.Transport
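As a sketch of a transport definition using the attributes described above (the cluster name, node name, server-hinting ids, and the JGroups configuration property are illustrative values):

   <transport clusterName="demo-cluster"
              nodeName="node-a"
              machineId="m1" rackId="r1" siteId="s1"
              distributedSyncTimeout="240000"
              strictPeerToPeer="false"
              transportClass="org.infinispan.remoting.transport.jgroups.JGroupsTransport">
      <properties>
         <!-- passed through to the transport implementation -->
         <property name="configurationFile" value="jgroups-udp.xml"/>
      </properties>
   </transport>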
Configures serialization and marshalling settings.
Configures custom marshallers.
AdvancedExternalizer provides an alternative way to provide externalizers for marshalling/unmarshalling user defined classes that overcome the deficiencies of the more user-friendly externalizer definition model explained in Externalizer.
Class of the custom marshaller
Id of the custom marshaller
Fully qualified name of the marshaller to use. It must implement org.infinispan.marshall.StreamingMarshaller
Largest allowable version to use when marshalling internal state. Set this to the lowest version cache instance in your cluster to ensure compatibility of communications. However, setting this too low will mean you lose out on the benefit of improvements in newer versions of the marshaller.
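A serialization block registering a custom externalizer might look as follows; the externalizer class and id are hypothetical, and the version value is only an example of pinning the marshalling version to an older release.

   <serialization marshallerClass="org.infinispan.marshall.VersionAwareMarshaller"
                  version="5.1">
      <advancedExternalizers>
         <!-- id must be unique; the class implements AdvancedExternalizer -->
         <advancedExternalizer id="1234"
                               externalizerClass="com.example.PersonExternalizer"/>
      </advancedExternalizers>
   </serialization>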
This element specifies behavior when the JVM running the cache instance shuts down.
Behavior of the JVM shutdown hook registered by the cache
This element contains configuration options for additional modules which affect global configuration
Specifies the default behavior for all named caches belonging to this cache manager.
Specify the configuration for a named cache.
Defines the local, in-VM locking and concurrency characteristics of the cache.
Concurrency level for lock containers. Adjust this value according to the number of concurrent threads interacting with Infinispan. Similar to the concurrencyLevel tuning parameter seen in the JDK's ConcurrentHashMap.
Cache isolation level. Infinispan only supports READ_COMMITTED or REPEATABLE_READ isolation levels. See http://en.wikipedia.org/wiki/Isolation_level for a discussion on isolation levels.
Maximum time to attempt a particular lock acquisition
If true, a pool of shared locks is maintained for all entries that need to be locked. Otherwise, a lock is created per entry in the cache. Lock striping helps control memory footprint but may reduce concurrency in the system.
This setting is only applicable in the case of REPEATABLE_READ. When write skew check is set to false, if the writer at commit time discovers that the working entry and the underlying entry have different versions, the working entry will overwrite the underlying entry. If true, such a version conflict (known as a write skew) will throw an Exception.
For non-transactional caches only: if set to true (the default), the cache keeps data consistent in the case of concurrent updates. For clustered caches this comes at the cost of an additional RPC, so if you don't expect your application to write data concurrently, disabling this flag increases performance.
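Combined, the locking attributes described above could be expressed as the following sketch (timeout and concurrency values are illustrative):

   <locking isolationLevel="READ_COMMITTED"
            lockAcquisitionTimeout="10000"
            concurrencyLevel="32"
            useLockStriping="false"
            writeSkewCheck="false"
            supportsConcurrentUpdates="true"/>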
Configuration for cache loaders and stores.
Configuration of a specific cache loader
Configuration of a specific cache store
Configuration of a ClusterCacheLoader
Configuration of a FileCacheStore
If true, data is only written to the cache store when it is evicted from memory, a phenomenon known as 'passivation'. Next time the data is requested, it will be 'activated' which means that data will be brought back to memory and removed from the persistent store. This gives you the ability to 'overflow' to disk, similar to swapping in an operating system. If false, the cache store contains a copy of the contents in memory, so writes to cache result in cache store writes. This essentially gives you a 'write-through' configuration.
If true, when the cache starts, data stored in the cache store will be pre-loaded into memory. This is particularly useful when data in the cache store will be needed immediately after startup and you want to avoid cache operations being delayed as a result of loading this data lazily. Can be used to provide a 'warm-cache' on startup, however there is a performance penalty as startup time is affected by this process.
This setting should be set to true when multiple cache instances share the same cache store (e.g., multiple nodes in a cluster using a JDBC-based CacheStore pointing to the same, shared database.) Setting this to true avoids multiple cache instances writing the same modification multiple times. If enabled, only the node where the modification originated will write to the cache store. If disabled, each individual cache reacts to a potential remote update by storing the data to the cache store. Note that this could be useful if each individual node has its own cache store - perhaps local on-disk.
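For example, a shared, preloading store configuration without passivation might be sketched as below; the JDBC store class and connection property are illustrative assumptions.

   <loaders passivation="false" shared="true" preload="true">
      <store class="org.infinispan.loaders.jdbc.stringbased.JdbcStringBasedCacheStore"
             fetchPersistentState="false" purgeOnStartup="false">
         <properties>
            <!-- store-specific settings are passed as properties -->
            <property name="connectionUrl" value="jdbc:h2:mem:infinispan"/>
         </properties>
      </store>
   </loaders>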
Cache configuration for the x-site replication.
Configures the list of sites to which this cache backs up data.
Configures a specific site where this cache backs up data. The name of the backup must match a site defined in the "global" section.
Configures this cache as a backup for a remote cache.
The name of the remote cache that backs up data here.
The name of the remote site containing the cache that backs up data here.
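Putting the x-site elements together: the cache that owns the data lists its backup sites, while the receiving cache on the remote site declares what it is a backup for. Site and cache names below are illustrative.

   <!-- on the originating site -->
   <backups>
      <backup site="NYC" strategy="SYNC" backupFailurePolicy="WARN" timeout="12000"/>
   </backups>

   <!-- on the remote site, inside the cache that receives the data -->
   <backupFor remoteCache="users" remoteSite="LON"/>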
Defines transactional (JTA) characteristics of the cache.
Defines recovery configuration for the cache.
Enable recovery for this cache.
Sets the name of the cache where recovery related information is held. The cache's default name is specified by Configuration.RecoveryType.DEFAULT_RECOVERY_INFO_CACHE.
If there are any ongoing transactions when a cache is stopped, Infinispan waits for ongoing remote and local transactions to finish. The amount of time to wait is defined by the cache stop timeout. It is recommended that this value does not exceed the transaction timeout, because even if a new transaction was started just before the cache was stopped, it could only last as long as the transaction timeout allows.
The duration (millis) in which to keep information about the completion of a transaction. Defaults to 15000.
The time interval (millis) at which the thread that cleans up transaction completion information kicks in. Defaults to 1000.
Only has effect for DIST mode and when useEagerLocking is set to true. When this is enabled, only one node is locked in the cluster, disregarding the numOwners config. Conversely, if this is false, each cache.lock() call performs numOwners RPCs. The node that gets locked is the main data owner, i.e. the node where the data would reside if numOwners==1. If the node where the lock resides crashes, the transaction is marked for rollback - data stays in a consistent state, but there is no fault tolerance. Starting with Infinispan 5.1, single node locking is used by default.
If true, the cluster-wide commit phase in two-phase commit (2PC) transactions will be synchronous, so Infinispan will wait for responses from all nodes to which the commit was sent. Otherwise, the commit phase will be asynchronous. Keeping it as false improves performance of 2PC transactions, since any remote failures are trapped during the prepare phase anyway and appropriate rollbacks are issued.
If true, the cluster-wide rollback phase in two-phase commit (2PC) transactions will be synchronous, so Infinispan will wait for responses from all nodes to which the rollback was sent. Otherwise, the rollback phase will be asynchronous. Keeping it as false improves performance of 2PC transactions.
Configure Transaction manager lookup directly using an instance of TransactionManagerLookup. Calling this method marks the cache as transactional.
Only has effect for DIST mode and when useEagerLocking is set to true. When this is enabled, only one node is locked in the cluster, disregarding the numOwners config. Conversely, if this is false, each cache.lock() call performs numOwners RPCs. The node that gets locked is the main data owner, i.e. the node where the data would reside if numOwners==1. If the node where the lock resides crashes, the transaction is marked for rollback - data stays in a consistent state, but there is no fault tolerance. Note: starting with Infinispan 5.1, eager locking is replaced with pessimistic locking and can be enforced by setting the transaction's locking mode to PESSIMISTIC.
Configures whether the cache registers a synchronization with the transaction manager, or registers itself as an XA resource. It is often unnecessary to register as a full XA resource unless you intend to make use of recovery as well, and registering a synchronization is significantly more efficient.
Configures whether the cache uses optimistic or pessimistic locking. If the cache is not transactional then the locking mode is ignored. See transactionMode.
Configures whether the cache is transactional or not.
If the cache is transactional and transactionAutoCommit is enabled, then for single-operation transactions the user doesn't need to manually start a transaction; a transaction is injected by the system instead. Defaults to true.
Before Infinispan 5.1 you could access the cache both transactionally and non-transactionally. Naturally, the non-transactional access is faster and offers fewer consistency guarantees. From Infinispan 5.1 onwards, mixed access is no longer supported, so if you want to speed up transactional caches and are prepared to trade some consistency guarantees, you can enable use1PcForAutoCommitTransactions. This option forces an induced transaction - one started by Infinispan as a result of enabling autoCommit - to commit in a single phase: only 1 RPC instead of the 2 RPCs required by a full 2 Phase Commit (2PC).
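A transactional cache combining the options above might be declared as in this sketch; values are illustrative, and the synchronization/recovery combination shown reflects the note that full XA registration is mainly useful when recovery is needed.

   <transaction transactionMode="TRANSACTIONAL"
                lockingMode="OPTIMISTIC"
                transactionManagerLookupClass="org.infinispan.transaction.lookup.GenericTransactionManagerLookup"
                useSynchronization="true"
                autoCommit="true"
                use1PcForAutoCommitTransactions="true"
                cacheStopTimeout="30000">
      <!-- recovery requires XA registration, so it stays disabled here -->
      <recovery enabled="false"/>
   </transaction>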
Configures custom interceptors to be added to the cache.
Dictates that the custom interceptor appears immediately after the specified interceptor. If the specified interceptor is not found in the interceptor chain, a ConfigurationException will be thrown when the cache starts.
Dictates that the custom interceptor appears immediately before the specified interceptor. If the specified interceptor is not found in the interceptor chain, a ConfigurationException will be thrown when the cache starts.
A fully qualified class name of the new custom interceptor to add to the configuration.
Specifies a position in the interceptor chain to place the new interceptor. The index starts at 0 and goes up to the number of interceptors in a given configuration. A ConfigurationException is thrown if the index is less than 0 or greater than the maximum number of interceptors in the chain.
Specifies a position where to place the new interceptor. Allowed values are FIRST, LAST, and OTHER_THAN_FIRST_OR_LAST
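For instance, custom interceptors could be positioned like this; the interceptor classes are hypothetical, while TxInterceptor is a standard Infinispan interceptor used here as an anchor.

   <customInterceptors>
      <!-- exactly one positioning attribute (position, index, before or after) per interceptor -->
      <interceptor position="FIRST" class="com.example.AuditInterceptor"/>
      <interceptor after="org.infinispan.interceptors.TxInterceptor"
                   class="com.example.MetricsInterceptor"/>
   </customInterceptors>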
Controls the data container for the cache.
Properties passed to the data container
Fully qualified class name of the data container to use
Controls the eviction settings for the cache.
Maximum number of entries in a cache instance. Cache size is guaranteed not to exceed the upper limit specified by max entries. However, due to the nature of eviction it is unlikely to ever be exactly the maximum number of entries specified here.
Eviction strategy. Available options are 'UNORDERED', 'LRU', 'LIRS' and 'NONE' (to disable eviction).
Threading policy for eviction.
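A minimal eviction setup using the options above might read (the entry limit is an illustrative value):

   <eviction strategy="LRU" maxEntries="10000" threadPolicy="DEFAULT"/>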
Controls the default expiration settings for entries in the cache.
Interval (in milliseconds) between subsequent runs to purge expired entries from memory and any cache stores. If you wish to disable the periodic eviction process altogether, set wakeupInterval to -1.
Maximum lifespan of a cache entry, after which the entry is expired cluster-wide, in milliseconds. -1 means the entries never expire. Note that this can be overridden on a per-entry basis by using the Cache API.
Maximum idle time a cache entry will be maintained in the cache, in milliseconds. If the idle time is exceeded, the entry will be expired cluster-wide. -1 means the entries never expire. Note that this can be overridden on a per-entry basis by using the Cache API.
Determines whether the background reaper thread is enabled to test entries for expiration. Regardless of whether a reaper is used, entries are tested for expiration lazily when they are touched.
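As a sketch, an expiration block combining these settings (all durations illustrative; note that attribute casing should match the schema, e.g. wakeUpInterval):

   <expiration wakeUpInterval="60000"
               lifespan="300000"
               maxIdle="120000"
               reaperEnabled="true"/>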
Controls certain tuning parameters that may break some of Infinispan's public API contracts in exchange for better performance in some cases. Use with care, only after thoroughly reading and understanding the documentation about a specific feature.
Specifies whether Infinispan is allowed to disregard the Map contract when providing return values for org.infinispan.Cache#put(Object, Object) and org.infinispan.Cache#remove(Object) methods.
Controls whether entries are versioned. Versioning is necessary, for example, when using optimistic transactions in a clustered environment, to be able to perform write-skew checks.
Determines whether versioning is enabled or disabled for this cache. Defaults to disabled
The scheme to use when versioning entries. Can be either SIMPLE or NONE. Defaults to NONE
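For example, enabling versioning for optimistic clustered transactions could look like:

   <versioning enabled="true" versioningScheme="SIMPLE"/>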
Defines clustered characteristics of the cache.
If configured, all communications are synchronous, in that whenever a thread sends a message over the wire, it blocks until it receives an acknowledgment from the recipient. Sync configuration is mutually exclusive with async configuration.
This is the timeout used to wait for an acknowledgment when making a remote call, after which the call is aborted and an exception is thrown.
Configures how state is transferred when a cache joins or leaves the cluster. Used in distributed and replication clustered modes.
If > 0, the state will be transferred in batches of 'chunkSize' cache entries. If <= 0, the state will be transferred all at once, which is not recommended.
If true, this will cause the cache to ask neighboring caches for state when it starts up, so the cache starts 'warm', although it will impact startup time.
If true, this will cause the first call to method CacheManager.getCache() on the joiner node to block and wait until the joining is complete and the cache has finished receiving state from neighboring caches (if fetchInMemoryState is enabled). This option applies to distributed and replicated caches only and is enabled by default. Please note that setting this to false will make the cache object available immediately, but any access to keys that should be available locally but have not yet been transferred will actually cause a (transparent) remote access. While this will not have any impact on the logic of your application, it might impact performance.
This is the maximum amount of time - in milliseconds - to wait for state from neighboring caches, before throwing an exception and aborting startup.
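These clustering and state transfer options might combine as in the following sketch for a distributed cache (timeouts and chunk size are illustrative):

   <clustering mode="dist">
      <sync replTimeout="15000"/>
      <stateTransfer fetchInMemoryState="true"
                     awaitInitialTransfer="true"
                     timeout="240000"
                     chunkSize="10000"/>
   </clustering>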
Configures the L1 cache behavior in 'distributed' caches instances. In any other cache modes, this element is ignored.
Enable the L1 cache.
Determines whether a multicast or a web of unicasts is used when performing L1 invalidations. By default multicast will be used. If the threshold is set to -1, unicasts will always be used. If the threshold is set to 0, multicast will always be used.
Maximum lifespan of an entry placed in the L1 cache.
Controls how often a cleanup task to prune L1 tracking data is run.
If enabled, entries removed due to a rehash will be moved to L1 rather than being removed altogether.
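An L1 section for a distributed cache using these knobs might be sketched as follows (values are illustrative):

   <clustering mode="dist">
      <l1 enabled="true"
          invalidationThreshold="0"
          lifespan="600000"
          cleanupTaskFrequency="600000"
          onRehash="true"/>
   </clustering>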
If configured, all communications are asynchronous, in that whenever a thread sends a message over the wire, it does not wait for an acknowledgment before returning. Asynchronous configuration is mutually exclusive with synchronous configuration.
If configured, all communications are asynchronous, in that whenever a thread sends a message over the wire, it does not wait for an acknowledgment before returning. The <async> element is mutually exclusive with the <sync> element.
If true, asynchronous marshalling is enabled which means that caller can return even quicker, but it can suffer from reordering of operations. You can find more information at https://docs.jboss.org/author/display/ISPN/Asynchronous+Options
The replication queue in use, by default ReplicationQueueImpl.
If useReplQueue is set to true, this attribute controls how often the asynchronous thread used to flush the replication queue runs.
If useReplQueue is set to true, this attribute can be used to trigger flushing of the queue when it reaches a specific threshold.
If true, forces all async communications to be queued up and sent out periodically as a batch.
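Combined, an asynchronous clustering block with a replication queue might look like this sketch (interval and threshold values are illustrative):

   <clustering mode="repl">
      <async asyncMarshalling="false"
             useReplQueue="true"
             replQueueInterval="100"
             replQueueMaxElements="200"
             replQueueClass="org.infinispan.remoting.ReplicationQueueImpl"/>
   </clustering>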
Allows fine-tuning of rehashing characteristics. Must only be used with the 'distributed' cache mode.
The factory to use for generating the consistent hash. Must implement org.infinispan.distribution.ch.ConsistentHashFactory.
The hash function used as a bit spreader and a general hash code generator. Must implement org.infinispan.commons.hash.Hash.
Number of cluster-wide replicas for each cache entry.
Controls the total number of hash space segments (per cluster). Recommended value is 10 * max_cluster_size.
Cache mode. For distribution, set mode to 'dist'; for replication, use 'repl'; for invalidation, 'inv'. If the cache mode is set to 'local', the cache in question will not support clustering even if its cache manager does.
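For a distributed cache, the hash tuning described above might be sketched as follows; numSegments follows the 10 * max_cluster_size guideline for a cluster of about six nodes.

   <clustering mode="dist">
      <hash numOwners="2" numSegments="60"/>
   </clustering>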
Determines whether statistics are gathered and reported.
Enable or disable statistics gathering and reporting
Controls whether when stored in memory, keys and values are stored as references to their original objects, or in a serialized, binary format. There are benefits to both approaches, but often if used in a clustered mode, storing objects as binary means that the cost of serialization happens early on, and can be amortized. Further, deserialization costs are incurred lazily which improves throughput. It is possible to control this on a fine-grained basis: you can choose to just store keys or values as binary, or both.
Enables storing both keys and values as binary.
Specify whether keys are stored as binary or not.
Specify whether values are stored as binary or not.
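All three storeAsBinary flags together might read:

   <storeAsBinary enabled="true" storeKeysAsBinary="true" storeValuesAsBinary="true"/>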
Deprecated configuration element. Use storeAsBinary instead.
Configures deadlock detection.
Enable or disable deadlock detection
Time period that determines how often lock acquisition is attempted, within the maximum time allowed to acquire a particular lock.
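A deadlock detection block using these two settings might be sketched as (the spin duration is an illustrative value):

   <deadlockDetection enabled="true" spinDuration="1000"/>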
Configures indexing of entries in the cache for searching.
The Query engine relies on properties for configuration. These properties are passed directly to the embedded Hibernate Search engine, so for the complete and up to date documentation about available properties refer to the Hibernate Search reference of the version you're using with Infinispan Query.
Enable or disable indexing
If true, only index changes made locally, ignoring remote changes. This is useful if indexes are shared across a cluster to prevent redundant indexing of updates.
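An indexing section passing a property through to Hibernate Search might look like the sketch below; the directory provider shown is just one commonly documented Hibernate Search option.

   <indexing enabled="true" indexLocalOnly="true">
      <properties>
         <!-- forwarded verbatim to the embedded Hibernate Search engine -->
         <property name="hibernate.search.default.directory_provider" value="ram"/>
      </properties>
   </indexing>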
This element contains configuration options for additional modules which affect cache configuration
Add key/value property pair to this factory configuration. Example properties include "maxThreads", which sets the maximum number of threads for this executor, and "threadNamePrefix", which sets the thread name prefix for threads created by this executor. Default values can be found at https://docs.jboss.org/author/display/ISPN/Default+Values+For+Property+Based+Attributes.
Fully qualified class name of the ExecutorFactory to use. Must implement org.infinispan.executors.ExecutorFactory
The property name or key
The property value
Add key/value property pair to this factory configuration. Example properties include "maxThreads", which sets the maximum number of threads for this executor, and "threadNamePrefix", which sets the thread name prefix for threads created by this executor. Default values can be found at https://docs.jboss.org/author/display/ISPN/Default+Values+For+Property+Based+Attributes.
Fully qualified class name of the ScheduledExecutorFactory to use. Must implement org.infinispan.executors.ScheduledExecutorFactory
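As an example of property-based factory configuration, an asynchronous listener executor might be declared as below; the element name and factory class follow the global configuration conventions of this schema, and the property values are illustrative.

   <asyncListenerExecutor factory="org.infinispan.executors.DefaultExecutorFactory">
      <properties>
         <property name="maxThreads" value="5"/>
         <property name="threadNamePrefix" value="AsyncListenerThread"/>
      </properties>
   </asyncListenerExecutor>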
Determines whether this backup is taken offline (ignored) after a certain number of failures.
The number of failed request operations after which this site should be taken offline. Defaults to 0 (never). A negative value means the site will be taken offline after 'minTimeToWait' has elapsed.
The minimum number of millis to wait before taking this site offline, even when 'afterFailures' is reached. If smaller than or equal to 0, only 'afterFailures' is considered.
Name of the remote site where this cache backs up data.
The strategy used for backing up data: "SYNC" or "ASYNC". Defaults to "ASYNC"
Decides what the system does in case of failure during backup. Defaults to "FAIL".
The timeout (millis) to be used when backing up data remotely. Defaults to 10 seconds.
Whether a backup uses a 2PC cycle for SYNC backups. Defaults to "false". NOTE: not used for ASYNC backup strategies.
If 'backupFailurePolicy' is set to 'CUSTOM' then this attribute is required and should contain the fully qualified name of a class implementing org.infinispan.xsite.CustomFailurePolicy.
If 'false' then no data is backed up to this site. Defaults to 'true'.
The name of the local site. Must be one of the site names defined in child "site" elements.
Defines the locking modes that are available for transactional caches: optimistic or pessimistic - see http://community.jboss.org/wiki/OptimisticLockingInInfinispan for more.
Enumeration containing the available transaction modes for a cache.
Properties passed to the cache store or loader
The cache loader to configure and use
If true, all modifications to this cache store happen asynchronously, on a separate thread.
Timeout to acquire the lock which guards the state to be flushed to the cache store periodically.
Sets the size of the modification queue for the async store. If updates are made at a rate faster than the underlying cache store can process this queue, then the async store behaves like a synchronous store for that period, blocking until the queue can accept more elements.
Timeout to stop the cache store. When the store is stopped it's possible that some modifications still need to be applied; you likely want to set a very large timeout to make sure not to lose data.
Size of the thread pool whose threads are responsible for applying the modifications.
If pushStateWhenCoordinator is true, this property sets the maximum number of milliseconds that the process of pushing the in-memory state to the underlying cache loader may take.
If true, when a node becomes the coordinator, it will transfer in-memory state to the underlying cache store. This can be very useful in situations where the coordinator crashes and there's a gap in time until the new coordinator is elected.
If true, the singleton store cache store is enabled.
Configuration for the async cache loader. If enabled, this provides you with asynchronous writes to the cache store, giving you 'write-behind' caching.
SingletonStore is a delegating cache store used in situations when only one instance in a cluster should interact with the underlying store. The coordinator of the cluster will be responsible for the underlying CacheStore. SingletonStore is simply a facade to a real CacheStore implementation. It always delegates reads to the real CacheStore.
If true, fetch persistent state when joining a cluster. If multiple cache stores are chained, only one of them can have this property enabled. Persistent state transfer with a shared cache store does not make sense, as the same persistent store that provides the data will just end up receiving it. Therefore, if a shared cache store is used, the cache will not allow a persistent state transfer even if a cache store has this property set to true. Finally, setting it to true only makes sense in a clustered environment, and only the 'replication' and 'invalidation' cluster modes are supported.
If true, any operation that modifies the cache (put, remove, clear, store, etc.) won't be applied to the cache store. This means that the cache store could become out of sync with the cache.
If true, purges this cache store when it starts up.
If true, CacheStore#purgeExpired() call will be done synchronously
The number of threads to use when purging asynchronously.
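Drawing the async and singleton options together, a write-behind store owned by the cluster coordinator might be sketched as follows; the store class is hypothetical and all sizes/timeouts are illustrative.

   <loaders>
      <store class="com.example.SlowCacheStore"
             ignoreModifications="false"
             purgeOnStartup="false" purgeSynchronously="false" purgerThreads="3">
         <!-- write-behind: modifications are applied on separate threads -->
         <async enabled="true"
                flushLockTimeout="15000"
                modificationQueueSize="1024"
                shutdownTimeout="25000"
                threadPoolSize="5"/>
         <!-- only the coordinator talks to the underlying store -->
         <singletonStore enabled="true"
                         pushStateWhenCoordinator="true"
                         pushStateTimeout="20000"/>
      </store>
   </loaders>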
Maximum time to attempt a particular lock acquisition
Concurrency level for lock containers. Adjust this value according to the number of concurrent threads interacting with Infinispan. Similar to the concurrencyLevel tuning parameter seen in the JDK's ConcurrentHashMap.
The cache store to configure and use
How long to wait for results from other members in the cluster before returning null.
A location on disk where the store can write internal files. This defaults to Infinispan-FileCacheStore in the current working directory.
When writing state to disk, a buffered stream is used. This parameter allows you to tune the buffer size. Larger buffers are usually faster but take up more (temporary) memory, resulting in more GC. By default, this is set to 8192.
Configures how file changes will be synchronized with the underlying file system. This property has three possible values; the default mode is 'default'.
Specifies the interval after which file changes in the cache need to be flushed. This option only has effect when the periodic fsync mode is in use. The default fsync interval is 1 second.
Means that the file system will be synchronized when the OS buffer is full or when the bucket is read.
Configures the file cache store to sync up changes after each write request
Enables sync operations to happen at a defined interval, or when the bucket is about to be read.
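A FileCacheStore configured for periodic fsync might be sketched as below; the property names mirror the settings described above, and the location and interval values are illustrative.

   <loaders>
      <store class="org.infinispan.loaders.file.FileCacheStore">
         <properties>
            <property name="location" value="/var/lib/infinispan"/>
            <property name="streamBufferSize" value="8192"/>
            <!-- 'default', 'perWrite' or 'periodic' -->
            <property name="fsyncMode" value="periodic"/>
            <property name="fsyncInterval" value="1000"/>
         </properties>
      </store>
   </loaders>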
A simple versioning scheme that is cluster-aware
Don't version entries