org.tango.cache.ehcacheRef.xml (from JTangoServer)
Library for Tango Server (i.e. Tango Device) in Java
<?xml version="1.0" encoding="UTF-8"?>
<!--
Copyright (C) : 2012
    Synchrotron Soleil
    L'Orme des merisiers
    Saint Aubin
    BP48
    91192 GIF-SUR-YVETTE CEDEX

This file is part of Tango.

Tango is free software: you can redistribute it and/or modify
it under the terms of the GNU Lesser General Public License as published by
the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.

Tango is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU Lesser General Public License for more details.

You should have received a copy of the GNU Lesser General Public License
along with Tango. If not, see <http://www.gnu.org/licenses/>.
-->
<!--
CacheManager Configuration
==========================
An ehcache.xml corresponds to a single CacheManager.

See instructions below or the ehcache schema (ehcache.xsd) on how to configure.

System property tokens can be specified in this file which are replaced when the configuration
is loaded. For example multicastGroupPort=${multicastGroupPort} can be replaced with the
System property either from an environment variable or a system property specified with a
command line switch such as -DmulticastGroupPort=4446.

The attributes of <ehcache> are:
* name - an optional name for the CacheManager. The name is optional and primarily used
  for documentation or to distinguish Terracotta clustered cache state. With Terracotta
  clustered caches, a combination of CacheManager name and cache name uniquely identify a
  particular cache store in the Terracotta clustered memory.
* updateCheck - an optional boolean flag specifying whether this CacheManager should check
  for new versions of Ehcache over the Internet. If not specified, updateCheck="true".
* dynamicConfig - an optional setting that can be used to disable dynamic configuration of caches
  associated with this CacheManager. By default this is set to true, i.e. dynamic configuration
  is enabled. Dynamically configurable caches can have their TTI, TTL and maximum disk and
  in-memory capacity changed at runtime through the cache's configuration object.
* monitoring - an optional setting that determines whether the CacheManager should
  automatically register the SampledCacheMBean with the system MBean server. Currently,
  this monitoring is only useful when using Terracotta clustering and using the Terracotta
  Developer Console. With the "autodetect" value, the presence of Terracotta clustering will
  be detected and monitoring, via the Developer Console, will be enabled. Other allowed values
  are "on" and "off". The default is "autodetect". This setting does not perform any function
  when used with JMX monitors.
-->
<ehcache xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:noNamespaceSchemaLocation="ehcache.xsd"
         updateCheck="true" monitoring="autodetect"
         dynamicConfig="true">
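<!--
Illustrative note (editor's addition, not part of the original file): because
dynamicConfig="true" above, TTI/TTL and capacity limits can be changed at runtime
through a cache's configuration object. A minimal, hedged Java sketch against the
Ehcache 2.x API, using the sampleCache1 cache defined further below:

    import net.sf.ehcache.Cache;
    import net.sf.ehcache.CacheManager;

    public class DynamicConfigExample {
        public static void main(String[] args) {
            CacheManager manager = CacheManager.newInstance("ehcache.xml");
            Cache cache = manager.getCache("sampleCache1");
            // Adjust TTL/TTI dynamically, as permitted by dynamicConfig="true"
            cache.getCacheConfiguration().setTimeToLiveSeconds(900);
            cache.getCacheConfiguration().setTimeToIdleSeconds(450);
            manager.shutdown();
        }
    }
-->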
<!--
DiskStore configuration
=======================

The diskStore element is optional. To turn off disk store path creation, comment out the
diskStore element below.

Configure it if you have overflowToDisk or diskPersistent enabled for any cache.

If it is not configured, and a cache is created which requires a disk store, a warning
will be issued and java.io.tmpdir will automatically be used.

diskStore has only one attribute, "path". It is the path to the directory where
.data and .index files will be created.

If the path is one of the following Java System Properties, it is replaced by its value
in the running VM. For backward compatibility these should be specified without being
enclosed in the ${token} replacement syntax.

The following properties are translated:
* user.home - User's home directory
* user.dir - User's current working directory
* java.io.tmpdir - Default temp file path
* ehcache.disk.store.dir - A system property you would normally specify on the command line
  e.g. java -Dehcache.disk.store.dir=/u01/myapp/diskdir ...

Subdirectories can be specified below the property e.g. java.io.tmpdir/one
-->
<diskStore path="java.io.tmpdir"/>

<!--
TransactionManagerLookup configuration
======================================
This class is used by ehcache to look up the JTA TransactionManager used in the application
using an XA enabled ehcache. If no class is specified then DefaultTransactionManagerLookup
will find the TransactionManager in the following order:

* GenericJNDI (i.e. jboss, where the property jndiName controls the name of the
  TransactionManager object to look up)
* Websphere
* Bitronix
* Atomikos

You can provide your own lookup class that implements the
net.sf.ehcache.transaction.manager.TransactionManagerLookup interface.
-->
<transactionManagerLookup
        class="net.sf.ehcache.transaction.manager.DefaultTransactionManagerLookup"
        properties="jndiName=java:/TransactionManager" propertySeparator=";"/>

<!--
CacheManagerEventListener
=========================
Specifies a CacheManagerEventListenerFactory which is notified when Caches are added
or removed from the CacheManager.

The attributes of CacheManagerEventListenerFactory are:
* class - a fully qualified factory class name
* properties - comma separated properties having meaning only to the factory.

Sets the fully qualified class name to be registered as the CacheManager event listener.

The events include:
* adding a Cache
* removing a Cache

Callbacks to listener methods are synchronous and unsynchronized. It is the responsibility
of the implementer to safely handle the potential performance and thread safety issues
depending on what their listener is doing.

If no class is specified, no listener is created. There is no default.
-->
<cacheManagerEventListenerFactory class="" properties=""/>
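<!--
Illustrative note (editor's addition, not part of the original file): a hedged sketch
of how an application might load this file and create a cache from the mandatory
defaultCache defined further below. The cache name "programmaticCache" is an example
only; the API calls are standard Ehcache 2.x:

    import net.sf.ehcache.Cache;
    import net.sf.ehcache.CacheManager;
    import net.sf.ehcache.Element;

    public class LoadConfigExample {
        public static void main(String[] args) {
            CacheManager manager = CacheManager.newInstance("ehcache.xml");
            manager.addCache("programmaticCache"); // clones the defaultCache settings
            Cache cache = manager.getCache("programmaticCache");
            cache.put(new Element("key", "value"));
            Element hit = cache.get("key");
            System.out.println(hit == null ? "miss" : hit.getObjectValue());
            manager.shutdown();
        }
    }
-->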
<!--
CacheManagerPeerProvider
========================
(For distributed operation)

Specifies a CacheManagerPeerProviderFactory which will be used to create a
CacheManagerPeerProvider, which discovers other CacheManagers in the cluster.

One or more providers can be configured. The first one in the ehcache.xml is the default,
which is used for replication and bootstrapping.

The attributes of cacheManagerPeerProviderFactory are:
* class - a fully qualified factory class name
* properties - comma separated properties having meaning only to the factory.

Providers are available for RMI, JGroups and JMS as shown following.

RMICacheManagerPeerProvider
+++++++++++++++++++++++++++

Ehcache comes with a built-in RMI-based distribution system with two means of discovery
of CacheManager peers participating in the cluster:
* automatic, using a multicast group. This one automatically discovers peers and detects
  changes such as peers entering and leaving the group
* manual, using manual rmiURL configuration. A hardcoded list of peers is provided at
  configuration time.

Configuring Automatic Discovery:
Automatic discovery is configured as per the following example:
<cacheManagerPeerProviderFactory
        class="net.sf.ehcache.distribution.RMICacheManagerPeerProviderFactory"
        properties="hostName=fully_qualified_hostname_or_ip,
                    peerDiscovery=automatic, multicastGroupAddress=230.0.0.1,
                    multicastGroupPort=4446, timeToLive=32"/>

Valid properties are:
* peerDiscovery (mandatory) - specify "automatic"
* multicastGroupAddress (mandatory) - specify a valid multicast group address
* multicastGroupPort (mandatory) - specify a dedicated port for the multicast heartbeat
  traffic
* timeToLive - specify a value between 0 and 255 which determines how far the packets
  will propagate. By convention, the restrictions are:
  0   - the same host
  1   - the same subnet
  32  - the same site
  64  - the same region
  128 - the same continent
  255 - unrestricted
* hostName - the hostname or IP of the interface to be used for sending and receiving
  multicast packets (relevant to multi-homed hosts only)

Configuring Manual Discovery:
Manual discovery requires a unique configuration per host. It contains a list of rmiURLs
for the peers, other than itself. So, if we have server1, server2 and server3 the
configuration will be:

In server1's configuration:
<cacheManagerPeerProviderFactory class=
        "net.sf.ehcache.distribution.RMICacheManagerPeerProviderFactory"
        properties="peerDiscovery=manual,
        rmiUrls=//server2:40000/sampleCache1|//server3:40000/sampleCache1
        |//server2:40000/sampleCache2|//server3:40000/sampleCache2"
        propertySeparator="," />

In server2's configuration:
<cacheManagerPeerProviderFactory class=
        "net.sf.ehcache.distribution.RMICacheManagerPeerProviderFactory"
        properties="peerDiscovery=manual,
        rmiUrls=//server1:40000/sampleCache1|//server3:40000/sampleCache1
        |//server1:40000/sampleCache2|//server3:40000/sampleCache2"
        propertySeparator="," />

In server3's configuration:
<cacheManagerPeerProviderFactory class=
        "net.sf.ehcache.distribution.RMICacheManagerPeerProviderFactory"
        properties="peerDiscovery=manual,
        rmiUrls=//server1:40000/sampleCache1|//server2:40000/sampleCache1
        |//server1:40000/sampleCache2|//server2:40000/sampleCache2"
        propertySeparator="," />

Valid properties are:
* peerDiscovery (mandatory) - specify "manual"
* rmiUrls (mandatory) - specify a pipe separated list of rmiUrls, in the form
  //hostname:port. The hostname is the hostname of the remote CacheManager peer; the
  port is the listening port of the RMICacheManagerPeerListener of the remote
  CacheManager peer.

JGroupsCacheManagerPeerProvider
+++++++++++++++++++++++++++++++

<cacheManagerPeerProviderFactory
        class="net.sf.ehcache.distribution.jgroups.JGroupsCacheManagerPeerProviderFactory"
        properties="channel=ehcache^connect=UDP(mcast_addr=231.12.21.132;mcast_port=45566;ip_ttl=32;
                    mcast_send_buf_size=150000;mcast_recv_buf_size=80000):
                    PING(timeout=2000;num_initial_members=6):
                    MERGE2(min_interval=5000;max_interval=10000):
                    FD_SOCK:VERIFY_SUSPECT(timeout=1500):
                    pbcast.NAKACK(gc_lag=10;retransmit_timeout=3000):
                    UNICAST(timeout=5000):
                    pbcast.STABLE(desired_avg_gossip=20000):
                    FRAG:
                    pbcast.GMS(join_timeout=5000;join_retry_timeout=2000;shun=false;print_local_addr=false)"
        propertySeparator="^" />

JGroups configuration is done by providing a connect string using connect= as in the above
example which uses multicast, or since version 1.4, a file= to specify the location of a
JGroups configuration file. If neither a connect nor a file property is specified, the
default JGroups JChannel will be used.
Multiple JGroups clusters may be run on the same network by specifying a different
CacheManager name. The name is used as the cluster name.

JMSCacheManagerPeerProviderFactory
++++++++++++++++++++++++++++++++++

<cacheManagerPeerProviderFactory
        class="net.sf.ehcache.distribution.jms.JMSCacheManagerPeerProviderFactory"
        properties="..."
        propertySeparator="," />

The JMS PeerProviderFactory uses JNDI to maintain message queue independence. Refer to
the manual for full configuration examples using ActiveMQ and Open Message Queue.

Valid properties are:
* initialContextFactoryName (mandatory) - the name of the factory used to create the
  message queue initial context.
* providerURL (mandatory) - the JNDI configuration information for the service provider
  to use.
* topicConnectionFactoryBindingName (mandatory) - the JNDI binding name for the
  TopicConnectionFactory
* topicBindingName (mandatory) - the JNDI binding name for the topic name
* getQueueBindingName (mandatory only if using jmsCacheLoader) - the JNDI binding name
  for the queue name
* securityPrincipalName - the JNDI java.naming.security.principal
* securityCredentials - the JNDI java.naming.security.credentials
* urlPkgPrefixes - the JNDI java.naming.factory.url.pkgs
* userName - the user name to use when creating the TopicConnection to the Message Queue
* password - the password to use when creating the TopicConnection to the Message Queue
* acknowledgementMode - the JMS Acknowledgement mode for both publisher and subscriber.
  The available choices are AUTO_ACKNOWLEDGE, DUPS_OK_ACKNOWLEDGE and SESSION_TRANSACTED.
  The default is AUTO_ACKNOWLEDGE.
-->
<cacheManagerPeerProviderFactory
        class="net.sf.ehcache.distribution.RMICacheManagerPeerProviderFactory"
        properties="peerDiscovery=automatic,
                    multicastGroupAddress=230.0.0.1,
                    multicastGroupPort=4446, timeToLive=1"
        propertySeparator=","
        />

<!--
CacheManagerPeerListener
========================
(Enable for distributed operation)

Specifies a CacheManagerPeerListenerFactory which will be used to create a
CacheManagerPeerListener, which listens for messages from cache replicators participating
in the cluster.

The attributes of cacheManagerPeerListenerFactory are:
class - a fully qualified factory class name
properties - comma separated properties having meaning only to the factory.

Ehcache comes with a built-in RMI-based distribution system. The listener component is
RMICacheManagerPeerListener which is configured using RMICacheManagerPeerListenerFactory.
It is configured as per the following example:

<cacheManagerPeerListenerFactory
    class="net.sf.ehcache.distribution.RMICacheManagerPeerListenerFactory"
    properties="hostName=fully_qualified_hostname_or_ip,
                port=40001,
                remoteObjectPort=40002,
                socketTimeoutMillis=120000"
    propertySeparator="," />

All properties are optional. They are:
* hostName - the hostName of the host the listener is running on. Specify
  where the host is multihomed and you want to control the interface over which cluster
  messages are received. Defaults to the host name of the default interface if not
  specified.
* port - the port the RMI Registry listener listens on. This defaults to a free port if
  not specified.
* remoteObjectPort - the port number on which the remote objects bound in the registry
  receive calls. This defaults to a free port if not specified.
* socketTimeoutMillis - the number of ms client sockets will stay open when sending
  messages to the listener. This should be long enough for the slowest message.
  If not specified it defaults to 120000ms.
-->
<cacheManagerPeerListenerFactory
        class="net.sf.ehcache.distribution.RMICacheManagerPeerListenerFactory"/>
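<!--
Illustrative note (editor's addition, not part of the original file): as described in
the CacheManager configuration notes at the top of this file, ${token} system property
replacement can be applied to values here. A hedged example; the port value is an
assumption:

<cacheManagerPeerProviderFactory
        class="net.sf.ehcache.distribution.RMICacheManagerPeerProviderFactory"
        properties="peerDiscovery=automatic,
                    multicastGroupAddress=230.0.0.1,
                    multicastGroupPort=${multicastGroupPort},
                    timeToLive=1"
        propertySeparator=","/>

started with e.g. java -DmulticastGroupPort=4446 ...
-->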
<!--
TerracottaConfig
================
(Enable for Terracotta clustered operation)

Note: You need to install and run one or more Terracotta servers to use Terracotta
clustering. See http://www.terracotta.org/web/display/orgsite/Download.

Specifies a TerracottaConfig which will be used to configure the Terracotta
runtime for this CacheManager.

Configuration can be specified in two main ways: by reference to a source of
configuration or by use of an embedded Terracotta configuration file.

To specify a reference to a source (or sources) of configuration, use the url
attribute. The url attribute must contain a comma-separated list of:
* path to a Terracotta configuration file (usually named tc-config.xml)
* URL to a Terracotta configuration file
* <server host>:<port> of a running Terracotta Server instance

Simplest example for pointing to a Terracotta server on this machine:
<terracottaConfig url="localhost:9510"/>

Example using a path to a Terracotta configuration file:
<terracottaConfig url="/app/config/tc-config.xml"/>

Example using a URL to a Terracotta configuration file:
<terracottaConfig url="http://internal/ehcache/app/tc-config.xml"/>

Example using multiple Terracotta server instance URLs (for fault tolerance):
<terracottaConfig url="host1:9510,host2:9510,host3:9510"/>

To embed a Terracotta configuration file within the ehcache configuration, simply
place a normal Terracotta XML config within the <terracottaConfig> element.

Example:
<terracottaConfig>
    <tc-config>
        <servers>
            <server host="server1" name="s1"/>
            <server host="server2" name="s2"/>
        </servers>
        <clients>
            <logs>app/logs-%i</logs>
        </clients>
    </tc-config>
</terracottaConfig>

For more information on the Terracotta configuration, see the Terracotta documentation.
-->

<!--
Cache configuration
===================

The following attributes are required.

name:
Sets the name of the cache. This is used to identify the cache. It must be unique.

maxElementsInMemory:
Sets the maximum number of objects that will be created in memory. 0 = no limit.
In practice no limit means Integer.MAX_SIZE (2147483647) unless the cache is distributed
with a Terracotta server, in which case it is limited by resources.

maxElementsOnDisk:
Sets the maximum number of objects that will be maintained in the DiskStore.
The default value is zero, meaning unlimited.

eternal:
Sets whether elements are eternal. If eternal, timeouts are ignored and the
element is never expired.

overflowToDisk:
Sets whether elements can overflow to disk when the memory store
has reached the maxInMemory limit.

The following attributes and elements are optional.

overflowToOffHeap:
(boolean) This feature is available only in enterprise versions of Ehcache.
When set to true, enables the cache to utilize off-heap memory
storage to improve performance. Off-heap memory is not subject to Java
GC. The default value is false.

maxMemoryOffHeap:
(string) This feature is available only in enterprise versions of Ehcache.
Sets the amount of off-heap memory available to the cache.
This attribute's values are given as <number>k|K|m|M|g|G|t|T for
kilobytes (k|K), megabytes (m|M), gigabytes (g|G), or terabytes (t|T).
For example, maxMemoryOffHeap="2g" allots 2 gigabytes to off-heap memory.
This setting is in effect only if overflowToOffHeap is true.
Note that it is recommended to set maxElementsInMemory to at least 100 elements
when using an off-heap store, otherwise performance will be seriously degraded,
and a warning will be logged.

The minimum amount that can be allocated is 128MB. There is no maximum.

timeToIdleSeconds:
Sets the time to idle for an element before it expires,
i.e. the maximum amount of time between accesses before an element expires.
Is only used if the element is not eternal.
Optional attribute. A value of 0 means that an Element can idle for infinity.
The default value is 0.

timeToLiveSeconds:
Sets the time to live for an element before it expires,
i.e. the maximum time between creation time and when an element expires.
Is only used if the element is not eternal.
Optional attribute. A value of 0 means that an Element can live for infinity.
The default value is 0.

diskPersistent:
Whether the disk store persists between restarts of the Virtual Machine.
The default value is false.

diskExpiryThreadIntervalSeconds:
The number of seconds between runs of the disk expiry thread. The default value
is 120 seconds.

diskSpoolBufferSizeMB:
This is the size to allocate the DiskStore for a spool buffer. Writes are made
to this area and then asynchronously written to disk. The default size is 30MB.
Each spool buffer is used only by its cache. If you get OutOfMemory errors consider
lowering this value. To improve DiskStore performance consider increasing it. Trace
level logging in the DiskStore will show if puts are backing up.

clearOnFlush:
Whether the MemoryStore should be cleared when flush() is called on the cache.
By default, this is true, i.e. the MemoryStore is cleared.

statistics:
Whether to collect statistics. Note that this should be turned on if you are using
the Ehcache Monitor. By default statistics are turned off to favour raw performance.
To enable, set statistics="true".

memoryStoreEvictionPolicy:
Policy enforced upon reaching the maxElementsInMemory limit. The default policy is
Least Recently Used (specified as LRU). Other policies available are First In First
Out (specified as FIFO) and Least Frequently Used (specified as LFU).

copyOnRead:
Whether an Element is copied when being read from a cache.
By default this is false.

copyOnWrite:
Whether an Element is copied when being added to the cache.
By default this is false.

Cache elements can also contain sub elements which take the same format of a factory
class and properties. Defined sub-elements are:

* cacheEventListenerFactory - Enables registration of listeners for cache events, such
  as put, remove, update, and expire.

* bootstrapCacheLoaderFactory - Specifies a BootstrapCacheLoader, which is called by a
  cache on initialisation to prepopulate itself.

* cacheExtensionFactory - Specifies a CacheExtension, a generic mechanism to tie a class
  which holds a reference to a cache to the cache lifecycle.

* cacheExceptionHandlerFactory - Specifies a CacheExceptionHandler, which is called when
  cache exceptions occur.

* cacheLoaderFactory - Specifies a CacheLoader, which can be used both asynchronously and
  synchronously to load objects into a cache. More than one cacheLoaderFactory element
  can be added, in which case the loaders form a chain which are executed in order. If a
  loader returns null, the next in chain is called.

* copyStrategy - Specifies a fully qualified class which implements
  net.sf.ehcache.store.compound.CopyStrategy. This strategy will be used for copyOnRead
  and copyOnWrite in place of the default, which is serialization.
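Illustrative example (editor's addition, hedged): a cache combining several of the
attributes documented above. The cache name and all values here are examples only:

<cache name="quoteCache"
       maxElementsInMemory="5000"
       eternal="false"
       timeToIdleSeconds="300"
       timeToLiveSeconds="600"
       overflowToDisk="true"
       maxElementsOnDisk="100000"
       memoryStoreEvictionPolicy="LFU"
       statistics="true"/>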
Cache Event Listeners
+++++++++++++++++++++

All cacheEventListenerFactory elements can take an optional property listenFor that
describes which events will be delivered in a clustered environment. The listenFor
attribute has the following allowed values:
* all - the default is to deliver all local and remote events
* local - deliver only events originating in the current node
* remote - deliver only events originating in other nodes

Example of setting up a logging listener for local cache events:

<cacheEventListenerFactory class="my.company.log.CacheLogger"
    listenFor="local" />

RMI Cache Replication
+++++++++++++++++++++

Each cache that will be distributed needs to set a cache event listener which replicates
messages to the other CacheManager peers. For the built-in RMI implementation this is
done by adding a cacheEventListenerFactory element of type RMICacheReplicatorFactory to
each distributed cache's configuration as per the following example:

<cacheEventListenerFactory class="net.sf.ehcache.distribution.RMICacheReplicatorFactory"
     properties="replicateAsynchronously=true,
     replicatePuts=true,
     replicatePutsViaCopy=false,
     replicateUpdates=true,
     replicateUpdatesViaCopy=true,
     replicateRemovals=true,
     asynchronousReplicationIntervalMillis=<number of milliseconds>"
     propertySeparator="," />

The RMICacheReplicatorFactory recognises the following properties:

* replicatePuts=true|false - whether new elements placed in a cache are
  replicated to others. Defaults to true.

* replicatePutsViaCopy=true|false - whether the new elements are
  copied to other caches (true), or whether a remove message is sent. Defaults to true.

* replicateUpdates=true|false - whether new elements which override an
  element already existing with the same key are replicated. Defaults to true.

* replicateRemovals=true - whether element removals are replicated. Defaults to true.

* replicateAsynchronously=true | false - whether replications are
  asynchronous (true) or synchronous (false). Defaults to true.

* replicateUpdatesViaCopy=true | false - whether the new elements are
  copied to other caches (true), or whether a remove message is sent. Defaults to true.

* asynchronousReplicationIntervalMillis=<number of milliseconds> - The asynchronous
  replicator runs at a set interval of milliseconds. The default is 1000. The minimum
  is 10. This property is only applicable if replicateAsynchronously=true

JGroups Replication
+++++++++++++++++++

For JGroups replication this is done with:
<cacheEventListenerFactory
        class="net.sf.ehcache.distribution.jgroups.JGroupsCacheReplicatorFactory"
        properties="replicateAsynchronously=true, replicatePuts=true,
                    replicateUpdates=true, replicateUpdatesViaCopy=false,
                    replicateRemovals=true, asynchronousReplicationIntervalMillis=1000"/>

This listener supports the same properties as the RMICacheReplicatorFactory.

JMS Replication
+++++++++++++++

For JMS-based replication this is done with:
<cacheEventListenerFactory
      class="net.sf.ehcache.distribution.jms.JMSCacheReplicatorFactory"
      properties="replicateAsynchronously=true,
                  replicatePuts=true,
                  replicateUpdates=true,
                  replicateUpdatesViaCopy=true,
                  replicateRemovals=true,
                  asynchronousReplicationIntervalMillis=1000"
      propertySeparator=","/>

This listener supports the same properties as the RMICacheReplicatorFactory.

Cluster Bootstrapping
+++++++++++++++++++++

Bootstrapping a cluster may use a different mechanism from replication.
For example, you can mix JMS replication with bootstrap via RMI; just make sure you have
the cacheManagerPeerProviderFactory and cacheManagerPeerListenerFactory configured.

There are two bootstrapping mechanisms: RMI and JGroups.

RMI Bootstrap

The RMIBootstrapCacheLoader bootstraps caches in clusters where RMICacheReplicators are
used. It is configured as per the following example:

<bootstrapCacheLoaderFactory
    class="net.sf.ehcache.distribution.RMIBootstrapCacheLoaderFactory"
    properties="bootstrapAsynchronously=true, maximumChunkSizeBytes=5000000"
    propertySeparator="," />

The RMIBootstrapCacheLoaderFactory recognises the following optional properties:

* bootstrapAsynchronously=true|false - whether the bootstrap happens in the background
  after the cache has started. If false, bootstrapping must complete before the cache is
  made available. The default value is true.

* maximumChunkSizeBytes=<integer> - Caches can potentially be very large, larger than the
  memory limits of the VM. This property allows the bootstrapper to fetch elements in
  chunks. The default chunk size is 5000000 (5MB).

JGroups Bootstrap

Here is an example of bootstrap configuration using JGroups bootstrap:

<bootstrapCacheLoaderFactory
        class="net.sf.ehcache.distribution.jgroups.JGroupsBootstrapCacheLoaderFactory"
        properties="bootstrapAsynchronously=true"/>

The configuration properties are the same as for RMI above. Note that JGroups bootstrap
only supports asynchronous bootstrap mode.

Cache Exception Handling
++++++++++++++++++++++++

By default, most cache operations will propagate a runtime CacheException on failure. An
interceptor, using a dynamic proxy, may be configured so that a CacheExceptionHandler
intercepts Exceptions. Errors are not intercepted.

It is configured as per the following example:

<cacheExceptionHandlerFactory class="com.example.ExampleExceptionHandlerFactory"
                              properties="logLevel=FINE"/>

Caches with ExceptionHandling configured are not of type Cache, but are of type Ehcache
only, and are not available using CacheManager.getCache(), but using
CacheManager.getEhcache().

Cache Loader
++++++++++++

A default CacheLoader may be set which loads objects into the cache through asynchronous
and synchronous methods on Cache. This is different from the bootstrap cache loader,
which is used only in distributed caching.

It is configured as per the following example:

<cacheLoaderFactory class="com.example.ExampleCacheLoaderFactory"
                    properties="type=int,startCounter=10"/>

XA Cache
++++++++

To enable a cache to participate in JTA transactions, set the attribute
transactionalMode="xa"; otherwise the default is transactionalMode="off".
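Illustrative note (editor's addition, hedged): given a cacheLoaderFactory such as the
example above, values can be pulled through the registered loader(s) with
Cache.getWithLoader() from the Ehcache 2.x API. The key is an example only:

    import net.sf.ehcache.Cache;
    import net.sf.ehcache.CacheManager;
    import net.sf.ehcache.Element;

    public class LoaderExample {
        public static void main(String[] args) {
            CacheManager manager = CacheManager.newInstance("ehcache.xml");
            Cache cache = manager.getCache("sampleCache1");
            // Passing a null loader uses the loader(s) registered via cacheLoaderFactory
            Element loaded = cache.getWithLoader("someKey", null, null);
            System.out.println(loaded);
            manager.shutdown();
        }
    }

As explained above, caches with exception handling configured must instead be obtained
with CacheManager.getEhcache() rather than CacheManager.getCache().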
Cache Writer
++++++++++++

A CacheWriter may be set to write to an underlying resource. Only one CacheWriter can be
set per cache.

It is configured as per the following example for write-through:

<cacheWriter writeMode="write-through" notifyListenersOnException="true">
    <cacheWriterFactory class="net.sf.ehcache.writer.TestCacheWriterFactory"
                        properties="type=int,startCounter=10"/>
</cacheWriter>

And it is configured as per the following example for write-behind:

<cacheWriter writeMode="write-behind" minWriteDelay="1" maxWriteDelay="5"
             rateLimitPerSecond="5" writeCoalescing="true" writeBatching="true"
             writeBatchSize="1" retryAttempts="2" retryAttemptDelaySeconds="1">
    <cacheWriterFactory class="net.sf.ehcache.writer.TestCacheWriterFactory"
                        properties="type=int,startCounter=10"/>
</cacheWriter>

The cacheWriter element has the following attributes:
* writeMode: the write mode, write-through or write-behind

These attributes only apply to write-through mode:
* notifyListenersOnException: Sets whether to notify listeners when an exception occurs
  on a writer operation.

These attributes only apply to write-behind mode:
* minWriteDelay: Sets the minimum number of seconds to wait before writing behind. If set
  to a value greater than 0, it permits operations to build up in the queue. This is
  different from the maximum write delay in that by waiting a minimum amount of time, work
  is always being built up. If the minimum write delay is set to zero and the CacheWriter
  performs its work very quickly, the overhead of processing the write behind queue items
  becomes very noticeable in a cluster since all the operations might be done for
  individual items instead of for a collection of them.
* maxWriteDelay: Sets the maximum number of seconds to wait before writing behind. If set
  to a value greater than 0, it permits operations to build up in the queue to enable
  effective coalescing and batching optimisations.
* writeBatching: Sets whether to batch write operations. If set to true, writeAll and
  deleteAll will be called on the CacheWriter rather than write and delete being called
  for each key. Resources such as databases can perform more efficiently if updates are
  batched, thus reducing load.
* writeBatchSize: Sets the number of operations to include in each batch when
  writeBatching is enabled. If there are fewer entries in the write-behind queue than the
  batch size, the queue length is used.
* rateLimitPerSecond: Sets the maximum number of write operations to allow per second
  when writeBatching is enabled.
* writeCoalescing: Sets whether to use write coalescing. If set to true and multiple
  operations on the same key are present in the write-behind queue, only the latest write
  is done, as the others are redundant.
* retryAttempts: Sets the number of times the operation is retried in the CacheWriter;
  this happens after the original operation.
* retryAttemptDelaySeconds: Sets the number of seconds to wait before retrying a failed
  operation.
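Illustrative note (editor's addition, hedged): with a cacheWriter configured, writes that
should reach the underlying resource go through the writer variants of the cache
operations in the Ehcache 2.x API. A minimal sketch; "writerCache" is a hypothetical
cache assumed to carry a <cacheWriter> element:

    import net.sf.ehcache.Cache;
    import net.sf.ehcache.CacheManager;
    import net.sf.ehcache.Element;

    public class WriterExample {
        public static void main(String[] args) {
            CacheManager manager = CacheManager.newInstance("ehcache.xml");
            Cache cache = manager.getCache("writerCache"); // hypothetical name
            // Propagated to the CacheWriter per the configured writeMode
            cache.putWithWriter(new Element("key", "value"));
            cache.removeWithWriter("key");
            manager.shutdown();
        }
    }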
Cache Extension
+++++++++++++++

CacheExtensions are a general purpose mechanism to allow generic extensions to a Cache.
CacheExtensions are tied into the Cache lifecycle.

CacheExtensions are created using the CacheExtensionFactory, which has a
<code>createCacheExtension()</code> method that takes as parameters a Cache and
properties. It can thus call back into any public method on Cache, including, of course,
the load methods.

Extensions are added as per the following example:

<cacheExtensionFactory class="com.example.FileWatchingCacheRefresherExtensionFactory"
                       properties="refreshIntervalMillis=18000, loaderTimeout=3000,
                                   flushPeriod=whatever, someOtherProperty=someValue ..."/>

Cache Decorator Factory
+++++++++++++++++++++++

Cache decorators can be configured directly in ehcache.xml. The decorators will be
created and added to the CacheManager.

It accepts the name of a concrete class that extends
net.sf.ehcache.constructs.CacheDecoratorFactory.

The properties will be parsed according to the delimiter (default is comma ',') and
passed to the concrete factory's
<code>createDecoratedEhcache(Ehcache cache, Properties properties)</code> method along
with the reference to the owning cache.

It is configured as per the following example:

<cacheDecoratorFactory
    class="net.sf.ehcache.constructs.nonstop.NonStopCacheDecoratorFactory"
    properties="name=localReadsCacheDecorator, timeoutBehavior=localReads ..." />
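Illustrative note (editor's addition, hedged): a decorated cache configured with a name
property, as in the example above, is added to the CacheManager and is expected to be
retrievable under that name. Decorated caches are Ehcache instances, so getEhcache() is
used rather than getCache():

    import net.sf.ehcache.CacheManager;
    import net.sf.ehcache.Ehcache;

    public class DecoratorExample {
        public static void main(String[] args) {
            CacheManager manager = CacheManager.newInstance("ehcache.xml");
            Ehcache decorated = manager.getEhcache("localReadsCacheDecorator");
            System.out.println(decorated);
            manager.shutdown();
        }
    }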
Terracotta Clustering
+++++++++++++++++++++

Cache elements can also contain information about whether the cache can be clustered
with Terracotta.

The <terracotta> sub-element has the following attributes:

* clustered=true|false - indicates whether this cache should be clustered with
  Terracotta. By default, if the <terracotta> element is included, clustered=true.

* valueMode=serialization|identity - indicates whether this cache should be clustered
  with serialized copies of the values or using Terracotta identity mode. By default,
  values will be cached in serialization mode, which is similar to other replicated
  Ehcache modes. The identity mode is only available in certain Terracotta deployment
  scenarios and will maintain actual object identity of the keys and values across the
  cluster. In this case, all users of a value retrieved from the cache are using the same
  clustered value and must provide appropriate locking for any changes made to the value
  (or objects referred to by the value).

* copyOnRead=true|false - indicates whether cache values are deserialized on every read
  or if the materialized cache value can be re-used between get() calls. This setting is
  useful if a cache is being shared by callers with disparate classloaders, or to prevent
  local drift if keys/values are mutated locally without being put back in the cache.
  NOTE: This setting is only relevant for caches with valueMode=serialization

* coherent=true|false - indicates whether this cache should have coherent reads and
  writes with guaranteed consistency across the cluster. By default, its value is true.
  If this attribute is set to false (or "incoherent" mode), values from the cache are
  read without locking, possibly yielding stale data. Writes to a cache in incoherent
  mode are batched and applied without acquiring cluster-wide locks, possibly creating
  inconsistent values across the cluster. Incoherent mode is a performance optimization
  with weaker concurrency guarantees and should generally be used for bulk-loading
  caches, for loading a read-only cache, or where the application can tolerate reading
  stale data. This setting overrides coherentReads, which is deprecated.

* synchronousWrites=true|false - When set to true, clustered caches use Terracotta
  SYNCHRONOUS WRITE locks. Asynchronous writes (synchronousWrites="false") maximize
  performance by allowing clients to proceed without waiting for a "transaction received"
  acknowledgement from the server. Synchronous writes (synchronousWrites="true") maximize
  data safety by requiring that a client receive server acknowledgement of a transaction
  before that client can proceed. If coherence mode is disabled using configuration
  (coherent="false") or through the coherence API, only asynchronous writes can occur
  (synchronousWrites="true" is ignored). By default this value is false (i.e. clustered
  caches use normal Terracotta WRITE locks).

* storageStrategy=classic|DCV2 - Can be used to configure how the key/values of the
  cache are stored. The key/values can be stored locally in the VM or stored in the
  Terracotta Server Array. The "classic" value for the attribute stores the key/values
  in the local VM, and "DCV2" stores them in the Terracotta Server Array. This attribute
  is optional and the default value is "classic".

* concurrency - A non-negative integer that can be used to specify the number of
  segments that will be used by the map underneath the Terracotta Store. It is optional
  and has a default value of 0, which means that default values based on the internal
  Map being used underneath the store will apply. This value cannot be changed
  programmatically once the cache is initialized.

Simplest example to indicate clustering:
<terracotta/>

To indicate the cache should not be clustered (or remove the <terracotta> element
altogether):
<terracotta clustered="false"/>

To indicate the cache should be clustered using identity mode:
<terracotta clustered="true" valueMode="identity"/>

To indicate the cache should be clustered using incoherent mode for bulk load:
<terracotta clustered="true" coherent="false"/>

To indicate the cache should be clustered using synchronous-write locking level:
<terracotta clustered="true" synchronousWrites="true"/>

To indicate the key/values should be stored in the Terracotta Server Array:
<terracotta clustered="true" storageStrategy="DCV2"/>
-->

<!--
Mandatory Default Cache configuration. These settings will be applied to caches
created programmatically using CacheManager.addCache(String cacheName).

The defaultCache has an implicit name "default", which is a reserved cache name.
-->
<defaultCache
        maxElementsInMemory="10000"
        eternal="false"
        timeToIdleSeconds="120"
        timeToLiveSeconds="120"
        overflowToDisk="true"
        diskSpoolBufferSizeMB="30"
        maxElementsOnDisk="10000000"
        diskPersistent="false"
        diskExpiryThreadIntervalSeconds="120"
        memoryStoreEvictionPolicy="LRU"
        statistics="false"
        />

<!--
Sample caches. Following are some example caches. Remove these before use.
-->

<!--
Sample cache named sampleCache1
This cache contains a maximum in memory of 10000 elements, and will expire
an element if it is idle for more than 5 minutes or lives for more than
10 minutes.

If there are more than 10000 elements it will overflow to the
disk cache, which in this configuration will go to wherever java.io.tmpdir is
defined on your system. On a standard Linux system this will be /tmp.
-->
<cache name="sampleCache1"
       maxElementsInMemory="10000"
       maxElementsOnDisk="1000"
       eternal="false"
       overflowToDisk="true"
       diskSpoolBufferSizeMB="20"
       timeToIdleSeconds="300"
       timeToLiveSeconds="600"
       memoryStoreEvictionPolicy="LFU"
       transactionalMode="off"
        />

<!--
Sample cache named sampleCache2
This cache has a maximum of 1000 elements in memory. There is no overflow to disk, so
1000 is also the maximum cache size. Note that when a cache is eternal, timeToLive and
timeToIdle are not used and do not need to be specified.
-->
<cache name="sampleCache2"
       maxElementsInMemory="1000"
       eternal="true"
       overflowToDisk="false"
       memoryStoreEvictionPolicy="FIFO"
        />

<!--
Sample cache named sampleCache3. This cache overflows to disk. The disk store is
persistent between cache and VM restarts. The disk expiry thread interval is set to
1 second, overriding the default of 2 minutes.
-->
<cache name="sampleCache3"
       maxElementsInMemory="500"
       eternal="false"
       overflowToDisk="true"
       timeToIdleSeconds="300"
       timeToLiveSeconds="600"
       diskPersistent="true"
       diskExpiryThreadIntervalSeconds="1"
       memoryStoreEvictionPolicy="LFU"
        />

<!--
Sample distributed cache named sampleDistributedCache1.
This cache replicates using defaults. It also bootstraps from the cluster, using
default properties.
-->
<cache name="sampleDistributedCache1"
       maxElementsInMemory="10"
       eternal="false"
       timeToIdleSeconds="100"
       timeToLiveSeconds="100"
       overflowToDisk="false">
    <cacheEventListenerFactory
            class="net.sf.ehcache.distribution.RMICacheReplicatorFactory"/>
    <bootstrapCacheLoaderFactory
            class="net.sf.ehcache.distribution.RMIBootstrapCacheLoaderFactory"/>
</cache>

<!--
Sample distributed cache named sampleDistributedCache2.
This cache replicates using specific properties. It only replicates updates and does so
synchronously via copy.
-->
<cache name="sampleDistributedCache2"
       maxElementsInMemory="10"
       eternal="false"
       timeToIdleSeconds="100"
       timeToLiveSeconds="100"
       overflowToDisk="false">
    <cacheEventListenerFactory
            class="net.sf.ehcache.distribution.RMICacheReplicatorFactory"
            properties="replicateAsynchronously=false, replicatePuts=false,
                        replicatePutsViaCopy=false, replicateUpdates=true,
                        replicateUpdatesViaCopy=true, replicateRemovals=false"/>
</cache>

<!--
Sample distributed cache named sampleDistributedCache3.
This cache replicates using defaults except that the asynchronous replication
interval is set to 200ms. This name includes a "/", which was illegal in ehcache 1.5.
-->
<cache name="sample/DistributedCache3"
       maxElementsInMemory="10"
       eternal="false"
       timeToIdleSeconds="100"
       timeToLiveSeconds="100"
       overflowToDisk="true">
    <cacheEventListenerFactory
            class="net.sf.ehcache.distribution.RMICacheReplicatorFactory"
            properties="asynchronousReplicationIntervalMillis=200"/>
</cache>

<!--
Sample Terracotta clustered cache named sampleTerracottaCache.
This cache uses Terracotta to cluster the contents of the cache.
-->
<!--
<cache name="sampleTerracottaCache"
       maxElementsInMemory="1000"
       eternal="false"
       timeToIdleSeconds="3600"
       timeToLiveSeconds="1800"
       overflowToDisk="false">
    <terracotta/>
</cache>
-->

<!--
Sample xa enabled cache named xaCache

<cache name="xaCache"
       maxElementsInMemory="500"
       eternal="false"
       timeToIdleSeconds="300"
       timeToLiveSeconds="600"
       overflowToDisk="false"
       diskPersistent="false"
       diskExpiryThreadIntervalSeconds="1"
       transactionalMode="xa">
</cache>
-->

<!--
Sample copy on both read and write cache named copyCache using the default
(explicitly configured here as an example) SerializationCopyStrategy class;
could be any implementation of net.sf.ehcache.store.compound.CopyStrategy

<cache name="copyCache"
       maxElementsInMemory="500"
       eternal="false"
       timeToIdleSeconds="300"
       timeToLiveSeconds="600"
       overflowToDisk="false"
       diskPersistent="false"
       diskExpiryThreadIntervalSeconds="1"
       copyOnRead="true"
       copyOnWrite="true">
    <copyStrategy class="net.sf.ehcache.store.compound.SerializationCopyStrategy" />
</cache>
-->

</ehcache>
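<!--
Illustrative note (editor's addition, not part of the original file): for caches
configured with diskPersistent="true", such as sampleCache3 above, the disk store index
is written out on a clean shutdown, so applications should call CacheManager.shutdown()
(or register a shutdown hook) before the VM exits. A hedged Java 8+ sketch:

    import net.sf.ehcache.CacheManager;

    public class ShutdownExample {
        public static void main(String[] args) {
            CacheManager manager = CacheManager.newInstance("ehcache.xml");
            // Ensure the disk stores are flushed and indexed on VM exit
            Runtime.getRuntime().addShutdownHook(new Thread(manager::shutdown));
        }
    }
-->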