coherence-cache-config.xsd (com.oracle.coherence.ce:coherence:14.1.1-0-17)
Oracle Coherence Community Edition
Copyright (c) 2000, 2021, Oracle and/or its affiliates.
Licensed under the Universal Permissive License v 1.0 as shown at
http://oss.oracle.com/licenses/upl.
This is the XML schema for the Coherence cache configuration file
(coherence-cache-config.xml).
The cache-config element is the root element of the
cache-config descriptor.
Note: scope-name as a child element of cache-config is deprecated.
scope-name should be used in the defaults element instead.
The scope-name element contains the scope name for this configuration.
The scope name is typically added to the service name (as a prefix)
for all services generated by a cache factory. Scope may be used to isolate
services indicated in this cache configuration from services created
by cache factories with other configurations, thus avoiding unintended
joining of services with similar names from different configurations.
scope-name may also be set on a service-by-service basis.
Note: The use of scope-name in the cache-config element is deprecated.
The defaults element is the recommended parent element for scope-name
when setting the default scope name for this configuration.
Used in: cache-config, defaults, proxy-scheme, remote-cache-scheme,
remote-invocation-scheme, distributed-scheme-type,
optimistic-scheme, replicated-scheme, local-scheme
The defaults element defines factory-wide default settings.
Used in: cache-config
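For illustration, a configuration that sets the default scope name in the defaults
element (rather than in the deprecated cache-config placement) might look like the
following sketch; the scope value "accounts" is illustrative and the mapping and
scheme sections are omitted:
<cache-config>
  <defaults>
    <scope-name>accounts</scope-name>
  </defaults>
  <!-- caching-scheme-mapping and caching-schemes omitted -->
</cache-config>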
The caching-scheme-mapping element contains the bindings between the
cache names and the caching schemes specified for the caches to use.
Used in: cache-config
The cache-mapping element contains a single binding between a cache
name and a caching scheme this cache will use. The following cache name
patterns are supported:
- exact match, e.g. "MyCache"
- prefix match, e.g. "My*", which matches any cache name starting with "My"
- any match "*", which matches any cache name.
Starting with Coherence 3.0 the cache-mapping element allows specifying
replaceable cache scheme parameters by supplying any number of
"init-param" elements.
During cache scheme parsing, any occurrence of any replaceable parameter
in format "{parameter-name}" is replaced with the corresponding
parameter value.
Consider the following cache mapping example:
<cache-mapping>
  <cache-name>My*</cache-name>
  <scheme-name>my-scheme</scheme-name>
  <init-params>
    <init-param>
      <param-name>cache-loader</param-name>
      <param-value>com.acme.MyCacheLoader</param-value>
    </init-param>
    <init-param>
      <param-name>size-limit</param-name>
      <param-value>1000</param-value>
    </init-param>
  </init-params>
</cache-mapping>
For any matching cache name, any occurrence of the literal
"{cache-loader}" in any part of the corresponding cache-scheme
element will be replaced with the string "com.acme.MyCacheLoader"
and any occurrence of the literal "{size-limit}" will be
replaced with the value of "1000".
Used in: caching-scheme-mapping
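For illustration, the "my-scheme" scheme referenced by the cache mapping above might
consume those parameters as in the following sketch; the use of high-units and a
cachestore-scheme/class-scheme (documented later in this file) to consume the macros
is an assumption about how the scheme is defined:
<local-scheme>
  <scheme-name>my-scheme</scheme-name>
  <high-units>{size-limit}</high-units>
  <cachestore-scheme>
    <class-scheme>
      <class-name>{cache-loader}</class-name>
    </class-scheme>
  </cachestore-scheme>
</local-scheme>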
The topic-scheme-mapping element contains the bindings between the
topic names and the topic schemes specified for the topic to use.
Used in: cache-config
The topic-mapping element contains a single binding between a topic
name and a topic scheme this topic will use. The following topic name
patterns are supported:
- exact match, e.g. "MyTopic"
- prefix match, e.g. "My*", which matches any topic name starting with "My"
- any match "*", which matches any topic name.
The topic-mapping element allows specifying
replaceable topic scheme parameters by supplying any number of
"init-param" elements.
During topic scheme parsing, any occurrence of any replaceable parameter
in format "{parameter-name}" is replaced with the corresponding
parameter value.
Consider the following topic mapping example:
<topic-mapping>
  <topic-name>My*</topic-name>
  <scheme-name>my-scheme</scheme-name>
  <value-type>String</value-type>
  <init-params>
    <init-param>
      <param-name>page-size</param-name>
      <param-value>10MB</param-value>
    </init-param>
  </init-params>
</topic-mapping>
Any occurrence of the literal "{page-size}" will be replaced with the value of "10MB".
Used in: topic-scheme-mapping
The caching-schemes element contains the definitions of all the
available caching schemes. Caching schemes can be defined from
scratch or configured to use other caching schemes and override
some of the characteristics of the schemes they use. Specifying
scheme-name allows for other schemes to refer to a scheme by its
unique name. Specifying scheme-ref allows for the scheme
to inherit all the characteristics defined in the base scheme whose
scheme-name is referred to by the scheme-ref element, overriding any
subset of its settings.
Used in: cache-config
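For illustration, scheme-name and scheme-ref could be combined as in the following
sketch, where the second scheme inherits all settings of the first and overrides only
high-units; the names and values are illustrative:
<local-scheme>
  <scheme-name>base-local</scheme-name>
  <high-units>1000</high-units>
</local-scheme>
<local-scheme>
  <scheme-name>large-local</scheme-name>
  <scheme-ref>base-local</scheme-ref>
  <high-units>100000</high-units>
</local-scheme>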
The clustered-caching-scheme element represents the group of
caching schemes that are clustered services.
Includes: distributed-scheme, federated-scheme, replicated-scheme,
transactional-scheme, optimistic-scheme
The distributed-scheme-type type contains the caching scheme
configuration info that is shared by distributed schemes.
Extended by: distributed-scheme and federated-scheme
The distributed-scheme element is a distributed-scheme-type that contains
the distributed caching scheme configuration info.
Used in: clustered-caching-scheme
The federated element is used to mark if a cache which is mapped
to federated-scheme should be replicated.
Valid values are true or false.
Default value is true.
Used in: cache-mapping
The federated-scheme element is a distributed-scheme-type that contains
the federated caching scheme configuration info.
Used in: clustered-caching-scheme
The paged-topic-scheme element contains the topic caching scheme
configuration info.
Used in: caching-schemes
The journalcache-highunits element contains either a memory limit or a maximum number
for entries that the federated cache service's internal cache will hold in its backlog
for replication to destination participants. The element provides a mechanism to constrain
resources utilized by federation service internal caches. Once the journalcache-highunits
is reached, the federation service will move all the destination participants to the ERROR
state and will remove all pending entries from federation's internal backlog cache.
Valid values are memory values (e.g. "1G") or positive integers and zero. A memory value
is treated as a memory limit on federation's backlog. If no units are specified, then
the value is treated as a limit on the number of entries in the backlog.
Zero implies no limit. The default value is 0.
Used in: federated-scheme
The send-old-value flag indicates if the federation service should include the old
values when replicating updated cache entries to the remote participants.
Valid values are true or false.
Default value is true.
Used in: federated-scheme
The transactional-scheme element contains the transactional caching
scheme configuration info.
Used in: clustered-caching-scheme
The replicated-scheme element contains the replicated caching scheme
configuration info.
As of 12.2.1.4, this feature has been deprecated in favor of view-scheme
which offers the benefits of both replicated and distributed schemes.
Used in: clustered-caching-scheme
The backing-map-scheme definition for replicated-scheme and optimistic-scheme
differs from the one used in distributed-scheme-type and is therefore defined inline.
The optimistic-scheme element contains the optimistic caching scheme
configuration info.
Used in: clustered-caching-scheme
The backing-map-scheme definition for replicated-scheme and optimistic-scheme
differs from the one used in distributed-scheme-type and is therefore defined inline.
The composite-caching-scheme entity defines the selection of the
composite caching scheme types, designed to create multi-tiered
clustered caches with persistence implementations, loaders, etc.
These caches utilize other caches as building blocks to provide
more sophisticated functionality.
Used in: caching-schemes
The standalone-caching-scheme entity defines the selection of the
standalone caching scheme types, designed to run in a single JVM
and to be used by other caching schemes for storing cache values
on a specific participating cluster node.
Used in: caching-schemes, internal-cache-scheme,
back-scheme, backing-map-scheme
The remote-invocation-scheme element contains the configuration info
necessary to execute tasks within the context of a cluster without
having to first join the cluster.
Used in: caching-schemes
The invocation-scheme element contains the
invocation scheme configuration info.
Used in: caching-schemes
The read-write-backing-map-scheme element contains
the read-write backing map configuration info.
This scheme is implemented by
com.tangosol.net.cache.ReadWriteBackingMap
class (unless overridden by the class-name element).
Used in: caching-schemes
The remote-cache-scheme element contains the
configuration info necessary to use a clustered
cache from outside the cluster.
Used in: caching-schemes, cachestore-scheme, back-scheme
The proxy-scheme element contains the configuration
info for a clustered service that allows clients to use
clustered services without having to join the cluster.
Used in: caching-schemes
The local-scheme element contains the local caching
scheme configuration info. It should be used to specify
and configure local caches or backing maps with various
eviction policies.
This scheme is implemented by
com.tangosol.net.cache.LocalCache class (unless
overridden by the class-name element).
Used in: standalone-caching-scheme
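For illustration, a size- and time-limited local cache might be sketched as follows;
eviction-policy and expiry-delay are standard local-scheme children that are not
documented in this excerpt, and the values are illustrative:
<local-scheme>
  <scheme-name>example-local</scheme-name>
  <eviction-policy>LRU</eviction-policy>
  <high-units>10000</high-units>
  <expiry-delay>1h</expiry-delay>
</local-scheme>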
The near-scheme element contains the near caching
scheme configuration info.
This scheme is implemented by com.tangosol.net.cache.NearCache
class (unless overridden by the class-name element).
Used in: composite-caching-scheme
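For illustration, a near cache is typically composed of a size-limited front tier and
a clustered back tier using the front-scheme and back-scheme elements documented
below; in the following sketch, "example-distributed" is assumed to be a distributed
scheme defined elsewhere in caching-schemes:
<near-scheme>
  <scheme-name>example-near</scheme-name>
  <front-scheme>
    <local-scheme>
      <high-units>1000</high-units>
    </local-scheme>
  </front-scheme>
  <back-scheme>
    <distributed-scheme>
      <scheme-ref>example-distributed</scheme-ref>
    </distributed-scheme>
  </back-scheme>
</near-scheme>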
The overflow-scheme element contains the overflow
caching scheme configuration info.
To enable the automatic expiration of cache entries
in the overflow cache based on the time-to-live setting
passed to "put(key, value, ttl)", set the expiry-enabled
element to true or provide a default expiry in the
expiry-delay element.
This scheme is implemented by com.tangosol.net.cache.OverflowMap
class or by com.tangosol.net.cache.SimpleOverflowMap class
(unless overridden by the class-name element). To explicitly use
either the OverflowMap or the SimpleOverflowMap implementation,
specify com.tangosol.net.cache.OverflowMap or
com.tangosol.net.cache.SimpleOverflowMap explicitly in the
class-name element. Otherwise, if expiry-enabled is true or if
the back-scheme is an observable implementation, then the
OverflowMap will be used. Otherwise, the SimpleOverflowMap will
be used.
Used in:
caching-schemes, distributed-scheme-type, replicated-scheme,
optimistic-scheme, read-write-backing-map-scheme.
The internal-cache-scheme element contains the
internal cache storage configuration info.
Used in: read-write-backing-map-scheme
The miss-cache-scheme element contains configuration
info for the local cache used to register cachestore
misses.
Used in: read-write-backing-map-scheme
The cachestore-scheme element contains the cachestore
configuration info.
Implementation classes should implement one of
two interfaces: com.tangosol.net.cache.CacheLoader or
com.tangosol.net.cache.CacheStore
Used in: read-write-backing-map-scheme, local-scheme
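For illustration, a read-write backing map combining an internal local cache with a
custom cache store might be sketched as follows; the class com.example.MyCacheStore is
hypothetical, and write-delay (documented later in this file) enables write-behind:
<read-write-backing-map-scheme>
  <internal-cache-scheme>
    <local-scheme>
      <high-units>10000</high-units>
    </local-scheme>
  </internal-cache-scheme>
  <cachestore-scheme>
    <class-scheme>
      <class-name>com.example.MyCacheStore</class-name>
    </class-scheme>
  </cachestore-scheme>
  <write-delay>10s</write-delay>
</read-write-backing-map-scheme>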
The view-scheme element contains the continuous view
caching scheme configuration info.
Used in: composite-caching-scheme
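For illustration, a minimal view scheme backed by an existing distributed scheme might
be sketched as follows; "example-distributed" is assumed to be defined elsewhere, and
additional view-scheme elements (such as a view filter) are not covered in this excerpt:
<view-scheme>
  <scheme-name>example-view</scheme-name>
  <back-scheme>
    <distributed-scheme>
      <scheme-ref>example-distributed</scheme-ref>
    </distributed-scheme>
  </back-scheme>
</view-scheme>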
The federated-loading flag indicates whether the federation service should
replicate entries loaded from the cache store to the remote participants.
Valid values are true or false.
Default value is false.
Used in: cachestore-scheme
The external-scheme element contains the configuration info
for a cache that is not JVM heap based.
This scheme is implemented by the
com.tangosol.net.cache.SerializationMap class for size-unlimited
caches and the com.tangosol.net.cache.SerializationCache
class for size-limited caches.
The implementation type is chosen based on the following rule:
- if the high-units element is specified and not zero, then
  SerializationCache is used (and the unit-calculator element is respected);
- otherwise, SerializationMap is used.
The actual com.tangosol.io.BinaryStore implementation supplied to
the SerializationMap or SerializationCache depends on the specified
store manager configuration (one of async-store-manager,
custom-store-manager, bdb-store-manager,
nio-file-manager, or nio-memory-manager).
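For illustration, a size-limited external cache backed by the NIO memory manager
might be sketched as follows (so SerializationCache is used, per the rule above);
initial-size and maximum-size are documented later in this file, and the values are
illustrative:
<external-scheme>
  <scheme-name>example-external</scheme-name>
  <nio-memory-manager>
    <initial-size>1MB</initial-size>
    <maximum-size>100MB</maximum-size>
  </nio-memory-manager>
  <high-units>10000</high-units>
</external-scheme>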
The ramjournal-scheme element contains the RAM journal caching scheme
configuration info; it uses a SimpleSerializationMap as the backing map implementation.
Used in: caching-schemes, internal-cache-scheme,
back-scheme, backing-map-scheme
The flashjournal-scheme element contains the flash journal caching scheme
configuration info.
Used in: caching-schemes, internal-cache-scheme,
back-scheme, backing-map-scheme
The paged-external-scheme element contains the configuration
info for a cache that is not JVM heap based and that
implements an LRU policy using time-based paging.
This scheme is implemented by the
com.tangosol.net.cache.SerializationPagedCache
class. A detailed description of the paged cache functionality can
be found at
http://www.tangosol.com/downloads/WriteMostlyBrief.pdf.
The actual com.tangosol.io.BinaryStoreManager implementation
supplied to the SerializationPagedCache depends on the specified
store manager configuration (one of async-store-manager,
custom-store-manager, bdb-store-manager,
nio-file-manager, or nio-memory-manager).
The participant-destination-name element contains the destination name
that should be used to obtain participant destination configuration
within the operational configuration.
In the absence of this element, federation will use destination-specific
configuration keyed by the service name, if present.
Default value is the service name.
Used in: federated-scheme
The topologies element is a container for the definition of
many topology elements.
Used in: federated-scheme
The topology element refers to the named topology defined
in the topology-definitions configuration within
the operational configuration.
The cache-name specifies the destination cache-name on the remote
participant.
Used in: topologies
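For illustration, a federated scheme referencing a named topology might be sketched
as follows; the topology name "MyTopology" is assumed to be defined in the
topology-definitions section of the operational configuration, and the name child
element as well as service-name are assumptions not documented in this excerpt:
<federated-scheme>
  <scheme-name>example-federated</scheme-name>
  <service-name>FederatedCache</service-name>
  <backing-map-scheme>
    <local-scheme/>
  </backing-map-scheme>
  <topologies>
    <topology>
      <name>MyTopology</name>
    </topology>
  </topologies>
</federated-scheme>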
The async-backup element specifies if the partitioned (distributed) cache
service should back up changes asynchronously while concurrently responding to
the client.
Valid values are true and false.
The async backup is disabled by default.
Used in: distributed-scheme-type
The front-scheme element contains the front tier
cache configuration info.
Used in: overflow-scheme, near-scheme
The back-scheme element contains the back tier cache
configuration info.
Used in: overflow-scheme, near-scheme, view-scheme
The backing-map-scheme-group contains the backing map configuration
info that is shared by backing map schemes.
Included by: backing-map-scheme and journalcache-backing-map-scheme
The backing-map-scheme element contains the backing map configuration info.
Note: the partitioned element is used if and only if
the parent element is a distributed-scheme or federated-scheme.
Used in: distributed-scheme-type
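For illustration, the most common arrangement is a backing-map-scheme wrapping a
local-scheme inside a distributed-scheme, as in the following sketch; the partitioned,
transient, and sliding-expiry flags described below may be added as additional
children of backing-map-scheme:
<distributed-scheme>
  <scheme-name>example-distributed</scheme-name>
  <backing-map-scheme>
    <local-scheme/>
  </backing-map-scheme>
</distributed-scheme>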
The journalcache-backing-map-scheme contains the backing map
configuration for federation's internal caches.
By default, federation uses Elastic Data for its internal caches.
Used in: federated-scheme
The federate-apply-synthetic element specifies whether or not changes
received from remote federation participants should be applied locally
as synthetic updates.
Note: when this member acts as a federation origin, synthetic updates are not
considered for federation to destination participants. Therefore, in "multi-hop"
federation topologies (that is, topologies other than active-active or
active-standby), changes may not be forwarded onward if this flag is set.
Valid values are "true" or "false". Default value is false.
Used in: backing-map-scheme (within a federated-scheme only)
The partitioned element specifies whether or not the enclosed
backing map is a PartitionAwareBackingMap. (This element
is respected only for backing-map-scheme that is a child of a
distributed-scheme-type.) If set to true, the specific scheme contained
in the backing-map-scheme element will be used to configure backing
maps for each individual partition of the PartitionAwareBackingMap;
otherwise it is used for the entire backing map itself.
The concrete implementations of the PartitionAwareBackingMap
interface are:
- com.tangosol.net.partition.ObservableSplittingBackingCache
- com.tangosol.net.partition.PartitionSplittingBackingMap
- com.tangosol.net.partition.ReadWriteSplittingBackingMap
Valid values are "true" or "false". Default value is false.
Used in: backing-map-scheme (within a distributed-scheme-type only)
The transient element specifies whether or not the enclosed
backing map should be persisted using a configured persistence
environment. (This element is respected only for a backing-map-scheme
that is a child of a distributed-scheme-type or a paged-topic-scheme that has a
persistence environment configured). If set to false, the
configured persistence environment will be used to persist the
contents of the backing map/paged-topic-scheme; otherwise,
the backing map/paged-topic-scheme is assumed to be transient
and its contents will not be recoverable upon cluster restart.
Valid values are "true" or "false". Default value is false.
Used in: backing-map-scheme (within a distributed-scheme-type only) and paged-topic-scheme
The sliding-expiry element specifies whether or not the expiry
delay of entries should be extended by the read operations.
By default the expiry delay is only extended upon updates.
Default value is false.
Used in: backing-map-scheme (within a distributed-scheme-type only)
The storage-authorizer element contains a reference to the operational
configuration for a storage authorizer used by the enclosing
partitioned cache to authorize access to the underlying cache data.
If configured, all read and write access to the data in the cache
storage (backing map) will be validated and/or audited by the
configured authorizer.
See com.tangosol.net.security.StorageAccessAuthorizer for the
details of the authorizer SPI.
Used in: backing-map-scheme (within a distributed-scheme-type only)
The persistence element contains the persistence-related
configuration for a distributed cache service.
Used in: distributed-scheme-type, paged-topic-scheme-type
The environment element contains a reference to the operational
configuration for a persistence environment used by the enclosing
distributed cache service to persist the contents of backing maps.
If configured, a persistence environment will enable cache contents
to be automatically recovered either after a cluster restart or
loss of multiple members, or on-demand from a named snapshot.
Used in: persistence
The archiver element contains a reference to the operational
configuration for a snapshot archiver used by the enclosing
distributed cache service to archive, retrieve, or purge
persistent snapshots.
Used in: persistence
The active-failure-mode element describes how the service responds
to an unexpected failure while performing persistence operations
in "active" mode.
Legal values are:
- stop-service: persistence is critical and failures encountered
while writing to or recovering from the active
persistent store should result in stopping the
service rather than continuing without persistence.
- stop-persistence: persistence is desirable, but the service
should continue servicing requests if there
is a failure encountered while writing to or
recovering from the active persistent store.
Default value is "stop-service".
Used in: persistence
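For illustration, a persistence element combining the environment and
active-failure-mode elements described above might be sketched as follows (placed as
a child of a distributed-scheme or federated-scheme); "default-active" is assumed to
be a persistence environment defined in the operational configuration:
<persistence>
  <environment>default-active</environment>
  <active-failure-mode>stop-persistence</active-failure-mode>
</persistence>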
The binary-store-manager entity defines the selection of
a com.tangosol.io.BinaryStoreManager implementation class.
Includes: custom-store-manager, bdb-store-manager,
nio-file-manager, nio-memory-manager
Used in: external-scheme, paged-external-scheme,
async-store-manager
The key-associator element contains the configuration info for
a class that implements the com.tangosol.net.partition.KeyAssociator
interface. This implementation must have a public default constructor.
Used in: distributed-scheme-type
The key-partitioning element contains the configuration info for
a class that implements the
com.tangosol.net.partition.KeyPartitioningStrategy interface.
This implementation must have a public default constructor.
Used in: distributed-scheme-type
The partition-assignment-strategy element contains
the configuration info for a class that implements the
com.tangosol.net.partition.PartitionAssignmentStrategy interface.
Legal values are: "simple", "mirror:AssociatedServiceName", or
configuration info for a class that implements the
com.tangosol.net.partition.PartitionAssignmentStrategy interface.
The pre-defined strategies are:
"simple"
This centralized distribution strategy attempts to balance the partition
distribution evenly, while ensuring machine-safety. The "simple" assignment
strategy is more deterministic and efficient than the "legacy" strategy.
"mirror:AssociatedServiceName"
This distribution strategy attempts to co-locate the service's partitions with
the partitions of another service. This strategy can be used to increase the
likelihood that key-associated cross-service cache access remains "local" to
the member.
Default value is "simple".
Used in: distributed-scheme-type
The partition-listener element contains the configuration
info for a class that implements the
com.tangosol.net.partition.PartitionListener
interface. This implementation must have a public default
constructor.
Used in: distributed-scheme-type
The compressor element specifies whether or not backup binary
entries are updated using a delta. A delta represents the parts
of a backup entry that must be changed in order to synchronize
it with the primary version of the entry. Deltas are created
and applied using a compressor. The default behavior is to
replace the whole backup binary entry when the primary entry
changes.
Valid values are "none" (default), "standard", and the fully
qualified name of a class that implements
the com.tangosol.io.DeltaCompressor interface.
The value of "standard" automatically selects a delta
compressor based on the serializer being used by the
partitioned service.
Used in: distributed-scheme-type
The member-listener element contains the
configuration info for a class that implements the
com.tangosol.net.MemberListener interface.
This implementation must have a public default
constructor.
Used in: distributed-scheme-type,
replicated-scheme, optimistic-scheme,
invocation-scheme, proxy-scheme
The task-hung-threshold element specifies the amount of time
in milliseconds that a request can execute on a service
worker thread before it is considered as "hung".
Note 1:
This element is applicable only if the "thread-count" value
is positive.
Note 2:
While a request is queued up and until the actual processing
starts, the corresponding task is never considered as hung.
Used in: distributed-scheme-type, transactional-scheme,
invocation-scheme, proxy-scheme
For the partitioned (distributed) cache service the task-timeout
element specifies the timeout value in milliseconds for requests
executing on the service worker threads.
For the invocation service, it specifies the timeout value for
Invocable tasks that implement the com.tangosol.net.PriorityTask
interface, but don't explicitly specify the execution timeout value
(getExecutionTimeoutMillis() returns zero).
A value of zero indicates that the default timeout (as specified by
service-guardian/timeout-milliseconds in the operational configuration
descriptor) will be used.
Note: This element is applicable only if the "thread-count" value is
positive.
Used in: distributed-scheme-type, transactional-scheme, invocation-scheme,
proxy-scheme
The guardian-timeout element specifies the guardian timeout value to
use for guarding the service and any dependent threads. If the guardian-timeout
is not specified for a given service, the default guardian timeout (as
specified by the operational config element "service-guardian/timeout-milliseconds")
is used.
The value of this element must be in the following format:
[\d]+[[.][\d]+]?[MS|ms|S|s|M|m|H|h|D|d]?
where the first non-digits (from left to right) indicate the unit of time
duration:
-MS or ms (milliseconds)
-S or s (seconds)
-M or m (minutes)
-H or h (hours)
-D or d (days)
If the value does not contain a unit, a unit of milliseconds
is assumed.
Used in: distributed-scheme-type, replicated-scheme,
optimistic-scheme, invocation-scheme, proxy-scheme
The service-failure-policy element contains the configuration info
for how to respond to an abnormally behaving service.
Legal values are: "exit-cluster", "exit-process", "logging", or
configuration info for a class that implements the
com.tangosol.net.ServiceFailurePolicy interface.
The pre-defined policies are:
"exit-cluster"
This option will attempt to recover threads that appear to be
unresponsive, and failing that, attempt to stop the associated
service. If the associated service cannot be stopped, this policy
will cause the local node to stop the cluster services.
"exit-process"
This option will attempt to recover threads that appear to be unresponsive,
and failing that, attempt to stop the associated service. If the
associated service cannot be stopped, this policy will cause the local
node to exit the JVM, terminating it abruptly.
"logging"
This option will cause any detected problems to be logged,
but no corrective action will be taken.
Default value is "exit-cluster".
Used in: distributed-scheme-type, replicated-scheme,
optimistic-scheme, invocation-scheme, proxy-scheme
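For illustration, the guardian-timeout and service-failure-policy elements might be
configured together inside a scheme element (such as a distributed-scheme) as in the
following sketch; the values are illustrative:
<guardian-timeout>60s</guardian-timeout>
<service-failure-policy>exit-process</service-failure-policy>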
The custom-store-manager element specifies the configuration info
for a custom com.tangosol.io.BinaryStoreManager implementation class.
Used in: external-scheme, paged-external-scheme, async-store-manager
The bdb-store-manager element specifies the configuration info
for a com.tangosol.io.BinaryStoreManager implementation that
creates com.tangosol.io.BinaryStore objects that use Berkeley
DB JE for their underlying storage.
This store manager is implemented by the
com.tangosol.io.bdb.BerkeleyDBBinaryStoreManager
class. The BinaryStore objects created by this class are
instances of com.tangosol.io.bdb.BerkeleyDBBinaryStore.
Used in: external-scheme, paged-external-scheme,
async-store-manager
The nio-file-manager element specifies the configuration info
for a com.tangosol.io.BinaryStoreManager implementation that
creates com.tangosol.io.BinaryStore objects that use NIO
memory mapped files for their underlying storage.
This store manager is implemented by the
com.tangosol.io.nio.MappedStoreManager class. The BinaryStore
objects created by this class are instances of
com.tangosol.io.nio.BinaryMap.
Used in: external-scheme, paged-external-scheme,
async-store-manager
The nio-memory-manager element specifies the configuration info
for a com.tangosol.io.BinaryStoreManager implementation that
creates com.tangosol.io.BinaryStore objects that use direct
java.nio.ByteBuffer objects for their underlying storage.
This store manager is implemented by the
com.tangosol.io.nio.DirectStoreManager class. The BinaryStore
objects created by this class are instances of
com.tangosol.io.nio.BinaryMap.
Used in: external-scheme, paged-external-scheme,
async-store-manager
The async-store-manager element specifies the configuration info
for a wrapper com.tangosol.io.BinaryStoreManager implementation
that creates wrapper com.tangosol.io.BinaryStore objects that
perform write operations asynchronously.
This store manager is implemented by the
com.tangosol.io.AsyncBinaryStoreManager class. The BinaryStore
objects created by this class are instances of
com.tangosol.io.AsyncBinaryStore.
Used in: external-scheme, paged-external-scheme
The initial-size element specifies the initial buffer size
in bytes.
Only applicable with "NIO-*" file-managers in the external-scheme,
the nio-file-manager, the nio-memory-manager, and the "file-mapped"
type for backup-storage.
The value of this
element must be in the following format:
(\d)+[K|k|M|m|G|g|T|t]?[B|b]?
where the first non-digit (from left to right) indicates the factor
with which the preceding decimal value should be multiplied:
-K or k (kilo, 2^10)
-M or m (mega, 2^20)
-G or g (giga, 2^30)
-T or t (tera, 2^40)
If the value does not contain a factor, a factor of mega is assumed.
Valid values are positive integers between 1 and Integer.MAX_VALUE -
1023.
Default value is 1MB.
Used in: external-scheme, backup-storage,
nio-file-manager, nio-memory-manager
The maximum-size element specifies the maximum buffer size
in bytes.
Only applicable with "NIO-*" file-managers in the external-scheme,
the nio-file-manager, the nio-memory-manager, and the "file-mapped"
type for backup-storage.
The value of this element must be in the following format:
(\d)+[K|k|M|m|G|g|T|t]?[B|b]?
where the first non-digit (from left to right) indicates the factor
with which the preceding decimal value should be multiplied:
-K or k (kilo, 2^10)
-M or m (mega, 2^20)
-G or g (giga, 2^30)
-T or t (tera, 2^40)
If the value does not contain a factor, a factor of mega is assumed.
Valid values are positive integers between 1 and Integer.MAX_VALUE -
1023.
Default value is 1024MB.
Used in: external-scheme, backup-storage,
nio-file-manager, nio-memory-manager
The page-limit element specifies the maximum
number of active pages.
Valid values are positive integers between 2
and 3600 and zero.
Default value is zero.
Used in: external-scheme,
paged-external-scheme
The page-duration element specifies the length of
time, in seconds, that a page is current.
The value of this element
must be in the following format:
(\d)+((.)(\d)+)?[MS|ms|S|s|M|m|H|h|D|d]?
where the first non-digits (from left to right) indicate
the unit of time duration:
-MS or ms (milliseconds)
-S or s (seconds)
-M or m (minutes)
-H or h (hours)
-D or d (days)
If the value does not contain a unit, a unit of seconds is
assumed.
Valid values are between 5 and 604800 seconds (one week) and
zero.
Default value is zero.
Used in: external-scheme, paged-external-scheme
The store-name element specifies the name for a non-temporary data
store to use for a persistence manager. This is intended only for
local caches that are backed by a cache loader from a non-temporary
store, so that the local cache can be pre-populated from the store
on startup.
Used in: bdb-store-manager
The operation-name element specifies the operation name for which calls
performed concurrently on multiple threads will be "bundled" into a
functionally analogous [bulk] operation that takes a collection of
arguments instead of a single one.
Valid values depend on the bundle configuration context. For the
cachestore-scheme the valid operations are "load", "store" and
"erase". For distributed-scheme-types and remote-cache-scheme the valid
operations are "get", "put" and "remove". In all cases there is a
pseudo operation "all", referring to all valid operations.
Default value is "all".
Used in: bundle-config
The preferred-size element specifies the bundle size threshold. When
a bundle size reaches this value, the corresponding "bulk" operation
will be invoked immediately. This value is measured in context-specific
units.
Valid values are zero (disabled bundling) or positive values.
Default value is zero.
Used in: bundle-config
The delay-millis element specifies the maximum amount of
time in milliseconds that individual execution requests are
allowed to be deferred for a purpose of "bundling" them
together and passing into a corresponding bulk operation.
If the preferred-size threshold is reached before the
specified delay, the bundle is processed immediately.
Valid values are positive numbers.
Default value is 1.
Used in: bundle-config
The thread-threshold element specifies the minimum number of
threads that must be concurrently executing individual
(non-bundled) requests for the bundler to switch from a
pass-through to a bundling mode.
Valid values are positive numbers.
Default value is 4.
Used in: bundle-config
The auto-adjust element specifies whether or not the auto
adjustment of the preferred-size value (based on the
run-time statistics) is allowed.
Valid values are "true" or "false".
Default value is "false".
Used in: bundle-config
The maximum number of simultaneous connections allowed by a connection
acceptor.
Valid values are positive integers and zero.
A value of zero implies no limit.
Default value is zero.
Used in: acceptor-config
The operation-bundling element specifies the configuration info
for a particular bundling strategy.
Bundling is a process of coalescing multiple individual operations
into "bundles". It could be beneficial when (1) there is a continuous
stream of operations on multiple threads in parallel; (2) individual
operations have relatively high latency (network or
database-related); and (3) there are functionally analogous [bulk]
operations that take a collection of arguments instead of a single
one without causing the latency to grow linearly (as a function of
the collection size).
Note: As with any bundling algorithm, there is a natural trade-off
between the resource utilization and average request latency.
Depending on a particular application usage pattern, enabling this
feature may either help or hurt the overall application performance.
See com.tangosol.net.cache.AbstractBundler for
additional implementation details.
Used in: cachestore-scheme, distributed-scheme-type, remote-cache-scheme
The bundle-config element specifies the bundling strategy
configuration for one or more bundle-able operations.
Used in: operation-bundling
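For illustration, an operation-bundling element that bundles cache store "store"
operations might be sketched as follows, using the bundle-config child elements
documented above; the values are illustrative:
<operation-bundling>
  <bundle-config>
    <operation-name>store</operation-name>
    <preferred-size>100</preferred-size>
    <delay-millis>5</delay-millis>
    <thread-threshold>4</thread-threshold>
    <auto-adjust>true</auto-adjust>
  </bundle-config>
</operation-bundling>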
The initiator-config element specifies the configuration info
for a protocol-specific connection initiator. A connection
initiator allows a client to connect to a cluster (via a
connection acceptor) and use the clustered services offered by
the cluster without having to first join the cluster.
Used in: remote-cache-scheme, remote-invocation-scheme
The acceptor-config element specifies the configuration info
for a protocol-specific connection acceptor used by a proxy
service to enable clients to connect to the cluster and use
the services offered by the cluster without having to join the
cluster.
Used in: proxy-scheme
The proxy-config element specifies the configuration info
for the clustered service proxies managed by a proxy service.
A service proxy is an intermediary between a remote client
(connected to the cluster via a connection acceptor) and a
clustered service used by the remote client.
Used in: proxy-scheme
The load-balancer element contains the configuration info for
a pluggable strategy used by the proxy service and federated
service to distribute client connections across the set of
clustered proxy and federated service members.
Legal values when used within a proxy service:
"proxy", "client", or configuration info for a class that implements
the com.tangosol.net.proxy.ProxyServiceLoadBalancer interface.
Legal values when used within a federated service:
"federation", "client", or configuration info for a class that implements
the com.tangosol.net.federation.FederatedServiceLoadBalancer interface.
The pre-defined strategies are:
"proxy"
This strategy will attempt to distribute client connections equally
across proxy service members based upon existing connection count,
connection limit, incoming and outgoing message backlog, and daemon
pool utilization.
"federation"
This strategy will attempt to distribute client connections equally
across federated service members based upon existing connection count,
and incoming message backlog.
"client"
This strategy relies upon the client address provider implementation to
dictate the distribution of clients across proxy service members.
Default value:
"proxy" for proxy service.
"federation" for federated service.
Used in: proxy-scheme, federated-scheme
The tcp-initiator element specifies the configuration info for a
connection initiator that enables clients to connect to a remote
cluster via TCP/IP.
If the initiator is not configured with any destination addresses then
address lookup will be performed via the NameService using the
operational configuration cluster discovery addresses. This is
suitable for deployments where the extend client resides on the same
network as the cluster.
Used in: initiator-config
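For illustration, a remote cache scheme whose initiator locates the cluster through
the NameService might be sketched as follows; the host name and port are illustrative,
and the service-name, name-service-addresses, and socket-address elements are
assumptions not documented in this excerpt:
<remote-cache-scheme>
  <scheme-name>extend-remote</scheme-name>
  <service-name>RemoteCache</service-name>
  <initiator-config>
    <tcp-initiator>
      <name-service-addresses>
        <socket-address>
          <address>cluster-host.example.com</address>
          <port>7574</port>
        </socket-address>
      </name-service-addresses>
    </tcp-initiator>
  </initiator-config>
</remote-cache-scheme>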
The http-acceptor element specifies the configuration info for a
connection acceptor that accepts connections from remote REST
clients over HTTP.
Used in: acceptor-config
The tcp-acceptor element specifies the configuration info for a
connection acceptor that accepts connections from remote clients
over TCP/IP.
Note: As of 12.2.1 the specification of a local-address element within the
tcp-acceptor element is deprecated, and instead it should be specified
as a sub-element of the address-provider element.
Note: if no address is specified, the proxy will share a port with TCMP and clients
will need to locate the proxy via tcp-initiator/name-service-addresses
rather than tcp-initiator/remote-addresses.
Used in: acceptor-config
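For illustration, a proxy scheme whose acceptor listens on an explicit address
(specified, per the note above, under address-provider) might be sketched as follows;
the address and port are illustrative, and service-name is an assumption not
documented in this excerpt. Omitting the address altogether lets the proxy share the
TCMP port as described above:
<proxy-scheme>
  <scheme-name>example-proxy</scheme-name>
  <service-name>Proxy</service-name>
  <acceptor-config>
    <tcp-acceptor>
      <address-provider>
        <local-address>
          <address>0.0.0.0</address>
          <port>9099</port>
        </local-address>
      </address-provider>
    </tcp-acceptor>
  </acceptor-config>
</proxy-scheme>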
The memcached-acceptor element specifies the configuration info for a
connection acceptor that accepts connections from remote Memcached
clients over TCP/IP.
The memcached-acceptor can only work with partitioned (distributed) caches.
Used in: acceptor-config
The authorized-hosts element contains the collection of IP addresses
of TCP/IP initiator hosts that are allowed to connect to the cluster
via a TCP/IP acceptor. If this collection is empty no constraints are
imposed.
Used in: tcp-acceptor
The lock-enabled element specifies whether or not
lock requests from remote clients are permitted on a cache.
Used in: cache-service-proxy
The reuse-address element specifies whether or not a TCP/IP
socket can be bound to an address if a previous connection is
in a timeout state.
When a TCP/IP connection is closed the connection may remain in a
timeout state for a period of time after the connection is closed
(typically known as the TIME_WAIT state or 2MSL wait state). For
applications using a well known socket address or port it may not
be possible to bind a socket to a required address if there is a
connection in the timeout state involving the socket address or port.
Valid values are true and false.
Default value is true for the tcp-acceptor and false for the
tcp-initiator.
Used in: tcp-initiator, tcp-acceptor
The value of the listen-backlog element is used to configure
the size of the TCP/IP server socket backlog queue.
Valid values are positive integers.
Default value is O/S dependent.
Used in: tcp-acceptor
The resource-config element contains the configuration info
for an instance of com.sun.jersey.api.core.ResourceConfig.
This ResourceConfig instance is used by the HTTP
acceptor to load resource and provider classes for a REST
application.
The context-path element provides the base URI path for this
REST application. When the optional context-path element is not present,
a javax.ws.rs.ApplicationPath annotation on the
REST application class declaration provides an application-specific
default context-path.
If no resource-config child element is present in an http-acceptor
element, then all REST applications implementing
javax.ws.rs.core.Application on the classpath are discovered
and loaded. The Coherence Cache REST API is discovered in the class
com.tangosol.coherence.rest.server.DefaultResourceConfig and has an
ApplicationPath annotation of "/api".
Used in: http-acceptor
The context-path element is used to specify a base URI path
for a REST application. The first character of the path must
be '/'.
Default value is "/".
Used in: resource-config
The auth-method element is used to configure the authentication
mechanism for the HTTP server. As a prerequisite to gaining access
to any resources exposed by the server, a client must have authenticated
using the configured mechanism.
Legal values are: "basic", "cert", "cert+basic", or "none".
The pre-defined methods are:
"basic"
This method requires the client to be authenticated using
HTTP basic authentication.
"cert"
This method requires the client to be authenticated using
client-side SSL certificate-based authentication. The certificate
must be passed to the server to authenticate. Additionally,
this method requires an SSL-based socket provider to be
configured for the HTTP server.
"cert+basic"
This method requires the client to be authenticated using both
client-side SSL certificate and HTTP basic authentication.
"none"
This method does not require the client to be authenticated.
Default value is "none".
Used in: http-acceptor
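For illustration, an HTTP acceptor configured with a REST application context path and
HTTP basic authentication might be sketched as follows; a complete acceptor would
normally also specify a listen address and socket provider, which are not covered in
this excerpt:
<http-acceptor>
  <resource-config>
    <context-path>/api</context-path>
  </resource-config>
  <auth-method>basic</auth-method>
</http-acceptor>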
The memcached-auth-method element is used to configure the authentication
mechanism for the memcached acceptor. As a prerequisite to gaining access
to any resources exposed by the server, a client must have authenticated
using the configured mechanism.
Legal values are: "plain", or "none".
The pre-defined methods are:
"plain"
This method requires the client to be authenticated using
the SASL PLAIN mechanism.
"none"
This method does not require the client to be authenticated.
Default value is "none".
Used in: memcached-acceptor
The interop-enabled element specifies that the memcached acceptor can bypass the
configured cache service serializer while storing the values in the cache.
This is only required when sharing data between Coherence and Memcached clients.
The assumption is that memcached clients are using a Coherence serializer, such as the
POF serializer, to convert objects into byte[] and the cache service is also using the same serializer.
Valid values are "true" or "false".
Default value is false.
Used in: memcached-acceptor
The cache-service-proxy element contains the configuration info
for a cache service proxy managed by a proxy service.
Used in: proxy-config
The invocation-service-proxy element contains the configuration info
for an invocation service proxy managed by a proxy service.
Used in: proxy-config
The partitioned-quorum-policy-scheme configuration element contains
the configuration info for the policy for the partitioned cache service.
Used in: distributed-scheme-type
The distribution-quorum configuration element specifies the minimum
number of ownership-enabled members of a partitioned service that
must be present in order to perform partition distribution or
establish new partition backups.
Valid values are non-negative integers.
Used in: partitioned-quorum-policy-scheme
The recover-quorum configuration element specifies the minimum number
of ownership-enabled members of a partitioned service that must be
present in order to recover orphaned partitions from the persistent
storage, or assign empty partitions if the persistent storage is
unavailable or lost. A value of zero indicates that the service
will utilize the dynamic recovery policy, which ensures availability
of all persisted state and uses the "last good" membership information
to determine how many members must be present for the recovery.
Valid values are non-negative integers.
Used in: partitioned-quorum-policy-scheme
The recovery-hosts configuration element specifies the set of
host-addresses which must be represented by the set of
ownership-enabled members in order to recover orphaned partitions
from the persistent storage, or assign empty partitions if the
persistent storage is unavailable or lost.
Used in: partitioned-quorum-policy-scheme
The restore-quorum configuration element specifies the minimum number
of ownership-enabled members of a partitioned service that must be
present in order to restore lost primary partitions from backup.
Valid values are non-negative integers.
Used in: partitioned-quorum-policy-scheme
The read-quorum configuration element specifies the minimum number
of storage members of a cache service that must be present in
order to process "read" requests. A "read" request is any
request that does not mutate the state or contents of a cache.
Valid values are non-negative integers.
Used in: partitioned-quorum-policy-scheme
The write-quorum configuration element specifies the minimum
number of storage members of a cache service that must be
present in order to process "write" requests. A "write"
request is any request that may mutate the state
or contents of a cache.
Valid values are non-negative integers.
Used in: partitioned-quorum-policy-scheme
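For illustration, a quorum policy combining the thresholds described above might be
sketched as follows (placed inside a distributed-scheme-type); the values are
illustrative:
<partitioned-quorum-policy-scheme>
  <distribution-quorum>4</distribution-quorum>
  <restore-quorum>3</restore-quorum>
  <read-quorum>3</read-quorum>
  <write-quorum>5</write-quorum>
</partitioned-quorum-policy-scheme>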
The proxy-quorum-policy-scheme configuration element contains
the configuration info for the quorum-based action policy for
the proxy service.
Used in: proxy-scheme
The connect-quorum configuration element specifies the minimum
number of members of a proxy service that must be present
in order to allow client connections.
Valid values are non-negative integers.
Used in: proxy-quorum-policy-scheme
The read-only element specifies a read-only setting for
the cachestore. If true, the cache will only load data
from the cachestore for read operations and will not
perform any writes to the cachestore when the cache
is updated.
Valid values are "true" or "false".
Default value is false.
Used in: read-write-backing-map-scheme,
cache-proxy-config, view-scheme
The write-delay element specifies the time interval
for a write-behind queue to defer asynchronous writes
to the cachestore by.
The value of this element must be in the following
format:
(\d)+((.)(\d)+)?[MS|ms|S|s|M|m|H|h|D|d]?
where the first non-digits (from left to right)
indicate the unit of time duration:
-MS or ms (milliseconds)
-S or s (seconds)
-M or m (minutes)
-H or h (hours)
-D or d (days)
If the value does not contain a unit, a unit of seconds
is assumed.
If zero, synchronous writes to the cachestore (without
queueing) will take place, otherwise the writes will be
asynchronous and deferred by the specified time interval
after the last update to the value in the cache.
Used in: read-write-backing-map-scheme
The write-delay-seconds element specifies the number of
seconds for a write-behind queue to defer asynchronous
writes to the cachestore by.
If zero, synchronous writes to the cachestore
(without queueing) will take place, otherwise the
writes will be asynchronous and deferred by the
number of seconds after the last update to the value in
the cache.
Used in: read-write-backing-map-scheme
The write-batch-factor element is used to calculate the
"soft-ripe" time for write-behind queue entries.
A queue entry is considered to be "ripe" for a write
operation if it has been in the write-behind queue for
no less than the write-delay interval. The "soft-ripe"
time is the point in time prior to the actual ripe time
after which an entry will be included in a batched
asynchronous write operation to the cachestore (along
with all other ripe and soft-ripe entries). In other
words, a soft-ripe entry is an entry that has been in
the write-behind queue for at least the following
duration:
D' = (1.0 - F)*D
where:
D = write-delay interval
F = write-batch-factor
Conceptually, the write-behind thread uses the
following logic when performing
a batched update:
(1) The thread waits for a queued entry to become ripe.
(2) When an entry becomes ripe, the thread dequeues all
ripe and soft-ripe entries in the queue.
(3) The thread then writes all ripe and soft-ripe
entries either via store() (if there is only the single
ripe entry) or storeAll() (if there are multiple
ripe/soft-ripe entries).
(4) The thread then repeats (1).
This element is only applicable if asynchronous writes
are enabled (i.e. the value of the write-delay element
is greater than zero) and the cachestore implements the
storeAll() method.
The value of the element is expressed as a percentage of the
write-delay interval. Valid values are doubles in the interval
[0.0, 1.0].
Used in: read-write-backing-map-scheme
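For example (illustrative values): with a write-delay of D = 10 seconds and a
write-batch-factor of F = 0.25, D' = (1.0 - 0.25) * 10 = 7.5 seconds, so any entry
that has been queued for at least 7.5 seconds is considered soft-ripe and is included
in the batched write.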
The write-requeue-threshold element specifies the size of
the write-behind queue at which additional actions could be
taken.
Prior to Coherence 3.6 reaching this threshold would cause a
permanent loss of the corresponding store operations. As of
Coherence 3.6 this value is only used to control the
frequency of the corresponding log messages. For example,
the value of 100 will produce a log message every time the
size of the write queue is a multiple of 100.
Valid values are positive integers and zero.
If zero, the requeueing is disabled.
Used in: read-write-backing-map-scheme
The write-max-batch-size element specifies the maximum number
of entries to write in a single storeAll operation.
Valid values are positive integers or zero. Default value is
128 entries.
If write behind is disabled this value has no effect.
Used in: read-write-backing-map-scheme
The refresh-ahead-factor element is used to
calculate the "soft-expiration" time for cache entries.
Soft-expiration is the point in time prior to the actual
expiration after which any access request for an entry will
schedule an asynchronous load request for the entry. This
element is only applicable for a ReadWriteBackingMap which
has an internal LocalCache with scheduled automatic
expiration.
The value of this element is expressed as a percentage of
the internal LocalCache expiration interval. Valid values
are doubles in the interval [0.0, 1.0].
If zero, refresh-ahead scheduling will be disabled.
Used in: read-write-backing-map-scheme
The cachestore-timeout element is used to specify the
timeout interval to use for cachestore read and write
operations. If zero is specified, the default service
guardian timeout will be used. If a cache store operation
times out, the executing thread will be interrupted and
may ultimately lead to the termination of the cache
service.
Note: As of Coherence 3.6, timeouts of asynchronous
CacheStore operations (e.g. Refresh-Ahead, Write-Behind)
will not result in service termination.
The value of this element must be in
the following format:
(\d)+((.)(\d)+)?[MS|ms|S|s|M|m|H|h|D|d]?
where the first non-digits (from left to right) indicate
the unit of time duration:
-MS or ms (milliseconds)
-S or s (seconds)
-M or m (minutes)
-H or h (hours)
-D or d (days)
If the value does not contain a unit, a unit of milliseconds
is assumed.
Default value is 0.
Used in: read-write-backing-map-scheme
The rollback-cachestore-failures element specifies whether
or not exceptions caught during synchronous cachestore
operations are rethrown to the calling thread (possibly
over the network to a remote member).
If the value of this element is false, an exception caught
during a synchronous cachestore operation is logged locally
and the internal cache is updated.
If the value is true, the exception is rethrown to the
calling thread and the internal cache is not changed. If
the operation was called within a transactional context,
this would have the effect of rolling back the current
transaction.
Valid values are "true" or "false". Default value is true
as of Coherence 3.6.
Used in: read-write-backing-map-scheme
NOTE: The thread-count element is deprecated and is replaced
by setting the thread-count-min and thread-count-max elements
to the same value.
The thread-count element specifies the number of daemon
threads. Usage of daemon threads varies for different
service types.
If zero or negative, the service does not use daemon threads
and all relevant tasks are performed on the service thread.
Furthermore, if negative, tasks are performed on the caller's
thread where possible.
The absence of either a thread-count element or a thread-count
value will enable the dynamic thread pool.
Used in: distributed-scheme-type, transactional-scheme,
invocation-scheme, proxy-scheme
The thread-count-max element specifies the maximum number
of daemon threads. Usage of daemon threads varies for
different service types.
If zero or negative, the service does not use daemon threads
and all relevant tasks are performed on the service thread.
Furthermore, if negative, tasks are performed on the caller's
thread where possible.
Valid values are integers greater or equal to the value of
the thread-count-min element. Default value is Integer.MAX_VALUE.
Used in: distributed-scheme-type, transactional-scheme,
invocation-scheme, proxy-scheme, paged-topic-scheme
The thread-count-min element specifies the minimum number
of daemon threads. Usage of daemon threads varies for
different service types.
If zero or negative, the service does not use daemon threads
and all relevant tasks are performed on the service thread.
Furthermore, if negative, tasks are performed on the caller's
thread where possible.
Valid values are integers less than or equal to the value of
the thread-count-max element. Default value is 1.
Used in: distributed-scheme-type, transactional-scheme,
invocation-scheme, proxy-scheme
The standard-lease-milliseconds element specifies the duration
of the standard lease in milliseconds. Once a lease has
aged past this number of milliseconds, the lock will
automatically be released. Set this value to zero to specify a
lease that never expires. The purpose of this setting is to
avoid deadlocks or blocks caused by stuck threads; the value
should be set higher than the longest expected lock duration
(e.g. higher than a transaction timeout). It is also recommended
to set this value higher than the packet-delivery/timeout-milliseconds
value in the tangosol-coherence.xml descriptor.
Valid values are positive integers and zero.
Default value is the value specified in the
tangosol-coherence.xml descriptor.
Used in: replicated-scheme
The lease-granularity element specifies the lease
ownership granularity.
Valid values are "thread" and "member". A value of "thread"
means that locks are held by a thread that obtained them
and can only be released by that thread. A value of
"member" means that locks are held by a cluster node and
any thread running on the cluster node that obtained the
lock can release it.
Default value is the value specified in the
tangosol-coherence.xml descriptor.
Used in: replicated-scheme, distributed-scheme-type
The local-storage element specifies whether or not
this member will store a portion of the data managed by
the partitioned (distributed) cache service.
Valid values are "true" or "false". A value of false
means that the cluster member will not store any of
the data locally. A value of true means that the cluster
member will store its fair share of the data.
Default value is the value specified in the
tangosol-coherence.xml descriptor.
Used in: distributed-scheme-type, transactional-scheme
The partition-count element specifies the number of partitions
that a partitioned (distributed) cache will be "chopped up"
into. Each member running the partitioned cache service that
has the local-storage option set to true will manage a "fair"
(balanced) number of partitions.
The number of partitions should be a prime number and
sufficiently large such that a given partition is expected
to be no larger than 50MB in size.
Good defaults for example service storage sizes are provided below:
service storage partition-count
_______________ ______________
100M 257
1G 509
10G 2039
50G 4093
100G 8191
A list of the first 1,000 primes can be found at
http://www.utm.edu/research/primes/lists/small/1000.txt
Valid values are positive integers between 1 and 32767.
Default value is the value specified in the
tangosol-coherence.xml descriptor.
Used in: distributed-scheme-type, transactional-scheme
The transfer-threshold element specifies the threshold for
the primary pages (partitions) distribution in kilobytes.
When a new node joins the partitioned (distributed) cache
service or when a member of the partitioned cache service
leaves, the remaining nodes perform a task of partition ownership
re-distribution. During this process, the existing data gets
re-balanced along with the ownership information. This
parameter indicates a preferred message size for data transfer
communications. Setting this value lower will make the
distribution process take longer, but will reduce network
bandwidth utilization during this activity.
Valid values are positive integers.
Default value is the value specified in the
tangosol-coherence.xml descriptor.
Used in: distributed-scheme-type, transactional-scheme
The backup-count element specifies the number of members of
the partitioned (distributed) cache service that hold the
backup data for each unit of storage in the cache.
A value of 0 means that in the case of abnormal termination,
some portion of the data in the cache will be lost. A value of
N means that if up to N cluster nodes terminate at once, the
cache data will be preserved.
To maintain a partitioned cache of size M, the total memory
usage in the cluster does not depend on the number of cluster
nodes and will be on the order of M*(N+1).
Recommended values are 0, 1 or 2.
Default value is the value specified in the
tangosol-coherence.xml descriptor.
Used in: distributed-scheme-type, transactional-scheme
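For illustration, the storage and redundancy settings above might be combined in a
distributed scheme as in the following sketch; the partition-count of 2039 corresponds
to the 10G row of the sizing table above, and the other values are illustrative:
<distributed-scheme>
  <scheme-name>sized-distributed</scheme-name>
  <local-storage>true</local-storage>
  <partition-count>2039</partition-count>
  <backup-count>1</backup-count>
  <backing-map-scheme>
    <local-scheme/>
  </backing-map-scheme>
</distributed-scheme>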
The backup-count-after-writebehind element specifies the
number of members of the partitioned (distributed) cache
service that will retain backup data that does _not_ require
write-behind, i.e. data that is not vulnerable to being lost
even if the entire cluster were shut down.
Specifically, if a unit of storage is marked as requiring
write-behind, then it will be backed up on the number of
members specified by the backup-count element, and if the
unit of storage is not marked as requiring write-behind,
then it will be backed up only by the number of members
specified by the backup-count-after-writebehind element.
The value must be in the range [0..backup-count].
Default value is the value specified in the backup-count
element.
Used in: distributed-scheme-type
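A minimal sketch combining this element with a write-behind backing map; the
scheme name is a placeholder and the cache store wiring is omitted:
<distributed-scheme>
  <scheme-name>example-write-behind</scheme-name>
  <backup-count>1</backup-count>
  <!-- entries already persisted by write-behind keep no backup copy -->
  <backup-count-after-writebehind>0</backup-count-after-writebehind>
  <backing-map-scheme>
    <read-write-backing-map-scheme>
      <internal-cache-scheme>
        <local-scheme/>
      </internal-cache-scheme>
      <!-- cachestore-scheme and write-delay omitted for brevity -->
    </read-write-backing-map-scheme>
  </backing-map-scheme>
</distributed-scheme>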
The backup-storage element contains the backup
storage configuration info.
Used in: distributed-scheme-type
The type element specifies the type of the storage
used to hold the backup data.
Valid values are "on-heap", "off-heap", "file-mapped",
"custom" or "scheme".
The corresponding implementation classes are
- com.tangosol.util.SafeHashMap,
- com.tangosol.io.nio.BinaryMap using
com.tangosol.io.nio.DirectBufferManager,
- com.tangosol.io.nio.BinaryMap using
com.tangosol.io.nio.MappedBufferManager,
- the class specified by the backup-storage/class-name
element,
- the map returned by the ConfigurableCacheFactory for
the scheme referred to by the
backup-storage/scheme-name element.
The "off-heap" and "file-mapped" options are only
available with JDK 1.4 and later.
Default value is the value specified in the
tangosol-coherence.xml descriptor.
Used in: backup-storage
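As a sketch, a distributed-scheme that keeps its backup partitions outside
the Java heap (names are illustrative):
<distributed-scheme>
  <scheme-name>example-distributed</scheme-name>
  <backup-count>1</backup-count>
  <backup-storage>
    <!-- BinaryMap over DirectBufferManager, per the list above -->
    <type>off-heap</type>
  </backup-storage>
  <backing-map-scheme>
    <local-scheme/>
  </backing-map-scheme>
</distributed-scheme>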
The suspect-protocol-enabled element is used to enable or
disable the Coherence*Extend-TCP rogue client connection
detection algorithm. This algorithm monitors client connections,
looking for abnormally slow or abusive clients. When a rogue
client connection is detected, the algorithm closes the
connection in order to protect the proxy server from running
out of memory.
Valid values are true and false.
The suspect protocol is enabled by default.
Used in: tcp-acceptor
The suspect-buffer-size element specifies the outgoing
connection backlog (in bytes) after which the corresponding
client connection is marked as suspect. A suspect client
connection is then monitored until it is no longer suspect
or it is closed in order to protect the proxy server from
running out of memory.
The value of this element must be in the following format:
[\d]+[[.][\d]+]?[K|k|M|m|G|g|T|t]?[B|b]?
where the first non-digit (from left to right) indicates
the factor with which the preceding decimal value should
be multiplied:
- K or k (kilo, 2^10)
- M or m (mega, 2^20)
- G or g (giga, 2^30)
- T or t (tera, 2^40)
If the value does not contain a factor, a factor of one
is assumed.
Default value is 10000000.
Used in: tcp-acceptor
The suspect-buffer-length element specifies the outgoing
connection backlog (in messages) after which the corresponding
client connection is marked as suspect. A suspect client
connection is then monitored until it is no longer suspect or
it is closed in order to protect the proxy server from running
out of memory.
Default value is 10000.
Used in: tcp-acceptor
The nominal-buffer-size element specifies the outgoing connection
backlog (in bytes) at which point a suspect client connection is
no longer considered to be suspect.
The value of this element must be in the following format:
[\d]+[[.][\d]+]?[K|k|M|m|G|g|T|t]?[B|b]?
where the first non-digit (from left to right) indicates the factor
with which the preceding decimal value should be multiplied:
- K or k (kilo, 2^10)
- M or m (mega, 2^20)
- G or g (giga, 2^30)
- T or t (tera, 2^40)
If the value does not contain a factor, a factor of one
is assumed.
Default value is 2000000.
Used in: tcp-acceptor
The nominal-buffer-length element specifies the outgoing connection
backlog (in messages) at which point a suspect client connection
is no longer considered to be suspect.
Default value is 2000.
Used in: tcp-acceptor
The limit-buffer-size element specifies the outgoing
connection backlog (in bytes) at which point the
corresponding client connection must be closed
in order to protect the proxy server from running out
of memory.
The value of this element must be in the following
format:
[\d]+[[.][\d]+]?[K|k|M|m|G|g|T|t]?[B|b]?
where the first non-digit (from left to right) indicates
the factor with which the preceding decimal value should
be multiplied:
- K or k (kilo, 2^10)
- M or m (mega, 2^20)
- G or g (giga, 2^30)
- T or t (tera, 2^40)
If the value does not contain a factor, a factor of one
is assumed.
Default value is 100000000.
Used in: tcp-acceptor
The limit-buffer-length element specifies the outgoing connection
backlog (in messages) at which point the corresponding client
connection must be closed in order to protect the proxy server
from running out of memory.
Default value is 60000.
Used in: tcp-acceptor
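Pulling the preceding tcp-acceptor elements together, a proxy-scheme sketch
that sets the suspect-protocol thresholds explicitly; the names are
illustrative, the values are round numbers close to the defaults, and element
ordering may differ from the schema's required sequence:
<proxy-scheme>
  <scheme-name>example-proxy</scheme-name>
  <service-name>ExampleProxyService</service-name>
  <acceptor-config>
    <tcp-acceptor>
      <suspect-protocol-enabled>true</suspect-protocol-enabled>
      <!-- byte-based backlog thresholds -->
      <suspect-buffer-size>10M</suspect-buffer-size>
      <nominal-buffer-size>2M</nominal-buffer-size>
      <limit-buffer-size>100M</limit-buffer-size>
      <!-- message-based backlog thresholds -->
      <suspect-buffer-length>10000</suspect-buffer-length>
      <nominal-buffer-length>2000</nominal-buffer-length>
      <limit-buffer-length>60000</limit-buffer-length>
    </tcp-acceptor>
  </acceptor-config>
  <autostart>true</autostart>
</proxy-scheme>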
The defer-key-association-check element specifies whether a
key should be checked for KeyAssociation by the extend client
(false) or deferred until the key is received by the
PartitionedService (true).
Set the defer-key-association-check element value to true
when the Java key class defined on the Coherence
cluster-side should be used for KeyAssociation processing.
Valid values are "true" or "false". Default value is false.
Used in: remote-cache-scheme
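For illustration (host, port and names are placeholders), a
remote-cache-scheme that defers key association to the cluster side:
<remote-cache-scheme>
  <scheme-name>example-remote</scheme-name>
  <service-name>ExampleRemoteCache</service-name>
  <initiator-config>
    <tcp-initiator>
      <remote-addresses>
        <socket-address>
          <address>proxy.example.com</address>
          <port>9099</port>
        </socket-address>
      </remote-addresses>
    </tcp-initiator>
  </initiator-config>
  <!-- let the cluster-side PartitionedService evaluate KeyAssociation -->
  <defer-key-association-check>true</defer-key-association-check>
</remote-cache-scheme>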
The worker-priority element specifies the priority for
the worker threads.
Valid range: 1-10; the default is 5 (Thread.NORM_PRIORITY).
Used in: distributed-scheme-type, transactional-scheme,
invocation-scheme, proxy-scheme
The event-dispatcher-priority element specifies the priority for
the event dispatcher thread of each service.
Valid range: 1-10; the default is 10 (Thread.MAX_PRIORITY).
Used in: distributed-scheme-type, replicated-scheme, optimistic-scheme,
transactional-scheme, invocation-scheme, proxy-scheme
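A small sketch of both priority elements on a distributed-scheme (values and
names are illustrative):
<distributed-scheme>
  <scheme-name>example-distributed</scheme-name>
  <!-- run worker threads slightly below normal priority -->
  <worker-priority>3</worker-priority>
  <!-- keep the event dispatcher at maximum priority (the default) -->
  <event-dispatcher-priority>10</event-dispatcher-priority>
  <backing-map-scheme>
    <local-scheme/>
  </backing-map-scheme>
</distributed-scheme>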
The cache-values element specifies whether the view is to maintain
both keys and values (the default, i.e. true) or only keys.
Valid values are "true" or "false".
Default value is true.
Used in: view-scheme
Since: 12.2.1.4.11
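As a sketch, a key-only view over a distributed scheme referenced by name;
the back-scheme wiring and the referenced scheme name are illustrative
assumptions:
<view-scheme>
  <scheme-name>example-view</scheme-name>
  <back-scheme>
    <distributed-scheme>
      <scheme-ref>example-distributed</scheme-ref>
    </distributed-scheme>
  </back-scheme>
  <!-- keep only keys in the view -->
  <cache-values>false</cache-values>
</view-scheme>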
The write-behind-remove element specifies whether remove
operations should be added to the write-behind queue. If
true and the cachestore is write-behind enabled, cache
remove operations will be added to the write-behind queue
and the cachestore remove operations will take place
asynchronously.
If false (default), the write-behind delay does not apply to
remove operations and remove operations will be synchronous
(i.e. write-through).
Valid values are "true" or "false".
Default value is false.
Used in: read-write-backing-map-scheme
Since: 12.2.1.4.18
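A minimal sketch of a read-write-backing-map-scheme with asynchronous
removes; the cache store class name is a placeholder:
<read-write-backing-map-scheme>
  <internal-cache-scheme>
    <local-scheme/>
  </internal-cache-scheme>
  <cachestore-scheme>
    <class-scheme>
      <class-name>com.example.ExampleCacheStore</class-name>
    </class-scheme>
  </cachestore-scheme>
  <write-delay>10s</write-delay>
  <!-- removes are queued and applied to the cache store asynchronously -->
  <write-behind-remove>true</write-behind-remove>
</read-write-backing-map-scheme>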
The page-size element specifies the target page size.
The default value is 1MB.
Used in: paged-topic-scheme
This enum type specifies the storage scheme used to hold topic values and metadata.
Valid values are "on-heap", "flashjournal" or "ramjournal".
Default value is "on-heap".
Used in: paged-topic-scheme
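A sketch of a paged-topic-scheme tying the two preceding elements together;
it assumes the enum above is exposed as a storage element and that page-size
accepts the memory-size notation used elsewhere in this file (names are
illustrative):
<paged-topic-scheme>
  <scheme-name>example-topic</scheme-name>
  <service-name>ExampleTopicService</service-name>
  <!-- hold topic pages on the Java heap -->
  <storage>on-heap</storage>
  <!-- target roughly one megabyte per page -->
  <page-size>1MB</page-size>
  <autostart>true</autostart>
</paged-topic-scheme>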
The subscriber-groups element defines one or more durable subscription groups in a topic-mapping.
These groups are created along with the topic and are therefore guaranteed to exist before any
data is published to the topic.
Used in: topic-mapping
The subscriber-group element defines a durable subscription group for a topic-mapping.
Used in: subscriber-groups
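For illustration, a topic-mapping that pre-creates one durable group; the
topic name, scheme name, and the name child element shown for
subscriber-group are assumptions for this sketch:
<topic-mapping>
  <topic-name>orders</topic-name>
  <scheme-name>example-topic</scheme-name>
  <subscriber-groups>
    <!-- created together with the topic, before any data is published -->
    <subscriber-group>
      <name>order-processors</name>
    </subscriber-group>
  </subscriber-groups>
</topic-mapping>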
The retain-consumed element specifies whether a topic
should retain values even after all registered subscribers
have consumed them.
Valid values are true or false.
Default value is false.
Used in: paged-topic-scheme
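Finally, a brief sketch (illustrative name) of a topic scheme that keeps
values after every subscriber has read them:
<paged-topic-scheme>
  <scheme-name>example-retained-topic</scheme-name>
  <!-- values remain available even after all subscriber groups consume them -->
  <retain-consumed>true</retain-consumed>
</paged-topic-scheme>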