# com.sun.enterprise.v3.admin.cluster.LocalStrings.properties
#
# DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS HEADER.
#
# Copyright (c) 2010-2013 Oracle and/or its affiliates. All rights reserved.
#
# The contents of this file are subject to the terms of either the GNU
# General Public License Version 2 only ("GPL") or the Common Development
# and Distribution License("CDDL") (collectively, the "License"). You
# may not use this file except in compliance with the License. You can
# obtain a copy of the License at
# https://glassfish.dev.java.net/public/CDDL+GPL_1_1.html
# or packager/legal/LICENSE.txt. See the License for the specific
# language governing permissions and limitations under the License.
#
# When distributing the software, include this License Header Notice in each
# file and include the License file at packager/legal/LICENSE.txt.
#
# GPL Classpath Exception:
# Oracle designates this particular file as subject to the "Classpath"
# exception as provided by Oracle in the GPL Version 2 section of the License
# file that accompanied this code.
#
# Modifications:
# If applicable, add the following below the License Header, with the fields
# enclosed by brackets [] replaced by your own identifying information:
# "Portions Copyright [year] [name of copyright owner]"
#
# Contributor(s):
# If you wish your version of this file to be governed by only the CDDL or
# only the GPL Version 2, indicate your decision by adding "[Contributor]
# elects to include this software in this distribution under the [CDDL or GPL
# Version 2] license." If you don't indicate a single choice of license, a
# recipient has the option to distribute your version of this file under
# either the CDDL, the GPL Version 2 or to extend the choice of license to
# its licensees as provided above. However, if you add GPL Version 2 code
# and therefore, elected the GPL Version 2 license, then the option applies
# only if the new code is made subject to such option by the copyright
# holder.
#
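#
# Note on message formatting: parameterized values in this bundle are
# rendered through java.text.MessageFormat, so {0}, {1}, ... are positional
# arguments and a literal apostrophe must be doubled ('') in messages that
# take parameters. The server resolves these keys through its localized
# string manager; the sketch below uses plain ResourceBundle/MessageFormat
# to show the same substitution (illustrative only, not the server code):
#
#   import java.text.MessageFormat;
#   import java.util.ResourceBundle;
#
#   ResourceBundle bundle = ResourceBundle.getBundle(
#           "com.sun.enterprise.v3.admin.cluster.LocalStrings");
#   // Look up the pattern, e.g. "{0} was restarted."
#   String pattern = bundle.getString("restart.instance.success");
#   // Substitute the positional argument with an instance name.
#   String message = MessageFormat.format(pattern, "instance1");
#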
#####restart-instance
restart.instance.notInstance=_restart-instance only works on instances. This is a {0}
#Below {0} is the runtime-type. currently DAS, INSTANCE, EMBEDDED, NODEAGENT or ALL
restart.instance.notDas=restart-instance only works with DAS. This is a {0}
restart.notRestartable={0} reports that it is not restartable. \n\
This usually means that the password file that was originally used to start {0} \
has been deleted or is not readable now.\n\
Please stop and then start {0} - or fix the password file.
restart.instance.success={0} was restarted.
restart.instance.racError=Error trying to restart the instance named {0} : {1}
restart.instance.timeout=Timed out waiting for {0} to restart
restart.instance.nopid=Could not get the Process ID for {0}
restart.instance.startSucceeded=The instance was not running. A normal start \
was successful.\nMessage from start-instance: {0}
restart.instance.startFailed=The instance is not running. \
A start was attempted but it was unsuccessful. \n\
Please try to start the instance with start-local-instance.\n\
Message from start-instance: {0}
## StopInstanceCommand
stop.instance.command=Stop a running instance
stop.instance.init=Instance {0} shutdown initiated
stop.instance.notDas=stop-instance only works with DAS. This is a {0}
stop.instance.notInstance=_stop-instance only works on instances. This is a {0}
stop.instance.noInstanceName=You must specify an instance name.
stop.instance.noSuchInstance=There is no instance named {0} in this domain.
stop.instance.noPort=Cannot find the Admin Port for the instance named {0}.
stop.instance.noHost=Cannot find the name of the host for the instance named {0}.
stop.instance.racError=Error trying to stop the instance named {0} : {1}
stop.instance.success=The instance, {0}, is stopped.
stop.instance.timeout=Timed out waiting for {0} to stop.
stop.instance.timeout.completely=Timed out waiting for {0} to stop completely.
## StartInstanceCommand
start.instance.success=The instance, {0}, was started on host {1}
start.instance.already.running=Instance {0} is already running.
start.instance.command=Start an instance
start.instance.init=Instance start initiated
start.instance.notAnInstanceOrDas=start-instance only works with instances or DAS. This is a {0}
start.instance.noInstanceName=You must specify an instance name when you call start-instance on DAS.
start.instance.noSuchInstance=There is no instance named {0} in this domain.
start.instance.noSuchNodeRef=There is no node named {0} in this domain.
# 0=instance, 1=node, 2=host
start.instance.failed=Could not start instance {0} on node {1} ({2}).
start.instance.timeout=Timed out waiting for {0} to start.
stop.local.instance.kill=Failed to stop instance {0} on node {1} using {2}
## ListInstancesCommand
list.instances.command=Display status of server instances
list.instances.onlyRunsOnDas=This is a GlassFish server instance and list-instances can only run on DAS.
## SynchronizeFiles
sync.unknown.instance=Unknown server instance: {0}
sync.exception.reading=SynchronizeFiles: Exception reading request
sync.exception.processing=SynchronizeFiles: Exception processing request
sync.bad_output_file=The output file, {0}, cannot be written to.
## ExportSyncBundle
sync.bad_temp_file=Could not create a temp file in the system temp file area: {0}
sync.unknown.instanceOrCluster=Unknown stand-alone server or cluster: {0}
sync.empty_cluster=Unable to generate sync bundle. Cluster {0} has no instances.
export.sync.bundle.success=Sync bundle: {0}
export.sync.bundle.fail=Unable to export sync bundle: {0}
export.sync.bundle.retrieveFailed=Could not retrieve sync bundle: {0}
export.sync.bundle.exportFailed=Could not export synchronized content to file {0}: {1}.
export.sync.bundle.closeStreamFailed=Could not close stream on file {0} : {1}
export.sync.bundle.createDirFailed=Could not create directory {0}.
## ServerSynchronizer
serversync.unknown.dir=Unknown directory: {0}
serversync.exception.processing=ServerSynchronizer: Exception processing request
## GlassFishClusterExecutor
glassfish.clusterexecutor.notargets=Did not find any suitable instances for target {0}; command executed on DAS only
glassfish.clusterexecutor.notargetspecified=A target was not specified in the command line; aborting further execution of CLI
glassfish.clusterexecutor.supplementalcmdfailed=A supplemental command, {0}, failed
glassfish.clusterexecutor.dynrecfgdisabled=WARNING: The command was not replicated to all cluster instances because the dynamic-reconfiguration-enabled flag is set to false for cluster(s) {0}
noNode=node is null
noUpdate.nodeInUse=Cannot update node {0}. It is in use by a server instance \
and therefore attribute {1} cannot be changed.
notConfigNode=Node {0} is not a config node.
notSshNode=Node {0} is not an SSH node.
missingNodeRef=Instance {0} has no associated node.
missingNode=Node {0} does not exist.
attribute.mismatch=Attribute mismatch for node ''{0}'': the value for the ''{1}'' attribute from the command ({2}) does not match the value in the DAS configuration ({3})
attribute.null=Null value for attribute {1} on node {0}: the command value is {2} and the node value is {3}
top.instance.notRunning=Instance {0} is not running.
invalid.installdir=Installdir value {0} is not a valid GlassFish installation.
## CreateInstanceCommand
generic.config=The existing config name to be used by this cluster
create.instance=Creates a new GlassFish server instance
create.instance.cluster=The cluster name this new instance should be attached to
create.instance.success=The instance, {0}, was created on host {1}
create.instance.missing.info=Node {0} does not have the {1} attribute \
set and therefore {2} cannot be used with this node. You must \
either add the {1} attribute using the {3} command, or create the \
instance by running {4} directly on the instance host.
create.instance.config=The instance {0} was registered with the DAS. You must run the following command on the instance host to complete the instance creation: \n {1}
create.instance.usagetext=create-instance --node <node> \n\t[--config <config> | --cluster <cluster>] \n\t[--lbenabled[=<lbenabled>]] \n\t[--checkports[=<checkports>]] [--portbase <portbase>] \n\t[--systemproperties <systemproperties>] \n\t[-?|--help[=<help>]] instance_name
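# For example, a clustered instance might be created with a command like the
# following (the node, cluster, and instance names are illustrative):
#   asadmin create-instance --node node1 --cluster cluster1 instance1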
creatingInstance=Creating instance {0} on {1}
mustRunLocal=Could not access host {0} for node {1}. You must run the following command on host {0} to complete instance creation: {2}
noSuchNode=There is no node named {0} in this domain.
notConfigNodeType=Node {0} is not type CONFIG
notRemoteNodeType=Node {0} is not a remote type (SSH or DCOM).
lbenabledNotForStandaloneInstance=The lbenabled option is not supported for standalone instances.
Instance.duplicateInstanceDir=Server instance {0} is trying to use the same directory as instance {1}
notAllowed=This command can only be run on DAS.
## StartClusterCommand
start.cluster.command=Start a cluster
start.cluster=Starting cluster {0}
stop.cluster.command=Stop a cluster
stop.cluster=Stopping cluster {0}
cluster.command.notDas=This is a GlassFish server instance and {0} can only run on DAS.
cluster.command.instancesSucceeded=The command {0} executed successfully for: {1}
cluster.command.instancesFailed=The command {0} failed for: {1}
cluster.command.instancesTimedOut=Timed out while waiting for command {0} to complete for: {1}
cluster.command.unknownCluster=There is no cluster named {0} in this domain.
cluster.command.noInstances=The cluster {0} contains no instances.
cluster.command.interrupted=Command for cluster {0} was interrupted after \
getting responses from {1} of {2} instances for command {3}.
cluster.command.executing=Executing {0} on {1} instances.
create.instance.local.boot.failed=Successfully created instance {0} in the DAS configuration, but failed to retrieve configuration files during bootstrap.
create.instance.remote.boot.failed=Successfully created instance {0} in the DAS configuration, but failed to install configuration files for the instance on node {3} during bootstrap.\n\nSSH configuration information \n{1}\n Additional failure info: {2}.
deleting.instance=Deleting instance {0} on {1}
instance.shutdown=Instance {0} must be stopped before it can be deleted.
delete.instance.success=The instance, {0}, was deleted from host {1}
delete.instance.failed=Failed to delete instance {0} from {1}
#delete.instance.config.failed: {0} server instance name, {1} is hostname
delete.instance.config.failed=Successfully removed the files for instance {0} \
from {1}, but failed to delete the instance from the DAS configuration.
delete.instance.filesystem.failed=Successfully removed instance {0} from the \
DAS configuration, but failed to remove the instance files from node \
{1} ({2}).
restart.instance.command=Restart a running server instance
list.instances.badTarget=The target, {0}, is not an instance, cluster, domain, node or config.
list.instances.targetWithStandaloneOnly=The target and standaloneonly options are mutually exclusive. Please choose just one of them.
list.instances.longAndNoStatus=The long and nostatus options are mutually exclusive. Please choose just one of them.
list.instances.serverTarget=You specified the target as the DAS. That''s illegal. DAS is not an instance server.
Config.badConfigNames=You must specify a source and destination config.
Config.noSuchConfig=Config {0} does not exist.
#Config.copyConfigError {0} is the message from the Exception that was thrown
Config.copyConfigError=Error copying the config caused by {0}.
Config.configExists=Config {0} already exists.
Config.inUseConfig=Config {0} is in use. It must not be referenced by any server instances or clusters.
Config.defaultConfig=The default configuration template named default-config cannot be deleted.
Config.deleteConfigFailed=Unable to remove config {0}
create.instance.filesystem.failed=Successfully created instance {0} in the \
DAS configuration, but failed to create the instance files on node {1} ({2}).
# Messages for connecting to SSH nodes
# 0=node name, 1=hostname
node.not.ssh=This command requires connecting to node {0} ({1}) \
using SSH to complete its operation, but node {0} is not configured to use SSH. \
You may use update-node-ssh to configure node {0} to use SSH.
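# As an illustration of the update-node-ssh suggestion above (the node name,
# host, and user shown are hypothetical; consult the update-node-ssh help for
# the exact options available):
#   asadmin update-node-ssh --nodehost remotehost.example.com --sshuser admin node1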
# 0=node name, 1=hostname, 2=more info
node.ssh.bad.connect=This command requires connecting to host {1} using \
SSH to complete its operation, but it failed to connect:\n\n \
{2}\n\nPlease verify you have SSH configured correctly on your system \
with the proper attributes set on node {0}. You may use update-node-ssh \
to modify these attributes. See the DAS log file for more information.
# 0=hostname, 1=installdir, 2=more info
node.remote.tocomplete=To complete this operation run the following command \
locally on host {0} from the GlassFish install location {1}:\n\n \
{2}
# A command return a non-zero status
# 0=node, 1=node-host, 2=output from failed command, 3=command
node.command.failed=Command ''{3}'' failed on node {0} ({1}): {2}
node.command.failed.short=Command failed on node {0} ({1}): {2}
node.command.failed.local.exception=Failed to execute local command: {0}
node.command.failed.local.details=Failed to execute local command ''{1}'': {0}
# Failed to execute a command via SSH
# 0=node, 1=node-host, 2=command, 3=exception info, 4=SSH settings
node.command.failed.ssh.details=Failed to run ''{2}'' on node {0} ({1}): {3}. \
SSH settings: {4}
create.node.ssh.continue.force=Continuing with node creation due to use of --force.
create.node.ssh.or.dcom.not.created={0} node not created. To force creation of the \
node with these parameters rerun the command \
using the --force option.
create.node.ssh.install.success=Successfully installed GlassFish on {0}.
delete.node.ssh.uninstall.failed=Node {0} deleted successfully but failed to uninstall GlassFish on {1}. Please run uninstall-node manually.
delete.node.ssh.uninstall.success=Successfully uninstalled GlassFish on {0}.
create.node.ssh.no.installdir=Node does not have an install directory.
update.node.ssh.continue.force=Continuing with node update due to use of --force.
update.node.ssh.not.updated=Node not updated. To force an update of the \
node with these parameters rerun the command \
using the --force option.
update.node.config.missing.attribute={1} attribute is required to update node {0}.
update.node.config.defaultnode=Cannot update node {0}. It is the built-in localhost node.
node.ssh.invalid.params=Warning: some parameters appear to be invalid.
ssh.bad.connect=Could not connect to host {0} using {1}.
ssh.invalid.port=Invalid port number {0}.
key.path.not.absolute=Key file path {0} must be an absolute path.
key.path.not.found=Key file {0} not found. The key file path must be reachable by the DAS.
key.path.not.readable=Key file {0} is not readable by DAS user {1}.
no.such.password.alias=Password alias {0} was not found in domain.
ping.node.failure=Failed to validate {2} connection to node {0} ({1})
ping.node.success=Successfully made {2} connection to node {0} ({1})
#ping.glassfish.version {0} is the installation directory path of the node.
#ping.glassfish.version {1} is the GlassFish version found on the node
ping.glassfish.version=GlassFish version found at {0}:\n{1}
unknown.host=Unknown host {0}
nodehost.required=Node's host attribute must be set.
#failed.to.run {0} is the actual command that was run, {1} is the name of the node.
failed.to.run=Failed to run ''{0}'' on {1}
failed.to.update.node=Failed to update node {0}.
setup.ssh.null.sshpass=SSH password cannot be null.
setup.ssh.unalias.error=Failed to obtain SSH password from domain key store for specified password alias {0}.
setup.ssh.no.keyfile=Use --generatekey option to generate a new key since no key file was found.
setup.ssh.invalid.path=Key file path {0} must be an absolute path.
setup.ssh.already.configured=SSH public key authentication is already configured for {0}@{1}
setup.ssh.failed=SSH key setup failed: {0}
setup.ssh.conn.failed=Connection verification failed.
setup.ssh.null.keypassphrase=Failed to obtain SSH key passphrase from domain key store.
## GetHealthCommand
get.health.called=get-health called for cluster "{0}"
get.health.cluster.name=The name of the cluster for which you want the health information.
get.health.command=Get the current health of the instances in a cluster.
get.health.no.instances=No instances found for cluster {0}.
get.health.noCluster=Cluster {0} does not exist.
get.health.noGMS=Group Management Service is not running for cluster {0}. Cannot get the health of the instances without GMS running.
get.health.noHistoryError=Unexpected error: GMS Health History not found.
get.health.onlyRunsOnDas=This is a GlassFish server instance and get-health can only run on DAS.
#get.health.instance.state.since --> domain1 running since January 1, 2011
get.health.instance.state.since={0} {1} since {2}
secure.admin.boot.errCreDir=Could not create directories for {0}. No further information is available.
secure.admin.boot.errSetLastMod=Could not set the lastModified date for {0}. No further information is available.
bad.dcom.ping=Error connecting to remote with DCOM: {0}
dcom.no.remote.install=Could not find a remote GlassFish installation on host {0} at {1}
dcom.no.connection=Couldn''t connect via DCOM to remote host: {0}
internal.error=Internal Error: {0}
no.mkdir=Could not create the directory on the remote host: {0}
dcom.connect.ok=Successfully connected to {0} via DCOM. Also successfully \nran the version command on the remote GlassFish installation.
dcom.no.remote.access=Could not access the files, {0} on host {1} via DCOM.
dcom.no.remote.file=The remote file, {0} doesn''t exist on {1}
dcom.no.remote.file.access=Cannot access the remote file system. Is UAC on?
dcom.access.ok=Successfully accessed {0} on {1} using DCOM.
dcom.no.write=Could not write {0} to {1} on {2} using DCOM.
dcom.write.ok=Successfully wrote {0} to {1} on {2} using DCOM.
dcom.run.ok=Successfully ran the test script on {0} using DCOM.\n\
The script simply ran the DIR command. Here are the first few \
lines from the output of the dir command on the remote machine:\n{1}
dcom.no.run=Could not run the test script on {0} using DCOM.
validate.dcom.getbyname=\nSuccessfully resolved host name to: {0}
validate.dcom.connect=Successfully connected to {0} at port {1} on host {2}.
validate.dcom.no.connect=Can''t connect to {0} at port {1} on host {2}.\n\
This is usually caused by a firewall blocking the port or the Server Service being stopped.
dcom.no.wmi=Could not connect to WMI (Windows Management Interface) on {0}.
dcom.wmi.ok=Successfully accessed WMI (Windows Management Interface) on {0}. There are {1} processes running on {0}.
dcom.wmi.procinfolegend=Below are the command lines for all the remote processes that have one: \n
dcom.nopassword=Missing Windows password. If you are using asadmin, \
specify the remote Windows password in a file as follows:\n\
AS_ADMIN_WINDOWSPASSWORD=windows-password\n\
Specify the path of the password file to asadmin with the --passwordfile (or -W) option.
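#
# As a usage illustration (the file path and subcommand below are hypothetical;
# the password entry itself follows the format named in the message above):
#
#   # contents of /path/to/dcompasswords.txt
#   AS_ADMIN_WINDOWSPASSWORD=windows-password
#
#   asadmin --passwordfile /path/to/dcompasswords.txt <subcommand> ...
#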
dcom.no.installdir=The configuration for the node is invalid. There is no value for the installdir. \
Try running update-node-dcom and specify the install directory for GlassFish.
dcom.no.jdk=Could not find javac in the Path on {0}. A JDK is required to be configured in the Path.
dcom.yes.jdk=Verified that a JDK is installed and available in the Path on {0}. javac -version returned this: {1}
dcom.no.local=Successfully verified that the host, {0}, is not the local machine as required.
dcom.yes.local=The host, {0}, is the local machine. DCOM is only for use on distributed systems.