[[_sslb_mss_load_balancer]]
= Load Balancer
.Star Cluster Topology
image::images/mss-MSSSIPLoadBalancer-dia-StarNetworkTopology.jpg[]
The {this-platform} Load Balancer is used to balance the load of SIP service requests and responses between nodes in a SIP Server cluster, such as {this-platform} JAIN SLEE or SIP Servlets.
Both {this-platform} servers can be used in conjunction with the {this-platform} Load Balancer to increase the performance and availability of SIP services and applications.
In terms of functionality, the {this-platform} Load Balancer is a simple stateless proxy server that intelligently forwards SIP session requests and responses between User Agents (UAs) on a Wide Area Network (WAN), and SIP Server nodes, which are almost always located on a Local Area Network (LAN). All SIP requests and responses pass through the {this-platform} Load Balancer.
Starting with the 2.0.0.GA release, the {this-platform} Load Balancer can handle WebSocket requests, supporting up to version 13 of the wire protocol - RFC 6455 (version 17 of the draft hybi specification - http://tools.ietf.org/html/draft-ietf-hybi-thewebsocketprotocol-17 ). The {this-platform} Load Balancer accepts HTTP requests and, if a request contains the Sec-WebSocket-Protocol header, it upgrades the connection and dispatches the request to a node capable of handling WebSocket requests (a node that has a WebSocket connector).
One major advantage of having WebSocket support for {this-platform} Load Balancer is to allow tunneling over HTTP port for SIP traffic and thus bypass proxy and firewall servers that might be blocking SIP traffic.
[[_sslb_sip_load_balancing_basics]]
== SIP Load Balancing Basics
All User Agents send SIP messages, such as `INVITE` and `MESSAGE`, to the same SIP URI (the IP address and port number of the {this-platform} Load Balancer on the WAN). The Load Balancer then parses, alters, and forwards those messages to an available node in the cluster.
If the message was sent as a part of an existing SIP session, it will be forwarded to the cluster node which processed that User Agent's original transaction request.
The SIP Server that receives the message acts upon it and sends a response back to the {this-platform} Load Balancer.
The {this-platform} Load Balancer reparses, alters and forwards the message back to the original User Agent.
This entire proxying and provisioning process is carried out independently of the User Agent, which is only concerned with the SIP service or application it is using.
By using the Load Balancer, SIP traffic is balanced across a pool of available SIP Servers, increasing the overall throughput of the SIP service or application running on the individual nodes of the cluster.
In the case of a {this-platform} server with `` capabilities, load balancing advantages are applied across the entire cluster.
The {this-platform} Load Balancer is also able to failover requests mid-call from unavailable nodes to available ones, thus increasing the reliability of the SIP service or application.
The Load Balancer increases throughput and reliability by dynamically provisioning SIP service requests and responses across responsive nodes in a cluster.
This enables SIP applications to meet the real-time demand for SIP services.
[[_sslb_load_balancer_node_discovery]]
=== Node Discovery of the {this-platform} Load Balancer
Each individual {this-platform} SIP Server in the cluster is responsible for
contacting the {this-platform} Load Balancer and relaying its health status and
regular "heartbeats", except in the case described in <>.
From these health status reports and heartbeats, the {this-platform} Load Balancer creates and maintains a list of all available and healthy nodes in the cluster, which keeps the whole cluster dynamic and capable of autoscaling with no downtime.
The Load Balancer forwards SIP requests between these cluster nodes, providing that the provisioning algorithm reports that each node is healthy and is still sending heartbeats.
If an abnormality is detected, the {this-platform} Load Balancer removes the unhealthy or unresponsive node from the list of available nodes.
In addition, mid-session and mid-call messages are failed over to a healthy node.
The {this-platform} Load Balancer first receives SIP requests from endpoints on a port that is specified in its configuration file.
The {this-platform} Load Balancer, using a round-robin algorithm, then selects a node to which it forwards the SIP requests.
The Load Balancer forwards all same-session requests to the first node selected to initiate the session, providing that the node is healthy and available.
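The round-robin selection with same-session stickiness described above can be sketched as follows (an illustrative Python sketch, not the balancer's actual code; node names are hypothetical):

[source,python]
----
from itertools import cycle

class RoundRobinBalancer:
    def __init__(self, nodes):
        self.ring = cycle(nodes)   # healthy cluster nodes, visited in turn
        self.affinity = {}         # Call-ID -> node chosen for that session

    def select_node(self, call_id):
        # Same-session requests go to the node that served the first one;
        # only a new session advances the round-robin ring.
        if call_id not in self.affinity:
            self.affinity[call_id] = next(self.ring)
        return self.affinity[call_id]
----

A failover would simply drop the dead node's entries from `affinity` and re-select from the remaining nodes.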
[[_sslb_load_balancer_keepalive]]
=== Heartbeats to the {this-platform} Load Balancer
Currently the Load Balancer supports three protocols for heartbeats: RMI, a protocol based on HTTP requests, and a protocol for Kubernetes (explained in ).
For the RMI protocol, set the heartbeat protocol class to `org.mobicents.tools.heartbeat.rmi.ServerControllerRmi` and the heartbeat ports to `2000,2001`.
For the protocol based on HTTP requests, set the heartbeat protocol class to `org.mobicents.tools.heartbeat.impl.ServerController` and the heartbeat port to `2610`.
The heartbeat service based on the HTTP protocol exchanges HTTP requests and responses, each carrying a JSON object with the required data. The Load Balancer and SIP-HA listen on heartbeat ports which must be set in the configuration. For SIP-HA, the Load Balancer port to which SIP-HA will send requests must also be set.
.Heartbeat protocol
image::images/heartbeat-protocol.png[]
The protocol defines the following requests:
* Node's start request
After the Node starts, SIP-HA sends a start POST HTTP request with a JSON object
containing information about the Node (hostname, IP, ports, sessionId, etc.). The sessionId is generated
after Restcomm starts or restarts. The Node sends the start request periodically
until it gets an HTTP start response from the Load Balancer.
* Node's heartbeat request
After getting the start HTTP response, the Node sends heartbeat requests.
Each heartbeat request carries a JSON object with only the sessionId value. On each heartbeat
request the Load Balancer sends a heartbeat HTTP response.
To signal a graceful shutdown, the Node can send the shutdown request
to the Load Balancer.
* Node's shutdown request
When a Node shuts down, it sends a shutdown request. If a graceful shutdown is not
needed and the Node simply needs to stop, it can use the stop request instead.
To signal that the Load Balancer itself is stopping, the Load Balancer can send a stop HTTP request
to the Node.
* LB's stop request
When the Node gets a stop request from the Load Balancer,
it starts sending start requests to the Load Balancer until it gets
a start response.
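The JSON payloads exchanged in this protocol can be sketched as follows (a hedged illustration; the exact field names used by the Load Balancer may differ from these assumptions):

[source,python]
----
import json
import uuid

def start_request(hostname, ip, ports):
    # Sent repeatedly after node start until the LB answers
    # with a start response. Field names are assumptions.
    return json.dumps({
        "hostname": hostname,
        "ip": ip,
        "ports": ports,
        "sessionId": str(uuid.uuid4()),  # regenerated on every (re)start
    })

def heartbeat_request(session_id):
    # Periodic keepalive; carries only the sessionId value.
    return json.dumps({"sessionId": session_id})
----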
=== Additional health check of the Nodes by the {this-platform} Load Balancer
Sometimes the server goes down but the LB still considers it alive, because
ping packets keep arriving in the normal way.
This problem arises because the processing of keep-alive and SIP messages
takes place on different levels: even if SIP is not responding, ping
packets can still be sent. From the client's point of view the situation looks as follows:
the client sends messages to the LB, the LB forwards the messages to the dead server,
and no response is received. The client thus tries and fails to
connect to the server, because the LB is not aware that the server is dead.
The snag is that this issue cannot be identified via keep-alives, since
the keep-alive message flow is not interrupted in this case.
So the dead server must be identified by its ability to process messages.
This is implemented by recording the number of requests which remain without any
response and storing the timestamp of the last packet sent by the server. If both the maxRequestNumberWithoutResponse
and maxResponseTime values are exceeded, the LB removes the node.
It is important to note that the node is considered dead only when both
these values are exceeded. If only one value is exceeded, the node is
still considered alive. By monitoring these two values we can ensure
that the LB does not send any packets to a dead server, while not removing
a node which is active but has coincidentally exceeded maxResponseTime or
maxRequestNumberWithoutResponse. More details on how this works follow below.
A counter stores the number of requests sent to the server since the last
server response was received. When the LB sends a request to the server, the
counter is incremented by one. As soon as the LB receives any packet from the server,
the counter is reset to zero. `maxRequestNumberWithoutResponse` is a configurable
value. If maxRequestNumberWithoutResponse is exceeded but the
server response does not take too long (it stays within the maxResponseTime limit),
the node is still considered active. The `maxResponseTime` parameter is also configurable.
maxResponseTime identifies a server which fails to respond
within a specific time frame. The timestamp of the last packet sent by the server is
stored, and if the server response takes too long (the maxResponseTime value is
exceeded) but the number of requests without a response is small, the LB still considers
the server active. When both maxResponseTime and maxRequestNumberWithoutResponse
are exceeded, the LB considers the server dead and stops sending messages to it.
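The dual-threshold rule above can be sketched in a few lines (an illustrative Python sketch under the stated semantics, not the balancer's actual code):

[source,python]
----
import time

class NodeHealth:
    """A node is declared dead only when BOTH the pending-request
    count and the silence time exceed their configured limits."""

    def __init__(self, max_requests_without_response, max_response_time):
        self.max_requests = max_requests_without_response
        self.max_time = max_response_time        # seconds of silence allowed
        self.pending = 0                         # requests since last response
        self.last_packet = time.monotonic()      # last packet seen from node

    def on_request_sent(self):
        self.pending += 1

    def on_packet_received(self):
        # Any packet from the node resets the counter and the timestamp.
        self.pending = 0
        self.last_packet = time.monotonic()

    def is_dead(self, now=None):
        now = time.monotonic() if now is None else now
        return (self.pending > self.max_requests and
                now - self.last_packet > self.max_time)
----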
=== {this-platform} Load Balancer supports SIP Over Websockets
A WebSocket client (e.g. TelScale Olympus) can connect to the Load Balancer
on the port set in lb-configuration.properties via externalWsPort and,
if needed, internal.wsPort. To work with secure WebSockets it is necessary to set
external.wssPort and internal.wssPort. When a WebSocket client connects to the
{this-platform} Load Balancer, the LB opens a connection to a node from the
list of nodes, which is built from the keepalives received from the servers. On each further
client connection, the LB opens a new connection to a node
from the list of connected nodes according to the chosen algorithm. If a
connection to that node has already been established, the LB reuses that
connection. The same goes for other connection-oriented transport protocols.
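The connection-reuse behavior described above amounts to keeping one upstream connection per node and handing it to subsequent clients (a minimal sketch; the `open_connection` callable is a hypothetical stand-in for the real transport layer):

[source,python]
----
class ConnectionPool:
    """One upstream connection per selected node, reused across clients."""

    def __init__(self, open_connection):
        self.open_connection = open_connection  # callable: node -> connection
        self.connections = {}                   # node -> live connection

    def get(self, node):
        # Open a connection only on first use of this node; reuse afterwards.
        if node not in self.connections:
            self.connections[node] = self.open_connection(node)
        return self.connections[node]
----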
=== {this-platform} Load Balancer supports IPv6 protocol
To enable IPv6 you should set at least the following in the lb-configuration.xml file:
in the common section, the host (for example `::1`);
in the external section, the port (for example `5070`).
Other tags can be seen in the example lb-configuration.xml below.
=== Ramp-up of restarting Nodes
When a node is added to the list of available nodes, traffic should be injected
into the new node slowly, until it reaches its best performance. A freshly started node
needs to initialize internal data structures and fill various caches before reaching its
best performance; if high traffic is injected into a restarted node immediately, we may
face high response times in the beginning, potentially leading to failed calls.
So the Load Balancer allows ramping up a freshly added node. It can be enabled via the
following properties:

- trafficRampupCyclePeriod (ms)
- trafficPercentageIncrease (%)

trafficPercentageIncrease = 10 means the Load Balancer increases traffic to the node by 10% after
each trafficRampupCyclePeriod = 1000 and keeps doing so until it reaches 100%. So after
10 seconds (10000 ms) the Load Balancer will send the newly added Node the same traffic as
the other nodes.
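The ramp-up schedule implied by the two properties can be expressed as a small function (an illustrative sketch of the arithmetic, assuming the fraction grows in whole cycles):

[source,python]
----
def rampup_fraction(elapsed_ms, cycle_period_ms=1000, percentage_increase=10):
    """Percentage of normal traffic a freshly added node receives
    after `elapsed_ms`, capped at 100%."""
    cycles = elapsed_ms // cycle_period_ms   # completed ramp-up cycles
    return min(100, cycles * percentage_increase)
----

With the defaults from the text, the node reaches full traffic after 10 cycles, i.e. 10000 ms.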
== HTTP Load Balancing
=== Basics
In addition to the SIP load balancing, there are several options for coordinated or cooperative load balancing with other protocols such as HTTP.
Typically, a JBoss Application Server will use apache HTTP server with mod_jk, mod_proxy, mod_cluster or similar extension installed as an HTTP load balancer.
This apache-based load balancer will parse incoming HTTP requests and will look for the session ID of those requests in order to ensure all requests from the
same session arrive at the same application server.
By default, this is done by examining the `jsessionid` HTTP cookie or GET parameter and looking for the `jvmRoute` assigned to the session.
The typical `jsessionid` value is of the form `<sessionId>.<jvmRoute>`. The very first request of each new HTTP session does not have a session ID assigned;
apache routes that request to a random application server node.
When the node responds it assigns a session ID and `jvmRoute` to the response of the request in a HTTP cookie.
This response goes back to the client through apache, which keeps track of which node owns each `jvmRoute` . Once the very first request is served this way,
the subsequent requests from this session will carry the assigned cookie, and the apache load balancer will always route the requests to the node, which advertised
itself as the `jvmRoute` owner.
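The sticky routing described above boils down to extracting the `jvmRoute` suffix from the cookie and looking up its owner (a minimal sketch, assuming the `<sessionId>.<jvmRoute>` form; the `route_owners` map and `pick_random_node` callable are hypothetical):

[source,python]
----
def extract_jvm_route(jsessionid):
    # jsessionid has the form "<sessionId>.<jvmRoute>";
    # the part after the last dot identifies the owning node.
    _, _, jvm_route = jsessionid.rpartition(".")
    return jvm_route or None

def route(jsessionid, route_owners, pick_random_node):
    # First request of a session carries no cookie yet: pick any node.
    if not jsessionid:
        return pick_random_node()
    return route_owners.get(extract_jvm_route(jsessionid), pick_random_node())
----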
Instead of using apache, an integrated HTTP Load Balancer is also available.
The {this-platform} Load Balancer has a HTTP port where you can direct all incoming HTTP requests.
The integrated HTTP load balancer behaves exactly like apache by default, but this behavior is extensible and can be overridden completely with the pluggable balancer algorithms.
The integrated HTTP load balancer is much easier to configure and generally requires no effort, because it reuses most SIP settings and assumes reasonable default values.
Unlike the native apache, the integrated HTTP Load Balancer is written completely in Java, thus a performance penalty should be expected when using it.
However, the integrated HTTP Balancer has an advantage when related SIP and HTTP requests must stick to the same node.
The HTTP load balancer can also choose the next node by the 'instanceId' obtained from the Restcomm connector info. First the Load Balancer
checks whether the HTTP request contains a 'CallSID' parameter. If it does, the Load Balancer extracts the 'instanceId' from it and checks whether it corresponds to the 'instanceId'
of one of the nodes connected to the Load Balancer. If a match is found, the Load Balancer sends the HTTP request to that node.
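The lookup above can be sketched as follows (a hypothetical illustration: how the instanceId is actually encoded inside the CallSID is an assumption here, as are the parameter and map names):

[source,python]
----
def route_by_instance_id(params, nodes_by_instance_id, fallback):
    """Route to the node owning the instanceId inside CallSID,
    falling back to the normal algorithm otherwise."""
    call_sid = params.get("CallSID")
    if call_sid:
        instance_id = call_sid.split("-")[0]   # assumed encoding of the Sid
        node = nodes_by_instance_id.get(instance_id)
        if node:
            return node
    # No CallSID, or no connected node matches: use the configured algorithm.
    return fallback()
----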
=== URL rewriting
The Load Balancer can rewrite incoming HTTP requests. To enable this feature
you should change the http section in the Load Balancer's config file, adding
a rewrite rule (in the sample configuration, ports `2080`/`2081` with a rewrite
from "someCompany" to "restcomm").
This rule will change all requests containing the "someCompany" string to "restcomm", for example:
/someCompany/2012-04-24/Accounts/0/Calls/ID1f
becomes
/restcomm/2012-04-24/Accounts/0/Calls/ID1f
More info about rules can be found link:http://tuckey.org/urlrewrite/manual/3.0/guide.html[here].
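The effect of such a rule is a simple pattern substitution on the request path, as this sketch illustrates (the rule engine itself is the urlrewrite library linked above; this only mirrors its behavior for the example rule):

[source,python]
----
import re

def rewrite(path, from_pattern="someCompany", to="restcomm"):
    # Replace every occurrence of the configured pattern in the path.
    return re.sub(from_pattern, to, path)
----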
== SMPP multiplexing basics
=== Modes of SMPP
The SMPP Load Balancer has two working modes:

* Mux mode
* Regular mode

You choose the mode via the muxMode property. If it is true, the Load Balancer works in Mux mode;
if it is false, in regular mode.
==== Mux mode
SMPP providers (e.g. NEXMO) offer a limited number of connections to their services,
but sometimes more than one client connection is needed. In this case the
Load Balancer can be used as an SMPP multiplexer, multiplexing connections from several clients
(Restcomm) to one SMPP provider over a single connection.
When the very first client connects to the SMPP Load Balancer, the Load Balancer creates a connection
to the server, and subsequent clients reuse it. Once the connection to the server is established it can
be used in both directions (transceiver mode): both the ESME and the SMSC can send messages
through the Load Balancer.
If there is more than one server, the Load Balancer uses a round-robin algorithm for
sending requests from clients to the servers. The way the
Load Balancer sends requests from the server to the clients can be specified with the
isUseRrSendSmppRequestToClient property. If true, the Load Balancer uses a
round-robin algorithm; if false, the LB sends each request to all clients.
.SMPP load balancer diagram mux mode
image::images/SMPP-lb-diagram.png[]
The main goal is to reuse the connections to the servers (SMSC) and simply transfer packets
between the client (ESME) and the server (SMSC). The Load Balancer also inspects received packets
for a correct command ID and, instead of forwarding incorrect packets, returns them to the sender.
When the connection to the server drops, the SMPP Load Balancer can reconnect (rebind) to the
server. During this reconnect it returns all received packets until
the new connection is established. If no connection with
the server can be established, all connections are closed.
==== Regular mode
The regular mode is used for balancing load from SMPP clients (ESME) to SMPP servers
(SMSC), by default using a round-robin algorithm. When a new client connects to the SMPP Load Balancer,
the application creates a new connection to an SMPP server (SMSC). Once the connection is established it can
be used in both directions: both the ESME and the SMSC can send messages.
.SMPP load balancer diagram regular mode
image::images/SMPP-lb-regular-diagram.png[]
The main goal of the application is to reduce the load on the servers (SMSC) and simply transfer packets
between the client (ESME) and the server (SMSC). It also inspects received packets for
a correct command ID and, instead of forwarding incorrect packets, returns them to the sender.
When the connection to a server drops, the SMPP Load Balancer can reconnect (rebind) to the next working server.
During this reconnect it returns all received packets until the new connection is established.
If no connections with the servers can be established, the client connection is closed, and vice versa.
[[_sslb_smpp_load_balancer_implementation]]
=== Implementation of the SMPP Load Balancer
The SMPP Load Balancer implements three timers for connection handling: an enquire link timer, a session initialization timer, and a response timer.
The timers behave as follows:

* The session initialization timer disconnects a client (ESME) if it does not send a bind request within a defined time;
* The response timer sends a response with a system error if the sender does not receive a response within a defined time;
* The enquire link timer checks the connections with the client (ESME) and the server (SMSC) at a fixed rate.
The server part of the SMPP Load Balancer has the following states:

* OPEN - only bind requests from the client (ESME) can be received;
* BINDING - no messages can be received; in this state the client's response is awaited;
* BOUND - all PDU packets that the client (ESME) may send according to the SMPP protocol can be received, except bind requests;
* REBINDING - all PDU packets from the client (ESME) can be received, but they are returned, because the client part is at this time trying to reconnect to the server;
* UNBINDING - only an unbind response from the client (ESME) can be received;
* CLOSED - no messages can be received; this is the last state of the life cycle and indicates that the connection is closed.
The client part of the SMPP Load Balancer has the following states:

* INITIAL - no messages can be received; this is the first state of the life cycle, in which the client part tries to connect to the server (SMSC); if the connection succeeds, the state changes to OPEN;
* OPEN - no messages can be received; in this state the client part sends a bind request to the server (SMSC) and changes its state to BINDING;
* BINDING - only a bind response from the server can be received; if the response has no errors, the client part changes its state to BOUND;
* BOUND - all packets that the server (SMSC) may send according to the SMPP protocol can be received, except an unbind response;
* REBINDING - if the connection to the server (SMSC) drops, the client part changes its state to REBINDING until it reconnects;
if the reconnect fails, the connection is closed;
* UNBINDING - only an unbind response from the server can be received, after which the state changes to CLOSED;
* CLOSED - no messages can be received; this is the last state of the life cycle and indicates that the connection is closed.
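The server-side (ESME-facing) state checks above can be sketched as a simple acceptance table (an illustrative sketch under the rules just listed; PDU names are represented as plain strings, not real SMPP command IDs):

[source,python]
----
# Which PDUs the LB's server part accepts from a client in each state.
ALLOWED = {
    "OPEN":      lambda pdu: pdu.startswith("bind"),   # only bind requests
    "BINDING":   lambda pdu: False,                    # waiting, nothing accepted
    "BOUND":     lambda pdu: not pdu.startswith("bind"),
    "REBINDING": lambda pdu: True,     # accepted, but returned to the client
    "UNBINDING": lambda pdu: pdu == "unbind_resp",
    "CLOSED":    lambda pdu: False,
}

def accepts(state, pdu):
    return ALLOWED[state](pdu)
----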
== Pluggable balancer algorithms
The {this-platform} Load Balancer exposes an interface to allow users to customize the routing decision making for special purposes.
By default there are three built-in algorithms.
Only one algorithm is active at any time and it is specified with the `algorithmClass` property in the configuration file.
It is up to the algorithm how and whether to support distributed architecture or how to store the information needed for session affinity.
The algorithms will be called for every SIP and HTTP request and other significant events to make more informed decisions.
NOTE: Users must be aware that by default requests explicitly addressed to a live server node passing through the load balancer will be forwarded directly to the server node.
This allows for pre-specified routing use-cases, where the target node is known by the SIP client through other means.
If the target node is dead, then the node selection algorithm is used to route the request to an available node.
The following is a list of the built-in algorithms for SIP:
org.mobicents.tools.sip.balancer.CallIDAffinityBalancerAlgorithm::
This algorithm is not distributable.
It selects nodes randomly to serve a given Call-ID extracted from the requests and responses.
It keeps a map with `Call-ID -> nodeId` associations; this map is not shared with other load balancers, which will cause them to make different decisions.
For HTTP it behaves like apache.
org.mobicents.tools.sip.balancer.UserBasedAlgorithm::
This algorithm ties all calls for a given DID/Number to the same node.
All participants of a given conference will be on the same node.
It selects nodes randomly to serve a given To header extracted from the requests and responses.
It keeps a map with `To -> nodeId` associations; this map is not shared with other load balancers, which will cause them to make different decisions.
For HTTP it behaves like apache.
org.mobicents.tools.sip.balancer.ActiveStandbyAlgorithm::
This algorithm sends all requests to the active node. If the active node
is disconnected in some way, the Load Balancer sends requests to the
passive node. This algorithm works for both the SIP and HTTP protocols.
org.mobicents.tools.sip.balancer.HeaderConsistentHashBalancerAlgorithm::
This algorithm is distributable and can be used in distributed load balancer configurations.
It extracts the hash value of specific headers from SIP and HTTP messages to decide which application server node will handle the request.
Information about the options in this algorithm is available in the balancer configuration file comments.
org.mobicents.tools.sip.balancer.PersistentConsistentHashBalancerAlgorithm::
This algorithm is distributable and similar to the previous algorithm,
but it attempts to keep session affinity even when cluster nodes are removed or
added. It is implemented by storing "header value" <-> "SIP Node" associations in a cache,
which can be shared between LBs.
org.mobicents.tools.sip.balancer.ClusterSubdomainAffinityAlgorithm::
This algorithm is not distributable, but supports grouping server nodes to act as a subcluster.
Any call of a node that belongs to a cluster group will be preferentially failed over to a node from the same group.
To configure a group, add the `subclusterMap` property to the load balancer properties, listing the IP addresses of the nodes.
The groups are enclosed in parentheses and the IP addresses are separated by commas as follows:
+
----
subclusterMap=( 192.168.1.1, 192.168.1.2 ) ( 10.10.10.10,20.20.20.20, 30.30.30.30)
----
+
The nodes specified in a group do not have to be alive, and nodes that are not specified are still allowed to join the cluster. Otherwise the algorithm behaves exactly like the default Call-ID affinity algorithm.
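A parser for the `subclusterMap` format shown above is straightforward (an illustrative sketch, not the balancer's actual parsing code):

[source,python]
----
import re

def parse_subcluster_map(value):
    """Turn '( ip, ip ) ( ip, ip )' into a list of IP groups."""
    return [
        [ip.strip() for ip in group.split(",") if ip.strip()]
        for group in re.findall(r"\(([^)]*)\)", value)
    ]
----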
The following is a list of the built-in algorithms for SMPP:
- to the SMPP provider side:

org.mobicents.tools.smpp.multiplexer.SmppToProviderRoundRobinAlgorithm::
This is the default algorithm. The Load Balancer uses a round-robin algorithm
for sending SMPP requests to the connected providers.
org.mobicents.tools.smpp.multiplexer.SmppToProviderActiveStandbyAlgorithm::
This algorithm sends all requests to the active SMPP provider.
If the active SMPP provider is disconnected in some way,
the Load Balancer sends requests to the passive SMPP provider.

- to the Node side (only for Mux mode):

org.mobicents.tools.smpp.multiplexer.SmppToNodeRoundRobinAlgorithm::
This is the default algorithm. The Load Balancer uses a round-robin algorithm
for sending SMPP requests to the connected Nodes.
org.mobicents.tools.smpp.multiplexer.SmppToNodeSubmitToAllAlgorithm::
The Load Balancer sends each SMPP request from the provider to all connected Nodes.
== Distributed load balancing
When the capacity of a single load balancer is exceeded, multiple load balancers can be used.
With the help of an IP load balancer the traffic can be distributed between all {this-platform} load balancers based on some IP rules or round-robin.
With consistent hash and `jvmRoute` -based balancer algorithms it doesn't matter which {this-platform} load balancer will process the request, because they would all make the same decisions based on information in the requests (headers, parameters or cookies) and the list of available nodes.
With consistent hash algorithms there is no state to be preserved in the {this-platform} balancers.
.Example deployment: IP load balancers serving both directions for incoming/outgoing requests in a cluster
image::images/WSS_Failover_Goal.png[]
[[_sslb_binary_sip_load_balancer_installing_configuring_and_running]]
== {this-platform} Load Balancer: Installing, Configuring and Running
[[_sslb_binary_sip_load_balancer_preinstall_requirements_and_prerequisites]]
=== Pre-Install Requirements and Prerequisites
.Software Prerequisites
A JAIN SIP HA-enabled application server such as {this-platform} JAIN SLEE or {this-platform} SIP Servlets is required.::
Running the {this-platform} Load Balancer requires at least two instances of the application server as cluster nodes.
Therefore, before configuring the {this-platform} Load Balancer, make sure the SIP application server is installed first.
The {this-platform} Load Balancer will work with a SIP Servlets-enabled JBoss Application Server _or_ a JAIN SLEE application server with a SIP RA.
[[_sslb_binary_sip_load_balancer_downloading]]
=== Downloading
The load balancer is located in the [path]_sip-balancer_ top-level directory of the {this-platform} distribution.
You will find the following files in the directory:
{this-platform} load balancer executable JAR file::
This is the binary file with all dependencies, including the SMPP load balancer.
{this-platform} load balancer Configuration Properties file::
This is the properties file with various settings.
[[_sslb_binary_sip_load_balancer_installing]]
=== Installing
The {this-platform} load balancer executable JAR file can be extracted anywhere in the file system.
It is recommended that the file is placed in the directory containing other JAR executables, so it can be easily located in the future.
[[_sslb_binary_sip_load_balancer_configuring]]
=== Configuring
Configuring the {this-platform} Load Balancer and the two SIP Servlets-enabled Server nodes is described in <<_sslb_configuring_the_sip_load_balancer_and_servlet_server_nodes>> .
[[_sslb_configuring_the_sip_load_balancer_and_servlet_server_nodes]]
.Procedure: Configuring the {this-platform} Load Balancer and SIP Server Nodes
. Configure lb-configuration.xml Configuration Properties File
+
Configure the {this-platform} Load Balancer's Configuration Properties file by substituting valid values for your personal setup. <<_sslb_complete_sample_lb.properties_file>> shows a sample [path]_lb-configuration.xml_ file, with key element descriptions provided after the example.
The lines beginning with the pound sign are comments.
+
[[_sslb_complete_sample_lb.properties_file]]
.Complete Sample lb-configuration.xml File
====
[source]
----
host=172.21.0.105
# ... (further balancer settings omitted)
javax.sip.STACK_NAME=SipBalancerForwarder
javax.sip.AUTOMATIC_DIALOG_SUPPORT=off
gov.nist.javax.sip.TRACE_LEVEL=LOG4J
gov.nist.javax.sip.LOG_MESSAGE_CONTENT=false
gov.nist.javax.sip.DEBUG_LOG=logs/sipbalancerforwarderdebug.txt
gov.nist.javax.sip.SERVER_LOG=logs/sipbalancerforwarder.xml
gov.nist.javax.sip.THREAD_POOL_SIZE=64
gov.nist.javax.sip.REENTRANT_LISTENER=true
gov.nist.javax.sip.AGGRESSIVE_CLEANUP=true
gov.nist.javax.sip.RECEIVE_UDP_BUFFER_SIZE=65536
gov.nist.javax.sip.SEND_UDP_BUFFER_SIZE=65536
MAX_LISTENER_RESPONSE_TIME=120
gov.nist.javax.sip.MAX_MESSAGE_SIZE=10000
gov.nist.javax.sip.MAX_FORK_TIME_SECONDS=0
gov.nist.javax.sip.AUTOMATIC_DIALOG_ERROR_HANDLING=false
org.mobicents.ext.javax.sip.congestion.SIP_SCANNERS=sipvicious,sipcli,friendly-scanner
gov.nist.javax.sip.TLS_CLIENT_AUTH_TYPE=Disabled
javax.net.debug=ssl
protocolClassName=org.mobicents.tools.heartbeat.impl.ServerController
heartbeatPorts=2610
----
====
+
host::
Local IP address, or interface, on which the {this-platform} Load Balancer will listen for incoming requests.
externalUdpPort::
Port on which the {this-platform} Load Balancer listens for incoming requests from SIP User Agents.
internalUdpPort::
Port on which the {this-platform} Load Balancer forwards incoming requests to available, and healthy, SIP Server cluster nodes.
ProtocolClassName::
Protocol class which will be used for communicating with the nodes.
heartbeatPorts::
Port on which the {this-platform} Load Balancer listens for heartbeats from the nodes.
For the RMI protocol there must be two ports separated by a comma.
shutdownTimeout::
Time after which the LB will shut down completely after getting the request
http://ip:statisticPort/lbstop
securityRequired::
Whether the login and password should be checked when stopping the Load Balancer.
login::
password::
Login and password for authorization when stopping the Load Balancer.
httpPort::
Port on which the {this-platform} Load Balancer will accept HTTP requests to be distributed across the nodes.
httpsPort::
Port on which the {this-platform} Load Balancer will accept HTTPS requests to be distributed across the nodes.
requestCheckPattern::
If the LB gets an HTTP error response (>=400 and <600) from the Node for a request
matching this pattern, the LB will remove this Node.
externalIpLoadBalancerAddress::
Address of the IP load balancer (if any) used for incoming requests to be distributed in the direction of the application server nodes.
This address may be used by the {this-platform} Load Balancer to be put in SIP headers where the external address of the {this-platform} Load Balancer is needed.
externalIpLoadBalancerUdpPort::
The port of the external IP load balancer.
Any messages arriving at this port should be distributed across the external SIP ports of a set of {this-platform} Load Balancers.
internalIpLoadBalancerAddress::
Address of the IP load balancer (if any) used for outgoing requests (requests initiated from the servers) to be distributed in the direction of the clients.
This address may be used by the {this-platform} Load Balancer to be put in SIP headers where the internal address of the {this-platform} Load Balancer is needed.
internalIpLoadBalancerUdpPort::
The port of the internal IP load balancer.
Any messages arriving at this port should be distributed across the internal SIP ports of a set of {this-platform} Load Balancers.
isSend5xxResponse::
Enables sending back a 5xx response in case of an exception while sending to the destination.
False by default.
isSend5xxResponseReasonHeader::
Reason of the error. If not defined, the Load Balancer won't add a Reason header to the
response message.
isSend5xxResponseSatusCode::
Status code of the error response; configurable.
responsesStatusCodeNodeRemoval::
If the Load Balancer receives responses with this status code from a node more than maxNumberResponsesWithError times, it removes that node from the node map until the node is restarted.
maxNumberResponsesWithError::
If set, the Load Balancer removes a node after receiving this number of error responses from it.
maxErrorTime::
Works together with maxNumberResponsesWithError.
If the duration in ms between error responses exceeds this parameter, the Load Balancer resets the error counter to 0.
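For illustration, the node-removal settings might be combined as follows. The property names are the ones described above; the values and the simple key=value layout are illustrative — adapt them to your configuration file format:

----
# Remove a node after 3 responses with status 503,
# unless more than 2000 ms pass between errors (the counter then resets)
responsesStatusCodeNodeRemoval=503
maxNumberResponsesWithError=3
maxErrorTime=2000
----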
matchingHostnameForRoute::
If a request arrives with a matching hostname in its Route header, that Route header is removed.
For example, restcomm.com.
isFilterSubdomain::
If false, the Load Balancer removes Route headers whose host equals matchingHostnameForRoute (restcomm.com).
If true, it also removes Route headers whose host ends with .matchingHostnameForRoute (a.restcomm.com, b.restcomm.com).
internalTransport::
If set, all external transports are switched to this internal transport (e.g. WSS, WS, TLS, UDP -> TCP).
trafficRampupCyclePeriod::
Used if ramp-up is needed.
The period after which the Load Balancer updates the node array based on the weight of each node.
trafficPercentageIncrease::
Used if ramp-up is needed.
The percentage by which traffic is increased after each trafficRampupCyclePeriod.
isSendTrying::
If true, the Load Balancer sends its own Trying response to the sender for requests (INVITE, SUBSCRIBE, NOTIFY, MESSAGE, REFER, PUBLISH, UPDATE).
maxRequestNumberWithoutResponse::
Used in the additional health check.
Sets the maximum number of requests the Load Balancer may send to a node without receiving any response.
Works only together with maxResponseTime.
maxResponseTime::
Used in the additional health check.
Sets the maximum time during which the Load Balancer may send requests to a node without receiving any response.
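A sketch of the additional health check, with illustrative values (the property names are the ones above; the layout is a simple key=value rendering):

----
# Consider a node unhealthy after 10 unanswered requests within 5000 ms
maxRequestNumberWithoutResponse=10
maxResponseTime=5000
----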
extraServerNodes::
Comma-separated list of hosts that are server nodes.
You can list alternative names of the application servers here and they will be recognized.
Names are important because they may be used for direction analysis.
Requests coming from these servers will go in the direction of the clients and will not be routed back to the cluster.
routingRulesIpv4::
routingRulesIpv6::
Rules for detecting networks for which the Load Balancer should not patch the Record-Route and Via headers.
For example, with a rule whose IP pattern is `10.0.0.*` and whose patch flag is set to false, the Load Balancer will not patch headers for initial requests from any IP matching the regex `10.0.0.*`, i.e. from 10.0.0.0-10.0.0.255.
algorithmClass::
The fully-qualified Java class name of the balancing algorithm to be used.
There are three algorithms to choose from and you can write your own to implement more complex routing behaviour.
Refer to the sample configuration file for details about the available options for each algorithm.
Each algorithm can have algorithm-specific properties for fine-grained configuration.
nodeTimeout::
In milliseconds.
Default value is 5100.
If a server node does not check in within this time, it is considered dead.
heartbeatInterval::
In milliseconds.
Default value is 150.
The heartbeat interval must be much smaller than the interval specified in the JAIN SIP property on the server machines - `org.Restcomm.ha.javax.sip.HEARTBEAT_INTERVAL`.
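For example, the two timers might relate as follows. The values are illustrative (the server-side property name is the one mentioned above), and the key=value layout should be adapted to the respective configuration files:

----
# Load Balancer side: check heartbeats often, expire a silent node after ~5 s
nodeTimeout=5100
heartbeatInterval=150

# SIP Server side (JAIN SIP HA property): send heartbeats well within nodeTimeout
org.Restcomm.ha.javax.sip.HEARTBEAT_INTERVAL=1000
----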
smppHost::
Local IP address on which the SMPP load balancer will listen for incoming requests from clients.
smppInternalHost::
Used if two NICs are needed for SMPP; the interface facing the SMPP provider side.
smppExternalHost::
Used if two NICs are needed for SMPP; the interface facing the Restcomm/node/client side.
smppPort::
Port on which the SMPP load balancer will listen for incoming requests from clients.
remoteServers::
The IP address:port of the SMPP server.
maxConnectionSize::
Maximum number of connections/sessions this server expects to handle.
This number corresponds to the number of worker threads reading data from sockets and processing them.
It is recommended that no more than 10 (ten) SMPP messages are outstanding at any time (10 is the default).
nonBlockingSocketsEnabled::
Is NIO enabled (default true).
defaultSessionCountersEnabled::
Whether the default session counters are enabled (used for testing).
timeoutResponse::
In milliseconds.
Max time allowable between request and response, after which operation assumed to have failed.
timeoutConnection::
In milliseconds.
Session initialization timer: if a client connects but does not send a bind request, the Load Balancer disconnects it.
timeoutEnquire::
In milliseconds.
Enquire link timer: after each such period, the Load Balancer checks its connections to the client and the server by sending enquire_link.
reconnectPeriod::
In milliseconds.
Time period after which balancer reconnects to server if connection to server was lost.
timeoutConnectionCheckClientSide::
In milliseconds.
After sending an enquire link to the client to check the connection, the Load Balancer waits this long and closes the connection if no response is received.
timeoutConnectionCheckServerSide::
In milliseconds.
Connection check server-side timer: the time the Load Balancer waits for enquire_link_resp; if none is received, it tries to rebind to the server.
toNodeAlgorithmClass::
toProviderAlgorithmClass::
SMPP algorithms for the node side and for the SMPP provider side.
By default the Load Balancer uses a round-robin algorithm on both sides.
muxMode::
Boolean property.
If true, the Load Balancer works in mux mode; if false, in regular mode.
javax.net.ssl.keyStore::
Points to the keystore file we generated before.
javax.net.ssl.keyStorePassword::
Provides the password we used when we generated the keystore.
javax.net.ssl.trustStore::
Points to the truststore file we generated before.
javax.net.ssl.trustStorePassword::
Provides the password we used when we generated the truststore.
gov.nist.javax.sip.TLS_CLIENT_PROTOCOLS::
Sets the secure protocols for all balancers.
Available values: SSLv2Hello, SSLv3, TLSv1, TLSv1.1, TLSv1.2.
gov.nist.javax.sip.ENABLED_CIPHER_SUITES::
Sets the cipher suites for the HTTP and SIP balancers.
gov.nist.javax.sip.TLS_CLIENT_AUTH_TYPE::
If Enabled, client certificate authentication is requested and required: the connection terminates if no suitable client certificate is presented.
If Want, client certificate authentication is requested, but the connection is kept if no authentication is provided.
If Disabled or DisabledAll, no client authentication is used.
javax.net.debug::
If uncommented, SSL will provide extra debugging information.
terminateTLSTraffic::
If true, all secure traffic coming from the outside world (HTTPS, SIP TLS, WSS) is terminated at the Load Balancer.
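The `javax.net.ssl.*` and `gov.nist.javax.sip.*` keys above use standard Java system property names, so one way to supply them — a sketch, with placeholder paths and password — is as `-D` arguments on the Load Balancer's command line, alongside the configuration file:

----
java -Djavax.net.ssl.keyStore=/path/to/keystore \
     -Djavax.net.ssl.keyStorePassword=changeit \
     -Djavax.net.ssl.trustStore=/path/to/keystore \
     -Djavax.net.ssl.trustStorePassword=changeit \
     -jar sip-balancer-jar-with-dependencies.jar -mobicents-balancer-config=lb-configuration.xml
----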
statisticPort::
Port for statistics and management requests.
+
NOTE: The remaining keys and properties in the configuration properties file can be used to tune the JAIN SIP stack, but are not specifically required for load balancing.
To assist with tuning, a comprehensive list of implementing classes for the SIP Stack is available from the http://snad.ncsl.nist.gov/proj/iptel/jain-sip-1.2/javadoc/javax/sip/SipStack.html[Interface
SIP Stack page on nist.gov] . For a comprehensive list of properties associated with the SIP Stack implementation, refer to http://snad.ncsl.nist.gov/proj/iptel/jain-sip-1.2/javadoc/gov/nist/javax/sip/SipStackImpl.html[Class
SipStackImpl page on nist.gov] .
+
NOTE: If the {this-platform} Load Balancer is behind a firewall, you will need to open all the required ports.
The default ports are: TCP: 2000, 2001, 2080, 5060, 5065, 8000; UDP: 5060, 5065.
. Configure logging
+
The {this-platform} Load Balancer uses http://logging.apache.org/log4j[Log4J] as a logging mechanism.
You can configure it through the typical log4j xml configuration file and specify the path as follows `-DlogConfigFile=./log4j.xml` . Please refer to Log4J documentation for more information on how to configure the logging.
A shortcut exists if you want to switch between the INFO/DEBUG/WARN logging levels.
The JVM option `-DlogLevel=DEBUG` will switch all logging categories to the specified log level.
You can also dynamically change the log level of the {this-platform} Load Balancer via an HTTP request:
http://lbIpAddress:statisticPort/lbloglevel?logLevel=DEBUG
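For example, assuming the balancer runs locally with statisticPort set to 2006, the log level can be raised and lowered at runtime:

----
curl "http://127.0.0.1:2006/lbloglevel?logLevel=DEBUG"
curl "http://127.0.0.1:2006/lbloglevel?logLevel=INFO"
----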
==== Converged Load Balancing
===== Apache HTTP Load Balancer
The {this-platform} Load Balancer can work in concert with HTTP load balancers such as `mod_jk` . Whenever an HTTP session is bound to a particular node, an instruction is sent to the {this-platform} Load Balancer to direct the SIP calls from the same application session to the same node.
It is sufficient to configure `mod_jk` to work for HTTP in JBoss in order to enable cooperative load balancing. {this-platform} will read the configuration and will use it without any extra configuration.
You can read more about configuring `mod_jk` with JBoss in your JBoss Application Server documentation.
===== Integrated HTTP Load Balancer
To use the integrated HTTP Load Balancer, no extra configuration is needed.
If a unique `jvmRoute` is specified and enabled in each application server, it will behave exactly as the apache balancer.
If `jvmRoute` is not present, it will use the session ID as a hash value and attempt to create a sticky session.
The integrated balancer can be used together with the apache balancer at the same time.
In addition to the apache behavior, there is a consistent hash balancer algorithm that can be enabled for both HTTP and SIP messages.
For both HTTP and SIP messages, there is a configurable affinity key, which is evaluated and hashed against each unassigned request.
All requests with the same hash value will always be routed to the same application server node.
For example, the SIP affinity key could be the callee user name and the HTTP affinity key could be the "`appsession`" HTTP GET parameter of the request.
If the desired behaviour is to group these requests, we just need to make sure the affinity values (user name and GET parameter) are the same.
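As a rough sketch of the idea — not the Load Balancer's actual algorithm class or plugin API — hashing the same affinity value always yields the same node index, so SIP and HTTP requests sharing an affinity value land on the same node:

```java
import java.util.List;

public class AffinityHash {
    // Map an affinity key (e.g. callee user name, or an HTTP GET parameter)
    // to one of nodeCount nodes. Masking keeps the hash non-negative.
    static int selectNode(String affinityKey, int nodeCount) {
        return (affinityKey.hashCode() & 0x7fffffff) % nodeCount;
    }

    public static void main(String[] args) {
        List<String> nodes = List.of("node-a", "node-b", "node-c");
        // Same affinity value -> same node, whether it arrived via SIP or HTTP
        System.out.println(nodes.get(selectNode("alice", nodes.size())));
        System.out.println(nodes.get(selectNode("alice", nodes.size())));
    }
}
```

The real algorithms also have to cope with nodes joining and leaving; a plain modulo, as here, reshuffles most keys when the node count changes, which is why consistent hashing is used.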
[[_sslb_converged_http_sip_affinity]]
.Ensuring SIP and HTTP requests are grouped by a common affinity value.
image::images/converged-integrated-lb.png[]
[[_sslb_binary_sip_load_balancer_running]]
=== Running
.Procedure: Running the Load Balancer and SIP Server Nodes
. Start the Load Balancer
+
Start the load balancer, ensuring the Configuration Properties file ( [path]_lb-configuration.xml_ in this example) is specified.
In the Linux terminal, or using the Windows Command Prompt, the Load Balancers are started by issuing a command similar to this one:
+
----
java -DlogConfigFile=./lb-log4j.xml -jar sip-balancer-jar-with-dependencies.jar -mobicents-balancer-config=lb-configuration.xml
----
+
Executing the {this-platform} Load Balancer produces output similar to the following example:
+
----
home]$ java -DlogConfigFile=./lb-log4j.xml -jar sip-balancer-jar-with-dependencies.jar -Restcomm-balancer-config=lb-configuration.xml
2016-02-08 15:54:28,036 INFO main - nodeTimeout=8400
2016-02-08 15:54:28,038 INFO main - heartbeatInterval=150
2016-02-08 15:54:28,039 INFO main - Node registry starting...
2016-02-08 15:54:28,103 INFO main - Node expiration task created
2016-02-08 15:54:28,103 INFO main - Node registry started
2016-02-08 15:54:28,130 INFO main - value -1000 will be used for reliableConnectionKeepAliveTimeout stack property
2016-02-08 15:54:28,131 INFO main - Setting Stack Thread priority to 10
2016-02-08 15:54:28,134 INFO main - using Disabled tls auth policy
2016-02-08 15:54:28,159 WARN main - using default tls security policy
2016-02-08 15:54:28,162 WARN main - Using default keystore type jks
2016-02-08 15:54:28,162 WARN main - Using default truststore type jks
2016-02-08 15:54:28,173 INFO main - the sip stack timer gov.nist.javax.sip.stack.timers.DefaultSipTimer has been started
2016-02-08 15:54:28,325 INFO main - Sip Balancer started on external address 127.0.0.1, external port : 5060, internalPort : 5065
2016-02-08 15:54:28,365 INFO main - HTTP LB listening on port 2080
2016-02-08 15:54:28,377 INFO main - HTTPS LB listening on port 2081
2016-02-08 15:54:28,432 INFO main - SMPP Load Balancer started at 127.0.0.1 : 2776
2016-02-08 15:54:28,433 INFO main - SMPP Load Balancer uses port : 2876 for TLS clients.
----
+
The output shows the IP address on which the {this-platform} Load Balancer is listening, as well as the external and internal listener ports.
. Configure Server Nodes
+
The information about configuring your SIP Server, SIP Servlets or JAIN SLEE, is in the respective server User Guide.
. Start Load Balancer Client Nodes
+
Start all SIP load balancer client nodes.
[[_sslb_binary_sip_load_balancer_testing]]
=== Testing
To test load balancing, the same application must be deployed manually on each node, and two SIP Softphones must be installed.
.Procedure: Testing Load Balancing with Sip Servlets
. Deploy an Application
+
Ensure that for each node, the DAR file location is specified in the [path]_server.xml_ file.
+
Deploy the Location service manually on both nodes.
. Start the "Sender" SIP softphone
+
Start a SIP softphone client with the SIP address of `sip:sender@sip-servlets-com` , listening on port 5055.
The outbound proxy must be specified as the sip-balancer (127.0.0.1:5060).
. Start the "Receiver" SIP softphone
+
Start a SIP softphone client with the SIP address of `sip:receiver-failover@sip-servlets-com` , listening on port 5090.
. Initiate two calls from "Sender" SIP softphone
+
Initiate one call from `sip:sender@sip-servlets-com` to `sip:receiver-failover@sip-servlets-com` . Tear down the call once completed.
+
Initiate a second call using the same SIP address, and tear down the call once completed.
Notice that the call is handled by the second node.
.Procedure: Testing Load Balancing with JAIN SLEE and SIP RA
. Deploy SIP RA
. Configure the JAIN SIP HA properties for load balancing according to the JAIN SLEE User Guide
. Deploy a sample application
. Run the sample scenario for the application using the {this-platform} Load Balancer
[[_sslb_binary_sip_load_balancer_stopping]]
=== Stopping
Assuming that you started the JBoss Application Server as a foreground process
in the Linux terminal, the easiest way to stop it is by pressing the Ctrl+C key combination
in the same terminal in which you started it.
If you use
link:http://www.keepalived.org/[keepalived] to maintain active and standby
Load Balancers as described
link:http://documentation.telestax.com/core/tutorials/high-availability/Load-Balancer_failover-keepalived.html[here],
then you can shut down the active Load Balancer for upgrades and redirect new calls to the other LB by using the HTTP request
http://host:statisticPort/lbstop
The request will turn off the statistic port. The keepalived utility checks the
statistic port and switches traffic to the standby Load Balancer when the statistic
port is turned off. After two seconds the Load Balancer finally stops. For additional security
you can add a login and password as parameters in the request. To do so, set the
securityRequired property to true in the configuration file and provide login and password values,
then use a request such as:
http://127.0.0.1:2006/lbstop?login=daddy&password=123456
This should produce similar output to the following:
----
^C2016-02-08 16:16:59,788 INFO Thread-142 - Stopping the sip forwarder
2016-02-08 16:16:59,789 INFO Thread-142 - Removing the following Listening Point gov.nist.javax.sip.ListeningPointImpl@40185808
2016-02-08 16:16:59,791 INFO NioSelector-TLS-127.0.0.1/5061 - Selector is closed
2016-02-08 16:16:59,791 INFO Thread-142 - Removing the following Listening Point gov.nist.javax.sip.ListeningPointImpl@a440543
2016-02-08 16:16:59,796 INFO Thread-142 - Removing the following Listening Point gov.nist.javax.sip.ListeningPointImpl@a0d78e9
2016-02-08 16:16:59,796 INFO NioSelector-TCP-127.0.0.1/5060 - Selector is closed
2016-02-08 16:16:59,796 INFO Thread-142 - Removing the sip provider
2016-02-08 16:16:59,797 INFO Thread-142 - Removing the following Listening Point gov.nist.javax.sip.ListeningPointImpl@1ce21add
2016-02-08 16:16:59,798 INFO NioSelector-TLS-127.0.0.1/5066 - Selector is closed
2016-02-08 16:16:59,798 INFO Thread-142 - Removing the following Listening Point gov.nist.javax.sip.ListeningPointImpl@2a859dc9
2016-02-08 16:16:59,800 INFO Thread-142 - Removing the following Listening Point gov.nist.javax.sip.ListeningPointImpl@63a80928
2016-02-08 16:16:59,804 INFO NioSelector-TCP-127.0.0.1/5065 - Selector is closed
2016-02-08 16:16:59,808 INFO Thread-142 - Removing the sip provider
2016-02-08 16:16:59,808 INFO Thread-142 - the sip stack timer gov.nist.javax.sip.stack.timers.DefaultSipTimer has been stopped
2016-02-08 16:17:00,809 INFO Thread-142 - the sip stack timer gov.nist.javax.sip.stack.timers.DefaultSipTimer has been stopped
2016-02-08 16:17:01,839 INFO Thread-142 - Sip forwarder SIP stack stopped
2016-02-08 16:17:01,839 INFO Thread-142 - Stopping the http forwarder
2016-02-08 16:17:01,953 INFO Thread-142 - Stopping the SMPP balancer
2016-02-08 16:17:01,957 INFO Thread-142 - SMPP Load Balancer stopped at 127.0.0.1
2016-02-08 16:17:01,958 INFO Thread-142 - Unregistering the node registry
2016-02-08 16:17:01,958 INFO Thread-142 - Unregistering the node adapter
2016-02-08 16:17:01,962 INFO Thread-142 - Stopping the node registry
2016-02-08 16:17:01,962 INFO Thread-142 - Stopping node registry...
2016-02-08 16:17:01,963 INFO Thread-142 - Node Expiration Task cancelled true
2016-02-08 16:17:01,963 INFO Thread-142 - Node registry stopped.
----
[[_sslb_binary_sip_load_balancer_uninstalling]]
=== Uninstalling
To uninstall the SIP and SMPP load balancers, delete the JAR file you installed.
== SIP Message Flow
The {this-platform} Load Balancer appends itself to the `Via` header of each request, so that returned responses are sent to the SIP Balancer before they are sent to the originating endpoint.
The Load Balancer also adds itself to the path of subsequent requests by adding Record-Route headers.
It can subsequently handle mid-call failover by forwarding requests to a different node in the cluster if the node that originally handled the request fails or becomes unavailable.
The {this-platform} Load Balancer immediately fails over if it receives an unhealthy status, or irregular heartbeats from a node.
Additionally, the {this-platform} Load Balancer will add two custom headers containing the initial remote address and port of the SIP client to every REGISTER request or request with content.

* X-Sip-Balancer-InitialRemoteAddr
* X-Sip-Balancer-InitialRemotePort

Applications can use these two headers to obtain the correct location of the SIP client that sent the REGISTER request.
In advanced configurations, it is possible to run more than one {this-platform} Load Balancer.
Simply edit the balancers connection string in your SIP Server - the list is separated with semicolons.
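For example, using the JAIN SIP HA property that appears elsewhere in this guide, two balancers could be listed as follows (addresses illustrative):

----
org.Restcomm.ha.javax.sip.BALANCERS=192.168.1.1:5065;192.168.1.2:5065
----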
<<_figure_mss_basic_ip_and_port_cluster_configuration>> describes a basic IP and Port Cluster Configuration.
In the diagram, the {this-platform} Load Balancer is the server with the IP address of `192.168.1.1` .
[[_figure_mss_basic_ip_and_port_cluster_configuration]]
.Basic IP and Port Cluster Configuration
image::images/mss-MSSSIPLoadBalancer-dia-ClusterIPsAndPorts.jpg[]
== Using the Load Balancer with Third-Party SIP Servers
The load balancer only forwards requests to servers that send heartbeat signals.
A third party server can send metadata using a SIP OPTIONS or SIP INFO message towards the internal port of the {this-platform} Load Balancer.
For security reasons, heartbeat messages arriving at the external entry point are ignored, and using a single entry point as both internal and external is not allowed.
The third-party SIP server must advertise its metadata in the SIP message contents.
For example, the following request advertises a SIP server listening on 127.0.0.1:5070, on both TCP and UDP.
[source]
----
OPTIONS sip:[email protected]:5065;lr SIP/2.0
Call-ID: [email protected]
CSeq: 1 OPTIONS
From: ;tag=4481411
To:
Via: SIP/2.0/UDP 127.0.0.1:5070;branch=z9hG4bK-373335-ec2c7452cfd0130bd409ba4f8ea5f54e
Max-Forwards: 70
Contact:
Route:
Content-Type: text/plain;charset=UTF-8
Restcomm-Heartbeat: 1
Content-Length: 54
tcpPort=5070
udpPort=5070
hostname=sipHeartbeat
ip=127.0.0.1
----
The important parts to fill in this request are the `Restcomm-Heartbeat` header, the Route header with `;node_host=127.0.0.1;node_port=5070`, and the message contents.
The message contents are interpreted as properties of the `SIPNode` object representing the node in the load balancer and can be further interpreted by load balancing algorithms for load balancing purposes.
The value of the `Restcomm-Heartbeat` header is arbitrary and reserved for future use; the presence of the header is sufficient to instruct the load balancer how to process the request.
All requests initiated by the SIP Server must have the following hint in their Route header: `;node_host=127.0.0.1;node_port=5070`. This hint instructs the {this-platform} Load Balancer that the dialog initiated by the application server must stay on the node advertised in the hint.
This function is crucial when the direction of the requests within the dialog is reversed.
Since this SIP request represents a heartbeat signal, it must be sent regularly, at least once every 5 seconds (by default). Sending this request is the responsibility of the third-party server.
The load balancer will respond to every heartbeat request with 200 OK immediately.
The third-party server must expect the OK response.
If no response is received within a threshold time, the third-party SIP server must assume the {this-platform} Load Balancer is not available and use another (backup) load balancer.
If you are designing a pluggable algorithm using the SIP metadata, you can access the properties passed in the SIP message contents using `SIPNode.getProperties()`.
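A sketch of reading that metadata from inside a custom algorithm. `SIPNode.getProperties()` is the accessor named above; the stub class below merely stands in for the real `SIPNode` so the example is self-contained:

```java
import java.util.HashMap;
import java.util.Map;

public class NodeMetadata {
    // Stand-in for the load balancer's SIPNode (illustrative, not the real class)
    static class SipNodeStub {
        private final Map<String, Object> props = new HashMap<>();
        Map<String, Object> getProperties() { return props; }
    }

    public static void main(String[] args) {
        SipNodeStub node = new SipNodeStub();
        // These keys mirror the heartbeat message body shown above
        node.getProperties().put("udpPort", "5070");
        node.getProperties().put("hostname", "sipHeartbeat");
        // A custom algorithm could branch on any advertised property
        System.out.println(node.getProperties().get("udpPort"));
    }
}
```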
== Enabling TLS/WSS for Load Balancers
=== Enabling TLS/WSS for {this-platform} Load Balancer
To enable TLS/WSS support for the {this-platform} Load Balancer, you should correctly specify the following properties:

* keyStore;
* keyStorePassword;
* trustStore;
* trustStorePassword;
* enabledCipherSuites;
* tlsClientProtocols;
* external tlsPort and/or wssPort (and internal if needed);
* ipLoadBalancerTlsPort and/or ipLoadBalancerWssPort (if you are using an IP load balancer; and internal if needed).
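An illustrative fragment covering the list above — key names are from this section, values are placeholders, and the exact layout should follow your configuration file format:

----
keyStore=/path/to/keystore
keyStorePassword=changeit
trustStore=/path/to/keystore
trustStorePassword=changeit
enabledCipherSuites=TLS_RSA_WITH_AES_128_CBC_SHA
tlsClientProtocols=TLSv1.2
external.tlsPort=5061
external.wssPort=5083
----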
=== Enabling HTTPS
To enable HTTPS and Secure WebSockets support for the HTTP load balancer, you should correctly specify the following properties (the httpsPort property enables HTTPS support):
* httpsPort;
* keyStore;
* keyStorePassword;
* trustStore;
* trustStorePassword;
* enabledCipherSuites;
* tlsClientProtocols.
=== Enabling TLS for SMPP load balancer
To enable TLS support for the SMPP load balancer, you should correctly specify the following properties (the smppSslPort property enables TLS support):
* smppSslPort;
* keyStore;
* keyStorePassword;
* trustStore;
* trustStorePassword;
* enabledCipherSuites;
* tlsClientProtocols.
== Getting Statistics
The load balancers provide statistics via Java Management Extensions and JSON.
Use the URL http://host:statisticPort/lbstat to get JSON statistics data from the load balancer.
The following data is available:
. SIP balancer statistic:
+
[loweralpha]
.. Number of processed requests;
.. Number of processed responses;
.. Number of transferred bytes;
.. Number of requests processed by method;
.. Number of responses processed by status code;
.. Number of active connections (not available when the transport is UDP);
. HTTP balancer statistic:
+
[loweralpha]
.. Number of HTTP requests;
.. Number of bytes transferred to server;
.. Number of bytes transferred to client;
.. Number of requests processed by HTTP method;
.. Number of responses processed by HTTP code;
.. Number of active connections;
. SMPP balancer statistic:
+
[loweralpha]
.. Number of requests to server;
.. Number of requests to client;
.. Number of bytes transferred to server;
.. Number of bytes transferred to client;
.. Number of requests processed by SMPP Command ID;
.. Number of responses processed by SMPP Command ID;
.. Number of active connections;
. Other data:
+
[loweralpha]
.. Recent CPU usage for the Java Virtual Machine process;
.. Amount of used memory of the heap in bytes.
You can also get information about the nodes connected to the load balancer.
Use the URL http://host:statisticPort/lbinfo
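For example, from a shell on the balancer host, assuming statisticPort is set to 2006:

----
curl http://127.0.0.1:2006/lbstat
curl http://127.0.0.1:2006/lbinfo
----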
== Configuring the Load Balancer and two Restcomm servers on one local machine
.Load balancer and Restcomm servers diagram
image::images/Restcomm_servers-LB.png[]
This chapter shows how to configure two Restcomm instances and a Load Balancer instance on one machine running Ubuntu.
.Step 0: Create three sub-interfaces
Example for Ubuntu:
----
# Restcomm-server1
root@ubuntu:~# ifconfig lo:1 127.0.0.2 netmask 255.255.255.0
# Restcomm-server2
root@ubuntu:~# ifconfig lo:2 127.0.0.3 netmask 255.255.255.0
# Load Balancer
root@ubuntu:~# ifconfig lo:3 127.0.0.4 netmask 255.255.255.0
----
.Step 1: Install Mysql
Example for Ubuntu: https://www.linode.com/docs/databases/mysql/install-mysql-on-ubuntu-14-04
.Step 2: Download the latest version of Restcomm
. Download the latest version of Restcomm zip file from:
https://Restcomm.ci.cloudbees.com/job/RestComm/lastSuccessfulBuild/artifact/
. Unzip the binary to a local directory. It should be named similar to Restcomm-Restcomm-JBoss-AS7-.
We shall refer to this Restcomm directory as $RESTCOMM_HOME.
.Step 3: Configuring the mybatis.xml file to use Mysql
. Edit the file $RESTCOMM_HOME/standalone/deployments/restcomm.war/WEB-INF/conf/mybatis.xml
. Change the default environment to the MariaDB environment
. Add the MariaDB configuration environment tag
. Save and exit the mybatis.xml file
.Step 4: Start Mysql and Create the restcomm Database
. Start the mysql service if it is not already started: sudo /etc/init.d/mysql start
. Go to the directory $RESTCOMM_HOME/standalone/deployments/restcomm.war/WEB-INF/scripts/mariadb
. There will be an init.sql file and an sql directory
. Create the restcomm database from the init.sql as follows:
mysql -u root -p < init.sql
. Log into mysql and make sure the restcomm database is created: show databases
.Step 5: Edit the restcomm.xml file to point the DAO to mysql
. Edit the file $RESTCOMM_HOME/standalone/deployments/restcomm.war/WEB-INF/conf/restcomm.xml
. Find the dao-manager tag and change the sql-files path to mariadb as shown below
+
----
${restcomm:home}/WEB-INF/conf/mybatis.xml
${restcomm:home}/WEB-INF/scripts/mariadb/sql
----
.Step 6: Download Mysql Java Client Driver
. Download the latest mysql java connector client driver jar file from http://mvnrepository.com/artifact/mysql/mysql-connector-java.
. Put jar file in $RESTCOMM_HOME/standalone/deployments/restcomm.war/WEB-INF/lib/
.Step 7: Download the keystore file for SSL
You can download the file from
https://github.com/RestComm/load-balancer/blob/master/config-examples/keystore
(password 123456) or create one independently, and put the file
in the directory $RESTCOMM_HOME/standalone/configuration/
.Step 8: Edit mss-sip-stack.properties file
. Edit the file $RESTCOMM_HOME/standalone/configuration/mss-sip-stack.properties
. Add the following lines to the file
+
----
org.Restcomm.ha.javax.sip.BALANCERS=127.0.0.4:5082
org.Restcomm.ha.javax.sip.LOCAL_HTTP_PORT=8080
org.Restcomm.ha.javax.sip.LOCAL_SSL_PORT=8443
org.Restcomm.ha.javax.sip.REACHABLE_CHECK=false
gov.nist.javax.sip.TLS_CLIENT_AUTH_TYPE=Disabled
javax.net.ssl.keyStore=/absolutePathToKeystore/keystore
javax.net.ssl.keyStorePassword=123456
javax.net.ssl.trustStorePassword=123456
javax.net.ssl.trustStore=/absolutePathToKeystore/keystore
javax.net.ssl.keyStoreType=JKS
----
.Step 9: Configure Restcomm IP information and Text-to-speech
. Go to the directory $RESTCOMM_HOME/bin/restcomm
. Open the file restcomm.conf
. Go to the section # Network configuration
. Configure the following variables with the network configuration details of your first server.
+
----
#Network configuration
NET_INTERFACE='lo:1'
PRIVATE_IP='127.0.0.2'
SUBNET_MASK='255.255.255.0'
NETWORK='127.0.0.0'
BROADCAST_ADDRESS='127.0.0.255'
----
. Add the following parameters to enable the HTTPS connector (the password and alias are for the example keystore file from GitHub)
+
----
#HTTPS Settings
#File should be located at the $RESTCOMM_HOME/standalone/configuration folder.
#Provide just the name of the truststore file. Leave it blank to disable HTTPS
TRUSTSTORE_FILE='keystore'
#Password for the truststore file
TRUSTSTORE_PASSWORD='123456'
#The certificate alias
TRUSTSTORE_ALIAS='smpp'
#Control whether or not Restcomm will accept self-signed certificates.
#allowall=allow self-signed certificates,
#strict=don't allow self-signed certificates
SSL_MODE='allowall'
----
. Add your VoiceRSS API key to the variable
+
----
VOICERSS_KEY='f4840af6675b4d20a8d96dea8466296b'
----
. Also edit the following lines
+
----
ACTIVATE_LB='TRUE'
LB_ADDRESS='127.0.0.4'
LB_INTERNAL_PORT='5082'
LB_SIP_PORT_UDP='5080'
LB_SIP_PORT_TCP='5080'
LB_SIP_PORT_TLS='5081'
LB_SIP_PORT_WS='5082'
LB_SIP_PORT_WSS='5083'
----
. Save and exit the restcomm.conf file
.Step 10: Configure proxy.conf file
Change the following lines in the $RESTCOMM_HOME/bin/restcomm/proxy.conf file:

----
ACTIVE_PROXY='true'
PROXY_IP='172.21.0.104'
PROXY_PRIVATE_IP='127.0.0.4'
----
.Step 11: Configuration second Restcomm server
. Copy the $RESTCOMM_HOME folder to a separate folder and change the following files there:
.. $RESTCOMM_HOME/bin/restcomm/restcomm.conf
+
----
NET_INTERFACE='lo:2'
PRIVATE_IP='127.0.0.3'
----
.. $RESTCOMM_HOME/bin/restcomm/start-restcomm.sh
+
Because we are going to start the second Restcomm server on the same machine,
we should change the name of the screen session "restcomm" to "restcomm2"
(or similar) and the name of the screen session "mms" to "mms2".
We should also add the property -bmanagement=$bind_address, as shown below:
+
----
if screen -list | grep -q 'restcomm2'; then
echo 'TelScale RestComm is already running on screen session "restcomm2"!'
....
echo 'TelScale RestComm started running on standalone mode. Screen session: restcomm2.'
....
$RESTCOMM_HOME/bin/standalone.sh -b $bind_address -bmanagement=$bind_address
else
screen -dmS 'restcomm2' $RESTCOMM_HOME/bin/standalone.sh -b $bind_address -bmanagement=$bind_address
....
screen -dmS 'restcomm2' $RESTCOMM_HOME/bin/domain.sh -b $bind_address -bmanagement=$bind_address
echo 'TelScale RestComm started running on domain mode. Screen session: restcomm2.'
....
screen -dmS 'restcomm2' $RESTCOMM_HOME/bin/standalone.sh -b $bind_address -bmanagement=$bind_address
echo 'TelScale RestComm started running on standalone mode. Screen session: restcomm2.'
....
if screen -ls | grep -q 'mms2'; then
echo '...Restcomm Media Server is already running on screen session "mms2"!'
else
chmod +x $MMS_HOME/bin/run.sh
screen -dmS 'mms2' $MMS_HOME/bin/run.sh
echo '...Restcomm Media Server started running on screen "mms2"!'
----
.. $RESTCOMM_HOME/bin/restcomm/stop-restcomm.sh
+
----
if screen -ls | grep -q 'mms2'; then
screen -S 'mms2' -p 0 -X 'quit'
echo '...stopped Restcomm Media Server instance running on screen session "mms2"...'
....
if screen -list | grep -q 'restcomm2'; then
screen -S 'restcomm2' -p 0 -X 'quit'
echo '...stopped RestComm instance running on screen session "restcomm2"!'
----
.Step 12: Start Restcomm servers and Open the Admin GUI
. Go to the directory $RESTCOMM_HOME/bin/restcomm/
. Run the command below to start Restcomm and the media server:
./start-restcomm.sh
. Open your web browser and go to the URL http://127.0.0.2:8080/
. Log in with the username [email protected] and the password RestComm
You will be prompted to change the default password.
. For stopping Restcomm run the command:
./stop-restcomm.sh
.Step 13: Download and configure the Load Balancer
. Download the latest version of the load balancer jar file (sip-balancer-jar-x.x.xx-zip.zip)
from https://mobicents.ci.cloudbees.com/job/Restcomm-LoadBalancer/lastSuccessfulBuild/artifact/jar/target/
where x.x.xx is the release version number
. Unzip the file
. Put the files in one folder
. Modify the lb-configuration.xml file and set the following properties:
+
----
external.host 172.21.0.104
external.wsPort 5090
external.wssPort 5091
internal.host 127.0.0.4
internal.wsPort 5082
internal.wssPort 5083
----
.Step 14: Use Olympus with Restcomm for WS and WSS
To add the Load Balancer's certificate to your browser, connect to it at:
https://172.21.0.104:5091
and confirm the certificate.
For WS:
Log in to the Restcomm server at http://127.0.0.2:8080/olympus/
Fill in the form with the Load Balancer's IP address (external.host) 172.21.0.104 and port (external.wsPort) 5090
For WSS:
Log in to the Restcomm server at https://127.0.0.2:8443/olympus/
Fill in the form with the Load Balancer's IP address (external.host) 172.21.0.104 and port (external.wssPort) 5091
== Integrating the Load Balancer with Restcomm and Nexmo over SMPP
.Configure Restcomm to Use SMPP
First you need to configure the IP address and other settings required
to start Restcomm, as explained in link:http://documentation.telestax.com/connect/configuration/Starting%20Restcomm-Connect.html#start-restcomm-connect[Starting Restcomm-Connect].
Edit the file $RESTCOMM_HOME/bin/restcomm/restcomm.conf. You must set the SMPP_ACTIVATE variable to true for SMPP to be activated:
#Connection details for SMPP Restcomm integration
SMPP_ACTIVATE='true' #Set to true to activate SMPP
SMPP_SYSTEM_ID='xxxxx' #your Nexmo "key"
SMPP_PASSWORD='xxxxxx' #your Nexmo "secret"
SMPP_SYSTEM_TYPE='xxxxxx' #required when working with Nexmo for inbound SMS
SMPP_PEER_IP='xxxxxx' #smppHost of the Load Balancer
SMPP_PEER_PORT='xxxxx' #smppPort of the Load Balancer
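Before starting Restcomm, you can verify that SMPP was actually activated in restcomm.conf. The helper below is illustrative (it runs against an inline sample file; in practice you would point it at $RESTCOMM_HOME/bin/restcomm/restcomm.conf):

```shell
# Illustrative: check that SMPP_ACTIVATE is set to 'true' in a
# restcomm.conf-style file. Here we test against an inline sample.
smpp_enabled() {
    grep -q "^SMPP_ACTIVATE='true'" "$1"
}

conf=$(mktemp)
printf "SMPP_ACTIVATE='true'\nSMPP_SYSTEM_ID='xxxxx'\n" > "$conf"

if smpp_enabled "$conf"; then
    result='SMPP enabled'
else
    result='SMPP disabled'
fi
echo "$result"
rm -f "$conf"
```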
.Configure Load Balancer to Use SMPP
Edit the file lb-configuration.xml, set next properties
# The address of the load balancer
smppHost xxxxxx
# The port of the load balancer to which Restcomm connects
smppPort xxxx
# The port of the load balancer for the SSL protocol, to which Restcomm connects
#smppSslPort xxxx
# The IP address:port of the Nexmo server
remoteServers 174.37.245.38:8000
# max number of connections/sessions this server is expected to handle;
# this number corresponds to the number of worker threads reading data
# from sockets and the threads under which that data is processed.
# It is recommended to have no more than 10 (ten) SMPP sessions at any time.
maxConnectionSize 10
# Is NIO enabled
nonBlockingSocketsEnabled true
# Are the default session counters enabled (used for testing)
defaultSessionCountersEnabled true
# Response timeout for the load balancer in milliseconds (the maximum time the LB
# waits for a response; after the timeout the packet is considered lost and an error response is sent)
timeoutResponse 10000
# Session initialization timer (if a client connects but doesn't send a bind
# request, the LB disconnects it)
timeoutConnection 1000
# Enquire Link Timer (after each such period the LB checks the connections to the
# client and server by sending enquire_link)
timeoutEnquire 50000
# Time between reconnections (the time the LB waits before reconnecting if the
# connection was lost)
reconnectPeriod 1000
# Connection check timer on the client side (the time the LB waits for
# enquire_link_resp; if none is received, it closes the connection to the client)
timeoutConnectionCheckClientSide 1000
# Connection check timer on the server side (the time the LB waits for
# enquire_link_resp; if none is received, it tries to rebind to the server)
timeoutConnectionCheckServerSide 1000
Start the Load Balancer and Restcomm; in a few seconds the connection should be established
and you will see a line like the following:
INFO [MClientConnectionImpl] Connection to server : 127.0.0.1 : 10021 established. Server session ID : 0
Now you can send outbound and inbound messages. Instructions for sending messages can be found
link:http://documentation.telestax.com/connect/configuration/Restcomm%20-%20Connecting%20SMPP%20Endpoint%20through%20Nexmo.html[here].
== Active-Active Setup and Graceful Upgrades of the Load Balancer
The link:http://www.keepalived.org/[keepalived service] makes it possible to gracefully
shut down the Load Balancer using health checks and a virtual IP to which clients and
servers on the external and internal sides send their messages. The Load Balancer
receives these messages on its real IP and forwards them on. This way, two LBs can run
at the same time, controlled by one keepalived service.
.Active-Active Load balancer diagram
image::images/Active-Active.png[]
If LB2 is gracefully shut down via the HTTP request
http://ipLoadbalancer:statisticPort/lbstop (for example http://10.0.0.20:2006/lbstop), the keepalived service
will switch the virtual IPs 10.0.0.200 and 172.21.0.200 of the second load balancer (LB2)
over to the other active one (LB1), and all traffic that clients send to the virtual IP 10.0.0.200
will be redirected to LB1.
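The graceful-shutdown request can be scripted. The helper below is illustrative: `lbstop_url` is not part of the Load Balancer distribution; only the /lbstop path and the 10.0.0.20:2006 example come from the text above.

```shell
# Illustrative: build the graceful-shutdown URL for a Load Balancer from
# its IP address and statistics port, then trigger it with curl.
lbstop_url() {
    echo "http://$1:$2/lbstop"
}

url=$(lbstop_url 10.0.0.20 2006)
echo "$url"
# curl -s "$url"    # uncomment to actually request the graceful shutdown
```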
.The graceful shutdown of Load balancer diagram
image::images/Active-Active-Graceful-Shutdown.png[]
To configure the Load Balancer you need to set up the keepalived service and set
additional properties in the lb-configuration.xml file.
First, install the keepalived service on each machine that runs a Load Balancer:
sudo apt-get install keepalived
The service looks for its configuration files in the /etc/keepalived directory.
Create that directory and the file /etc/keepalived/keepalived.conf on both machines.
keepalived.conf on the machine with LB1:
vrrp_script chk_lb_ext {
script "/home/user/lbhealthcheck.sh"
interval 1
fall 1
rise 1
}
vrrp_script chk_lb_int {
script "/home/user/lbhealthcheck.sh"
interval 1
fall 10
rise 1
}
vrrp_instance VI_1 {
state MASTER
interface eth0
virtual_router_id 51
priority 150
advert_int 1
authentication {
auth_type AH
auth_pass k@l!ve1
}
virtual_ipaddress {
10.0.0.100/24
}
track_script {
chk_lb_ext
}
}
vrrp_instance VI_2 {
state MASTER
interface eth1
virtual_router_id 52
priority 150
advert_int 1
authentication {
auth_type AH
auth_pass k@l!ve1
}
virtual_ipaddress {
172.21.0.100/24
}
track_script {
chk_lb_int
}
}
vrrp_instance VI_3 {
state BACKUP
interface eth0
virtual_router_id 53
priority 100
advert_int 1
authentication {
auth_type AH
auth_pass k@l!ve3
}
virtual_ipaddress {
10.0.0.200/24
}
track_script {
chk_lb_ext
}
}
vrrp_instance VI_4 {
state BACKUP
interface eth1
virtual_router_id 54
priority 100
advert_int 1
authentication {
auth_type AH
auth_pass k@l!ve3
}
virtual_ipaddress {
172.21.0.200/24
}
track_script {
chk_lb_int
}
}
virtual_server 10.0.0.100 {
delay_loop 10
protocol TCP
lb_algo rr
lb_kind NAT
persistence_timeout 7200
real_server 10.0.0.10 {
}
}
virtual_server 172.21.0.100 {
delay_loop 10
protocol TCP
lb_algo rr
lb_kind NAT
persistence_timeout 7200
real_server 172.21.0.10 {
}
}
virtual_server 10.0.0.200 {
delay_loop 10
protocol TCP
lb_algo rr
lb_kind NAT
persistence_timeout 7200
real_server 10.0.0.10 {
}
}
virtual_server 172.21.0.200 {
delay_loop 10
protocol TCP
lb_algo rr
lb_kind NAT
persistence_timeout 7200
real_server 172.21.0.10 {
}
}
Then define the symmetric configuration file on the second machine: there, VI_3 and VI_4
are in the MASTER state with the higher priority of 150 so as to start from a stable state,
while VI_1 and VI_2 are in the default BACKUP state with the lower priority of 100.
virtual_server should be 10.0.0.200 and 172.21.0.200, and real_server 10.0.0.20 and
172.21.0.20, accordingly. The other parameters should be the same.
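Concretely, the configuration on the LB2 machine differs from the listing above only in the states, priorities, and server addresses. A sketch of just the changed values (everything not shown stays the same as in the LB1 file):

....
# keepalived.conf on the machine with LB2 - only the differing values shown
vrrp_instance VI_1 { state BACKUP priority 100 ... }
vrrp_instance VI_2 { state BACKUP priority 100 ... }
vrrp_instance VI_3 { state MASTER priority 150 ... }
vrrp_instance VI_4 { state MASTER priority 150 ... }
virtual_server 10.0.0.200 { real_server 10.0.0.20 { } ... }
virtual_server 172.21.0.200 { real_server 172.21.0.20 { } ... }
....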
You also need to create the file lbhealthcheck.sh somewhere (for example in /home/user/),
which checks whether the statistics port is open:
port=$(nc -z 172.21.0.220 2006; echo $?)
if [ $port -ne 0 ]; then
exit 1
else
exit 0
fi
Next, set the Load Balancer IP addresses and ports. For example, for LB1, on the
external side: host 10.0.0.10, virtual IPs 10.0.0.100,10.0.0.200, and ports 5060,
5060, 5061, 5062 and 5063, for the real IP and the virtual IPs alike; on the
internal side: host 172.21.0.10, virtual IPs 172.21.0.100,172.21.0.200, and ports
5080, 5080, 5081, 5082 and 5083, again for both the real IP and the virtual IPs.
You also need to set the following tags: the corresponding flag to true, the
corresponding value to 10000, and the SIP stack property
gov.nist.javax.sip.NEVER_ADD_RECEIVED_RPORT to true.
The same properties should be set for LB2.
On the server side you should also set the
SIP stack property gov.nist.javax.sip.NEVER_ADD_RECEIVED_RPORT=true.
== Kubernetes integration
With Kubernetes integration, the Load Balancer can obtain information about running
SIP Server nodes, such as IP addresses and ports, from their pods. When a SIP Server node
starts, it can publish its properties (IP, ports) in the labels of its pod. The pod
metadata then looks like this:
"apiVersion": "v1",
"kind": "Pod",
"metadata": {
"creationTimestamp": "2017-03-17T09:05:43Z",
"labels": {
"Restcomm-Instance-Id": "qwe123rty",
"hostName": "node1",
"httpPort": "2080",
"name": "sip-node",
"sessionId": "123456789",
"sslPort": "2081",
"tcpPort": "4060",
"tlsPort": "4061",
"udpPort": "4060",
"version": "0",
},
"name": "sip-node-1",
"namespace": "default",
"resourceVersion": "501683",
...
From these labels, the {this-platform} Load Balancer
creates and maintains a list of all available and healthy nodes in the cluster,
making the whole cluster dynamic and capable of autoscaling with no
downtime.
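As an illustration of how a value can be read back out of those labels, the snippet below extracts one port from an inline copy of the label JSON with sed. This is illustrative only; against a real cluster you would query the pod via kubectl instead.

```shell
# Illustrative: pull the udpPort label value out of a pod-metadata JSON
# fragment like the one shown above.
labels='{"hostName":"node1","udpPort":"4060","tcpPort":"4060"}'
udp=$(printf '%s' "$labels" | sed -n 's/.*"udpPort":"\([0-9]*\)".*/\1/p')
echo "udpPort=$udp"
```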
To create such a pod, we can use a manifest like:
{
"kind": "Pod",
"apiVersion": "v1",
"metadata": {
"name": "sip-node-1",
},
"spec": {
"volumes": [
],
"containers": [
{
"name": "sip-node",
"image": "knosach/sip-node",
"ports": [
]
}
]
}
}
Only one property needs to be set in the MSS configuration:
org.mobicents.ha.javax.sip.POD_NAME, which should be the same as the name of the pod.
On the Load Balancer side we should set the following tags:
the server controller class org.mobicents.tools.heartbeat.kube.ServerControllerKube, nodeName sip-node, and pullPeriod 2000.
pullPeriod - the period at which the Load Balancer will check the active pods;
nodeName - the string with which the names of all pods hosting SIP Server nodes must
begin. For example, if the pods with SIP Server nodes are named
sip-node-1, sip-node-2, and so on, this tag should be set to sip-node.
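The nodeName matching described above amounts to a prefix check on pod names. A minimal sketch (illustrative only, not the Load Balancer's actual code):

```shell
# Illustrative: a pod belongs to the cluster if its name starts with
# the configured nodeName.
node_name='sip-node'
is_sip_node() {
    case "$1" in
        "$node_name"*) return 0 ;;
        *)             return 1 ;;
    esac
}

is_sip_node 'sip-node-1' && echo 'sip-node-1 matches'
is_sip_node 'other-pod-1' || echo 'other-pod-1 does not match'
```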