Installation Instructions for JGroups

JGroups is shipped as a single JAR: jgroups-3.x.y.jar.

Requirements

  • JGroups 3.x requires JDK 6 or higher.
  • There is no JNI code present, so JGroups should run on all platforms.
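
To check which JDK is installed, run:

java -version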

Installing the binary distribution

The binary version consists of jgroups-3.x.y.jar: the JGroups library including some demos, test code and sample configuration files. Place jgroups-3.x.y.jar somewhere on your CLASSPATH, and you're ready to start using JGroups.
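
For example, on UNIX-like systems (the path below is just a placeholder for wherever you unpacked the distribution):

export CLASSPATH=$CLASSPATH:/path/to/jgroups-3.x.y.jar

On Windows:

set CLASSPATH=%CLASSPATH%;C:\path\to\jgroups-3.x.y.jar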

Testing your Setup

To see whether your system can find the JGroups classes, execute the following command:
java org.jgroups.Version

or

java -jar jgroups-3.x.y.jar


You should see the following output (more or less) if the class is found:

Version: 3.0.0.Final

Running the performance tests


By default, we run 2 senders with 1 million 1K messages each. To do this, execute the following in 2 shells:

./jgroups.sh tests.perf.Test -config ./config.txt -props ./udp.xml -sender

You should see output like the following in both shells:

-- results:

linux-34003 (myself):
num_msgs_expected=2000000, num_msgs_received=2000000 (loss rate=0.0%), received=2GB, time=13158ms, msgs/sec=151998.78, throughput=152MB

linux-15721:
num_msgs_expected=2000000, num_msgs_received=2000000 (loss rate=0.0%), received=2GB, time=13073ms, msgs/sec=152987.07, throughput=152.99MB

combined: 152492.93 msgs/sec averaged over all receivers (throughput=152.49MB/sec)
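
As a sanity check, these numbers are consistent: each member expects 2,000,000 messages (2 senders x 1 million messages each), which at 1K per message is 2GB of payload; 2,000,000 msgs / 13.158s ≈ 152,000 msgs/sec, and at 1K per message that is roughly 152MB/sec of throughput.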


Running a Demo Program

To test whether JGroups works okay on your machine, run (assuming jgroups-3.x.y.jar is on the classpath)
java -Djava.net.preferIPv4Stack=true org.jgroups.demos.Draw
twice. Two whiteboard windows should appear, and both window title bars should show 2. This means that the two instances found each other and formed a cluster.

When drawing in one window, the second instance should also be updated. As the default transport uses IP multicast, make sure that IP multicast is enabled if you want to start the 2 instances in different subnets. If it isn't, the 2 instances won't find each other and the sample won't work.

You can change the properties of the demo, e.g. to use a different transport if multicast doesn't work (it should always work on the same machine). For example, to explicitly use udp.xml, execute:

java -Djava.net.preferIPv4Stack=true org.jgroups.demos.Draw -props ./udp.xml
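
If you prefer to verify cluster formation programmatically, the following sketch uses the core JGroups API to join a cluster and print view changes. The class name, cluster name and message text are made up for illustration. Compile it with jgroups-3.x.y.jar on the classpath, then start it twice (with -Djava.net.preferIPv4Stack=true, as above); each instance should print a view with 2 members:

import org.jgroups.JChannel;
import org.jgroups.Message;
import org.jgroups.ReceiverAdapter;
import org.jgroups.View;

// Hypothetical example class, not part of JGroups
public class ClusterCheck {
    public static void main(String[] args) throws Exception {
        JChannel ch = new JChannel("udp.xml");       // the bundled UDP/multicast stack
        ch.setReceiver(new ReceiverAdapter() {
            @Override
            public void viewAccepted(View view) {
                System.out.println("view: " + view); // printed on every membership change
            }
            @Override
            public void receive(Message msg) {
                System.out.println("received: " + msg.getObject());
            }
        });
        ch.connect("demo-cluster");                  // join (or create) the cluster
        ch.send(new Message(null, "hello"));         // null destination = send to all members
        Thread.sleep(60000);                         // keep this member alive for a minute
        ch.close();
    }
}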

 

Using IP Multicasting without a network connection

Sometimes there isn't a network connection (e.g. the DSL modem is down), or we want to multicast only on the local machine. For this, the loopback interface (typically lo) can be configured, e.g.
route add -net 224.0.0.0 netmask 224.0.0.0 dev lo
This means that all traffic directed to the 224.0.0.0 network will be sent to the loopback interface, so no physical network needs to be up. Note that the 224.0.0.0 network is a placeholder for all multicast addresses in most UNIX implementations: it will catch all multicast traffic. This is an undocumented feature of /sbin/route and may not work across all UNIX flavors. The above instructions may also work for Windows systems, but this hasn't been tested. Note that not all systems allow multicast traffic to use the loopback interface.
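
You can verify that the route was added by listing the kernel routing table:

/sbin/route -n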

Typical home networks have a gateway/firewall with 2 NICs: the first (eth0) is connected to the outside world (Internet Service Provider), the second (eth1) to the internal network, with the gateway firewalling/masquerading traffic between the internal and external networks. If no route for multicast traffic is added, the default will be to use the default gateway, which will typically direct the multicast traffic towards the ISP. To prevent this (e.g. the ISP drops multicast traffic, or latency is too high), we recommend adding a route for multicast traffic which goes to the internal network (e.g. eth1).
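
For example, assuming eth1 is the internal interface as above:

route add -net 224.0.0.0 netmask 224.0.0.0 dev eth1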
 

The instances don't find each other!

In this case we can use the sender and receiver test programs, which send and receive on all available interfaces. One of them will certainly be the right one. Start the receiver as follows:

java org.jgroups.tests.McastReceiverTest -mcast_addr 228.8.8.8

The multicast receiver uses JDK functionality to list all available network interfaces and bind to all of them (including the loopback interface). This means that whichever interface a packet comes in on, we will receive it.
Now start the sender:

java org.jgroups.tests.McastSenderTest -mcast_addr 228.8.8.8

The sender will also determine the available network interfaces and send each packet over all interfaces.

This test can be used to find out which network interface to bind to when previously no packets were received. For example, when you see the following output in the receiver:

bash-2.03$ java org.jgroups.tests.McastReceiverTest -mcast_addr 228.8.8.8 -bind_addr 192.168.168.4
Socket=0.0.0.0/0.0.0.0:5555, bind interface=/192.168.168.4
dd [sender=192.168.168.4:5555]
dd [sender=192.168.168.1:5555]
dd [sender=192.168.168.2:5555]

you know that you can bind to any of the 192.168.168.{1,2,4} interfaces to receive your multicast packets. In this case you would need to modify your protocol spec to include bind_addr=192.168.168.2 in UDP, e.g. "UDP(mcast_addr=228.8.8.8;bind_addr=192.168.168.2):..." .
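
In the XML configurations shipped with JGroups 3.x (e.g. udp.xml), the same setting is the bind_addr attribute on the UDP element. A minimal sketch, with the other attributes of the shipped udp.xml omitted and the mcast_port value chosen just for illustration:

<UDP bind_addr="192.168.168.2"
     mcast_addr="228.8.8.8"
     mcast_port="45588" />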


Alternatively you can use McastDiscovery. Start this program simultaneously on multiple machines. Binding to all available interfaces, this program tries to discover what other members are available in a network and determines which interfaces should be used by UDP. After some time (e.g. 30 seconds), press <enter> on each program. The program will then list the interfaces which can be used to bind to. There may be one or multiple interfaces. When there are multiple interfaces listed, take the one with the highest number of responses (at the top of the list). The UDP protocol spec can then be changed to explicitly bind to that interface, e.g.

"UDP(bind_addr=<interface>;...)"


Problems with IPv6

Another source of problems might be the use of IPv6, and/or misconfiguration of /etc/hosts. If you communicate between an IPv4 and an IPv6 host, and they are not able to find each other, try the java.net.preferIPv4Stack=true property, e.g.

java -Djava.net.preferIPv4Stack=true org.jgroups.demos.Draw -props file:c:\\udp.xml

JDK 6 uses IPv6 by default, although it has a dual stack, that is, it also supports IPv4.


If you want to use IPv6, omit -Djava.net.preferIPv4Stack=true, or force the use of IPv6 by passing -Djava.net.preferIPv6Addresses=true.
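
For example:

java -Djava.net.preferIPv6Addresses=true org.jgroups.demos.Draw -props ./udp.xml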

I have discovered a bug!

If you think that you discovered a bug, submit a bug report on JIRA or send email to javagroups-developers if you're unsure about it. Please include the following information:
  • Version of JGroups (java org.jgroups.Version)
  • Platform (e.g. Solaris 8)
  • Version of JDK (e.g. JDK 1.4.2_07)
  • Stack trace. Use kill -3 PID on UNIX systems or CTRL-BREAK on Windows machines
  • Small program that reproduces the bug

 
 







