JacORB Performance compared

JacORB compares very well with commercial ORB implementations and with other open source Java ORBs. In many cases, we outperform our competitors. Please check the detailed results yourself; for your own comparison, you can use the simple benchmark demo delivered with JacORB.

Results from the Distributed Systems Research Group, Charles University, Prague

A comprehensive ORB performance comparison can be found here:

http://nenya.ms.mff.cuni.cz/benchmark

Outdated: data measured with our own benchmark demo

Measurements were taken on a single SPARC Ultra 1, 170 MHz, 256 MB, running Solaris 2.7 and using Sun JDK 1.2.2. In all cases we used Solaris green threads and enabled the JIT compiler. Also, server-side skeletons inherited from an object adapter base class rather than using a tie. The competitors were JacORB 1.0 beta 15, VisiBroker for Java 4.0, Orbacus 4.0 beta 2 and, as a reference, RMI 1.2.2. We also tried to measure the performance of JavaORB, but it blocked in the middle of the test and could not complete it.
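
As a small illustration of the inheritance approach just mentioned, the sketch below shows a servant class that extends the generated skeleton base class directly instead of delegating through a generated tie class. The interface name Bench, its operations, the skeleton class _BenchImplBase and the struct class S are illustrative assumptions; the actual IDL and generated names used by the benchmark demo may differ.

    // Hypothetical servant for an assumed IDL interface "Bench": the
    // implementation class inherits from the generated skeleton base class
    // rather than being wrapped in a tie object.
    public class BenchImpl extends _BenchImplBase
    {
        // a ping is an empty but synchronous operation
        public void ping()
        {
        }

        // the array tests simply return the received data to the caller
        public int[] echoIntArray(int[] data)
        {
            return data;
        }

        public S[] echoStructArray(S[] data)
        {
            return data;
        }
    }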

As a general result, it can be said that JacORB's performance is excellent when compared with commercial implementations like VisiBroker. The benchmark itself measures only very few aspects of data marshalling, though; for example, we did not compare performance for Anys or DynAnys. All tests were run 50 times, and the overall results are averages over these 50 round trips. The benchmarking code is part of the JacORB distribution and lives in demo/benchmark.
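
As a rough sketch of how such a round-trip measurement can be driven from the client side, the following class times 50 synchronous invocations and reports the average. It assumes the hypothetical Bench interface from above has been compiled to Java stubs (Bench, BenchHelper) and that the server's stringified IOR is passed as the first command-line argument; the actual code in demo/benchmark is organised differently.

    import org.omg.CORBA.ORB;

    public class BenchClient
    {
        public static void main(String[] args)
        {
            // initialise the ORB and obtain a reference from a stringified IOR
            ORB orb = ORB.init(args, null);
            Bench server = BenchHelper.narrow(orb.string_to_object(args[0]));

            final int LOOPS = 50;
            long start = System.currentTimeMillis();
            for (int i = 0; i < LOOPS; i++)
            {
                server.ping();   // empty, synchronous round trip
            }
            long stop = System.currentTimeMillis();

            System.out.println("average ping time: "
                               + ((stop - start) / (double) LOOPS) + " ms");
        }
    }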

The table below gives all the results obtained. All times are in milliseconds, so lower numbers mean better performance. The figures in the ping column are independent of the array sizes; the different array-size rows simply show repeated runs of the same test. (A ping is an empty but synchronous operation invocation.) For a more readable representation, please see the plots below.

ORB         Array size   Ping (ms)   Int array (ms)   Struct array (ms)
JacORB               1        2.76             2.64                2.86
VisiBroker           1        3.66             4.54                4.66
Orbacus              1        4.18             3.22                3.20
RMI                  1        2.58             3.52                4.70
JacORB              10        2.64             2.56                2.64
VisiBroker          10        2.78             4.68                2.92
Orbacus             10        3.32             5.86                3.48
RMI                 10        2.80             3.34                7.48
JacORB             100        6.10             2.82                3.36
VisiBroker         100        3.96             4.62                3.70
Orbacus            100        3.28             3.54                4.70
RMI                100        2.46             4.22               40.30
JacORB            1000        2.62            50.64               60.12
VisiBroker        1000        2.76           101.22              104.06
Orbacus           1000        6.00             7.18               18.98
RMI               1000        2.36            12.38              355.32
JacORB           10000        2.64            87.10              149.70
VisiBroker       10000        2.98           162.98              231.28
Orbacus          10000        3.26            52.18              180.82
RMI              10000        2.38            51.44             3490.32
JacORB          100000        2.48           399.24             1218.66
VisiBroker      100000        6.02           801.08             1337.46
Orbacus         100000        3.12           546.18             2060.80
RMI             100000        4.08           512.80            38025.74

Integer array performance compared

The following plot compares the results from sending integer arrays of different sizes as operation arguments and receiving them back as results. Please note that the scales in this plot, as in the following one, are logarithmic.
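
To make the shape of this test concrete, here is a hedged sketch of the call being timed, continuing the hypothetical BenchClient above. An assumed IDL declaration along the lines of "typedef sequence<long> IntSeq;" with an operation "IntSeq echoIntArray(in IntSeq data);" maps to a Java method that takes and returns an int[].

    // Assumed operation int[] echoIntArray(int[] data) on the Bench stub.
    // Returns the average round-trip time in milliseconds for one array size.
    static double timeIntArray(Bench server, int size, int loops)
    {
        int[] data = new int[size];       // payload: 'size' zero-valued ints
        long start = System.currentTimeMillis();
        for (int i = 0; i < loops; i++)
        {
            int[] result = server.echoIntArray(data);   // sent and received back
        }
        return (System.currentTimeMillis() - start) / (double) loops;
    }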

Struct array performance compared

The following plot compares the results from sending struct arrays of different sizes as operation arguments. In this case, IDL structs with a single integer member were mapped to a Java class. Arrays of objects of this type were sent across the network and back again.
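
Again as an illustrative assumption rather than the actual benchmark code: an IDL struct with a single integer member, say "struct S { long value; };" with "typedef sequence<S> SSeq;" and an operation "SSeq echoStructArray(in SSeq data);", maps to a generated Java class S with a public int field, so the timed call can be pictured as follows.

    // Assumed operation S[] echoStructArray(S[] data), where S is the Java
    // class generated from the single-member IDL struct.
    static double timeStructArray(Bench server, int size, int loops)
    {
        S[] data = new S[size];
        for (int i = 0; i < size; i++)
        {
            // the IDL-to-Java mapping generates an all-fields constructor
            data[i] = new S(i);
        }
        long start = System.currentTimeMillis();
        for (int i = 0; i < loops; i++)
        {
            S[] result = server.echoStructArray(data);  // across the wire and back
        }
        return (System.currentTimeMillis() - start) / (double) loops;
    }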

Ping performance compared

These values are the best ping times found for each ORB, averaged over 50 loops.


Gerald Brose, 2 March 2000.