We ran into a performance problem with JacORB; the details of the issue and its
cause follow.
One of our tests showed that the performance of our application dropped
significantly once we enabled certain functionality in our code. I have managed
to track this down to an issue with JacORB.
The machine running our stress test showed a CPU usage of only approx. 10%
while our throughput was drastically reduced; the application was spending most
of its time waiting rather than working.
The issue occurs when calls are forced to go via IIOP (in our case because
interceptors were present) and handling the call involves a second, internal
call over IIOP.
The cause of this performance degradation was tracked down to Nagle's algorithm
being enabled by default on the IIOP connections. The sequence of events was as
follows:
- first call over IIOP and flush
- handling the first call invokes a second call over IIOP and flush
- the handler of the second call has to wait for Nagle to actually send the data
- response received to the second call
- response received to the first call also has to wait for Nagle
A workaround was attempted by disabling Nagle in the ServerIIOPConnection and
ClientIIOPConnection. This brought performance back to a more acceptable level.
The attached code demonstrates the issue.
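For reference, the workaround amounts to the standard JDK socket option: JacORB's connection classes wrap plain java.net sockets, so disabling Nagle comes down to the call sketched below (a minimal illustration, not the actual patch to ServerIIOPConnection/ClientIIOPConnection):

```java
import java.net.Socket;
import java.net.SocketException;

public class NagleWorkaround {
    // Nagle's algorithm is controlled by the TCP_NODELAY socket option.
    // TCP_NODELAY is off by default, i.e. Nagle is enabled and small
    // writes may be delayed until the previous segment is acknowledged.
    public static void disableNagle(Socket socket) throws SocketException {
        socket.setTcpNoDelay(true); // send small GIOP messages immediately
    }

    public static void main(String[] args) throws Exception {
        Socket socket = new Socket();
        System.out.println("TCP_NODELAY before: " + socket.getTcpNoDelay());
        disableNagle(socket);
        System.out.println("TCP_NODELAY after:  " + socket.getTcpNoDelay());
        socket.close();
    }
}
```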
Created an attachment (id=85): Performance main class
Created an attachment (id=86)
Created an attachment (id=87): initialiser for interceptors
Created an attachment (id=88)
Created an attachment (id=89): Patch showing our workaround
Created an attachment (id=162): loopback IIOP transport
Having just moved up to JacORB 2.2.2 we found that this issue is still present.
Rather than play around with Nagle again I decided to take some time and
implement a more elegant solution.
The real issue is that JacORB, on detecting interceptors, sends the GIOP call
over the network even when the target lives in the same VM. It is here that
Nagle comes into play, so a better solution is to prevent intra-VM calls from
reaching the network transport layer at all. This patch creates a loopback
transport layer, circumventing the network and its associated issues.
A further improvement would be to make this loopback call higher up the stack,
removing the necessity to serialise/deserialise the data.
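The loopback idea can be sketched with plain JDK pipes (the class and method names here are illustrative only, not JacORB's actual transport SPI): bytes written on the client end become readable on the server end without ever crossing a socket, so Nagle never applies.

```java
import java.io.IOException;
import java.io.PipedInputStream;
import java.io.PipedOutputStream;
import java.util.Arrays;

public class LoopbackSketch {
    // Illustrative in-VM "connection": a pipe standing in for the client
    // and server ends of the transport. No socket, hence no Nagle delay.
    public static byte[] roundTrip(byte[] request) throws IOException {
        PipedOutputStream clientOut = new PipedOutputStream();
        PipedInputStream serverIn = new PipedInputStream(clientOut);
        clientOut.write(request);   // "send" the GIOP message
        clientOut.flush();          // flush completes immediately
        byte[] received = new byte[request.length];
        // a single small write fits in the pipe buffer, so one read suffices
        int n = serverIn.read(received, 0, received.length);
        return Arrays.copyOf(received, n);
    }

    public static void main(String[] args) throws IOException {
        byte[] out = roundTrip("GIOP-request".getBytes("ISO-8859-1"));
        System.out.println(out.length + " bytes echoed in-VM");
    }
}
```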
One question remains before this patch can be accepted: the treatment of the
addresses within ClientIIOPConnection. The patch as submitted contains a
getLocalLoopback method that searches the profile for intra-VM addresses; each
address found is checked against the local loopback before any attempt is made
to establish contact via a socket. This may not be correct.
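The intent of that check can be sketched as follows (this is not the patch's actual getLocalLoopback code, and the method name below is purely illustrative): an address qualifies as a loopback candidate if it is a loopback address or is bound to one of this host's own interfaces.

```java
import java.net.InetAddress;
import java.net.NetworkInterface;

public class LocalAddressCheck {
    // Illustrative check: could this address be reached without
    // leaving the machine (and therefore possibly the VM)?
    public static boolean isIntraVmCandidate(String host) throws Exception {
        InetAddress addr = InetAddress.getByName(host);
        if (addr.isLoopbackAddress()) {
            return true; // 127.0.0.0/8, ::1
        }
        // Address assigned to a local interface => same machine. Note this
        // does not prove the target object lives in the same VM.
        return NetworkInterface.getByInetAddress(addr) != null;
    }

    public static void main(String[] args) throws Exception {
        System.out.println("127.0.0.1 -> " + isIntraVmCandidate("127.0.0.1"));
        System.out.println("192.0.2.1 -> " + isIntraVmCandidate("192.0.2.1"));
    }
}
```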
I hope this is of some help to you as it certainly is of benefit to us.
Thanks for taking the time to look into this issue, Kevin.
The most recent patchfile you posted refers to five files that are not part of
the current source tree. Could you add attachments with those files?
Those files are the loopback connector; their contents are included in the patch.
Thanks Kev, I see the files now. I had overlooked the --new-file option to diff
in your patchfile.
For future reference, do you have a preferred way of generating patches?
The preferred format for patches is unified diff (diff -u) or context diff (diff -c).
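For completeness, the --new-file behaviour mentioned above can be demonstrated in a throwaway directory (the file and directory names here are purely illustrative):

```shell
# --new-file makes diff treat a file missing on one side as empty,
# so newly added files appear in the patch instead of being skipped.
tmp=$(mktemp -d)
mkdir "$tmp/old" "$tmp/new"
echo "loopback transport" > "$tmp/new/Loopback.java"
diff -ru --new-file "$tmp/old" "$tmp/new" || true  # diff exits 1 on differences
rm -rf "$tmp"
```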
Your patches have been committed to CVS HEAD. Thanks again for looking into this.