Read-only archive; use https://github.com/JacORB/JacORB/issues for new issues
Bug 915 - Bugs in calling PERSISTENT corba server methods
Summary: Bugs in calling PERSISTENT corba server methods
Status: ASSIGNED
Alias: None
Product: JacORB
Classification: Unclassified
Component: Implementation Repository
Version: 3.0 beta 1
Hardware: HP All
Importance: P1 major
Assignee: Phil Mesnier
URL:
Depends on:
Blocks:
 
Reported: 2012-01-05 06:36 CET by seetharaman
Modified: 2012-01-20 18:45 CET
CC: 2 users

See Also:


Attachments
Contains all the files for testing the bugs. BugDescription.txt has all the details (17.99 KB, text/plain)
2012-01-05 06:39 CET, seetharaman

Description seetharaman 2012-01-05 06:36:57 CET
Test Environment :
-------------------

The environment is HP-UX B11.31 ia64, java 1.6.0.10 and jacorb 2.3.1 (or jacorb 3.0 beta)

The same bug can be noticed in jacorb 3.0 also.

The bug occurs only for PERSISTENT (life span policy value) CORBA servers. TRANSIENT CORBA servers work fine.

Bug Description:
------------------

Around 1000 client threads are created simultaneously to call two different interface methods
(500 threads for each method), each belonging to a different PERSISTENT CORBA server.
The methods are dummies that just sleep for 4 seconds.
There are two problems noticed (a sketch of the test driver follows the problem list below):



1.  Some methods are executed more than once. While the client logs show that a method was
      executed exactly 500 times, the server logs show it executed more than 500 times,
      sometimes around 510 and up to a maximum of 600 times.
      This problem is CONSISTENTLY noticed in both jacorb 2.3.1 and jacorb 3.0 beta.
      
2. Not all method calls went through successfully. A few client calls (around 1 to 3) failed with the following error:
 
     Unexpected Exception:org.omg.CORBA.COMM_FAILURE:   vmcid: 0x0  minor code: 0 completed: Maybe
     org.omg.CORBA.COMM_FAILURE:   vmcid: 0x0  minor code: 0 completed: Maybe
         at org.jacorb.orb.giop.ReplyPlaceholder.getInputStream(ReplyPlaceholder.java:132)
         at org.jacorb.orb.ReplyReceiver.getReply(ReplyReceiver.java:273)
         at org.jacorb.orb.Delegate.invoke_internal(Delegate.java:1090)
         at org.jacorb.orb.Delegate.invoke(Delegate.java:957)
         at org.omg.CORBA.portable.ObjectImpl._invoke(ObjectImpl.java:457)
         at com.idl.test._test_interface1Stub.test_function1(_test_interface1Stub.java:33)
         at interface1_test.runTheTests(interface1_test.java:112)
         at interface1_test.run(interface1_test.java:104)
     The above problem is NOT consistently noticed in jacorb 2.3.1, occurring roughly once in every 2 or 3 runs.
     This problem is CONSISTENTLY noticed in jacorb 3.0 beta.
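
For reference, here is a minimal sketch of such a client driver for the first interface. This is my reconstruction, not the attached source: the stub names (com.idl.test.test_interface1, test_interface1Helper, test_function1) follow the names visible in the stack trace above, while the class name Interface1LoadTest and reading the IOR from the command line are illustrative assumptions.

  import org.omg.CORBA.ORB;

  // Hypothetical load driver: 500 threads concurrently call the 4-second
  // dummy method on one PERSISTENT server, as described in the report.
  public class Interface1LoadTest
  {
      public static void main (String[] args) throws Exception
      {
          final ORB orb = ORB.init (args, null);
          // Resolve the PERSISTENT server's reference from a stringified IOR.
          final com.idl.test.test_interface1 servantRef =
              com.idl.test.test_interface1Helper.narrow (orb.string_to_object (args[0]));

          Thread[] workers = new Thread[500];
          for (int i = 0; i < workers.length; i++)
          {
              workers[i] = new Thread (new Runnable ()
              {
                  public void run ()
                  {
                      try
                      {
                          servantRef.test_function1 (); // dummy op; sleeps ~4s server-side
                      }
                      catch (org.omg.CORBA.SystemException ex)
                      {
                          System.err.println ("Unexpected Exception:" + ex);
                      }
                  }
              });
          }
          for (Thread t : workers) t.start ();
          for (Thread t : workers) t.join ();
      }
  }
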
Comment 1 seetharaman 2012-01-05 06:39:21 CET
Created attachment 395 [details]
Contains all the files for testing the bugs. BugDescription.txt has all the details

BugDescription.txt has all the details about the bug, the source files involved and the test procedure.
Comment 2 Phil Mesnier 2012-01-19 16:29:30 CET
Here's what is happening to cause this problem.

The client is firing off 100 concurrent requests, one per thread. 

Several of these go off to the IMR, which dutifully sends back location forwarding details. 

The Delegate processes the location forward, rebinding and "resetting" all waiting requests, because these must all be waiting on the IMR, right? Unfortunately in this case, this is not right.

Upon receiving the first location forward, the Delegate rebinds, which releases the connection to the IMR and opens a new connection to the server. Since there were several requests sent to the IMR, multiple location forward exceptions are received from the IMR before the connection is released, and each one triggers a reset/rebind. 

Since the machine is loaded, it takes time to actually spawn all 100 threads, so some requests are started only after the first rebind has occurred, meaning the client sent those requests to the real server. When the second rebind occurs, all of these threads dutifully stop waiting, remarshal, and resend. Depending on the system load, this can happen several times, leading to 3 or more duplications of a request. The effect can impact many threads, leading to dozens of duplicated requests sent to the server.

Now for the solution. The easiest fix is to change nothing in JacORB and restructure the client application so that a connection to the real server is established before any MT activity starts, as sketched below. However, this breaks down if, for instance, the client doesn't obtain the target IOR until it has already entered its MT operational phase, or if the connection to the server is lost and must be reestablished.
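
Under the same naming assumptions as above, that restructuring could look roughly like this; using the standard _non_existent() operation as a cheap "ping" is my suggestion, not part of the original test:

  // Force the IMR location forward and rebind to complete on the main
  // thread before any workers start. _non_existent() is a standard
  // org.omg.CORBA.Object operation and performs a remote round trip.
  com.idl.test.test_interface1 servantRef =
      com.idl.test.test_interface1Helper.narrow (orb.string_to_object (ior));
  servantRef._non_existent ();  // binds via the IMR, then rebinds to the real server

  // Only now spawn the 500 concurrent callers; none of them can be left
  // waiting on the IMR connection when a later location forward arrives.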

I think the durable solution is to be more intelligent about when to abandon a request. This would involve modifying the ReplyReceiver and the Delegate so that the RR can decide if it truly needs to abandon the current request or not.
Comment 3 Phil Mesnier 2012-01-19 16:34:29 CET
I'm stepping down the severity from Blocker to Major because the condition under which the bug appears is very narrowly defined - it doesn't happen everywhere, even under similar loads. Also, a work-around has been defined which further mitigates the likelihood of occurrence.
Comment 4 Phil Mesnier 2012-01-20 18:45:14 CET
We have another work-around for this bug. Each client thread can make its own duplicate of the desired reference; any rebinding and notification will then affect only the thread(s) using that duplicate. This is not a perfect solution, however, since each duplicate must first interact with the IMR in order to get the target server's reference. Depending on the performance of the client host, this may result in many consecutive connections to the IMR, and possibly to the target server as well, depending on reference lifespan.

To test this, modify the original interface1_test.java from:

...
  servantRef.test_function1 ();

to

  test_interface1 target = test_interface1Helper.narrow (servantRef._duplicate());
  target.test_function1 ();

The interface2_test can be similarly modified.
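
In the context of the threaded test, the per-thread duplication would look roughly like this (a sketch; the surrounding run () method and the servantRef field are assumptions based on the attachment's interface1_test):

  // Each worker narrows its own duplicate of the reference, so a rebind
  // triggered by one thread's location forward resets only the requests
  // made through that duplicate, not those of the other threads.
  public void run ()
  {
      try
      {
          test_interface1 target =
              test_interface1Helper.narrow (servantRef._duplicate ());
          target.test_function1 ();
      }
      catch (org.omg.CORBA.SystemException ex)
      {
          System.err.println ("Unexpected Exception:" + ex);
      }
  }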

Thanks Nick Cross for this suggestion.