Web Service Provider on cluster

Situation:

We have a Web Service Provider published on an Adeptia Cluster. When both nodes of the cluster are up and the service is called through the load balancer's IP (the IP used in the SOAP address), the call fails.

The error returned is shown below:

<?xml version='1.0' encoding='UTF-8'?>

<S:Envelope xmlns:S="http://schemas.xmlsoap.org/soap/envelope/" xmlns:SOAP-ENV="http://schemas.xmlsoap.org/soap/envelope/">

<S:Body>

<S:Fault xmlns="" xmlns:ns3="http://www.w3.org/2003/05/soap-envelope">

<faultcode>S:Server</faultcode>

<faultstring>java.lang.RuntimeException: Published transaction did not produce output.</faultstring>

</S:Fault>

</S:Body>

</S:Envelope>

When node B is down and the SOAP address still uses the load balancer's IP, the call succeeds. (The service was deployed on node A using the migration utility.)

In the Webrunner.log file, we see the following error:

2016-04-28 13:43:59,025 ERROR [qtp2142659404-22] webservice com.adeptia.indigo.services.webservice.metro.WsTransactionImlMetro.invoke(WsTransactionImlMetro.java:466) - ||||null|||||null|Error while executing transaction through web service provider :: Error in creating process flow.: Failed to retrieve RMIServer stub: javax.naming.ServiceUnavailableException [Root exception is java.rmi.ConnectException: Connection refused to host: node1; nested exception is:
2016-04-28 13:43:59 java.net.ConnectException: Connection refused: connect][Error in creating process flow.: Failed to retrieve RMIServer stub: javax.naming.ServiceUnavailableException [Root exception is java.rmi.ConnectException: Connection refused to host: node1; nested exception is:
2016-04-28 13:43:59 java.net.ConnectException: Connection refused: connect]]|apses1639|
2016-04-28 13:43:59 java.io.IOException: Failed to retrieve RMIServer stub: javax.naming.ServiceUnavailableException [Root exception is java.rmi.ConnectException: Connection refused to host: node1; nested exception is:
2016-04-28 13:43:59 java.net.ConnectException: Connection refused: connect]

Cause:

Based on the behavior of the system and the error message generated, this issue is caused by node1 being unable to communicate with node2 on ports 21000 and 1098 (the default RMI ports).
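To confirm this diagnosis, you can test TCP reachability of the RMI ports from each node before changing any firewall rules. The sketch below is a generic port check, not part of Adeptia's tooling; the hostnames `node1`/`node2` and the ports 21000 and 1098 are taken from the log and cause above.

```python
# Minimal TCP reachability check for the default Adeptia RMI ports.
# Run this from each cluster node against the other node.
import socket

def port_open(host, port, timeout=3.0):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # Covers connection refused, DNS failure, and timeout.
        return False

if __name__ == "__main__":
    for host in ("node1", "node2"):          # assumed hostnames from the log
        for port in (21000, 1098):           # default RMI ports
            status = "open" if port_open(host, port) else "BLOCKED"
            print(f"{host}:{port} -> {status}")
```

If either port reports BLOCKED from the peer node, the firewall or network path between the nodes is the likely culprit.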


Solution:

Enable connectivity between node 1 and node 2 on ports 21000 and 1098.
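On Linux nodes running firewalld, opening the ports might look like the following. This is a sketch for one common firewall; the exact commands depend on your OS and firewall, and must be run on both nodes.

```shell
# Open the default Adeptia RMI ports (run on each node; assumes firewalld).
firewall-cmd --permanent --add-port=21000/tcp
firewall-cmd --permanent --add-port=1098/tcp
firewall-cmd --reload

# Quick connectivity check from the peer node (hostnames as in the log):
nc -zv node1 21000
nc -zv node1 1098
```

After opening the ports, repeat the web service call with both nodes up to verify the fault no longer occurs.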
