Tuesday, 1 March 2011

VirtualBox on Windows XP - 2

I thought it would be quite simple to increase the VM memory beyond the 1500 MB limit of the GUI.
However, starting the SOABPM server in the OTN SOABPM appliance would reliably crash the VM. Very nice is the state VirtualBox puts the VM in: the "Guru Meditation" mode.
Well, I would not call myself a VirtualBox guru, so I won't meditate on why the VM didn't feel as comfortable as it should. So: power off the machine.

It turns out that the 1500 MB limit is not as arbitrary as I thought. In the issue ticket here I read that the VMM process maps the complete VM memory into its own process address space. On 32-bit Windows a process has at most 2 GB of address space (the other 2 GB is reserved for the kernel). Since address space is also needed for the VMM process itself, the video memory, etc., the limit for the VM is set to 1500 MB.
I would think that the limit is somewhat on the safe side; possibly something like 1750 MB would work as well.

Since I'm a pretty stubborn guy, I'll try 2000 MB, if only to see the "Guru Meditation" state again... Then I'll try some safer values.
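
For reference, this is the kind of edit involved (the post below explains it in detail): in the machine's settings .xml you raise the ramsize attribute by hand, here sketched with the 2000 MB I'm about to try:

  <!-- machine settings .xml; ramsize is in MB and can be edited beyond the
       1500 MB GUI limit while VirtualBox is not running -->
  <memory pagefusion="false" ramsize="2000"/>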

VirtualBox on Windows XP

Today I installed VirtualBox 3.2.12 on a Windows XP machine. No, not 4.0, since 3.2.12 was the version I happened to have with me on my external HD.

I imported the Technet SOABPM appliance, but I found that VirtualBox places the disks and machines in a hidden ".VirtualBox" folder within my "Documents and Settings" area. Since I have a roaming profile on this machine, I don't want folders there with gigabytes of virtual disks. Logging on to Windows with a roaming profile is already slow enough without this.

So I moved the ".VirtualBox" folder to "C:\Data\VirtualBox". After that, VirtualBox won't find your machine anymore. To solve that, a few things need to be done.

First, change the default paths. In VirtualBox go to File/Preferences:


There you can change the paths to "C:\Data\VirtualMachines\VirtualBox\HardDisks" and "C:\Data\VirtualMachines\VirtualBox\Machines". I differentiate my machines with a VirtualBox subfolder, since I also use VMware Player.
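
For reference: these defaults also end up in the global settings file (".VirtualBox\VirtualBox.xml" in your profile). The snippet below is a sketch from memory, so treat the exact attribute names as an assumption and prefer the Preferences dialog:

  <SystemProperties defaultMachineFolder="C:\Data\VirtualMachines\VirtualBox\Machines"
                    defaultHardDiskFolder="C:\Data\VirtualMachines\VirtualBox\HardDisks"/>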

Then you can create a new VirtualBox machine based on the existing disks. To get the new machine to have the same name as the old one, I backed up the existing machine directory ("c:\Data\VirtualMachines\VirtualBox\Machines\vbox-oel5u4-soabpm-11gr1ps2-bp1-otn\").

Then I created the machine with the "do not think for yourself, we'll do that later" default settings.
After creating the machine with the two disks, I did a file compare of the old and new "vbox-oel5u4-soabpm-11gr1ps2-bp1-otn.xml" files in the machine folder.

I copied and pasted the differing settings using the file-compare tool of Total Commander in edit mode:
  • Copy the <description> node
  • <memory pagefusion="false" ramsize="2500"/>
    Under Windows XP (with 4 GB, effectively 3.5 GB of usable memory) VirtualBox does not allow a setting higher than 1500 MB. But editing it at file level lets you raise it beyond that limit. The appliance was made with 2048 MB, which is pretty little for SOA Suite 11g. But so as not to take too much from Windows, I gave it 2500 MB.
  • Copy the <guestproperties> node
  • Important: change the SATA controller to: <storagecontroller name="SCSI Controller" portcount="16" type="LsiLogic" usehostiocache="true"/>
  • I changed some other hardware settings from false to true based on the file compare, e.g. HardwareVirtEx.
Then it turned out that the changes were not reflected in VirtualBox. A refresh did not help. To force VirtualBox to reread the settings, I killed the VirtualBox service with Process Explorer:
That did the trick. I now have it running with 2500 MB!

Monday, 14 February 2011

JDeveloper 11g SOA & BPM Extensions

If you want to use JDeveloper 11g for SOA and/or BPM development, you need to download the SOA JDeveloper extension for the SOA Composite Designer and the BPM JDeveloper extension for BPM Studio.

All the documentation you'll find explains how to download them using JDeveloper's Help/Update utility. This might be neat and user-friendly for updating one JDeveloper instance where you have fast internet access, but it is not so handy if you have to provide these extensions to multiple JDeveloper instances, for a course for example. Then it is convenient to be able to download them once and provide them on a USB stick.

I found these files on this JDeveloper extension page (just using Google).
So I downloaded the latest versions to my hard disk at home, where I have good internet access. I hope this page stays online indefinitely.

Tuesday, 1 February 2011

SQL Developer Data Modeler 3.0 available

I just read on Sue Harper's blog that SQL Developer Data Modeler 3.0 went production. It's downloadable from its OTN page, and I'm happy to see it's a free download. At the end of 2008 I used the beta version to model a data model for a project I was doing then. I found it a very nice tool, a good replacement for the ERD/data-modelling capabilities of Designer. There were a few misses, and the main ones for me were:
  • Generation of PL/SQL packages and functions was not possible
  • Subversion support was poor: when you save a project, it recreates the directories in which it saves the XML artifacts. Recreating a directory means removing the .svn folders, which breaks your SVN working copy.
In the new-features list I saw promising statements that these should be solved now. You should be able to generate packages and functions, and there is even Subversion integration (not just support): you can see pending changes, compare models, and it should "recognize versioned designs". Well, isn't that great?

Since it's free and it's a really good modelling tool, I think it's a must-use on every project that uses a data model. Haven't used it yet? Reverse-engineer your ERD by importing your data model from the database. It gives newcomers on your project a quick insight into the data model.

For an Early Adopter release of SQL Developer 3.0, which includes the Data Modeler, see also the SQL Developer OTN page.

Monday, 24 January 2011

Change HTTP port in Oracle XE

For my current project I had to install Tomcat as a J2EE server in Eclipse. It would not start, since the Oracle XE database had port 8080 in use.

I found the method to change the ports in XE here. From that example I created my own little script:
set serveroutput on

select dbms_xdb.gethttpport as "HTTP-Port"
,      dbms_xdb.getftpport as "FTP-Port" 
from dual;

declare
  new_port varchar2(4):='8085';
begin

 dbms_output.put_line('Change HTTP Port to '||new_port);
 dbms_xdb.sethttpport(new_port);
end;
/

select dbms_xdb.gethttpport as "HTTP-Port"
,      dbms_xdb.getftpport as "FTP-Port" 
from dual; 
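
If something also clashes with the FTP port, the same approach works there; a small hedged variant using dbms_xdb.setftpport, with an arbitrary example port:

declare
  new_ftp_port number := 2121; -- example port, pick what suits your environment
begin
  dbms_output.put_line('Change FTP Port to '||new_ftp_port);
  dbms_xdb.setftpport(new_ftp_port);
end;
/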


In good old Dutch: "Mucho Plezieros" with it...

Thursday, 9 December 2010

What to do to expose EBS services as a Webservice

I got a comment (in Dutch) on my article about the EBS Adapter.
Since giving short answers to simple questions is not one of my strengths, I'll answer with a blog post.


From the article you can gather that I'm not too enthusiastic about the EBS Adapter. The reality is often more nuanced than stated; you should evaluate the EBS Adapter specifically for your own situation.

Two of the main reasons to use the EBS Adapter are:
  • It sets the application context for you when connecting to the EBS instance
  • The Adapter Wizard enables you to introspect the available interfaces.
If these reasons do not apply to you, for instance because you set the application context yourself for whatever reason and/or you use custom PL/SQL procedures, then there might be too little left to justify using it.

But what would I do if I needed to expose a PL/SQL procedure as a webservice from EBS?

EBS 11.5.10 indeed still uses JServ. From release 12 onwards OC4J is used, at least initially the 10.1.2 version (I don't know whether later releases are on 10.1.3).
So to start with, I'd use at least a managed OC4J 10.1.3.x application server, either single node or clustered; so not a standalone OC4J.

Then it mainly depends on whether you can use SOA Suite. If you can, I would create a BPEL process, an Oracle ESB service or (in the case of SOA Suite 11g) an OSB service, based on the SOA Suite database adapter. Then arrange for setting the application context in the implementation of the package in which you have put the PL/SQL procedure.
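
To illustrate what that could look like, here is a minimal sketch of such a package body; the package, procedure and the user/responsibility ids are hypothetical, but fnd_global.apps_initialize is the standard EBS call to set the application context:

create or replace package body xxx_my_service is  -- hypothetical custom package
  procedure process_order(p_order_id in number) is
  begin
    -- Set the EBS application context before doing any work;
    -- the ids below are examples and are environment specific.
    fnd_global.apps_initialize(user_id      => 1318
                              ,resp_id      => 51411
                              ,resp_appl_id => 660);
    -- ... call the actual EBS API or custom logic here ...
    null;
  end process_order;
end xxx_my_service;
/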

If you can't use SOA Suite, you could generate a webservice from JDeveloper based on the PL/SQL procedure and deploy it as a WAR or EAR file to the application server. The main disadvantage of having JDeveloper generate the webservice is that you can't influence the way the generated code calls the PL/SQL procedure. So I think I would create a standalone Java application that uses JNDI to get the JDBC connection, code the call to the PL/SQL procedure in that Java application, and test it standalone. If that works, create a webservice on top of that application. Doing so, you have separated the technical code that does the job (calling the PL/SQL procedure) from the actual webservice. For an example of how to use JNDI in standalone applications that also have to run on an application server, see this article.
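
A minimal sketch of such a standalone class, assuming a datasource registered under the (hypothetical) JNDI name "jdbc/ebsDS" and the hypothetical procedure from the package above:

import java.sql.CallableStatement;
import java.sql.Connection;
import javax.naming.InitialContext;
import javax.sql.DataSource;

public class OrderService {

    // Look up the datasource via JNDI so the same code runs standalone
    // (with a locally configured InitialContext) and on the application server.
    public void processOrder(int orderId) throws Exception {
        InitialContext ctx = new InitialContext();
        DataSource ds = (DataSource) ctx.lookup("jdbc/ebsDS"); // hypothetical JNDI name
        Connection conn = ds.getConnection();
        try {
            // Call the (hypothetical) PL/SQL procedure that does the actual work.
            CallableStatement stmt = conn.prepareCall("{ call xxx_my_service.process_order(?) }");
            stmt.setInt(1, orderId);
            stmt.execute();
            stmt.close();
        } finally {
            conn.close();
        }
    }
}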

For creating the webservice itself you also have two choices:
  • Let JDeveloper generate the code for you. But then JDeveloper generates the WSDL and you have very little (next to no) influence on what it looks like, unless you generate the webservice based on an existing WSDL.
  • Use a SOAP stack that supports annotations (like Sun's GlassFish Metro). Using annotations you have a great deal of influence on what the generated WSDL looks like; see for instance this article. Note that in this case the WSDL is generated at startup of the webservice application in the application server. A minimal annotated sketch follows below.
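
By way of illustration, a hedged JAX-WS-style sketch that exposes the class from the previous example; the service and operation names are made up, and in practice you would tune the annotations (target namespace, parameter names, binding style) to shape the WSDL:

import javax.jws.WebMethod;
import javax.jws.WebParam;
import javax.jws.WebService;

// Standard JAX-WS annotations (supported by Metro) drive the generated WSDL.
@WebService(serviceName = "OrderWebService",
            targetNamespace = "http://example.com/orders") // hypothetical namespace
public class OrderWebService {

    @WebMethod(operationName = "processOrder")
    public String processOrder(@WebParam(name = "orderId") int orderId) {
        try {
            // Delegate to the plain Java class that calls the PL/SQL procedure.
            new OrderService().processOrder(orderId);
            return "OK";
        } catch (Exception e) {
            return "ERROR: " + e.getMessage();
        }
    }
}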

So my preferred way to go is to either use SOA Suite/OSB or create an annotation-based webservice on top of a standalone Java app that calls my procedure.

Tuesday, 7 December 2010

Reinvoke BPEL Process

Back in 2008 I wrote an article on creating a BPEL scheduler. I later used the principle of that article in a real implementation of a scheduler. In several blog posts, amongst others by Clemens Utschig and Lucas Jellema, I read all kinds of arguments why you should or shouldn't use BPEL for scheduling.

I think, however, that BPEL is perfectly suitable for scheduling jobs, especially if you create a data model with, for instance, an APEX front-end (I used JHeadstart myself) to register the schedule metadata that determines the next schedule dates.

There are a few considerations, though. One of them is reinvoking the scheduler for a new scheduled date after performing a task. In my earlier article I used a plain invoke on the client partner link. A big disadvantage of that is that every task instance is created under the same root instance. After a few iterations, the process tree under the tree finder becomes exorbitantly big.

This is solved quite easily by replacing the invoke with a piece of embedded Java:


<bpelx:exec name="ReInvokeRH" language="Java" version="1.4"><![CDATA[//Get logger
                org.apache.commons.logging.Log log = org.apache.commons.logging.LogFactory.getLog("my.log");
                // Get singletonId
                log.info("ReInvoke-Get SingletonId.");
                String singletonId = (String)getVariableData("singletonId");
                //Construct xml message
                log.info("ReInvoke - Construct XML message.");
                String xml = "<BPELProcessProcessRequest xmlns=\"http://xmlns.oracle.com/BPELProcessProcessRequest\">\n<singletonId>"
                           + singletonId
                           + "</singletonId>\n</BPELProcessProcessRequest>";
                log.info("ReInvoke message: " + xml);
                //Get a locator and delivery service.
                log.info("ReInvoke - Get Locator.");
                try {
                  Locator locator = getLocator();
                  log.info("ReInvoke - Initiate IDeliveryService.");
                  IDeliveryService deliveryService = (IDeliveryService)locator.lookupService(IDeliveryService.SERVICE_NAME);
                  // Construct a normalized message and send to Oracle BPEL Process Manager
                  log.info("ReInvoke - Construct a normalized message and send to Oracle BPEL Process Manager.");
                  NormalizedMessage nm = new NormalizedMessage();
                  nm.addPart("payload", xml);
                  // Initiate the BPEL process
                  log.info("ReInvoke - Initiate the BPEL process.");
                  deliveryService.post("BPELProcess", "initiate", nm);
                } catch (RemoteException e) {
                  log.error(e);
                  setVariableData("invokeError", "true");
                  setVariableData("invokeErrorMessage", "ReInvokeRH-RemoteException: "+e.toString());  
                } catch (ServerException e) {
                  log.error(e);
                  setVariableData("invokeError", "true");
                  setVariableData("invokeErrorMessage", "ReInvokeRH-ServerException: "+e.toString());
                }
                log.info("ReInvoke - Return.");]]>
              </bpelx:exec>
This builds a String message, creates a NormalizedMessage from it, and posts it to the "initiate" operation of "BPELProcess" using the post method of the delivery service, which is fetched via the locator of the BPEL instance.

This way the parent-child relation is broken and the new instance runs in a new process-instance tree.
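
One practical note: the classes used in the embedded Java need to be imported at the top of the .bpel file with bpelx:exec import elements. A sketch assuming the standard 10g client packages (verify the exact package names against your installation):

<bpelx:exec import="java.rmi.RemoteException"/>
<bpelx:exec import="com.oracle.bpel.client.Locator"/>
<bpelx:exec import="com.oracle.bpel.client.NormalizedMessage"/>
<bpelx:exec import="com.oracle.bpel.client.ServerException"/>
<bpelx:exec import="com.oracle.bpel.client.delivery.IDeliveryService"/>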

Another consideration is that you might want to "listen" for external messages in order to abort or reschedule. This can be done in several ways, for example by doing a correlated receive (using a correlation set on a field in the request message, like the "singletonId" in the reinvoke example) in a parallel flow.

The problem is that on a normal reinvoke this receive has to be released. If not, the parallel flow will not end and/or the correlation set will not be freed, and in the new instance you might get a runtime message that there is already a receive on the same correlation id. In another project I solved a similar problem by calling the process instance with an abort message from within the process instance before doing a reinvoke. This is quite costly in terms of performance, since it involves a message to the BPEL service with all its overhead (even though it's in fact a WSIF call). So recently I found myself a slightly smarter solution.

What I basically did was wrap the parallel flow with the receive and the task-execution logic in a scope.
Instead of ending the branches normally, I throw a "Finish" or "Abort" fault:

<throw name="Throw_Finish" faultName="client:Finish" faultVariable="ReceiveInput_initiate_InputVariable"/> 

This can then be caught in the surrounding scope:
<catch faultName="client:Finish" faultVariable="ReceiveInput_initiate_InputVariable">

It might look strange at first sight to end your normal (or abort) flow using a business fault; ending your flow is not an exception, is it?
But it releases your receive activities and the correlation id they hold, so a reinvoke on the same correlation id can succeed.
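
Put together, the structure looks roughly like this. It is a minimal sketch with made-up activity, operation and correlation-set names, not the literal process definition:

<scope name="MainScope">
  <faultHandlers>
    <catch faultName="client:Finish" faultVariable="ReceiveInput_initiate_InputVariable">
      <!-- normal completion: reinvoke via the embedded Java shown above -->
      <empty name="Finished"/>
    </catch>
    <catch faultName="client:Abort" faultVariable="AbortMessageVariable">
      <!-- abort requested: end without reinvoking -->
      <empty name="Aborted"/>
    </catch>
  </faultHandlers>
  <flow name="WaitOrWork">
    <sequence name="TaskBranch">
      <!-- wait until the scheduled date, perform the task, then finish -->
      <throw name="Throw_Finish" faultName="client:Finish" faultVariable="ReceiveInput_initiate_InputVariable"/>
    </sequence>
    <sequence name="AbortBranch">
      <!-- correlated receive, e.g. on the singletonId field -->
      <receive name="Receive_Abort" partnerLink="client" portType="client:BPELProcess"
               operation="abort" variable="AbortMessageVariable">
        <correlations>
          <correlation set="SingletonIdSet" initiate="no"/>
        </correlations>
      </receive>
      <throw name="Throw_Abort" faultName="client:Abort" faultVariable="AbortMessageVariable"/>
    </sequence>
  </flow>
</scope>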