Thursday, 9 December 2010

What to do to expose EBS services as a Webservice

I got a comment (in Dutch) on my article about the EBS Adapter.
Since giving short answers to simple questions is not one of my strengths, I'll answer with a blog post.

From that article you can tell that I'm not too enthusiastic about the EBS Adapter. But reality is often more nuanced than stated: you should weigh the EBS Adapter against your own specific situation.

Two of the main reasons to use the EBS Adapter are:
  • It sets the application context for you when connecting to the EBS instance.
  • The Adapter Wizard enables you to introspect the available interfaces.
If these reasons do not apply to you, for instance because you set the application context yourself for whatever reason and/or you use custom pl/sql procedures, then there might be too little left to justify using it.

But what would I do if I needed to expose a pl/sql procedure as a webservice from EBS?

EBS 11.5.10 indeed just works with Apache JServ. From release 12 onwards OC4J is used, at least initially in the 10.1.2 version (I don't know if later releases are on 10.1.3).
So to start with, I'd use at least a managed OC4J 10.1.3.x application server, either single node or clustered. So not a standalone OC4J.

Then it mainly depends on whether you can use SoaSuite. If you can, I would create a BPEL process, an Oracle ESB or (in the case of SoaSuite 11g) OSB service, based on the SoaSuite database adapter. Then arrange for setting the application context in the implementation block of the package in which you have put the pl/sql procedure.

If you can't use SoaSuite, you could generate a webservice from JDeveloper based on the pl/sql procedure, and deploy that as a WAR or EAR file to the application server. The main disadvantage of having JDeveloper generate the webservice is that you can't influence the way the generated code calls the pl/sql procedure. So I think I would create a standalone java application that uses JNDI to get the jdbc connection, and code the call of the pl/sql procedure in that java application. Test it standalone. If that works, create a webservice on top of that application. Doing so, you have separated the technical code that does the job (calling the pl/sql procedure) from the actual webservice. For an example of how to use JNDI in standalone applications that also have to run on an application server, see this article.
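A minimal sketch of that separation in Java. The class below is illustrative only: the JNDI lookup is left to the caller (it hands in a DataSource), and the package and procedure name apps.xx_my_pkg.my_proc are made-up examples, not an EBS API.

```java
import java.sql.CallableStatement;
import java.sql.Connection;
import java.sql.SQLException;
import java.sql.Types;
import javax.sql.DataSource;

// Plain java class that does the technical job: calling the pl/sql procedure.
// The webservice layer can later simply delegate to this class.
public class EbsProcedureClient {

    private final DataSource dataSource;

    // The DataSource is obtained via JNDI by the caller (standalone or on the AS).
    public EbsProcedureClient(DataSource dataSource) {
        this.dataSource = dataSource;
    }

    // Build the JDBC call syntax for a procedure with the given number of parameters.
    static String buildCall(String procedure, int paramCount) {
        StringBuilder call = new StringBuilder("{call ").append(procedure).append("(");
        for (int i = 0; i < paramCount; i++) {
            call.append(i == 0 ? "?" : ", ?");
        }
        return call.append(")}").toString();
    }

    // Call the (hypothetical) procedure with one IN and one OUT parameter.
    public String call(String input) throws SQLException {
        Connection con = dataSource.getConnection();
        try {
            CallableStatement cs = con.prepareCall(buildCall("apps.xx_my_pkg.my_proc", 2));
            cs.setString(1, input);
            cs.registerOutParameter(2, Types.VARCHAR);
            cs.execute();
            return cs.getString(2);
        } finally {
            con.close();
        }
    }
}
```

Because the class only depends on a DataSource, you can test it standalone (with a locally created DataSource) and later deploy it unchanged behind the webservice.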

For creating the webservice itself you also have two choices:
  • Let JDeveloper generate the code for you. But then JDeveloper generates the wsdl and you have very little (next to nothing) influence on what it looks like. Unless you generate the webservice based on the wsdl.
  • Use a soap stack that supports annotations (like Sun Glassfish Metro). Using annotations you have a great deal of influence on what the generated wsdl looks like. See for instance this article. Note that in that case the wsdl is generated at startup of the webservice application in the application server.

So my preferred way to go is to either use SoaSuite/OSB or create an annotation-based webservice on a standalone java app that calls my procedure.

Tuesday, 7 December 2010

Reinvoke BPEL Process

Back in 2008 I wrote an article on creating a BPEL scheduler. I later used the principle of that article in a real implementation of a scheduler. In several blog posts, amongst others by Clemens Utschig and Lucas Jellema, I read all kinds of arguments why you should or shouldn't use BPEL for scheduling.

I think, however, that BPEL is perfectly suitable for scheduling jobs. Especially if you create a datamodel with, for instance, an Apex front-end (I used JHeadstart myself) to register the schedule metadata that determines the next schedule dates.

There are a few considerations though. One of them is reinvoking the scheduler for a new scheduled date after performing a task. In my earlier article I used a plain invoke on the client partnerlink. A big disadvantage of that is that every task instance is created under the same root instance. After a few iterations the process tree under the tree finder becomes exorbitantly big.

This is solved quite easily by replacing the invoke by a piece of embedded java:

<bpelx:exec name="ReInvokeRH" language="Java" version="1.4"><![CDATA[//Get logger
org.apache.commons.logging.Log log = org.apache.commons.logging.LogFactory.getLog("my.log");
// Get singletonId
log.info("ReInvoke - Get SingletonId.");
String singletonId = (String)getVariableData("singletonId");
//Construct xml message
log.info("ReInvoke - Construct XML message.");
String xml = "<BPELProcessProcessRequest xmlns=\" \">\n<singletonId>"
           + singletonId
           + "</singletonId>\n</BPELProcessProcessRequest>";
log.info("ReInvoke message: " + xml);
//Get a locator and delivery service.
log.info("ReInvoke - Get Locator.");
try {
  Locator locator = getLocator();
  log.info("ReInvoke - Initiate IDeliveryService.");
  IDeliveryService deliveryService = (IDeliveryService)locator.lookupService(IDeliveryService.SERVICE_NAME);
  // Construct a normalized message and send to Oracle BPEL Process Manager
  log.info("ReInvoke - Construct a normalized message and send to Oracle BPEL Process Manager.");
  NormalizedMessage nm = new NormalizedMessage();
  nm.addPart("payload", xml);
  // Initiate the BPEL process
  log.info("ReInvoke - Initiate the BPEL process.");
  deliveryService.post("BPELProcess", "initiate", nm);
} catch (RemoteException e) {
  setVariableData("invokeError", "true");
  setVariableData("invokeErrorMessage", "ReInvokeRH-RemoteException: " + e.toString());
} catch (ServerException e) {
  setVariableData("invokeError", "true");
  setVariableData("invokeErrorMessage", "ReInvokeRH-ServerException: " + e.toString());
}
log.info("ReInvoke - Return.");]]></bpelx:exec>
This builds a String message, creates a NormalizedMessage from it, and posts it to the "initiate" operation of the "BPELProcess" process using the post method of the IDeliveryService that is fetched via the Locator of the bpel instance.

Doing so, the parent-child relation is broken and the new instance runs in a new process-instance tree.

Another consideration is that you might want to "listen" to external messages to do an abort or a reschedule. This can be done in several ways, for example by doing a correlated receive (using a correlation set on a field in the request message, like the "singletonId" in the reinvoke example) in a parallel flow.

The problem is that on a normal reinvoke this receive has to be released. If not, the parallel flow will not end and/or the correlation set will not be freed. In the new instance you might get a runtime message that there is already a receive on the same correlation id. In another project I solved a similar problem by calling the process instance with an abort message from within the process instance, before doing a reinvoke. This is quite costly in terms of performance, since it involves a message to the bpel service with all its overhead (even if it is in fact a WSIF call). But recently I came up with a slightly smarter solution.

What I basically did was wrap the parallel flow, with the receive and the task-execution logic, in a scope.
Instead of ending the branches normally, I throw a "finish" or "abort" exception:

<throw name="Throw_Finish" faultName="client:Finish" faultVariable="ReceiveInput_initiate_InputVariable"/> 

This can then be caught in the surrounding scope:
<catch faultName="client:Finish" faultVariable="ReceiveInput_initiate_InputVariable">

This might look strange at first sight: ending your normal (or abort) flow using a business exception. Ending your flow is not an exception, is it?
But it will release your receive-activities and the correlation id they hold for a reinvoke on the same correlation-id.

Tuesday, 2 November 2010

B2B Queue to JMS

Lately I was asked to help with routing messages from an object-type based AQ queue to JMS.
The thing is that JMS works with text, so you have to transform the object type to text.
When the object type is one of your own, you could extend it with a method "toXML" that renders an XML message based on the attributes of the object.
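To illustrate the "toXML" idea (sketched here in Java; a pl/sql member method on the object type would follow the same pattern), you simply render each attribute as an element. The class and element names below are made up:

```java
// Illustration of the "toXML" idea: render an object's attributes as XML text.
public class Message {

    private final String msgId;
    private final String fromParty;
    private final String payload;

    public Message(String msgId, String fromParty, String payload) {
        this.msgId = msgId;
        this.fromParty = fromParty;
        this.payload = payload;
    }

    // Serialize the attributes as elements of one XML document.
    public String toXml() {
        StringBuilder xml = new StringBuilder("<message>");
        appendElement(xml, "msgId", msgId);
        appendElement(xml, "fromParty", fromParty);
        appendElement(xml, "payload", payload);
        return xml.append("</message>").toString();
    }

    // Emit one element; a null attribute becomes an empty element.
    private static void appendElement(StringBuilder xml, String tag, String value) {
        xml.append("<").append(tag).append(">")
           .append(value == null ? "" : value)
           .append("</").append(tag).append(">");
    }
}
```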

In this case it was about an Oracle Integration B2B AQ queue, which is based on the B2B object "IP_MESSAGE_TYPE".

It turns out to be not too hard to translate the object type to JMS. I created a package for it, which I provide for download here.

You can test it with the following code:
  declare
    ip_message ip_message_type;
    payload    clob;
    result     sys.aq$_jms_text_message;
  begin
    payload    := def_b2b.varchar_to_clob('Jet, Teun, Vuur, Schapen');
    ip_message := ip_message_type(MSG_ID           => 'Aap'
                                 ,INREPLYTO_MSG_ID => 'noot'
                                 ,FROM_PARTY       => 'Mies'
                                 ,TO_PARTY         => 'Zus'
                                 ,ACTION_NAME      => 'Lezen'
                                 ,DOCTYPE_NAME     => 'Leesplankje'
                                 ,DOCTYPE_REVISION => '1.0'
                                 ,MSG_TYPE         => 1
                                 ,PAYLOAD          => payload
                                 ,ATTACHMENT       => null);
    -- Call the function
    result := def_b2b.ip_message2jms_msg(ip_message => ip_message);
    result.get_text(:msg);
  end;

As you can see, a jms-accessible AQ queue has a special system object type: 'sys.aq$_jms_text_message'. There are several others, for different kinds of jms queues or topics. Also note that the object types differ between Oracle 10g/11g Enterprise Edition (or equivalent) and Oracle XE: in XE you won't find a 'construct' method. You could try the solution of Peter Ebell for that.

Another thing is that the guys who asked me for help had to do the enqueue of the message on the JMS queue based on the enqueue on the source B2B queue.
From Oracle they got permission to use a trigger on the queue table. To begin with, they used a Before Row Insert trigger. Besides the fact that triggers on queue tables are not supported and certainly not the way to go, they encountered a problem with it. And that lies in the fact that the payload attribute is a CLOB. I always found the way Oracle handles CLOBs at least "a little remarkable": on insert you create a row with an empty CLOB and then query it for update; in the queried row you upload the content to the CLOB column. Since an AQ queue is based on a table, it works essentially the same way. So on Before Row Insert the payload attribute is still empty. They solved it by using an After Delete trigger (fired when the message is consumed by the subscribing process).

The way to go is actually to register a notification service on the queue using code like:
  declare
    lc_reg_info      SYS.AQ$_REG_INFO;
    lc_reg_info_list SYS.AQ$_REG_INFO_LIST;
  begin
    lc_reg_info := SYS.AQ$_REG_INFO('B2B.IP_IN_QUEUE:<consumer>'
                                   ,DBMS_AQ.NAMESPACE_AQ
                                   ,'plsql://B2B.HANDLE_INBOUND_NOTIFICATION'
                                   ,HEXTORAW('FF'));
    lc_reg_info_list := SYS.AQ$_REG_INFO_LIST(lc_reg_info);
    dbms_aq.register(lc_reg_info_list, 1);
  end;

Such a plsql notification function is a procedure that is required to have a particular signature:
PROCEDURE handle_inbound_notification(context  IN RAW
                                     ,reginfo  IN SYS.AQ$_REG_INFO
                                     ,descr    IN SYS.AQ$_DESCRIPTOR
                                     ,payload  IN RAW
                                     ,payloadl IN NUMBER)

These parameters provide you with the data to fetch/dequeue the message the procedure is called for. You won't get the message itself: you have to dequeue it explicitly. See the package for an example implementation of this procedure.

You should perform the register for every queue consumer that you want to reroute the messages for. But it is not too hard to put this in a parameterized procedure and call it based on a query that fetches the consumers from either the dictionary or (better) the B2B repository. In fact this code is extracted from such a construct. But it was a little too much work (I already put a reasonable amount of time in the package) to anonymize it and make it more generic.
If you need more help with it, I could of course provide some consultancy.

Monday, 11 October 2010

SQL Datamodeler EA 3.0

The EA version of SQL Data Modeler is available.


Finally it should be possible to generate packages.....

Monday, 27 September 2010

Templates in BPEL Transforms

Multiple times I have encountered that the transformation tool of the BPEL designer has difficulty coping with xpath and xslt functions in the stylesheet that it does not 'know'.
We have, for example, some custom xslt functions, and I use some xpath 2.0 functions. If I use them and deploy the process, they'll work. But to the mapper tool the transform is invalid, and it will not show the transformation map.

This is especially true for BPEL 10.1.2, which is still used at my current customer (there is an upgrade-to-11g project ongoing).

Very annoying, because we have some large xsl's that use a large number of custom xslt functions to do cached dvm lookups.

But today I found a very nice workaround. If you hide those functions in a custom user template, the transformation map has no difficulties with them.

I now have the following user-defined template (put at the bottom of the xsl stylesheet):

<xsl:template name="TransformLandCode">
<xsl:param name="landCode"/>
<xsl:comment>Transform Landcode using cache:lookup</xsl:comment>
<xsl:variable name="result" select="cache:lookup($landCode,&quot;LandCodeDomain_<FromSystem>To<ToSystem>&quot;)"/>
<xsl:value-of select="$result"/>
</xsl:template>

So where I had something like:
<xsl:value-of select="cache:lookup(/ns1:rootElement/ns1:subElement/ns1:LandCode,&quot;LandCodeDomain_&lt;FromSystem&gt;To&lt;ToSystem&gt;&quot;)"/>

I now call the template:
<xsl:call-template name="TransformLandCode">
<xsl:with-param name="landCode" select="/ns1:rootElement/ns1:subElement/ns1:LandCode"/>
</xsl:call-template>

A nice touch, by the way, is that the mapper also allows adding the call statement into the map using drag and drop. The parameters can also be filled by dragging the lines.

And of course you can add code-snippets for it.

Wednesday, 22 September 2010

Headache from postfix

Earlier I wrote a post on how postfix could be used in a mail integration solution. This was the result of a project I'm doing that uses postfix to catch mail and transfer it, via a bash shell script, through MQ to BPEL PM using the MQ adapter.

I added a logging construction in the script that gives me a nice mechanism to do some logging:
#Logging variables
LOG_ENABLED=$TRUE
#Check log dir and create it if it does not exist.
if [ "$LOG_ENABLED" -eq $TRUE ]; then
  if [ ! -d $LOG_DIR ]; then
    mkdir $LOG_DIR
  fi
  #log a separation line
  echo "----------------------------------------" >>$LOG_FILENAME;
fi
#Function to display usage
usage ()
{
  SCRIPT_PARAMETERS=" -q queuename -s sender -r receiver";
  USAGE="Usage: `basename $0` $SCRIPT_PARAMETERS";
  echo $USAGE;
}
#Function to log a line
log ()
{
  if [ "$LOG_ENABLED" -eq $TRUE ]; then
    TEXT="$1 ""$2"
    echo $TEXT >>$LOG_FILENAME;
  fi
}

#Do the rest of the script logic

It works nicely, but the solution described in the earlier post only works for mails with one recipient. This was caused by the 'D' flag in the transport entry.
The 'D' flag prepends the message with a header line containing the recipient, which requires that the message is delivered to one recipient at a time.
# 2010-02-10, M. van den Akker: Setup transport routescript for passing message to a bash-script
routescript   unix  -       n       n       -       -       pipe
flags=FDq. user=smtpuser argv=/bin/bash -c /home/smtpuser/ -s $sender -r $recipient -q $nexthop
Besides removing this flag, the argument parsing of the script should also be adapted, to support multiple recipients:
until [ -z "$1" ]
do
  ARG_NR=`expr $ARG_NR + 1`
  log "Argument $ARG_NR: " $1
  case $1 in
    "-s") PARAM=$1;;
    "-r") PARAM=$1;;
    "-q") PARAM=$1;;
    * )  case $PARAM in
            "-s") SENDER=$1;
                  log "Sender: " $SENDER;;
            "-r") RECEIVER=$1;
                  log "Receiver: " $RECEIVER;
                  if  [ -z "$RECEIVER_LIST" ]
                  then
                    RECEIVER_LIST=$RECEIVER;
                  else
                    RECEIVER_LIST="$RECEIVER_LIST $RECEIVER";
                  fi
                  log "Receiverlist: " $RECEIVER_LIST;;
            "-q") FQN_QUEUE_NAME=$1;
                  log "Queue: " $FQN_QUEUE_NAME;;
            * ) usage;
                exit $E_WRONG_ARGS;;
          esac;;
  esac
  shift 1;
done
#Log Parameters.

But implementing this, I got the strange behaviour that postfix did call my script, but the script did not get any arguments! Logging like the following gave me 0 arguments:
# Check Arguments
log "Number of arguments:  $#"
log "Arguments:  $*"

Yesterday and this morning I tried and figured and thought until my brain nearly overheated. But since all my configs seemed alright, even the versions I knew had worked, it finally struck me that it had to do with the command line used to call the script.

And I finally had it. It was due to the nasty '-c' option after '/bin/bash'!

It probably got there through an example I used. It was removed in the test environments, but apparently not in my examples and documentation, so it also made it into my earlier post. The '-c' option makes bash treat the next argument as a command string and run it in a new bash session: the arguments end up in that 'parent' session, but not in the 'child' session that runs the script. Removing the '-c' option does the trick.
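The effect of that '-c' option can be demonstrated outside postfix: with '-c', the remaining arguments become the positional parameters $0, $1, ... of the command string itself, so a script named inside that string receives no arguments at all. A small demonstration (assuming a system with bash on the path), sketched with Java's ProcessBuilder:

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;

public class BashDashCDemo {

    // Run a command and return the first line of its output.
    static String run(String... cmd) throws Exception {
        Process p = new ProcessBuilder(cmd).redirectErrorStream(true).start();
        BufferedReader reader = new BufferedReader(new InputStreamReader(p.getInputStream()));
        String line = reader.readLine();
        p.waitFor();
        return line;
    }

    public static void main(String[] args) throws Exception {
        // bash -c 'command-string' name arg1 arg2:
        // 'name' becomes $0 and arg1/arg2 become $1/$2 of the command string;
        // a script named inside the command string would receive no arguments.
        System.out.println(run("bash", "-c", "echo $0:$#", "script", "-s", "foo"));
        // prints: script:2
    }
}
```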

Furthermore, I moved the $recipient argument to the end, since it can expand to multiple arguments. Although the script would not have problems with it, I found it safer.

So the resulting (correct) transport entry should be:
routemq   unix  -       n       n       -       -       pipe
flags=Fq. user=smtpbridge argv=/bin/bash  /home/smtpbridge/ -q $nexthop -s $sender -r $recipient

Tuesday, 31 August 2010

Index-of, replace and key-value parsing in XSLT

Lately I had to change a few BPEL processes in a way that I had to pass multiple parameters in one field. Instead of a complete base64-encoded data object, I had to pass a key to that object, in a way that let me determine that the object was in fact a key-value pair. In that case the field actually contains two key-value pairs. I figured that if I would code the key-value pair as something like $KEY="AABBCCDDEEFF", I could search for $KEY=", and the text after that double quote and before the next one would then be the value.

The problem with XSLT is that you have functions like substring-before(), substring-after() and the positional substring(), where the from-position and length can be passed as numeric values. But for the latter function you need to determine the start and end position of the substring to extract from the input string, and XSLT 1.0 does not provide something like the pl/sql instr() or the Java indexOf(). XSLT also lacks a replace function that replaces a string within a string by a replacement string: the translate() function only does a character-by-character replace.

Fortunately you can build these functions quite easily yourself as xslt templates, using the substring-before(), substring-after() and string-length() functions. I found the examples somewhere and adapted them a little for my purpose, mainly to make them case-insensitive.

Here is my case-insensitive version of the index-of template:
<!-- index-of: find position of search within string
2010-08-31, by Martien van den Akker -->
<xsl:template name="index-of-ci">
<xsl:param name="string"/>
<xsl:param name="search"/>
<xsl:param name="startPos"/>
<xsl:variable name="searchLwr" select="xp20:lower-case($search)"/>
<xsl:variable name="work">
<xsl:choose>
<xsl:when test="string-length($startPos)&gt;0"><xsl:value-of select="substring($string,$startPos)"/></xsl:when>
<xsl:otherwise><xsl:value-of select="$string"/></xsl:otherwise>
</xsl:choose>
</xsl:variable>
<xsl:variable name="stringLwr" select="xp20:lower-case($work)"/>
<xsl:variable name="result">
<xsl:choose>
<xsl:when test="contains($stringLwr,$searchLwr)">
<xsl:variable name="stringBefore">
<xsl:value-of select="substring-before($stringLwr,$searchLwr)"/>
</xsl:variable>
<xsl:choose>
<xsl:when test="string-length($startPos)&gt;0"><xsl:value-of select="$startPos + string-length($stringBefore)"/></xsl:when>
<xsl:otherwise><xsl:value-of select="1 + string-length($stringBefore)"/></xsl:otherwise>
</xsl:choose>
</xsl:when>
<xsl:otherwise><xsl:value-of select="-1"/></xsl:otherwise>
</xsl:choose>
</xsl:variable>
<xsl:copy-of select="$result"/>
</xsl:template>

The template expects 3 parameters:
  • string: the string in which is searched
  • search: the substring that has to be searched
  • startPos: position in string where the search is started
The first thing the template does is declare a work variable. If startPos is not given, work will contain the complete input string; if startPos is given, work will contain the substring of string from startPos to the end.

Then the work and search strings are converted to lower case into new variables: stringLwr and searchLwr. These are used to test case-insensitively whether the search string is in the input. If it is, the part of the input string before the search string is determined with the substring-before() function. The string-length() of that result denotes the numeric position of the search string within the input string; this is incremented by 1, or by startPos, depending on whether startPos is filled. If the search string is not found, a negative value is returned.

The template can be called without the startPos parameter:
<xsl:call-template name="index-of-ci">
<xsl:with-param name="string" select="$input"/>
<xsl:with-param name="search" select="$keyStr"/>
</xsl:call-template>

Or with the parameter:
<xsl:call-template name="index-of-ci">
<xsl:with-param name="string" select="$input"/>
<xsl:with-param name="search" select="string('&quot;')"/>
<xsl:with-param name="startPos" select="$startIdx"/>
</xsl:call-template>
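To see exactly what the template computes, here is the same logic as a plain Java method (an illustrative analogue, not part of the stylesheet): positions are 1-based, the comparison is case-insensitive, the start position is optional, and a negative value signals "not found".

```java
public class XsltStringUtil {

    // 1-based, case-insensitive index-of, analogous to the index-of-ci template.
    // startPos <= 0 means "search from the start"; returns -1 when not found.
    public static int indexOfCi(String string, String search, int startPos) {
        // Like substring($string, $startPos): keep the tail starting at startPos.
        String work = startPos > 0 ? string.substring(startPos - 1) : string;
        int pos = work.toLowerCase().indexOf(search.toLowerCase());
        if (pos < 0) {
            return -1;
        }
        // The length of the part before the match, offset by startPos or 1.
        return (startPos > 0 ? startPos : 1) + pos;
    }
}
```

For example, indexOfCi("aXbXc", "x", 3) starts searching at position 3 and returns 4, the position of the second X.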

The next template replaces fromStr in input with toStr.
<!-- replace-ci: case insensitive replace based on strings
2010-08-31, by Martien van den Akker -->
<xsl:template name="replace-ci">
<xsl:param name="input"/>
<xsl:param name="fromStr"/>
<xsl:param name="toStr"/>
<xsl:param name="startStr"/>
<xsl:if test="string-length( $input ) &gt; 0">
<xsl:variable name="posStartStr">
<xsl:call-template name="index-of-ci">
<xsl:with-param name="string" select="$input"/>
<xsl:with-param name="search" select="$startStr"/>
</xsl:call-template>
</xsl:variable>
<xsl:variable name="startPos">
<xsl:call-template name="index-of-ci">
<xsl:with-param name="string" select="$input"/>
<xsl:with-param name="search" select="$fromStr"/>
<xsl:with-param name="startPos" select="$posStartStr"/>
</xsl:call-template>
</xsl:variable>
<xsl:variable name="inputLwr" select="xp20:lower-case($input)"/>
<xsl:variable name="startStrLwr" select="xp20:lower-case($startStr)"/>
<xsl:variable name="fromStrLwr" select="xp20:lower-case($fromStr)"/>
<xsl:choose>
<xsl:when test="contains( $inputLwr, $startStrLwr ) and contains( $inputLwr, $fromStrLwr )">
<xsl:variable name="stringBefore" select="substring($input,1,$startPos - 1)"/>
<xsl:variable name="stringAfter" select="substring($input,$startPos + string-length($fromStr))"/>
<xsl:value-of select="concat($stringBefore,$toStr)"/>
<xsl:call-template name="replace-ci">
<xsl:with-param name="input" select="$stringAfter"/>
<xsl:with-param name="fromStr" select="$fromStr"/>
<xsl:with-param name="toStr" select="$toStr"/>
<xsl:with-param name="startStr" select="$startStr"/>
</xsl:call-template>
</xsl:when>
<xsl:otherwise>
<xsl:value-of select="$input"/>
</xsl:otherwise>
</xsl:choose>
</xsl:if>
</xsl:template>

Here the position of the 'from' string is determined using the index-of-ci template described earlier.
But this is done from the position of startStr, which marks the start of the replacement search. For example, if you want to replace domain names in email addresses, you want to start the search for the domain after the at-sign ('@').
Having the start position of the 'from' string, the parts of the input before and after the 'from' string are taken using the substring() function with the computed positions.
The 'to' string is concatenated to the part before the match. The part after the match is used to call the template recursively, to search the remainder of the input string.
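The same recursive replace as an illustrative Java analogue; the startStr parameter anchors the search, as in the email-domain example above.

```java
public class ReplaceCi {

    // Case-insensitive replace of fromStr by toStr, but only when fromStr
    // occurs at or after an occurrence of startStr (e.g. a domain after '@').
    public static String replaceCi(String input, String fromStr, String toStr, String startStr) {
        if (input.isEmpty()) {
            return input;
        }
        String inputLwr = input.toLowerCase();
        int startPos = inputLwr.indexOf(startStr.toLowerCase());
        if (startPos < 0) {
            return input; // anchor not present: nothing to replace
        }
        int fromPos = inputLwr.indexOf(fromStr.toLowerCase(), startPos);
        if (fromPos < 0) {
            return input; // fromStr not present after the anchor
        }
        // Keep the part before the match, emit toStr, and recurse on the rest.
        return input.substring(0, fromPos) + toStr
             + replaceCi(input.substring(fromPos + fromStr.length()), fromStr, toStr, startStr);
    }
}
```

For example, replacing "oldco.com" by "newco.com" with startStr "@" rewrites every domain after an at-sign, regardless of case, and leaves strings without an at-sign untouched.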

Parsing keys
The following template parses a key value like $KEY="AABBCCDDEEFF"
<!-- Parse a KeyValue
2010-08-31, By Martien van den Akker -->
<xsl:template name="getKeyValue">
<xsl:param name="input"/>
<xsl:param name="key"/>
<xsl:param name="default"/>
<!-- Init variables -->
<xsl:variable name="keyStr" select="concat('$',$key,'=&quot;')"/>
<xsl:if test="string-length( $input ) &gt; 0">
<xsl:variable name="startIdxKey">
<xsl:call-template name="index-of-ci">
<xsl:with-param name="string" select="$input"/>
<xsl:with-param name="search" select="$keyStr"/>
</xsl:call-template>
</xsl:variable>
<xsl:variable name="keyLength" select="string-length($keyStr)"/>
<xsl:variable name="startIdx" select="$startIdxKey+$keyLength"/>
<xsl:variable name="endIdx">
<xsl:call-template name="index-of-ci">
<xsl:with-param name="string" select="$input"/>
<xsl:with-param name="search" select="string('&quot;')"/>
<xsl:with-param name="startPos" select="$startIdx"/>
</xsl:call-template>
</xsl:variable>
<!-- Determine value -->
<xsl:choose>
<xsl:when test="$startIdxKey&gt;=0 and $endIdx&gt;=0">
<xsl:value-of select="substring($input,$startIdx, $endIdx - $startIdx)"/>
</xsl:when>
<xsl:when test="$startIdxKey&gt;0 and $endIdx&lt;0">
<xsl:value-of select="substring($input,$startIdx)"/>
</xsl:when>
<xsl:otherwise>
<xsl:if test="$default='Y'">
<xsl:value-of select="$input"/>
</xsl:if>
</xsl:otherwise>
</xsl:choose>
</xsl:if>
</xsl:template>
The working is quite similar to the templates above; if you understand the replace template, you won't have trouble with this one. Basically the value is searched for using the index-of-ci template, with a concatenation of '$', the key and '="' as the search string. The text after that is the value, with a double quote as the end delimiter.
With this template you can search for the key anywhere in the string, even if there are multiple key-value pairs.
The template can be called like:
<xsl:call-template name="getKeyValue">
<xsl:with-param name="input" select="/ns2:aap/ns2:noot"/>
<xsl:with-param name="key" select="'KEY'"/>
<xsl:with-param name="default" select="'Y'"/>
</xsl:call-template>

The parameter 'default' is optional: if it is set to 'Y' and the KEY is not found in the input string, then the value of the input is returned as result. Otherwise nothing is returned.
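The parsing rule as an illustrative Java analogue: find $KEY=", take everything up to the next double quote, and fall back to the whole input when the default is requested.

```java
public class KeyValueParser {

    // Parse $KEY="value" out of a string; when the key is absent,
    // return the whole input if dflt is true, else an empty string.
    public static String getKeyValue(String input, String key, boolean dflt) {
        String keyStr = "$" + key + "=\"";
        // Case-insensitive search for the $KEY=" marker.
        int startIdxKey = input.toLowerCase().indexOf(keyStr.toLowerCase());
        if (startIdxKey >= 0) {
            int startIdx = startIdxKey + keyStr.length();
            int endIdx = input.indexOf('"', startIdx);
            // Without a closing quote, take the rest of the string.
            return endIdx >= 0 ? input.substring(startIdx, endIdx) : input.substring(startIdx);
        }
        return dflt ? input : "";
    }
}
```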

I found these templates very helpful. Using them you can do almost any string manipulation in XSLT. To me they also serve as examples for coping with more advanced XSLT challenges.

Thursday, 1 July 2010

VMWare Player vs. Server, vs. Virtual Box

Virtualization is great. It is very handy to have your complex installations of server software in a virtual machine. I have done it for years, and have only some client tools that need a complex installation on my host: JDeveloper, SQLDeveloper or Pl/Sql Developer, for instance. It enables you to reinstall your laptop and be up and running right after installing your virtualization product, provided that you back up your VM's of course. I also have a Windows-based VM with some tools for administration purposes.

I started years ago with VMware Workstation 3 to 5.x. But then VMware Server came out, and back then Workstation did not provide much extra above Server. And although I had a demo license for Workstation (which ended), Server was free. Actually the only missing feature in Server I encountered was 'Shared Folders', but I solved that using (S)FTP; under Windows it is very convenient to have a FileZilla Server. You could also solve it by defining a share in Windows, but I found that that gives some problems when you have multiple copies of a VM running on the same network: then you have to change your host name, etc. And it is problematic if you have multiple shares open in multiple VM's while your Windows account has a regularly changed password. So shared folders as an alternative to FTP are not necessary, but very convenient.

Shortly after VMware Server, VMware Player came out. But until recently it was not possible to create new VM's in Player, and that is a 'must have' for me.

VMware Server 2.0 was a large step for me, since I found it astonishing that the footprint grew by a factor of 5 compared to 1.0! But since VMware did not keep 1.0 up to date with the Linux kernel changes, I did the upgrade. By then I was a pretty satisfied VMware Server user.

But there are two problems now with VMware Server 2.x:
  • VMware Server 2.x does not keep up with the Linux kernel changes either. I upgraded to OpenSuse 11.2, which worked very well, but stepped back to 11.1 since VMware Server did not install with that kernel.
  • Since FireFox 3.6 the VMware console does not work anymore. This is solvable by installing a separate FF 3.5.9 for VMware usage, but that is very inconvenient.
Lately I tried VirtualBox again because of the Firefox problems, and I reported being very enthusiastic about it. It is indeed a very good product. But unfortunately most of my VM's are VMware-based. It is possible to run a VM in VirtualBox based on VMware files, but you have to create a new VirtualBox VM for that purpose that refers to the VMware files. And then you have to de-install the VMware Tools (which are probably installed in that VM) and install the VirtualBox Guest Additions. It's all doable and no rocket science, but it's not a simple import-and-run.

But on the 25th of May VMware Player 3.1 was released. Apparently VMware released it as an answer to the Windows XP mode on Windows 7, to run older Windows XP compatible apps in Windows 7. But it occurs to me that they've looked closely at VirtualBox: the VMware Unity mode is quite comparable to, or actually the same as, the Seamless Windows mode of VirtualBox. This is a very attractive feature that makes VMware Player as well as VirtualBox suitable to beat the Windows XP mode of Windows 7. I have not been able to try the XP mode of Windows 7, so I don't know about its load times. But for VirtualBox and VMware Player you need to start the guest Windows to be able to use it. And you have to have an activated version of Windows in your VM.

And since VMware Player 3.1 I'm able to create VM's in Player. Actually, with the ability to create VM's and with the 'Shared Folders' feature, I think you might say that Player is at the 'must-have' feature level of VMware Workstation 5.5. And since Player has a smaller footprint (100MB, about the same as Server 1.0.x) and does not need a running Apache Tomcat, there is no need for browser access to the console: VM's run in a separate application window. That makes Player very much better suited for running VM's on a laptop or desktop. And a feature like VMware Unity/VirtualBox Seamless Windows mode is really nice: it is funny to have Windows applications running in a window side by side with your Linux apps. Even copy and paste works. And having shared folders makes it possible to have your guest-OS apps work with the same files as your host-OS apps.

And what made me most lyrical? I had my OpenSuse 11.1 connected to a beamer. In the NVidia controls I had the CRT (as NVidia/X calls it) placed to the right of my LCD desktop. Then I placed my PowerPoint, in Unity mode, on the beamer area of the desktop (so I was able to see other apps on my LCD screen). And then it played the slideshow fullscreen on the beamer...
I was very happy about that, because although I had PowerPoint 2007 working under Wine, editing a file under PowerPoint 2007/Wine was not doable: when you drag and drop or move an object on the canvas, the canvas turns black until you release the object; only then does it redraw. But in VMware Unity mode, PowerPoint 2007 is usable under Linux!

To conclude: when to use what? A question I have tried to answer a lot lately.
  • If you want to run VMware VM's on your own laptop, along with other apps, use VMware Player.
  • If you want to run MS-Office 2007 or other Windows apps side by side with other applications on your Linux machine, use either VMware Player with Unity or VirtualBox with Seamless Windows mode (you may choose). But you should first try whether the app works properly enough under Wine, since that will give you much faster startup times and a lower system load: you do not have to load Windows as a guest OS.
  • If you want to run VM's on a host that has to be reachable from different locations, use VMware Server. For example, if you have a spare desktop in the attic that you want to leverage for different purposes, use Server. With Server you can start, stop and use the VM desktop from a browser. Unfortunately that has to be at most FireFox 3.5 for the current Server version (2.x).
  • If you look for an enterprise solution, look at Oracle VM or VMware ESX and their management products. These solutions are beyond my use, since they run on 'bare metal', and that is out of the question for me.
I don't have particular arguments for the choice between VirtualBox and VMware Player. There are other virtualization tools, like KVM or Xen-based solutions. There are people that favor a particular solution over VMware because of performance; in particular, VMware is said to not be that fast in IO-operations. But I haven't seen any benchmarks that support this. I haven't done any measurements myself, but I did not experience any notable performance differences between VirtualBox and VMware. The only reason I favor VMware is simply that most of my VM's are VMware-based.

But I sincerely hope that Oracle works on improving both Oracle VM and VirtualBox in a way that brings both products in line with each other and eventually lets both products work with the same VM-architecture. Then it would be possible to use the same VM's on both Oracle VM and VirtualBox. Maybe that would be an extra stimulus to build VirtualBox/Oracle VM appliances.

Wednesday, 16 June 2010

Oracle Inserts based on DB2 selects

It's been a while since I wrote an article. This week I struggled with creating insert scripts, based on data from DB2, to be used in my local test database (Oracle XE) at my customer.

We use Siebel and have to integrate here and there by querying data from the Siebel DB2 database. It turns out that my local database adapter has trouble connecting to the DB2 database. I could not find out what was wrong, so I decided to query the data from the Siebel tables and insert it into my local XE database. It's faster anyway (in my case) and it also allows me to manipulate the data for test-case purposes.
But querying DB2 to generate insert statements isn't as obvious as the way I would do it in Oracle.

Here is an example script, shown from the values clause onward (the first part of the select generates the "insert into" clause with the column list).

||' values ('
|| ''''|| coalesce(ctt.PARTYROWID,'') ||''','
|| ''''|| coalesce(ctt.KLANTID,'') ||''','
|| ''''|| coalesce(ctt.KLANTTYPE,'') ||''','
|| case when ctt.AANGEMAAKTOP is not null then 'to_date('''||varchar_format( ctt.AANGEMAAKTOP,'YYYY-MM-DD HH24:MI:SS')||''',''YYYY-MM-DD HH24:MI:SS'')' else 'null' end ||','
|| ''''|| coalesce(ctt.BANKCODE,'') ||''','
|| ''''|| coalesce(ctt.BANKLOCATIE,'') ||''','
|| ''''|| coalesce(ctt.KLANTSTATUS,'') ||''','
|| ''''|| coalesce(ctt.CORRESPONSDENTIETAAL,'') ||''','
|| ''''|| coalesce(ctt.PRIMAIRTELEFOONNUMMER,'') ||''','
|| ''''|| coalesce(ctt.PRIMAIRTELEFOONTYPE,'') ||''','
|| ''''|| coalesce(ctt.PRIMAIREMAIL,'') ||''','
|| ''''|| coalesce(ctt.PRIMAIREMAILFORMAAT,'') ||''','
|| ''''|| coalesce(varchar(ctt.TELEFOONPRIVE),'') ||''','
|| ''''|| coalesce(varchar(ctt.TELEFOONOVERIG),'') ||''','
|| ''''|| coalesce(varchar(ctt.TELEFOONMOBIEL),'') ||''','
|| ''''|| coalesce(varchar(ctt.TELEFOONWERK),'') ||''','
|| ''''|| coalesce(varchar(ctt.FAX),'') ||''','
|| ''''|| coalesce(varchar(ctt.EMAIL),'') ||''','
|| ''''|| coalesce(varchar(ctt.EMAILFORMAAT),'') ||''','
|| case when ctt.EMAILDATUM is not null then 'to_date('''||varchar_format( ctt.EMAILDATUM,'YYYY-MM-DD HH24:MI:SS')||''',''YYYY-MM-DD HH24:MI:SS'')' else 'null' end ||','
|| ''''|| coalesce(EMAILBRON,'') ||''','
|| ''''|| coalesce(ctt.INGEZETENEVAN,'') ||''','
|| ''''|| coalesce(ctt.NATIONALITEIT,'') ||''','
|| ''''|| coalesce(ctt.ACHTERNAAM,'') ||''','
|| ''''|| coalesce(ctt.VOLLEDIGEACHTERNAAM,'') ||''','
|| ''''|| coalesce(ctt.ROEPNAAM,'') ||''','
|| ''''|| coalesce(ctt.GESLACHTSNAAM,'') ||''','
|| ''''|| coalesce(ctt.VOORLETTERS,'') ||''','
|| ''''|| coalesce(ctt.VOLLEDIGEVOORNAMEN,'') ||''','
|| ''''|| coalesce(ctt.ACADEMISCHETITEL,'') ||''','
|| ''''|| coalesce(ctt.TUSSENTITEL,'') ||''','
|| ''''|| coalesce(ctt.ACHTERVOEGSEL,'') ||''','
|| ''''|| coalesce(ctt.ACHTERTITEL,'') ||''','
|| ''''|| coalesce(ctt.VOORVOEGSEL,'') ||''','
|| ''''|| coalesce(ctt.VOORVOEGSELGESLACHTSNAAM,'') ||''','
|| case when ctt.GEBOORTDATUM is not null then 'to_date('''||varchar_format( ctt.GEBOORTDATUM,'YYYY-MM-DD HH24:MI:SS')||''',''YYYY-MM-DD HH24:MI:SS'')' else 'null' end ||','
|| ''''|| coalesce(ctt.GESLACHT,'') ||''','
|| ''''|| coalesce(ctt.GEBOORTELAND,'') ||''','
|| ''''|| coalesce(ctt.GEBOORTEPLAATS,'') ||''','
|| ''''|| coalesce(ctt.EIGENHUIS,'') ||''','
|| ''''|| coalesce(ctt.FAILLIET,'') ||''','
|| ''''|| coalesce(ctt.SAMENLEVINGSVORM,'') ||''','
|| ''''|| coalesce( ctt.BURGERLIJKSTAAT,'') ||''','
|| ''''|| coalesce( ctt.HUW_VOORWAARDEN,'') ||''','
|| ''''|| coalesce( ctt.PERSONEEL,'') ||''','
|| ''''|| coalesce( ctt.TYPEKLANT,'') ||''','
|| ''''|| coalesce( ctt.MAATSCHAPPELIJKESTATUS,'') ||''','
|| ''''|| coalesce( ctt.LOKALEKLANTINDELING,'') ||''','
|| ''''|| coalesce( ctt.CENTRALEKLANTINDELING,'') ||''','
|| ''''|| coalesce( ctt.KLANTINDELING,'') ||''','
|| ''''|| coalesce( ctt.LOKAALINGEDEELD,'') ||''','
|| ''''|| coalesce( ctt.BEHOEFTEPROFIEL,'') ||''','
|| ''''|| coalesce( ctt.TAXIDENTIFICATIONNR,'') ||''','
|| ''''|| coalesce( ctt.WOONPLAATSVERKLARING,'') ||''','
|| ''''|| coalesce( ctt.SOFINUMMER,'') ||''','
|| ''''|| coalesce( ctt.REDENGEENSOFINUMER,'') ||''','
|| case when ctt.DATUMOVERLIJDEN is not null then 'to_date('''||varchar_format( ctt.DATUMOVERLIJDEN,'YYYY-MM-DD HH24:MI:SS')||''',''YYYY-MM-DD HH24:MI:SS'')' else 'null' end ||','
|| ''''|| coalesce( ctt.OVERLEDEN,'') ||''','
|| ''''|| coalesce( ctt.AARDIDENTIFICATIEDOC,'') ||''','
|| case when ctt.DATUMLEGITIMATIE is not null then 'to_date('''||varchar_format( ctt.DATUMLEGITIMATIE,'YYYY-MM-DD HH24:MI:SS')||''',''YYYY-MM-DD HH24:MI:SS'')' else 'null' end ||','
|| ''''|| coalesce( ctt.NRIDENTIFICATIEDOC,'') ||''','
|| case when ctt.DATUMUITGIFTE is not null then 'to_date('''||varchar_format( ctt.DATUMUITGIFTE,'YYYY-MM-DD HH24:MI:SS')||''',''YYYY-MM-DD HH24:MI:SS'')' else 'null' end ||','
|| ''''|| coalesce( ctt.LANDUITGIFTE,'') ||''','
|| ''''|| coalesce( ctt.PLAASTUITGIFTE,'') ||''','
|| ''''|| coalesce( ctt.PERTELEFOONBENADEREN,'') ||''','
|| ''''|| coalesce( ctt.PEREMAILBENADEREN,'') ||''','
|| ''''|| coalesce( ctt.PERPOSTBENADEREN,'') ||''','
|| ''''|| coalesce( ctt.PERSMSBENADEREN,'') ||''','
|| ''''|| coalesce( ctt.PERFAXBENADEREN,'') ||''','
|| ''''|| coalesce( ctt.INSOLVENCYSTATUS,'') ||''','
|| ''''|| coalesce( ctt.IKBNUMBER,'') ||''');'
from ctt
where klantid='12345';

The first 'not so obvious' thing is the NVL-function. This is a typical Oracle function; in DB2 it can for most purposes be translated to the coalesce function used above. In most cases, when the column is empty, I want to have an "empty value". In some cases just giving "coalesce( column-reference,'')" does not suffice; I had to cast the column explicitly to char with the varchar() function:

|| ''''|| coalesce(varchar(ctt.TELEFOONPRIVE),'') ||''','

Here TELEFOONPRIVE is apparently a number column. The function coalesce() assumes the number datatype and can't accept an empty string as a default.

For dates it is a little more complicated. If there is a date, I want to transform it into an Oracle to_date function call. But then I have to be sure that the format coming from DB2 is a standard format; I chose "YYYY-MM-DD HH24:MI:SS". If the date is empty I just want to return an empty string again. I couldn't come up with a simple construct using coalesce(), so I used the CASE WHEN-construct:
case when ctt.DATUMOVERLIJDEN is not null then 'to_date('''||varchar_format( ctt.DATUMOVERLIJDEN,'YYYY-MM-DD HH24:MI:SS')||''',''YYYY-MM-DD HH24:MI:SS'')' else 'null' end

It took me a while to find out that where in Oracle you can provide a date-format to the to_char(<date-value>, <date-format>) function, in DB2 you need the varchar_format(<date-value>, <date-format>) function for that. Luckily the accepted date-formats are the same, in my case. So here I transform the date-value from DB2 to the required date-format and concatenate it into the Oracle to_date() function with the same format.

The generated insert statement(s) will look like this (line break before the values clause added manually for readability):
values ('1-100BM-100','000000105727750','Person',to_date('2006-05-09 19:59:21','YYYY-MM-DD HH24:MI:SS'),'3365','336515','C','NL','','','','','','','','','','','',to_date('2008-09-15 00:00:00','YYYY-MM-DD HH24:MI:SS'),'FULFILMENT','01','',to_date('2009-07-08 00:00:00','YYYY-MM-DD HH24:MI:SS'),'NL','NL','Name','Name','','Name','I.R.S.','Iris Ronald Simon','','','','','','',to_date('1966-11-11 00:00:00','YYYY-MM-DD HH24:MI:SS'),'M','NL','Tool Town','Y','N','3','1','9','03','01','08','','1','1','Y','','','X','123456789','',null,'N','03',to_date('2005-07-29 00:00:00','YYYY-MM-DD HH24:MI:SS'),'IC4631943',to_date('2004-07-29 00:00:00','YYYY-MM-DD HH24:MI:SS'),'NL','Tool Village','N','N','Y','Y','N','','1-22A-3344');

Wednesday, 21 April 2010

Shared folders in VirtualBox

If you have installed the VirtualBox guest-additions, then you can use the Shared Folders functionality as well. This might be a useful differentiating feature over VMware Server. VMware Workstation has this feature too, but it will cost you about 190 dollars.

Shared Folders are folders on your host OS that you make known to the virtualization product (VirtualBox or VMware Workstation) as being available to the guest OS. This is handy because you then do not have to set up Windows or Samba shares on your host to connect to from your guest. It also saves you from having to set up particular network settings for it.

An alternative to shared folders, besides Windows or Samba shares, is to (s)ftp the particular files to your guest. If it is about installation files (for example the Oracle 11gR2 database) then you need to have space available in the guest's virtual disks, and the virtual disks will grow accordingly. If you go for that approach then it is wise to add a temporary virtual disk to hold the installations. Afterwards you can remove that disk without the need to defrag and shrink the remaining disks.

To use the shared folders in VirtualBox you need to define a folder to share in the SharedFolders screen. This is available through the Devices main menu option:

In a Windows guest you might find the shared folders in the Windows Explorer under the network places.
In a Linux guest you need to mount the share explicitly.
First create a directory under the /mnt folder:

[root@oel5soa11g mnt]# mkdir Zarchief
[root@oel5soa11g mnt]# chmod a+w Zarchief/
[root@oel5soa11g mnt]# chmod a+x Zarchief/
[root@oel5soa11g mnt]# ls -l
total 8
drwxrwxrwx 2 root root 4096 Apr 21 14:11 Zarchief

Then you can mount the shared folder with the following command:

# mount -t vboxsf [SharedFolderName] /mnt/[FolderName]

For example:

[root@oel5soa11g mnt]# mount -t vboxsf Zarchief /mnt/Zarchief
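The mount is gone after a reboot of the guest. To make it persistent you could add an entry to /etc/fstab in the guest; a sketch, assuming the share name Zarchief from the example above (this works provided the guest-additions kernel module is loaded at boot; otherwise put the mount command in rc.local instead):

```
# /etc/fstab (guest): mount the VirtualBox shared folder 'Zarchief' at boot
Zarchief   /mnt/Zarchief   vboxsf   defaults   0 0
```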

FireFox 3.6 conflict with VMware Console: VirtualBox

Today I wanted to work with a VMware image to play around with WebLogic/SoaSuite 11g etc. But I ran into the nasty FireFox 3.6 conflict with the VMware Console: since FireFox 3.6.x the console won't start, because of some time-out error. The solution would be to downgrade to FireFox 3.5, but since I have a version installed from the repository, I found a downgrade too tedious. Too bad that the console plugin won't install in Google Chrome. I tried Mozilla Seamonkey, but that wouldn't do the trick either.

So I decided to get VirtualBox from the stable again. I neatly installed it using the VirtualBox repository for my OpenSUSE (see the bottom of the page here).

I created a VirtualBox VM based on the VMware files of the VM I wanted to start. See this earlier blogpost for a how-to.

Naively I removed the IDE-controller. But it turns out to be needed to be able to mount the Guest-Additions ISO file. So don't remove it.

Now I hope this endeavour turns out well, since in my opinion I have put too much time into it already.

Thursday, 1 April 2010

Logging in bash script

To debug a bash script, and to know the environment in which a script is behaving, it is convenient to log. This is what I added to my e-mail processing script yesterday. It is simple and of course anyone else could have thought of it; probably it has been invented hundreds of times. Here's my solution.
# Log
# Script to demonstrate logging
# author: Martien van den Akker
# (C) March 2010
# Darwin-IT Professionals
#Logging variables
TRUE=1
FALSE=0
LOG_ENABLED=$TRUE
#LOG_ENABLED=$FALSE
LOG_DIR=./log
LOG_FILE=$LOG_DIR/log.txt

#Check log dir and create it if it does not exist.
check_logdir() {
  if [ "$LOG_ENABLED" -eq $TRUE ]; then
    if [ -d $LOG_DIR ]; then
      #log a separation line
      echo "----------------------------------------" >> $LOG_FILE
    else
      mkdir $LOG_DIR
    fi
  fi
}

#Log function: append the prompt and the value to the log file.
log() {
  if [ "$LOG_ENABLED" -eq $TRUE ]; then
    TEXT="$1 ""$2"
    echo $TEXT >> $LOG_FILE
  fi
}

# First check logdir
check_logdir
# Log Arguments
log "Number of arguments: " $#
log "First argument: " $1
log "Second argument: " $2
log "End of script"

The script starts with a call to check_logdir(). This function checks if the LOG_DIR exists: if it does not, it is created; if it does, a separation line is logged. That is because the if has to have a command in the then-section, but it is also convenient to have a separation line between script calls.
Then there is the log function. The log function accepts two parameters: one is the prompt, the other is a string to be concatenated to the prompt. Handy for listing parameter-values.
But you can also log just a single line, like the last line in the example.

The logging can be enabled or disabled by commenting/uncommenting the proper one of the LOG_ENABLED lines at the top of the script.
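A minimal, self-contained sketch of that toggle (following the same pattern; file locations assumed): the log function simply does nothing when LOG_ENABLED is set to $FALSE, so the log calls can stay in the script.

```shell
#!/bin/bash
# Sketch of the logging toggle; same pattern as the script above.
TRUE=1
FALSE=0
LOG_ENABLED=$TRUE    # change this to $FALSE to disable all logging
LOG_DIR=./log
LOG_FILE=$LOG_DIR/log.txt

log() {
  if [ "$LOG_ENABLED" -eq $TRUE ]; then
    TEXT="$1 ""$2"
    echo $TEXT >> $LOG_FILE
  fi
}

mkdir -p $LOG_DIR
log "Number of arguments: " $#
log "End of script"
cat $LOG_FILE
```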

Outsourcing of server management

It's like in the old Cobol days. When you worked at the automation department of the Dutch Tax office (I got this from past co-workers that were somewhat older than I), you worked in Apeldoorn in the east of the country, but the datacenter was in The Hague in the west: a distance of about 300 km.

You coded your Cobol on punch cards. And if you were smart, you indexed your lines. You had to do a visual code check. The punch cards were put in a box and sent by courier to the datacenter in The Hague. There the cards were 'loaded' into the mainframe, and if you did your visual code checking well, it compiled and executed. And then you got your output back by courier.
If the box fell off the cart, you were happy you indexed your code lines, because then the cards could be fed to a punch-card sorter.

That's about how I feel right now. I created a script and developed a Postfix configuration. But it has to be put on the Linux development machine by a system manager. Although it's a development server, there are some good reasons to not give me root privileges on it.
And since Postfix runs as root, you have to be root to change the config files. But because I do not even have a normal user account, I cannot read the logs. The script is put in a non-root, non-postfix user account, but I can't update the script myself.

So, I have to do a change and now we wait for the feedback. This has already been going on for days, or a few weeks/months if I include the requests for accounts and the initial server setup. And that for something that could be solved in a few hours (excluding the requests for accounts and initial server setup) if I could get my own fingers on the keys.

But so be it. The server management here is not really 'outsourced', but it is done at another geographical location, in another city, by other people that are also busy with other tasks. They're helpful, they really are. But the overall duration is the price the organization, the customer, is paying for these policies.

Wednesday, 24 March 2010

Postfix for handling mail in your integration solution

Sometimes there is a need to integrate with mail. You could say: that's easy, since we could use the notification service in BPEL as described here.
However, this solution requires a mail-box to connect to. But what if you want to serve multiple (10s, 100s or even 1000s of) e-mail addresses within a certain domain? Maybe multiple sub-domains within a certain main domain? Then you would not want to create separate mail-boxes for each address. That would give too much administration: not only do you have to create new mail-boxes, you also need activation agents for each of these mail-boxes. That would put enormous pressure on performance. So, what then?

In my previous post on handling email with SoaSuite (BPEL), I already mentioned Apache James as an email server. Nearly every Linux distribution also comes with Postfix. Postfix is a Mail Transfer Agent (MTA); it's not an e-mail server like James. It handles SMTP-messages: it listens for SMTP-traffic and filters each message to determine whether it has to be passed on to another MTA or handled locally. If a message has to be handled locally, it can be stored in local mailboxes or a local POP or IMAP server, for example. But it is also quite easy to let Postfix call a local script or executable, and that for mail-addresses that meet certain filter rules, like belonging to a domain. This script can then be used to put/enqueue the message on a queue. The script can also be used to enrich the email messages with properties from the SMTP-envelope; I added that in the example too.

Now, I'm not an e-mail server expert, but I had to figure out how to configure Postfix for this use-case. And although Postfix seems complicated at first sight, this turns out to be remarkably easy.

Postfix installation
I chose Oracle Enterprise Linux 5 Update 4, as an alternative to Red Hat. But about any Linux flavour would do, provided that it has a Postfix package. You might uninstall Sendmail (or unselect it in the package options on install of the OS), to prevent collision with the Postfix functionality.

Although you would do all the configuration as root (Postfix runs as root), it is strongly advised to introduce an extra user for the message-handling, to own the scripts, etc.
In my setup the following folders are used:
  • /etc/postfix: configuration files and lookup tables
  • /usr/libexec/postfix: Postfix daemons
  • /var/spool/postfix: queue files
  • /usr/sbin: Postfix commands

This is a quite common setup, but in other (Linux or Unix) distributions the folder locations might differ slightly.

Postfix Usecase
This schema provides an overview of how Postfix works. On the left, incoming (smtp) messages are picked up by the smtp daemon. Then, via "Cleanup", they are handed over to the Queue Manager via the incoming queue. The Queue Manager puts the message on the Active queue. From there it is picked up by, for example, the smtp service, which forwards the mail to the smtp-server of your Internet Service Provider's or your company's infrastructure. Using the MX-records of the DNS-server it determines to which smtp-server the message actually has to be sent.
In our use-case the pipe daemon is the interesting one. This is the one we're going to instruct to call a script.

Basic Settings
The hostname command can be used to determine the fully qualified hostname. This is what Postfix uses to determine the host. If it does not include the domain-name, then the following command can be used to instruct Postfix what the FQN should be:
postconf -e myhostname=vmsmtp.darwin-it.local

The mydestination parameter can be left at its default. But if mail has to be accepted for certain domains and delivered using the local transport, set the following parameter in main.cf:

# 2010-02-03, M. vd. Akker: To have mail accepted for .darwin-it.local
mydestination = $myhostname, localhost.$mydomain, localhost, .darwin-it.local

For now leave it default:
mydestination = $myhostname, localhost.$mydomain, localhost

Network interfaces
To have mail accepted from external networks (non-localhost), the inet_interfaces parameter must be set in main.cf. In my case I run Postfix within a Virtual Machine, and I want to use my Thunderbird to send mail to Postfix. So first Postfix has to be told from which network-devices it should accept mail. Since my VM is "hidden", for simplicity we accept mail from all network devices. But this can be narrowed down.


# The inet_interfaces parameter specifies the network interface
# addresses that this mail system receives mail on.  By default,
# the software claims all active interfaces on the machine. The
# parameter also controls delivery of mail to user@[ip.address].
# See also the proxy_interfaces parameter, for network addresses that
# are forwarded to us via a proxy or network address translator.
# Note: you need to stop/start Postfix when this parameter changes.
# 2010-02-10, M. van den Akker: Setup all interfaces for excepting mail.
inet_interfaces = all
#inet_interfaces = $myhostname
#inet_interfaces = $myhostname, localhost
#inet_interfaces = localhost

Specific networks to accept mail from are set with the mynetworks parameter. Change it to, for example, accept mail from the network of your host.

# The mynetworks parameter specifies the list of "trusted" SMTP
# clients that have more privileges than "strangers".
# In particular, "trusted" SMTP clients are allowed to relay mail
# through Postfix.  See the smtpd_recipient_restrictions parameter
# in postconf(5).
# You can specify the list of "trusted" network addresses by hand
# or you can let Postfix do it for you (which is the default).
# By default (mynetworks_style = subnet), Postfix "trusts" SMTP
# clients in the same IP subnetworks as the local machine.
# On Linux, this does works correctly only with interfaces specified
# with the "ifconfig" command.
# Specify "mynetworks_style = class" when Postfix should "trust" SMTP
# clients in the same IP class A/B/C networks as the local machine.
# Don't do this with a dialup site - it would cause Postfix to "trust"
# your entire provider's network.  Instead, specify an explicit
# mynetworks list by hand, as described below.
# Specify "mynetworks_style = host" when Postfix should "trust"
# only the local machine.
#mynetworks_style = class
#mynetworks_style = subnet
#mynetworks_style = host

# Alternatively, you can specify the mynetworks list by hand, in
# which case Postfix ignores the mynetworks_style setting.
# Specify an explicit list of network/netmask patterns, where the
# mask specifies the number of bits in the network part of a host
# address.
# You can also specify the absolute pathname of a pattern file instead
# of listing the patterns here. Specify type:table for table-based lookups
# (the value on the table right-hand side is not used).
# 2010-02-10, M. van den Akker: Setup for accepting mail from CMI and localhost.
mynetworks = <network/netmask>,
#mynetworks = $config_directory/mynetworks
#mynetworks = hash:/etc/postfix/network_table

Where the first value should be replaced by the address-range of the host(s) from which you want to be able to receive email.

Start and stop postfix
Postfix can be stopped by:
[root@vmsmtpserver postfix]# postfix stop
Postfix can be started by:
[root@vmsmtpserver postfix]# postfix start
If it’s not already running.

After making changes to the configuration, Postfix has to be told to reload the configuration:
[root@vmsmtpserver postfix]# postfix reload
postfix/postfix-script: refreshing the Postfix mail system
Incoming e-Mail
Postfix has to be configured so that *.darwin-it.local is seen as a domain for virtual mailbox addresses. The virtual transport has to be configured so that these messages are handed to the pipe daemon to call an external script.

We'll also instruct Postfix to enrich the smtp-message with two custom properties:
  • x-envelope-to: the recipient list from the SMTP-envelope
  • x-envelope-from: the from-email-address from the SMTP-envelope
Actually, we implement the enrichment in the script; Postfix merely passes these properties as parameters.
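A minimal sketch of that enrichment, with the queueing part left out and made-up addresses: prepend the two x-envelope properties, which Postfix passes as parameters, to the message arriving on stdin.

```shell
#!/bin/bash
# Sketch: enrich an smtp-message from stdin with the envelope properties.
# The addresses are made-up; Postfix would pass them as -s and -r parameters.
SENDER="someone@example.com"
RECIPIENT="info@darwin-it.local"

enrich() {
  echo "x-envelope-to: $RECIPIENT"
  echo "x-envelope-from: $SENDER"
  cat -    # pass the original message through unchanged
}

# Feed a tiny test message through the enrichment:
printf 'Subject: test\n\nHello\n' | enrich | tee /tmp/enriched.txt
```

The first two lines of the output are the added x-envelope properties; the rest is the original message.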

Create a transport
To have incoming mail for .darwin-it.local transported to the script, a new transport has to be configured.

This is done in the master.cf file:
# 2010-02-10, M. van den Akker: Setup transport routescript for passing message to a bash-script
routescript   unix  -       n       n       -       -       pipe
flags=FDq user=smtpuser argv=/bin/bash -c /home/smtpuser/<script> -s $sender -r $recipient -q $nexthop -

Here a new transport is created, named “routescript”. It refers to the “pipe” daemon.

The “flags=FDq” denote that the envelope from- and destination-addresses are prepended to the message as headers.
Change the following if necessary:
  • The user attribute denotes the unix user that is used to run the script; in this case the unix user “smtpuser”. Change it to the proper user (other than “root” or “postfix”).
  • argv denotes the command to call; in this case bash is called to run the script, which is placed in the home folder of smtpuser. Change the path according to the correct location of the given script.
  • The next parameters are parameters for the script:
    • -s: the sender of the message, $sender refers to the from-address on the smtp-envelope
    • -r: the recipient(s) of the message, $recipient refers to the to-list on the smtp-envelope
    • -q: the queue on which the message has to be put. Our script was designed to queue the message on IBM MQ. The property $nexthop refers to the nexthop value in the transport map. Using this parameter to denote the particular queue enables us to change only the transport map if a queue is changed, or to reuse this transport for different queues in different transport mappings. This is a nice way to pass info about the actual/technical transport channel to be used.
Create transport map
To route the '.darwin-it.local' domain to the script a transport map has to be made in the transport file.

Add the following line to the transport map file:
# 2010-02-10, M. van den Akker: Setup transport routescript for handling darwin-it.local-messages.
.darwin-it.local routescript:queuename
Where queuename is the queue that is used for the smtp-messages. This is the 'nexthop' parameter that the $nexthop property in master.cf refers to.

After having changed the transport map file, it has to be compiled into an indexed binary using the postmap tool. So execute the following command:
[root@vmsmtpserver postfix]# postmap transport
Define transport map
To have the transport map used by Postfix, add the following lines to main.cf:
# 2010-02-10, M. van den Akker: use transport map
transport_maps = hash:/etc/postfix/transport
Make sure that the script as given in the appendix is placed at the location defined in master.cf above. In the example above that is the home folder of smtpuser (/home/smtpuser).
Make it owned by the user that is used by Postfix to execute the script (smtpuser).
Then make the script executable:
[smtpuser@vmsmtpserver ~]$ chmod +x <script>

The actual script as an example is given at the end of this article.

Outgoing e-Mail
All outgoing e-mail should be forwarded to your company's or ISP's smtp-infrastructure.

To do so, set the relayhost parameter in main.cf to the particular smtp-server:

# The relayhost parameter specifies the default host to send mail to
# when no entry is matched in the optional transport(5) table. When
# no relayhost is given, mail is routed directly to the destination.
# On an intranet, specify the organizational domain name. If your
# internal DNS uses no MX records, specify the name of the intranet
# gateway host instead.
# In the case of SMTP, specify a domain, host, host:port, [host]:port,
# [address] or [address]:port; the form [host] turns off MX lookups.
# If you're connected via UUCP, see also the default_transport parameter.
#relayhost = $mydomain
#relayhost = []
#relayhost = [mailserver.isp.tld]
#relayhost = uucphost
#relayhost = [an.ip.add.ress]
Restart Postfix
After having made the changes above it is important to restart (stop and start) Postfix. Just doing a reload of the config will probably not suffice.

It took me some time to understand Postfix; I was quite overwhelmed by the options. And it took me some time to figure out how to configure it for this particular use-case, for which I had to consult a co-worker (the one that sort of made up this use-case; thanks, Hugo). But as with most other things, it turns out to be simple after all. And it might be useful for many other cases.

Appendix: the routing script
Below, the script is given that is called by Postfix to route the messages. This script is designed to output the message either to a file or to an IBM MQ client. The MQ client has to be installed from a licensed IBM installation. IBM also provides the MA01 support pack, in which a compiled executable is provided for several OSes. This executable (simply called 'q') provides a command-line interface to the IBM MQ client. The q-client can put the message on a queue, but can also write it to standard out. This is handy for testing, where no MQ queue is available.

There are two lines to edit:
  • Q=~/bin/ma01/q : give here the proper path to the q-executable from the MA01 support pack.
  • fi|"$Q" -O "$QUEUE_NAME" : here the output of the if-block is piped to the q-client, using the queue-name. If the queue is not available in the test-environment this will give an error. For test purposes it might be useful to comment this line and use (uncomment) either the line '#fi|"$Q" -s', to have the q-client write the message to STDOUT, or '#fi>$FILENAME', to have the output directed to a file.

The script will exit with the result code of the last command, which is the q-client. If that one fails, the script will exit with a result code telling Postfix to consider the call failed. Postfix will then consider the message undelivered and retry it later.

# Route messages.
# Script to route an email message, read from pipe and output it to a channel.
# author: Martien van den Akker
# (C) January 2010
# Darwin-IT Professionals
TRUE=1
E_WRONG_ARGS=65
FILENAME=/tmp/routemq-`date +%Y%m%d-%H%M%S.%N`

#Function to display usage
usage() {
  SCRIPT_PARAMETERS="-r receiver -s sender -q queuename";
  USAGE="Usage: `basename $0` $SCRIPT_PARAMETERS";
  echo $USAGE;
}

# Check Arguments
until [ -z "$1" ]
do
  case $1 in
    "-s") SENDER=$2;;
    "-r") RECEIVER=$2;;
    "-q") QUEUE_NAME=$2;;
    * ) usage;
        exit $E_WRONG_ARGS;;
  esac
  shift 2;
done

#Edit next line to give the proper full path to the “q”-executable from the MA01-support-pack
Q=~/bin/ma01/q

#echo header variables and cat stdin to output.
if [ "$TRUE" ]
then
  echo "x-envelope-to: $RECEIVER"
  echo "x-envelope-from: $SENDER"
  cat -
fi|"$Q"  -O "$QUEUE_NAME" # output to $QUEUE_NAME (comment this line to use an alternative below)
#fi|"$Q" -s # only output to stdout
#fi>$FILENAME #uncomment to output to filename

exit $? #Exit with result-code of last command, which is the "q" command.

Wednesday, 10 March 2010

Apache James on IBM AIX

Apache James is a very nice e-mail server to be used in a development or test environment, where you need to integrate with an email system. I mentioned it in an earlier post. It supports smtp, pop, nntp (news) and imap.
Its installation is as simple as can be: just unzip the tool, set your JAVA_HOME environment variable and run the appropriate run.sh or run.bat script (given your OS being either Unix/Linux or Windows). The only thing you need besides the zip is a Java Runtime Environment of at least version 1.4.2.

For most systems this is all you have to do to get it running. But I found that on IBM AIX (5.x) it is a little less obvious. Getting it running is not an issue, but as soon as you want to add a user, you'll run into the error:
Exception: Security error: SHA MessageDigest not available
And after that the telnet connection is closed.
It turns out that the security-provider packages are not registered properly. To get it right there are two things to do.
  1. Make sure that JAVA_HOME points to the jre folder within the root folder of the Java installation on your system, so /usr/java5_64/jre instead of /usr/java5_64. Also make sure that there is a lib/ext folder (/usr/java5_64/jre/lib/ext) that contains a JCE provider jar, e.g. sunjce_provider.jar or ibmjceprovider.jar.
  2. Change the startup script in <james-home>/bin/ to register the extensions: find the line that sets the JVM options and add the jre's lib/ext folder to the Java extension directories.

Now James can be started using the scripts, and using the telnet console you should be able to successfully add users.

It might be that on your system the system administrators block port 25 (smtp) and 110 (pop). That would prevent James from starting up the smtp and pop services.
In <james-home>/apps/james/SAR-INF/ there is a config.xml file. In that file you can find the line:
<pop3server enabled="true">

There you can choose to disable pop by changing the enabled attribute. But beneath that line there is a port element. You could change that instead, to for example 8110. That would enable pop-support on port 8110. You should, of course, instruct your client to use that port.
The same goes for smtp-support, which can be found at:
<smtpserver enabled="true">

For smtp you could choose to set the port to 8025.
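Put together, the relevant fragments in config.xml would then look something like this (surrounding elements abbreviated; the element names follow the stock James 2.x config.xml):

```xml
<!-- <james-home>/apps/james/SAR-INF/config.xml -->
<pop3server enabled="true">
  <!-- moved off the blocked default port 110 -->
  <port>8110</port>
  ...
</pop3server>

<smtpserver enabled="true">
  <!-- moved off the blocked default port 25 -->
  <port>8025</port>
  ...
</smtpserver>
```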

Thursday, 28 January 2010

Oracle Sun now definitely one team

Today I read the news that the European Commission approved the Sun acquisition without further demands. Actually it is already a week ago. But today I also read the plans of a Web Open Office, or Open Web Office. Or Open Cloud Office. An alternative to Google Docs. I'm very curious what that will look like and how it will compare to Google Docs. Open Office is a fairly mature Office Suite. So I'm looking forward to seeing which OpenOffice features will find their way into the Oracle Cloud.

Furthermore, the clouds around MySQL have cleared.

Another interesting statement I found was that Java SE should support more languages. Probably as an answer to the .Net world, where you can develop in multiple languages. So lots of promises that make it interesting to "keep an eye in the sail" with Oracle (a famous Dutch saying).

Monday, 18 January 2010

Application Server Connection in JDeveloper 10.1.3.x

This is actually a tip from the dusty old box, although I imagine there are still people around with SoaSuite/JDeveloper 10g. I got a VMware image with SoaSuite 10133, and I couldn't connect to it with JDeveloper from outside the VM. I knew I had the solution somewhere, but I couldn't reproduce it. The solution lies in opmn.xml, found in $ORACLE_HOME/opmn/conf.

There you'll find a notification-server element. By default it probably does not have the ipaddr node below it. Add it like below, filling in the appropriate IP addresses in the remote and request attributes. Doing so enables you to connect with a JDeveloper on a remote machine.
<notification-server>
  <ipaddr remote="" request=""/>
  <port local="6100" remote="6200" request="6003"/>
  <ssl enabled="true" wallet-file="$ORACLE_HOME/opmn/conf/ssl.wlt/default"/>
</notification-server>

I wrote about connecting to an Oracle Application Server/SoaSuite within a VM in greater detail here, but I did not include the actual opmn.xml snippet like above.

Friday, 15 January 2010

It's back: the Business Event System

My first steps in integration were with Oracle AQ (from Oracle 8i onwards) and Oracle Workflow Standalone 2.6.x. One of my big disappointments was that from Oracle 10g AS and DB onwards, Oracle Workflow is desupported. You may use it as long as the 10g licenses last (either for database or AS), but then it is end-of-life. In 11g Oracle Workflow is not delivered anymore. And that is quite a pity, since it was quite a good workflow tool, especially for database-centric applications. The nice thing is that it resides and runs completely in the database. It also has a pretty good eventing system: you can define events in a web-based UI, and then subscribe Oracle Workflow processes, Java classes or pl/sql rule-functions to an event.

The good thing about business events is that you can make applications really independent of each other. Say you have an application and need something to happen after a certain mutation, for example on creation of an order the order should be interfaced to another application or logged. Traditionally you would create a function call in a database trigger, or from your ADF-BC entity object. The side-effect is that if the called functionality is invalid, unavailable or gives errors, your calling application does not work either. Also, if a long-running process is called, it will all happen in the session of the end-user, who must wait until all processing is done.

And that is all wrong. The calling application should only notify an infrastructure component (like a business event system) and should not care about what is done with this notification. Doing so, the end-user gets his session back immediately. The application won't malfunction because of corrupted subscribing applications. The application has done what it is responsible for.

The business event system then handles all the subscriptions in the background, and handles all errors, throwing them to an error hospital for example.

But unfortunately the Oracle Workflow Business Event System is no more. It is still available in Oracle's E-Business Suite, but that is quite a lot to install for only BES. EBS leans heavily on BES, since most mutations on EBS entities have events defined, to which you as an EBS developer can subscribe your custom code. Even if there is no event available for what you need to do, you shouldn't code it directly in a trigger, but let a database trigger raise a newly defined custom event, and subscribe your code to that event.

This week I delivered the OPN Bootcamp on SoaSuite11g. It was very nice and I'm looking forward to the next ones to come (9-11 February in Belgium). A lot has already been written on SoaSuite 11g, but little on the comeback of BES: the Event Delivery Network.
I like the SCA/Composite, and I like the integration of all the components. But if I must choose, I would pronounce the EDN the most interesting and promising addition.

In the OPN Bootcamp there is a small chapter with a little lab to get just a taste of EDN. See also the chapter in "Getting Started with SoaSuite11g". A larger explanation is found here. And here you can read about managing EDN.

Raising/publishing an event from a Mediator is really easy: just a few clicks. Subscribing to an event is also just a few clicks away. You can also raise/publish events from ADF-BC or from Java. But what I miss is an explanation of how to raise an event from Pl/Sql. It is mentioned that it is possible, and in the managing guide you'll see how you can create database agents. But I still have to find out how to do it from Pl/Sql. I'll post it when I've found out how.

I've great expectations of EDN. I hope and expect that Oracle will expand its usability further by enabling other technologies to publish and subscribe to events, by propagating events to remote systems over WANs, etc. I think they will, because they need a good event system when, for example, BPEL and/or BPM is to replace Oracle Workflow in E-Business Suite.

But we as SOA/EDA consultants have to train business analysts to think in events. EDN is targeted at the business, so they have to embrace the concept of events.

Wednesday, 6 January 2010

Failed to compile the generated BPEL classes

Today I ran into a weird problem: one moment I could compile my BPEL process, a minor change later I couldn't. I got the (not) very descriptive error:
Error: Failed to compile classes. Failed to compile the generated BPEL classes for %process-name%.

After a little trial and error I narrowed it down to an expression in a while loop:
condition="bpws:getVariableData('EKDSuccess')=&quot;false&quot; and bpws:getVariableData('EKDTryCount')<=bpws:getVariableData('EKDMaxRetries')" 

It turned out to be the quotes around the word "false". I changed them to single quotes, like:

condition="bpws:getVariableData('EKDSuccess')='false' and bpws:getVariableData('EKDTryCount')<=bpws:getVariableData('EKDMaxRetries')"

And then it compiled. This was in 10.1.2 (old, I know), so maybe it is solved in 10.1.3.x.

Tuesday, 5 January 2010

Passing parameters to an XSLT in BPEL

Yesterday it became handy for me to be able to pass parameters to an XSLT in BPEL. I've seen the need earlier, but solved it in a different way: by setting the target element with a default and then overwriting that default in a later assign-copy step.

But yesterday I had to transform a document a variable number of times, for a list of elements. In my case I had a document with a list of recipients, and I had to transform it into another document for each recipient. Each recipient had a number of elements with personal and address information that had to be transformed to the target. So simply defaulting and overwriting would not work, or would be a lot of work.

Passing parameters as arguments to an XSLT is quite easy and neatly described in several blogs, amongst others in Sudheer Dhurjati's blog.
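In short, the parameters travel as the third, optional argument of ora:processXSLT. As a sketch (the variable and file names here are illustrative, not from this project), the transform call in an assign looks something like:

```xml
<!-- copy rule in an assign activity; 'XsltParams' is a variable based on
     the parameters element, filled before the transform is called -->
<copy>
  <from expression="ora:processXSLT('Recipient_To_Target.xsl',
                      bpws:getVariableData('Recipients','payload'),
                      bpws:getVariableData('XsltParams'))"/>
  <to variable="TargetDoc" part="payload"/>
</copy>
```

In the XSLT itself each parameter is then picked up with a matching top-level <xsl:param name="..."/>.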

In my case I had to pass an index to be able to select the particular recipient that I need to transform. And another parameter that holds a number of documents, determined from another source:
<?xml version="1.0" encoding="UTF-8" ?>
<parameters xmlns:xsi=""
            xsi:schemaLocation=" /S:/DEV/Sources/BPEL/Processes/CRMI_COM_VersturenBerichtContactMgt/1.0/src/xsltparameters.xsd">
  ...
</parameters>
In Sudheer's blog (and in other examples) the parameters are initialized by copying an XML fragment with the particular parameters. If you have to change one of the parameters, this can be done with an indexed xpath expression in the <to> of the assign step, addressing the value element of the particular item, for instance the one that is initialized as:
<value xmlns="">1</value>

But for flexibility's sake, I would propose a slightly different approach. And for that I need an adapted version of the XSD:
<?xml version="1.0" encoding="windows-1252" ?>
<xsd:schema xmlns:xsd=""
            xmlns:bplcmn="...">
  <xsd:element name="parameters">
    <xsd:complexType>
      <xsd:sequence>
        <xsd:element ref="bplcmn:item" minOccurs="1" maxOccurs="unbounded"/>
      </xsd:sequence>
    </xsd:complexType>
  </xsd:element>
  <xsd:element name="item" type="bplcmn:itemType"/>
  <xsd:complexType name="itemType">
    <xsd:sequence>
      <xsd:element name="name" type="xsd:string"/>
      <xsd:element name="value" type="xsd:string"/>
    </xsd:sequence>
  </xsd:complexType>
</xsd:schema>

In this XSD I created 'item' as a separate element based on a separate (named) complex type. I'm actually not so fond of nested complex types, because they prevent you from using elements lower in the hierarchy for separate variables.
Using this XSD, you can create a separate item variable, based on the item element.
This item variable can be filled with a name and value, just by copying to the particular elements.
The item variable then has to be added to the parameters node using the addChildNode function.
The first argument of the addChildNode function denotes the element under which you want to add a child; here that is the 'parameters' element in the XsltParams variable. The second argument is the node you want to add as a child, in this case the 'item' variable. Of course this expression is the <from> expression in the copy rule of the assignment step. The <to> will most of the time be the parent element used in the addChildNode expression:
<to variable="XsltParams" query="/bplcmn:parameters"/>
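The original expression has dropped out of this post; as a sketch (variable names Item and XsltParams are illustrative, and the exact getVariableData queries depend on how your variables are declared), the complete copy rule would look something like:

```xml
<!-- adding the filled item variable as a child of the parameters element -->
<copy>
  <from expression="ora:addChildNode(
                      bpws:getVariableData('XsltParams','/bplcmn:parameters'),
                      bpws:getVariableData('Item','/bplcmn:item'))"/>
  <to variable="XsltParams" query="/bplcmn:parameters"/>
</copy>
```

Repeating this in a while loop, once per recipient, builds up the parameters list one item at a time.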
I found the description of the addChildNode function in the expression builder not so clear, so this might also be helpful in other situations where you have to build up a node structure dynamically.