Wednesday 10 December 2008

XPath evaluation in Java using Namespaces

Earlier this year I wrote an article on testing XPath expressions and XSL transformations in Java. It is not that hard once you know how to do it.

What I did not mention there is how to do XPath queries on documents with namespaces.
Take for example the following XML:



<?xml version="1.0" encoding="UTF-8" ?>
<XSLBatch xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://www.darwin-it.nl/XMLTypes/XSLBatch ../xsd/XSLBatch.xsd"
xmlns="http://www.darwin-it.nl/XMLTypes/XSLBatch">
<XSLTransform>
<Order>1</Order>
<SourceXMLFile>/home/makker/Projects/Java/trunk/src/JavaAndXML/XMLTester/workspace/xml/sos_exportjobs.xml</SourceXMLFile>
<XSLFile>/home/makker/Projects/Java/trunk/src/JavaAndXML/XMLTester/workspace/xsl/CreateObjectType_0.2.xsl</XSLFile>
<DestinationFile>/home/makker/Projects/Java/trunk/src/JavaAndXML/XMLTester/workspace/output/sos_exportjobs.tps</DestinationFile>
</XSLTransform>
<XSLTransform>
<Order>2</Order>
<SourceXMLFile>/home/makker/Projects/Java/trunk/src/JavaAndXML/XMLTester/workspace/xml/sos_exportjobs.xml</SourceXMLFile>
<XSLFile>/home/makker/Projects/Java/trunk/src/JavaAndXML/XMLTester/workspace/xsl/CreateObjectTypeBody_0.2.xsl</XSLFile>
<DestinationFile>/home/makker/Projects/Java/trunk/src/JavaAndXML/XMLTester/workspace/output/sos_exportjobs.tpb</DestinationFile>
</XSLTransform>
</XSLBatch>


If you want to query all XSLTransform elements, you would probably write an XPath expression of the form:
/XSLBatch/XSLTransform

But if you try to do that with the statement:

private XMLDocument xmlDoc;
...
public NodeList selectNodes(String xpath) throws XSLException {
NodeList nl = this.xmlDoc.selectNodes(xpath);
return nl;
}

then you find out that the NodeList nl does not deliver any nodes.
This is because the XML document is in the namespace "http://www.darwin-it.nl/XMLTypes/XSLBatch", and this is not taken into account in the XPath expression above.
You could use an expression in the form of:
/xbt:XSLBatch/xbt:XSLTransform

But then: how does the parser know which namespace the prefix xbt resolves to?

You could avoid the problem above with the expression:
/*[local-name()="XSLBatch"]/*[local-name()="XSLTransform"]
This is especially useful if you have no means of specifying the namespaces used.
But it makes the expressions very complex, and if you need to query on a specific value of a namespaced attribute, you're out of luck.

You can solve this in Java quite easily with a namespace resolver. A what? A namespace resolver is a class that implements the oracle.xml.parser.v2.NSResolver interface. That is: when using the Oracle XML parser.

A sample implementation of the Namespace Resolver is as follows:
package com.m10.xmlfiles;

import java.util.HashMap;

import oracle.xml.parser.v2.NSResolver;

public class XMLNSResolver implements NSResolver {
  private HashMap nsMap = new HashMap();

  public XMLNSResolver() {
  }

  public void addNS(String abbrev, String namespace) {
    nsMap.put(abbrev, namespace);
  }

  public String resolveNamespacePrefix(String prefix) {
    return (String) nsMap.get(prefix);
  }
}

As you can see, this implementation is fairly simple. It has a HashMap in which you can store your namespaces, with an abbreviation as the key.
Since it implements the NSResolver interface it must implement resolveNamespacePrefix. And that method does exactly what you might expect: it returns the namespace that belongs to the abbreviation.

So if you do the following:
XMLNSResolver nsRes = new XMLNSResolver();
nsRes.addNS("xbt", "http://www.darwin-it.nl/XMLTypes/XSLBatch");
Then you can do your XPath query as follows:
NodeList nl = this.xmlDoc.selectNodes("/xbt:XSLBatch/xbt:XSLTransform", nsRes);

And that is how this is done.
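For those not using the Oracle XML parser: the standard javax.xml.xpath API offers the same facility through a NamespaceContext. Below is a minimal, self-contained sketch of my own (the class name, sample document and helper method are illustrative, not from the article above):

```java
import java.io.StringReader;
import java.util.Collections;
import java.util.Iterator;
import javax.xml.XMLConstants;
import javax.xml.namespace.NamespaceContext;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.xpath.XPath;
import javax.xml.xpath.XPathConstants;
import javax.xml.xpath.XPathFactory;
import org.w3c.dom.Document;
import org.w3c.dom.NodeList;
import org.xml.sax.InputSource;

public class XPathNsDemo {
    static final String NS = "http://www.darwin-it.nl/XMLTypes/XSLBatch";
    static final String SAMPLE_XML =
        "<XSLBatch xmlns=\"" + NS + "\">"
      + "<XSLTransform><Order>1</Order></XSLTransform>"
      + "<XSLTransform><Order>2</Order></XSLTransform>"
      + "</XSLBatch>";

    // Counts the XSLTransform elements with a namespace-aware XPath query.
    public static int countTransforms(String xml) throws Exception {
        DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance();
        dbf.setNamespaceAware(true); // essential, otherwise no namespaces
        Document doc = dbf.newDocumentBuilder()
                          .parse(new InputSource(new StringReader(xml)));

        XPath xpath = XPathFactory.newInstance().newXPath();
        // The standard-API counterpart of the NSResolver shown above
        xpath.setNamespaceContext(new NamespaceContext() {
            public String getNamespaceURI(String prefix) {
                return "xbt".equals(prefix) ? NS : XMLConstants.NULL_NS_URI;
            }
            public String getPrefix(String uri) { return null; }
            public Iterator<String> getPrefixes(String uri) {
                return Collections.emptyIterator();
            }
        });
        NodeList nl = (NodeList) xpath.evaluate(
            "/xbt:XSLBatch/xbt:XSLTransform", doc, XPathConstants.NODESET);
        return nl.getLength();
    }

    public static void main(String[] args) throws Exception {
        System.out.println(countTransforms(SAMPLE_XML)); // prints 2
    }
}
```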

Friday 5 December 2008

EDA the successor of SOA?


Today I read a remarkable article in the Computable: according to Gartner, EDA (Event Driven Architecture) will be the successor of SOA (Service Oriented Architecture). That would suggest that EDA and its technology are newer than SOA, or that EDA tools will replace the current SOA tools. Well, I'm not that into the history of ICT. But about three years ago Oracle introduced their Enterprise Service Bus as part of the SoaSuite. You could consider the ESB the successor of Oracle's InterConnect, in J2EE technology. Remarkable also is that you can find parts of Oracle Workflow's Business Event System in the ESB technology. The Business Event System (the name says it all) has been part of Oracle Workflow since version 2.6, which was introduced in 2001 if I'm right.
Oracle Service Bus (fka BEA's AquaLogic Service Bus) was introduced in 2005. And there are several other service buses, maybe with an even longer history.

So Business Events could (and should) have been part of our applications for a long time. In fact Enterprise Application Integration (another Three Letter Acronym, or TLA) is about firing and receiving business events from applications.

I believe that EDA does not succeed SOA. In fact SOA is about Services, and Services on their own are not useful. Services are triggered with an information object. And this in fact is an event. Connecting Services into a business flow (what we call "orchestration") is about passing Business Events between Services following a Business Process. Nowadays we tend to do that using BPEL or BPMN.

Therefore I would state that SOA = EDA + BPM. I must confess that I did not make that up on my own. Since I'm a former Oracle employee, this is what I learned from the positioning of Oracle's toolstack on SOA.

Having a Service Bus is not a replacement for your SOA toolstack but a very valuable addition to it. It's a very good idea to have a Service Bus abstract your services from your Business Process. That makes it easier to do things like aggregating services, replacing services, transforming Enterprise Business Objects to Application Business Objects, etc. But this article is not about the value of a Service Bus. For that I could write a complete separate article.

Read also this previous article.

Monday 1 December 2008

Unlock SVN Repository for SVN Sync

Lately I've been introduced to the ease of use of Subversion. I now use it on my laptop to keep track of my projects. It's surprisingly easy to set up a repository and to use it. Maybe I should do a posting on that in the near future.

I also set up a locally synced repository that I use to keep a local copy of our central project repository. However, last week I accidentally started this repository server twice, and stopped the sync run with Ctrl-C to resolve it. After that I repeatedly got a:
"Failed to get lock on destination repos, currently held by makker-laptop" message.

I got the solution on: http://journal.paul.querna.org/articles/2006/09/14/using-svnsync/.

However, the exact command did not match my situation (I don't know how version-dependent that is).

What I had to do is to delete the lock by removing a lock-property. The command for this is:

svn propdel svn:sync-lock --revprop -r 0 .

In Paul's blog he suggested the svn command propdelete, but this should be propdel. And I had to figure out that for the repository you have to give the URL of the svnserve instance that runs the repository.

For example:
svn propdel svn:sync-lock --revprop -r 0 svn://localhost:3904

This worked, and got me syncing my repositories again.

Friday 28 November 2008

Home page in a JHeadstart application

This week I finished a JHeadstart-generated application. Indeed I found that with JHeadstart, especially because of the Velocity templating framework, you get about the same productivity as with Oracle Designer.

When you generate an application using JHeadstart, it also generates a Home.jspx for you. If you use this Home.jspx as your landing page (because after logging in you "land" on this page by default), you'll find that it lacks a menu. And maybe you want some other elements on this page. Of course you can edit the generated Home.jspx, adding the menu facet and the other elements. But I found that sometimes JHeadstart regenerates the Home page.
If you use Subversion (or another version control system) and had committed your home page, then you can restore the Home page from your repository. But I forget this 3 out of 4 times.

So I thought: let's generate the home page together with the other screens.

To do that you have to create another group in the application service with the JHeadstart Application Definition Editor. Call it the Home group.
Uncheck the 'Bound to Model Data collection'. Of course you do not have to choose a Data Collection.

As a layout style choose: Form.
Set both search settings to false.
As the tab name I would choose "Home".
For a display title you can choose: "Welcome #{jhsUser!=null ? jhsUser.displayName : facesContext.externalContext.userPrincipal.name} to the Service Oriented Scheduler Maintenance Application!". In fact I copied this from the originally generated home page. Of course I changed the application name (it used to be "JHeadstart demo application").

Under Operations uncheck every DML operation.

Since there is already a "Home" button in the Global Menu, I do not want to have the Home page in the application menu. So I unchecked "Add Menu entry for this Group" under Customization Settings.

For a Group it is mandatory to have a Descriptor Item. But I did not want any items on my page. So what I did was create one item, called "Home" (or whatever you like).


Uncheck "Bound to Model attribute?".
As a Java Type I chose "String" (but that does not do anything), and as a display type "Hidden". Furthermore I set all the Display properties to False and unchecked the Search options. But since I have a hidden field, these options might do nothing. I did not check.

Now, if I did not forget to mention anything, the home page is generated for you, as you like it, with the proper menu. You can adapt it to your wishes by adding other elements as Items (an image, for instance).

And here is my Do-It-Yourself home page:


Deployment of Java ADF applications to Oracle 10.1.3 Application Server

In my current project I created a Java ADF 10.1.3.3 application. Actually I generated the whole bunch with JHeadstart 10.1.3.3. Pretty cool stuff, actually.
At the end I had to deploy the application to a proper application server. That is: not the embedded OC4J in JDeveloper, but in my case the Oracle 10.1.3.3 Application Server that runs our SoaSuite.
And I couldn't find a proper document that told me how to do that. With some searching around, and "with a little help from my friends", I found three documents:
The first one tells you how to set up your deployment descriptors. In the end it will give you an EAR file that you can use to deploy to a J2EE application server.

But by default JDeveloper ships all the connections from your Connection Navigator in the model project's JAR file. And also by default it deploys a connection descriptor in the JAR file with the database connection you used at development time. This is of course not useful when you deploy the EAR file to a test, acceptance or production environment. The second post, by Pascal Alma, solves these things.

The last link points to the ADF Developer's Guide, which states how to change the model project to use a datasource instead of a fixed JDBC URL.

Below is a step-by-step guide to prepare for a proper deployment. For creating the actual deployment profiles I refer to the first link above.

Disable deployment of JDeveloper connections.
To disable this feature:
  1. Go to the JDeveloper Preferences.

  2. Go to the Deployment node and uncheck the checkbox “Bundle Default data-sources.xml During Deployment”.

Change Application Module to use Datasources
  1. Right click on the Application Module and choose Configurations


  2. Click on the Edit button

  3. Set Connection Type.
    Set connection Type on “JDBC DataSource” and name the Datasource something like jdbc/FDS.

  4. Click on OK.

Now you can create a deployment just following the steps on: http://www.oracle.com/technology/obe/obe1013jdev/10131/deployment/deployment.htm.
Having done that, you can create an EAR file by right-clicking on the deployment profile in the deployment project. And choose EAR-File:

This will create an EAR file in the deploy folder of the project. This file can then be used to deploy to the application server. This is quite easily done in the Application Server Control application (http://<applicationserverhost:port>/em). I might write a separate article about that later.

After that you need to create a datasource in the application server. That is a pretty straightforward task, usually done by an application server administrator. But I'll probably write a separate article about that another time too.
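To sketch what that datasource definition comes down to: in OC4J 10.1.3 datasources live in data-sources.xml. The fragment below is roughly from memory and the pool name, user and URL are placeholders; check the Application Server documentation for the exact schema.

```xml
<data-sources>
  <!-- jndi-name must match the datasource name configured in the
       Application Module, e.g. jdbc/FDS -->
  <managed-data-source name="FdsDS"
                       connection-pool-name="FdsPool"
                       jndi-name="jdbc/FDS"/>
  <connection-pool name="FdsPool">
    <connection-factory factory-class="oracle.jdbc.pool.OracleDataSource"
                        user="scott" password="tiger"
                        url="jdbc:oracle:thin:@//dbhost:1521/orcl"/>
  </connection-pool>
</data-sources>
```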

I hope this saves you from searching around as long as I did. If you have already been searching for about a day: I'm sorry you did not find me earlier...

Tuesday 25 November 2008

Alternative quoting mechanism

As mentioned in my previous post, I taught a course about SQL and PL/SQL 10gR2.
One of the topics was the alternative quoting mechanism introduced in Oracle 10g.

The code you had to write in previous versions was sometimes very unreadable.
See the two examples below:

Example 1:


Example 2:





In Oracle 10g you can define your own delimiter.
The rule is as follows:
start with q' followed by a delimiter character of your choice, then your string, and end with the same delimiter character followed by '.

Some examples
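(The original examples were images and got lost; the statements below are illustrations of my own, with made-up string contents.)

```sql
-- Before Oracle 10g: every embedded quote had to be doubled
SELECT 'It''s a child''s toy' FROM dual;

-- With 10g alternative quoting you pick your own delimiter
SELECT q'!It's a child's toy!' FROM dual;
SELECT q'#It's a child's toy#' FROM dual;
```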


However, there are some (documented) exceptions: ( and [ !
Here you have to close with ) and ] respectively (the same holds for { and <)...
I personally do not like this kind of exception.
The alternative quoting mechanism, however, is something I do like. It makes the code easier to write and more readable, which means it is easier to maintain as well.

Thursday 20 November 2008

Group by rollup

This week I taught a course about the Do's and Don'ts in Oracle 10gR2 (SQL and PL/SQL). I had a room full of experienced Oracle 7 and Oracle 8 programmers.

I talked about grouping and the rollup function in SQL.
One of the questions was: how can we use the grouping totals in our reporting tool?

A solution:

Let's look at the salaries per job per department in our emp table:


What if I want the following results in a report:
- How much do all SALESMAN earn in department 30?
- How much do all employees earn in total?
By using ROLLUP in combination with the GROUPING function, we came up with the following query.
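(The query itself was originally a screenshot; reconstructed from the surrounding text it would have looked something like this, with column aliases of my own:)

```sql
SELECT deptno,
       job,
       SUM(sal)         AS sum_sal,
       GROUPING(deptno) AS grp_deptno,
       GROUPING(job)    AS grp_job
FROM   emp
GROUP  BY ROLLUP (deptno, job);
```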

This query results in the following output. Note the GROUPING function: it shows the level of the GROUP BY:
0: no grouping at this level
1: grouping at this level

Now we can answer the questions.
- How much do all SALESMAN earn in department 30? Answer: 5600
- How much do all employees earn in total? Answer: 27725

How can I refer to the (grouping) results in a report?

You could create a nested query or a view:
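(The screenshot of that view is also missing; a sketch of my own of the idea, with view and column names that are made up:)

```sql
CREATE OR REPLACE VIEW emp_sal_totals AS
SELECT deptno,
       job,
       SUM(sal)         AS sum_sal,
       GROUPING(deptno) AS grp_deptno,
       GROUPING(job)    AS grp_job
FROM   emp
GROUP  BY ROLLUP (deptno, job);

-- Total for all SALESMAN in department 30:
SELECT sum_sal FROM emp_sal_totals
WHERE  deptno = 30 AND job = 'SALESMAN';

-- Grand total over all employees (the fully rolled-up row):
SELECT sum_sal FROM emp_sal_totals
WHERE  grp_deptno = 1 AND grp_job = 1;
```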

Now you can base your report on this query.

or




Note: I did some updates on the original emp-table...

Friday 7 November 2008

Wireless Network bridging with VMware Server 1.0.7

VMware Server works very well under OpenSuse 10 and 11. In fact, it is one of the reasons I turned to OpenSuse. But I couldn't get my wireless adapter (wlan0) working with bridged networks. So if I wanted to have my VMs bridged, I was restricted to the wired adapter (eth0).

I've searched and searched, both in Google and the OpenSuse forums. Many people struggled with the same issue, but I couldn't find any solution. Until today. It started by finding this article: http://www.hauke-m.de/artikel/vm-ware-wlan-bugfix/. It's in German, but it has an English translation too.

What it suggests is that there is a bug in vmnet that Hauke solved. You can download his version and replace the vmnet.tar that resides in /usr/lib/vmware/modules/source/ with it.
Then, after running vmware-config.pl, it should work.

Apparently the vmnet.tar is from VMware Player 2.0, which does support kernels with a version higher than 2.6.21.

However, things are often a little more complicated than that. In my case the kernel is 2.6.25, and I ran into compilation errors. First I ran into:

CC [M] /tmp/vmware-config1/vmnet-only/driver.o
In file included from /tmp/vmware-config1/vmnet-only/compat_sock.h:5,
                 from /tmp/vmware-config1/vmnet-only/driver.c:29:
/usr/src/linux-2.6.25.18-0.2/include/net/sock.h: In function ‘sock_valbool_flag’:
/usr/src/linux-2.6.25.18-0.2/include/net/sock.h:106: error: implicit declaration of function ‘sock_set_flag’
/usr/src/linux-2.6.25.18-0.2/include/net/sock.h:108: error: implicit declaration of function ‘sock_reset_flag’
/usr/src/linux-2.6.25.18-0.2/include/net/sock.h: At top level:
/usr/src/linux-2.6.25.18-0.2/include/net/sock.h:431: warning: conflicting types for ‘sock_set_flag’
/usr/src/linux-2.6.25.18-0.2/include/net/sock.h:431: error: static declaration of ‘sock_set_flag’ follows non-static declaration
/usr/src/linux-2.6.25.18-0.2/include/net/sock.h:106: error: previous implicit declaration of ‘sock_set_flag’ was here
/usr/src/linux-2.6.25.18-0.2/include/net/sock.h:436: warning: conflicting types for ‘sock_reset_flag’
/usr/src/linux-2.6.25.18-0.2/include/net/sock.h:436: error: static declaration of ‘sock_reset_flag’ follows non-static declaration
/usr/src/linux-2.6.25.18-0.2/include/net/sock.h:108: error: previous implicit declaration of ‘sock_reset_flag’ was here

I found a solution for this at: http://www.tfug.org/pipermail/tfug/2008-April/018337.html
However, it does not state precisely enough where to make the change. Luckily I found that out.
Open /usr/src/linux/include/net/sock.h in vi (as root).
Search for the line: extern __u32 sysctl_wmem_max;
Right above it, insert the following lines:

static inline void sock_valbool_flag(struct sock *sk, int bit, int valbool)
{
        if (valbool)
                sock_set_flag(sk, bit);
        else
                sock_reset_flag(sk, bit);
}


So that it reads:

(It's always wise to first back up the file, to for example sock.h.org.)
Then running vmware-config.pl again got me into the following compilation errors:
make -C /usr/src/linux-2.6.25.18-0.2 O=/usr/src/linux-2.6.25.18-0.2-obj/x86_64/default/. modules
CC [M] /tmp/vmware-config2/vmnet-only/driver.o
CC [M] /tmp/vmware-config2/vmnet-only/hub.o
CC [M] /tmp/vmware-config2/vmnet-only/userif.o
CC [M] /tmp/vmware-config2/vmnet-only/netif.o
CC [M] /tmp/vmware-config2/vmnet-only/bridge.o
CC [M] /tmp/vmware-config2/vmnet-only/filter.o
/tmp/vmware-config2/vmnet-only/filter.c:48: error: ‘NF_IP_LOCAL_IN’ undeclared here (not in a function)
/tmp/vmware-config2/vmnet-only/filter.c:53: error: ‘NF_IP_POST_ROUTING’ undeclared here (not in a function)
/tmp/vmware-config2/vmnet-only/filter.c: In function ‘VNetFilterHookFn’:
/tmp/vmware-config2/vmnet-only/filter.c:233: warning: comparison between pointer and integer
make[4]: *** [/tmp/vmware-config2/vmnet-only/filter.o] Error 1
make[3]: *** [_module_/tmp/vmware-config2/vmnet-only] Error 2
make[2]: *** [sub-make] Error 2
make[1]: *** [all] Error 2

This is also solved using the http://www.tfug.org/pipermail/tfug/2008-April/018337.html.
In the vmnet.tar there is a c-file: vmnet-only/filter.c.
In that file search and replace NF_IP_LOCAL_IN with 1 and NF_IP_POST_ROUTING with 4. These values were found in:/usr/src/linux/include/linux/netfilter_ipv4.h.
Then copy filter.c back into vmnet.tar. Or just use mine here.

Then copy the changed vmnet.tar into /usr/lib/vmware/modules/source/.
In my case, running vmware-config.pl one last time compiled vmnet neatly.
And now my Windows guest bridges over wireless!

Thanks to the guys behind the links, you've made my ICT-life a little more convenient.

And you guys: don't forget to come here again after the next kernel update (since the sock.h file might break again).

Wednesday 5 November 2008

Attachments with Integration B2B

In Integration B2B, especially with the ebXML adapter, you can send multiple attachments.

Attachments can be of any type, binary as well as ASCII files. Multiple files can be sent together with a payload. The payload, however, is mandatory, so you can't just send a message with attachments but without a payload.

To send attachments you have to make up an attachment XML document with the following structure:
<?xml version="1.0" encoding="UTF-8"?>
<!--Sample XML file generated by XMLSpy v2005 sp1 U (http://www.xmlspy.com)-->
<Attachments xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" version="1.0" boundary="boundary---">
<AttachmentPart>
<Location>file:/C:/ebmsTests/cleo/binaryPayload.jpg</Location>
<Content-Type>
<Top-Level-Type>image</Top-Level-Type>
<Sub-Type>jpeg</Sub-Type>
</Content-Type>
</AttachmentPart>
</Attachments>

As you can see there can be several attachment parts. The attachment part as shown can have a Location element in which you provide a valid URL to the particular file. It can also have an element with the content Base64-encoded. This is particularly the case when receiving attachments while B2B is not directed to put the received attachments in a file folder.

To be able to send files with a particular mimetype, you have to set a property in the tip.properties file:

oracle.tip.adapter.b2b.MimeType=application/xml : application/octet-stream : application/EDIFACT : application/EDI-X12 : application/jpg : image/jpeg : application/gzip : application/x-gzip : application/pkcs7-signature : application/pdf

Here you can add additional mime-types you need. It appears that B2B has a quite limited standard list.

Another important property is the following:
oracle.tip.adapter.b2b.attachments.dir=/mnt/hgfs/stage/nllvm12-cjib/attachments

Here you can denote where B2B has to put the received attachments. On receipt of the message it will place the file there, and in the attachment.xml document in the ip-message-type it will denote which attachments it received.

If you do not supply a directory, it will put the received attachments Base64-encoded in the attachment.xml in the ip-message-type on the queue.

Dynamic Partnerlinks made simple

Introduction

BPEL Process Manager is made to orchestrate services. It's a good tool to sequence the handling of services into a Business Process. In most cases it is evident which services you need to invoke at runtime. But there are cases where you want to be able to choose dynamically which process to call. For instance when you want to call services based on the content of a data model. You might have services with about the same purpose, but differing based on a preference. For instance, in a certain case different tasks have to be invoked, and which specific task applies in which case is registered in a database. Possibly the task definitions can be extended with extra tasks.

You can do this in the Oracle BPEL Process Manager using Dynamic Partnerlinks. On OTN you can find the BPEL Cookbook, which handles this subject. In the Cookbook case the writer starts from the Orderbooking Tutorial, where you can do a loan request to different loan providers. The idea is that you might want to add loan providers without changing your process. However, I found that article quite complicated, and at least pretty hard to translate to a particular situation.

But a Dynamic Partnerlink solution turns out to be quite easy to implement. In this article I'll tell you how.

Prerequisites

There are a few prerequisites; actually there is one major prerequisite. And that is that every process that you want to call has to be based on exactly the same WSDL. Not just request and response documents based on a prescribed XSD: the WSDL has to be literally the same. The message types, the namespaces, port types, bindings. Actually the only thing that will differ at deployment time is the endpoint URL.

People who have experience with Oracle Workflow calling custom PL/SQL workflow functions (standalone or the OWF embedded in E-Business Suite), rule functions in the OWF Business Event System, or Oracle Streams Advanced Queuing's PL/SQL callback notification functions, may understand the importance of this prerequisite. Java programmers can see this as implementing an interface class.

Then there are three things to do:

  1. Create a “template” process, or just the first process of a set that have to be called;

  2. Create the calling process, with a partnerlink based on the template process and an initialize-and-invocation-sequence;

  3. Create additional processes.

The template process

The template process you can build exactly the same as any other process. Keep the following things in mind:
  1. Create request and response documents that adapt to the needs of every possible service that you want to be able to call. The request document needs to give enough information for each service to find out what it needs to do. With the response document, each process needs to be able to pass all the relevant information for the calling process to act upon.

  2. Create a separate XSD for the request and response documents and put it on an HTTP server (for instance the document root on the Oracle HTTP Server of the SoaSuite). It's not recommended to use the xmllib folder of the BPEL Process Manager for this; it might bring up complications when activating BPEL processes at start-up.

  3. Think of a smart, generic name for the service. Since every other process/service needs to be based on the exact same WSDL, the message types, port types, etc. need to have names that represent a valid function within every other process.

  4. Since each process will have the same WSDL, think an extra time about whether it has to be a synchronous or an asynchronous service.

After creating the template process, deploy it to the development server. At that time the WSDL is adapted with the particular endpoint.

Creating the Invocation Process

The invocation process is created as another BPEL process.

First step is to create a partnerlink. This partnerlink should be based on the template process that is deployed to the development server.

Then add the following namespace to your bpel process:

xmlns:wsa="http://schemas.xmlsoap.org/ws/2003/03/addressing"

Based on this namespace, add an “EndpointReference” variable:

<variable name="partnerReference" element="wsa:EndpointReference"> </variable>


This variable needs to be initialized. Contrary to the BPEL Cookbook, you only need to initialize the Address node. If you don't initialize it, or if you also initialize the service node, you might run into a Java null-pointer exception. At least I did.
You initialize the endpoint reference by copying the following XML fragment to the endpoint reference variable:

<EndpointReference
xmlns="http://schemas.xmlsoap.org/ws/2003/03/addressing">
<Address/>
</EndpointReference>


In JDeveloper this looks like:
Then you copy the determined endpoint URL into the Address node of the EndpointReference variable:

<copy>
<from
expression='ora:getPreference("EndPoint")'/>
<to variable="partnerReference"
query="/wsa:EndpointReference/wsa:Address"/>
</copy>

In JDeveloper this looks like:

In this case I get the address from a preference from the deployment descriptor.

The actual address can be determined from the WSDL as it is deployed on the server. In my case:

<service name="HelloWorldEN">
<port name="HelloWorldENPort"
binding="tns:HelloWorldENBinding">
<soap:address location="http://oel50soa10133.darwin-it.local:7777/orabpel/default/HelloWorldEN/1.0"/>
</port>
</service>



The content of the location attribute of the soap:address node is the URL that has to be copied into the endpoint-reference variable. In this example it is put in the EndPoint deployment preference.
Having done that, you'll need a partnerlink to the template project, an invoke activity (and, with asynchronous processes, a receive activity). Then, before doing the invoke, you need to copy the endpoint reference to the partnerlink:

<copy>
<from variable="partnerReference" query="/wsa:EndpointReference"/>
<to partnerLink="HelloWorldEN"/>
</copy>


And that's about it. You can deploy this and see if it works by starting the process, and then doing the test again with a "wrecked" endpoint URL. If the URL is broken, the invoke should end in an error.

Create additional processes

The next thing to do is to create additional services, based on the same WSDL. For BPEL you'll have to create another BPEL Project. Then perform the following steps:

Copy WSDL

Take the WSDL of the template project and copy and paste it over the WSDL of your newly created BPEL project.

Note that in the old situation the definitions tag of your newly created BPEL project looks like:

<definitions name="HelloWorldNL"
targetNamespace="http://xmlns.oracle.com/HelloWorldNL"
xmlns="http://schemas.xmlsoap.org/wsdl/"
xmlns:client="http://xmlns.oracle.com/HelloWorldNL"
xmlns:plnk="http://schemas.xmlsoap.org/ws/2003/05/partner-link/">


After pasting the content of the template-WSDL over it, it will look like:

<definitions name="HelloWorldEN"
targetNamespace="http://xmlns.oracle.com/HelloWorldEN"
xmlns="http://schemas.xmlsoap.org/wsdl/"
xmlns:client="http://xmlns.oracle.com/HelloWorldEN"
xmlns:plnk="http://schemas.xmlsoap.org/ws/2003/05/partner-link/">



Change the Namespaces of your BPEL process

Open the bpel process source (in JDeveloper click on the Source tab). The namespaces on top are:

<process name="HelloWorldNL"
targetNamespace="http://xmlns.oracle.com/HelloWorldNL"
xmlns="http://schemas.xmlsoap.org/ws/2003/03/business-process/"
xmlns:client="http://xmlns.oracle.com/HelloWorldNL"
xmlns:ora="http://schemas.oracle.com/xpath/extension"
xmlns:orcl="http://www.oracle.com/XSL/Transform/java/oracle.tip.pc.services.functions.ExtFunc"
xmlns:xp20="http://www.oracle.com/XSL/Transform/java/oracle.tip.pc.services.functions.Xpath20"
xmlns:ldap="http://schemas.oracle.com/xpath/extension/ldap"
xmlns:bpelx="http://schemas.oracle.com/bpel/extension"
xmlns:bpws="http://schemas.xmlsoap.org/ws/2003/03/business-process/">

Change the client and the target namespace to the namespaces matching the WSDL:

<process name="HelloWorldNL"
targetNamespace="http://xmlns.oracle.com/HelloWorldEN"
xmlns="http://schemas.xmlsoap.org/ws/2003/03/business-process/"
xmlns:client="http://xmlns.oracle.com/HelloWorldEN"
xmlns:ora="http://schemas.oracle.com/xpath/extension"
xmlns:orcl="http://www.oracle.com/XSL/Transform/java/oracle.tip.pc.services.functions.ExtFunc"
xmlns:xp20="http://www.oracle.com/XSL/Transform/java/oracle.tip.pc.services.functions.Xpath20"
xmlns:ldap="http://schemas.oracle.com/xpath/extension/ldap"
xmlns:bpelx="http://schemas.oracle.com/bpel/extension"
xmlns:bpws="http://schemas.xmlsoap.org/ws/2003/03/business-process/">



Changing the port types
The port types have to be changed also. The port types are referenced in the invoke and the receive steps.

<!-- Receive input from requestor. (Note: This maps to operation defined in HelloWorldNL.wsdl) -->
<receive name="receiveInput" partnerLink="client" portType="client:HelloWorldNL" operation="initiate" variable="inputVariable" createInstance="yes"/>
<!-- Asynchronous callback to the requester. (Note: the callback location and correlation id is transparently handled using WS-addressing.) -->
<invoke name="callbackClient" partnerLink="client" portType="client:HelloWorldNLCallback" operation="onResult" inputVariable="outputVariable"/>


Change the portType-attributes according to the portType name-attributes in the WSDL:

<sequence name="main">
<!-- Receive input from requestor. (Note: This maps to operation defined in HelloWorldNL.wsdl) -->
<receive name="receiveInput" partnerLink="client" portType="client:HelloWorldEN"
operation="initiate" variable="inputVariable" createInstance="yes"/>
<!-- Asynchronous callback to the requester. (Note: the callback location and correlation id is transparently handled using WS-addressing.) -->
<invoke name="callbackClient" partnerLink="client" portType="client:HelloWorldENCallback"
operation="onResult" inputVariable="outputVariable"/>
</sequence>

Partnerlink Type and Roles

At the bottom of the WSDL you'll find a partnerLinkType element. It has two roles defined, one for each portType. In the BPEL process source you'll have a client partnerLink:

<partnerLink name="client" partnerLinkType="client:HelloWorldNL" myRole="HelloWorldNLProvider" partnerRole="HelloWorldNLRequester"/>

Here you'll see also the corresponding type and roles. Change them according to the WSDL:

<partnerLink name="client" partnerLinkType="client:HelloWorldEN" myRole="HelloWorldENProvider" partnerRole="HelloWorldENRequester"/>

Input and Output Variables

The input and output variables of the BPEL process are based on the message types in the WSDL.

<variables>
<!-- Reference to the message passed as input during initiation -->
<variable name="inputVariable" messageType="client:HelloWorldNLRequestMessage"/>
<!-- Reference to the message that will be sent back to the requester during callback -->
<variable name="outputVariable" messageType="client:HelloWorldNLResponseMessage"/>
</variables>

Of course these should be changed according to the WSDL too:

<variables>
<!-- Reference to the message passed as input during initiation -->
<variable name="inputVariable" messageType="client:HelloWorldENRequestMessage"/>
<!-- Reference to the message that will be sent back to the requester during callback -->
<variable name="outputVariable" messageType="client:HelloWorldENResponseMessage"/>
</variables>


If the message types in the WSDL are based on local XSDs (within the project, not on an HTTP server), then you have to copy the XSDs to your new project too. But especially in this case I would recommend putting the XSDs on an HTTP server. That way a change in an XSD immediately applies to all the processes that are based on it.

Finish/wrap-up

This should be about it. Now you can implement your new BPEL process. Then deploy it and test your dynamic invocation by copying the endpoint of the new BPEL process and registering it in your invocation process.
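Since every edit above is a plain text substitution of the old process name, they can be scripted. A hedged sketch (the file names are illustrative, and a printf-created stand-in file keeps it self-contained; on a real project you would run the sed over your actual .bpel source):

```shell
# All the renames above replace HelloWorldNL with HelloWorldEN, so a single
# sed pass covers the portTypes, the partnerLinkType, the roles and the
# message types in one go.
printf '<receive portType="client:HelloWorldNL"/>\n' > /tmp/HelloWorldNL.bpel
sed 's/HelloWorldNL/HelloWorldEN/g' /tmp/HelloWorldNL.bpel > /tmp/HelloWorldEN.bpel
cat /tmp/HelloWorldEN.bpel   # -> <receive portType="client:HelloWorldEN"/>
```

Do review a diff of the result before deploying: if the old name occurs somewhere it should survive (for instance in a comment), sed will rename that too.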


Tuesday 21 October 2008

Installing Oracle 9i on RHEL 4.0

In the past I installed DB 9i several times under Windows. I might have done it earlier under Linux, but I can't remember how I did it. I also installed 10g several times. But recently I had to work with Oracle Streams under 9.2.0.8, on a very large database that could not be upgraded easily at short notice.

Unfortunately we experienced some problems, with LogMiner and with performance. So I wanted a clean DB install on a VMware image, so that I could play around a little with it and see if I could get it to work properly on a clean database install.

Installing DB 9i turned out not to be that simple when done on the train with just the install disks. But with your friends Google and Metalink around, the task turns out to be not too hard.

I used two inputs:
Below I'll set out the steps I took to install 9.2.0.8 on Red Hat Enterprise Linux Advanced Server 4.0. If you have another flavour of Linux, try the document of Werner (it also contains info about other Red Hat flavours) or Google a little further.
Packages
First check whether you have the required packages. The following are required:
  • compat-db-4.1.25-9
  • compat-gcc-32-3.2.3-47.3
  • compat-gcc-32-c++-3.2.3-47.3
  • compat-oracle-rhel4-1.0-3
  • compat-libcwait-2.0-1
  • compat-libgcc-296-2.96-132.7.2
  • compat-libstdc++-296-2.96-132.7.2
  • compat-libstdc++-33-3.2.3-47.3
  • gnome-libs-1.4.1.2.90-44
  • gnome-libs-devel-1.4.1.2.90-44
  • libaio-devel-0.3.102-1
  • libaio-0.3.102-1
  • make-3.80-5
  • openmotif21-2.1.30-11
  • xorg-x11-deprecated-libs-devel-6.8.1-23.EL
  • xorg-x11-deprecated-libs-6.8.1-23.EL

This can easily be checked by issuing:
rpm -q make                           \
compat-db                      \
compat-gcc-32                  \
compat-gcc-32-c++              \
compat-oracle-rhel4            \
compat-libcwait                \
compat-libgcc-296              \
compat-libstdc++-296           \
compat-libstdc++-33            \
gcc                            \
gcc-c++                        \
gnome-libs                     \
gnome-libs-devel               \
libaio-devel                   \
libaio                         \
make                           \
openmotif21                    \
xorg-x11-deprecated-libs-devel \
xorg-x11-deprecated-libs
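Since rpm prints "package X is not installed" for each missing package, a grep on that phrase reduces the output to just the gaps. The printf below merely simulates rpm output to keep the sketch self-contained; on the real system, pipe the rpm -q command above through the grep instead:

```shell
# Filter the package check down to only the missing packages.
# (printf stands in for `rpm -q ...` so the example runs anywhere.)
printf 'make-3.80-5\npackage libaio-devel is not installed\n' \
  | grep "is not installed"
# -> package libaio-devel is not installed
```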


In my case I lacked the libraries:
package compat-oracle-rhel4 is not installed
package compat-libcwait is not installed
package gnome-libs-devel is not installed
package libaio-devel is not installed
package xorg-x11-deprecated-libs-devel is not installed

For the X11 stuff I had several dependencies that I resolved with:
rpm -Uhv fontconfig-devel-2.2.3-7.i386.rpm \
pkgconfig-0.15.0-3.i386.rpm \
xorg-x11-libs-6.8.2-1.EL.13.37.i386.rpm \
freetype-devel-2.1.9-1.i386.rpm \
zlib-devel-1.2.1.2-1.2.i386.rpm \
xorg-x11-xfs-6.8.2-1.EL.13.37.i386.rpm \
xorg-x11-6.8.2-1.EL.13.37.i386.rpm

rpm -Uhv xorg-x11-deprecated-libs-devel-6.8.2-1.EL.13.37.i386.rpm \
xorg-x11-devel-6.8.2-1.EL.13.37.i386.rpm


Then libaio-devel:
rpm -ihv libaio-devel-0.3.105-2.i386.rpm
For compat-oracle-rhel4 you need an Oracle patch: 4198954 from Metalink.
This one installs:
  • rpm -ihv compat-oracle-rhel4-1.0-5.i386.rpm
  • rpm -ihv compat-libcwait-2.1-1.i386.rpm
The compat-oracle-rhel4 library also checks for xorg-x11-deprecated-libs and
xorg-x11-deprecated-libs-devel.

For the gnome library I also had some dependencies, which I resolved with:

rpm -ihv gnome-libs-devel-1.4.1.2.90-44.2.i386.rpm \
ORBit-devel-0.5.17-14.i386.rpm \
esound-devel-0.2.35-2.i386.rpm \
gtk+-devel-1.2.10-33.i386.rpm  \
imlib-devel-1.9.13-23.i386.rpm \
glib-devel-1.2.10-15.i386.rpm \
indent-2.2.9-6.i386.rpm \
alsa-lib-devel-1.0.6-5.RHEL4.i386.rpm \
audiofile-devel-0.2.6-1.el4.1.i386.rpm \
glib-devel-1.2.10-15.i386.rpm \
libjpeg-devel-6b-33.i386.rpm \
libtiff-devel-3.6.1-10.i386.rpm \
libungif-devel-4.1.3-1.el4.2.i386.rpm


What I did was just run rpm -ihv gnome-libs-devel-1.4.1.2.90-44.2.i386.rpm (that was the one I had on my DVD) and then add all the dependent rpms that it mentioned. In your case you might not need the alsa and audio libraries.
Change sysctl.conf
There are a few settings to make on kernel level. Below is my sysctl.conf:
# Kernel sysctl configuration file for Red Hat Linux
#
kernel.hostname = rhel4vm.darwin-it.local
kernel.domainname = darwin-it.local

# Controls IP packet forwarding
net.ipv4.ip_forward = 0

# Controls source route verification
net.ipv4.conf.default.rp_filter = 1

# Do not accept source routing
net.ipv4.conf.default.accept_source_route = 0

# Controls the System Request debugging functionality of the kernel
kernel.sysrq = 0

# Controls whether core dumps will append the PID to the core filename.
# Useful for debugging multi-threaded applications.
kernel.core_uses_pid = 1
kernel.sem = 256 32000 100 142
kernel.shmmax = 4294967295
kernel.shmmni = 100
kernel.shmall = 2097152
#fs.file-max = 206173
fs.file-max = 327679
net.ipv4.ip_local_port_range = 1024 65000
kernel.msgmni = 2878
kernel.msgmax = 8192
kernel.msgmnb = 65535
net.core.rmem_default = 262144
net.core.rmem_max = 262144
net.core.wmem_default = 262144
net.core.wmem_max = 262144


Pay attention to the kernel.shmmax, shmmni, shmall (shared memory), fs.file-max (max filehandles), kernel.sem (min, max semaphores), and kernel.hostname + domainname.

Swap space
You need at least double your machine's memory as swap space. To check your memory you can do:
grep MemTotal /proc/meminfo

To check your swap space:
cat /proc/swaps
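The "twice the RAM" rule of thumb is easy to compute from the MemTotal value (which is reported in kB). A small worked example, assuming a 1GB machine:

```shell
# MemTotal of 1048576 kB (1 GB) calls for at least double that as swap.
mem_kb=1048576
echo "$(( mem_kb * 2 )) kB"   # -> 2097152 kB
```

On the machine itself you would take mem_kb from the grep on /proc/meminfo above instead of hard-coding it.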

You can add an extra drive and format it as swap space. To add temporary swap space you can use the following procedure, for example to add 1GB of swap:

su - root
dd if=/dev/zero of=/u01/swapfile01 bs=1k count=1000000
chmod 600 /u01/swapfile01
mkswap /u01/swapfile01
swapon /u01/swapfile01


To remove it again:
su - root
swapoff /u01/swapfile01
rm /u01/swapfile01


Temp space

For the Temp space, if /tmp does not have enough space you can do:
export TEMP=/           # used by Oracle
export TMPDIR=/         # used by Linux programs like the linker "ld"

Create Users
I had a Virtual Machine with RHEL4 already installed and a pre-existing oracle user. If you haven't, use the following procedure to add the oracle user:
su - root
groupadd dba          # group of users to be granted with SYSDBA system privilege
groupadd oinstall     # group owner of Oracle files
useradd -c "Oracle software owner" -g oinstall -G dba oracle
passwd oracle
Create Oracle Directories


The following directories are needed for the install, with the specified rights. Check if your filesystems have enough space. A complete installation with a starter database will need about 2.5GB. With the addition of some temp space, I would be on the safe side and reserve at least 5GB.
su - root
mkdir -p /u01/app/oracle/product/9.2.0
chown -R oracle.oinstall /u01

mkdir /var/opt/oracle
chown oracle.dba /var/opt/oracle
chmod 755 /var/opt/oracle


Setting Oracle Environment variables
There are a few settings that are important for installing the database, especially the LD_ASSUME_KERNEL variable, which needs to be set to 2.4.19.
So I created a little environment script oraenv.sh:
export LD_ASSUME_KERNEL=2.4.19   # for RHEL AS 4
export TMP=/u01/oracle/tmp
export TMPDIR=/u01/oracle/tmp
# Oracle Environment
export ORACLE_BASE=/u01/app/oracle
export ORACLE_HOME=$ORACLE_BASE/product/9.2.0
export ORACLE_SID=ORCL
export NLS_LANG=AMERICAN;
export ORA_NLS33=$ORACLE_HOME/ocommon/nls/admin/data
LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib
LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/lib
export LD_LIBRARY_PATH

# Set shell search paths
export PATH=$PATH:$ORACLE_HOME/bin

The ORACLE_HOME dependent variables are merely there to be able to use the database after installing it.
You can run the script by:
. ./oraenv.sh

Do not forget the extra dot '.' in front of it; this causes the parameters that are set to be exported to the calling shell.
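A tiny demonstration of the difference (the script path and variable are made up): a script executed normally runs in a subshell and its exports vanish with it, while a sourced script sets them in the current shell:

```shell
# Create a throw-away script that exports one variable.
printf 'export DEMO_VAR=hello\n' > /tmp/demo_env.sh

# Executed in a subshell: the export does not reach the current shell.
sh /tmp/demo_env.sh; echo "run:     ${DEMO_VAR:-unset}"   # -> run:     unset

# Sourced with the leading dot: the export lands in the current shell.
. /tmp/demo_env.sh;  echo "sourced: ${DEMO_VAR:-unset}"   # -> sourced: hello
```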

Install the database
With this I could install the 9.2.0.4 database with the CDs I got from: http://www.oracle.com/technology/software/products/oracle9i/index.html

Then do not forget to upgrade it to 9.2.0.8. Look for patch 4547809 on Metalink.
During the run of catpatch I got timeout errors on the sys.XMLType and sys.XMLTypePI objects. But checking afterwards, they turned out to be created and valid.

Wednesday 1 October 2008

Soasuite In VMware revisited

This week I encountered a little, nasty problem with my renamed/rehosted application server under VMware.
When I registered ESB services I was not able to create a BPEL partner link on them, since they contained an import URL with the old host name ("localhost.localdomain"). It took me quite a while to find out, and I solved it by changing the appropriate parameter in the ESB_PARAMETER table in the oraesb schema.
I edited my original posting with this new knowledge to have all the steps in one document.

Tuesday 30 September 2008

Tuning Soasuite 10133 and 11gDB under VMWare

In earlier posts I wrote about how to install the Oracle DB11g and SoaSuite 10133 in an Oracle Enterprise Linux based VM. Earlier this year I also wrote how to rename your SoaSuite installation when having renamed your host.

I did in fact a more or less default installation of both the 11g DB and SoaSuite 10133. But since the VM was to run on a 2GB laptop for courses, I gave the VM only 1.6GB of memory. The database was sized so that it claimed about 640MB, and the default J2EE+Webserver+SoaSuite installation of the midtier resulted in two OC4J instances that both had a minimum heap size of 512MB and a maximum of 1024MB. So when starting both the database and SoaSuite I ran out of memory, which resulted in a guest OS that got very befriended with the hard drive (swapping all over the place).

So I took some time to get the system tuned.

The database
The first step was getting the database downsized. Earlier I shrank the sga_max_size to about 470MB, so that was already an improvement of about 170MB.

But that was not enough for me. So what I did was start up an XE database and look at its basic memory settings. For convenience I created a plain init.ora.
For the non-DBAs amongst you: you can do that by logging on as internal with:
sqlplus "/ as sysdba"
having set the ORACLE_HOME and ORACLE_SID:
ORACLE_HOME=/u01/app/oracle/product/11.1.0/db_1
ORACLE_SID=orcl
Once logged on as internal, you can create an init.ora (also called a pfile) with:
create pfile from spfile;
Then you'll find an init.ora in the $ORACLE_HOME/dbs folder.

For an Oracle XE database the most interesting settings I found were:
  • java_pool_size=4194304
  • large_pool_size=4194304
  • shared_pool_size=67108864
  • open_cursors=300
  • sessions=20
  • pga_aggregate_target=70M
  • sga_target=210M
The sga_max_size was not set.
So I changed the 11g database with these settings, created an spfile from the pfile again (create spfile from pfile) and started it again.

My initorcl.ora:
#orcl.__db_cache_size=222298112
#orcl.__java_pool_size=12582912
orcl.__java_pool_size=10M
orcl.__large_pool_size=4194304
....
#orcl.__pga_aggregate_target=159383552
orcl.__pga_aggregate_target=70M
#orcl.__sga_target=478150656
orcl.__sga_target=210M
orcl.__shared_io_pool_size=0
#orcl.__shared_pool_size=234881024
orcl.__shared_pool_size=100M
orcl.__streams_pool_size=0
...
#*.memory_target=635437056
*.open_cursors=300
#*.processes=150
*.sessions=20
...
*.sga_max_size=250M
...

Note that I unset the db_cache_size and memory_target. I also replaced the processes parameter with the sessions parameter, set to 20. These two parameters relate to each other, one being computed from the other.
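For reference, a hedged sketch of that derivation: in several Oracle releases SESSIONS defaults to roughly (1.1 × PROCESSES) + 5, but the exact formula differs per release, so check the Reference guide for yours:

```shell
# Rough default derivation of SESSIONS from PROCESSES (assumption: the
# (1.1 * processes) + 5 rule of older releases; verify for your version).
processes=150
echo $(( processes * 11 / 10 + 5 ))   # -> 170
```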

I found that I had a database of 145MB! I could start the midtier, but then I could not log on myself because the shared pool size was too small. It turned out to be about 64M, while the sga_max_size (which I did not set) was 145M.

I changed my sga_max_size explicitly to 250M and the shared_pool_size to 100M:
SQL> alter system set sga_max_size=250M scope=spfile;
System altered.
SQL> alter system set shared_pool_size=100M scope=spfile;
System altered.
Then restarting the database resulted in a database of 250M:
Total System Global Area 263639040 bytes
Fixed Size 1299284 bytes
Variable Size 209718444 bytes
Database Buffers 50331648 bytes
Redo Buffers 2289664 bytes

That looks better to me.


The MidTier
The changes in the middle tier are a little less complicated. In fact you have to change two settings in opmn. So go to the $ORACLE_HOME/opmn/conf directory of the middle tier.
There you'll find a file called opmn.xml.

In that file look for:
process-type id="home" module-id="OC4J" status="enabled"
Below that you'll find a node with start-parameters, having a data sub-node with id "java-options". In the value attribute of that node, change -ms512M -mx1024M into -ms128M -mx128M. These are the minimum and maximum heap sizes; the home OC4J only needs 128M. It's recommended to give the OC4J its maximum heap size right away at startup, so it does not need to grow.
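For illustration, the relevant fragment of opmn.xml then looks roughly like this. This is an abbreviated sketch: the exact options in your value attribute will differ, so keep the other options you find there and only change the -ms/-mx pair:

```xml
<process-type id="home" module-id="OC4J" status="enabled">
  <module-data>
    <category id="start-parameters">
      <!-- changed from -ms512M -mx1024M; keep your other java options -->
      <data id="java-options" value="-server -ms128M -mx128M"/>
    </category>
  </module-data>
  <!-- remaining elements unchanged -->
</process-type>
```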

Look again for:
process-type id="oc4j_soa" module-id="OC4J" status="enabled"
Find the same start-parameters and make the same change, but now give it heap sizes of 384M: -ms384M -mx384M.

Conclusion
This gave me a VM with a SoaSuite and 11g database that runs quite fine in a 1.6GB VM.
These settings are just "wet-thumb" values. I must stress that these are not valid values for a production environment, and they might not even be valid for a regular development environment with a significant number of developers.

But in my case it all fits, with even 40MB of memory left. According to "top" my VM is not swapping!

Wednesday 24 September 2008

Connecting to Oracle DB 11g in OEL50 under VM takes a long time

I previously described how to install 11g Database. I found that it took a long time when connecting to the database from outside the VM, using sqldeveloper, sqlplus, Pl/Sql Developer.

I use a host-only and a bridged adapter in the VM. It took me a while, but I found that it has something to do with a DNS lookup that the database does during the connection process. In my /etc/resolv.conf a reference to the host as name server was registered. But on my host I don't have a DNS server; it should get it from the physical/bridged network, or not at all.

I resolved it by commenting out the lines in /etc/resolv.conf (place a semicolon before each line). Also change your network settings in the network devices tool in Oracle Enterprise Linux: on one of the tabs you'll find an entry for the preferred DNS server, and a domain. You should clear those fields.

Pl/Sql Developer under Wine 2

In my previous post I mentioned that running Pl/Sql Developer under wine goes fine, but that it does not show the icons on the buttons. Indeed that was the case for me. But somehow they magically appeared.

However, the other "minuses" still stand.

Friday 19 September 2008

Pl/Sql Developer under Wine

This week I installed Pl/Sql Developer under wine. It was pretty easy. To have it working you need to install an Oracle Instant Client. You could probably install a complete Oracle Client, but the Instant Client will do, and it gives you just enough to run Pl/Sql Developer.

I unzipped the 10gR2 instant client (Windows 32-bit) into /home/makker/.wine/drive_c/oracle/product/instantclient_10_2.
I also put the SQL*Plus addendum there, but that did not work.
Then you place a valid tnsnames.ora in the subdirectory /home/makker/.wine/drive_c/oracle/product/instantclient_10_2/Network/Admin.

Install Pl/Sql Developer (I just ran the installer under wine). When starting Pl/Sql Developer you first have to go to the preferences, connection section (menu => Tools => Preferences). There you have to point Pl/Sql Developer to your Instant Client in a Windows way: C:\oracle\product\instantclient_10_2\oci.dll. After doing that, restart Pl/Sql Developer; then it will load the oci.dll.

Then, provided that you have a database running and a valid tnsnames.ora, you can connect to your database. In my case the connection to my 11g database in the VM is very, very slow. I haven't figured out yet what causes it, but I got the same behaviour using SqlDeveloper.

I'm very pleased having Pl/Sql developer running under Linux. There are however a few points to figure out and/or improve:
  • Buttons in the button bar are not shown.
  • Some features just don't work, like the macro-recorder.
  • Sometimes when switching applications, the Pl/Sql Developer pane is not repainted correctly or at all. I have to toggle "shade" (right-click in the taskbar) on and off to get Pl/Sql Developer shown again.
But for me Pl/Sql Developer is the most productive tool for the job. So I accept these "instabilities" under wine.

Tuesday 16 September 2008

Integrating Hyperion DRM 9.3.2 with SoaSuite 10133

My current customer is implementing Hyperion DRM. DRM stores organizational hierarchies, and these hierarchies have to be exported to several output formats for several client systems.

We advised using Oracle SoaSuite for the integration instead of building exports for every single target system. But how do you get the exports out of DRM into SoaSuite? The original idea was to have a scheduler call DRM to run the export to an (XML) file and have SoaSuite poll that file.

I've looked into the integration possibilities of DRM. DRM was said to have webservices, but we could not find out upfront whether those webservices are just Soap services (without WSDLs) or "real" webservices described with WSDLs. It turns out that DRM has WSDL-described webservices. The URL to the WSDL should be something like:
http://--drm-server--/mdm_ntier/--service--.asmx?WSDL
Where --drm-server-- is the host where the DRM web server is running, and --service-- is the particular service. So something like:
http://winxp.darwin-it.local/mdm_ntier/SessionMgr.asmx?WSDL

It turns out that the DRM webservices have multipart message types. The ESB of Oracle SoaSuite 10133 does not like them; BPEL PM seems not to have a problem with them. But the WSDLs use imported schemas from:
<s:import namespace="http://schemas.xmlsoap.org/soap/encoding/" >
<s:import namespace="http://schemas.xmlsoap.org/wsdl/" >
These need an internet connection to be validated and used in JDeveloper. What you could do is put them on a local webserver and modify the WSDLs accordingly.

But having solved that, it also turns out that BPEL gets a response message that does not seem to conform to the WSDL. The message I got was:
"trailing block elements must have an id attribute"
So I created a webservice proxy on the WSDL, and that works fine. Using the HTTP Analyzer of JDeveloper I intercepted the response, and although the webservice proxy did accept the response and apparently is content with the WSDL, it seems to me that the response does not match the WSDL. And BPEL PM agrees with me. Or better: I agree with BPEL PM.

So that did not get us any further. I've learned from a contact at Oracle Development that the webservices from DRM are indeed not supported by BPEL PM. In DRM 11 changes were made to the webservices so that some simpler ones should be accepted by BPEL PM. But apparently others still aren't.

One could wonder how this can be. Aren't webservices invented precisely for technology-agnostic and flexible integration? I read somewhere that the way the DRM WSDLs are created is quite common in the .Net world. DRM is built in Delphi (I was really surprised to see such a high-end business application built with Delphi, since Turbo Pascal was my favourite programming environment at college/university).

DRM also delivers Java APIs. To use them you need an SDK from DRM, which can be downloaded here.

We created a Java class using the examples in the API documentation that is delivered with DRM; look for mdm_ntier_api_932.pdf. Unfortunately I could not find this information on OTN for you.
This Java class connects to the master data management server. It then looks up an export, starts a job and gets the output into a string, which is then returned. Our business guys defined some standard exports that deliver the data as an XML message.

On top of this Java class we created a webservice, using the webservice generation wizard of JDeveloper. Actually, since the APIs are a layer on top of the DRM webservices (in fact they are webservice proxies), we created a webservice on several DRM webservices.
This webservice is then callable from BPEL PM.
When deploying the webservice to the application server, you should also deploy the jars from the mdm_ntier_apis SDK, with your deployment descriptor. I tried to upload them as separate shared libraries in OC4J, but that didn't work.

The exported XML message is parsed in BPEL PM using the ora:parseEscapedXML function. To be able to transfer it, we had to add a namespace to the root element, using:
concat(substring-before(bpws:getVariableData('Receive_Export_onResult_InputVariable','payload','/ns1:RunExportProcessResponse/ns1:result'),'<MDMMetadata'),'<MDMMetadata xmlns="http://xmlns.customer.com/drm" ',substring-after(bpws:getVariableData('Receive_Export_onResult_InputVariable','payload','/ns1:RunExportProcessResponse/ns1:result'),'<MDMMetadata'))
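Outside BPEL, the effect of that concat() can be illustrated with a one-line substitution (the sample document below is made up; the namespace is the one from the expression above):

```shell
# Splice a default namespace into the root element, mirroring the concat()
# expression: text up to <MDMMetadata, the tag plus xmlns, then the rest.
printf '<MDMMetadata><Hier/></MDMMetadata>\n' \
  | sed 's|<MDMMetadata|<MDMMetadata xmlns="http://xmlns.customer.com/drm" |'
# -> <MDMMetadata xmlns="http://xmlns.customer.com/drm" ><Hier/></MDMMetadata>
```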


In the Workflow Development Kit that can be downloaded via the link above, the same approach is used. So apparently Oracle also found that BPEL PM does not support the DRM webservices, and states that this is the way to go.

I did not put any code in this blog entry, but most of the Java code I took from the examples. Except for transferring the export output into a string, but that too is quite straightforward. And generating the webservice is just running the wizard with the defaults.