Thursday, 12 January 2017

Property expansion under Windows

A little over a year ago I wrote an article about automatic scripted installation of the SOA and BPM QuickStarts. One thing I wanted to improve was to be able to dynamically expand properties in the response file. I had already found out how to do that under Linux, but most QuickStart installations are done under Windows. So how to replace properties in a text file under Windows?

I found this StackOverflow question and favored the PowerShell option, so I modified that answer to be able to expand the ORACLE_HOME property in the response file.

I modified the response file that I got from the manual installation wizard of the BPM QuickStart as follows:

Response File Version=


#Set this to true if you wish to skip software updates

#My Oracle Support User Name

#My Oracle Support Password

#If the Software updates are already downloaded and available on your local system, then specify the path to the directory where these patches are available and set SPECIFY_DOWNLOAD_LOCATION to true

#Proxy Server Name to connect to My Oracle Support

#Proxy Server Port

#Proxy Server Username

#Proxy Server Password

#The oracle home location. This can be an existing Oracle Home or a new Oracle Home
ORACLE_HOME=${ORACLE_HOME}

I saved this as bpmqs1221_silentInstall.rsp.tpl. Then I created a simple command file called expandProperties.bat with the following content:
set ORACLE_HOME=c:\oracle\jdeveloper\12212_bpmqs
set QS_RSP=bpmqs1221_silentInstall.rsp
set QS_RSP_TPL=%QS_RSP%.tpl
powershell -Command "(Get-Content %QS_RSP_TPL%) -replace '\$\{ORACLE_HOME\}', '%ORACLE_HOME%' | Out-File -encoding ASCII %QS_RSP%"

This does the following:
  1. Set the ORACLE_HOME environment variable. This is what I also do in the QuickStart install script. The content of this variable should replace the '${ORACLE_HOME}' property in the response file template.
  2. Set QS_RSP to the Response File name
  3. Set QS_RSP_TPL to the Response File Template name
  4. Call PowerShell with a command line command using the '-Command' argument
    1. Read the template file denoted by %QS_RSP_TPL% using the Get-Content cmdlet.
    2. Replace the occurrences of the string '${ORACLE_HOME}' with the value of the corresponding environment variable %ORACLE_HOME%. The search string is interpreted as a regular expression, so the special characters $, { and } need to be escaped with a backslash.
    3. 'Pipe' the output to the output file denoted by %QS_RSP%. It's important to add '-encoding ASCII' for the encoding; otherwise the file is apparently encoded in a UTF variant that the Oracle installer does not comprehend.
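The same expansion can be sketched outside PowerShell as well. Here is a minimal Python illustration (the function name and the use of string.Template are my own, not part of the install script); conveniently, ${NAME} is exactly Python's Template placeholder syntax:

```python
from string import Template

def expand_properties(text: str, env: dict) -> str:
    """Expand ${NAME} placeholders in a template string from a mapping,
    leaving unknown placeholders untouched (safe_substitute)."""
    return Template(text).safe_substitute(env)

# Hypothetical template content, mirroring the response file line:
tpl = "ORACLE_HOME=${ORACLE_HOME}"
print(expand_properties(tpl, {"ORACLE_HOME": r"c:\oracle\jdeveloper\12212_bpmqs"}))
# prints: ORACLE_HOME=c:\oracle\jdeveloper\12212_bpmqs
```

When writing the result to a file, opening it with encoding="ascii" mirrors what '-encoding ASCII' does for Out-File.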
Run this as:
Microsoft Windows [Version 10.0.14393]
(c) 2016 Microsoft Corporation. All rights reserved.


c:\Data\Zarchief\Stage\FMW\bpm12cR2QS>set ORACLE_HOME=c:\oracle\jdeveloper\12212_bpmqs

c:\Data\Zarchief\Stage\FMW\bpm12cR2QS>set QS_RSP=bpmqs1221_silentInstall.rsp

c:\Data\Zarchief\Stage\FMW\bpm12cR2QS>set QS_RSP_TPL=bpmqs1221_silentInstall.rsp.tpl

c:\Data\Zarchief\Stage\FMW\bpm12cR2QS>rem powershell -Command "(gc bpmqs1221_silentInstall.rsp.tpl) -replace '\$\{ORACLE_HOME\}', 'c:\oracle\jdeveloper\12212_bpmqs' | Out-File -encoding ASCII bpmqs1221_silentInstall.rsp"

c:\Data\Zarchief\Stage\FMW\bpm12cR2QS>powershell -Command "(Get-Content bpmqs1221_silentInstall.rsp.tpl) -replace '\$\{ORACLE_HOME\}', 'c:\oracle\jdeveloper\12212_bpmqs' | Out-File -encoding ASCII bpmqs1221_silentInstall.rsp"


This creates a new file named bpmqs1221_silentInstall.rsp, where the content of the last line is changed to:

#The oracle home location. This can be an existing Oracle Home or a new Oracle Home
ORACLE_HOME=c:\oracle\jdeveloper\12212_bpmqs

Just what I need...

Recursion in XSLT

Yesterday I got involved in a question about handling a list of input documents where there is a start value, and for every next element this start value has to be increased until an end value is reached.

In more advanced XSLT cases you may end up in situations that aren't solvable using a for-each. Why not? Well, within a for-each you can't iteratively re-calculate a variable. Like a programming language such as Scala, XSLT is in fact a so-called 'functional' language. Maybe not in the strict sense of the definition (I haven't checked), but one of the characteristics is that in a functional language, or at least in Scala, variables are immutable. In XSLT, once assigned a value, a variable cannot be changed. So a variable in XSLT is only variable at first assignment.

How to solve this? Well, with recursion. This morning I googled on XSLT and recursion and found this nice explanation.

Recursion is a paradigm in which a certain object or functionality is repeated by itself.

In computer programming this is done by defining a function with a specific implementation of functionality, where before or after that implementation the function calls itself with an altered set of input values, until a certain end-condition is reached. It's one of the first computer algorithms I learned, in a language such as Pascal. A typical example is calculating the factorial of a natural number.
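As a sketch of that pattern (plain Python for illustration, not XSLT or Pascal): the function calls itself with an altered input until the end-condition is reached, and all state travels through parameters instead of through a mutable variable, which is exactly the restriction a recursive XSLT template works under.

```python
def factorial(n: int) -> int:
    """Recursively compute n! for a natural number n."""
    if n <= 1:                   # end-condition: stop the recursion
        return 1
    return n * factorial(n - 1)  # call self with an altered input

print(factorial(5))  # prints: 120
```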

I use recursion in XSLT regularly, for instance in a template that replaces a string in another string until all occurrences have been replaced. I wrote about that in 2010 already.

In this blog post I'll omit providing a full XSLT example, since I already provided a few in the articles linked above. I just want to draw your attention to this concept in XSLT. But if you'd like to read more on XSLT and recursion, you might be interested in this blog article.

Monday, 2 January 2017

A good year for the Clouds

At last we stumbled into a new year. New rounds, new chances, as we say in Dutch. I ended 2016 and start 2017 on a project that involves the implementation of several Oracle Cloud products, where I'm responsible for ICS and PCS. But I also did several on-premise projects. So, from the perspective of a current Integration/Process Cloud implementation, what to think of Oracle's Cloud plans?

Last fall, one of my managers came home from a presentation where Oracle's on-premise products were declared to be deceased. According to the presenter Oracle announced that they will focus only on Cloud and no new functionality would be introduced into the on-premise products.

But honestly, I don't believe any of the 'SOA/BPM Suite on-premise is dead'-announcements. And if I turn out to be wrong in the end, I tend to call it a foolish decision.

I find it perfectly reasonable that Oracle mainly focuses on Cloud these days. Of course Oracle needs to grow into a first-class Cloud provider. Thus it makes perfect sense that Oracle decides to bring out new functionality first in PCS and ICS, and possibly SOACS. For every piece of functionality they need to think about how, in terms of UI, to provide it in the Cloud product as well. But keep in mind that PCS runs on the exact same process engine as BPEL and BPMN, and that the exact same holds for the Business Rules engine. So changes in those components eventually benefit the on-premise counterparts.

I remember that when I joined Oracle almost 20 years ago, I learned that Larry Ellison had declared that we don't need the guys with the glue-guns, since Oracle had E-Business Suite, which caters for everything a company might need in a software system. But customers turned out to have very plausible and good reasons (at least in their own opinion... 😉) to choose E-Business Suite for financials and Siebel for CRM, or other solutions for particular functional areas. And those products need to be integrated. Oracle eventually understood that and provided InterConnect back then, and later introduced BPEL Process Manager and SOA Suite.

Nowadays more and more enterprises are looking at Cloud solutions for parts of their business, but will keep using on-premise solutions for other parts. Some companies still need to keep their datacenters for parts of their IT. Possibly because of the need for several other products that are only available in an on-premise variant. Maybe for regulatory reasons. Or maybe for other very plausible reasons, for instance those that are agreed upon on the golf course or tennis court...

So I think Oracle can't get around providing on-premise versions of Fusion Middleware.

What I do think, and really hope, is that on-premise and cloud will grow towards each other. Years ago (back in 2009, I still have emails to back it up) I was in a conversation with a Product Manager and SOA Architect about how BPM in the cloud could work while connecting to services in the data center. I envisioned a service like LogMeIn's, where a local agent connects to the services in the Cloud that provide control, introspection and call-outs to the local services.

Now in ICS we have the concept of on-premise agents, which is actually a lightweight WebLogic that allows you to connect Cloud integrations with on-premise services. Although this agent is a good thing, along the lines of my earlier visions, I think it can be better. What about a cloud/on-premise-integration layer in the Fusion Middleware infrastructure?

In the PCS configuration panes you can provide the URLs of your Integration Cloud Service and Document Cloud Service subscriptions. That enables you to introspect your integrations and to integrate seamlessly with DOCS. I'd like to see that with FMW Infrastructure (the WebLogic+ installation which is the basis for SOA Suite, OSB, etc.) you get a configuration pane in which you can provide the details of your cloud subscriptions. The FMW Infrastructure could have functionality similar to the current ICS agents: it could connect independently to the particular cloud services and register itself there. You could register WSDLs for local third-party services. But if you install OSB or SOA Suite or other FMW components, it would natively get information on all the services that are deployed to that FMW environment, and from ICS or PCS you could introspect those. When you connect with JDeveloper to that environment, you could introspect the services and processes in ICS and PCS, to call them from your SOA/BPM composites or OSB services, just like you would with local composites or services. That would give a convenient and transparent experience.

If Oracle could build that into the FMW Infrastructure the boundaries would fade, and FMW would grow into the sky, and the clouds would come down to earth.  I'd like that. I hope a Product Manager of Oracle will pick this up. I would happily exchange thoughts about this.

But above all I hope you have a great 2017. Enjoy this new year with all its great potential.

Thursday, 1 December 2016

Note to myself: when handling large payloads

Today I stumbled on a question in the communities about handling large payloads in BPEL/XSLT. Although I know that SOASuite from 11g onwards can do paging of XML to disk, I never had the need. However, you could need it from time to time. And it's good to know how to do it.

It's noted on My Oracle Support with Doc ID 1327970.1. Which refers to the 11g documentation on Managing Large Documents and Large Numbers of Instances.

Learning all the time....

Wednesday, 2 November 2016

XMind 8 is published

I have been a user of XMind for several years already. It is a very rich, free mind-mapping tool. I find the free version very useful.

Today I found out that the new XMind 8 is published. You can get it on


Monday, 31 October 2016

OSB Thread handling recommendations

Over the years, I have gotten questions on the performance of OSB quite a few times. A few years ago, on a project, I came across a set of recommendations on work managers for OSB. Many developers know that, for instance, Service Callouts are blocking activities, and that you should use work managers to solve performance problems resulting from the use of those blocking activities.

If you do nothing about dispatch policies in OSB proxy or business services, all work is done in the Default WorkManager. But since some constructions, not only Service Callouts, need other threads to finish the job, you can get stuck threads: the work manager's thread pool gets exhausted with all, or nearly all, threads waiting, leaving no threads to pick up the work that would free the others.

More on this in the terrific blog of Antony Reynolds on the subject: Following the thread.

By the way, I've seen that some people use Service Callouts by default for almost everything. But the default should be the Routing node with a Route action. Even if a service needs to gather information from several sources, you can have only one Route node: pick a 'driving service' to use in the Routing node, just like choosing a 'driving table' when creating a query over several tables. Then use Service Callouts only for the extra enrichment.

From that earlier project I got the following recommendations, based on the blog of Antony Reynolds. Since I refer back to them regularly, I think it is good to share them.

For OSB to work optimally and to prevent flooding WebLogic's thread pool with hogged/stuck threads, you should create 3 Fair Share Request Classes in a ratio of 33/33/33, to distinguish different "kinds of thread pools".

Then create 4 workmanagers:

  • FTPPollingWorkManager: file-based inbound OSB proxy services, polling a file system (or FTP). Uses FairShareReqClass-1 and ignores stuck threads.
  • InboundWorkManager: inbound OSB proxy services that are not polling file-based. Also uses FairShareReqClass-1, not ignoring stuck threads.
  • CallOutWorkManager: Service Callout operations in an OSB proxy. Uses FairShareReqClass-2.
  • DeliveryWorkManager: outbound business services in OSB. Uses FairShareReqClass-3.
Use these as a dispatch policy in the particular Proxy/Business Services. In OSB 11g this is:
I don't have screenshots of 12c at hand, but the idea would be the same there. I haven't heard that the thread model in 12c is architecturally different.
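For reference, in WebLogic these artifacts end up in the domain's config.xml under the self-tuning section. A sketch of how the request classes and work managers could look (the element names follow WebLogic's config.xml schema, but treat the exact fragment as illustrative; normally you would create these via the Admin Console or WLST rather than editing config.xml by hand):

```xml
<self-tuning>
  <!-- Three request classes in a 33/33/33 fair-share ratio -->
  <fair-share-request-class>
    <name>FairShareReqClass-1</name>
    <fair-share>33</fair-share>
  </fair-share-request-class>
  <fair-share-request-class>
    <name>FairShareReqClass-2</name>
    <fair-share>33</fair-share>
  </fair-share-request-class>
  <fair-share-request-class>
    <name>FairShareReqClass-3</name>
    <fair-share>33</fair-share>
  </fair-share-request-class>

  <!-- Work managers referencing the request classes by name -->
  <work-manager>
    <name>FTPPollingWorkManager</name>
    <fair-share-request-class>FairShareReqClass-1</fair-share-request-class>
    <ignore-stuck-threads>true</ignore-stuck-threads>
  </work-manager>
  <work-manager>
    <name>InboundWorkManager</name>
    <fair-share-request-class>FairShareReqClass-1</fair-share-request-class>
  </work-manager>
  <work-manager>
    <name>CallOutWorkManager</name>
    <fair-share-request-class>FairShareReqClass-2</fair-share-request-class>
  </work-manager>
  <work-manager>
    <name>DeliveryWorkManager</name>
    <fair-share-request-class>FairShareReqClass-3</fair-share-request-class>
  </work-manager>
</self-tuning>
```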

Wednesday, 19 October 2016

Get the hostname of the executing server in BPEL

This week I got involved in a question on the Oracle Forums about getting the hostname of the server executing the BPEL process. In itself this is not possible in BPEL. Also, if you have a long-running async process, the process gets dehydrated at several points (at a receive, wait, etc.). After an incoming signal, another server could process it further, so you can't be sure that one server will process it to the end.

However, using Java, you can get the hostname of an executing server quite easily. @AnatoliAtanasov suggested this question on StackOverflow. I thought it would be fun to try this out.

Although you can opt for creating an Embedded Java activity, I used my earlier article on SOA and Spring Contexts to have it in a separate bean. By the way, in contrast to my suggestions in that article, you don't have to create a separate Spring context for every bean you use.

My Java bean looks like:
package nl.darwinit.soasuite;

import java.net.InetAddress;
import java.net.UnknownHostException;

public class ServerHostBeanImpl implements IServerHostBean {
    public ServerHostBeanImpl() {
    }

    public String getHostName(String hostNameDefault) {
        String hostName;
        try {
            InetAddress addr = InetAddress.getLocalHost();
            hostName = addr.getHostName();
        } catch (UnknownHostException ex) {
            System.out.println("Hostname can not be resolved");
            hostName = hostNameDefault;
        }
        return hostName;
    }
}

The interface class I generated is:
package nl.darwinit.soasuite;

public interface IServerHostBean {
    String getHostName(String hostNameDefault);
}

Then I defined a Spring Context, getHostNameContext, with the following content
<?xml version="1.0" encoding="UTF-8" ?>
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:util="http://www.springframework.org/schema/util"
       xmlns:jee="http://www.springframework.org/schema/jee"
       xmlns:lang="http://www.springframework.org/schema/lang"
       xmlns:aop="http://www.springframework.org/schema/aop"
       xmlns:tx="http://www.springframework.org/schema/tx"
       xmlns:sca="http://xmlns.oracle.com/weblogic/weblogic-sca"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="http://xmlns.oracle.com/weblogic/weblogic-sca META-INF/weblogic-sca.xsd">
    <!--Spring Bean definitions go here-->
    <sca:service name="GetHostService" target="ServerHostBeanImpl" type="nl.darwinit.soasuite.IServerHostBean"/>
    <bean id="ServerHostBeanImpl" class="nl.darwinit.soasuite.ServerHostBeanImpl"/>
</beans>
After wiring the context to my BPEL the composite looks like:

Then, deploying and running it, gives the following output:

Nice, isn't it?