Wednesday, 20 February 2019

Generate a formatted guid from database - Use Snippets

A very simple, quick post today. I'm re-engineering a few Java-based mock webservices into a SOA Suite/BPEL service.

Some of those generate a SOAP fault when a message id contains "8888", for instance.
I'd like to generate a GUID-based message id that is formatted in groups of four digits.

Of course there are loads of methods to do that. For instance, the Oracle database has had a sys_guid() function for years. It generates a guid like '824F95ECCB1C0EB7E053120B260A2D0F'.

But I'd like it in the form '824F-95EC-CB1C-0EB7-E053-120B-260A-2D0F'. It can easily be done by concatenating substr() calls. But you do not want to re-generate the guid with every 4-digit substr().

So, I put it into the following select:
with get_guid as (select sys_guid() guid
from dual)
select guid
, substr(guid, 1, 4)||'-'||substr(guid, 5, 4)||'-'||substr(guid, 9, 4)||'-'||substr(guid, 13, 4)||'-'||substr(guid, 17, 4)||'-'||substr(guid, 21, 4)||'-'||substr(guid, 25, 4) ||'-'||substr(guid, 29, 4)guid_formatted
, length(guid) guid_length
from get_guid;
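Should you need the same id format on the Java side of the mock services, a plain-Java sketch without the database round-trip could look like this (the class and method names are mine, not part of the services):

```java
import java.util.UUID;

public class GuidFormatter {

    // Insert a dash after every group of four hex characters, except the last one.
    public static String format(String guid) {
        return guid.replaceAll("(....)(?!$)", "$1-");
    }

    // Mimic sys_guid(): 32 uppercase hex characters, then format in groups of four.
    public static String newFormattedGuid() {
        String raw = UUID.randomUUID().toString().replace("-", "").toUpperCase();
        return format(raw);
    }
}
```

The regex replaces each run of four characters that is not at the end of the string, so the generated id is 32 hex characters plus 7 dashes.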

What might be less obvious for the regular SQL developer is the WITH clause. It is explained excellently by Tim Hall (and although around since Oracle 9.2 already, it was only recently put in my personal skill-box). This allows me to call sys_guid() once in this query and reuse the value in the three columns.

Although this is a very simple query, it might come in handy more often. And since I expect to be around this customer for a longer period, I want to save it as a snippet.

A feature that has been around in SQL Developer for years is snippets. You can make them visible through the View menu:
I tabbed it away to the left gutter, to have it out of my way, but still in reach:
Create and edit snippets through the indicated icons. You can create your own categories by just entering a new category name. Name it, provide a tooltip and paste the snippet. Easy peasy.

You'll find quite a number of predefined snippets categorized neatly.

If you have gathered several of those snippets like me, and maybe want to take them to other assignments, you might feel the need to back them up.

To reuse a snippet just drag and drop them from the list into your worksheet.

The snippets are stored in UserSnippets.xml in the roaming user profile of SQL Developer:
On Windows that is something like 'c:\Users\makker\AppData\Roaming\SQL Developer\'. Just backup/copy the file. Here you see the CodeTemplate.xml file as well, which contains the shorthand acronyms/aliases for often-typed pieces of code that you can create too.

By the way, googling "That Jeff Smith Snippets" brought me this archived article (yes, snippets are that old), with a link to this nice, still active library of snippets.

Friday, 15 February 2019

Upgraded my Virtualization environment

A few weeks ago VirtualBox 6.0.4 was released, a minor release of the recent major release 6.0. Although already announced by Tim ~Oracle Base~ Hall, I had not upgraded yet; I was still on 5.2.x. The VirtualBox change log can be found here. There are some interesting improvements. For instance, I'm curious to see what we can expect from the Oracle Cloud integration. And on several points, like shared folders, the performance is improved.

The UI is refreshed. I like the separate Tools bar with quick buttons to Import, Export and create new VM's. But, since I work with Vagrant more and more, I will see this screen less and less.

Vagrant also has a new version since the beginning of January, and I upgraded to 2.2.3. The change log can be found here.

All seem to function fine together. With my recently upgraded MobaXterm 11.1 I can start my JDeveloper 12c from the started VM perfectly.
The VM was suspended with VBox 5.2.x and Vagrant 2.2.2, and started with the latest greatest. No problems at all.

By the way, VMWare Player has had VMWare Unity and VirtualBox the Seamless mode for years. It allows you to start your apps in the VM and run them as if they were separate windows on your host. Years ago, when I used VMWare Player, I was quite impressed by it. But I never got used to the VBox Seamless mode. Nowadays my favorite way of working is to start the VM without UI (set the vb.gui property to false in your Vagrantfile), connect to it using MobaXterm and start the app (JDeveloper, for instance). The X Server implementation of MobaXterm will take care of the rest. Works like a charm!
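In Vagrantfile terms that boils down to something like the following sketch (the box name is just a placeholder):

```ruby
Vagrant.configure("2") do |config|
  config.vm.box = "ol7-example"          # hypothetical box name
  config.vm.provider "virtualbox" do |vb|
    vb.gui = false                       # boot the VM headless
    vb.memory = 8192
  end
end
```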

Wednesday, 13 February 2019

KafkaSeries: Starting KafkaServers in Java - Implementing the Observer pattern ... again

In my previous article I explained how I start a ZooKeeper Server (potentially more of them) in Java using the Observer pattern. As promised, in this article I will explain how I implement the starting of KafkaServers in about the same way. Again, using the Observer pattern.

In principle we need one ZooKeeper, although you can run multiple instances in a high-availability setup. I still have to figure that out, by the way.

But we can have multiple KafkaServers, and that makes sense. You might remember that I'm planning to use Kafka in a Weblogic environment, where you can have multiple Managed Servers (for instance OSB or SOA) that run side-by-side in a cluster, possibly on multiple machines. You probably want the Kafka clients (consumers & producers) to connect to the local instance; I would. But they should work together, exchanging messages, so you can track events that originated on the other instance.

So I implemented a KafkaServerDriver extending the Observable class the same way as the ZooKeeperDriver in my previous article (in fact, I copied it). I changed it in a way that it can start multiple instances of KafkaObserver.

So, let me go over the particular methods again.

    /**
     * Run from Kafka server properties.
     * @param ksProperties Properties to use.
     * @throws IOException
     */
    public void runFromProperties(Properties ksProperties) throws IOException {
        final String methodName = "runFromProperties";
        log.start(methodName);
        log.info(methodName, "Starting server");
        KafkaConfig config = KafkaConfig.fromProps(ksProperties);
        //VerifiableProperties verifiableProps = new VerifiableProperties(ksProperties);
        Seq reporters = new ArraySeq(0);
        // Seq reporters = (Seq) KafkaMetricsReporter$.MODULE$.startReporters(verifiableProps);
        KafkaServer kafkaServer = new KafkaServer(config, new SystemTime(), Option.apply("prefix"), reporters);
        setKafkaServer(kafkaServer);
        kafkaServer.startup();
    }
This is essentially the method to start a Kafka Server. It begins with creating a KafkaConfig object from a plain java.util.Properties object. Again I created my own KafkaServer Properties class that extends java.util.Properties. In the ZooKeeper article I explained that I needed a few extra methods to get int-based properties or to default a property based on the value of another property. In this case another reason is that I want to be able to differentiate between KafkaServers, each having their own property file. We'll get into that later on.
The KafkaServer(s) allow for injecting MetricsReporters that can report runtime behavior of the particular KafkaServer in a desired way. I did not get that to work in my JDeveloper project, since these are Scala objects that JDeveloper got confused by, so to speak. So, in this version I provide an empty reporters array.

Then we create a new KafkaServer object. The constructor expects the following parameters.
  • config: the KafkaConfig object, created from the properties.
  • new SystemTime(): a new org.apache.kafka.common.utils.SystemTime object.
  • Option.apply("prefix"): Option is the Scala way of representing an optional value (Kafka is built in Scala). The value "prefix" is used to give a name to the Thread the KafkaServer will run in.
  • reporters: a list of reporters that can be provided to the KafkaServer, to monitor it.
Note, by the way, that the KafkaServer apparently spawns a thread itself, to which it gives that name. In our Observer pattern we'll put the KafkaServer in our own Thread.

To get a hold of the instantiated KafkaServer, we set it in our private attribute, and then startup the server.

KafkaServerDriver Properties 

We can have multiple KafkaServers running in our environment. We could have multiple on the same host, or distributed over multiple hosts. Each of them will have its own property file since, especially when running on the same host, they need at least their own broker id, and also their own port and data/log folders.

To be able to differentiate between the different Kafka Servers and define which of them should be started on the particular host, I introduced my own KafkaServerDriverProperties file.
It looks like:
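Reconstructed here from the description below (the property file names are assumptions):

```properties
kafkaservers=server0,server1
server0.propertyfile=server0.properties
server0.startupEnabled=true
server1.propertyfile=server1.properties
server1.startupEnabled=false
```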

This defines a list of kafkaservers (server0 and server1 in this example) and then for each of those a list of attributes. Of importance are the properties:
  • <server-name>.propertyfile: naming a copy of the file that is used for this server. It is loaded from the classpath, so only the name should be provided.
  • <server-name>.startupEnabled: should the server be started on this host (true or false)?
To work with this conveniently I added another properties class: KafkaServerDriverProperties. An object of this class is fetched from PropertiesFactory.getKSDProperties(), where it is instantiated based on the property file loaded from the classpath.
It transforms the comma-separated list into a List object, which enables you to iterate over it. And for each servername on the list it gets the propertyfile and startupEnabled properties and puts them, wrapped in a Properties object, into a HashMap, keyed by servername. The getServerProperties(String serverName) method enables you to fetch those properties for a certain serverName.
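A sketch of how such a class could do that transformation. The class and accessor names follow the description above; the rest is my assumption, not necessarily the code in my GitHub sources:

```java
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Properties;

public class ServerDriverProps {
    private final List<String> kafkaServerList;
    private final Map<String, Properties> serverPropertiesMap = new HashMap<>();

    public ServerDriverProps(Properties source) {
        // Turn "server0,server1" into an iterable list.
        kafkaServerList = Arrays.asList(source.getProperty("kafkaservers", "").split(","));
        for (String serverName : kafkaServerList) {
            // Wrap each server's propertyfile and startupEnabled settings.
            Properties serverProps = new Properties();
            serverProps.setProperty("propertyfile",
                                    source.getProperty(serverName + ".propertyfile", ""));
            serverProps.setProperty("startupEnabled",
                                    source.getProperty(serverName + ".startupEnabled", "false"));
            serverPropertiesMap.put(serverName, serverProps);
        }
    }

    public List<String> getKafkaServerList() {
        return kafkaServerList;
    }

    public Properties getServerProperties(String serverName) {
        return serverPropertiesMap.get(serverName);
    }
}
```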

Observing the KafkaServer Observable

 Having the above in place, the KafkaServerDriver Observable can be implemented with the ZooKeeperDriver as an example. But, since we want to be able to fire up multiple KafkaServers, this is slightly more complicated.


The start method within the KafkaServerDriver looks like:
    /**
     * Start KafkaServers.
     */
    public void start() {
        final String methodName = "start";
        for (String kafkaServerName : ksdProperties.getKafkaServerList()) {
            log.debug(methodName, "Start KafkaServer: " + kafkaServerName);
            addKafkaServer(kafkaServerName);
        }
    }

It loops over the server names from the KafkaServerList in the KafkaServerDriverProperties. For each listed servername it will add a KafkaServer.


This method has some overloaded variants. One parameterless, that loads the default file from the class path and calls the variant that takes in a properties parameter.

But let's start with the addKafkaServer(String) variant:
    /**
     * Add a KafkaServer.
     * @param kafkaServerName
     */
    public void addKafkaServer(String kafkaServerName) {
        final String methodName = "addKafkaServer(String)";
        try {
            Properties serverProperties = ksdProperties.getServerProperties(kafkaServerName);
            if (serverProperties.getBoolValue("startupEnabled")) {
                log.info(methodName, "Start KafkaServer " + kafkaServerName);
                String serverPropertiesFileName = serverProperties.getStringValue("propertyfile");
                log.debug(methodName, "KafkaServer propertyfile: " + serverPropertiesFileName);
                Properties ksProperties = null;
                if (serverPropertiesFileName != null) {
                    ksProperties = PropertiesFactory.getKSProperties(serverPropertiesFileName);
                } else {
                    ksProperties = PropertiesFactory.getKSProperties();
                }
                addKafkaServer(ksProperties);
            } else {
                log.info(methodName, "KafkaServer " + kafkaServerName + " has startupEnabled == false!");
            }
        } catch (IOException e) {
            log.error(methodName, "Failed to load properties!", e);
            throw new RuntimeException(e);
        }
    }

This one takes in the kafkaServerName and gets the appropriate server Properties from the KafkaServerDriverProperties object. If it has the startupEnabled property set to true, it will fetch the server properties file name and load that file. Using that Properties object it will call the addKafkaServer(Properties) variant:

    /**
     * Add a KafkaServer from properties.
     * @param ksProperties
     */
    public void addKafkaServer(Properties ksProperties) {
        final String methodName = "addKafkaServer";
        KafkaObserver kafkaServer = new KafkaObserver(this, ksProperties);
        Thread newKSThread = new Thread(kafkaServer);
        newKSThread.setName("KafkaServer" + ksProperties.getProperty(PRP_BRKR_ID));
        kafkaServer.setKsThread(newKSThread);
        newKSThread.start();
    }


What this does is pretty much equal to the addZooKeeper() method in the ZooKeeperDriver class: create a new KafkaObserver, providing the KafkaServerDriver object (this) as a reference and the KafkaServer Properties object, and create a new Thread for it. New (I didn't have that when I wrote the previous article about starting the ZooKeeper) is that I set the name of the Thread. Then I set the new thread on the KafkaObserver.

Construct a KafkaObserver

We saw that in the addKafkaServer a KafkaObserver is instantiated using a reference to the KafkaServerDriver object as an Observable and the KafkaServer Properties object.

The constructor to do so is as follows:

    public KafkaObserver(Observable kafkaServerDriver, Properties ksProperties) {
        final String methodName = "KafkaObserver(Observable, Properties)";
        setKsProperties(ksProperties);
        if (kafkaServerDriver instanceof KafkaServerDriver) {
            log.info(methodName,
                     "Add observer " + this.getClass().getName() + " to observable " +
                     kafkaServerDriver.getClass().getName());
            setKafkaServerDriver((KafkaServerDriver) kafkaServerDriver);
            getKafkaServerDriver().addObserver(this);
        }
    }

In it we set the properties, register the KafkaServerDriver, and add this new object as an observer to the referenced KafkaServerDriver.

Run the KafkaObserver

Since the KafkaObserver is a Runnable we need to implement the run() method:
    public void run() {
        final String methodName = "run";
        try {
            runFromProperties(getKsProperties());
        } catch (IOException ioe) {
            log.error(methodName, "Run failed!", ioe);
        }
    }



Shutdown within the KafkaObserver is as easy as:
    /**
     * Shutdown the serving instance.
     */
    public void shutdown() {
        final String methodName = "shutdown";
        log.start(methodName);
        log.info(methodName, "Let me shutdown " + getKsThread().getName());
        getKafkaServer().shutdown();
    }

The KafkaServerDriver also has a shutdown() method:
    /**
     * Shutdown all KafkaServers.
     */
    public void shutdown() {
        final String methodName = "shutdown";
        setShutdownKafkaServers(true);
        setChanged();
        log.info(methodName, "Notify Observers to shutdown!");
        notifyObservers();
    }

It sets the shutdownKafkaServers indicator, as well as the changed indicator. Then it notifies the Observers. This will result in a signal to the update() method of all registered KafkaObservers:
    public void update(Observable o, Object arg) {
        final String methodName = "update(Observable,Object)";
        Thread ksThread = getKsThread();
        log.info(methodName, ksThread.getName() + " - Got status update from Observable!");
        KafkaServerDriver ksDriver = getKafkaServerDriver();
        if (ksDriver.isShutdownKafkaServers()) {
            log.info(methodName, ksThread.getName() + " - Apparently I´ve got to shutdown myself!");
            shutdown();
        } else {
            log.info(methodName, ksThread.getName() + " - Don't know what to do with this status update!");
        }
    }

It checks if the registered KafkaServerDriver has the shutdownKafkaServers indicator set. If so (and it obviously will), it calls the shutdown() method mentioned earlier.

Start & Shutdown

As with the ZooKeeperDriver, you need to store the KafkaServerDriver object in a static variable and call the respective start and shutdown methods. Using the mentioned KafkaServerDriverProperties file on the class path, the particular instance will know which KafkaServers need to be started. Make sure that for each kafkaserver you have a copy of the file as found in the Kafka distribution (for instance Confluent). Each copy needs to have a unique broker id and its own references to the data/log folders. And possibly a unique listen-port.
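A minimal sketch of that static-variable pattern; the nested Driver class here is a stand-in for the real KafkaServerDriver, not the actual class:

```java
public class DriverHolder {

    // Minimal stand-in for the KafkaServerDriver described above.
    public static class Driver {
        private boolean running;
        public void start() { running = true; }
        public void shutdown() { running = false; }
        public boolean isRunning() { return running; }
    }

    private static Driver driver;

    // A WebLogic startup class would call getDriver().start(); the shutdown
    // class later retrieves the very same instance through this static variable.
    public static synchronized Driver getDriver() {
        if (driver == null) {
            driver = new Driver();
        }
        return driver;
    }
}
```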

Libraries and Classpath

One of the things I often miss in articles like this (my excuses that I did not add it to the previous article) is a list of libraries to add to get the lot compiled.
If you take a look at the scripts, you'll find that they would just add all the libraries in the particular folder. I like to know which particular jars I really need to get things compiled. The following jar files in the Confluent distribution turned out to be needed, both to get the project compiled and to be able to run it:
  • confluent/share/java/kafka/kafka.jar
  • confluent/share/java/kafka/kafka-clients-2.0.0-cp1.jar
  • confluent/share/java/kafka/log4j-1.2.17.jar
  • confluent/share/java/kafka/slf4j-log4j12-1.7.25.jar
  • confluent/share/java/kafka/slf4j-api-1.7.25.jar
  • confluent/share/java/kafka/kafka-log4j-appender-2.0.0-cp1.jar
  • confluent/share/java/kafka/zookeeper-3.4.13.jar
  • confluent/share/java/kafka/scala-library-2.11.12.jar
  • confluent/share/java/confluent-common/common-metrics-5.0.0.jar
  • confluent/share/java/kafka/scala-logging_2.11-3.9.0.jar
  • confluent/share/java/kafka/metrics-core-2.2.0.jar
  • confluent/share/java/kafka/jackson-core-2.9.6.jar
  • confluent/share/java/kafka/jackson-databind-2.9.6.jar
  • confluent/share/java/kafka/jackson-annotations-2.9.6.jar

Added to that, I have the following folders in my project's library listing:
  • confluent/etc/kafka/
  • KafkaClient/config
These contain the Kafka and Zookeeper property files, and also my own extra property files. They're loaded using a class loader, so they need to be on the class path.


Well, that's about it for now. Next stop: create a Weblogic domain, try to add the startup and shutdown classes to it, and see if I can have ZooKeeper and KafkaServers booted with Weblogic.
And of course the proof of the pudding: produce and consume messages.

Wednesday, 23 January 2019

KafkaSeries: Start Zookeeper from Java - Implementing the Observer pattern (while I can)


For a few months now I've been diving into Apache Kafka. I've always been fascinated by queuing mechanisms, and Apache Kafka nowadays is the most modern alternative. Lately I did a presentation introducing Apache Kafka:

But now I'm investigating what I can do with it. Since Weblogic is one of my focus areas, I wanted to explore how I can embed Kafka into Weblogic.

I reasoned that when I want to use Kafka with a current customer, the administrators have to install Kafka (e.g. unzip the Confluent distribution) on a separate virtual server.
By default the distribution comes with startup and shutdown scripts. The administrators should use those, or create their own, and startup the Kafka and Zookeeper services. And of course keep those up-and-running.

I figured that if I were able to start the services as threads under a Weblogic server, no additional infrastructure would be needed. Starting the Weblogic server would then start the Kafka services as well.

Kafka needs a ZooKeeper service. You can see the ZooKeeper as a directory service for a Kafka infrastructure, slightly comparable to an AdminServer in Weblogic. So it would make sense, as I see it, to start the ZooKeeper with the AdminServer. The Kafka Servers can be started as part of the Weblogic Managed Server(s).

Weblogic has a mechanism to do initializations and finalizations, using startup and shutdown classes; see the documentation. From there the ZooKeeper and KafkaServers can be started.

So I had to figure out how to start those from Java. Let's start with the ZooKeeper.
I put my sources on GitHub, so you can review them. But keep in mind that they're still under construction.

Starting a ZooKeeper

My starting point was this question on StackOverflow, which handles starting a ZooKeeperServer in Java, based on the ZooKeeperServerMain class. It was quite promising and soon I had a first version of my startup class working. Quite simple really. But since I also want to be able to shut it down, I soon ran into some restrictions: some methods and attributes I needed were protected and only reachable from the same package, for instance. I wasn't quite pleased with the implementation. Digging a bit further I ran into the source of that class over here. I decided to take that class, study it and, based on that knowledge, implement my own class.

I created a ZooKeeperObserver class, and transformed the public void runFromConfig(ServerConfig config) method from that class into a public void runFromProperties(ZooKeeperProperties zkProperties) method.

It takes in a properties object that is interpreted and used to start the ZooKeeper.

Zookeeper Properties

To keep things transparent and simple, I created a PropertiesFactory class that provides a method to read the ZooKeeper property file from the class path (therefore we should add the /etc/kafka folder to it).
I also created my own Properties class extending java.util.Properties, to add a few property getter methods, like getting an int value and defaulting a property based on another property.
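A sketch of what such getters could look like; the class and method names here are illustrative, not necessarily the ones in my GitHub sources:

```java
import java.util.Properties;

public class ExtendedProperties extends Properties {

    // Get a property as int, falling back to a default when the property is absent.
    public int getIntValue(String key, int defaultValue) {
        String value = getProperty(key);
        return (value == null) ? defaultValue : Integer.parseInt(value.trim());
    }

    // Default a property based on another property, e.g. dataLogDir -> dataDir.
    public String getDefaultedValue(String key, String fallbackKey) {
        String value = getProperty(key);
        return (value != null) ? value : getProperty(fallbackKey);
    }
}
```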

Lastly, I created the ZooKeeperProperties bean, to interpret the relevant ZooKeeper properties, from a read Properties object.

The relevant properties are:

  • dataDir: the location where ZooKeeper stores the in-memory database snapshots and, unless specified otherwise, the transaction log of updates to the database. Default: /tmp/zookeeper.
  • dataLogDir: directs the machine to write the transaction log to the dataLogDir rather than the dataDir. Default: dataDir.
  • clientPort: the port to listen on for client connections; that is, the port that clients attempt to connect to. Default: 2181.
  • clientPortAddress: the address (ipv4, ipv6 or hostname) to listen on for client connections. Default: empty, meaning every NIC in the server host.
  • maxClientCnxns: limits the number of concurrent connections (at the socket level) that a single client, identified by IP address, may make. Here: 0 (disabled), since this is a non-production config.
  • tickTime: the length of a single tick, the basic time unit used by ZooKeeper, measured in milliseconds. Default: ZooKeeperServer.DEFAULT_TICK_TIME.
  • minSessionTimeout: the minimum session timeout in milliseconds that the server will allow the client to negotiate; defaults to 2 times the tickTime. Here: -1 (disabled).
  • maxSessionTimeout: the maximum session timeout in milliseconds that the server will allow the client to negotiate; defaults to 20 times the tickTime. Here: -1 (disabled).

Only the properties dataDir, clientPort and maxClientCnxns are set explicitly in the property file. See the ZooKeeper Administration docs for more info (apparently ZooKeeper originated in the Hadoop project).

Run from Properties

The runFromProperties is the one that actually starts a ZooKeeperServer instance:
    /**
     * Run from ZooKeeperProperties.
     * @param zkProperties ZooKeeperProperties to use.
     * @throws IOException
     */
    public void runFromProperties(ZooKeeperProperties zkProperties) throws IOException {
        final String methodName = "runFromProperties";
        log.start(methodName);
        log.info(methodName, "Starting server");
        FileTxnSnapLog txnLog = null;
        try {
            // Note that this thread isn't going to be doing anything else,
            // so rather than spawning another thread, we will just call
            // run() in this thread.
            // create a file logger url from the command line args
            ZooKeeperServer zkServer = new ZooKeeperServer();
            setZooKeeperServer(zkServer);

            txnLog = new FileTxnSnapLog(new File(zkProperties.getDataLogDir()), new File(zkProperties.getDataDir()));
            zkServer.setTxnLogFactory(txnLog);
            zkServer.setTickTime(zkProperties.getTickTime());
            zkServer.setMinSessionTimeout(zkProperties.getMinSessionTimeout());
            zkServer.setMaxSessionTimeout(zkProperties.getMaxSessionTimeout());

            log.debug(methodName, "Create Server Connection Factory");
            cnxnFactory = ServerCnxnFactory.createFactory();
            log.debug(methodName, "Server Tick Time: " + zkServer.getTickTime());
            log.debug(methodName, "ClientPortAddress: " + zkProperties.getClientPortAddress());
            log.debug(methodName, "Max Client Connections: " + zkProperties.getMaxClientCnxns());
            cnxnFactory.configure(zkProperties.getClientPortAddress(), zkProperties.getMaxClientCnxns());
            log.debug(methodName, "Startup Server Connection Factory");
            cnxnFactory.startup(zkServer);
            cnxnFactory.join();
            if (zkServer.isRunning()) {
                zkServer.shutdown();
            }
        } catch (InterruptedException e) {
            // warn, but generally this is ok
            log.warn(methodName, "Server interrupted", e);
        } finally {
            if (txnLog != null) {
                txnLog.close();
            }
        }
    }
Here you see that a ZooKeeperProperties object is passed. A FileTxnSnapLog is initialized for the dataDir and dataLogDir. A ZooKeeperServer is instantiated, and the particular properties are set. Then a ServerCnxnFactory is created (as a class attribute, for later use). The connection factory is used to start up the ZooKeeperServer. Actually, at that point control is handed over to the ZooKeeperServer. So you want to have this done in a separate thread.

Observing the Observable

Now, you might think: what is it with the name ZooKeeperObserver? Earlier, I named it EmbeddedZooKeeperServer, but I found that name long and not nice. And I found it funny that Observer has the word Server in it.

As mentioned in the previous section, when starting up the ConnectionFactory/ZookeeperServer, control is handed over. The method does not return until the ZooKeeperServer stops running.

I therefore want (as in many implementations) the ZooKeeperServer to run in a separate thread that I can control. That is, I want to be able to send a shutdown signal to it. For that I found the Observer pattern suitable. In this pattern, the Observable or Subject maintains a list of Observers that can be notified about an update in the Observable. To do so, the Observable extends the java.util.Observable class, and the Observer implements the java.util.Observer and Runnable interfaces.
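A minimal, self-contained sketch of that pattern, stripped of the ZooKeeper specifics (the class names here are mine):

```java
import java.util.Observable;
import java.util.Observer;

// Minimal Observer-pattern sketch; java.util.Observable is deprecated since Java 9,
// but still works as described in this article.
public class ObserverDemo {

    public static class Driver extends Observable {
        private boolean shutdown;
        public boolean isShutdown() { return shutdown; }
        public void shutdown() {
            shutdown = true;
            setChanged();       // without this call, notifyObservers() is a no-op
            notifyObservers();
        }
    }

    public static class Server implements Observer, Runnable {
        volatile boolean stopped;
        public Server(Driver driver) {
            driver.addObserver(this);
        }
        @Override
        public void run() {
            // the blocking server loop would go here
        }
        @Override
        public void update(Observable o, Object arg) {
            if (((Driver) o).isShutdown()) {
                stopped = true; // the real class calls its shutdown() here
            }
        }
    }
}
```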

How does it work? Let's go through the applicable methods.

Start and Add a ZooKeeper

The Observable is implemented by ZooKeeperDriver. In it we'll find a method start():
    public void start() {
        final String methodName = "start";
        addZooKeeper();
    }
That's not too exciting, but it calls the addZooKeeper() method:
    public void addZooKeeper() {
        final String methodName = "addZooKeeper";
        try {
            ZooKeeperProperties zkProperties = PropertiesFactory.getZKProperties();
            ZooKeeperObserver zooKeeperServer = new ZooKeeperObserver(this, zkProperties);
            Thread newZooKeeperThread = new Thread(zooKeeperServer);
            zooKeeperServer.setMyThread(newZooKeeperThread);
            newZooKeeperThread.start();
        } catch (IOException e) {
            log.error(methodName, "ZooKeeper Failed", e);
        }
    }

Here you see that the ZooKeeperProperties are fetched and a new ZooKeeperObserver is instantiated, using a reference to the ZooKeeperDriver object and the ZooKeeperProperties. Since the ZooKeeperObserver is a Runnable, we can add it to a new Thread. That thread is also set on the ZooKeeperObserver, so that it has a hold of its own thread when that comes in handy.
And then the new thread is started.

Instantiate the ZooKeeperObserver

In the previous section, we saw that the ZooKeeperObserver is instantiated using a reference to the ZooKeeperDriver object. Let's see what it looks like:
    public ZooKeeperObserver(Observable zooKeeperDriver, ZooKeeperProperties zkProperties) {
        final String methodName = "ZooKeeperObserver(Observable, ZooKeeperProperties)";
        setZkProperties(zkProperties);
        if (zooKeeperDriver instanceof ZooKeeperDriver) {
            log.info(methodName,
                     "Add observer " + this.getClass().getName() + " to observable " +
                     zooKeeperDriver.getClass().getName());
            setZooKeeperDriver((ZooKeeperDriver) zooKeeperDriver);
            getZooKeeperDriver().addObserver(this);
        }
    }

The ZooKeeperProperties are set. Then it checks whether the Observable that is passed is indeed a ZooKeeperDriver. The ZooKeeperDriver is also set, and the ZooKeeperObserver object is added as an Observer to the ZooKeeperDriver using the addObserver(this) method. This method is part of the extended java.util.Observable class; it adds the ZooKeeperObserver to a list that is used to send the update signal to every instance on the list.

Run the ZooKeeperObserver

The ZooKeeperObserver is a Runnable so the run() method is implemented:

    public void run() {
        final String methodName = "run";
        try {
            runFromProperties(getZkProperties());
        } catch (IOException ioe) {
            log.error(methodName, "Run failed!", ioe);
        }
    }

It calls runFromProperties(), which was explained earlier.


The ZooKeeperDriver has a shutdown() method:

    public void shutdown() {
        final String methodName = "shutdown";
        setShutdownZooKeepers(true);
        setChanged();
        log.info(methodName, "Notify Observers to shutdown!");
        notifyObservers();
    }

It sets the shutdownZooKeepers indicator to true. This attribute indicates what has been updated; in a more complex Observer pattern more kinds of updates can occur, so you need to indicate what drove the update.
The most interesting statement is the call to the notifyObservers() method. It calls the implemented update() method on every Observer in the list.

I implemented this pattern in another situation a few years ago, and I reused it here. But at first it did not work: I found that, apparently changed in Java 7 or 8, I had to add a call to the setChanged() method. The notification to the Observers only works after that call.

As said, notifyObservers() calls the update() method in the Observer:

    public void update(Observable o, Object arg) {
        final String methodName = "update(Observable,Object)";
        log.start(methodName);
        log.info(methodName, getMyThread().getName() + " - Got status update from Observable!");
        ZooKeeperDriver zkDriver = getZooKeeperDriver();
        if (zkDriver.isShutdownZooKeepers()) {
            log.info(methodName, getMyThread().getName() + " - Apparently I´ve got to shutdown myself!");
            shutdown();
        } else {
            log.info(methodName, getMyThread().getName() + " - Don't know what to do with this status update!");
        }
    }

And this one checks in the ZooKeeperDriver whether the change is because of the shutdownZooKeepers indicator.
If so, it calls its own shutdown() method; if not, the update is ignored. The shutdown does the following:
    public void shutdown() {
        final String methodName = "shutdown";
        log.start(methodName);
        log.info(methodName, "Let me shutdown " + myThread.getName());
        ZooKeeperServer zkServer = getZooKeeperServer();
        ServerCnxnFactory cnxnFactory = getCnxnFactory();
        cnxnFactory.shutdown();
        if (zkServer.isRunning()) {
            zkServer.shutdown();
        }
    }

It gets the connection factory and sends a shutdown() signal to it. If the ZooKeeper is still running (it shouldn't be), it gets a shutdown() signal as well.

Start and Shutdown

In the end you need to create an instance of the ZooKeeperDriver and save it in a static variable. Then you can call the start() method, and later get the object from the static variable again to call the shutdown() method.


This may look quite complex to you, just to start a server. But, again, I want to be able to embed the Kafka infrastructure in another system, in my situation Weblogic. I'll use this method to do the same for the Kafka Servers; I'll write about that in a follow-up article. And then I'll create a set of startup and shutdown classes for Weblogic.

It was fun to implement the Observer pattern again. But when the notifyObservers() method did not work as expected at first, searching for a solution I found that it is deprecated in Java 9. It will still work, but apparently people found that it has its limitations, and a better way of implementing it has been developed.

Wednesday, 28 November 2018

Using ANT to investigate JCA adapters

My current customer has a SOA Suite implementation dating from the 10g era. They use many queues (JMS served by AQ) to decouple services, which is in essence a good idea.

However, there are quite a lot of them. Many composites have several adapter specifications that use the same queue, but with different message selectors. But queues are also shared across composites.

There are a few composites with ship loads of .jca files. You would like to replace those with a generic adapter specification, but you might risk eating messages from other composites. This screendump is an anonymised version of one of those, which actually still does not show every adapter. They're all JMS adapter specs, actually.

So, how can we figure out which queues are used by which composites and if they read or write?
I wanted to create a script that reads every .jca file in our repository and write a line to a CSV file for each JCA file, containing:
  • Name of the project
  • Name of the jca file
  • Type of the adapter
  • Is it an activation (consume) or an interaction (produce) spec
  • What is the location (EIS JNDI name)
  • Destination
  • Payload
  • Message selector (when consuming)
Amongst some other properties.

Using ANT to scan jca Files

I found that ANT is more than capable of the job. I put my project on GitHub, so you can find all the files there.

First, let's go through the first parts of scanJCAFiles.xml.

Since I want to know which project each .jca file belongs to, I first select all the .jpr files in the repository. The project folders are spread over the repository; although structured, they're not neatly in one linear row of folders, so finding the .jpr files gives me a list of all the projects.
  <!-- Initialisation -->
  <target name="clean" description="Clean the temp folder">
    <delete dir="${jcaTempDir}"/>
    <mkdir dir="${jcaTempDir}"/>
  </target>
  <!-- Perform all -->
  <target name="all" description="Scan All SOA applications" depends="clean">
    <echo file="${outputFile}" append="false"
          message="project name,jcaFile,adapter-config-name,adapter-type,connection factory location,endpoint type,class,DestinationName,QueueName,DeliveryMode,TimeToLive,UseMessageListener,MessageSelector,PayloadType,ObjectFieldName,PayloadHeaderRequired,RecipientList,Consumer${line.separator}"/>
    <foreach param="project.file" target="handleProject" delimiter=";" inheritall="true">
      <path>
        <fileset id="dist.contents" dir="${svnRoot}" includes="**/*.jpr"/>
      </path>
    </foreach>
  </target>
Side note: as can be seen in the snippet, I re-create a folder for the transformed jca files (as described later), and I create a new output file, in which I write a header row with all the column names, using echo to a file with the append attribute set to false.

So, I do a foreach over a fileset, with the svnRoot property as the base dir, that includes every .jpr file anywhere in the structure. For each file the handleProject target is called, with the file in the project.file property. Foreach is an antcontrib addition to ANT, so you need to add that as a task definition (something I always do first).

  <taskdef resource="net/sf/antcontrib/antlib.xml">
    <classpath>
      <pathelement location="${ant-contrib.jar}"/>
    </classpath>
  </taskdef>

With the name of the .jpr file I have the name of the project and the location:
  <target name="handleProject">
    <echo message="projectFile: ${project.file}"/>
    <dirname property="project.dir" file="${project.file}"/>
    <echo message="project dir: ${project.dir}"/>
    <basename property="project.name" file="${project.file}" suffix=".jpr"/>
    <foreach param="jca.file" target="handleJca" delimiter=";" inheritall="true">
      <path>
        <fileset id="dist.contents" dir="${project.dir}" includes="**/*.jca"/>
      </path>
    </foreach>
  </target>

In this snippet the dirname ANT task trims the filename from the project.file property, to provide me the project folder in the project.dir property. The project name can be determined from project.file using the basename task. A nice touch is that it allows you to trim the suffix (.jpr) from it. Within the project location I can find all the .jca files, and in the same way as with the .jpr files I can use a foreach on project.dir and call the handleJca target for each .jca file.
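As an aside, the same two derivations that dirname and basename perform can be sketched in plain Java with java.nio.file.Path (the paths below are made-up examples):

```java
import java.nio.file.Path;

// Plain-Java sketch of what Ant's <dirname> and <basename suffix=".jpr">
// derive from a project file path. The paths used are made-up examples.
public class ProjectNames {

    static String dirname(String file) {
        // Equivalent of <dirname property="project.dir" file="..."/>
        return Path.of(file).getParent().toString();
    }

    static String basename(String file, String suffix) {
        // Equivalent of <basename property="..." file="..." suffix=".jpr"/>
        String name = Path.of(file).getFileName().toString();
        return name.endsWith(suffix)
                ? name.substring(0, name.length() - suffix.length())
                : name;
    }
}
```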

Using XSL to pre-process the jca files

Fortunately, jca files are simply XML files, and ANT turns out to be able to read XML as a property file using the xmlproperty task, which came in handy. Those properties can then very easily be appended to an output file using echo.

However, there are two main problems with the structure of the jca files:
  1. The jca files for the interaction type (the write kind) are different from those for the activation type (the read kind), so I would need to distinguish between those.
  2. The properties like DestinationName, payload and message selector are name-value pair properties in the .jca file. The xmlproperty task interprets the names of the properties as separate property values, not as keys. So I can't select specifically the DestinationName, for instance.
So I decided to create an xml stylesheet to transform the JCA files to a specific schema, that merges the endpoint interaction and activation elements and has the properties I'm interested in as separate elements. To do so, I created an xsd from both types of jca files. JDeveloper can help me with that:
Just follow the wizard, but empty the target namespace. As said, I did this for both kinds of jca files (the interaction and activation kinds) and merged them into jcaAdapter.xsd with an xsd:choice:

Out of that I created jcaAdapterProps.xsd, where the xsd:choice elements are merged into one spec element. I changed the target namespace and created specific property elements:
That allowed me to create the XSL Map jcaAdapter.xsl easily:

For the xmlproperty task it is important that the resulting xml is in a default namespace and that the elements rely on that default namespace: they should not reference a specific namespace prefix (not even for the default one).
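To get a feeling for the dotted property names that xmlproperty derives, here is a much-simplified sketch of that flattening in Java (this is not Ant's actual implementation, and the jca fragment in the usage is a made-up example):

```java
import java.io.StringReader;
import java.util.LinkedHashMap;
import java.util.Map;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.NamedNodeMap;
import org.w3c.dom.Node;
import org.w3c.dom.NodeList;
import org.xml.sax.InputSource;

// Much-simplified sketch of how Ant's xmlproperty task (with
// collapseAttributes="true") flattens XML into dotted property names.
public class XmlFlattener {

    public static Map<String, String> flatten(String xml) {
        try {
            Document doc = DocumentBuilderFactory.newInstance()
                    .newDocumentBuilder()
                    .parse(new InputSource(new StringReader(xml)));
            Map<String, String> props = new LinkedHashMap<>();
            walk(doc.getDocumentElement(), doc.getDocumentElement().getNodeName(), props);
            return props;
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    private static void walk(Element el, String prefix, Map<String, String> props) {
        // Attributes collapse into prefix.attributeName properties.
        NamedNodeMap attrs = el.getAttributes();
        for (int i = 0; i < attrs.getLength(); i++) {
            Node a = attrs.item(i);
            props.put(prefix + "." + a.getNodeName(), a.getNodeValue());
        }
        // Child elements extend the dotted prefix with their own name.
        NodeList children = el.getChildNodes();
        for (int i = 0; i < children.getLength(); i++) {
            if (children.item(i) instanceof Element) {
                Element child = (Element) children.item(i);
                walk(child, prefix + "." + child.getNodeName(), props);
            }
        }
    }
}
```

For a transformed fragment like `<adapter-config name="Consume"><endpoint><spec DestinationName="jms/myQueue"/></endpoint></adapter-config>`, this yields keys such as adapter-config.endpoint.spec.DestinationName, which is exactly the addressable form the XSL transformation is after.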

With that I can finish off with the handleJca target of my script:
  <target name="handleJca">
    <basename property="jca.file.name" file="${jca.file}"/>
    <property name="jca.file.props" value="${jcaTempDir}/${jca.file.name}.props"/>
    <echo message="Jca File: ${jca.file.name}"/>
    <xslt style="${jcaPropsXsl}" in="${jca.file}" out="${jca.file.props}"/>
    <xmlproperty file="${jca.file.props}" collapseattributes="true"/>
    <property name="cf.location" value="${adapter-config.connection-factory.location}"/>
    <property name="ep.class" value="${adapter-config.endpoint.spec.className}"/>
    <property name="ep.type" value="${adapter-config.endpoint.spec.type}"/>
    <property name="ep.DestinationName" value="${adapter-config.endpoint.spec.DestinationName}"/>
    <property name="ep.DeliveryMode" value="${adapter-config.endpoint.spec.DeliveryMode}"/>
    <property name="ep.TimeToLive" value="${adapter-config.endpoint.spec.TimeToLive}"/>
    <property name="ep.UseMessageListener" value="${adapter-config.endpoint.spec.UseMessageListener}"/>
    <property name="ep.MessageSelector" value="${adapter-config.endpoint.spec.MessageSelector}"/>
    <property name="ep.PayloadType" value="${adapter-config.endpoint.spec.PayloadType}"/>
    <property name="ep.QueueName" value="${adapter-config.endpoint.spec.QueueName}"/>
    <property name="ep.ObjectFieldName" value="${adapter-config.endpoint.spec.ObjectFieldName}"/>
    <property name="ep.PayloadHeaderRequired" value="${adapter-config.endpoint.spec.PayloadHeaderRequired}"/>
    <property name="ep.RecipientList" value="${adapter-config.endpoint.spec.RecipientList}"/>
    <property name="ep.Consumer" value="${adapter-config.endpoint.spec.Consumer}"/>
    <echo file="${outputFile}" append="true"
          message="${project.name},${jca.file.name},${adapter-config.name},${adapter-config.adapter-type},${cf.location},${ep.type},${ep.class},${ep.DestinationName},${ep.QueueName},${ep.DeliveryMode},${ep.TimeToLive},${ep.UseMessageListener},${ep.MessageSelector},${ep.PayloadType},${ep.ObjectFieldName},${ep.PayloadHeaderRequired},${ep.RecipientList},${ep.Consumer}${line.separator}"/>
  </target>
With the xslt task the jca file is transformed to the jcaTempDir folder. Then, using the xmlproperty task, the transformed .jca is read as an xml property file. Because the property references are quite long, I copy them into shorter-named properties and then echo them as a comma-separated line to the outputFile, using the append attribute set to true.

Note that I set the collapseAttributes attribute to true.


And that is actually about it. ANT is very handy to find and process files in a controlled way. The combination with XSL also makes it powerful. In this project I concentrated on the JMS and AQ adapters, as far as the properties are concerned. But you can extend this for DB adapters, File adapters, etc. quite easily. Maybe even create an output file per adapter type.
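Once the CSV is there, the actual analysis question — which composites share which destinations — is a simple grouping exercise. A sketch in Java, assuming the column order of the header row written by the script (the sample rows in the usage are made up):

```java
import java.util.List;
import java.util.Map;
import java.util.Set;
import java.util.TreeMap;
import java.util.TreeSet;

// Sketch: derive which projects use which destination from the CSV lines
// the script produces. Column indexes follow the header row written by
// the "all" target: 0 = project name, 7 = DestinationName.
public class QueueUsage {

    public static Map<String, Set<String>> projectsPerDestination(List<String> csvLines) {
        Map<String, Set<String>> usage = new TreeMap<>();
        for (String line : csvLines.subList(1, csvLines.size())) { // skip the header row
            String[] cols = line.split(",", -1);
            usage.computeIfAbsent(cols[7], dest -> new TreeSet<>()).add(cols[0]);
        }
        return usage;
    }
}
```

Any destination that maps to more than one project is a queue you cannot safely replace with a generic adapter specification without checking the message selectors.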

I can't share the output with you, due to company policy constraints. Just try it out.

Thursday, 22 November 2018

How to query your JMS over AQ Queues

At my current customer we use queues a lot. They're JMS queues, but instead of Weblogic JMS, they're served by the Oracle database.

This is not new; in fact, the Oracle database has supported this since 8i, through Advanced Queueing. Advanced Queueing is Oracle's queueing implementation based on tables and views. That means you can query the queue table to get to the content of the queue. But you might know this already.

What I find few people know is that you shouldn't query the queue table directly but the accompanying AQ$ view instead. So, if your queue table is called MY_QUEUE_TAB, then you should query AQ$MY_QUEUE_TAB. So simply prefix the table name with AQ$. Why? The AQ$ view is created automatically for you and joins the queue table with the accompanying IOT tables to give you a proper and convenient representation of the state, subscriptions and other info of the messages. It is actually the supported way of querying the queue tables.

A JMS queue in AQ is implemented by creating it in a queue table based on the Oracle type aq$_jms_text_message.

That is in fact quite a complex type definition that implements JMS Text Message based queues. There are a few other types to support the other JMS message types. But let's leave that.

Although the payload of the queue table is a complex type, you can get to its attributes in the query using the dot notation. But for that it is mandatory to give the view a table alias and to prefix the columns with that alias.

The aq$_jms_text_message type has a few main attributes, such as text_lob for the content and header for the JMS header attributes. The header is based on the type aq$_jms_header. You'll find the JMS type there, but also the properties attribute, based on aq$_jms_userproparray. That in its turn is a varray based on aq$_jms_userproperty. Now, that makes it a bit complex, because we would like to know the values of the JMS properties, right?

We use those queues via the JMS adapter of SOA Suite, which adds properties containing the composite instance ID, the ECID, etcetera. And if I happen to have a message that isn't picked up, it would be nice to know which composite instance enqueued it, wouldn't it?

Luckily, a varray can be considered a collection of Oracle types. And did you know you can query those? Simply provide it to the table() function and Oracle treats it as a table. When you know which properties to expect, and their types, you can select them in the select clause of your query. I found the properties that are set by SOA Suite and added them to my query. But you could find others as well.

Putting all this knowledge together, I came up with the following  query:

select qtb.queue
, qtb.msg_id
, qtb.msg_state
, qtb.user_data.header.type type
, qtb.user_data.header.userid userid
, qtb.user_data.header.appid appid
, qtb.user_data.header.groupid groupid
, qtb.user_data.header.groupseq groupseq
--, qtb.user_data.header.properties properties
, (select prp.str_value from table(qtb.user_data.header.properties) prp where = 'tracking_compositeInstanceId') tracking_compositeInstanceId
, (select prp.str_value from table(qtb.user_data.header.properties) prp where = 'JMS_OracleDeliveryMode') JMS_OracleDeliveryMode
, (select prp.str_value from table(qtb.user_data.header.properties) prp where = 'tracking_ecid') tracking_ecid
, (select prp.num_value from table(qtb.user_data.header.properties) prp where = 'JMS_OracleTimestamp') JMS_OracleTimestamp
, (select prp.str_value from table(qtb.user_data.header.properties) prp where = 'tracking_parentComponentInstanceId') tracking_prtCptInstanceId
, (select prp.str_value from table(qtb.user_data.header.properties) prp where = 'tracking_conversationId') tracking_conversationId
, qtb.user_data.text_lob text
from AQ$MY_QUEUE_TAB qtb
where qtb.queue = 'MY_QUEUE'
order by enq_timestamp desc;

This delivered me an actual message that was not picked up by my process. And I could use the property tracking_compositeInstanceId to find my SOA composite instance in EM.

Very helpful if you are able to pause the consumption of your messages.

This also shows you how to query tables with complex nested tables.

Monday, 19 November 2018

URL Resolving in an Enterprise Deployment

A few blogs ago I wrote about issues we encountered with the persistence of settings in an Enterprise Deployment with separate Admin and Managed Server domains.

For one of the problems, the mdm-url-resolver.xml used to store the Global Tokens, we had a Service Request open with support. After over a year, we got an answer from development: as per design, SOA updates will only update the mdm-url-resolver.xml in the soa managed server.

Besides the workaround in my previous article, there is a Java custom system property that refers to the mdm-url-resolver.xml you want to use: 

With this property set, SOA Suite will use this file and does not have it affected by the domain config.
I did not try it myself yet, but I think it is advisable to put this file on a shared disk. Otherwise you would need to create a copy of it for each managed server and update every one.
Unfortunately, I did not find this Java system property in the documentation. I did find a blog that mentions it, but not where it is documented.

So, for global tokens this seems a workable approach. But we saw the same behavior with the UMS Driver property files, and I don't have a property like this for those files. As soon as I find it, I will update this blog post.