Tuesday, 27 August 2019

SOASuite Composite Sensors: the why and how...

Introduction

A long time ago Oracle acquired BPEL PM, and in the first release of SOA Suite (10g) it was a more or less standalone component. For initiated BPEL flow instances there were 2 tables in the soa infrastructure database:
  1. cube_instance: bpel flow instances
  2. ci_indexes: 6 indexes related to the bpel flow that can be set with an embedded java call

These 2 tables store the BPEL instances, along with a set of indexes that you could, and in 11g and 12c still can, set with a value that you determine during the flow. Yes, these tables still exist in the soa infra database. So, let's say that in your BPEL you have several string-based variables that you fill with a value from the input message using an Assign. Then, within an Embedded Java activity, you can do something like:

//Get Variables 
String messageType = (String) getVariableData("messageType"); 
String messageId = (String) getVariableData("messageId"); 
String processId = (String) getVariableData("processId"); 
String referenceNr = (String) getVariableData("referenceNr"); 
String branchId = (String) getVariableData("branchId"); 
String cmrNr = (String) getVariableData("cmrNr"); 

//Set Title and indexes 
setFlowInstanceTitle("MyProcessFlow " + messageType + "-" + messageId); 
setIndex(1,messageType); 
setIndex(2,messageId); 
setIndex(3,processId); 
setIndex(4,referenceNr); 
setIndex(5,branchId); 
setIndex(6,cmrNr);

When you spin off a set of new instances, you can use the following query to find the particular instances:
select ci.flow_id, ci.cmpst_id cube_composite_id, ci.cikey cube_cikey, cix.index_1, cix.index_2, cix.index_3, cix.index_4, cix.index_5, cix.index_6 
from cube_instance ci
join ci_indexes cix on ci.cikey = cix.cikey
where index_1 like '123456789';

With the flow_id you can query the SCA flow instance (in 12c) and/or find the instance in EM.
select * from sca_flow_instance fi where fi.flow_id=100173;

Unfortunately, you cannot query on these indexes directly in EM, and not even in 10g could you. You need to query on them in the database and copy and paste the resulting flow id into EM - FMW Control.
You might have done this in the past, or still do. You might have created a JSP that helps you with this. We did in 10g at least.

Define Composite Sensors

Since 11g there is a much more convenient way to do this. And it's all declarative and usable from EM: it's called Composite Sensors. You can read more about them in the docs.
I haven't blogged about them earlier because, honestly, I hadn't used them much until lately.

Composite Sensors can be set in the composite editor:

This will get you to the following dialog:

Select one of the Services or References and click on the blue plus icon:
In this dialog, set a Name, check/validate the Service and Operation, and click on the pencil icon to define an Expression:


  • Variables: clicking this will provide you with a navigator that allows you to drill down into the variable structure of the service operation's message type, to select the element whose value you want to sense.
  • Expression: this will show you the expression builder you should be familiar with: it is the same one as in the copy rules of the BPEL Assign activity. It allows you to create more complex XPath expressions, like: substring($in.payload/doc:RegisterDocument/doc:Document/doc:BinairyObject/@fileName, 0, 100)
  • Properties: this allows you to select endpoint properties, for instance JCA properties such as JMS Type or JMS CorrelationID; the same properties as on a BPEL Invoke.
Make sure you have the Enterprise Manager checkbox checked. It should be on by default.

A good source for creating the composite sensors is the Embedded Java that sets the indexes of the BPEL, as described in the intro of this article. Create a sensor for every index, and base it on the service operation that delivers the variable from which the indexes are set.

I would highly recommend creating an Excel sheet to register which sensors are defined on which service/operation and how they are filled. For instance, you could have several services that work with documents, and on all those composites you might have sensors that fetch the document ID. One of your developers would define a sensor called docId, another uses documentId, yet another would define docNumber, etc., etc. An end user or administrator would need to know all those variants. Wouldn't it be much easier if you could just search on documentId over all those composites? So introduce a convention in your team that everyone uses the same sensor name for elements that mean the same thing.

Search on Composite Sensor values

On the SOA Infra dashboard in Enterprise Manager - Fusion Middleware Control (EM) you can quickly search on a sensor:
Fill in a Sensor Name and a search value and click on the Search Instances button.
These are free-format fields, so it makes sense to have a list of possible sensors that can be distributed among your admins or end users.

In the Search Instance panel of the flow instances tab, you have a more comprehensive search possibility:


This is not available when you click directly on the Flow Instances tab, without performing a search first.
In that case you need to click on Add/Remove Filters on the Flow Instances tab:

In this dialog, check the Flow Instance checkbox:
Having done that, you can add up to 6 sensor search conditions. Click on the magnifying glass to search on a sensor:


Here you can search on a composite on which you know there is a sensor. Then you can select a sensor and an operator to search on. Unfortunately, this is the only place to choose an operator, which means that you need to look up a sensor through a composite revision before being able to choose an operator. It would be nice to be able to just type in a sensor name (or copy and paste it from your Excel sheet), select an operator and type a value to search over composites.

What is nice is that if you select a particular flow instance, you can view its composite sensor values:

This is especially handy in a busy environment where there are several instances of the same composite within a certain timeframe. You can then quite easily click through the instances and identify whether a particular one is the one you're interested in, instead of having to open the flow trace, click through to the BPEL flow, select the receive activity and open the XML. In many cases that is a very tedious job.
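And just like the ci_indexes in the intro, the sensor values also end up in the soa infra database, so you can script a lookup on them as well. Below is a minimal sketch in Python with cx_Oracle. Mind that the table and column names (composite_sensor_value with sensor_name, string_value and flow_id) and the connection details are assumptions on my part: verify them against the SOAINFRA schema of your own SOA Suite version before relying on this.

# Sketch: find flow instances by sensor name and value, comparable to the
# ci_indexes query in the intro. Table and column names are assumptions:
# check the SOAINFRA schema of your SOA Suite version.
import cx_Oracle

# Hypothetical connection details; replace with your own SOAINFRA account.
connection = cx_Oracle.connect("DEV_SOAINFRA", "welcome1", "soadbhost:1521/soapdb")
cursor = connection.cursor()

cursor.execute(
    """select csv.flow_id, csv.sensor_name, csv.string_value
         from composite_sensor_value csv
        where csv.sensor_name = :p_sensor_name
          and csv.string_value = :p_sensor_value""",
    p_sensor_name="documentId",
    p_sensor_value="123456789")

for flow_id, sensor_name, sensor_value in cursor:
    print(flow_id, sensor_name, sensor_value)

cursor.close()
connection.close()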

Tuesday, 16 July 2019

Weblogic under Kubernetes: the weblogic topology of the future

It is already 4 months ago that I attended the PaaSForum 2019 in Mallorca. As every year, it was great to meet members of the big EMEA Oracle Partner family.

And of course there were a lot of interesting talks and workshops. This year I was especially interested in announcements around SOA Suite and Project Helidon as a microservice framework, but certainly also in WebLogic under Kubernetes.
And actually, to me, the WebLogic Kubernetes Operator was this year's most exciting subject.

With his WebLogic on Kubernetes talk, Maciej Gruszka, Director Product Management, outlined the future Oracle envisions for WebLogic. He started by stating that 'WebLogic is not dead!'. Well, he had me at that already!

The road ahead is making WebLogic fit to run in Docker and be managed by Kubernetes. It might not be exactly what I had in mind, but it is certainly great news to learn that WebLogic will be around and alive for the future ahead. Oracle strives to make future releases of WebLogic available as Docker images.

Today already, WebLogic is fully supported to run in a Docker container. And according to Maciej, the team is working with the SOA and OSB teams to get those products fit and available for Docker too. It might even be possible that future releases are going to be delivered as Docker images.

What is the Weblogic Operator?

To run in a Kubernetes-managed cluster, Kubernetes needs to be able to perform lifecycle operations on a WebLogic Managed Server. For that, the WebLogic Operator for Kubernetes was created. A Kubernetes Operator is a sort of adapter on top of a non-Kubernetes system that translates Kubernetes lifecycle commands into operations within the specific application.

The WebLogic Operator uses the Kubernetes API to implement operations like:
  • Provisioning
  • Lifecycle management
  • Updates
  • Scaling
  • Security
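To give an idea of what that means in practice: once the operator is installed, a WebLogic domain is just another (custom) resource in Kubernetes that you can read and manipulate through the regular Kubernetes API. Below a minimal sketch using the official Python Kubernetes client. As far as I know the operator's CRD uses the group weblogic.oracle with the plural domains, but the API version, namespace and resource names below are assumptions from my side: check them in your own cluster.

# Sketch: list the WebLogic Domain custom resources managed by the operator.
# Assumptions: CRD group 'weblogic.oracle', plural 'domains'; the version ('v2')
# depends on the operator release, verify with: kubectl get crd domains.weblogic.oracle
from kubernetes import client, config

config.load_kube_config()  # uses the kubeconfig of your OKE/Kubernetes cluster
api = client.CustomObjectsApi()

domains = api.list_namespaced_custom_object(
    group="weblogic.oracle",
    version="v2",                   # assumption: depends on the operator release
    namespace="sample-domain1-ns",  # assumption: replace with your domain's namespace
    plural="domains")

for domain in domains.get("items", []):
    print(domain["metadata"]["name"])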
Besides the WebLogic Operator, Oracle also provides an exporter for Prometheus and the Elastic Stack, for monitoring and logging. Since the managed servers run within a container, you'll need to export events and log files to keep them accessible and inspectable, even when the container is down or recreated from an updated image.

Some interesting links

Topologies

There are actually two topologies to choose from:
  • Domain within the Docker Image
  • Domain on a Persistent Volume
With the first one, the container is actually stateless: everything it needs to know is within the container image. The Admin Console can be used for diagnostic and monitoring purposes, but not for updating the domain, because spinning up a new container will have it read the domain from the internal container image again.

With the persistent volume topology, the domain is stored outside the container and changes are persisted. This topology is more in line with an on-premises installation of WebLogic. However, High Availability and Disaster Recovery are limited, because the Persistent Volume needs to be shared and the domain configuration needs to be synced across datacenters. With 'In Image' domains, things get simpler, because the domain is transported within the container. The downside is that changes in the domain require creating a new image through the CI/CD pipeline.

Most customers seem to choose the 'Domain in Image' topology. In practice, domains don't change that much.

You can adapt specific artifacts like data source connections, URLs and usernames/passwords using Configuration Overrides.

Workshop

At the PaaSForum we got the chance to play around with Kubernetes and WebLogic. The workshop is described here: https://github.com/nagypeter/weblogic-operator-tutorial. You should fork this to a repository under your own GitHub account, because it contains the files and scripts to create an image; the tutorial walks you through configuring Oracle Container Pipelines (Wercker), and for that it needs a GitHub repo.

There is a Domain In Image variant and  a persistent volume variant of the tutorial.

Steps to follow for the Domain In Image variant

  1. Set up an Oracle Kubernetes Engine instance on Oracle Cloud Infrastructure. You'll need a trial account on cloud.oracle.com. The tutorial then guides you through the setup of a Kubernetes cluster on OCI.
  2. Build the WebLogic container image using Oracle Container Pipelines (Wercker). The second time I did the workshop, I decided to change all the labels, namespaces and the domain name: everywhere there is a reference to 'sample', I entered 'makker'. In this step the image is created from your fork of the GitHub repo. If you change the name of the domain, there are two files to edit:
    1. Dockerfile.create is called at the initial creation of the image; if there already is a base image, Dockerfile.update is called to update it. Dockerfile.create creates an image with a complete domain, including the application, while Dockerfile.update only updates the application. So you need to edit Dockerfile.create and change the domain name in the DOMAIN_NAME environment variable at the top of the file. 
    2. Dockerfile.create also copies the scripts folder into the image. That folder contains a WLST script called model.py. At the top, a variable domain_name is declared, with the same domain name assigned to it.
    If you do not change it and later want to rename the domain, so that you can start it under a different name with Kubernetes, then you need to remove the image from the image repository and run the Oracle Container Pipelines pipeline again.
  3.  Install WebLogic Operator: installs the Weblogic Operator.
  4. Install and configure Traefik: this installs a Traefik load balancer in your environment. It will load-balance over your WebLogic Managed Servers.
  5. Deploy WebLogic domain: this step lets you prepare your Kubernetes cluster to run the Weblogic domain. Reuse the same domain name as explained in step 2.
  6. Scaling the WebLogic cluster: this one I found particularly cool. In this step you update the domain resource yaml file to change the number of Managed Servers in the domain. After that, automagically a new Kubernetes pod is spawned that starts a new Managed Server (see the sketch after this list). By the way, the domain has a dynamic cluster with predefined Managed Servers based on Server Templates.
  7. Override domain configuration: this shows you how to perform domain configuration overrides, to update the datasource.
  8. Update the application: the whole point of this exercise is to show you how to set up a CI/CD chain so that, when you update your application, the image is updated and the domain can be restarted through Kubernetes with the new image.
  9. Assign the WebLogic pods to specific nodes or licensed nodes. The latter is important because WebLogic is licensed, so you can't just run it on any number of nodes.
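As promised at step 6: scaling comes down to changing the replica count in the domain resource. In the tutorial you edit the domain yaml and apply it with kubectl, but you can do the same through the Kubernetes API, for instance with the Python Kubernetes client used in the operator section above. Again only a sketch: the CRD version, namespace, domain name and cluster name are assumptions based on the tutorial's defaults, so take them from your own domain resource.

# Sketch: scale the WebLogic dynamic cluster by patching the domain resource,
# equivalent to editing 'replicas' in the domain yaml and applying it.
# Names and version are assumptions: check your own resource with
#   kubectl get domains.weblogic.oracle -n <your-namespace> -o yaml
from kubernetes import client, config

config.load_kube_config()
api = client.CustomObjectsApi()

patch = {"spec": {"clusters": [{"clusterName": "cluster-1", "replicas": 3}]}}

api.patch_namespaced_custom_object(
    group="weblogic.oracle",
    version="v2",                   # assumption: depends on the operator release
    namespace="sample-domain1-ns",  # assumption
    plural="domains",
    name="sample-domain1",          # assumption
    body=patch)
# The operator notices the change and spawns (or removes) Managed Server pods accordingly.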
The tutorial is quite elaborate and descriptive. If you stick to the given naming, it will guide you through the process, ending up with a running environment. The fun is in being headstrong and choosing your own naming; that's how I tripped up at step 5, Deploy WebLogic domain. I could have stuck with the given name, but I didn't feel like it: it was more fun to figure out where it was used. Now you can take advantage of that.

Conclusion

I refrained from discussing why you would want to run WebLogic under Docker. I have thoughts about it and have had discussions about it. However, it made me enthusiastic that this way WebLogic can be taken along into the containerized future.

For me the next things to explore are:
  • Create a database on another OCI instance, and create a new domain with a sample application that actually uses that database. It would be fun to create an actual application on it.
  • Try the same with a persistent volume. A few months ago I was busy creating Java classes to start Kafka. The goal was to create WebLogic startup classes to have Kafka started at the startup of a WebLogic server. Now, it may not seem logical to you, but wouldn't it be great to combine the two and have Kafka embedded in a WebLogic cluster on a Kubernetes cluster? Well, at least it seems fun to me. Since Kafka needs to store its messages in a persistent log, we need to do this with a Persistent Volume.
  • Check out other topologies and related technologies, like accessing the logs. I would really like to be able to inspect the WebLogic log files within the container.
Have fun with the tutorial.

Monday, 24 June 2019

Debugging code - Identifying the bug



Julia Evans is a very smart woman in IT who creates very nice, funny and insightful comics, which she calls 'zines', on Linux and coding topics.

This morning I read that she came up with a question that triggered me:


Last week I realized that I have already been in IT for 25 years, ever since my compulsory membership of a famous Dutch shooting club ended after 9 months (the kind of shooting club where you got free clothing, survival courses, and in my case also a truck driving license. Which other club offers that?). 

Anyway, over the years I discovered that, despite all the smart people I got to know and work with, this part of our work isn't obvious. In very many cases people seem to 'just do something'. No offense, but for developers it's often frustrating and just not fun to work on bugs or problems. And administrators that are confronted with a problem are often 'too busy with other stuff'. So they try something, don't find the cause, and at a later moment try something else. So when I get involved, I ask the obvious questions and in most cases I try out the same thing myself. Even though I do believe them, I want, I need, to see the behavior with my own eyes.

By the way, I'm always reluctant to call it a bug. A bug is only a bug when you have reproduced it and, based on a common interpretation, you come to a consensus with the tester (if he found the issue) and the functional/solution designer that the code does not do what it is supposed to do. The functional specs are interpreted by both the tester and the developer, and in a certain way also by the designer. It might be that the tester finds an anomaly, but that it is either a misinterpretation on his part or a problem with the formulation of the specs. There are cases where the coder is right. But, of course, your program can also work with an unexpected logic.


But back to Julia's tweets: they triggered me, so I jotted down some thoughts that form the basis of my search for issues.

To me it starts with identifying a case where it goes wrong but, equally importantly, also a similar situation where it goes right. And, as far as possible, creating a unit test for both. Since my work is mostly done on message processing platforms (Oracle SOA Suite, BPM Suite, Service Bus), I love it when a tester can hand me the triggering message of the case and the corresponding response messages. I can then add them to my unit test set in SoapUI/ReadyAPI.
 

Then I add instrumentation (log lines, etc.) at key positions, to identify up to which point the code is executed and which lines aren't reached. SOA Suite produces a flow trace of the execution, but often expressions are used that are quite complex one-liners. I then split those up into several separate assignments to 'in between' variables. In Java, JavaScript, etc., I also do not like complex one-liners: I prefer several variables for 'in between' values, and assignments with short expressions. That helps with line-by-line debugging.
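To illustrate what I mean, a contrived Python example (not from an actual project): the same lookup written first as a one-liner, then split into 'in between' variables, each with its own log line.

# Contrived illustration: split a complex one-liner into named intermediate values,
# so a log line (or a debugger) shows exactly where the result goes wrong.
import logging

logging.basicConfig(level=logging.DEBUG)
log = logging.getLogger("identify-the-bug")

def document_reference(message):
    # One-liner style: if the result is wrong, you can't see which part misbehaved.
    # return message["document"]["binaryObject"]["fileName"].split(".")[0][:100].upper()

    # Split into 'in between' variables with instrumentation:
    file_name = message["document"]["binaryObject"]["fileName"]
    log.debug("fileName: %s", file_name)
    base_name = file_name.split(".")[0]
    log.debug("baseName: %s", base_name)
    reference = base_name[:100].upper()
    log.debug("reference: %s", reference)
    return reference

print(document_reference({"document": {"binaryObject": {"fileName": "invoice-123.pdf"}}}))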

Next, I iteratively narrow the gap between the point that I can conclude the code reaches and the point that I find is not reached, until the statement or point of execution that fails can be identified.
In the log lines I include the values of the key variables involved.


In very rare, very difficult cases, I sometimes break down the code and cut away all the code that is not touched, until I get a minimal working Mickey Mouse (or in Dutch: Jip en Janneke) case. From there I build it up again and test iteratively, in very small steps, until it breaks.

Also very important: for difficult problems, I document very meticulously what I have done and concluded. My slogan here is: 'Deduction, my dear Watson!' When you have a problem, you can quickly come up with some potential causes and tests to check them. A unit test for a potential cause can go two ways: it can confirm or disprove the suspicion. Both outcomes have consequences for the follow-up. Disproving a potential cause can strike through other potential paths as well. 

But confirming it requires additional steps to narrow things down. I see it as a decision tree to follow.

What I have found through the years is that structurally documenting the steps taken, with the particular conclusions and follow-ups, is not quite obvious. But in many cases I found it important, especially when working in a task force, or when I was hired to get involved in a case. In those cases the customer that hired you has the right to have something in hand that represents what he paid for.
I was once involved in a case that turned out to be a database bug, so I could not help the customer solve it. But they were very pleased with the structured method I used to check out what the problem could be. And for those administrators and developers that have to do this as a side job, besides their regular work: please do yourself a favor and document. I found Google Docs very useful for this.


Oh, and by the way: I work with BPEL, BPMN, Oracle Service Bus, Java, PL/SQL, XSLT, XQuery, Python/Jython/WLST, sometimes JavaScript, you name it. And actually, my way of structured code or system analysis comes down to the same procedure, regardless of technology.