Wednesday, 15 January 2020

JavaScript in ANT

Earlier I wrote about an ANT script to scan the JCA adapter files in your projects home, Subversion working copy or local GitHub repo.

In my current project we use sensors to kick off message-archiving processes, without cluttering the BPEL process. I'm not sure I would do it that way on a new project, but technically the idea is interesting. Unfortunately, we did not build a registry of which BPEL processes make use of it and how. So I thought about how I could easily scan for that, and found that, based on the script to scan JCA files, I could easily scan all the BPEL sensor files. If you have found the project folders, like I did in the JCA scan script, you can search for the *_sensor.xml files.
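As a quick illustration outside ANT (the actual scan in my script is done with an ANT fileset, like in the JCA scan script), a shell equivalent would be something like this; the path is just an example:

# List all BPEL sensor files under a projects home:
find /home/oracle/projects -type f -name "*_sensor.xml"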

So in a few hours I had a basic script. Now, in a second iteration, I would like to know what sensorActions the sensors trigger. For that I need to interpret the accompanying *_sensorAction.xml file. Therefore, based on the found sensor filename, I need to determine the name of the sensor action file.

The first step is to figure out how to do a substring in ANT. With a quick Google search for "ant property substring", I found a nice Stack Overflow thread, with a nice example of an ANT script definition based on JavaScript:
  <scriptdef name="substring" language="javascript">
    <attribute name="text"/>
    <attribute name="start"/>
    <attribute name="end"/>
    <attribute name="property"/>
    <![CDATA[
       var text = attributes.get("text");
       var start = attributes.get("start");
       var end = attributes.get("end") || text.length();
       project.setProperty(attributes.get("property"), text.substring(start, end));
     ]]>
  </scriptdef>

And that can be called like:
    <substring text="${sensor.file.name}" start="0" end="20"   property="sensorAction.file.name"/>
    <echo message="Sensor Action file: ${sensorAction.file.name}"></echo>

The JavaScript substring() function is zero-based, so the first character is at index 0.
Not every sensor file name has the same length: the file is named after the BPEL file it is tied to. So to get the base name, the part without the "_sensor.xml" postfix, we need to determine the length of the filename. A script that determines that can easily be extracted from the script above:
  <scriptdef name="getlength" language="javascript">
    <attribute name="text"/>
    <attribute name="property"/>
    <![CDATA[
       var text = attributes.get("text");
       var length = text.length();
       project.setProperty(attributes.get("property"), length);
     ]]>
  </scriptdef>
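Analogous to the substring definition, it can be called like this (the result property name is just an example of my own):
    <getlength text="${sensor.file.name}" property="sensor.file.name.length"/>
    <echo message="Sensor file name length: ${sensor.file.name.length}"></echo>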

Perfect! Using this I could create the logic in ANT to determine the sensorAction file name. However, I thought it would be easier to determine the filename in JavaScript all the way, using the strengths of the more suitable language at hand:
  <!-- Script to get the sensorAction filename based on the sensor filename. 
  1. Cut the postfix "_sensor.xml" from the filename.
  2. Add "_sensorAction.xml" to the base filename.
  -->
  <scriptdef name="getsensoractionfilename" language="javascript">
    <attribute name="sensorfilename"/>
    <attribute name="property"/>
    <![CDATA[
       var sensorFilename = attributes.get("sensorfilename");
       var sensorFilenameLength = sensorFilename.length();
       var postfixLength = "_sensor.xml".length();
       var sensorFilenameBaseLength=sensorFilenameLength-postfixLength;
       var sensorActionFilename=sensorFilename.substring(0, sensorFilenameBaseLength)+"_sensorAction.xml";
       project.setProperty(attributes.get("property"), sensorActionFilename);
     ]]>
  </scriptdef>
And then I can get the sensorAction filename as follows:
    <getsensoractionfilename sensorfilename="${sensor.file.name}" property="sensorAction.file.name"/>
    <echo message="Sensor Action file: ${sensorAction.file.name}"></echo>

Superb! I already found ANT a powerful language/tool. But with a few simple JavaScript snippets you can extend it easily.
Notice, by the way, the use of XSLT in the Scan JCA adapters files article. You can read XML files as properties, but to do that conveniently you need to transform a file like the sensors.xml in a way that lets you easily reference the properties following the element hierarchy. This is also explained in that article.
I'll continue with my sensor scan script. Maybe I'll write about it when it's done.

Friday, 10 January 2020

My Weblogic on Kubernetes Cheatsheet, part 1.

Last week I had the honour to present at the UKOUG TechFest 19, together with my 'partner in crime', I think I can say now: Simon Haslam. We combined our sessions into a part 1 and a part 2.

For me this presentation is the result of having done a workshop at the PaaSForum in Mallorca, and then working that into a setup where I was able to run the MedRec Weblogic sample application against a managed database under Kubernetes.

Kubernetes Weblogic Operator Tutorial

I already wrote a blog about my workshop at the PaaSForum this year, but Marc Lameriks from Amis did a walkthrough on the workshop. It basically comes down to this tutorial, which you can do as a self-paced tutorial. Or check out a Meetup in your neighbourhood. If you're in the Netherlands, we'll be happy to organize one, or if you like I could come over to your place and we could set something up. See also the links at the end of part 2 of our presentations for more info on the tutorial.

I did the tutorial more or less three times now: once at the PaaSForum, then I re-did it, but deliberately changed namespace names, the domain name, etc., just to see where the dependencies are, and actually to see where the pitfalls are. It's based on my method of getting to know an unfamiliar city: deliberately get lost in it. Two years ago we moved to another part of Amersfoort. To get to know my new neighbourhood, I often took a different way home than the one I left by. And this is basically what I did with the tutorial too.

The last time I did it was to try to run a more real-life application with an actual database. And therefore I set up a new OKE cluster, this time in a compartment of our own company cloud subscription. The interesting part of that is that you work with a normal, customer-like subscription within a compartment. Another form of a deliberate D-Tour. But also to set up a database and see that configuration overrides to change your runtime datasource connection pool actually work.





Cheatsheet

When doing the tutorial, you'll find that besides all the configuration on the cloud pages to set up your OKE cluster and configure Oracle Pipelines, you have to enter a lot of command-line commands. Most of them are kubectl commands, some helm, and a bit of OCI command-line interface. Doing it the first time I soon got lost in the meaning of them and what I was doing with them. Also, most kubectl commands work with namespaces, where your Weblogic has another namespace than the Weblogic Operator. And as is my habit nowadays, I soon put the commands in smart but simple scripts. And those I want to share with you. Maybe not all, but at least enough so you'll get the idea.

I also found the official kubernetes.io kubectl cheat sheet and this one on GitHub. But those are more explanations of the particular commands.

I found it helpful to set up this cheatsheet following the tutorial. I guess this helps in relating the commands to what they're meant for.

Shell vs. Alias

At the UKOUG TechFest, someone pointed out that you could use aliases too. Of course. You could define an alias like:
alias k=kubectl

However, you'll still need to extend every command with the proper namespace, pod naming, etc.
Therefore, I used the approach of creating an oke_env.sh script that I can include in every script, and a property file to store the credentials to put in secrets. Then I call (source) the oke_env.sh script in every other script.

Setup Oracle Kubernetes Engine instance on Oracle Cloud Infrastructure

These scripts refer to the first part of the tutorial: 0. Setup Oracle Kubernetes Engine instance on Oracle Cloud Infrastructure.

oke_env.sh

It all starts with my oke_env.sh. Here you'll find all the necessary variables that are used in most other scripts. I think in a next iteration I would move the OCID_USER, OCID_TENANCY and OCID_CLUSTERID to my credential properties file. But I introduced that later on, during my experiments.

#!/bin/bash
echo Set OKE Environment
export OCID_USER="ocid1.user.oc1..{here goes that long string of characters}" 
export OCID_TENANCY="ocid1.tenancy.oc1..{here goes that other long string of characters}"
export OCID_CLUSTERID="ocid1.cluster.oc1.eu-frankfurt-1.{yet another long string of characters}"
export REGION="eu-frankfurt-1" # or your other region
export CLR_ADM_BND=makker-cluster-admin-binding
export K8S_NS="medrec-weblogic-operator-ns"
export K8S_SA="medrec-weblogic-operator-sa"
export HELM_CHARTS_HOME=/u01/content/weblogic-kubernetes-operator
export WL_OPERATOR_NAME="medrec-weblogic-operator"
export WLS_DMN_NS=medrec-domain-ns
export WLS_USER=weblogic
export WLS_DMN_NAME=medrec-domain
export WLS_DMN_CRED=medrec-domain-weblogic-credentials
export OCIR_CRED=ocirsecret
export WLS_DMN_YAML=/u01/content/github/weblogic-operator-medrec-admin/setup/medrec-domain/domain.yaml
export WLS_DMN_UID=medrec-domain
export MR_DB_CRED=mrdbsecret
export ADM_POD=medrec-domain-adminserver
export MR1_POD=medrec-domain-medrec-server1
export MR2_POD=medrec-domain-medrec-server2
export MR3_POD=medrec-domain-medrec-server3
export DMN_HOME=/u01/oracle/user_projects/domains/medrec-domain
export LCL_LOGS_HOME=/u01/content/logs
export ADM_SVR=AdminServer
export MR_SVR1=medrec-server1
export MR_SVR2=medrec-server2
export MR_SVR3=medrec-server3

credentials.properties


This stores the most important credentials. That allows me to abstract those from the scripts. However, as mentioned, I should move the OCID_USER, OCID_TENANCY and OCID_CLUSTERID variables to this file.
weblogic.user=weblogic
weblogic.password=welcome1
ocir.user=my.email@address.nl
ocir.password=my;difficult!pa$$w0rd
ocir.email=my.email@address.nl
oci.tenancy=ourtenancy
oci.region=fra
db.medrec.username=MEDREC_OWNER
db.medrec.password=MEDREC_PASSWORD
db.medrec.url=jdbc:oracle:thin:@10.11.12.13:1521/pdb1.subsomecode.medrecokeclstr.oraclevcn.com
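As an illustration of how scripts can consume this properties file: a hypothetical create_domain_credentials.sh could grep the values and feed them to kubectl to create the domain credentials secret. The prop helper and the script itself are my own sketch, not literal tutorial code:

#!/bin/bash
SCRIPTPATH=$(dirname $0)
#
. $SCRIPTPATH/oke_env.sh
# Hypothetical helper: fetch a value from credentials.properties by key.
function prop {
  grep "^$1=" $SCRIPTPATH/credentials.properties | cut -d'=' -f2-
}
echo Create Weblogic domain credentials secret $WLS_DMN_CRED in $WLS_DMN_NS
kubectl create secret generic $WLS_DMN_CRED \
  -n $WLS_DMN_NS \
  --from-literal=username="$(prop weblogic.user)" \
  --from-literal=password="$(prop weblogic.password)"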

create_kubeconfig.sh

After having set up the OKE cluster in OCI, and configured your OCI CLI, the first actual command you issue is to create a Kube Config file, using the OCI CLI. This one is normally executed only once for every setup. So this script is merely there to document my commands:

#!/bin/bash
SCRIPTPATH=$(dirname $0)
#
. $SCRIPTPATH/oke_env.sh
echo "Create Kubeconfig -> Copy command from Access Kube Config from cluster"
mkdir -p $HOME/.kube
oci ce cluster create-kubeconfig --cluster-id $OCID_CLUSTERID --file $HOME/.kube/config --region $REGION --token-version 2.0.0 

The SCRIPTPATH variable declaration is a trick to be able to refer to other scripts relative to that variable. Then, as you will see in all my subsequent scripts, I source the oke_env.sh script. Doing so I can refer to the particular variables in the oci command. Therefore, as described in the tutorial, you should note down your OCID_CLUSTERID and update that in the oke_env.sh file, as well as the REGION variable.

Note by the way, that recently Oracle Kubernetes Engine upgraded to only support the Kubeconfig token version 2.0.0. See also this document.

getnodes.sh

This one is a bit dumb, and could as easily be created by an alias:
#!/bin/bash
SCRIPTPATH=$(dirname $0)
#
. $SCRIPTPATH/oke_env.sh
echo Get K8s nodes
kubectl get node

Even the call to oke_env.sh doesn't really add anything, but it is a base for the other scripts, and when namespaces need to be added it makes sense.
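For comparison, the alias variant would simply be:
alias getnodes='kubectl get node'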

create_clr_rolebinding.sh

The last part of setting up the OKE cluster is to create a role binding. This is done with:
#!/bin/bash
SCRIPTPATH=$(dirname $0)
#
. $SCRIPTPATH/oke_env.sh
echo Create cluster role binding
echo kubectl create clusterrolebinding $CLR_ADM_BND --clusterrole=cluster-admin --user=$OCID_USER
kubectl create clusterrolebinding $CLR_ADM_BND --clusterrole=cluster-admin --user=$OCID_USER

Install WebLogic Operator

The second part of the tutorial is about setting up your project environment with GitHub and having Oracle Pipelines build your project's image. This is not particularly related to K8S, so there are no relevant scripts there.
The next part of the tutorial is about installing the operator: 2. Install WebLogic Operator.

create_kubeaccount.sh

Installing the Weblogic Operator is done using Helm. As far as I understand, Helm is a sort of package manager for Kubernetes. A funny thing about the naming is that where Kubernetes is Greek for the steering officer on a ship, a helm is the steering device of a ship. Helm makes use of Tiller, the server-side part of Helm. A tiller is the "steering stick" or lever that operates the helm. (To be honest, to me it feels a bit the other way around; I guess I would have named the server side Helm and the client Tiller.)

The first step is to create a Helm cluster-admin role binding, a Kubernetes namespace for the Weblogic Operator and a serviceaccount within this namespace. To do so, the script create_kubeaccount.sh does the following:
#!/bin/bash
SCRIPTPATH=$(dirname $0)
#
. $SCRIPTPATH/oke_env.sh
echo Create helm-user-cluster-admin-role
cat << EOF | kubectl apply -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: helm-user-cluster-admin-role
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: default
  namespace: kube-system
EOF
echo Create namespace $K8S_NS
kubectl create namespace $K8S_NS
echo kubectl create serviceaccount -n $K8S_NS $K8S_SA
kubectl create serviceaccount -n $K8S_NS $K8S_SA

install_weblogic_operator.sh


Installing the Weblogic Operator is done with this script. Notice that you need to execute the helm command within the folder in which you checked out the Weblogic Operator GitHub repository.
#!/bin/bash
SCRIPTPATH=$(dirname $0)
#
. $SCRIPTPATH/oke_env.sh
echo Install Weblogic Operator $WL_OPERATOR_NAME
cd $HELM_CHARTS_HOME
helm install kubernetes/charts/weblogic-operator \
  --name $WL_OPERATOR_NAME \
  --namespace $K8S_NS \
  --set image=oracle/weblogic-kubernetes-operator:2.3.0 \
  --set serviceAccount=$K8S_SA \
  --set "domainNamespaces={}"
cd $SCRIPTPATH

The script cds to the local Weblogic Operator repository and executes helm. At the beginning of the script the folder containing the script is saved as SCRIPTPATH. After running the helm command, it cds back to it.
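Notice also the empty domainNamespaces set. Later on, when the domain namespace exists, the operator has to be told to manage it. According to the Weblogic Operator documentation that is done with a helm upgrade; a sketch of mine, using the oke_env.sh variables and the same Helm 2 syntax as above:

#!/bin/bash
SCRIPTPATH=$(dirname $0)
#
. $SCRIPTPATH/oke_env.sh
echo Have $WL_OPERATOR_NAME manage namespace $WLS_DMN_NS
cd $HELM_CHARTS_HOME
# Re-use the existing release values, only extending the managed namespaces.
helm upgrade $WL_OPERATOR_NAME kubernetes/charts/weblogic-operator \
  --reuse-values \
  --set "domainNamespaces={$WLS_DMN_NS}" \
  --wait
cd $SCRIPTPATH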

delete_weblogic_operator.sh

During my investigations the Weblogic Operator was upgraded. If you take a closer look at the command in the tutorial, you'll notice that the image used there is oracle/weblogic-kubernetes-operator:2.0, but I used oracle/weblogic-kubernetes-operator:2.3.0 in the script above.
I found it useful to be able to delete the operator, so I could re-install it again. To delete the Weblogic Operator, run the delete_weblogic_operator.sh script:

#!/bin/bash
SCRIPTPATH=$(dirname $0)
#
. $SCRIPTPATH/oke_env.sh
echo Delete Weblogic Operator $WL_OPERATOR_NAME
cd $HELM_CHARTS_HOME
helm del --purge $WL_OPERATOR_NAME 
cd $SCRIPTPATH

Again, in this script the helm command is surrounded by a cd to the helm charts folder of the local Weblogic Operator GitHub repository, and back again to the current folder.

getpods.sh

After having installed the Weblogic Operator, you can list the pods of the kubernetes namespace it runs in, using this script:
#!/bin/bash
SCRIPTPATH=$(dirname $0)
#
. $SCRIPTPATH/oke_env.sh
echo Get K8s pods for $K8S_NS
kubectl get po -n $K8S_NS

list_wlop.sh

You can check the Weblogic Operator installation by performing a helm list of the Weblogic Operator charts. I wrapped that into this script:
#!/bin/bash
SCRIPTPATH=$(dirname $0)
#
. $SCRIPTPATH/oke_env.sh
echo List Weblogic Operator $WL_OPERATOR_NAME
cd $HELM_CHARTS_HOME
helm list $WL_OPERATOR_NAME
cd $SCRIPTPATH

Conclusion

If you followed the workshop, and maybe used my scripts, up till now you have installed the Weblogic Operator. Let's not make this article too long and call this part 1, and quickly move on to part 2, to install/configure and monitor the rest of the setup. Maybe at the end I'll move these contents to an easy-to-navigate set of articles.









Monday, 2 December 2019

Create a Vagrant box with Oracle Linux 7 Update 7 Server with GUI

Yesterday and today I have been attending the UKOUG TechFest '19 in Brighton. And it got me eager to try things out, for instance with new Oracle DB 19c features. And therefore I should update my Vagrant boxes to be able to install one. But I realized my base box is still on Oracle Linux 7U5, and so I wanted to have a neatly fresh, latest OL 7U7 box.

Use Oracle's base box

Now, last year I wrote about how to create your own Vagrant Base Box: Oracle Linux 7 Update 5 is out: time to create a new Vagrant Base Box. So I could create my own, but already quite some time ago I found out that Oracle supplies those base boxes.

They're made available at https://yum.oracle.com/boxes, and there are boxes for OL6, OL7 and even OL8. I want to use OL 7U7, and thus I got started with that one. It's neatly described at the mentioned link and it all comes down to:

$ vagrant box add --name <name> <url>
$ vagrant init <name>
$ vagrant up
$ vagrant ssh

And in my case:

$ vagrant box add --name ol77 https://yum.oracle.com/boxes/oraclelinux/ol77/ol77.box
$ vagrant init ol77
$ vagrant up
$ vagrant ssh

Before you do that vagrant up, you might want to edit your Vagrantfile, to add a name for your VM:
BOX_NAME="ol77"
VM_NAME="ol77"
# All Vagrant configuration is done below. The "2" in Vagrant.configure
# configures the configuration version (we support older styles for
# backwards compatibility). Please don't change it unless you know what
# you're doing.
Vagrant.configure("2") do |config|
  # The most common configuration options are documented and commented below.
  # For a complete reference, please see the online documentation at
  # https://docs.vagrantup.com.

  # Every Vagrant development environment requires a box. You can search for
  # boxes at https://vagrantcloud.com/search.
  config.vm.box = BOX_NAME

  ...

  # Provider-specific configuration so you can fine-tune various
  # backing providers for Vagrant. These expose provider-specific options.
  # Example for VirtualBox:
  #
  config.vm.provider "virtualbox" do |vb|
    vb.name = VM_NAME
  #   # Display the VirtualBox GUI when booting the machine
  #   vb.gui = true
  #
  #   # Customize the amount of memory on the VM:
  #   vb.memory = "1024"
  end
  #
 ...

Otherwise your VM name in VirtualBox would be something like ol7_default_1235897983: something cryptic with a random number.

If you do a vagrant up now it will boot up nicely.

VirtualBox Guest Additions

The box's VirtualBox Guest Additions are at version 6.12, while my VirtualBox installation already has 6.14. I found it handy to have a plugin that auto-updates them. My co-Oracle-ACE Maarten Smeets wrote about that earlier. It comes down to executing the following on a command line:
vagrant plugin install vagrant-vbguest

If you do a vagrant up now, it will update the Guest Additions. However, to be able to do so, it needs to install all kinds of kernel packages to compile the drivers. So be aware that this might take some time, and you'll need an internet connection.

Server with GUI

The downloaded box is a Linux server install, without a UI. This is probably fine for most of the installations you do. But I like to be able to log on to the desktop from time to time, and I want to be able to connect to it using MobaXterm and run a UI-based installer or application. A bit of X support is handy. How to do that I found at this link.

GUI support is one of the group packages that are supported by Oracle Linux 7, and this works exactly the same as RHEL7 (wonder why that is?).

To list the available package groups, you can do:

[vagrant@localhost ~]$ sudo  yum group list
There is no installed groups file.
Maybe run: yum groups mark convert (see man yum)
Available Environment Groups:
   Minimal Install
   Infrastructure Server
   File and Print Server
   Cinnamon Desktop
   MATE Desktop
   Basic Web Server
   Virtualization Host
   Server with GUI
Available Groups:
   Backup Client
   Base
   Cinnamon
   Compatibility Libraries
   Console internet tools
   Development tools
   E-mail server
   Educational Software
   Electronic Lab
   Fedora Packager
   Fonts
   General Purpose Desktop
   Graphical Administration Tools
   Graphics Creation Tools
   Hardware monitoring utilities
   Haskell
   Input Methods
   Internet Applications
   KDE Desktop
   Legacy UNIX Compatibility
   MATE
   Milkymist
   Network Infrastructure Server
   Networking Tools
   Office Suite and Productivity
   Performance Tools
   Scientific support
   Security Tools
   Smart card support
   System Management
   System administration tools
   Technical Writing
   TurboGears application framework
   Web Server
   Web Servlet Engine
   Xfce
Done

(After having executed vagrant ssh.)
You'll find 'Server with GUI' as one of the options. This will install all the necessary packages to run Gnome. But if you want to have KDE, there's also a package group for that.
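For KDE that would presumably be (I did not try this one):
[vagrant@localhost ~]$ sudo yum groupinstall 'KDE Desktop'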

To install 'Server with GUI' you would run:
[vagrant@localhost ~]$ sudo yum groupinstall 'Server with GUI'
There is no installed groups file.
Maybe run: yum groups mark convert (see man yum)
Resolving Dependencies
--> Running transaction check
---> Package ModemManager.x86_64 0:1.6.10-3.el7_6 will be installed
--> Processing Dependency: ModemManager-glib(x86-64) = 1.6.10-3.el7_6 for package: ModemManager-1.6.10-3.el7_6.x86_64
--> Processing Dependency: libmbim-utils for package: ModemManager-1.6.10-3.el7_6.x86_64
--> Processing Dependency: libqmi-utils for package: ModemManager-1.6.10-3.el7_6.x86_64
--> Processing Dependency: libqmi-glib.so.5()(64bit) for package: ModemManager-1.6.10-3.el7_6.x86_64
....
....
 python-firewall                              noarch    0.6.3-2.0.1.el7_7.2                 ol7_latest            352 k
 systemd                                      x86_64    219-67.0.1.el7_7.2                  ol7_latest            5.1 M
 systemd-libs                                 x86_64    219-67.0.1.el7_7.2                  ol7_latest            411 k
 systemd-sysv                                 x86_64    219-67.0.1.el7_7.2                  ol7_latest             88 k

Transaction Summary
========================================================================================================================
Install  303 Packages (+770 Dependent packages)
Upgrade               (   7 Dependent packages)

Total download size: 821 M
Is this ok [y/d/N]:


It will list a whole bunch of packages with dependencies that it will install. If you're up to it, at this point you confirm with 'y'. Notice that a bit over 1000 packages will be installed, so it will be busy with that for a while.
This is because it will install the complete Gnome Desktop environment.
You could also do:
[vagrant@localhost ~]$ sudo yum groupinstall 'X Window System' 'GNOME'

That will install only the minimum packages necessary to run Gnome. I did not try that yet.
When it has finished installing all the packages, the one thing left is to change the default runlevel, since obviously you want to start in the GUI by default. In most cases, at least, I think.
This is done by:
[vagrant@localhost ~]$ sudo systemctl set-default graphical.target

I could have put that in a provision script, like I've done before. And maybe I will do that.
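Such a provision script could be as simple as this sketch (untested; wired into the Vagrantfile with config.vm.provision "shell", path: "provision.sh"):

#!/bin/bash
# provision.sh - install the GUI package group and boot into it by default.
yum -y groupinstall 'Server with GUI'
systemctl set-default graphical.target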

Package the box

You will have noticed that it takes quite some time to update the kernel packages, install the latest Guest Additions and install the GUI desktop. To prevent us from doing that over and over again, I thought it wise to package the box into an ol77SwGUI box (Server with GUI). I described that in my previous article last year:
vagrant package --base ol77_default_1575298630482_71883 --output d:\Projects\vagrant\boxes\OL77SwGUIv1.0.box

The result

This will deliver you a Vagrant Box/VirtualBox image with:
  • Provider: VirtualBox
  • 64 bit
  • 2 vCPUs
  • 2048 MB RAM
  • Minimal package set installed
  • 32 GiB root volume
  • 4 GiB swap
  • XFS root filesystem
  • Extra 16GiB VirtualBox disk image attached, dynamically allocated
  • Guest additions installed
  • Yum configured for Oracle Linux yum server. _latest and _addons repos enabled as well as _optional_latest, _developer, _developer_EPEL where available.
  • And as an extra addon: Server with GUI installed.
Or basically more or less what I have in my own base box. What I'm less happy with is the extra 16GiB disk image attached. I want a bigger disk for my installations, or at least for the data. I'll need to figure out what I want to do with that. Maybe I'll add an extra disk and reformat the lot with a disk-spanning Logical Volume based filesystem.

Update

I found that the box from Oracle lacks video memory to run the VM in GUI mode. Because there was no GUI in the VM, 8 MB was sufficient.
I added the following to change the video memory:
  config.vm.provider "virtualbox" do |vb|
    vb.name = VM_NAME
  #   # Display the VirtualBox GUI when booting the machine
  #   vb.gui = true
  #
  #   # Customize the amount of memory on the VM:
  #   vb.memory = "1024"
  # 
  # https://stackoverflow.com/questions/24231620/how-to-set-vagrant-virtualbox-video-memory
    vb.customize ["modifyvm", :id, "--vram", "128"]
  end

Based on the hint found here on StackOverflow.
I also added this on my GitHub vagrant project.


Thursday, 28 November 2019

SOA Suite 12c Stumbling on parsing Ampersands


Yesterday I ran into a problem parsing XML in BPEL. A bit of context: I get messages from a JMS queue that I read 'opaque', because I want to be able to dispatch the messages to different processes based on a generic WSDL, but with a different payload.

So after the Base64 decode, for which I have a service, I need to parse the content to XML. Now, I used to use the oraext:parseEscapedXML() function for that. This function is known to have bugs, but I traced those down to BPEL 10g. And I'm on 12.2.1.3 now.

Still I got exceptions as:

<bpelFault><faultType>0</faultType><subLanguageExecutionFault xmlns="http://docs.oasis-open.org/wsbpel/2.0/process/executable"><part name="summary"><summary>An error occurs while processing the XPath expression; the expression is oraext:parseEscapedXML($Invoke_Base64EncodeDecodeService_decode_OutputVariable.part1/ns5:document)</summary></part><part name="code"><code>XPath expression failed to execute</code></part><part name="detail"><detail>XPath expression failed to execute.
An error occurs while processing the XPath expression; the expression is oraext:parseEscapedXML($Invoke_Base64EncodeDecodeService_decode_OutputVariable.part1/ns5:document)
The XPath expression failed to execute; the reason was: oracle.fabric.common.xml.xpath.XPathFunctionException: Expected ';'.
Check the detailed root cause described in the exception message text and verify that the XPath query is correct.
</detail></part></subLanguageExecutionFault></bpelFault>

Or:

<bpelFault><faultType>0</faultType><subLanguageExecutionFault xmlns="http://docs.oasis-open.org/wsbpel/2.0/process/executable"><part name="summary"><summary>An error occurs while processing the XPath expression; the expression is oraext:parseEscapedXML($Invoke_Base64EncodeDecodeService_decode_OutputVariable.part1/ns5:document)</summary></part><part name="code"><code>XPath expression failed to execute</code></part><part name="detail"><detail>XPath expression failed to execute.
An error occurs while processing the XPath expression; the expression is oraext:parseEscapedXML($Invoke_Base64EncodeDecodeService_decode_OutputVariable.part1/ns5:document)
The XPath expression failed to execute; the reason was: oracle.fabric.common.xml.xpath.XPathFunctionException: Expected name instead of  .
Check the detailed root cause described in the exception message text and verify that the XPath query is correct.
</detail></part></subLanguageExecutionFault></bpelFault>

It turns out that it was due to ampersands (&amp;) in the message. The function oraext:parseEscapedXML() is known to stumble on that.

A workaround is suggested in a forum on Integration Cloud Service (ICS): use oraext:get-content-as-string() first, and feed the contents to oraext:parseEscapedXML(). It turns out that helps, although I had to fiddle around with XPath expressions to get the correct child element, since I also got the parent element surrounding the part I actually wanted to parse.
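Combined, that boils down to an expression like this (sketched after the expression from the fault messages above; your variable and element names will differ):

oraext:parseEscapedXML(oraext:get-content-as-string($Invoke_Base64EncodeDecodeService_decode_OutputVariable.part1/ns5:document))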

But then I found this blog, suggesting that it was replaced by oraext:parseXML() in 12c (I found that it is actually introduced in 11g).

Strange that I didn't find this earlier. Digging deeper down memory lane, I think I must have seen the function before. However, it shows I'm still learning all the time.

Thursday, 10 October 2019

Oracle Ground Breakers Appreciation Day - Something about Weblogic....

Our most appreciated Oracle ACE Director Tim Hall organizes this yearly initiative, under this year's name Oracle Ground Breakers Appreciation Day, and appointed this day to blog about our favourite Oracle technology, service or sub-community.

Last week I presented 'Oracle Kubernetes Managed Weblogic Revival': the introduction of the Weblogic Kubernetes Operator opens up the future for Weblogic.

This week I deliver our Weblogic 12c Tuning and Troubleshooting training for ATOS The Netherlands in Groningen. So, hmmm, what to blog about on this year's Ground Breakers Appreciation Day? There are several other technologies that I use and follow, but mostly around Fusion Middleware: SOA Suite, BPM Suite and Oracle Service Bus. But also Oracle Integration Cloud, which in fact heavily depends on these technologies. And honestly, the bottom line here is Oracle Weblogic.

I frequently hear voices that state that Customers should move away from Weblogic. Honestly, I don't relate to that. It has served customers very well over the last decade under the Oracle brand and before. And I still think it was a smart move of Oracle to acquire it and make it a strategic part of the Oracle platform.

The last few years I've been active on the community.oracle.com forums, where I've grown to level 13, almost level 14, by answering questions and participating in discussions around Fusion Middleware technologies. My first thank-you therefore goes to this community, for having me participate.

My second thank-you goes to the whole Weblogic and related Fusion Middleware tool stack. During the Tuning and Troubleshooting training I realize again how smart and rich the Weblogic Suite is. Although, as I stated before, Oracle could do something about the footprint. It seems to me that there are quite some duplicate libraries, or different versions of the same library. And maybe some old parts could be cut out: only support SAML 2.0 and improve that, for instance.


One great, but quite rarely used, feature of Weblogic is the Weblogic Diagnostic Framework, and especially the Policies and Actions part. It is quite difficult to configure (the console's UI does not help here and there) and to think of uses for it. However, every time I present it, I find myself thinking: I should use this more often in my daily development work.

So I started to create a WLST script to create a Diagnostic Module with a few collectors, a JMS Notification action and 2 policies on it. It is actually the solution to Lab 6 of our training. To me it is a starting point to expand upon. You could create a version per technology: OSB, SOA Suite, or a custom application like MedRec. And you could create a more generic version that, based on different property files, configures different collectors, policies and actions specific to a target environment.

WLDF Diagnostic Module

The script first creates a diagnostic module like this:
def createDiagnosticModule(diagModuleName, targetServerName):
  module=getMBean('/WLDFSystemResources/'+diagModuleName)
  if module==None:
    print 'Create new Diagnostic Module '+diagModuleName
    edit()
    startEdit()
    cd('/')
    module = cmo.createWLDFSystemResource(diagModuleName)
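    # getMServer is presumably a helper defined elsewhere in the complete script (see the GitHub repo) that looks up the target Server MBean.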
    targetServer=getMServer(targetServerName)
    module.addTarget(targetServer)
    # Activate changes
    save()
    activate(block='true')
    print 'Diagnostic Module created successfully.'
  else:
    print 'Diagnostic Module '+diagModuleName+' already exists!'
  return module

It checks if the Diagnostic Module already exists as a WLDFSystemResource. If not, it will create it as module = cmo.createWLDFSystemResource(diagModuleName) and target it to a targetServer.

Collectors

Then for creating a collector I created the following function:
def createCollector(diagModuleName, metricType, namespace, harvestedInstances,attributesCsv):
  harvesterName='/WLDFSystemResources/'+diagModuleName+'/WLDFResource/'+diagModuleName+'/Harvester/'+diagModuleName 
  harvestedTypesPath=harvesterName+'/HarvestedTypes/';
  print 'Check Collector '+harvestedTypesPath+metricType
  collector=getMBean(harvestedTypesPath+metricType)
  if collector==None:
    print 'Create new Collector for '+metricType+' in '+diagModuleName
    edit()
    startEdit()
    cd(harvestedTypesPath)
    collector=cmo.createHarvestedType(metricType)
    cd(harvestedTypesPath+metricType)
    attributeArray=jarray.array([String(x.strip()) for x in attributesCsv.split(',')], String)
    collector.setHarvestedAttributes(attributeArray)
    collector.setHarvestedInstances(harvestedInstances)
    collector.setNamespace(namespace)
    # Activate changes
    save()
    activate(block='true')
    print 'Collector created successfully.'
  else:
    print 'Collector '+metricType+' in '+diagModuleName+' already exists!'
  return collector

Again, it first checks for the existence of the collector, as a so-called HarvestedType, within a WLDFResource in the Diagnostic Module. If it does not exist, it creates it. Here you need to provide the metricType as a HarvestedType, and then the attributes that you want to collect. The function expects those as a comma-separated-values string, which it converts to an array via a list.
Then you can provide Metric Type Instances, or None if you want to collect over all instances.

You can call this as:
createCollector(diagModuleName, 'weblogic.management.runtime.JDBCDataSourceRuntimeMBean','ServerRuntime', None, 'ActiveConnectionsCurrentCount,CurrCapacity,LeakedConnectionCount')

or if you want to add instances, it's also done by creating an array:
    harvestedInstancesList=[]
    harvestedInstancesList.append('com.bea:ApplicationRuntime=medrec,Name=TTServer_/medrec,ServerRuntime=TTServer,Type=WebAppComponentRuntime')
    harvestedInstances=jarray.array([String(x.strip()) for x in harvestedInstancesList], String)    
    createCollector(diagModuleName, 'weblogic.management.runtime.WebAppComponentRuntimeMBean','ServerRuntime', harvestedInstances,'OpenSessionsCurrentCount')

This is a bit more complicated, since the strings describing the instances that you want to add are comma-separated values themselves.

Actions

Creating an action is again pretty simple, for a JMS Notification that is:
def createJmsNotificationAction(diagModuleName, actionName, destination, connectionFactory):
  policiesActionsPath='/WLDFSystemResources/'+diagModuleName+'/WLDFResource/'+diagModuleName+'/WatchNotification/'+diagModuleName
  jmsNotificationPath=policiesActionsPath+'/JMSNotifications/'
  print 'Check notification action '+jmsNotificationPath+actionName
  jmsNtfAction=getMBean(jmsNotificationPath+actionName)
  if jmsNtfAction==None:
    print 'Create new JMS NotificationAction '+actionName+' in '+diagModuleName
    edit()
    startEdit()
    cd(policiesActionsPath)
    jmsNtfAction=cmo.createJMSNotification(actionName)
    jmsNtfAction.setEnabled(true)
    jmsNtfAction.setTimeout(0)
    jmsNtfAction.setDestinationJNDIName(destination)
    jmsNtfAction.setConnectionFactoryJNDIName(connectionFactory)
    # Activate changes
    save()
    activate(block='true')
    print 'JMS NotificationAction created successfully.'
  else:
    print 'JMS NotificationAction '+actionName+' in '+diagModuleName+' already exists!'
  return jmsNtfAction

There are different types of actions, so they're created differently. You can add one using the console and record that. That's what I did; I then transformed the recorded script into the functions shown here.

Policies


Policies can be created with the following function. You need to provide a rule type and a rule expression, plus an array of actions you want to add:
def createPolicy(diagModuleName, policyName, ruleType, ruleExpression, actions):  
  policiesActionsPath='/WLDFSystemResources/'+diagModuleName+'/WLDFResource/'+diagModuleName+'/WatchNotification/'+diagModuleName
  policiesPath=policiesActionsPath+'/Watches/'
  print 'Check Policy '+policiesPath +policyName
  policy=getMBean(policiesPath +policyName)
  if policy==None:
    print 'Create new Policy '+policyName+' in '+diagModuleName
    edit()
    startEdit()
    cd(policiesActionsPath)
    policy=cmo.createWatch(policyName)
    policy.setEnabled(true)
    policy.setExpressionLanguage('EL')
    policy.setRuleType(ruleType)
    policy.setRuleExpression(ruleExpression)
    policy.setAlarmType('AutomaticReset')
    policy.setAlarmResetPeriod(300000)
    cd(policiesPath +policyName)
    set('Notifications', actions)
    schedule=getMBean(policiesPath +policyName+'/Schedule/'+policyName)
    schedule.setMinute('*')
    schedule.setSecond('*')
    schedule.setSecond('*/15')
    # Activate changes
    save()
    activate(block='true')
    print 'Policy created successfully.'
  else:
    print 'Policy '+policyName+' in '+diagModuleName+' already exists!'
  return policy

An example of calling this is:
actionsList=[]
    actionsList.append('com.bea:Name=JMSAction,Type=weblogic.diagnostics.descriptor.WLDFJMSNotificationBean,Parent=[TTDomain]/WLDFSystemResources[TTDiagnostics],Path=WLDFResource[TTDiagnostics]/WatchNotification[TTDiagnostics]/JMSNotifications[JMSAction]')
    actions=jarray.array([ObjectName(action.strip()) for action in actionsList], ObjectName)    
    createPolicy(diagModuleName,'HiStuckThreads', 'Harvester', 'wls:ServerHighStuckThreads(\"30 seconds\",\"10 minutes\",5)', actions)

As you can see, the actions to add are actually expressions referencing the MBeans of the actions configured earlier. They apparently depend on the type and the diagnostic module that contains them. So I could create a function that assembles this expression. If you want a custom rule expression, you can create it as follows:
actionsList=[]
ruleExpression='wls:ServerGenericMetricRule(\"com.bea:Name=MedRecGlobalDataSourceXA,ServerRuntime=TTServer,Type=JDBCDataSourceRuntime\",\"WaitingForConnectionHighCount\",\">\",0,\"30 seconds\",\"10 minutes\")'
    createPolicy(diagModuleName,'OverloadedDS', 'Harvester', ruleExpression, actions)

Again this is an expression that could be assembled using a function.
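To give an idea of how these functions hang together, the overall calling sequence is roughly as follows. This is my summary sketch of the full script; the JNDI names are made-up examples, and the imports (jarray, String, ObjectName) come from the complete script on GitHub:

diagModuleName='TTDiagnostics'
createDiagnosticModule(diagModuleName, 'TTServer')
createCollector(diagModuleName, 'weblogic.management.runtime.JDBCDataSourceRuntimeMBean','ServerRuntime', None, 'ActiveConnectionsCurrentCount,CurrCapacity,LeakedConnectionCount')
# The destination and connection factory JNDI names are examples: use those of your notification queue.
createJmsNotificationAction(diagModuleName, 'JMSAction', 'jms/wldfNotificationQueue', 'jms/wldfNotificationCF')
actionsList=['com.bea:Name=JMSAction,Type=weblogic.diagnostics.descriptor.WLDFJMSNotificationBean,Parent=[TTDomain]/WLDFSystemResources[TTDiagnostics],Path=WLDFResource[TTDiagnostics]/WatchNotification[TTDiagnostics]/JMSNotifications[JMSAction]']
actions=jarray.array([ObjectName(action.strip()) for action in actionsList], ObjectName)
createPolicy(diagModuleName,'HiStuckThreads', 'Harvester', 'wls:ServerHighStuckThreads(\"30 seconds\",\"10 minutes\",5)', actions)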

Conclusion

The complete script can be reviewed and downloaded from my GitHub Repo.

I killed two birds with one stone: a thank-you to the Ground Breakers, fellow ACEs and other Oracle enthusiasts, and I guess my first article about the Weblogic Diagnostic Framework (but not my first one to include WLST scripts...). Happy OGB Appreciation Day y'all!

Tuesday, 27 August 2019

SOASuite Composite Sensors: the why and how...

Introduction

A long time ago BPEL PM was acquired by Oracle, and as part of the first release of SOA Suite (10g) it was a more or less standalone component. For initiated BPEL flow instances there were 2 tables in the SOA infrastructure database:
  1. cube_instance: bpel flow instances
  2. ci_indexes: 6 indexes related to the bpel flow that can be set with an embedded java call

These 2 tables store the BPEL instances, along with a set of indexes that you could, and in 11g and 12c still can, set with a value that you determine during the flow. Yes, these tables still exist in the SOA infra database. So, let's say in your BPEL you have several string-based variables that you fill with a value from the input message using an assign. Then within an Embedded Java activity, you can do something like:

//Get Variables 
String messageType = (String) getVariableData("messageType"); 
String messageId = (String) getVariableData("messageId"); 
String processId = (String) getVariableData("processId"); 
String referenceNr = (String) getVariableData("referenceNr"); 
String branchId = (String) getVariableData("branchId"); 
String cmrNr = (String) getVariableData("cmrNr"); 

//Set Title and indexes 
setFlowInstanceTitle("MyProcessFlow " + messageType + "-" + messageId); 
setIndex(1,messageType); 
setIndex(2,messageId); 
setIndex(3,processId); 
setIndex(4,referenceNr); 
setIndex(5,branchId); 
setIndex(6,cmrNr);

When you spin off a set of new instances, you can use the following query to find the particular instances:
select ci.flow_id, ci.cmpst_id cube_composite_id, ci.cikey cube_cikey, cix.index_1, cix.index_2, cix.index_3, cix.index_4, cix.index_5, cix.index_6 
from cube_instance ci
join ci_indexes cix on ci.cikey = cix.cikey
where index_1 like '123456789';

With the flow_id you can query the SCA flow instance (in 12c) and/or find the instance in EM.
select * from sca_flow_instance fi where fi.flow_id=100173;

Unfortunately, not even in 10g could you query on the indexes in EM directly. You need to query on them in the database and copy and paste the resulting flow_id in EM - FMW Control.
You might have done this in the past, or still do. You might have created a JSP that helps you with this. We did, in 10g at least.

Define Composite Sensors

Since 11g, there is a much more convenient way to do this. And it's all declarative and usable from EM. It's called Composite Sensors. You can read more about it in the docs.
I haven't blogged about it earlier because, ...., honestly, I hadn't used them much until lately.

Composite Sensors can be set in the composite editor:

This will get you to the following dialog:

Select one of the Services or References and click on the blue plus icon:
In this dialog, set a Name, check/validate the Service and Operation, and click on the pencil icon to define an Expression:


  • Variables: clicking this will provide you a navigator that will allow you to drilldown the variable structure of the service operation message type, to select the element to sense the value.
  • Expression: this will show you the expression builder you should be familiar with: it's the same as the one in the assign activity copy rules in bpel. It allows you to create more complex xpath statements like: substring($in.payload/doc:RegisterDocument/doc:Document/doc:BinairyObject/@fileName,0, 100)
  • Properties: allows you to select endpoint properties, for instance JCA properies as JMS Type, JMS CorrelationID. The same as the properties on a BPEL Invoke.
Make sure you have the Enterprise Manager checkbox checked. It should be on by default.

A good source for creating the composite sensors is the Embedded Java that sets the indexes of the BPEL, as described in the intro of this article. Create a sensor for every index, and base it on the service operation carrying the variable from which the indexes are set.

I would highly recommend creating an Excel sheet to register which sensors are defined on which service/operation and how they are filled. For instance, you could have several services that work with documents. And on all those composites you might have sensors that fetch the document ID. One of your developers would define a sensor called docId, another uses documentId, and yet another would define docNumber, etc. An end user or administrator would need to know all those variants. Wouldn't it be much easier if you could just search on documentId over all those composites? Thus, introduce a convention in your team so that everyone uses the same sensor name for elements that mean the same thing.

Search on CompositeSensor values

On the Soa Infra dashboard in Enterprise Manager - Fusion MiddleWare Control (em) you can quickly search on a Sensor:
Fill in a Sensor Name and a search value and click on the Search Instances button.
These are free-format fields, so it makes sense to have a list of possible sensors that can be distributed among your admins or end users.

In the Search Instance panel of the flow instances tab, you have a more comprehensive search possibility:


This is not available when you click directly on the Flow Instances tab, without performing a search first.
In that case you need to click on Add/Remove Filters on the Flow Instances tab:

In this dialog, check the Flow Instance checkbox:
Having done that, you can add up to 6 sensor search conditions. Click on the magnifier glass to search on a sensor:


Here you can search on a composite on which you know there is a sensor. Then you can select a sensor and an operator to search on. Unfortunately, this is the only place to choose an operator, which means that you need to search for a sensor through a composite revision before being able to choose an operator. It would be nice to be able to just type in a sensor name (or copy and paste it from your Excel sheet), select an operator and type a value to search over composites.

What is nice is that if you select a particular flow instance, you can view its composite sensor values:

This is especially handy in a busy environment where there are several instances of the same composite within a certain timeframe. Then you can quite easily click through the instances and identify whether a particular one is the one you're interested in, instead of needing to open the flow trace, click through to the BPEL flow, select the receive activity and open the XML. In many cases that can be a very tedious job.

Tuesday, 16 July 2019

Weblogic under Kubernetes: the weblogic topology of the future

Already 4 months ago I attended the PaaSForum 2019 in Mallorca. As every year it was great to meet members of the big EMEA Oracle Partner family.

And of course a lot of interesting talks and workshops. This year I was especially interested in announcements around SOA Suite and Project Helidon as a microservice framework, but certainly also Weblogic under Kubernetes.
And actually, to me, the Weblogic Kubernetes Operator was this year's most enthusing subject.

With his WebLogic on Kubernetes talk, Maciej Gruszka, Director Product Management, enlightened us on the future Oracle envisions for WebLogic. He started by stating that 'Weblogic is not dead!'. Well, he got me with that already!

The road ahead is making WebLogic fit to run in Docker and be managed by Kubernetes. It might not be exactly what I had in mind, but it is certainly great news to learn that WebLogic will be around and alive for the future ahead. Oracle strives to make future releases of Weblogic available as Docker images.

Today already, WebLogic is fully supported to run in a Docker container. And according to Maciej, the team is busy with the SOA and OSB teams to get those products fit and available for Docker too. It might even be possible that future releases are going to be delivered as Docker images.

What is the Weblogic Operator?

To run in a Kubernetes-managed cluster, Kubernetes needs to be able to perform lifecycle operations on a Weblogic Managed Server. For that, the Weblogic Operator for Kubernetes was created and introduced. A Kubernetes Operator is a sort of adapter on top of a non-Kubernetes system that translates Kubernetes lifecycle commands to operations within the specific application.

The Weblogic Operator uses the Kubernetes API to implement operations like:
  • Provisioning
  • Life cycle management
  • Updates
  • Scaling
  • Security
Besides the Weblogic Operator, Oracle also provides an Exporter for Prometheus and Elastic Stack, for monitoring and logging. Since the managed servers are within a container, you'll need to export events and logfiles to have them accessible and introspectible, even when the container is down or recreated from an updated image.


Topologies

There are actually two topologies to choose from:
  • Domain within the Docker Image
  • Domain on a Persistent Volume
With the first one, the container is actually stateless. All it needs to know is within the container image. The Admin Console can be used for diagnostic and monitoring purposes, but not for updating the domain, because a newly spun-up container will read the domain from the image again.

With the persistent-volume topology the domain is stored outside the container. Changes are persisted. This topology is more in line with an on-premises installation of Weblogic. However, High Availability and Disaster Recovery are limited, because the Persistent Volume needs to be shared and the domain configuration needs to be synced across datacenters. With 'In Image' domains, things get simpler, because the domain is transported within the container. The downside is that changes in the domain require creating a new image through the CI/CD pipeline.

Most customers seem to choose the 'Domain in Image' topology. In practice, domains don't change that much.

You can adapt specific artifacts like data source connections, URLs and usernames/passwords using Configuration Overrides.

Workshop

At the PaaSForum we got the chance to play around with Kubernetes and Weblogic. The workshop is described here: https://github.com/nagypeter/weblogic-operator-tutorial. You should fork this to a repository in your own GitHub account, because it contains the files and scripts to create an image, and the tutorial works you through configuring Oracle Container Pipelines (Wercker), which needs a GitHub repo.

There is a Domain In Image variant and a persistent volume variant of the tutorial.

Steps to follow for the Domain In Image variant

  1. Setup Oracle Kubernetes Engine instance on Oracle Cloud Infrastructure. You'll need a trial account on cloud.oracle.com. It will then guide you through the setup of a Kubernetes cluster on OCI.
  2. Build WebLogic container image using Oracle Container Pipelines (Wercker). The second time I did the workshop I decided to change all the labels, namespaces and the domain name. Everywhere there was a reference to 'sample', I entered 'makker'. In this step the image is created from your fork of the GitHub repo. If you change the name of the domain, there are two files to edit:
    1. The Dockerfile.create is called at the initial creation of the image. If there is a base image, the Dockerfile.update is called, to update the image. The Dockerfile.create creates an image with a complete domain, including the application. But the Dockerfile.update only updates the application. So you need to update the Dockerfile.create to change the domain name in the DOMAIN_NAME environment variable in the top of the file. 
    2. The Dockerfile.create copies the scripts folder into the image. That folder contains a wlst script, called model.py. At the top, a variable domain_name is declared with the same domain name assigned to it.
    If you do not change it, and later on want to rename the domain so it starts with a different name under Kubernetes, then you need to remove the image from the image repository and run the Oracle Container Pipelines pipeline again.
  3.  Install WebLogic Operator: installs the Weblogic Operator.
  4.  Install and configure Traefik: this installs a Traefik loadbalancer on your environment. It will loadbalance over your Weblogic managed servers.
  5. Deploy WebLogic domain: this step lets you prepare your Kubernetes cluster to run the Weblogic domain. Reuse the same domain name as explained in step 2.
  6. Scaling WebLogic cluster: This one I found particularly cool. In this step you update the domain resource yaml file to change the number of managed servers in the domain (see the fragment just after this list). After that, automagically a new Kubernetes pod is spawned that starts a new Managed Server. By the way, the domain will have a dynamic cluster with predefined Managed Servers based on Server Templates.
  7. Override domain configuration:  this will show you how to perform domain configuration overrides to update the datasource.
  8. Update the application: The whole point of this exercise is to show you how to set up a CI/CD chain so that when you update your application, the image is updated and the domain can be restarted through Kubernetes, with the new image.
  9. Assign the Weblogic Pods to specific nodes or licensed nodes. The latter is important because Weblogic is licensed, so you can't just run it on any number of nodes.
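For the scaling step mentioned above, the relevant fragment of the domain resource file looks roughly like this in the operator 2.x samples (the cluster name is an example; check your own domain.yaml):

# Fragment of domain.yaml: raise replicas to scale the dynamic cluster.
spec:
  ...
  clusters:
  - clusterName: cluster-1
    replicas: 3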
The tutorial is quite elaborate and descriptive. If you stick to the naming, it will guide you through the process, ending up with a running environment. The fun is in being headstrong and choosing your own naming. That's how I tripped at step 5, Deploy WebLogic domain. I could have stuck with the given name, but I didn't feel like it; it was more fun to understand where it was used. Now you can take advantage of that.

Conclusion

I refrained from discussing why you would want to run Weblogic under Docker. I have thoughts about it, and have had discussions about it. However, it made me enthusiastic that this way Weblogic can be taken with us into the containerized future.

For me the next things to explore are:
  • Create a database on another OCI image, and create a new domain with a sample application that actually uses that database. It would be fun to create an actual application on it.
  • Try the same with a persistent volume. A few months ago I was busy creating Java classes to start Kafka. The goal was to create Weblogic startup classes to have Kafka started at startup of a Weblogic server. Now, it may not seem logical to you, but wouldn't it be great to combine the two and have Kafka embedded in a Weblogic cluster on a Kubernetes cluster? Well, at least it seems fun to me. Since Kafka needs to log its messages in a persistent log, we need to do this with a Persistent Volume.
  • Check out other topologies and related technologies, like accessing the logs. I really would like to be able to inspect the Weblogic log files within the container.
Have fun with the tutorial.