Wednesday, 11 March 2020

SOA Composite Sensors and the ORA-01461: can bind a LONG value only for insert into a LONG column exception

Last year I wrote about SOA Composite Sensors and how they can be a good alternative to the BPEL indexes we had in 10g. This week I was confronted with the "ORA-01461: can bind a LONG value only for insert into a LONG column" exception in one of our composites. It occurred in a process that is triggered to do some message archiving.

A bit about BPEL Sensors

Funny thing is that this archiving process is triggered by a BPEL sensor. To recap: you can create a BPEL sensor by clicking the monitor icon in your BPEL process:
It's the heart-beat-monitor icon in the button area top right of the panel. The editor then shows the BPEL process in a layered mode: you can't edit the process any more, but you can add, remove and edit sensors. Sensors are indicated by little antenna icons next to an activity. You can put them on any kind of activity, even Empty activities, which adds an extra potential reason to use an Empty activity.

If you click an antenna icon you can define a series of sensors, and editing one brings up the following dialog:

It allows you to add variables, and optionally expressions on elements within those variables, to a sensor. You can also add one or more sensor actions, and set the moment (Evaluation Time) at which they are triggered.

A Sensor Action can be set to one of several types, such as Database, JMS Queue, JMS Topic, JMS Adapter or Custom.

In 11g we used the JMS Adapter, but apparently that didn't work in 12c the way it did in 11g. So we changed it to JMS Queues. As with composite sensors, you get two files in the BPEL folder, next to the BPEL process: YourBPELProcess_sensor.xml containing the sensor definitions and YourBPELProcess_sensorAction.xml containing the sensor action definitions.

When the sensor is activated, a JMS message is produced on the queue, with an XML body following a predefined XSD. In that XML you will find info about the triggering BPEL instance, like its name and instance ID, and a list of variable data. Each of the variables defined in the sensor is in the list, in the order in which they are defined in the sensor.
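
A simplified sketch of such a message (the namespace and the payload/variableData/data structure match the expressions used later in this post; the header element names are illustrative):
<actionData xmlns="http://xmlns.oracle.com/bpel/sensor">
  <header>
    <!-- illustrative: info on the sensor and the triggering BPEL instance -->
  </header>
  <payload>
    <variableData>
      <data><!-- value of the first sensor variable --></data>
    </variableData>
    <variableData>
      <data><!-- value of the second sensor variable --></data>
    </variableData>
  </payload>
</actionData>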

By the way, BPEL sensors are part of the product since before 10g...

The actual error case

In our case this message archiving process was triggered from another BPEL process using a sensor. The archiving process listened to the queue defined in the Sensor Action, picking up messages from particular sensors using a message selector based on the sensor name.

On the JMS Interface (Exposed Service) of the message archiving process, I defined a set of Composite Sensors, to be able to search for instances. This helps in finding the archiving instance that belongs to the triggering process: since sensors work asynchronously, the two are not tied together in a Flow Trace.

In some cases, we got the following exception in the Diagnostic log:
[2020-03-11T09:19:50.855+01:00] [DWN_SOA_01] [WARNING] [] [oracle.soa.adapter.jms.inbound] [tid: DaemonWorkThread: '639' of WorkManager: 'default_Adapters'] [userId: myadmin] [ecid: c8e2b75e-7aed-4305-84c5-9ef5cf928c7b-0bb833b1,0:11:9] [APP: soa-infra] [partition-name: DOMAIN] [tenant-name: GLOBAL] [oracle.soa.tracking.FlowId: 463993] [oracle.soa.tracking.InstanceId: 762213] [oracle.soa.tracking.SCAEntityId: 381353] [oracle.soa.tracking.FaultId: 400440] [FlowId: 0000N38eGGo5aaC5rFK6yY1UNay100012j]  [composite_name: MyComposite] [composite_version: 1.0] [endpoint_name: DWN_MyCompositeInterface_WS] JmsConsumer_runInbound: [destination = jms/DWN_OUTGOING, subscriber = null] : weblogic.transaction.RollbackException: Unexpected exception in beforeCompletion: sync=org.eclipse.persistence.transaction.JTASynchronizationListener@2d7a86a9[[

Internal Exception: java.sql.BatchUpdateException: ORA-01461: can bind a LONG value only for insert into a LONG column

Error Code: 1461 javax.resource.ResourceException: weblogic.transaction.RollbackException: Unexpected exception in beforeCompletion: sync=org.eclipse.persistence.transaction.JTASynchronizationListener@2d7a86a9

Internal Exception: java.sql.BatchUpdateException: ORA-01461: can bind a LONG value only for insert into a LONG column

Error Code: 1461
        at oracle.tip.adapter.jms.inbound.JmsConsumer.afterDelivery(JmsConsumer.java:321)
        at oracle.tip.adapter.jms.inbound.JmsConsumer.runInbound(JmsConsumer.java:982)
        at oracle.tip.adapter.jms.inbound.JmsConsumer.run(JmsConsumer.java:893)
        at oracle.integration.platform.blocks.executor.WorkManagerExecutor$1.run(WorkManagerExecutor.java:184)
        at weblogic.work.j2ee.J2EEWorkManager$WorkWithListener.run(J2EEWorkManager.java:209)
        at weblogic.invocation.ComponentInvocationContextManager._runAs(ComponentInvocationContextManager.java:352)
        at weblogic.invocation.ComponentInvocationContextManager.runAs(ComponentInvocationContextManager.java:337)
        at weblogic.work.LivePartitionUtility.doRunWorkUnderContext(LivePartitionUtility.java:57)
        at weblogic.work.PartitionUtility.runWorkUnderContext(PartitionUtility.java:41)
        at weblogic.work.SelfTuningWorkManagerImpl.runWorkUnderContext(SelfTuningWorkManagerImpl.java:644)
        at weblogic.work.SelfTuningWorkManagerImpl.runWorkUnderContext(SelfTuningWorkManagerImpl.java:622)
        at weblogic.work.DaemonWorkThread.run(DaemonWorkThread.java:39)
Caused by: javax.resource.ResourceException: weblogic.transaction.RollbackException: Unexpected exception in beforeCompletion: sync=org.eclipse.persistence.transaction.JTASynchronizationListener@2d7a86a9

Internal Exception: java.sql.BatchUpdateException: ORA-01461: can bind a LONG value only for insert into a LONG column

Error Code: 1461
        at oracle.tip.adapter.fw.jca.messageinflow.MessageEndpointImpl.afterDelivery(MessageEndpointImpl.java:379)
        at oracle.tip.adapter.jms.inbound.JmsConsumer.afterDelivery(JmsConsumer.java:306)
        ... 11 more
Caused by: weblogic.transaction.RollbackException: Unexpected exception in beforeCompletion: sync=org.eclipse.persistence.transaction.JTASynchronizationListener@2d7a86a9

Internal Exception: java.sql.BatchUpdateException: ORA-01461: can bind a LONG value only for insert into a LONG column
...

Of course the process instance failed. It took me some time to figure out what went wrong. It was suggested that it was due to the composite sensors, but I waved that away initially, since I had introduced them earlier (although a colleague had removed them for no apparent reason and I re-introduced them). I couldn't see that these were the problem, because the process ran through the unit tests, and in most cases they weren't a problem.

But the error indicates a triggered interface: [endpoint_name: DWN_MyCompositeInterface_WS], and in this case a destination: [destination = jms/DWN_OUTGOING, subscriber = null].

Since the process is triggered from the queue with messages from BPEL sensors, these Composite Sensors were defined on variableData elements from the BPEL sensor XML. And as said above, the variables appear in the XML in the order in which they are defined in the BPEL sensor.

One of the Composite Sensors was defined as:
<sensor sensorName="UitgaandBerichtNummer" kind="service" target="undefined" filter="" xmlns:imp1="http://xmlns.oracle.com/bpel/sensor">
    <serviceConfig service="DWN_MessageArchivingBeginExchange_WS" expression="$in.actionData/imp1:actionData/imp1:payload/imp1:variableData/imp1:data" operation="ArchiverenBeginUitwisseling" outputDataType="string" outputNamespace="http://www.w3.org/2001/XMLSchema"/>
</sensor>

With the expression: $in.actionData/imp1:actionData/imp1:payload/imp1:variableData/imp1:data.
Because the payload is a list, there can be more than one variableData occurrence. And without an index, the expression selects all of them. If, for instance, one of them contains the actual message to archive, and that message is quite large, then the resulting sensor value becomes too large for the column in which composite sensor values are stored; the JDBC driver then binds it as a LONG, and that results in the error above.

All I had to do was select the proper occurrence of the message id, as shown in the Sensor dialog above. The expression had to be: $in.actionData/imp1:actionData/imp1:payload/imp1:variableData[2]/imp1:data

Conclusion

This solved the error. I wanted to log this for future reference, but also to show how to track down this seemingly obscure error.

Friday, 28 February 2020

Vagrant box with Oracle Linux 7.7 basebox - additional fiddlings

Last year, on the way home from the UK OUG TechFest 19, I wrote about creating a Vagrant box from the Oracle-provided basebox in this article.

Lately I wanted to use it but I stumbled upon some nasty pitfalls.

Failed to load SELinux policy

For starters, as described in the article, I added the 'Server with GUI' package and packaged the box in a new base box. This is handy, because the creation of the GUI box is quite time-consuming and requires an intermediate restart. But if I use the new Server-with-GUI basebox, the new VM fails to start with the message: "Systemd: Failed to load SELinux policy. Freezing.".

I could solve this using My Oracle Support document 2314747.1. I still have to add it to my provision scripts, but for now: before packaging the box, you need to edit the file /etc/selinux/config:
# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
# enforcing - SELinux security policy is enforced.
# permissive - SELinux prints warnings instead of enforcing.
# disabled - No SELinux policy is loaded.


SELINUX=permissive

# SELINUXTYPE= can take one of three values:
# targeted - Targeted processes are protected,
# minimum - Modification of targeted policy. Only selected processes are protected.
# mls - Multi Level Security protection.

SELINUXTYPE=targeted

The SELINUX option turned out to be set to enforcing; setting it to permissive, as shown above, solves the startup failure.
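
Pending a proper provision script, a sed one-liner like this should do the trick before packaging (a sketch, assuming the file still has the default enforcing value):
sudo sed -i 's/^SELINUX=enforcing/SELINUX=permissive/' /etc/selinux/config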


Vagrant insecure keypair

When you first start your VM, you'll probably see repeated messages like "default: Warning: Authentication failure. Retrying...".
How this works is described in the Vagrant documentation about creating a base box, under the chapter '"vagrant" User'. I think when I started with Vagrant, I did not fully grasp this part. Maybe the documentation changed. Basically, you need to download the Vagrant insecure keypair from GitHub. Then, in the VM, you'll need to update the file authorized_keys in the .ssh folder of the vagrant user:
[vagrant@localhost ~]$ cd .ssh/
[vagrant@localhost .ssh]$ ls
authorized_keys
[vagrant@localhost .ssh]$ pwd
/home/vagrant/.ssh
[vagrant@localhost .ssh]$

The contents look like:
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDGn8m1kC2mHfPx0dno+HNNYfhgXUZHn8Rt7orIm2Hlc7g4JkvCN6bO7mrYhUbdN2qjy2TziPdlndTAI0E1HK2GbwRM8+N02CNzBg5zvJosMQhweU7EXsDZjYRNJ/SAgVlU5EqIPzmznFjp08uzvBAe2u+L4dZ9kIZ23z/GVWupNpTJmem6LsqS3xg/h0qKf2LFv55SqtLVLlC1sAxL4fvBi3fFIsR9+NLf0fxb+tV/xrprn3yYXT1GyRPVtYAbiOzE3gUOWLKQZVkCXN8R69JeY8P5YgPGx9gSLCiNyLLmqCdF4oLIBMg82lZ0a3/BXG7AoAHVxh7caOoWJrFAjVK9 vagrant

This is now a generated public key matching a newly generated private key, which is stored in this file in my .vagrant folder:
As shown, it is the private_key file in the .vagrant\machines\darwin\virtualbox\ folder.
If you update the authorized_keys file of the vagrant user with the public key of the Vagrant insecure keypair, then you need to remove the private_key file. Vagrant will notice the insecure key and replace the insecure file with a newly generated private one. By the way, I noticed that sometimes Vagrant won't remove the insecure public key from authorized_keys. That means that someone could log in to your box using the insecure keypair. You might not want that, so remove that public key from the file.
For convenience, the insecure public key is:
ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA6NF8iallvQVp22WDkTkyrtvp9eWW6A8YVr+kz4TjGYe7gHzIw+niNltGEFHzD8+v1I2YJ6oXevct1YeS0o9HZyN1Q9qgCgzUFtdOKLv6IedplqoPkcmF0aYet2PkEDo3MlTBckFXPITAMzF8dJSIFo9D8HfdOV0IAdx4O7PtixWKn5y2hMNG0zQPyUecp4pzC6kivAIhyfHilFR61RGL+GPXQ2MWZWFYbAGjyiYJnAmCP3NOTd0jMZEnDkbUvxhMmBYSdETk1rRgm+R4LOzFUGaHqHDLKLX+FIPKcF96hrucXzcWyLbIbEgE98OHlnVYCzRdK8jlqm8tehUc9c9WhQ== vagrant

It's the file keys/vagrant.pub in the Vagrant repository on GitHub.
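
A sketch of how you could update the authorized_keys file from within the VM (assuming curl is available in the box and the key is still at this location):
curl -fsSL https://raw.githubusercontent.com/hashicorp/vagrant/master/keys/vagrant.pub -o /home/vagrant/.ssh/authorized_keys
chmod 600 /home/vagrant/.ssh/authorized_keys
chown vagrant:vagrant /home/vagrant/.ssh/authorized_keys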

Oracle user

For my installations I always use an Oracle user. And it is quite safe to say I always use the password 'welcome1', for demo and training boxes that is (fieeewww).

But I found out that I could not log on to that user using ssh with a simple password.
That is because in the Oracle Vagrant basebox the PasswordAuthentication option is set to no. To solve it, edit the file /etc/ssh/sshd_config and find the option PasswordAuthentication:
...
# To disable tunneled clear text passwords, change to no here!
PasswordAuthentication yes
#PermitEmptyPasswords no
#PasswordAuthentication no
...

Comment out the line with value no and uncomment the one with yes.

You can add this to your script to enable it:
echo 'Allow PasswordAuthentication'
sudo cp /etc/ssh/sshd_config /etc/ssh/sshd_config.org
sudo sed -i 's/PasswordAuthentication no/#PasswordAuthentication no/g' /etc/ssh/sshd_config
sudo sed -i 's/#PasswordAuthentication yes/PasswordAuthentication yes/g' /etc/ssh/sshd_config
sudo service sshd restart

You need to restart sshd, as shown in the last line, for this to take effect.

Conclusion

I'll need to add the changes above to my Vagrant scripts, at least to the one creating the box based on the one from Oracle. And now I need to look into the file systems created in the Oracle box, to be able to extend them with mine... But that might be input for another story.

Thursday, 27 February 2020

My first node program: get all the named complexTypes from an xsd file

Lately I'm working on some scripting to scan SOA projects for several queries, somewhat in line with my script to scan JCA files. I found that ANT is very helpful in selecting the particular files to process. In another script I also found it very useful to use JavaScript within ANT.

In my JCA scan example, and my other scripts, at some point I need to read and interpret the found XML document, to get the information from it into ANT and save it to a file. For that I used XSLT to transform the particular document, to be able to address the particular elements as properties in ANT.

In my latest fiddlings I need to gather all the references to elements from a large base XSD in XSDs, WSDLs, BPELs, XSLTs and composite.xml files. I quickly found that transforming a WSDL or XSD using XSLT is hard, if not nearly impossible. For instance, I needed to get all the type attributes referencing an element or type within the target namespace of the referenced base XSD. And although mostly the same namespace prefix is used, I can't rely on that. So in the end I used a few JavaScript functions to parse the document as a string.

Now, at this point I wanted to get all the named xsd:complexTypes, and I thought it would be fun to try that in a Node.js script. You might be surprised, but I haven't done this before, although I did some JavaScript once in a while. I might have done some demo Node.js try-outs, but those don't count.

So I came up with this script:
const fs = require('fs');
var myArgs = process.argv.slice(2);
const xsdFile=myArgs[0];
const complexTypeFile = myArgs[1];
//
const complexTypeStartTag="<xsd:complexType"
// Log arguments
console.log('myArgs: ', myArgs);
console.log('xsd: ', xsdFile);
//
// Extract an attribute value from an element
function getAttributeValue(element, attributeName){
   var attribute =""
   var attributePos=element.indexOf(attributeName);
   if (attributePos>-1){
     attribute = element.substring(attributePos);
     attributePos=attribute.indexOf("=")+1;
     attribute=attribute.substring(attributePos).trim();
     var enclosingChar=attribute.substring(0,1);
     attribute=attribute.substring(1,attribute.indexOf(enclosingChar,1)); 
   }
   return attribute;
}
// Create the complexType output file with a header row; write it synchronously,
// so the header is in place before the appends in the readFile callback below.
fs.writeFileSync(complexTypeFile,'ComplexType\n');
// Read and process the xsdFile
fs.readFile(xsdFile, 'utf8', function(err, contents){
  //console.log(contents);
  var posStartComplexType = contents.indexOf(complexTypeStartTag);
  while  (posStartComplexType > -1){
   // Extract the complexType start tag declaration
   var posEndComplexType= contents.indexOf(">", posStartComplexType);
   console.log("Pos: ".concat(posStartComplexType, "-", posEndComplexType));
   var complexType= contents.substring(posStartComplexType, posEndComplexType+1);
   // Log the complexType
   console.log("Complex type: [".concat(complexType,"]"));
   var typeName = getAttributeValue(complexType, "name")
   if (typeName==""){
       typeName="embedded";
   }
   console.log(typeName);
   fs.appendFileSync(complexTypeFile, typeName.concat("\n"));
   //Move on to find next possible complexType
   contents=contents.substring(posEndComplexType+1);
   posStartComplexType = contents.indexOf(complexTypeStartTag);
  }
});
console.log('Done with '+xsdFile);

The script parses the arguments: it expects a reference to the XSD file to parse as the first argument, and the filename to write all the names to as the second.
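
You can run it like this (a sketch; the script and file names are just examples):
node getComplexTypes.js MyBase.xsd complexTypes.csv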

The function getAttributeValue() finds an attribute in the provided element, based on the attributeName, and returns its value if found. Otherwise it returns an empty string.

The main script first writes a header row to the output CSV file. Then it reads the XSD file asynchronously (which is why the 'Done' message shows up before the console logs from the processing of the file), and finds every occurrence of the xsd:complexType start tag in the contents. For each occurrence it finds the end of the start tag declaration, and within it the name attribute. This name is then appended (synchronously) to the CSV file.

I found how to read a file here, and how to append to a file here on Stack Overflow.

Tuesday, 25 February 2020

Get XML Document from SOA Infra table

Today I'm investigating a problem in an interaction between Siebel and SOA Suite. I needed to find a set of correlated messages, where BPEL expects only one message but gets two from Siebel.

I have a query like:
SELECT 
  dmr.message_guid,
  dmr.document_id,
  dmr.part_name,
  dmr.document_type,
  dmr.dlv_partition_date,
  xdc.document_type,
  xdc.document,
  GET_XML_DOCUMENT(xdc.document,to_clob(' ')) doc_PAYLOAD,
  xdc.document_binary_format,
  dmg.conv_id ,
  dmg.conv_type,
  dmg.properties msg_properties
FROM
  document_dlv_msg_ref dmr
  join xml_document xdc on xdc.document_id = dmr.document_id
  join dlv_message dmg on dmg.message_guid = dmr.message_guid
  where dmg.cikey  in (select cikey from cube_instance where flow_id = 4537505 or flow_id = 4537504);

This gets all the messages related to two flows that run in parallel, based on the same message exchange.
The thing is that, of course, you want to see the contents of the message in xml_document. The document column is a BLOB that contains the document as parsed by the Oracle XML classes. You need those Oracle classes to serialize it back to a String representation of the document. I found this nice solution from Michael Heyn.

In 12c this did not work right away. First I had to rename the class to SOAXMLDocument, because I got a Java compilation error complaining that XMLDocument was already in use. I think it conflicts with the imported oracle.xml.parser.v2.XMLDocument class. Renaming it was the simple solution.

set define off;
CREATE OR REPLACE AND COMPILE JAVA SOURCE NAMED "SOAXMLDocument" as
// Title:   Oracle Java Class to Decode XML_DOCUMENT.DOCUMENT Content
  // Author:  Michael Heyn, Martien van den Akker
  // Created: 2015 05 08
  // Twitter: @TheHeynComplex
  // History:
  // 2020-02-25: Added GZIP Unzip and renamed class to SOAXMLDocument
  // Import all required classes
  import oracle.xml.parser.v2.XMLDOMImplementation;
  import java.io.ByteArrayOutputStream;
  import java.io.IOException;
  import oracle.xml.binxml.BinXMLStream;
  import oracle.xml.binxml.BinXMLDecoder;
  import oracle.xml.binxml.BinXMLException;
  import oracle.xml.binxml.BinXMLProcessor;
  import oracle.xml.scalable.InfosetReader;
  import oracle.xml.parser.v2.XMLDocument;
  import oracle.xml.binxml.BinXMLProcessorFactory;
  import java.util.zip.GZIPInputStream;

  // Import required sql classes
  import java.sql.Blob;
  import java.sql.Clob;
  import java.sql.SQLException;

  public class SOAXMLDocument{

      public static Clob GetDocument(Blob docBlob, Clob tempClob){
      XMLDOMImplementation xmlDom = new XMLDOMImplementation();
      BinXMLProcessor xmlProc = BinXMLProcessorFactory.createProcessor();
      ByteArrayOutputStream byteStream;
      String xml;
      try {
              // Create a GZIP InputStream from the Blob Object
              GZIPInputStream gzipInputStream = new GZIPInputStream(docBlob.getBinaryStream());
              // Create the Binary XML Stream from the GZIP InputStream
              BinXMLStream xmlStream = xmlProc.createBinXMLStream(gzipInputStream);
              // Decode the Binary XML Stream 
              BinXMLDecoder xmlDecode = xmlStream.getDecoder();
              InfosetReader xmlReader = xmlDecode.getReader();
              XMLDocument xmlDoc = (XMLDocument) xmlDom.createDocument(xmlReader);

              // Instantiate a Byte Stream Object
              byteStream = new ByteArrayOutputStream();

              // Load the Byte Stream Object
              xmlDoc.print(byteStream);

              // Get the string value of the Byte Stream Object as UTF8
              xml = byteStream.toString("UTF8");

              // Empty the temporary SQL Clob Object
              tempClob.truncate(0);

              // Load the temporary SQL Clob Object with the xml String
              tempClob.setString(1,xml);
              return tempClob;
      } 
      catch (BinXMLException ex) {
        return null;
      }
      catch (IOException e) {
        return null;
      }
      catch (SQLException se) {
        return null;
      }
      catch (Exception e){
        return null;
      }
    }
  }
/

Also, I needed to execute set define off before it. Another thing is that in SOA Suite 12c the documents are apparently stored as GZIP objects. Therefore I had to wrap the binary stream from the docBlob parameter in a GZIPInputStream, and feed that to xmlProc.createBinXMLStream().

Then create the following Function wrapper:
CREATE OR REPLACE FUNCTION GET_XML_DOCUMENT(p_blob BLOB
                                           ,p_clob CLOB) 
                    RETURN CLOB AS LANGUAGE JAVA
                      NAME 'SOAXMLDocument.GetDocument(java.sql.Blob, java.sql.Clob) return java.sql.Clob';

You can use it in a query as:
select * from (
  select xdc2.*, GET_XML_DOCUMENT(xdc2.document,to_clob(' ')) doc_PAYLOAD
  from
    (select * 
    from xml_document xdc
    where xdc.doc_partition_date > to_date('25-02-20 09:10:00', 'DD-MM-YY HH24:MI:SS') and xdc.doc_partition_date < to_date('25-02-20 09:20:00', 'DD-MM-YY HH24:MI:SS') 
    ) xdc2
)  xdc3
where xdc3.doc_payload like '%16720284%' or xdc3.doc_payload like  '%9F630D36DD24214EE053082D260AB792%'

In this example I scan over documents within a certain period, filtering on the contents of the BLOB. Notice that the database needs to decode the BLOB of every row to be able to filter on it. You should not do this over the complete table.

Friday, 21 February 2020

My Weblogic on Kubernetes Cheatsheet, part 3.

In two previous parts I already wrote about my Kubernetes experiences and the important commands I learned:
My way of learning and working is to put those commands in little scriptlets, one more useful than the other, but all with the goal of keeping them together.

It is time to write part 3, in which I will present some maintenance functions, mainly to connect with your pods.
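
All scriptlets below source a shared oke_env.sh that sets the environment variables they use. A minimal sketch of that file, purely as an illustration (every value is an assumption, to be replaced with your own):
#!/bin/bash
# Example oke_env.sh; all values are placeholders.
export K8S_NS=weblogic-operator-ns          # namespace of the Weblogic Operator
export K8S_SA=weblogic-operator-sa          # service account of the operator
export WL_OPERATOR_NAME=weblogic-operator   # Helm release name of the operator
export HELM_CHARTS_HOME=~/weblogic-kubernetes-operator
export WLS_DMN_NS=medrec-domain-ns          # namespace of the Weblogic domain
export WLS_DMN_UID=medrec-domain            # domainUID
export ADM_SVR=AdminServer                  # Weblogic server names
export MR_SVR1=medrec-server1
export ADM_POD=medrec-domain-adminserver    # pod names
export MR1_POD=medrec-domain-medrec-server1
export DMN_HOME=/u01/oracle/user_projects/domains/medrec-domain
export LCL_LOGS_HOME=~/logs                 # local download folder for logs
export MR_DB_CRED=mrdbsecret                # name of the MedRec DB secret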

Get node and pod info

getdmnpod-status.sh

In part 2 I ended with the script getdmnpods.sh. You can parse the output using awk to get just the status of the pods:

#!/bin/bash
SCRIPTPATH=$(dirname $0)
#
. $SCRIPTPATH/oke_env.sh
echo Get K8s pod statuses for $WLS_DMN_NS
kubectl -n $WLS_DMN_NS get pods -o wide| awk '{print $1 " - "  $3}'

getpods.sh

With getdmnpods.sh you can get the status of the pods running your domain. There's also a weblogic operator pod. To show this, use:
#!/bin/bash
SCRIPTPATH=$(dirname $0)
#
. $SCRIPTPATH/oke_env.sh
echo Get K8s pods for $K8S_NS
kubectl get po -n $K8S_NS

getstmpods.sh

The Kubernetes cluster infrastructure itself also consists of a set of pods. Show these using:
#!/bin/bash
SCRIPTPATH=$(dirname $0)
#
. $SCRIPTPATH/oke_env.sh
echo Get K8s pods for kube-system
kubectl -n kube-system get pods


getnodes.sh


On OCI your cluster is running on a set of nodes; these OCI instances are actually running your system. You can show them, with their IPs and Kubernetes versions, using:
#!/bin/bash
SCRIPTPATH=$(dirname $0)
#
. $SCRIPTPATH/oke_env.sh
echo Get K8s nodes
kubectl get node

getdmnsitlogs.sh


Of course you want to see some logs, especially when something went wrong. Perhaps you want to see some specific log entries. For instance, this script shows the logs of the admin pod, grepping for entries related to the situational config:
#!/bin/bash
SCRIPTPATH=$(dirname $0)
#
. $SCRIPTPATH/oke_env.sh
echo Get situational config logs for $WLS_DMN_NS server $ADM_POD
kubectl -n $WLS_DMN_NS logs $ADM_POD | grep -i situational

Weblogic Operator

When I was busy getting the MedRec sample application deployed to Kubernetes, at one point I got stuck because, as I later learned, my Weblogic Kubernetes Operator version was behind.

list_wlop.sh 

I learned I could get Weblogic Operator information as follows:

#!/bin/bash
SCRIPTPATH=$(dirname $0)
#
. $SCRIPTPATH/oke_env.sh
echo List Weblogic Operator $WL_OPERATOR_NAME
cd $HELM_CHARTS_HOME
helm list $WL_OPERATOR_NAME
cd $SCRIPTPATH

delete_weblogic_operator.sh 

When you find that the operator needs an update, you can remove it with this script:

#!/bin/bash
SCRIPTPATH=$(dirname $0)
#
. $SCRIPTPATH/oke_env.sh
echo Delete Weblogic Operator $WL_OPERATOR_NAME
cd $HELM_CHARTS_HOME
helm del --purge $WL_OPERATOR_NAME 
cd $SCRIPTPATH

install_weblogic_operator.sh


Then, of course, you want to install it again, with the proper version. This can be done using:
#!/bin/bash
SCRIPTPATH=$(dirname $0)
#
. $SCRIPTPATH/oke_env.sh
echo Install Weblogic Operator $WL_OPERATOR_NAME
cd $HELM_CHARTS_HOME
helm install kubernetes/charts/weblogic-operator \
  --name $WL_OPERATOR_NAME \
  --namespace $K8S_NS \
  --set image=oracle/weblogic-kubernetes-operator:2.3.0 \
  --set serviceAccount=$K8S_SA \
  --set "domainNamespaces={}"
cd $SCRIPTPATH

Take note of the image named in this script: make sure that it matches the image with the latest-greatest operator version. In this script I apparently still use 2.3.0, but as of November 15th, 2019, version 2.4.0 has been released.

upgrade_weblogic_operator.sh

Besides an install and delete chart, there is also an operator upgrade Helm chart:
#!/bin/bash
SCRIPTPATH=$(dirname $0)
#
. $SCRIPTPATH/oke_env.sh
echo Upgrade Weblogic Operator $WL_OPERATOR_NAME with domainNamespace $WLS_DMN_NS
cd $HELM_CHARTS_HOME
helm upgrade \
  --reuse-values \
  --set "domainNamespaces={$WLS_DMN_NS}" \
  --wait \
  $WL_OPERATOR_NAME \
  kubernetes/charts/weblogic-operator
cd $SCRIPTPATH

Connect to the pods

The containers in the pods are running Linux (I know that is quite a blunt statement), so you might want to be able to connect to them. In the case of Weblogic, you might want to run wlst.sh to navigate the MBean tree, to investigate certain settings and find out why they won't work at runtime.

admbash.sh and mr1bash.sh

To get to a shell in the AdminServer container, you can run the script admbash.sh:
#!/bin/bash
SCRIPTPATH=$(dirname $0)
#
. $SCRIPTPATH/oke_env.sh

echo Start bash in $WLS_DMN_NS - $ADM_POD
kubectl exec -n $WLS_DMN_NS -it $ADM_POD /bin/bash

And for one of the managed servers a variant of mr1bash.sh:
#!/bin/bash
SCRIPTPATH=$(dirname $0)
#
. $SCRIPTPATH/oke_env.sh
echo Get K8s pods for $WLS_DMN_NS
kubectl -n $WLS_DMN_NS get pods -o wide
kubectl exec -n medrec-domain-ns -it medrec-domain-medrec-server1 /bin/bash

On the command line you can then run wlst.sh and connect to your AdminServer.
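
For example (a sketch: the WLST path is the usual location in the Weblogic image, and the URL and credentials are assumptions that should match your own domain setup):
/u01/oracle/oracle_common/common/bin/wlst.sh
connect('weblogic','welcome1','t3://medrec-domain-adminserver:7001')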

dwnldAdmLogs.sh and dwnldMr1Logs.sh


The previous scripts can help to navigate through your container and find the contents. However, you'll find that the containers lack certain basic commands, like vi. The cat command does exist, but it is not very convenient for investigating large log files. So, very soon I felt the desire to download the log files, to investigate them with a proper editor. You can do it for the admin server using dwnldAdmLogs.sh:
#!/bin/bash
SCRIPTPATH=$(dirname $0)
#
. $SCRIPTPATH/oke_env.sh
#
LOG_FILE=$ADM_SVR.log
OUT_FILE=$ADM_SVR.out
#
echo From $WLS_DMN_NS/$ADM_POD download $DMN_HOME/servers/$ADM_SVR/logs/$LOG_FILE to $LCL_LOGS_HOME/$LOG_FILE
kubectl cp $WLS_DMN_NS/$ADM_POD:$DMN_HOME/servers/$ADM_SVR/logs/$LOG_FILE $LCL_LOGS_HOME/$LOG_FILE
echo From $WLS_DMN_NS/$ADM_POD download $DMN_HOME/servers/$ADM_SVR/logs/$OUT_FILE to $LCL_LOGS_HOME/$OUT_FILE
kubectl cp $WLS_DMN_NS/$ADM_POD:$DMN_HOME/servers/$ADM_SVR/logs/$OUT_FILE $LCL_LOGS_HOME/$OUT_FILE

And for one of the managed servers a variant of dwnldMr1Logs.sh:
#!/bin/bash
SCRIPTPATH=$(dirname $0)
#
. $SCRIPTPATH/oke_env.sh
#
LOG_FILE=$MR_SVR1.log
OUT_FILE=$MR_SVR1.out
#
echo From $WLS_DMN_NS/$MR1_POD download $DMN_HOME/servers/$MR_SVR1/logs/$LOG_FILE to $LCL_LOGS_HOME/$LOG_FILE
kubectl cp $WLS_DMN_NS/$MR1_POD:$DMN_HOME/servers/$MR_SVR1/logs/$LOG_FILE $LCL_LOGS_HOME/$LOG_FILE
echo From $WLS_DMN_NS/$MR1_POD download $DMN_HOME/servers/$MR_SVR1/logs/$OUT_FILE to $LCL_LOGS_HOME/$OUT_FILE
kubectl cp $WLS_DMN_NS/$MR1_POD:$DMN_HOME/servers/$MR_SVR1/logs/$OUT_FILE $LCL_LOGS_HOME/$OUT_FILE

I found these scripts very handy, because I can quickly and repeatedly download the particular log files.

Describe kube resources


Many resources in Kubernetes can be described. In my case I found this very useful when debugging the configuration overrides.

descjdbccm.sh


One subject in the Weblogic Operator tutorial workshop is to do configuration overrides, and one of the steps is to create a configuration map. This is one example of a resource that can be described:
#!/bin/bash
SCRIPTPATH=$(dirname $0)
#
. $SCRIPTPATH/oke_env.sh
echo Describe jdbc configuration map of $WLS_DMN_NS
kubectl describe cm jdbccm -n $WLS_DMN_NS

Useful to see what the latest override values are.

override_weblogic_domain.sh

To perform the weblogic override I use the following script:

#!/bin/bash
SCRIPTPATH=$(dirname $0)
#
. $SCRIPTPATH/oke_env.sh
echo Delete configuration map jdbccm for Domain $WLS_DMN_UID 
kubectl -n $WLS_DMN_NS delete cm jdbccm
#echo Override Weblogic Domain $WLS_DMN_UID using $SCRIPTPATH/medrec-domain/override
kubectl -n $WLS_DMN_NS create cm jdbccm --from-file $SCRIPTPATH/medrec-domain/override
kubectl -n $WLS_DMN_NS label cm jdbccm weblogic.domainUID=$WLS_DMN_UID

Obviously descjdbccm.sh is very useful in combination with this script.

descmrsecr.sh


Another part of the configuration overrides is the storage of the database credentials and connection URL. We store those in a secret that is referenced in the override files. This is smart, because you now only need to create or update the secret and then run the configuration override script. To describe the secret you can use:
#!/bin/bash
SCRIPTPATH=$(dirname $0)
#
. $SCRIPTPATH/oke_env.sh
echo Describe secret $MR_DB_CRED of namespace $WLS_DMN_NS
kubectl describe secret $MR_DB_CRED -n $WLS_DMN_NS

Since it is a secret, you can show the names of the attributes in the secret, but not their values.
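
If you do need to inspect a value, you can base64-decode it yourself. A sketch (username being one of the attributes created in the script below):
kubectl -n $WLS_DMN_NS get secret $MR_DB_CRED -o jsonpath='{.data.username}' | base64 --decode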

create_mrdbsecret.sh


You need to create or update secrets. Apparently you need to delete a secret first to be able to (re)create it. This script does so for two secrets, for two datasources:
#!/bin/bash
SCRIPTPATH=$(dirname $0)
#
. $SCRIPTPATH/oke_env.sh
#
function prop {
    grep "${1}" $SCRIPTPATH/credentials.properties|cut -d'=' -f2
}
#
MR_DB_USER=$(prop 'db.medrec.username')
MR_DB_PWD=$(prop 'db.medrec.password')
MR_DB_URL=$(prop 'db.medrec.url')
#
echo Delete Medrec DB Secret $MR_DB_CRED for $WLS_DMN_NS
kubectl -n $WLS_DMN_NS delete secret $MR_DB_CRED
echo Create Medrec DB Secret $MR_DB_CRED for $MR_DB_USER and URL $MR_DB_URL
kubectl -n $WLS_DMN_NS create secret generic $MR_DB_CRED --from-literal=username=$MR_DB_USER --from-literal=password=$MR_DB_PWD --from-literal=url=$MR_DB_URL
kubectl -n $WLS_DMN_NS label secret $MR_DB_CRED weblogic.domainUID=$WLS_DMN_UID
#
SMPL_DB_CRED=dbsecret
echo Delete Medrec DB Secret $SMPL_DB_CRED for $WLS_DMN_NS
kubectl -n $WLS_DMN_NS delete secret $SMPL_DB_CRED
echo Create DB Secret dbsecret $SMPL_DB_CRED for  $WLS_DMN_NS
kubectl -n $WLS_DMN_NS create secret generic $SMPL_DB_CRED --from-literal=username=scott2 --from-literal=url=jdbc:oracle:thin:@test.db.example.com:1521/ORCLCDB
kubectl -n $WLS_DMN_NS label secret $SMPL_DB_CRED weblogic.domainUID=$WLS_DMN_UID

This script gets the MedRec database credentials from a property file. Obviously you need to store those values in a safe place, so you might figure that having them in a plain property file is not a very safe way. You could of course change the script to prompt for the particular password. And you might want to adapt it to load different property files per target environment.
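
For reference, a sketch of the credentials.properties layout that the prop() function above assumes (all values are placeholders):
db.medrec.username=medrec
db.medrec.password=welcome1
db.medrec.url=jdbc:oracle:thin:@db.example.com:1521/ORCLPDB1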

Can I?

The Kubernetes API has of course an authorization schema. One of the first things in the Weblogic Operator tutorial is that when you create your OKE Cluster you should make sure that you have the authorization to access your Kubernetes cluster using a system admin account.

To check if you're able to call the proper API's for your setup you can use the following scripts:

canideploy.sh

#!/bin/bash
SCRIPTPATH=$(dirname $0)
#
. $SCRIPTPATH/oke_env.sh
echo K8s Can I deploy?
kubectl auth can-i create deploy

canideployassystem.sh


#!/bin/bash
SCRIPTPATH=$(dirname $0)
#
. $SCRIPTPATH/oke_env.sh
echo K8s Can I deploy as system?
kubectl auth can-i create deploy --as system:serviceaccount:kube-system:default

Conclusion

These are my scriptlets up to now. There is still a lot to investigate. For instance, there are examples to create your OKE cluster from scratch with Terraform, which is very promising as an alternative to the online wizards. Also, I would like to create some (micro)services to get data from the MedRec database and run them in pods side by side with the MedRec application. Maybe even with an Oracle JET front end.

Sunday, 2 February 2020

Virtualbox 6.1.2 and Vagrant 2.2.7 - the working combination

Today I found out that Vagrant 2.2.7 has been released. A few weeks ago, Oracle released VirtualBox 6.1.2. The thing with VirtualBox 6.1.2 was that it wasn't compatible with Vagrant 2.2.6, since that version of Vagrant lacked support for the VirtualBox 6.1 base release. It was solvable, as described by Tim Hall, with a solution from Simon Coter. Happily, as expected, Vagrant 2.2.7 now supports 6.1.x. So I was eager to try it out, and indeed it works.

However, the first time I 'upped' a Vagrant project, I hit the error:
VBoxManage.exe: error: Unknown option: --clipboard

Sadly this was due to the following lines in my Vagrantfile:
    # Set clipboard and drag&drop bidirectional
    #vb.customize ["modifyvm", :id, "--clipboard", "bidirectional"]
    #vb.customize ["modifyvm", :id, "--draganddrop", "bidirectional"]
I did not try the --draganddrop option, but assumed that it would fail too. Commenting those lines out (as in the example) got my Vagrantfile working again.
I use these settings to have bidirectional clipboard and drag-and-drop, which are off by default, so I had to figure out why this happened.
After startup of the new VM I tested the clipboard functionality, and although these lines are commented out, it worked anyway. Apparently I don't need those lines anymore.

Since it did not let me go, I tried:
C:\Program Files\Oracle\VirtualBox>vboxmanage modifyvm
Usage:

VBoxManage modifyvm         <uuid|vmname>
                            [--name <name>]
                            [--groups <group>, ...]
                            [--description <desc>]
...
                            [--clipboard-mode disabled|hosttoguest|guesttohost|
                                              bidirectional]
                            [--draganddrop disabled|hosttoguest|guesttohost|
                                           bidirectional]

Apparently the option changed to --clipboard-mode, while --draganddrop kept its name.
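
So, if you want to set the clipboard option from your Vagrantfile again, something like this should work (a sketch, using the new option name from the usage output above):
    # Set clipboard bidirectional with the VirtualBox 6.1 option name
    vb.customize ["modifyvm", :id, "--clipboard-mode", "bidirectional"]
    vb.customize ["modifyvm", :id, "--draganddrop", "bidirectional"]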

Friday, 24 January 2020

Configure Weblogic Policies and Actions using WLST

Fairly regularly I give a training on Weblogic Tuning and Troubleshooting, where I talk about JVMs, garbage collection, and some subsystems of Weblogic, JMS and JDBC for instance, and how to tune and troubleshoot them.

One of the larger parts of the training is the Weblogic Diagnostic Framework. I find it quite interesting, but also pretty complex, and maybe therefore hardly used in everyday Weblogic administration. That might be a pity, because it can be quite powerful. You can find it used in Fusion Middleware, with preconfigured policies and actions. I guess that other tooling for Weblogic diagnostics and monitoring, like WLSDM, also relies on it (although I don't know for sure).

Configuring WLDF can be quite hard, and during the last run of the training I figured that it might help to turn the solution of the workshop into a script. At least to show what you're doing when executing the labs, but certainly also to show how you can put your configurations into a script that you can extend and reuse over different environments.

This week I got a question on the Oracle community, "To be notify of les warning Logs", that made me remember this script. Maybe it's not exactly the answer, but I think it can at least be a starting point. And I realized that I had not written about it yet.

11g vs 12c

I stumbled upon a nice 11g blog about this subject. In 12c Oracle renamed this part of WLDF: where in 11g it was called "Watches and Notifications", it is now called "Policies and Actions". When working with the console, you'll find that the console follows the new naming, but in WLST the APIs still have the old 11g naming. So keep in mind that Policies are Watches and Actions are Notifications.

Documentation about Configuring Policies and Actions can be found here.

I'm not going to explain all the concepts of the subject; instead I'll go through my base script step by step, and then conclude with some remarks and ideas.

Diagnostic Module

Just like JMS resources, the Diagnostic Framework combines its resources into WLDFSystemResource modules. A Diagnostic Module is in essence an administrative unit that groups the resources. A diagnostic module can be created with the following WLST function:
#
def createDiagnosticModule(diagModuleName, targetServerName):
  module=getMBean('/WLDFSystemResources/'+diagModuleName)
  if module==None:
    print 'Create new Diagnostic Module'+diagModuleName
    edit()
    startEdit()
    cd('/')
    module = cmo.createWLDFSystemResource(diagModuleName)
    targetServer=getMServer(targetServerName)
    module.addTarget(targetServer)
    # Activate changes
    save()
    activate(block='true')
    print 'Diagnostic Module created successfully.'
  else:
    print 'Diagnostic Module'+diagModuleName+' already exists!'
  return module

The script first checks if the Diagnostic Module already exists; you'll see that all the functions in this article work like this. This also helps when using them in Weblogic-under-Kubernetes environments. Diagnostic modules are registered under '/WLDFSystemResources' and created with the createWLDFSystemResource() method. Also, like JMS modules, you need to target them. This function does so based on targetServerName, using the getMServer() function to get the MBean to target:
#
def getMServer(serverName):
  server=getMBean('/Servers/'+serverName)
  return server

Many Weblogic resources need to be targeted. I notice that I do this in many different ways in different scripts, all over this blog and in my work. Maybe I need to write a more generic, smarter way of doing this; a sketch of such a helper follows below. In this case I simply target a single server, but it could be a list of servers and/or clusters.
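
A sketch of what such a more generic helper could look like (illustrative, not part of the original script): it accepts names that may refer to either servers or clusters, and should be called within an edit session.
#
def addTargets(module, targetNames):
  for targetName in targetNames:
    target=getMBean('/Servers/'+targetName)
    if target==None:
      target=getMBean('/Clusters/'+targetName)
    if target==None:
      print 'Target '+targetName+' not found!'
    else:
      module.addTarget(target)
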
The createDiagnosticModule() function is called from a main function as follows:
import os,sys, traceback
#
adminHost=os.environ["ADM_HOST"]
adminPort=os.environ["ADM_PORT"]
admServerUrl = 't3://'+adminHost+':'+adminPort
#
adminUser='weblogic'
adminPwd='welcome1'
ttServerName=os.environ["TTSVR_NAME"]
diagModuleName='TTDiagnostics'
#
...
def main():
  try:
    print 'Connect to '+admServerUrl
    connect(adminUser,adminPwd,admServerUrl)
    createDiagnosticModule(diagModuleName, ttServerName)
...

Collectors

Collectors are also called Harvesters. They monitor MBean attributes and regularly store the data in the Harvested Data Archive of the targeted managed server. You can find it under Diagnostics->Log Files in the Weblogic console. The Weblogic console also includes a Monitoring Dashboard, which can be reached via the console home page using the 'Monitoring Dashboard' link.

Without collectors, you can only view MBean attributes in the graphs from the moment you start a graph: it collects the values from that moment onwards, until you pause or stop the collection.
With a collector, however, you can also view the attribute values back through history.

A Collector is created within a diagnostic module. You need to define a metricType: the MBean type, for instance 'JDBCDataSourceRuntimeMBean'. Then a namespace, in this case ServerRuntime. You can specify a set of instances of the particular MBean type, or provide None to watch all instances of that type. And for the instances you specify a comma-separated list of attributes you want to harvest.

This leads to the following function:

#
def createCollector(diagModuleName, metricType, namespace, harvestedInstances,attributesCsv):
  harvesterName='/WLDFSystemResources/'+diagModuleName+'/WLDFResource/'+diagModuleName+'/Harvester/'+diagModuleName 
  harvestedTypesPath=harvesterName+'/HarvestedTypes/';
  print 'Check Collector '+harvestedTypesPath+metricType
  collector=getMBean(harvestedTypesPath+metricType)
  if collector==None:
    print 'Create new Collector for '+metricType+' in '+diagModuleName
    edit()
    startEdit()
    cd(harvestedTypesPath)
    collector=cmo.createHarvestedType(metricType)
    cd(harvestedTypesPath+metricType)
    attributeArray=jarray.array([String(x.strip()) for x in attributesCsv.split(',')], String)
    collector.setHarvestedAttributes(attributeArray)
    collector.setHarvestedInstances(harvestedInstances)
    collector.setNamespace(namespace)
    # Activate changes
    save()
    activate(block='true')
    print 'Collector created successfully.'
  else:
    print 'Collector '+metricType+' in '+diagModuleName+' already exists!'
  return collector

This creates the Collector using createHarvestedType() for the MBean type (metricType). The list of attributes is provided as a comma-separated string, but the setter on the collector (setHarvestedAttributes(attributeArray)) expects a jarray, so the CSV list needs to be translated. It is created by a (to me, a bit peculiar when I first saw it) Python construct:
    attributeArray=jarray.array([String(x.strip()) for x in attributesCsv.split(',')], String)

It splits the CSV string with the comma as separator and then loops over the resulting values. For each value it constructs a trimmed String. The resulting String values are fed to a String-based jarray.array factory.

The following line, added to the main function, will call the function when you want to watch all instances:
createCollector(diagModuleName, 'weblogic.management.runtime.JDBCDataSourceRuntimeMBean','ServerRuntime', None, 'ActiveConnectionsCurrentCount,CurrCapacity,LeakedConnectionCount')

In case you do want to select a specific set of instances, you need to do that as follows:
    harvestedInstancesList=[]
    harvestedInstancesList.append('com.bea:ApplicationRuntime=medrec,Name=TTServer_/medrec,ServerRuntime=TTServer,Type=WebAppComponentRuntime')
    harvestedInstances=jarray.array([String(x.strip()) for x in harvestedInstancesList], String)    
    createCollector(diagModuleName, 'weblogic.management.runtime.WebAppComponentRuntimeMBean','ServerRuntime', harvestedInstances,'OpenSessionsCurrentCount') 

The thing in this case is that the instances themselves are described in an expression that uses commas. You could of course construct these expressions using properties, and then use the construct above to add them to a jarray.

Actions


When you want WLDF to take action upon a certain condition, you need to create an Action for it. A simple one is to put a message on a JMS queue. But according to the documentation you can have the following types:
  • Java Management Extensions (JMX)
  • Java Message Service (JMS)
  • Simple Network Management Protocol (SNMP)
  • Simple Mail Transfer Protocol (SMTP)
  • Diagnostic image capture
  • Elasticity framework (scaling your dynamic cluster)
  • REST
  • WebLogic logging system
  • WebLogic Scripting Tool (WLST)
I created a script for a JMS action, just by recording the configuration in the console and transforming the recording into the following script:

#
def createJmsNotificationAction(diagModuleName, actionName, destination, connectionFactory):
  policiesActionsPath='/WLDFSystemResources/'+diagModuleName+'/WLDFResource/'+diagModuleName+'/WatchNotification/'+diagModuleName
  jmsNotificationPath=policiesActionsPath+'/JMSNotifications/'
  print 'Check notification action '+jmsNotificationPath+actionName
  jmsNtfAction=getMBean(jmsNotificationPath+actionName)
  if jmsNtfAction==None:
    print 'Create new JMS NotificationAction '+actionName+' in '+diagModuleName
    edit()
    startEdit()
    cd(policiesActionsPath)
    jmsNtfAction=cmo.createJMSNotification(actionName)
    jmsNtfAction.setEnabled(true)
    jmsNtfAction.setTimeout(0)
    jmsNtfAction.setDestinationJNDIName(destination)
    jmsNtfAction.setConnectionFactoryJNDIName(connectionFactory)
    # Activate changes
    save()
    activate(block='true')
    print 'JMS NotificationAction created successfully.'
  else:
    print 'JMS NotificationAction '+actionName+' in '+diagModuleName+' already exists!'
  return jmsNtfAction
  

For other types, just click the record link in the console and perform the configuration; then transform the recording into a function like the one above.

I think this function does not need much explanation. It can be called as follows, using the JNDI names of the destination and a connection factory:
createJmsNotificationAction(diagModuleName, 'JMSAction', 'com.tt.jms.WLDFNotificationQueue', 'weblogic.jms.ConnectionFactory')

Policies

A Policy identifies a situation to trap for monitoring or diagnostic purposes. It consists of an expression that identifies the situation, and one or more actions to follow up on it when the expression evaluates to true. The default language for the expression is the WLDF Query Language, but it is deprecated and superseded by the Java Expression Language (EL).

Another aspect of the policy is the alarm. When an event fires in Weblogic that correlates to the policy, you might not want the handlers executed every time it occurs. If, for instance, a JMS queue hits a high count, and you define a policy with an email action, you might not want an email message for every new message posted on the queue: then not only the queue is flooded, but your inbox as well. In the next function the alarm is set to 'AutomaticReset' with a reset period of 300 seconds (300,000 ms). When fired, the policy is disabled for that amount of time, and then automatically enabled again.
#
def createPolicy(diagModuleName, policyName, ruleType, ruleExpression, actions):  
  policiesActionsPath='/WLDFSystemResources/'+diagModuleName+'/WLDFResource/'+diagModuleName+'/WatchNotification/'+diagModuleName
  policiesPath=policiesActionsPath+'/Watches/'
  print 'Check Policy '+policiesPath +policyName
  policy=getMBean(policiesPath +policyName)
  if policy==None:
    print 'Create new Policy '+policyName+' in '+diagModuleName
    edit()
    startEdit()
    cd(policiesActionsPath)
    policy=cmo.createWatch(policyName)
    policy.setEnabled(true)
    policy.setExpressionLanguage('EL')
    policy.setRuleType(ruleType)
    policy.setRuleExpression(ruleExpression)
    policy.setAlarmType('AutomaticReset')
    policy.setAlarmResetPeriod(300000)
    cd(policiesPath +policyName)
    set('Notifications', actions)
    schedule=getMBean(policiesPath +policyName+'/Schedule/'+policyName)
    schedule.setMinute('*')
    schedule.setSecond('*/15')
    # Activate changes
    save()
    activate(block='true')
    print 'Policy created successfully.'
  else:
    print 'Policy '+policyName+' in '+diagModuleName+' already exists!'
  return policy

A policy can drive multiple actions; therefore they must also be provided as a jarray. For that, the following lines are added to the main function:

    actionsList=[]
    actionsList.append('com.bea:Name=JMSAction,Type=weblogic.diagnostics.descriptor.WLDFJMSNotificationBean,Parent=[TTDomain]/WLDFSystemResources[TTDiagnostics],Path=WLDFResource[TTDiagnostics]/WatchNotification[TTDiagnostics]/JMSNotifications[JMSAction]')
    actions=jarray.array([ObjectName(action.strip()) for action in actionsList], ObjectName)    
    createPolicy(diagModuleName,'HiStuckThreads', 'Harvester', 'wls:ServerHighStuckThreads(\"30 seconds\",\"10 minutes\",5)', actions)

As you can see, the JMSAction created earlier is coded as an ObjectName expression and added to the list. As mentioned earlier with the harvested instances, you could wrap this in a separate function that builds the expression from its parts; a sketch of such a helper follows below. In the example above, the rule is defined as 'wls:ServerHighStuckThreads(\"30 seconds\",\"10 minutes\",5)' and added as a hardcoded parameter to the call to the createPolicy() function.
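
A sketch of such a helper (hypothetical; it simply assembles the literal shown above from its parts):
#
def jmsActionObjectName(domainName, diagModuleName, actionName):
  return ObjectName('com.bea:Name='+actionName
    +',Type=weblogic.diagnostics.descriptor.WLDFJMSNotificationBean'
    +',Parent=['+domainName+']/WLDFSystemResources['+diagModuleName+']'
    +',Path=WLDFResource['+diagModuleName+']/WatchNotification['+diagModuleName+']/JMSNotifications['+actionName+']')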

Another example is:
    ruleExpression='wls:ServerGenericMetricRule(\"com.bea:Name=MedRecGlobalDataSourceXA,ServerRuntime=TTServer,Type=JDBCDataSourceRuntime\",\"WaitingForConnectionHighCount\",\">\",0,\"30 seconds\",\"10 minutes\")'
    createPolicy(diagModuleName,'OverloadedDS', 'Harvester', ruleExpression, actions)
In this example the rule is quite long and would make the call to create the policy quite long as well. But again, this could be abstracted into a function that builds the expression.

Conclusion

I put the complete script on GitHub. It is a starting point, showing how to set up collectors, policies and actions using WLST. It could be extended with functions that create the different expressions based on properties. This would make the scripting more robust, because you would not need to formulate your expressions for every purpose where you want different values.

When I started with this script during the training, I imagined that you could define a library for several types of collectors, actions and policies. You could drive those with a smart property or XML configuration file that defines all the policies you want to add to the environment. You could even create different property files for different kinds of environments. You could have different Weblogic domains for particular applications, but also for OSB, SOA, BI Publisher, etc. Based on the kind of environment, you may want different sets of Weblogic resources monitored.

If you make sure that all your functions are re-entrant, you could easily add them to the scripts run from your Docker files, to build up and start your Kubernetes Weblogic Operator domain. See my earlier posts about my cheat sheet, part 1 and part 2.