Friday, 24 January 2020

Configure Weblogic Policies and Actions using WLST

Fairly regularly I give a training on Weblogic Tuning and Troubleshooting, in which I talk about JVMs, garbage collection, and some Weblogic subsystems, for instance JMS and JDBC, and how to tune and troubleshoot them.

One of the larger parts of the training is the Weblogic Diagnostic Framework (WLDF). I find it quite interesting, but also pretty complex, and maybe therefore hardly used in everyday Weblogic administration. That is a pity, because it can be quite powerful. You can find it used in Fusion Middleware, with preconfigured policies and actions. And I guess that other tooling for Weblogic diagnostics and monitoring, like WLSDM, also relies on it (although I don't know for sure).

Configuring WLDF can be quite hard, and during the last run of the training I figured that it might help to turn the solution of the workshop into a script. At least to show what you're doing when executing the labs, but certainly also to show how you can put your configurations into a script that you can extend and reuse across different environments.

This week I got a question on the Oracle community ("To be notify of les warning Logs") that made me remember this script. Maybe it's not exactly the answer, but I think it can at least be a starting point. And I realized that I had not written about it yet.

 11g vs 12c

I stumbled upon a nice 11g blog about this subject. In 12c, Oracle renamed this part of the WLDF: where in 11g it was called "Watches and Notifications", it is now called "Policies and Actions". When working with the console, you'll find that it follows the new naming, but in WLST the APIs still use the old 11g naming. So keep in mind that Policies are Watches and Actions are Notifications.

Documentation about Configuring Policies and Actions can be found here.

I'm not going to explain all the concepts of the subject, but will go through my base script step by step and conclude with some remarks and ideas.

Diagnostic Module

Just like JMS resources, the Diagnostic Framework groups its resources into WLDFSystemResource modules. A Diagnostic Module is in essence an administrative unit that bundles those resources. It can be created with the following WLST function:
#
def createDiagnosticModule(diagModuleName, targetServerName):
  module=getMBean('/WLDFSystemResources/'+diagModuleName)
  if module==None:
    print 'Create new Diagnostic Module '+diagModuleName
    edit()
    startEdit()
    cd('/')
    module = cmo.createWLDFSystemResource(diagModuleName)
    targetServer=getMServer(targetServerName)
    module.addTarget(targetServer)
    # Activate changes
    save()
    activate(block='true')
    print 'Diagnostic Module created successfully.'
  else:
    print 'Diagnostic Module '+diagModuleName+' already exists!'
  return module

The script first checks if the Diagnostic Module already exists. You'll see that all the functions in this article work like this, which also helps when using them in Weblogic-under-Kubernetes environments, where scripts may be re-run. Diagnostic modules are registered under '/WLDFSystemResources' and created with the createWLDFSystemResource() method. Like JMS modules, you need to target them. This function does that based on the targetServerName and uses the getMServer() function to get the MBean to target:
#
def getMServer(serverName):
  server=getMBean('/Servers/'+serverName)
  return server

Many Weblogic resources need to be targeted. I notice that I do this in many different ways in different scripts all over this blog and in my work; maybe I should write a more generic, smarter way of doing this. In this case I simply target a single server, but it could be a list of servers and/or clusters.
The function is called from a main function as follows:
import os, sys, traceback
import jarray
from java.lang import String
from javax.management import ObjectName
#
adminHost=os.environ["ADM_HOST"]
adminPort=os.environ["ADM_PORT"]
admServerUrl = 't3://'+adminHost+':'+adminPort
#
adminUser='weblogic'
adminPwd='welcome1'
ttServerName=os.environ["TTSVR_NAME"]
diagModuleName='TTDiagnostics'
#
...
def main():
  try:
    print 'Connect to '+admServerUrl
    connect(adminUser,adminPwd,admServerUrl)
    createDiagnosticModule(diagModuleName, ttServerName)
...

Collectors

Collectors are also called Harvesters. They monitor MBean attributes and regularly store the data in the Harvested Data Archive of the targeted Managed Server. You can find it under Diagnostics->Log Files in the Weblogic console. The console also includes a Monitoring Dashboard, which can be reached via the console home page using the 'Monitoring Dashboard' link.

Without collectors, you can only view MBean attributes in the graphs from the moment you start a graph: the dashboard collects the values from that moment onwards, until you pause or stop the collection.
Using a collector, however, you can also view the attribute values back in history.

A Collector is created within a diagnostic module. You need to define a metricType: the MBean type, for instance 'JDBCDataSourceRuntimeMBean'. Then a namespace, in this case the ServerRuntime. You can specify a set of instances of the particular MBean type, or provide None to watch all instances of that type. And for those instances you specify a comma-separated list of attributes you want to harvest.

This leads to the following function:

#
def createCollector(diagModuleName, metricType, namespace, harvestedInstances,attributesCsv):
  harvesterName='/WLDFSystemResources/'+diagModuleName+'/WLDFResource/'+diagModuleName+'/Harvester/'+diagModuleName 
  harvestedTypesPath=harvesterName+'/HarvestedTypes/';
  print 'Check Collector '+harvestedTypesPath+metricType
  collector=getMBean(harvestedTypesPath+metricType)
  if collector==None:
    print 'Create new Collector for '+metricType+' in '+diagModuleName
    edit()
    startEdit()
    cd(harvestedTypesPath)
    collector=cmo.createHarvestedType(metricType)
    cd(harvestedTypesPath+metricType)
    attributeArray=jarray.array([String(x.strip()) for x in attributesCsv.split(',')], String)
    collector.setHarvestedAttributes(attributeArray)
    collector.setHarvestedInstances(harvestedInstances)
    collector.setNamespace(namespace)
    # Activate changes
    save()
    activate(block='true')
    print 'Collector created successfully.'
  else:
    print 'Collector '+metricType+' in '+diagModuleName+' already exists!'
  return collector

This creates the Collector using createHarvestedType() for the MBean type (metricType). The list of attributes is provided as a comma-separated string, but the setter on the collector (setHarvestedAttributes(attributeArray)) expects a jarray, so the csv list needs to be translated. That is done by a Python construct that looked a bit peculiar to me when I first saw it:
    attributeArray=jarray.array([String(x.strip()) for x in attributesCsv.split(',')], String)

It splits the csv string on the comma separator and loops over the resulting values. For each value it constructs a trimmed String. The resulting String values are fed to a String-based jarray.array factory.
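Outside WLST you can try the same transformation in plain Python; jarray and java.lang.String are Jython-specific, so a plain list stands in for the jarray here:

```python
def csv_to_attributes(attributes_csv):
    """Split a comma-separated string and trim each value, like the WLST construct does."""
    return [x.strip() for x in attributes_csv.split(',')]

# The attribute list used in the collector example below:
attrs = csv_to_attributes('ActiveConnectionsCurrentCount, CurrCapacity ,LeakedConnectionCount')
```

In WLST the resulting values are then wrapped with jarray.array(..., String) to satisfy the MBean setter.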

The following line, added to the main function, calls this function when you want to watch all instances:
createCollector(diagModuleName, 'weblogic.management.runtime.JDBCDataSourceRuntimeMBean','ServerRuntime', None, 'ActiveConnectionsCurrentCount,CurrCapacity,LeakedConnectionCount')

In case you want to select a specific set of instances, you do that as follows:
    harvestedInstancesList=[]
    harvestedInstancesList.append('com.bea:ApplicationRuntime=medrec,Name=TTServer_/medrec,ServerRuntime=TTServer,Type=WebAppComponentRuntime')
    harvestedInstances=jarray.array([String(x.strip()) for x in harvestedInstancesList], String)    
    createCollector(diagModuleName, 'weblogic.management.runtime.WebAppComponentRuntimeMBean','ServerRuntime', harvestedInstances,'OpenSessionsCurrentCount') 

The catch here is that the instance names themselves are expressions that contain commas. You could of course construct these expressions from properties, and then use the construct above to add them to a jarray.
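Such instance expressions could be assembled from properties with a small helper. This is my own sketch; the function name and parameters are not part of the WLDF API:

```python
def web_app_instance(app_runtime, name, server_runtime):
    """Assemble a WebAppComponentRuntime harvested-instance expression (hypothetical helper)."""
    return ('com.bea:ApplicationRuntime=%s,Name=%s,ServerRuntime=%s,'
            'Type=WebAppComponentRuntime' % (app_runtime, name, server_runtime))

# Reproduces the instance expression from the example above:
instance = web_app_instance('medrec', 'TTServer_/medrec', 'TTServer')
```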

Actions


When you want WLDF to take action upon a certain condition, you need to create an Action for it. A simple one creates a message on a JMS queue, but according to the documentation you can have the following types:
  • Java Management Extensions (JMX)
  • Java Message Service (JMS)
  • Simple Network Management Protocol (SNMP)
  • Simple Mail Transfer Protocol (SMTP)
  • Diagnostic image capture
  • Elasticity framework (scaling your dynamic cluster)
  • REST
  • WebLogic logging system
  • WebLogic Scripting Tool (WLST)
I created a script for a JMS Action, simply by recording the configuration in the console and transforming the recording into the following script:

#
def createJmsNotificationAction(diagModuleName, actionName, destination, connectionFactory):
  policiesActionsPath='/WLDFSystemResources/'+diagModuleName+'/WLDFResource/'+diagModuleName+'/WatchNotification/'+diagModuleName
  jmsNotificationPath=policiesActionsPath+'/JMSNotifications/'
  print 'Check notification action '+jmsNotificationPath+actionName
  jmsNtfAction=getMBean(jmsNotificationPath+actionName)
  if jmsNtfAction==None:
    print 'Create new JMS NotificationAction '+actionName+' in '+diagModuleName
    edit()
    startEdit()
    cd(policiesActionsPath)
    jmsNtfAction=cmo.createJMSNotification(actionName)
    jmsNtfAction.setEnabled(true)
    jmsNtfAction.setTimeout(0)
    jmsNtfAction.setDestinationJNDIName(destination)
    jmsNtfAction.setConnectionFactoryJNDIName(connectionFactory)
    # Activate changes
    save()
    activate(block='true')
    print 'JMS NotificationAction created successfully.'
  else:
    print 'JMS NotificationAction '+actionName+' in '+diagModuleName+' already exists!'
  return jmsNtfAction
  

For other types, just click the record link in the console and perform the configuration. Then transform the recording into a function like the one above.

I think this function does not need much explanation. It can be called as follows, using the JNDI names of the destination and a connection factory:
createJmsNotificationAction(diagModuleName, 'JMSAction', 'com.tt.jms.WLDFNotificationQueue', 'weblogic.jms.ConnectionFactory')

Policies

A Policy identifies a situation to trap for monitoring or diagnostic purposes. It consists of an expression that identifies the situation and one or more actions to follow up on it when the expression evaluates to true. The default language for the expression is the WLDF Query Language, but it is deprecated and superseded by the Java Expression Language (EL).

Another aspect of the policy is the alarm. When an event in Weblogic fires that matches the policy, you might not want the actions executed every time it occurs. If, for instance, a JMS queue hits a high count and you define a policy with an email action, you probably don't want an email for every new message posted on the queue: not only the queue would be flooded, but your inbox as well. In the function below the alarm is set to 'AutomaticReset' with a reset period of 300 seconds (300000 ms). When fired, the policy is disabled for that amount of time and then automatically enabled again.
#
def createPolicy(diagModuleName, policyName, ruleType, ruleExpression, actions):  
  policiesActionsPath='/WLDFSystemResources/'+diagModuleName+'/WLDFResource/'+diagModuleName+'/WatchNotification/'+diagModuleName
  policiesPath=policiesActionsPath+'/Watches/'
  print 'Check Policy '+policiesPath +policyName
  policy=getMBean(policiesPath +policyName)
  if policy==None:
    print 'Create new Policy '+policyName+' in '+diagModuleName
    edit()
    startEdit()
    cd(policiesActionsPath)
    policy=cmo.createWatch(policyName)
    policy.setEnabled(true)
    policy.setExpressionLanguage('EL')
    policy.setRuleType(ruleType)
    policy.setRuleExpression(ruleExpression)
    policy.setAlarmType('AutomaticReset')
    policy.setAlarmResetPeriod(300000)
    cd(policiesPath +policyName)
    set('Notifications', actions)
    schedule=getMBean(policiesPath +policyName+'/Schedule/'+policyName)
    schedule.setMinute('*')
    schedule.setSecond('*/15')
    # Activate changes
    save()
    activate(block='true')
    print 'Policy created successfully.'
  else:
    print 'Policy '+policyName+' in '+diagModuleName+' already exists!'
  return policy

A policy can drive multiple actions. Therefore they must also be provided as a jarray. For that, the following lines are added to the main function:

    actionsList=[]
    actionsList.append('com.bea:Name=JMSAction,Type=weblogic.diagnostics.descriptor.WLDFJMSNotificationBean,Parent=[TTDomain]/WLDFSystemResources[TTDiagnostics],Path=WLDFResource[TTDiagnostics]/WatchNotification[TTDiagnostics]/JMSNotifications[JMSAction]')
    actions=jarray.array([ObjectName(action.strip()) for action in actionsList], ObjectName)    
    createPolicy(diagModuleName,'HiStuckThreads', 'Harvester', 'wls:ServerHighStuckThreads(\"30 seconds\",\"10 minutes\",5)', actions)

As you can see, the JMSAction created earlier is coded as an ObjectName expression and added to the list. As mentioned earlier with the harvested instances, you could wrap this in a separate function that builds the expression from properties. In the example above, the rule is defined as 'wls:ServerHighStuckThreads(\"30 seconds\",\"10 minutes\",5)' and passed as a hardcoded parameter to the createPolicy() function.
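A hypothetical helper (my own sketch, not part of WLST or WLDF) that builds the JMS action ObjectName expression from the domain, module and action names could look like this:

```python
def jms_action_object_name(domain_name, module_name, action_name):
    """Build the ObjectName expression for a WLDF JMS notification action (sketch)."""
    return ('com.bea:Name=%s,Type=weblogic.diagnostics.descriptor.WLDFJMSNotificationBean,'
            'Parent=[%s]/WLDFSystemResources[%s],Path=WLDFResource[%s]'
            '/WatchNotification[%s]/JMSNotifications[%s]'
            % (action_name, domain_name, module_name, module_name, module_name, action_name))

# Reproduces the expression appended to actionsList above:
expr = jms_action_object_name('TTDomain', 'TTDiagnostics', 'JMSAction')
```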

Another example is:
    ruleExpression='wls:ServerGenericMetricRule(\"com.bea:Name=MedRecGlobalDataSourceXA,ServerRuntime=TTServer,Type=JDBCDataSourceRuntime\",\"WaitingForConnectionHighCount\",\">\",0,\"30 seconds\",\"10 minutes\")'
    createPolicy(diagModuleName,'OverloadedDS', 'Harvester', ruleExpression, actions)
In this example the rule expression is quite long, which would make the line that creates the policy quite long as well. But again, this could be abstracted into a function that builds the expression.

Conclusion

I put the complete script on GitHub. It is a starting point that shows how to set up collectors, policies and actions using WLST. It could be extended with functions that create the different expressions based on properties. That would make the scripting more robust, because you would not need to formulate your expressions anew for every purpose where you want different values.

When I started with this script during the training, I imagined that you could define a library for several types of collectors, actions and policies. You could drive those with a smart property or XML configuration file that defines all the policies that you want to add to the environment. You could even create different property files for different kinds of environments: different Weblogic domains for particular applications, but also for OSB, SOA, BI Publisher, etc. Based on the kind of environment you may want different sets of Weblogic resources monitored.

If you make sure that all your functions are re-entrant, you could easily add them to the scripts run from your Docker files, to build up and start your Kubernetes Weblogic Operator domain. See my earlier posts about my cheat sheet, part 1 and part 2.

My Weblogic on Kubernetes Cheatsheet, part 2.

In my previous blog post I published the first part of my Kubernetes cheatsheet, following the Weblogic Operator tutorial. In this part 2, I'll publish the scripts I created for the next few chapters of the tutorial.

Install Traefik Software Loadbalancer

The fourth part of the tutorial is about installing the Traefik software loadbalancer service. It is described in this part: 3. Install Traefik Software Loadbalancer, and again uses Helm to install the service.

install_traefik.sh

Installing Traefik is just a quest of (nice Dutch idiom 😉) installing the proper Helm chart.
#!/bin/bash
SCRIPTPATH=$(dirname $0)
#
. $SCRIPTPATH/oke_env.sh
echo Install traefik
cd $HELM_CHARTS_HOME
helm install stable/traefik \
--name traefik-operator \
--namespace traefik \
--values kubernetes/samples/charts/traefik/values.yaml  \
--set "kubernetes.namespaces={traefik}" \
--set "serviceType=LoadBalancer"
cd $SCRIPTPATH

getsvclbr.sh

A simple script to check the Loadbalancer Service:
#!/bin/bash
SCRIPTPATH=$(dirname $0)
#
. $SCRIPTPATH/oke_env.sh
echo Get service traefik
kubectl get service -n traefik

I did not change the name or the namespace of the Traefik loadbalancer, but I could do so easily by moving them to oke_env.sh. I haven't had the need yet, though I can imagine it could arise.

getpublicip.sh

The scriptlet above shows all the Traefik service information. To show only the public IP that you need to reach your application, you can use:
#!/bin/bash
SCRIPTPATH=$(dirname $0)
#
. $SCRIPTPATH/oke_env.sh
echo Get traefik public IP
kubectl describe svc traefik-operator --namespace traefik | grep Ingress | awk '{print $3}'

Again, I could move the service name and namespace to oke_env.sh.
When the Weblogic domain is created, there will be some additional commands, and therefore scriptlets, that update the Traefik configuration.

Deploy WebLogic Domain

Now it gets really interesting: we're going to create the actual Weblogic pods that will run the Weblogic domain. This is described in the fifth part: Deploy WebLogic Domain.

create_wlsdmnaccount.sh

This script does a sequence of things. You could argue that several of the scripts described earlier could have been combined like this one.
Anyway, it does the following:
  • Create a Weblogic domain namespace
  • Create and label a Kubernetes secret within that namespace for the Administration Server boot credentials
  • Create a secret for the Docker registry on the Oracle infrastructure (Oracle Container Image Repository)
#!/bin/bash
SCRIPTPATH=$(dirname $0)
#
. $SCRIPTPATH/oke_env.sh
#
function prop {
    grep "${1}" $SCRIPTPATH/credentials.properties|cut -d'=' -f2
}
#
echo Create weblogic domain namespace $WLS_DMN_NS
kubectl create namespace $WLS_DMN_NS
WLS_USER=$(prop 'weblogic.user')
WLS_PWD=$(prop 'weblogic.password')
echo Create a Kubernetes secret $WLS_DMN_CRED in namespace $WLS_DMN_NS containing the Administration Server boot credentials for user $WLS_USER
kubectl -n $WLS_DMN_NS create secret generic $WLS_DMN_CRED \
  --from-literal=username=$WLS_USER \
  --from-literal=password=$WLS_PWD
echo Label the $WLS_DMN_CRED in namespace $WLS_DMN_NS secret with domainUID $WLS_DMN_NAME
kubectl label secret $WLS_DMN_CRED \
  -n $WLS_DMN_NS \
  weblogic.domainUID=$WLS_DMN_NAME \
  weblogic.domainName=$WLS_DMN_NAME \
  --overwrite=true
echo Create secret for oci image repository $OCIR_CRED
OCIR_USER=$(prop 'ocir.user')
OCIR_PWD=$(prop 'ocir.password')
OCIR_EMAIL=$(prop 'ocir.email')
OCI_TEN=$(prop 'oci.tenancy')
OCI_REG=$(prop 'oci.region')
kubectl create secret docker-registry $OCIR_CRED \
  -n $K8S_NS \
  --docker-server=${OCI_REG}.ocir.io \
  --docker-username="${OCI_TEN}/${OCIR_USER}" \
  --docker-password="${OCIR_PWD}" \
  --docker-email="${OCIR_EMAIL}"


The script starts with a function to read credential properties. It relies on the credentials.properties file:
weblogic.user=weblogic
weblogic.password=welcome1
ocir.user=that.would.be.me
ocir.password=something difficult
ocir.email=my.email.adres@darwin-it.nl
oci.tenancy=ours
oci.region=fra
db.medrec.username=MEDREC_OWNER
db.medrec.password=Medrec_Password$83!
db.medrec.url=jdbc:oracle:thin:@10.0.10.6:1521/pdb1.sub50abc021b.medrecokecluste.oraclevcn.com

I like the way this works with the prop function, because you can abstract these properties away without having to keep them in environment variables. Especially credentials you would not want in an environment variable. Actually, passwords should be stored even more securely than in a plain property file, of course. But I think I would move the OCIDs used in setting up the OKE cluster to credentials.properties in a next iteration.
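The same lookup could be sketched in Python. Note that, unlike the grep|cut version, this sketch matches the key only at the start of a line and keeps any '=' characters inside the value:

```python
def prop(key, lines):
    """Return the value for key from key=value lines; None if the key is absent."""
    prefix = key + '='
    for line in lines:
        line = line.rstrip('\n')
        if line.startswith(prefix):
            return line[len(prefix):]
    return None
```

Usage would be something like `prop('ocir.user', open('credentials.properties'))`.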

upgrade_traefik.sh

When the Weblogic domain namespace is created, you can add it to the Traefik configuration:

#!/bin/bash
SCRIPTPATH=$(dirname $0)
#
. $SCRIPTPATH/oke_env.sh
echo Upgrade traefik with namespace $WLS_DMN_NS
cd $HELM_CHARTS_HOME
helm upgrade \
  --reuse-values \
  --set "kubernetes.namespaces={traefik,$WLS_DMN_NS}" \
  --wait \
  traefik-operator \
  stable/traefik

cd $SCRIPTPATH

start_dmn.sh

The next step in the tutorial is to create the Weblogic domain by starting the pods. To do so, you need to create a domain.yaml, as in this example. You'll need to update the properties according to your environment, specifying things like the domain container image, names, credential secrets, etc.
For this script, one property is of special interest:
  # - "NEVER" will not start any server in the domain
  # - "ADMIN_ONLY" will start up only the administration server (no managed servers will be started)
  # - "IF_NEEDED" will start all non-clustered servers, including the administration server and clustered servers up to the replica count
  serverStartPolicy: "IF_NEEDED"

The serverStartPolicy property determines whether the Weblogic pods should start or stop. To have them started you set it to "IF_NEEDED"; to stop them, to "NEVER". I found that I had to change this property several times and then run kubectl apply on the yaml, and soon thought this could be more convenient. So I created a copy of domain.yaml, called domain.yaml.tpl, in which I changed the property to:
  # - "NEVER" will not start any server in the domain
  # - "ADMIN_ONLY" will start up only the administration server (no managed servers will be started)
  # - "IF_NEEDED" will start all non-clustered servers, including the administration server and clustered servers up to the replica count
  serverStartPolicy: "$SVR_STRT_POLICY"
  #serverStartPolicy: "NEVER"

Now, using a script, I can create a copy of domain.yaml.tpl as domain.yaml while replacing the environment variable with the desired value. A few years ago I found the Linux command envsubst, which reads a file from stdin, replaces all environment variables with their corresponding values, and writes the result to stdout.
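envsubst itself is a binary from the GNU gettext utilities; its basic behaviour can be mimicked in a few lines of Python (a minimal sketch that only handles $VAR and ${VAR} references, substituting an empty string for unset variables):

```python
import os
import re

def envsubst(text):
    """Replace $VAR and ${VAR} with values from the environment ('' if unset)."""
    def repl(match):
        name = match.group(1) or match.group(2)
        return os.environ.get(name, '')
    return re.sub(r'\$\{(\w+)\}|\$(\w+)', repl, text)
```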

So to start the domain, I would use:
#!/bin/bash
SCRIPTPATH=$(dirname $0)
#
. $SCRIPTPATH/oke_env.sh
echo Start Domain $K8S_DMN_NAME
export SVR_STRT_POLICY="IF_NEEDED"
WLS_DMN_YAML_TPL=${WLS_DMN_YAML}.tpl
envsubst < $WLS_DMN_YAML_TPL > $WLS_DMN_YAML
kubectl apply -f $WLS_DMN_YAML

It sets SVR_STRT_POLICY to "IF_NEEDED", streams domain.yaml.tpl through envsubst into domain.yaml, and then performs kubectl apply on the resulting yaml.
Obviously, you can use this also to abstract properties such as cluster replicas and Java options out of the yaml file and keep them in a property file or oke_env.sh.

stop_dmn.sh


And of course, stopping the domain works accordingly, but then using the value "NEVER".
#!/bin/bash
SCRIPTPATH=$(dirname $0)
#
. $SCRIPTPATH/oke_env.sh
echo Stop Domain $K8S_DMN_NAME
export SVR_STRT_POLICY="NEVER"
WLS_DMN_YAML_TPL=${WLS_DMN_YAML}.tpl
envsubst < $WLS_DMN_YAML_TPL > $WLS_DMN_YAML
kubectl apply -f $WLS_DMN_YAML

upgrade_traefik_routing.sh


When your domain is started, you probably want to be able to reach it from the internet. To do so, you need to create an Ingress rule. This is done as follows:
#!/bin/bash
SCRIPTPATH=$(dirname $0)
#
. $SCRIPTPATH/oke_env.sh
echo Upgrade traefik with namespace $WLS_DMN_NS
cat << EOF | kubectl apply -f -
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: traefik-pathrouting-1
  namespace: $WLS_DMN_NS
  annotations:
    kubernetes.io/ingress.class: traefik
spec:
  rules:
  - host:
    http:
      paths:
      - path: /
        backend:
          serviceName: medrec-domain-cluster-medrec-cluster
          servicePort: 8001
      - path: /console
        backend:
          serviceName: medrec-domain-adminserver
          servicePort: 7001          
EOF

The thing with this script is that it does not leverage the oke_env.sh script, except for the namespace. There are still a few hardcoded names for the services and the Ingress. But, just like the namespace, it is easy to move those to oke_env.sh.

getdmnpods.sh


When you have started your domain, you'll want to check the pods. This can be done with this script:
#!/bin/bash
SCRIPTPATH=$(dirname $0)
#
. $SCRIPTPATH/oke_env.sh
echo Get K8s pods for $WLS_DMN_NS
kubectl -n $WLS_DMN_NS get pods -o wide

Initially, you will see 3 pods: one for the Admin Server and two for the Managed Servers. But after changing domain.yaml.tpl following the sixth part, Scaling WebLogic Cluster, you will see more (or fewer) pods. Mind that if you use my start and stop scripts, you should not change the domain.yaml file itself but the domain.yaml.tpl file.

Conclusion

These scripts helped me a lot in documenting the setup. You can just follow the tutorial, which leaves you with a working environment, but it's a recipe that you would have to follow over and over again. Also, I found at a few points that it is tricky to fill in the correct values.

Lately, I started to look into Terraform for AWS and OCI. I figure that the tutorial I followed with these scripts can be broken down into a few parts that are in fact setting up OCI resources (instances, services like Traefik, etc.) and provisioning OKE. So it would be interesting to see where we can use Terraform to set up the OCI resources and services, leaving only the scripts that set up OKE itself.





Wednesday, 15 January 2020

Javascript in ANT

Earlier I wrote about an ANT script to scan the JCA adapter files in your projects home, Subversion working copy or GitHub local repo.

In my current project we use sensors to kick off message-archiving processes without cluttering the BPEL process. I'm not sure I would do it like that on a new project, but technically the idea is interesting. Unfortunately, we did not build a registry of which BPEL processes make use of it and how. So I thought about how I could easily scan for that, and found that, based on the script to scan JCA files, I could easily scan all the BPEL sensor files. Once you have found the project folders, like I did in the JCA scan script, you can search for the *_sensor.xml files.

So in a few hours I had a basic script. Now, in a second iteration, I would like to know which sensorActions the sensors trigger. For that I need to interpret the accompanying *_sensorAction.xml file; therefore, based on the found sensor filename, I need to determine the name of the sensor action file.

The first step is to figure out how to do a substring in ANT. A quick Google on "ant property substring" turned up a nice Stack Overflow thread with a nice example of an ANT script definition based on JavaScript:
  <scriptdef name="substring" language="javascript">
    <attribute name="text"/>
    <attribute name="start"/>
    <attribute name="end"/>
    <attribute name="property"/>
    <![CDATA[
       var text = attributes.get("text");
       var start = attributes.get("start");
       var end = attributes.get("end") || text.length();
       project.setProperty(attributes.get("property"), text.substring(start, end));
     ]]>
  </scriptdef>

And that can be called like:
    <substring text="${sensor.file.name}" start="0" end="20"   property="sensorAction.file.name"/>
    <echo message="Sensor Action file: ${sensorAction.file.name}"></echo>

The JavaScript substring() function is zero-based, so the first character is indexed by 0.
Not every sensor file name has the same length: the file is named after the BPEL file that it is tied to. So to get the base name, the part without the "_sensor.xml" postfix, we need to determine the length of the filename. A script that does that can easily be extracted from the one above:
  <scriptdef name="getlength" language="javascript">
    <attribute name="text"/>
    <attribute name="property"/>
    <![CDATA[
       var text = attributes.get("text");
       var length = text.length();
       project.setProperty(attributes.get("property"), length);
     ]]>
  </scriptdef>

Perfect! Using this I could create the logic in ANT to determine the sensorAction filename. However, I thought it would be easier to determine the filename in JavaScript all the way, using the strength of the proper language at hand:
  <!-- Script to get the sensorAction filename based on the sensor filename. 
  1. Cut the extension "_sensor.xml" from the filename.
  2. Add "_sensorAction.xml" to the base filename.
  -->
  <scriptdef name="getsensoractionfilename" language="javascript">
    <attribute name="sensorfilename"/>
    <attribute name="property"/>
    <![CDATA[
       var sensorFilename = attributes.get("sensorfilename");
       var sensorFilenameLength = sensorFilename.length();
       var postfixLength = "_sensor.xml".length();
       var sensorFilenameBaseLength=sensorFilenameLength-postfixLength;
       var sensorActionFilename=sensorFilename.substring(0, sensorFilenameBaseLength)+"_sensorAction.xml";
       project.setProperty(attributes.get("property"), sensorActionFilename);
     ]]>
  </scriptdef>
And then I can get the sensorAction filename as follows:
    <getsensoractionfilename sensorfilename="${sensor.file.name}" property="sensorAction.file.name"/>
    <echo message="Sensor Action file: ${sensorAction.file.name}"></echo>
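For comparison, the same derivation in plain Python comes down to simple slicing (the filename in the example is hypothetical):

```python
def sensor_action_filename(sensor_filename):
    """Derive the *_sensorAction.xml name from a *_sensor.xml name."""
    base = sensor_filename[:-len('_sensor.xml')]  # cut the postfix
    return base + '_sensorAction.xml'
```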

Superb! I already found ANT a powerful language/tool, but with a few simple JavaScript snippets you can extend it easily.
Notice, by the way, the use of XSLT in the Scan JCA adapters files article. You can read XML files as properties, but to do that conveniently you need to transform a file like sensors.xml in a way that lets you easily reference the properties following the element hierarchy. This is also explained in that article.
I'll go further with my sensor scan script. Maybe I'll write about it when done.

Friday, 10 January 2020

My Weblogic on Kubernetes Cheatsheet, part 1.

Last week I had the honour to present at the UKOUG TechFest 19, together with my 'partner in crime', I think I can say now: Simon Haslam. We combined our sessions into a part 1 and a part 2.

For me this presentation is the result of having done a workshop at the PaaSForum in Mallorca, and then working that into a setup where I was able to run the MedRec Weblogic sample application against a managed database under Kubernetes.

Kubernetes Weblogic Operator Tutorial

I already wrote a blog about my workshop at the PaaSForum this year, but Marc Lameriks from AMIS did a walkthrough of the workshop. It basically comes down to this tutorial, which you can do as a self-paced tutorial. Or check out a meetup in your neighbourhood. If you're in the Netherlands, we'll be happy to organize one, or if you like, I could come over to your place and we could set something up. See also the links at the end of part 2 of our presentations for more info on the tutorial.

I did the tutorial more or less three times now: once at the PaaSForum, and then I re-did it, but deliberately changed namespace names, domain name, etc., just to see where the dependencies are, and actually to see where the pitfalls are. It's based on my method of getting to know an unfamiliar city: deliberately get lost in it. Two years ago we moved to another part of Amersfoort. To get to know my new neighbourhood, I often took another way home than the way I left. And this is basically what I did with the tutorial too.

The last time I did it was to try to run a more real-life application with an actual database. Therefore I set up a new OKE cluster, this time in a compartment of our own company cloud subscription. What's interesting is that you work with a normal customer-alike subscription within a compartment: another form of a deliberate D-Tour. But also to set up a database and see that configuration overrides to change your runtime datasource connection pool actually work.

Cheatsheet

When doing the tutorial, you'll find that besides all the configuration on the Cloud pages to set up your OKE cluster and configure Oracle Pipelines, you have to enter a lot of command-line commands. Most of them are kubectl commands, some helm, and a bit of OCI command-line interface. Doing it the first time I soon got lost in what they meant and what I was doing with them. Also, most kubectl commands work with namespaces, where your Weblogic domain has another namespace than the Weblogic Operator. And as is my habit nowadays, I soon put the commands in smart but simple scripts. Those I want to share with you. Maybe not all, but at least enough so you'll get the idea.

I also found the official kubernetes.io kubectl cheat sheet and this one on GitHub. But those are more general explanations of the particular commands.

I found it helpful to set up this cheatsheet following the tutorial. I guess this helps in relating the commands to what they're meant for.

Shell vs. Alias

At the UKOUG TechFest, someone pointed out that you could use aliases too. Of course. You could define an alias like
alias k=kubectl

However, you'll still need to extend every command with the proper namespace, pod name, etc.
Therefore, I used the approach of creating an oke_env.sh script that I can include in every script, and a property file to store the credentials that go into secrets. Every other script then calls (sources) the oke_env.sh script.
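For completeness, the alias route could look like the sketch below. The alias names kop and kdm are my own invention, not from the tutorial; the namespaces are the ones used throughout this setup.

```shell
#!/bin/bash
# Hypothetical aliases that bake the two namespaces into the command.
# In a script (as opposed to an interactive shell), aliases only expand
# after enabling this option:
shopt -s expand_aliases
alias kop='kubectl -n medrec-weblogic-operator-ns'   # Weblogic Operator namespace
alias kdm='kubectl -n medrec-domain-ns'              # Weblogic domain namespace
# After this, "kdm get po" would list the domain pods.
alias kdm   # prints: alias kdm='kubectl -n medrec-domain-ns'
```

It works, but as soon as a namespace or pod name changes you're editing every alias, which is why I prefer sourcing a single variables script.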

Setup Oracle Kubernetes Engine instance on Oracle Cloud Infrastructure

These scripts refer to the first part of the tutorial: 0. Setup Oracle Kubernetes Engine instance on Oracle Cloud Infrastructure.

oke_env.sh

It all starts with my oke_env.sh. Here you'll find all the variables that are used in most of the other scripts. In a next iteration I would probably move OCID_USER, OCID_TENANCY and OCID_CLUSTERID to my credential properties file, but I only introduced that file later on, during my experiments.

#!/bin/bash
echo Set OKE Environment
export OCID_USER="ocid1.user.oc1..{here goes that long string of characters}" 
export OCID_TENANCY="ocid1.tenancy.oc1..{here goes that other long string of characters}"
export OCID_CLUSTERID="ocid1.cluster.oc1.eu-frankfurt-1.{yet another long string of characters}"
export REGION="eu-frankfurt-1" # or your other region
export CLR_ADM_BND=makker-cluster-admin-binding
export K8S_NS="medrec-weblogic-operator-ns"
export K8S_SA="medrec-weblogic-operator-sa"
export HELM_CHARTS_HOME=/u01/content/weblogic-kubernetes-operator
export WL_OPERATOR_NAME="medrec-weblogic-operator"
export WLS_DMN_NS=medrec-domain-ns
export WLS_USER=weblogic
export WLS_DMN_NAME=medrec-domain
export WLS_DMN_CRED=medrec-domain-weblogic-credentials
export OCIR_CRED=ocirsecret
export WLS_DMN_YAML=/u01/content/github/weblogic-operator-medrec-admin/setup/medrec-domain/domain.yaml
export WLS_DMN_UID=medrec-domain
export MR_DB_CRED=mrdbsecret
export ADM_POD=medrec-domain-adminserver
export MR1_POD=medrec-domain-medrec-server1
export MR2_POD=medrec-domain-medrec-server2
export MR3_POD=medrec-domain-medrec-server3
export DMN_HOME=/u01/oracle/user_projects/domains/medrec-domain
export LCL_LOGS_HOME=/u01/content/logs
export ADM_SVR=AdminServer
export MR_SVR1=medrec-server1
export MR_SVR2=medrec-server2
export MR_SVR3=medrec-server3

credentials.properties


This stores the most important credentials, which allows me to abstract those from the scripts. However, as mentioned, I should still move the OCID_USER, OCID_TENANCY and OCID_CLUSTERID variables to this file.
weblogic.user=weblogic
weblogic.password=welcome1
ocir.user=my.email@address.nl
ocir.password=my;difficult!pa$$w0rd
ocir.email=my.email@address.nl
oci.tenancy=ourtenancy
oci.region=fra
db.medrec.username=MEDREC_OWNER
db.medrec.password=MEDREC_PASSWORD
db.medrec.url=jdbc:oracle:thin:@10.11.12.13:1521/pdb1.subsomecode.medrecokeclstr.oraclevcn.com
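The scripts that need these values can read them with a small helper. This is my own sketch, not part of the tutorial: get_prop is a hypothetical function, and the grep/cut approach assumes plain key=value lines where only the value may contain an '=' sign.

```shell
#!/bin/bash
# Hypothetical helper: read one value from a properties file.
get_prop() {
  local file=$1 key=$2
  grep "^${key}=" "$file" | head -1 | cut -d= -f2-
}

# Demonstration against a throwaway file:
cat > /tmp/credentials.properties << EOF
weblogic.user=weblogic
weblogic.password=welcome1
EOF
get_prop /tmp/credentials.properties weblogic.password   # prints: welcome1
```

A Kubernetes secret could then be created from such a value, for example with kubectl create secret generic and a --from-literal argument filled by get_prop.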

create_kubeconfig.sh

After having set up the OKE cluster in OCI and configured your OCI CLI, the first actual command you issue is to create a kube config file, using the OCI CLI. This one is normally executed only once per setup. So this script is merely there to document my commands:

#!/bin/bash
SCRIPTPATH=$(dirname $0)
#
. $SCRIPTPATH/oke_env.sh
echo "Create Kubeconfig -> copy command from Access Kube Config from cluster"
mkdir -p $HOME/.kube
oci ce cluster create-kubeconfig --cluster-id $OCID_CLUSTERID --file $HOME/.kube/config --region $REGION --token-version 2.0.0 

The SCRIPTPATH variable declaration is a trick to be able to refer to other scripts relative to that variable. Then, as you will see in all my subsequent scripts, I source the oke_env.sh script. Doing so, I can refer to the particular variables in the oci command. Therefore, as described in the tutorial, you should note down your cluster OCID and update the OCID_CLUSTERID variable in the oke_env.sh file, as well as the REGION variable.
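One caveat with this trick: dirname $0 yields a relative path when the script is invoked relatively, so the sourcing would break if a script changes directory first. A slightly more robust variant (my own sketch) resolves it to an absolute path:

```shell
#!/bin/bash
# Resolve the directory the script itself lives in to an absolute path,
# so that ". $SCRIPTPATH/oke_env.sh" works regardless of the caller's cwd:
SCRIPTPATH=$(cd "$(dirname "$0")" && pwd)
echo "$SCRIPTPATH"
```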

Note, by the way, that Oracle Kubernetes Engine recently upgraded to only support kubeconfig token version 2.0.0. See also this document.

getnodes.sh

This one is a bit dumb, and could as easily be created by an alias:
#!/bin/bash
SCRIPTPATH=$(dirname $0)
#
. $SCRIPTPATH/oke_env.sh
echo Get K8s nodes
kubectl get node

Even the call to oke_env.sh doesn't really add anything here, but it is a base for the other scripts, and it makes sense as soon as namespaces need to be added.

create_clr_rolebinding.sh

The last part of setting up the OKE cluster is to create a role binding. This is done with:
#!/bin/bash
SCRIPTPATH=$(dirname $0)
#
. $SCRIPTPATH/oke_env.sh
echo Create cluster role binding
echo kubectl create clusterrolebinding $CLR_ADM_BND --clusterrole=cluster-admin --user=$OCID_USER
kubectl create clusterrolebinding $CLR_ADM_BND --clusterrole=cluster-admin --user=$OCID_USER
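Notice the pattern of echoing the command before executing it, so the output shows exactly what was run with the variables expanded. That could be generalized into a tiny helper; this run function is my own sketch, not from the tutorial:

```shell
#!/bin/bash
# Hypothetical helper: print the command line (with variables expanded),
# then execute it.
run() {
  echo "$@"
  "$@"
}

run echo hello world   # prints the command line, then its output
```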

Install WebLogic Operator

The second part of the tutorial is about setting up your project environment with GitHub and having Oracle Pipelines build your project's image. This is not particularly related to K8s, so there are no relevant scripts there.
The next part of the tutorial is about installing the operator: 2. Install WebLogic Operator.

create_kubeaccount.sh

Installing the Weblogic Operator is done using Helm. As far as I understand, Helm is a sort of package manager for Kubernetes. A funny thing in the naming is that where Kubernetes is Greek for the steersman of a ship, a helm is the steering device of a ship. Helm makes use of Tiller, the server-side part of Helm. A tiller is the "steering stick" or lever that moves the helm. (To be honest, to me it feels a bit the other way around; I guess I would have named the server side Helm and the client Tiller.)

The first step is to create a Helm cluster-admin role binding, a Kubernetes namespace for the Weblogic Operator, and a service account within this namespace. To do so, the script create_kubeaccount.sh does the following:
#!/bin/bash
SCRIPTPATH=$(dirname $0)
#
. $SCRIPTPATH/oke_env.sh
echo Create helm-user-cluster-admin-role
cat << EOF | kubectl apply -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: helm-user-cluster-admin-role
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: default
  namespace: kube-system
EOF
echo Create namespace $K8S_NS
kubectl create namespace $K8S_NS
echo kubectl create serviceaccount -n $K8S_NS $K8S_SA
kubectl create serviceaccount -n $K8S_NS $K8S_SA

install_weblogic_operator.sh


Installing the Weblogic operator is done with this script. Notice that you need to execute the helm command within the folder in which you checked out the Weblogic Operator github repository.
#!/bin/bash
SCRIPTPATH=$(dirname $0)
#
. $SCRIPTPATH/oke_env.sh
echo Install Weblogic Operator $WL_OPERATOR_NAME
cd $HELM_CHARTS_HOME
helm install kubernetes/charts/weblogic-operator \
  --name $WL_OPERATOR_NAME \
  --namespace $K8S_NS \
  --set image=oracle/weblogic-kubernetes-operator:2.3.0 \
  --set serviceAccount=$K8S_SA \
  --set "domainNamespaces={}"
cd $SCRIPTPATH

The script cds to the local Weblogic Operator repository and executes helm. At the beginning of the script, the folder the script resides in is saved as SCRIPTPATH; after running the helm command, it does a cd back to that folder.
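As an aside, the same there-and-back pattern can be written with bash's pushd/popd builtins, which keep a directory stack for you and return you to wherever you actually were. A minimal sketch, where /tmp/charts-demo is just a stand-in for $HELM_CHARTS_HOME:

```shell
#!/bin/bash
# pushd/popd variant of the cd-there-and-back pattern.
CHARTS_DIR=/tmp/charts-demo        # stand-in for $HELM_CHARTS_HOME
mkdir -p "$CHARTS_DIR"
pushd "$CHARTS_DIR" > /dev/null    # remember the current folder, go to the charts
pwd                                # the helm command would run here
popd > /dev/null                   # return to where we started
```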

delete_weblogic_operator.sh

During my investigations the Weblogic Operator was upgraded. If you take a closer look at the command in the tutorial, you'll notice that the image used there is oracle/weblogic-kubernetes-operator:2.0, while I used oracle/weblogic-kubernetes-operator:2.3.0 in the script above.
I found it useful to be able to delete the operator so that I could re-install it again. To delete the Weblogic Operator, run the delete_weblogic_operator.sh script:

#!/bin/bash
SCRIPTPATH=$(dirname $0)
#
. $SCRIPTPATH/oke_env.sh
echo Delete Weblogic Operator $WL_OPERATOR_NAME
cd $HELM_CHARTS_HOME
helm del --purge $WL_OPERATOR_NAME 
cd $SCRIPTPATH

Again, in this script the helm command is surrounded by a cd to the helm charts folder of the local Weblogic Operator GitHub repository, and a cd back again afterwards.

getpods.sh

After having installed the Weblogic Operator, you can list the pods of the kubernetes namespace it runs in, using this script:
#!/bin/bash
SCRIPTPATH=$(dirname $0)
#
. $SCRIPTPATH/oke_env.sh
echo Get K8s pods for $K8S_NS
kubectl get po -n $K8S_NS

list_wlop.sh

You can check the Weblogic Operator installation by performing a helm list of the Weblogic Operator charts. I wrapped that into this script:
#!/bin/bash
SCRIPTPATH=$(dirname $0)
#
. $SCRIPTPATH/oke_env.sh
echo List Weblogic Operator $WL_OPERATOR_NAME
cd $HELM_CHARTS_HOME
helm list $WL_OPERATOR_NAME
cd $SCRIPTPATH

Conclusion

If you have followed the workshop, and maybe used my scripts, up till now you have installed the Weblogic Operator. Let's not make this article too long; let's call this part 1 and quickly move on to part 2, to install, configure and monitor the rest of the setup. Maybe in the end I'll move these contents to an easy-to-navigate set of articles.