Tuesday, 16 July 2019

WebLogic under Kubernetes: the WebLogic topology of the future

It is already four months ago that I attended PaaSForum 2019 in Mallorca. As every year, it was great to meet members of the big EMEA Oracle Partner family.

And of course there were a lot of interesting talks and workshops. This year I was especially interested in the announcements around SOA Suite and Project Helidon as a microservice framework, but certainly also in WebLogic under Kubernetes.
Actually, the WebLogic Kubernetes Operator was, to me, this year's most enthusing subject.

With his WebLogic on Kubernetes talk, Maciej Gruszka, Director of Product Management, enlightened us about the future Oracle envisions for WebLogic. He started by stating that 'WebLogic is not dead!'. Well, he had me with that already!

The road ahead is making WebLogic fit to run in Docker, managed by Kubernetes. It might not be exactly what I had in mind, but it is certainly great news to learn that WebLogic will be around and alive for the future ahead. Oracle strives to make future releases of WebLogic available as Docker images.

Today, WebLogic is already fully supported to run in a Docker container. And according to Maciej, his team is working with the SOA and OSB teams to get those products fit for and available on Docker too. It might even be possible that future releases are going to be delivered as Docker images.

What is the WebLogic Operator?

To run in a Kubernetes-managed cluster, Kubernetes needs to be able to perform lifecycle operations on a WebLogic Managed Server. For that, the WebLogic Operator for Kubernetes was created and introduced. A Kubernetes Operator is a sort of adapter on top of a non-Kubernetes system that translates Kubernetes lifecycle commands into operations within the specific application.

The WebLogic Operator uses the Kubernetes API to implement operations like:
  • Provisioning
  • Life cycle management
  • Updates
  • Scaling
  • Security
Besides the WebLogic Operator, Oracle also provides an exporter for Prometheus and the Elastic Stack, for monitoring and logging. Since the managed servers run within containers, you'll need to export events and log files to keep them accessible and introspectable, even when a container is down or recreated from an updated image.
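
To give an idea of how this looks in practice: the operator watches a custom 'Domain' resource that declaratively describes the WebLogic domain, and reconciles the running pods with it. Below is a minimal sketch of such a resource; the field names follow the 2.x operator releases current at the time of writing, and all names and values are illustrative, not taken from a real setup:

# Hedged sketch of a WebLogic Operator Domain resource (2.x-style fields)
apiVersion: weblogic.oracle/v2
kind: Domain
metadata:
  name: sample-domain1
  namespace: sample-domain1-ns
spec:
  domainHome: /u01/oracle/user_projects/domains/sample-domain1
  domainHomeInImage: true                     # 'Domain in Image' topology, see below
  image: "your-registry/weblogic-domain:1.0"  # illustrative image name
  webLogicCredentialsSecret:
    name: sample-domain1-weblogic-credentials
  clusters:
  - clusterName: cluster-1
    replicas: 2                               # number of running Managed Servers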


Topologies

There are actually two topologies to choose from:
  • Domain within the Docker Image
  • Domain on a Persistent Volume
With the first one, the container is actually stateless: all it needs to know is within the container. The Admin Console can be used for diagnostic and monitoring purposes, but not for updating the domain, because spinning up a new container will have it read the domain from the internal container image again.

With the Persistent Volume topology, the domain is stored outside the container and changes are persisted. This topology is more in line with an on-premises installation of WebLogic. However, High Availability and Disaster Recovery are limited, because the Persistent Volume needs to be shared and the domain configuration needs to be synced across data centers. With 'In Image' domains, things get simpler, because the domain is transported within the container. The downside is that changes in the domain require creating a new image through the CI/CD pipeline.

Most customers seem to choose the 'Domain in Image' topology. In practice, domains don't change that much.

You can still adapt environment-specific artifacts like data source connections, URLs and usernames/passwords using Configuration Overrides.
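
In operator terms this comes down to the following (again a hedged sketch with 2.x-era field names; later operator versions reorganized these settings): the overrides are situational-configuration templates stored in a ConfigMap, referenced from the domain resource together with the secrets that hold the new values:

# Fragment of the domain resource wiring up configuration overrides (sketch)
spec:
  configOverrides: mydomain-override-cm   # ConfigMap with the override templates
  configOverrideSecrets:                  # secrets the templates refer to
  - mydomain-db-credentials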

Workshop

At the PaaSForum we got the chance to play around with Kubernetes and WebLogic. The workshop is described here: https://github.com/nagypeter/weblogic-operator-tutorial. You should fork this into a repository under your own GitHub account, because it contains the files and scripts to create an image; the tutorial walks you through configuring Oracle Container Pipelines (Wercker), and for that it needs a GitHub repo.

There is a Domain In Image variant and a Persistent Volume variant of the tutorial.

Steps to follow for the Domain In Image variant

  1. Setup an Oracle Kubernetes Engine instance on Oracle Cloud Infrastructure. You'll need a trial account on cloud.oracle.com. The tutorial then guides you through the setup of a Kubernetes cluster on OCI.
  2. Build the WebLogic container image using Oracle Container Pipelines (Wercker). The second time I did the workshop, I decided to change all the labels, namespaces and the domain name: everywhere there was a reference to 'sample', I entered 'makker'. In this step the image is created from your fork of the GitHub repo. If you change the name of the domain, there are two files to edit:
    1. The Dockerfile.create is called at the initial creation of the image; if there is already a base image, Dockerfile.update is called to update it. Dockerfile.create builds an image with a complete domain, including the application, while Dockerfile.update only updates the application. So, to change the domain name, you need to update the DOMAIN_NAME environment variable at the top of Dockerfile.create.
    2. Dockerfile.create also copies the scripts folder into the image. That folder contains a WLST script called model.py. At the top, a variable domain_name is declared, which must be assigned that same domain name (see the model.py sketch below this step list).
    If you do not change it, and later want to rename the domain so that it starts under a different name in Kubernetes, you need to remove the image from the image repository and run the Oracle Container Pipelines pipeline again.
  3. Install WebLogic Operator: this installs the WebLogic Operator in your Kubernetes cluster.
  4. Install and configure Traefik: this installs a Traefik load balancer in your environment. It will load-balance over your WebLogic Managed Servers.
  5. Deploy WebLogic domain: this step lets you prepare your Kubernetes cluster to run the WebLogic domain. Reuse the same domain name as explained in step 2.
  6. Scaling WebLogic cluster: this one I found particularly cool. In this step you update the domain resource yaml file to change the number of Managed Servers in the domain. After that, automagically a new Kubernetes pod is spawned that starts a new Managed Server (see the yaml fragment below this step list). By the way, the domain has a dynamic cluster with predefined Managed Servers based on Server Templates.
  7. Override domain configuration: this shows you how to perform domain configuration overrides to update the data source.
  8. Update the application: the whole point of this exercise is to show how to set up a CI/CD chain: when you update your application, the image is updated, and the domain can be restarted through Kubernetes with the new image.
  9. Assign the WebLogic pods to specific nodes or licensed nodes. The latter is important because WebLogic is licensed, so you can't just run it on any number of nodes.
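
To illustrate the renaming in step 2, here is a hypothetical sketch of the top of scripts/model.py and its link to Dockerfile.create (the actual tutorial files may differ in detail):

# model.py -- WLST offline script that creates the domain inside the image.
# The domain_name here must match the DOMAIN_NAME environment variable set
# at the top of Dockerfile.create.
domain_name = 'makker'   # changed from 'sample'

# Further down, the script uses the variable when writing the domain,
# along these lines (WLST offline calls, shown as comments):
#   readTemplate('/u01/oracle/wlserver/common/templates/wls/wls.jar')
#   set('Name', domain_name)
#   ...
#   writeDomain('/u01/oracle/user_projects/domains/' + domain_name)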
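
And for step 6: scaling comes down to editing the cluster's replica count in the domain resource yaml file and re-applying it, after which the operator starts or stops pods accordingly. A sketch (2.x-era field names, illustrative values):

# Fragment of the domain resource; raising replicas makes the operator
# spin up an extra pod running an extra Managed Server.
spec:
  clusters:
  - clusterName: cluster-1
    replicas: 3   # was 2; re-apply with 'kubectl apply -f domain.yaml'
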
The tutorial is quite elaborate and descriptive. If you stick to the naming, it will guide you through the process, ending up with a running environment. The fun is in going your own way and choosing your own naming. That's how I tripped up at step 5, Deploy WebLogic Domain. I could have stuck with the given name, but I didn't feel like it; it was more fun to find out where the name is used. Now you can take advantage of that.

Conclusion

I refrained from discussing why you would want to run WebLogic under Docker. I have thoughts about it, and have had discussions about it. However, it made me enthusiastic that this way WebLogic can be taken with us into the containerized future.

For me the next things to explore are:
  • Create a database on another OCI instance, and create a new domain with a sample application that actually uses that database. It would be fun to build an actual application on it.
  • Try the same with a Persistent Volume. A few months ago I was busy creating Java classes to start Kafka. The goal was to create WebLogic startup classes to have Kafka started at the startup of a WebLogic server. Now, it may not seem logical to you, but wouldn't it be great to combine the two and have Kafka embedded in a WebLogic cluster on a Kubernetes cluster? Well, at least it seems fun to me. Since Kafka needs to write its messages to a persistent log, we would need to do this with a Persistent Volume.
  • Check out other topologies and related technologies, like accessing the logs. I would really like to be able to inspect the WebLogic log files within the container.
Have fun with the tutorial.

Monday, 24 June 2019

Debugging code - Identifying the bug



Julia Evans is a very smart woman in IT who creates very nice, funny and insightful comics on Linux and coding topics, which she calls 'zines'.

This morning I read that she had come up with a question that triggered me:


Last week I realized I've already been in IT for 25 years, ever since my compulsory membership of a famous Dutch 'shooting club' ended after 9 months (the kind of shooting club where you got free clothing, survival courses, and in my case also a truck driving license. Which other club offers that?).

Anyway, over the years I discovered that, despite all the smart people I got to know and work with, this part of our work isn't obvious. In very many cases people seem to 'just do something'. No offense, but for developers it's often frustrating and just not fun to work on bugs or problems. And administrators that are confronted with a problem are often 'too busy with other stuff'. So they try something, don't find the cause, and at a later moment try something else. So, when I get involved, I ask the obvious questions, and in most cases I try out the same thing myself. Even though I do believe them, I want, I need, to see the behavior with my own eyes.

By the way, I'm always reluctant to call it a bug. A bug is only a bug when you have reproduced it and when, based on common interpretation, you come to a consensus with the tester (if he found the issue) and the functional/solution designer that the code does not do what it is supposed to do. The functional specs are interpreted by both the tester and the developer, and in a certain way also by the designer. It might be that the tester finds an anomaly, but that it is either a misinterpretation on his part or a problem with the formulation of the specs. There are cases where the coder is right. But, of course, your program can also work with an unexpected logic.


But, back to Julia's tweets: they triggered me, so I jotted down some thoughts that came to me and that form the basis of my search for issues.

To me it starts with identifying a case where it goes wrong, but, equally importantly, also a similar situation where it goes right. And, as far as possible, creating a unit test for both. Since my work is mostly done on message-processing platforms (Oracle SOA Suite, BPM Suite, Service Bus), I love it when a tester can hand me a triggering message for the case, together with the response messages involved. I can then add them to my unit test set in SoapUI/ReadyAPI.
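
As a minimal illustration of that pairing, outside ReadyAPI (a made-up Python sketch, not from an actual case; process() and its defect are hypothetical):

import unittest

def process(message):
    # Stand-in for the real message-processing logic under investigation;
    # it has a deliberate defect: it breaks on a None message.
    return message.strip().upper()

class MessageProcessingTest(unittest.TestCase):
    def test_regular_message_goes_right(self):
        # The known-good case: the baseline that must keep working.
        self.assertEqual(process(" ok "), "OK")

    def test_reported_message_goes_wrong(self):
        # The case the tester handed over; this one fails until the
        # defect is solved (process(None) raises an AttributeError).
        self.assertEqual(process(None), "")

if __name__ == "__main__":
    unittest.main()

Having both tests side by side means you immediately see when the fix works, and that it doesn't break the good case.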
 

Then I add instrumentation (log lines, etc.) at key positions, to identify up to which point the code is executed and which lines aren't reached. SOA Suite produces a flow trace of the execution, but often expressions are used that are quite complex one-liners. I then split those up into several separate assignments to 'in between' variables. In Java, JavaScript, etc., I do not like complex one-liners either: I prefer several variables for 'in between' values, and assignments with short expressions. That helps with line-by-line debugging.
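
To illustrate (again a made-up Python example, not from an actual case): the same calculation as one opaque one-liner and as small, instrumented steps:

import logging

logging.basicConfig(level=logging.DEBUG)
log = logging.getLogger("debug-demo")

lines = [(2, 10.0, "OPEN"), (1, 99.0, "CANCELLED")]  # (qty, price, status)
discount = 0.1

# Hard to debug: if the outcome is off, which part went wrong?
total = sum(qty * price for qty, price, status in lines if status == "OPEN") * (1 - discount)

# Easier: 'in between' variables plus log lines pinpoint the failing step.
open_lines = [(qty, price) for qty, price, status in lines if status == "OPEN"]
log.debug("open lines: %d", len(open_lines))

gross = sum(qty * price for qty, price in open_lines)
log.debug("gross amount: %s", gross)

total = gross * (1 - discount)
log.debug("total after discount %s: %s", discount, total)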

Next, I iteratively narrow the gap between the last point I can prove the code reaches and the first point I find not reached, until the statement or point of execution that fails can be identified. In the log lines I include the values of the key variables involved.


In very rare, very difficult cases, I sometimes break down the code, cutting away all the code that is not touched, until I get a minimal working Mickey Mouse (or in Dutch: 'Jip en Janneke') case. From there I build it up again, testing iteratively in very small steps, until it breaks.

Also very important, for difficult problems: I document very meticulously what I have done and concluded. My slogan here is: 'Deduction, my dear Watson!' When you have a problem, you can quickly come up with some potential causes and tests to check them. A unit test for a potential cause can go two ways: it can confirm or disprove the suspicion. Both outcomes have consequences for the follow-up. Disproving a potential cause can strike out other potential paths as well.

But confirming it requires additional steps to narrow things down. I see it as a decision tree to follow.

What I have found over the years is that structurally documenting the steps taken, with the corresponding conclusions and follow-ups, is not at all obvious. But in many cases I found it important, especially when working in a task force, or when I was hired specifically to get involved in a case. In those cases the customer that hired you has the right to get something in hand that represents what he paid for.
I was once involved in a case that turned out to be a database bug, so I could not help the customer solve it. But they were very pleased with the structured method I used to investigate what the problem could be. And for those administrators and developers that have to do this as a side job, besides their regular work: please do yourself a favor and document. I found Google Docs very useful for this.


Oh, and by the way: I work with BPEL, BPMN, Oracle Service Bus, Java, PL/SQL, XSLT, XQuery, Python/Jython/WLST, sometimes JavaScript, you name it. And actually, my way of structured code or system analysis comes down to the same procedure, regardless of the technology.



Thursday, 6 June 2019

WebLogic 12.2.1.3 signs SAML2 requests and responses with SHA-256

Today I reviewed a few responses to a 'What's new in WebLogic 12.2.1.3' question.
One of the responses mentioned the what's-new document.

Now, I'm not used to studying these documents. But today I browsed through it, and one thing caught my eye.

I did some implementations of WebLogic as a SAML2 Service Provider against MS ADFS. I'm even invited to give the talk 'SSO with ADFS for Apex Using Weblogic and ORDS: How I did it and Where I Tripped' at the UKOUG Southern Technology Summit 2019, on July 2nd.

What's interesting here is that about two years ago I already wrote about my earlier experiences, and mentioned that WebLogic 12c did not support SHA-256 for signing SAML requests, so you had to configure ADFS to use SHA-1. In my latest implementation it struck me that I did not have to ask my ADFS counterpart to set that; at least I think I didn't. ADFS, as you might expect, has used SHA-2 (SHA-256) as a default for quite some time now. But only today I saw, under Manageability Improvements -> Security, that WebLogic 12.2.1.3 now also has SHA-2 as its default.

Knowing this will improve my talk greatly. I'm glad I saw this. It might seem to be a minor thing, but I think it's quite important.

I use WebLogic mostly as FMW infrastructure for SOA Suite, OSB, etc., and occasionally I do assignments with specifics like SAML2. If you're interested in what changed in a specific WebLogic version, I think it's important to know what you're looking for: know the functionality that you're actively using or interested in.

Monday, 29 April 2019

Oracle Java Support: why should I pay for something that used to be free?


A few weeks ago, I discussed the new licensing model of Oracle Java with a colleague.

Customers may have concerns about this, since until now a customer was used to being entitled to download Java updates for free. At least I was.

During the discussion I posed a way of thinking that made sense to me, and that seems to be supported by document references.

For some time now you can download OpenJDK, which is an open-source reference implementation based on the Oracle JDK, as I understand it. It is stated to be production ready, although this story may be a bit more nuanced than I put it here. In the past it was considered inferior to the Oracle JDK, while the Oracle JDK was also free. With the new release cycles introduced with Java 9, Oracle committed to making OpenJDK as indistinguishable from the Oracle JDK as possible, so functional and security features are up to the level of the Oracle JDK.

In short, if you don't want to pay for support, you can go and use OpenJDK. Or stay on your current version.

But since Oracle is a sales-based organization, I'm not surprised that they want to be paid for delivering (long-term) support on Java. Especially when more and more software from other vendors is based on Java, and when competing cloud platforms rely on it.

If you want to have support for Java, you should have a Support contract.

I already mentioned it above, but what also changed with Java 9 is the release cycle. Until Java 8, Oracle supported the JDK for a very, very long time. The globally, publicly available major Java versions were released a few years apart: Java 7 was around for about 4 years before Java 8 was released, and Java 8 had been around for 5 before Java 9 saw the light.

To get more in pace with the developments in the market, Oracle decided to move to half-yearly release cycles, starting with Java 9 in 2017. Now a new Java version with new features is released every 6 months. Features that do not make the cut are postponed to the next release, when they are ready; but the major Java version itself is released on schedule. With that, the support of a version changed as well: support only lasts for the lifespan of the release, which is 6 months. To keep up with security fixes and features, you need to move on to the next major version to stay supported. Currently we're at Java 12, from March '19.

If you can't keep up with that, Oracle provides a Long Term Support version, which is supported for a time frame comparable to those of Java 6, 7 and 8. One of those half-yearly releases is denoted LTS; currently that is Java 11. It's most comparable with, for instance, Red Hat Linux: Fedora is the open, publicly available version (like OpenJDK), and Red Hat Enterprise Linux is the LTS version.


Now, what if you have licenses for Oracle products that rely on Java? Fusion Middleware, for instance, is only supported on Oracle Java, currently Oracle Java 8. You may have licenses for WebLogic, Coherence, Forms & Reports, etc. In those cases you have a restricted license for Oracle Java. Much like when you have E-Business Suite, Siebel or any other Oracle Enterprise Information System that uses the database: you may use the database as long as you use it to support that setup. You cannot run custom code in it, do reporting on it or use the database in any other way.

The same goes for Java. If you run WebLogic, or have an application that uses Coherence, etc., you're entitled to download the updates for Java. See for instance this document about the Restricted Oracle Java SE License in combination with WebLogic, or 'Support Entitlement for Java SE When Used As Part of Another Oracle Product'. Also interesting: you can file support requests against that Oracle product, but not directly against Java SE, unless you have Java support.

And products like SQL Developer, SQLcl and ORDS are supported through the database license, and they use Java as well. So, having a database license, you have support for SQL Developer and for the Oracle Java used by SQL Developer.

Notice that if you have a WebLogic license, but also have a custom Java application that does not run in a WebLogic instance, you are not allowed to use the same JDK updates for it! Even if that application uses HTTP to communicate with a WebLogic server, for instance to call a REST or SOAP service, you're not allowed to download updates for that Java Home.

Also, if you have a custom Java application that uses JDBC drivers to connect to a licensed Oracle database, you're not allowed to download the Java updates: Oracle states that the JDBC drivers do not use an Oracle product-specific protocol.

A little while ago I noticed that JavaDB is no longer delivered with the Oracle JDK. I suppose this is related to the changed licensing of Java.

I hope this little article makes sense to you and helps you understand the licensing model.

To sum up, these are the options you have:
  1. Stay on the version you currently use, without changes. If you can live or cope with being behind on security updates, this can be an acceptable choice.
  2. Keep up with the 6-monthly major version update pace, using OpenJDK. You keep up to date with the major versions and stay secure.
  3. Stay on an LTS release and move to the next LTS at your own pace (but only for the Oracle JDK).
This article had been simmering for a few weeks, since I was busy with other stuff and had received some review tips. But today I saw an article by Jeff Smith on the Oracle JDK with SQL Developer, and that triggered me to finish this article right away.

I did my best to blend my thoughts with the review tips and the notes from support. I put down what I think and learned in my own words, but I might have rephrased things a bit incorrectly. Check out these more formal articles and statements:

Wednesday, 24 April 2019

Test Remote Asynchronous Request Response services

A few years ago, I described how you can test Asynchronous Request Response services.

The thing with asynchronous request-response services is, as I used to describe it, that they're in essence two complementary request-only (fire-and-forget) services. That is, the client submits a request to the asynchronous request-response service, and at a certain point waits for the response by listening on an endpoint of its own.

To make this work, the responding asynchronous request-response service needs to be told which endpoint it should call with the response and which correlation id should be used. The WS-Addressing standard is used for that. This is all nicely explained in the before-mentioned article.

In most customer cases the problem is that your client SoapUI or ReadyAPI project should catch the response, but the service is running on a SOA Suite in the data center and is not allowed to reach your local machine.

MobaXterm makes it very easy to create a tunnel, in both directions. A local tunnel opens a local listening endpoint and forwards every request to a remote service. That is very handy if you have a Vagrant project with only a NAT network adapter, where Vagrant exposes an ssh endpoint on port 2222: you can easily create a local tunnel on port 7101, for instance, through the ssh session on port 2222, which enables you to get to the WebLogic console on the remote VM via http://darlin-vce:7101/console (comparable to plain OpenSSH local forwarding, something like: ssh -p 2222 -L 7101:darlin-vce:7101 <user>@localhost).

To create a tunnel, just open the MobaSSHTunnel - Graphical port forwarding tool:
This will open:

You can create a new SSH tunnel, or edit an existing one using the cogs icon under Settings. For instance, to be able to do the local port forwarding to get to the WebLogic console on your Vagrant box, create a tunnel as follows:



On the left you can enter a local port: the port you will use on your localhost. On the top right you enter a host and port as the address to forward your requests to (this does not need to be localhost). At the bottom right you need to provide an ssh session. A bit inconvenient is that you can't select a session from the sessions pane: you provide a host, port and user to connect to your ssh server.

What happens is that MobaXterm creates an SSH session and a local endpoint. Everything posted to the local endpoint is posted on the remote server to the given address. In this case I can open my browser, enter http://localhost:7101/console, and it will bring me to the WebLogic Console on my Vagrant box. Neat, isn't it?



To get the remote async service to respond to your local machine, we need a tunnel that works the other way around: remote port forwarding:

Configuring it is similar to local port forwarding. However, now a listening endpoint is created on the remote server, and everything that is posted to the localhost:7777 address there (in this example) is forwarded to the address entered for the local side (plain OpenSSH does the same with something like: ssh -R 7777:localhost:7777 <user>@<remote-host>). In this case it is forwarded to localhost:7777 on my own machine, but it could be something else.

In our ReadyAPI project I created a Groovy script as follows:
def testCase = testRunner.testCase
// Name of the active ReadyAPI environment for this run
def env = testCase.testSuite.project.activeEnvironment.name
if (env != "o02-12c" && env != "o02") {
  // Not one of our development environments: use the actual local IP address,
  // so the service (or the CI/CD server) can reach us directly.
  log.info "Environment: " + env + ", so set callbackIp to " + InetAddress.localHost.hostAddress
  testRunner.testCase.setPropertyValue("callbackIp", InetAddress.localHost.hostAddress)
} else {
  // Development environment: the remote tunnel listens on localhost there.
  log.info "Environment: " + env + ", so set callbackIp to localhost"
  testRunner.testCase.setPropertyValue("callbackIp", "localhost")
}

In ReadyAPI you can define environments; the active one can be queried with the project property activeEnvironment.name.

If the environment points to one of our development environments, I set the callbackIp test case property to "localhost". But for the default environment, I use InetAddress.localHost.hostAddress to get the local IP address. This will be the IP address of our CI/CD tool, which runs ReadyAPI from a script.

You can set the WS-Addressing ReplyTo address as follows, for instance:
<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/"
                  xmlns:add="http://schemas.xmlsoap.org/ws/2003/03/addressing">
   <soapenv:Header>
      <add:ReplyTo>
         <add:Address>http://${#TestCase#callbackIp}:7777/MyMockResponseURI</add:Address>
      </add:ReplyTo>
   </soapenv:Header>
   <soapenv:Body>
      <!-- the actual request payload goes here -->
   </soapenv:Body>
</soapenv:Envelope>
Then this address is used to do the callback. Make sure the tunnel is started:

You can also have the tunnel auto-start (with the blue man-running icon) or auto-reconnect (with the purple lightning icon).

This may also be very relevant when testing services on Oracle SOA Cloud Service or Integration Cloud.

Happy tunneling!