
Tuesday, 16 July 2019

Weblogic under Kubernetes: the weblogic topology of the future

It is already four months ago that I attended the PaaSForum 2019 in Mallorca. As every year, it was great to meet members of the big EMEA Oracle Partner family.

And of course there were a lot of interesting talks and workshops. This year I was especially interested in the announcements around SOA Suite and Project Helidon as a microservice framework, but certainly also in WebLogic under Kubernetes.
And actually, to me, the Kubernetes WebLogic Operator was this year's most exciting subject.

With his WebLogic on Kubernetes talk, Maciej Gruszka, Director of Product Management, shed light on the future Oracle envisions for WebLogic. He started by stating that 'WebLogic is not dead!'. Well, he had me with that already!

The road ahead is making WebLogic fit to run in Docker, managed by Kubernetes. It might not be exactly what I had in mind, but it is certainly great news to learn that WebLogic will be around and alive for the future ahead. Oracle strives to make future releases of WebLogic available as Docker images.

Today already, WebLogic is fully supported to run in a Docker container. And according to Maciej, his team is working with the SOA and OSB teams to get those products fit for and available on Docker too. It might even be possible that future releases are going to be delivered as Docker images.

What is the WebLogic Operator?

To run in a Kubernetes managed cluster, Kubernetes needs to be able to perform lifecycle operations on a WebLogic Managed Server. For that, the WebLogic Operator for Kubernetes was created and introduced. A Kubernetes Operator is a sort of adapter on top of a non-Kubernetes system that translates Kubernetes lifecycle commands into operations within the specific application.

The WebLogic Operator uses the Kubernetes API to implement operations like:
  • Provisioning
  • Life cycle management
  • Updates
  • Scaling
  • Security
Besides the WebLogic Operator, Oracle also provides exporters for Prometheus and the Elastic Stack, for monitoring and logging. Since the managed servers run within containers, you'll need to export events and log files to keep them accessible and inspectable, even when the container is down or recreated from an updated image.
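
To give an impression of how lightweight this is in practice, below is a minimal sketch of installing the operator with Helm (Helm 3 syntax). The chart repository URL is the operator project's charts location; the namespace and release names are my own choices, so adapt them to your environment:
# Create a namespace for the operator (name is my own choice)
kubectl create namespace weblogic-operator-ns
# Add the operator's Helm chart repository and install the operator into that namespace
helm repo add weblogic-operator https://oracle.github.io/weblogic-kubernetes-operator/charts
helm install weblogic-operator weblogic-operator/weblogic-operator --namespace weblogic-operator-ns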

Topologies

There are actually two topologies to choose from:
  • Domain within the Docker Image
  • Domain on a Persistent Volume
With the first one the container is actually stateless: all it needs to know is within the container image. The Admin Console can be used for diagnostic and monitoring purposes, but not for updating the domain, because spinning up a new container will have it read the domain from the image again, discarding any runtime changes.

With the persistent volume topology the domain is stored outside the container, so changes are persisted. This topology is more in line with an on-premises installation of WebLogic. However, high availability and disaster recovery are limited, because the persistent volume needs to be shared and the domain configuration needs to be synced across datacenters. With 'in image' domains, things get simpler, because the domain is transported within the container image. The downside is that changes in the domain require creating a new image through the CI/CD pipeline.

Most customers seem to opt for the 'Domain in Image' topology. In practice, domains don't change that much.

You can adapt environment-specific artifacts like data source connections, URLs and usernames/passwords using Configuration Overrides.
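
As an impression of how that works with the operator, a minimal sketch: you put situational-configuration templates in a ConfigMap in the domain's namespace and point the domain resource at it. The file names, namespace and domain UID below are assumptions based on the tutorial's defaults, and the exact domain-resource field depends on the operator version, so check the operator documentation:
# Put the override template(s) in a config map in the domain's namespace
# (jdbc-testDS.xml is a hypothetical situational-config template for a data source;
#  version.txt holds the override format version required by the operator)
kubectl -n sample-domain1-ns create configmap jdbc-override-cm \
  --from-file=./overrides/jdbc-testDS.xml \
  --from-file=./overrides/version.txt
kubectl -n sample-domain1-ns label configmap jdbc-override-cm weblogic.domainUID=sample-domain1
# Then reference it from the domain resource, e.g. spec.configOverrides: jdbc-override-cm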

Workshop

At the PaaSForum we got the chance to play around with Kubernetes and WebLogic. The workshop is described here: https://github.com/nagypeter/weblogic-operator-tutorial. You should fork this to a repository under your own GitHub account, because it contains the files and scripts to create an image, and the tutorial walks you through configuring Oracle Container Pipelines (Wercker), which needs a GitHub repo.
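
For instance, after forking in the GitHub UI, getting a local working copy could look like this (YOUR_GITHUB_USER is a placeholder for your own account):
# Clone your fork of the tutorial repository
git clone https://github.com/YOUR_GITHUB_USER/weblogic-operator-tutorial.git
cd weblogic-operator-tutorial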

There is a Domain in Image variant and a Persistent Volume variant of the tutorial.

Steps to follow for the Domain In Image variant

  1. Set up an Oracle Kubernetes Engine instance on Oracle Cloud Infrastructure. You'll need a trial account on cloud.oracle.com. The tutorial then guides you through the setup of a Kubernetes cluster on OCI.
  2. Build the WebLogic container image using Oracle Container Pipelines (Wercker). The second time I did the workshop, I decided to change all the labels, namespaces and the domain name: everywhere there was a reference to 'sample', I entered 'makker'. In this step the image is created from your fork of the GitHub repo. If you change the name of the domain, there are two files to edit:
    1. The Dockerfile.create is called at the initial creation of the image; if there is already a base image, the Dockerfile.update is called to update it. The Dockerfile.create creates an image with a complete domain, including the application, while the Dockerfile.update only updates the application. So you need to update the Dockerfile.create to change the domain name in the DOMAIN_NAME environment variable at the top of the file.
    2. The Dockerfile.create copies the scripts folder into the image. That folder contains a WLST script called model.py. At the top, a variable domain_name is declared; assign it the same domain name.
    If you do not change these and later want to rename the domain so it starts under a different name using Kubernetes, you need to remove the image from the image repository and run the Oracle Container Pipelines pipeline again.
  3. Install WebLogic Operator: installs the WebLogic Operator in your cluster.
  4. Install and configure Traefik: this installs a Traefik load balancer in your environment. It will load-balance across your WebLogic managed servers.
  5. Deploy WebLogic domain: this step lets you prepare your Kubernetes cluster to run the Weblogic domain. Reuse the same domain name as explained in step 2.
  6. Scaling WebLogic cluster: this one I found particularly cool. In this step you update the domain resource yaml file to change the number of managed servers in the domain. After that, automagically a new Kubernetes pod is spawned that starts a new Managed Server; see the sketch right after this list. By the way, the domain has a dynamic cluster with predefined Managed Servers based on Server Templates.
  7. Override domain configuration: this shows you how to perform domain configuration overrides to update the data source.
  8. Update the application: the whole point of this exercise is to show how to set up a CI/CD chain, so that when you update your application, the image is updated and the domain can be restarted through Kubernetes with the new image.
  9. Assign the WebLogic pods to specific nodes or licensed nodes. The latter is important because WebLogic is licensed, so you can't just run it on any number of nodes.
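
To give an impression of step 6, a minimal sketch of scaling the cluster by patching the domain resource. The domain name and namespace are the tutorial's 'sample' defaults; adjust them if, like me, you chose your own naming:
# Raise the number of running managed servers to 3 by patching the domain resource
kubectl patch domain sample-domain1 -n sample-domain1-ns \
  --type=json -p='[{"op":"replace","path":"/spec/clusters/0/replicas","value":3}]'
# Watch the operator spawn a new pod for the extra managed server
kubectl get pods -n sample-domain1-ns -w
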
The tutorial is quite elaborate and descriptive. If you stick to the given naming, it will guide you through the process, ending up with a running environment. The fun is in going your own way and choosing your own naming. That's how I tripped at step 5, Deploy WebLogic Domain: I could have stuck with the given name, but I didn't feel like it; it was more fun to find out where it was used. Now you can take advantage of that.

Conclusion

I refrained from discussing why you would want to run WebLogic under Docker. I have thoughts about it and have had discussions about it. However, it makes me enthusiastic that this way WebLogic can be taken with us into the containerized future.

For me the next things to explore are:
  • Create a database on another OCI instance, and create a new domain with a sample application that actually uses that database. It would be fun to have an actual application running on it.
  • Try the same with a persistent volume. A few months ago I was busy creating Java classes to start Kafka. The goal was to create WebLogic startup classes to have Kafka started at startup of a WebLogic server. Now, it may not seem logical to you, but wouldn't it be great to combine the two and have Kafka embedded in a WebLogic cluster on a Kubernetes cluster? Well, at least it seems fun to me. Since Kafka needs to store its messages in a persistent log, we need to do this with a persistent volume.
  • Check out other topologies and related technologies, like accessing the logs. I really would like to be able to inspect the WebLogic log files within the container.
Have fun with the tutorial.

Monday, 3 September 2018

Docker on Oracle Linux

It occurred to me that if you want to start using Docker, there are plenty of examples that use Ubuntu as a base platform. I read a book called Learning Docker that assumes Ubuntu for the examples, for instance. I know I am quite stubborn, a "know-it-better" person, but I want to be able to do the same on Oracle Linux.

Docker on Oracle Linux turns out not to be too complicated. But I ran into a caveat that I solved and want to share.

I use Vagrant to bring up an Oracle Linux box, based on a Vagrantfile that prepares the box. With that as a starting point, I created a script that does the complete Docker installation. In the following I'll build it up for you, step by step. I'll add my project to GitHub and provide a link to the complete script at the end.
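
To sketch the usage (the script location and name below are hypothetical placeholders for my project layout):
# Bring up the Oracle Linux box defined by the Vagrantfile
vagrant up
# Run the docker installation script inside the box
vagrant ssh -c "/vagrant/scripts/installDocker.sh"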

Init

First some initialization and a function to read the property file:
#!/bin/bash
SCRIPTPATH=$(dirname "$0")
#
# Install docker on Oracle Linux.
# @author: Martien van den Akker, Darwin-IT Professionals.
#
# Return the value of the property named in $1 from the property file.
function prop {
    grep "^${1}=" "$SCRIPTPATH/makeDockerUser.properties"|cut -d'=' -f2-
}

#
DOCKER_USER=$(prop 'docker.user')
DOCKER_GROUP=docker

This sets the $SCRIPTPATH variable to the folder where the script resides, so I can refer to other files relative to it.
The function prop allows me to get a property from a property file; a smart function that I got from a colleague (thanks Rob). It is based on this property file, called makeDockerUser.properties:
docker.user=oracle
docker.password=welcome1

With those I set the DOCKER_USER and DOCKER_GROUP variables.
The DOCKER_GROUP variable is hardcoded, however; docker is the standard group that is created at the installation of Docker to allow other users to use Docker.
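
For illustration, the password could be fetched the same way; DOCKER_PASSWORD is a hypothetical variable that the rest of this script does not actually use:
# Example: read the docker.password property from the same file
DOCKER_PASSWORD=$(prop 'docker.password')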

Install Docker Engine

The first actual step is to install the Docker engine. Now, you can go for the community edition, and I've seen examples that pull docker-ce (Docker Community Edition) for you. However, one of the reasons I stubbornly stick with Oracle Linux (as you know, a Red Hat derivative) is that Oracle Linux is the flavor used by most of my customers. And if not, it is Red Hat. And then I just want to rely on the standard repositories.

To install the docker engine, I have to add the ol7_addons and ol7_optional_latest repositories. In my OL prepare script, I had already added the ol7_developer_EPEL repository. Then the docker-engine package can simply be installed with yum:

#
echo 1. Install Docker Engine
echo . add ol7_addons and ol7_optional_latest repos.
sudo yum-config-manager --enable ol7_addons
sudo yum-config-manager --enable ol7_optional_latest
#
echo . install docker-engine
sudo yum install -q -y docker-engine

Install Curl

For most docker related actions, it is convenient to have curl installed as well:
#
echo 2. Install curl
sudo yum install -q -y  curl 

Add docker group to docker user

After the docker installation, we need to add the docker group to the docker user (in my case the sort-of default oracle user):
#
echo 3. Add  ${DOCKER_GROUP} group to ${DOCKER_USER}
sudo usermod -aG ${DOCKER_GROUP} ${DOCKER_USER}

This allows the ${DOCKER_USER} (set in the initialization phase) to use the docker command. Note that an already-running session of that user won't pick up the new group; it takes effect on the next login.
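
A quick sanity check of the membership (oracle being the docker.user from my property file):
# The docker group should appear in the user's group list
id oracle | grep docker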

Check the docker install

Now let's add a check if docker works:
#
echo 4. Check Docker install
docker --version
sudo systemctl start docker
sudo systemctl status docker
This lists the version of the installed docker, then starts the docker service and shows its status.

Change the location of the docker containers

When creating a docker container or image (I'll leave the difference for now), these are saved by default in /var/lib/docker. The thing is that this is on the root disk of your installation, and it can grow quite big. For installations of Oracle software, for instance, I create an extra disk that I mount on /app. It would be better to have a /data mount point as well, but for now I stick with the /app disk. So, I want docker to place my images on the secondary disk. One solution, used by Tim Hall (see https://oracle-base.com/articles/linux/docker-install-docker-on-oracle-linux-ol7), is to create a second disk, format it with BTRFS, and mount it simply on /var/lib/docker.
I'd rather reconfigure docker to use another disk. This approach is taken from https://sanenthusiast.com/change-default-image-container-location-docker/.

To implement this, we first need to know which storage driver Docker uses. We get this from the command docker info, as follows:
echo 5. Change docker default folder
# According to oracle-base you should create a filesystem, preferably using BTRFS, for the container-home. https://oracle-base.com/articles/linux/docker-install-docker-on-oracle-linux-ol7. 
# But let's stick with ext4.
## Adapted from  https://sanenthusiast.com/change-default-image-container-location-docker/
echo 5.1. Find Storage Driver
GREP_STRG_DRVR=$(sudo docker info |grep "Storage Driver")
DOCKER_STORAGE_DRVR=${GREP_STRG_DRVR#*": "}
echo "Storage Driver: ${DOCKER_STORAGE_DRVR}"

This gives me the overlay2 driver. Then we need to stop docker:
echo 5.2. Stop docker
sudo systemctl stop docker

And then create the folder where we want to store the images:
echo 5.3. Add reference to data folders for storage.
DOCKER_DATA_HOME=/app/docker/data
echo mkdir -p ${DOCKER_DATA_HOME}
# -p also creates the intermediate /app/docker folder if it does not exist yet
sudo mkdir -p ${DOCKER_DATA_HOME}

Now, I found a bit of a problem with my solution here. When I reconfigured docker to use my custom folder, it turned out that on my system the filesystem was not writable from the docker image. If you want to install software in your image, it of course wants to write files, and this is prevented. After quite some searching, I came across a question on Stack Overflow (linked in the snippet below). It turns out that SELinux enforces a policy that prevents docker from writing to a custom device. This can simply be circumvented by disabling the enforcement:
#
##https://stackoverflow.com/questions/30091681/why-does-docker-prompt-permission-denied-when-backing-up-the-data-volume
echo disable selinux enforcing
sudo setenforce 0

This disables, as said, the enforcement of SELinux. I would say this should be done in a more nuanced way, but I don't have that at hand. This, however, solved my problem.
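A possibly more nuanced alternative, which I have not tested in this setup, would be to relabel the custom location with the same SELinux context as /var/lib/docker, instead of disabling enforcement altogether:
# Untested alternative: give the custom folder the same SELinux labeling as /var/lib/docker
sudo yum install -q -y policycoreutils-python
sudo semanage fcontext -a -e /var/lib/docker /app/docker/data
sudo restorecon -R -v /app/docker/data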
Now all that is left is to configure docker to use the custom folder. Docker is started using a script. In Oracle Linux this is quite conveniently set up: in the folder /etc/sysconfig you find a few config scripts, amongst others one called docker-storage. This is a proper place to add options. When you set the DOCKER_STORAGE_OPTIONS variable, it is added to the command line. So we simply need to add the line:
DOCKER_STORAGE_OPTIONS = --graph="/app/docker/data" --storage-driver=overlay2

to the file /etc/sysconfig/docker-storage. This can be done with the following snippet:
#
DOCKER_STORAGE_CFG=/etc/sysconfig/docker-storage
sudo sh -c "echo 'DOCKER_STORAGE_OPTIONS = --graph=\"${DOCKER_DATA_HOME}\" --storage-driver=${DOCKER_STORAGE_DRVR}' >> ${DOCKER_STORAGE_CFG}"

And then finish up with starting docker service again:
#
echo 5.4 Reload daemon
sudo systemctl daemon-reload 
echo 5.5 Start docker again
sudo systemctl start docker
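
After the restart you can verify that docker indeed uses the custom location; 'Docker Root Dir' is the field in the docker info output on my version, it may be labeled differently on others:
# The root dir should now point to /app/docker/data
sudo docker info | grep -i "root dir"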

Conclusion

Last week I was at the #PaaSSummerCamp in Lisbon. I did some labs with my docker installation, which resulted in the permission problem. As mentioned, I resolved that, and I could then run the labs successfully with docker containers from Oracle. So I conclude that this script should suffice. You can download the complete script at my GitHub vagrant repo.