Tuesday, 30 June 2020

A little bit of insight in SOA Suite future


A few weeks ago I was made aware of a few announcements that I think make sense and that I want to pass on to my followers, seasoned with a bit of my own perspective.

Containerized SOA

Last year I made myself familiar with the Oracle Weblogic Kubernetes Operator. See for instance my Cheat Sheet series. I also had the honour to talk about it during the Tech Summit at OUK in December '19. Running Weblogic under Kubernetes is apparently the way forward for Weblogic, and with that for the Fusion Middleware stack. However, until now only 'plain' Weblogic is supported under Kubernetes, on all cloud platforms as well as on your own on-premises Kubernetes platform.

It was no surprise that SOA Suite would follow, and in March an early access program for SOA Suite on Kubernetes was announced.

In the announcement it is stated that Oracle will provide container images for SOA Suite, including OSB, that are also certified for deployment on production Kubernetes environments, together with documentation, support files, deployment scripts and samples.

Later on, other components will be certified. This is good news, because it will allow SOA Suite to run in co-existence with cloud native applications and be part of a more heterogeneous application platform. To me this makes sense. It makes High Availability and Disaster Recovery easier, and although the application landscape will be diverse and heterogeneous, it aligns the maintenance, installation, deployment and upgrade of FMW more uniformly with other application components like web applications, possibly microservices, etc.

Paid Marketplace offering

Another announcement I got recently is about the release of a "Paid" listing of "Oracle SOA Suite for Oracle Cloud Infrastructure" on the Oracle Marketplace. There was already a Bring Your Own License (BYOL) offering, where you purchase a separate license and use your Universal Cloud Credits to host your SOA Suite instance in the cloud. Now you can also use the Universal Cloud Credits for a paid, licensed instance of SOA Suite in the cloud, without the need to purchase a license.

And so there are two new offerings on the Marketplace:
  • Oracle SOA Suite on Oracle Cloud Infrastructure (PAID)
  • Oracle SOA Suite with B2B EDI Adapter on Oracle Cloud Infrastructure (PAID)
These offerings include:
  • SOA with Service Bus & B2B Cluster, with additional leverage of the B2B EDI Adapter.
  • MFT Cluster
  • BAM
This will provide better options for deploying SOA Suite on OCI, to:
  • Provision SOA instances using OCI
  • Manage instances using OCI
  • Scale up/down/in/out using OCI
  • Backup/restore using OCI.
Oracle's focus is on delivering SOA Suite from the Marketplace. It is expected that current SOA Cloud Service customers will migrate to this offering. The Marketplace SOA Suite will be enhanced and improved with new capabilities and functions that will not necessarily be added to SOA CS.
Probably this will give Oracle a better and more uniform way to improve and deliver new versions of SOA Suite. It also makes sense in relation to the SOA Suite on Containers announcement.

For new customers the Marketplace is the way to get SOA Suite. Existing customers can use the BYOL offering, but might need to move to the new offering when contract renewal makes that opportune.

What about Oracle Integration Cloud (OIC)?

This is still Oracle's prime offering for integrations and process modelling. You should first look at OIC for new projects. Only if you're an existing SOA Suite customer and/or have specific requirements that drive the choice towards SOA Suite and related components should you consider the Marketplace SOA Suite offering.

This makes the choices a bit clearer, I think.

Friday, 19 June 2020

Use of correlation sets in SOA Suite

Years ago, I had plans to write a book about BPEL, or at least a series of articles to be bundled as a BPEL course. I got stranded after only one Hello World article.

This year, I came up with the idea of doing something around Correlation Sets: preparing a series of articles and a talk. So, let's start with an article on Correlation Sets in BPEL. Maybe later on I can pick up those earlier plans again.

You may have read "BPEL" and be tempted to skip this article. But wait: if you use BPM Suite, the Oracle BPM process engine is the exact same thing as the BPEL process engine! And if you use the Processes module of Oracle Integration Cloud: it can use Correlation Sets too. Surprise: it again uses the exact same process engine as Oracle SOA Suite BPEL and Oracle BPM Suite.

Why Correlation Sets?

Now, why Correlation Sets, and what are they? You may be familiar with OSB, or maybe Mulesoft or other integration tools.
OSB is a stateless engine. What comes in is executed at once, until it is done. So, services in OSB are inherently synchronous and short-lived. You may argue that you can do asynchronous services in OSB. But those are in fact "synchronous" one-way services: fire & forget, if you will. They are executed right away (hence the quoted "synchronous"), until they are done. But the calling application does not expect a result (and thus asynchronous in the sense that the caller won't wait).

You could, and I actually have, create asynchronous request-response services in OSB. Asynchronous request-response services are actually two complementary one-way fire & forget services. In such a WSDL both services are defined in different port types: one for the actual service consumer, and one callback service for the service provider. Using WS-Addressing header elements the calling service provides a ReplyTo callback endpoint and a MessageId, which the responding service returns as a RelatesTo MessageId.

This RelatesTo MessageId serves as a correlation id that maps to the initiating MessageId. WS-Addressing is a web service standard that describes the SOAP header elements to use. As said, you can do this in OSB; OSB even has the WS-Addressing namespaces already defined. However, you have to code the determination and the setting of the MessageId and ReplyTo address yourself.
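To illustrate, the SOAP header of such an asynchronous request could look roughly like this (just a sketch: the namespace version, endpoint and message id are example values, not taken from a real service):

<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/"
               xmlns:wsa="http://www.w3.org/2005/08/addressing">
  <soap:Header>
    <!-- Unique id of this request -->
    <wsa:MessageID>urn:uuid:3f2a6c2e-demo-0001</wsa:MessageID>
    <!-- Callback endpoint on which the caller listens for the response -->
    <wsa:ReplyTo>
      <wsa:Address>http://caller.example.com/services/CustomerCallback</wsa:Address>
    </wsa:ReplyTo>
  </soap:Header>
  <soap:Body>
    <!-- request payload -->
  </soap:Body>
</soap:Envelope>

The response that the provider later sends to the ReplyTo address then carries a RelatesTo header with that same id, for example <wsa:RelatesTo>urn:uuid:3f2a6c2e-demo-0001</wsa:RelatesTo>, which is what the correlation is based on.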

Because of the inherently stateless foundation of OSB, its services are short-lived, and that is why OSB is not suitable for long running processes. The Oracle SOA Suite BPEL engine, on the other hand, is designed to orchestrate services (web services originally, but from 12c onwards REST services as well) in a stateful way. This makes BPEL suitable for long running transactions as well. Because of that, after the acquisition of Collaxa, the company that created the BPEL engine, Oracle decided to replace its own database product Oracle Workflow (OWF) with BPEL. SOA Suite and its BPEL engine natively support WS-Addressing: based on an asynchronous request/response WSDL it will add the proper WS-Addressing elements and provide a SOAP endpoint to catch response messages. Based on the RelatesTo message id in the response it will correlate the incoming response with the proper BPEL process instance that waits for that message.

A BPEL process may run for a few seconds, to several minutes, days, months, or potentially even years. Although experience has taught us not to recommend BPEL processes that run for longer than a few days; for really long running processes you should choose BPM Suite or Oracle Integration Cloud/Process.

WS-Addressing helps in correlating response messages to requests that were sent out previously. But it does not correlate ad-hoc messages. When a process runs for more than a few minutes, chances are that the information stored within the process is changed externally. A customer waiting for some process may have relocated or even died. So you may need to interact with a running process: you want to be able to send a message with the changed info to the running process instance, and you want to be sure that the engine correlates the message to the correct instance. Correlation Sets help with these ad-hoc messages that may or may not be sent at any time while the process is running.

An example BPEL process

Let's create a simple customer-processing process that reads an XML file, processes it and writes it back to an XML file.
My composite looks like:
It has two File Adapter definitions: an exposed service that polls the /tmp/In folder for customer*.xml files, and a reference service that writes an XML file to the /tmp/Out folder as customer%SEQ%_%yyMMddHHmmss%.xml. I'm not going to explain how to set up the File Adapters; that would be another course chapter.

For both adapters I created the following XSD:
<?xml version="1.0" encoding="UTF-8" ?>
<xsd:schema xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:cmr="http://xmlns.darwin-it.nl/xsd/demo/Customer"
            targetNamespace="http://xmlns.darwin-it.nl/xsd/demo/Customer" elementFormDefault="qualified">
  <xsd:element name="customer" type="cmr:CustomerType">
    <xsd:annotation>
      <xsd:documentation>A customer</xsd:documentation>
    </xsd:annotation>
  </xsd:element>
  <xsd:complexType name="CustomerType">
    <xsd:sequence>
      <xsd:element name="id" maxOccurs="1" type="xsd:string"/>
      <xsd:element name="firstName" maxOccurs="1" type="xsd:string"/>
      <xsd:element name="lastName" maxOccurs="1" type="xsd:string"/>
      <xsd:element name="lastNamePrefixes" maxOccurs="1" type="xsd:string" minOccurs="0"/>
      <xsd:element name="gender" maxOccurs="1" type="xsd:string"/>
      <xsd:element name="streetName" maxOccurs="1" type="xsd:string"/>
      <xsd:element name="houseNumber" maxOccurs="1" type="xsd:string"/>
      <xsd:element name="country" maxOccurs="1" type="xsd:string"/>
    </xsd:sequence>
  </xsd:complexType>
</xsd:schema>
(Just when finishing this article, I noticed that I missed a city element. It does not matter for the story, but in the rest of the example I use the country field for the city.)

The first iteration of the BPEL process just receives the file from the customerIn adapter, assigns it to the input variable of the invoke of the customerOut adapter and invokes it:
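In the BPEL source this first iteration boils down to something like the following sketch (activity and variable names follow the ones used later in this article; the rest is simplified):

<sequence name="main">
  <!-- Receive the customer from the polling customerIn File Adapter -->
  <receive name="ReceiveCustomer" partnerLink="customerIn" operation="Read"
           variable="ReceiveCustomer_Read_InputVariable" createInstance="yes"/>
  <!-- Copy the customer to the input variable of the write operation -->
  <assign name="AssignCustomerOut">
    <copy>
      <from>$ReceiveCustomer_Read_InputVariable.body</from>
      <to>$Invoke_WriteCustomerOut_Write_InputVariable.body</to>
    </copy>
  </assign>
  <!-- Write the customer through the customerOut File Adapter -->
  <invoke name="Invoke_WriteCustomerOut" partnerLink="customerOut" operation="Write"
          inputVariable="Invoke_WriteCustomerOut_Write_InputVariable"/>
</sequence>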

Deploy it to the SOA Server and test it:

[oracle@darlin-ind In]$ ls ../TestFiles/
customer1.xml  customer2.xml
[oracle@darlin-ind In]$ cp ../TestFiles/customer1.xml .
[oracle@darlin-ind In]$ ls
customer1.xml
[oracle@darlin-ind In]$ ls
customer1.xml
[oracle@darlin-ind In]$ ls
customer1.xml
[oracle@darlin-ind In]$ ls
[oracle@darlin-ind In]$ ls ../Out/
customer2_200617125051.xml
[oracle@darlin-ind In]$
The output customer hasn't changed and is just like the input:
[oracle@darlin-ind In]$ cat ../Out/customer2_200617125051.xml
<?xml version="1.0" encoding="UTF-8" ?><customer xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://xmlns.darwin-it.nl/xsd/demo/Customer ../Schemas/Customer.xsd" xmlns="http://xmlns.darwin-it.nl/xsd/demo/Customer">
  <id>1001</id>
  <firstName>Jean-Michel</firstName>
  <lastName>Jarre</lastName>
  <gender>M</gender>
  <streetName>Rue d'Oxygene</streetName>
  <houseNumber>4</houseNumber>
  <country>Paris</country>
</customer>
[oracle@darlin-ind In]$

This process is now rather short-lived and doesn't do much, except for moving the contents of the file. Now, let's say that processing the file takes quite some time, and that during the processing the customer may have relocated, died, or otherwise changed their information.

I expanded my composite with a SOAP Service, based on a One-Way WSDL, that is based upon the same xsd:
And this is how I changed the BPEL:




In this example, after setting the customer to the customerOut variable, there is a long running  "customer processing" sequence, that takes "about" 5 minutes.

But in parallel it now also listens to the UpdateCustomer partnerlink using a Receive. This could be done in a loop to also receive more follow-up messages.

This might look a bit unnecessarily complex, with the throw and catch combination. But the thing with the Flow activity is that it only completes when all of its branches are completed. So, you need a means to "kill" the Receive_UpdateCustomer activity. Adding a Throw activity does this nicely. Although the activity is colored red, this is not an actual fault exception; I use it here as a flow-control activity. It just has a simple fault name, which I found easiest to enter in the source:
<throw name="ThrowFinished" faultName="client:Finished"/>

This is because you can then just use the client namespace reference, while in the designer you would have to provide a complete namespace URI:

The same goes for the Catch: after creating one, it's easier to add the fault name from the source:
        <catch faultName="client:Finished">
          <assign name="AssignInputCustomer">
            <copy>
              <from>$ReceiveCustomer_Read_InputVariable.body</from>
              <to expressionLanguage="urn:oasis:names:tc:wsbpel:2.0:sublang:xpath1.0">$Invoke_WriteCustomerOut_Write_InputVariable.body</to>
            </copy>
          </assign>
        </catch>

Side note: did you know that if you click on an activity or scope/sequence in the Designer and switch to the source, the cursor moves to the definition of the activity you selected? To me this often comes in handy with larger BPEL processes.

By throwing the Finished exception the Flow activity is left, and with that all the unfinished branches are closed, so the Receive is quit too.

When you get a SOAP message in the BPEL example above, you would still wait for the processing branch to finish. You probably also need to notify the customer-processing branch that the data has changed. That can be done in the same way, by throwing a custom exception.
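Schematically, the changed BPEL then has a structure roughly like the following sketch (not the generated source; names, and the Wait standing in for the actual processing, are assumptions):

<scope name="ProcessCustomerScope">
  <faultHandlers>
    <!-- Finished is only flow control: it ends the Flow and thereby any branch
         that is still waiting on its Receive -->
    <catch faultName="client:Finished">
      <!-- final assign and invoke of the customerOut adapter -->
    </catch>
  </faultHandlers>
  <flow name="FlowProcessCustomer">
    <sequence name="CustomerProcessing">
      <!-- long running "customer processing", here simulated with a Wait of about 5 minutes -->
      <wait name="WaitProcessing">
        <for>'PT5M'</for>
      </wait>
      <throw name="ThrowFinished" faultName="client:Finished"/>
    </sequence>
    <sequence name="ListenForUpdates">
      <!-- ad-hoc update; could be put in a loop to receive more follow-up messages -->
      <receive name="Receive_UpdateCustomer" partnerLink="UpdateCustomer"
               operation="process" variable="UpdateCustomer_InputVariable"/>
      <assign name="AssignUpdatedCustomer">
        <copy>
          <from>$UpdateCustomer_InputVariable.body</from>
          <to>$Invoke_WriteCustomerOut_Write_InputVariable.body</to>
        </copy>
      </assign>
    </sequence>
  </flow>
</scope>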

How to define Correlation Sets

The example above won't work as is. Because, how does BPEL know to which process instance the message has to be sent? We need to create a Correlation Set. And to do so we need to define how we can correlate the UpdateCustomer message to the customerIn message. Luckily there is a Customer.id field; for this example that will do. But keep in mind: you can have multiple processes running for one customer, so in practice you should add something that identifies the particular instance.

You can add and edit correlation sets on the invoke, receive, and pick/onMessage activities. But also from the BPEL menu:




Then you can define a Correlation Set:


As you can see, you can create multiple Correlation Sets, each with one or more properties. In the last window, create a property, then select it for the Correlation Set and click OK until you're back at the first dialog.


You'll see that the Correlation Set isn't valid yet. What's missing, what I didn't provide in the last dialog, are the property aliases: we need to map the properties to the messages.
I find it convenient to do that on the activities, since we also need to couple the Correlation Sets to particular Invoke, Receive, and/or Pick/OnMessage activities. Let's begin with the first receive:


Select the Correlations tab, and add the Correlation Set. Since this is the activity where the customer id first appears in a message in the BPEL process, we need to initiate the Correlation Set here. (This can also be done on an invoke, when calling a process that may cause multiple ad-hoc follow-up messages.) So, set the Initiate property to yes.
Also, here you can have multiple correlation sets on an activity.

Then click the edit (pencil) button to edit the Correlation Set. And add a property alias:

To find the proper message type, I find it convenient to go through the partnerlink node and then select the proper WSDL. From that WSDL choose the proper message type. Now, you would think you could simply select the particular element. Unfortunately, it is slightly less user-friendly. After choosing the proper message type in the particular WSDL, click in the query field and press Ctrl-Space. A balloon will pop up with the possible fields, and when the field has a child element, a follow-up balloon will pop up. Doing so, finish your XPath, and then click OK as many times as needed to close all the dialogs properly.

Another side note: the Ctrl-Space way of working with balloons also works in the regular expression builder when creating Assign copy rules. Sometimes I get the balloons unasked for, which I actually find annoying at times.

Do the same for the customer update Receive:
Here it is important to select No for Initiate: we now adhere to the already initiated Correlation Set.
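In the BPEL source the result then looks roughly like this (a sketch; the Correlation Set name is an assumption, and the property aliases themselves end up in the WSDLs, as shown in the next section):

<correlationSets>
  <correlationSet name="CustomerCS" properties="cor:customerId"/>
</correlationSets>
...
<receive name="ReceiveCustomer" partnerLink="customerIn" operation="Read"
         variable="ReceiveCustomer_Read_InputVariable" createInstance="yes">
  <correlations>
    <!-- the customer id first appears here, so initiate the set -->
    <correlation set="CustomerCS" initiate="yes"/>
  </correlations>
</receive>
...
<receive name="Receive_UpdateCustomer" partnerLink="UpdateCustomer" operation="process"
         variable="UpdateCustomer_InputVariable">
  <correlations>
    <!-- adhere to the already initiated set -->
    <correlation set="CustomerCS" initiate="no"/>
  </correlations>
</receive>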

Wrap this up, deploy the composite and test.

Test Correlations

As in the first version, copy an XML file to the /tmp/In folder. This results in the following BPEL flow:

The yellow-highlighted activities are now active. So, apparently it waits for a Receive and for the processing (Wait activity).

From the flow trace you can click on the composite name, next to the instance id, and then click the Test button:
And enter new values for your customer:


In the bottom right corner you can click the "Test Web Service" button, and from the resulting Response tab you can launch the flow trace.

You'll find that the Receive has been done, and the Assign after that as well. Now, only the Wait activity is active.


After the processing, the flow throws the Finished exception and the BPEL flow finishes.
In this case the Receive happened before the Wait activity finished, so in this flow the throw is not strictly necessary; but when the message isn't received, the throw is needed.

Looking in the /tmp/Out folder we see that the file is updated neatly from the update:
[oracle@darlin-ind In]$ ls ../Out/
customer2_200617125051.xml  customer3_200619160921.xml
[oracle@darlin-ind In]$ cat ../Out/customer3_200619160921.xml
<?xml version="1.0" encoding="UTF-8" ?><ns1:customer xmlns:ns1="http://xmlns.darwin-it.nl/xsd/demo/Customer">
            <ns1:id>1001</ns1:id>
            <ns1:firstName>Jean-Michel</ns1:firstName>
            <ns1:lastName>Jarre</ns1:lastName>
            <ns1:lastNamePrefixes/>
            <ns1:gender>M</ns1:gender>
            <ns1:streetName>Equinoxelane</ns1:streetName>
            <ns1:houseNumber>7</ns1:houseNumber>
            <ns1:country>Paris</ns1:country>
        </ns1:customer>[oracle@darlin-ind In]$

A bit of techie-candy

Where is all this beautiful stuff registered?
First of all, for the correlation properties, you will find that a new WSDL has appeared:

At the top of the source of the BPEL you'll find the following snippet:
 <bpelx:annotation>
    <bpelx:analysis>
      <bpelx:property name="propertiesFile">
        <![CDATA[../WSDLs/ProcessCustomer_properties.wsdl]]>
      </bpelx:property>
    </bpelx:analysis>
  </bpelx:annotation>
  <import namespace="http://xmlns.oracle.com/pcbpel/adapter/file/CorrelationDemo/CorrelationDemo/customerIn"
          location="../WSDLs/customerIn.wsdl" importType="http://schemas.xmlsoap.org/wsdl/" ui:processWSDL="true"/>

Here you see a reference to the properties WSDL, and also an import of the customerIn.wsdl. Let's take a look in there:
<?xml version= '1.0' encoding= 'UTF-8' ?>
<wsdl:definitions
     name="customerIn"
     targetNamespace="http://xmlns.oracle.com/pcbpel/adapter/file/CorrelationDemo/CorrelationDemo/customerIn"
     xmlns:tns="http://xmlns.oracle.com/pcbpel/adapter/file/CorrelationDemo/CorrelationDemo/customerIn"
     xmlns:jca="http://xmlns.oracle.com/pcbpel/wsdl/jca/"
     xmlns:plt="http://schemas.xmlsoap.org/ws/2003/05/partner-link/"
     xmlns:pc="http://xmlns.oracle.com/pcbpel/"
     xmlns:imp1="http://xmlns.darwin-it.nl/xsd/demo/Customer"
     xmlns:wsdl="http://schemas.xmlsoap.org/wsdl/"
     xmlns:cor="http://xmlns.oracle.com/CorrelationDemo/CorrelationDemo/ProcessCustomer/correlationset"
     xmlns:bpel="http://docs.oasis-open.org/wsbpel/2.0/process/executable"
     xmlns:vprop="http://docs.oasis-open.org/wsbpel/2.0/varprop"
     xmlns:ns="http://oracle.com/sca/soapservice/CorrelationDemo/CorrelationDemo/Customer"
    >
    <plt:partnerLinkType name="Read_plt">
        <plt:role name="Read_role">
            <plt:portType name="tns:Read_ptt"/>
        </plt:role>
    </plt:partnerLinkType>
    <vprop:propertyAlias propertyName="cor:customerId" xmlns:tns="http://xmlns.oracle.com/pcbpel/adapter/file/CorrelationDemo/CorrelationDemo/customerIn"
         messageType="tns:Read_msg" part="body">
        <vprop:query>imp1:id</vprop:query>
    </vprop:propertyAlias>
    <vprop:propertyAlias propertyName="cor:customerId" xmlns:ns13="http://oracle.com/sca/soapservice/CorrelationDemo/CorrelationDemo/Customer"
         messageType="ns13:requestMessage" part="part1">
        <vprop:query>imp1:id</vprop:query>
    </vprop:propertyAlias>
    <wsdl:import namespace="http://oracle.com/sca/soapservice/CorrelationDemo/CorrelationDemo/Customer"
         location="Customer.wsdl"/>
    <wsdl:import namespace="http://xmlns.oracle.com/CorrelationDemo/CorrelationDemo/ProcessCustomer/correlationset"
         location="ProcessCustomer_properties.wsdl"/>

Below the partnerLinkType you find the propertyAliases.
Especially with older, migrated processes this might be a bit tricky, because the property aliases might not end up in the WSDL you want. Then you need to register the proper WSDL in the BPEL and move the property aliases to that other WSDL, together with the vprop namespace declaration.
When you move the WSDL to the MDS for reuse, move the property aliases to another wrapper WSDL; you shouldn't move the property aliases to the MDS with it. They belong to the process and shouldn't be shared, and it would also make it impossible to change them in the designer. I'm not sure if it would even work. Probably it does, but you should not want that.

As I mentioned before, you can have multiple Correlation Sets in your BPEL (or BPMN) process, and even on a single activity. In complex interactions this may make perfect sense, for instance when there is overlap. You may have initiated one Correlation Set on an earlier Invoke or Receive, and use that to correlate to another message in a Receive. But that message may have another identifying field that can be used to correlate with other interactions. And so you may have a non-initiating Correlation Set on an activity that also initiates another one, maybe even based on different property aliases on the same message.

Pitfalls

Per Correlation Set you can have multiple properties. They are concatenated into one string. Don't use too many properties to make up the correlation set; preferably use only one. And use short scalar elements for the properties. In the past the maximum length was around 1000 characters; I have no idea what it is now. But multiple properties and property aliases make it error-prone: during the concatenation a different formatting may occur, and it is harder to check and validate that the correlation elements in the messages conform to each other.

In the example above I used the customer id for the correlation property. This results in an initiated correlation set on which the UpdateCustomer Receive is listening. If you initiate another process instance for the same customer, the process engine will find at the UpdateCustomer Receive that there already is a (same) Receive with the same Correlation Set, and it will fail. The process engine identifies the particular activity in the process definition, and the combination of process, activity and Correlation Set must be unique. A uniqueness violation at this point results in a runtime fault.

It doesn't matter if the message arrives before or after the Receive is activated. If you are so fast as to issue an UpdateCustomer request before the process instance has activated the Receive, the message will be stored in a table and picked up when the Receive activity is reached.

Conclusion

This may be new to you and sound very sophisticated. Or not, of course, if you were already familiar with it. If this is new to you: it was already in the product when Oracle acquired it in 2004!
And not only that, you can use it in OIC Processes as well, and have been able to for years. I wrote about that in 2016.

For more on correlation sets, check out the docs.

Tuesday, 19 May 2020

Honey, I shrunk the database!

For my current assignment I need to get three SOA/B2B environments running. I'm going to try out the multi-hop functionality, where the middle B2B environment should work as a dispatcher. The idea is that with Dev, Test and Pre-Prod environments, the dev environment can send messages to the remote trading partner's test environment through the in-between environment. To the remote trading partner, that in-between environment should act as the local test environment, but it should then be able to dispatch the message to the actual dev, test or pre-prod environment.

I envision a solution where this in-between B2B environment acts as this dispatching B2B hop. So I need to have three VMs running, each with their own database (although I could have them share one database) and Fusion Middleware domain.

The Vagrant project that I wrote about earlier this week creates a database and then provisions all the FMW installations and a domain. That database is a 12cR1 database (that I could upgrade), installed with default settings. In my setup it takes about 1.8GB of memory. My laptop has 16GB, so to have 2 VMs running on it, and let Windows have some memory too, I want each VM to be at most 6.5GB.
I need to run an AdminServer and a SOAServer, that I gave 1GB and GB respectively. And since they're not Docker containers, they both run an Oracle Linux 7 OS too.

So, one of the main steps is to downsize the database to "very small".

My starting point is an article I wrote years ago about shrinking an 11g database to XE proportions.
As described in that article, I created a pfile as follows:
create pfile from spfile;

This creates an initorcl.ora in the $ORACLE_HOME/dbs folder.

I copied that file to initorcl.ora.small and edited it:
orcl.__data_transfer_cache_size=0
#orcl.__db_cache_size=1291845632
orcl.__db_cache_size=222298112
#orcl.__java_pool_size=16777216
orcl.__java_pool_size=10M
#orcl.__large_pool_size=33554432
orcl.__large_pool_size=4194304
orcl.__oracle_base='/app/oracle'#ORACLE_BASE set from environment
#orcl.__pga_aggregate_target=620756992
orcl.__pga_aggregate_target=70M
#orcl.__sga_target=1828716544
orcl.__sga_target=210M
#orcl.__shared_io_pool_size=83886080
orcl.__shared_io_pool_size=0
#orcl.__shared_pool_size=385875968
orcl.__shared_pool_size=100M
orcl.__streams_pool_size=0
*.audit_file_dest='/app/oracle/admin/orcl/adump'
*.audit_trail='db'
*.compatible='12.1.0.2.0'
*.control_files='/app/oracle/oradata/orcl/control01.ctl','/app/oracle/fast_recovery_area/orcl/control02.ctl'
*.db_block_size=8192
*.db_domain=''
*.db_name='orcl'
*.db_recovery_file_dest='/app/oracle/fast_recovery_area'
*.db_recovery_file_dest_size=4560m
*.diagnostic_dest='/app/oracle'
*.dispatchers='(PROTOCOL=TCP) (SERVICE=orclXDB)'
*.open_cursors=300
*.pga_aggregate_target=578m
*.processes=300
*.remote_login_passwordfile='EXCLUSIVE'
#*.sga_target=1734m
*.sga_target=350M
*.undo_tablespace='UNDOTBS1'

The lines that I changed are copied with the original values commented out. So I downsized the db_cache_size, java_pool, large_pool and pga_aggregate_target, as well as the sga_target, shared_io_pool (have it auto-managed) and shared_pool. I needed to set the sga_target to at least 350M to get the database started.
SOA Suite needs at least 300 processes and open_cursors.
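After the restart (the script below takes care of stopping and starting the database) you can verify the effective settings with SQL*Plus, for instance:

$ORACLE_HOME/bin/sqlplus "/ as sysdba" <<EOF
show sga
show parameter sga_target
show parameter pga_aggregate_target
show parameter processes
show parameter open_cursors
exit;
EOF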

Now, the script below first checks if the database is running. It is actually a copy of the startDB.sh script, also in my Vagrant project.

If it is running, it shuts down the database and then creates a pfile as a backup. If the database isn't running, it only creates the pfile.

Then it copies the initorcl.ora.small file over $ORACLE_HOME/dbs/initorcl.ora (keeping a backup of the original), creates an spfile from it, and starts the database again.

#!/bin/bash
SCRIPTPATH=$(dirname $0)
#
. $SCRIPTPATH/../../install_env.sh
. $SCRIPTPATH/db12c_env.sh
#
db_num=`ps -ef|grep pmon |grep -v grep |awk 'END{print NR}'`

if [ $db_num -gt 0 ]
then 
  echo "Database Already RUNNING."
  $ORACLE_HOME/bin/sqlplus "/ as sysdba" <<EOF
shutdown immediate;
prompt create new initorcl.ora.
create pfile from spfile;
exit;
EOF
  #
  # With use of a plugable database the following line needs to be added after the startup command
  # startup pluggable database pdborcl; 
  #
  sleep 10
  echo "Database Services Successfully Stopped. "
else
  echo "Database Not yet RUNNING."
  $ORACLE_HOME/bin/sqlplus "/ as sysdba" <<EOF
prompt create new initorcl.ora.
create pfile from spfile;
exit;
EOF
  sleep 10
fi
#
echo Copy initorcl.ora.small to $ORACLE_HOME/dbs/initorcl.ora, with backup to $ORACLE_HOME/dbs/initorcl.ora.org
mv $ORACLE_HOME/dbs/initorcl.ora $ORACLE_HOME/dbs/initorcl.ora.org
cp $SCRIPTPATH/initorcl.ora.small $ORACLE_HOME/dbs/initorcl.ora
#
echo "Starting Oracle Database again..."
$ORACLE_HOME/bin/sqlplus "/ as sysdba" <<EOF
create spfile from pfile;
startup;
exit;
EOF

The scripts can be found here.

Oh, by the way: I must state here that I'm not a DBA. I'm not sure if those settings all make sense together (I should have someone review them), so you should not rely on them for a serious environment. Not even a development one; my motto is that a development environment is a developer's production environment. For me this is just to be able to try something out, and to show the mechanism to you.




Friday, 15 May 2020

New FMW 12c Vagrant project

Introduction

Several years ago I blogged about the automatic creation of Fusion Middleware environments.
See for instance this article on installation, this one on domain creation, and these notes.

In between I wrote several articles on issues I got, start/stop scripts, etc.

Later I found out about Vagrant and have worked with that since. I enhanced this through the years; for instance, nowadays I use different provisioners to set up my environment.

Until this week I struggled with an Oracle Linux 7 Update 7 box, as I wrote earlier this week.

For my current customer I needed to create a few B2B environments. So I got back to my Vagrant projects and scripts and built a Vagrant project that can create a SOA/BPM/OSB+B2B environment.

You can find it on GitHub in my ol77_soa12c project, with the scripts in this folder.

You'll need to get an Oracle Linux 7U7 Vagrant base box yourself. I tried to create one based on the simple base box of Oracle, as I wrote earlier this year. But in the end I created a simple base install of OL7U7, with one disk, a Server with GUI package, and a vagrant user (with password vagrant), as you can read in earlier articles.

Also, you'll need to download the installer zips from edelivery.oracle.com.

Modularisation

What I did with my scripts in this revision is that I split up the main method of the domain creation script:
#
def main():
  try:
    #
    # Section 1: Base Domain + Admin Server
    createBaseDomain()
    #
    # Section 2: Extend FMW Domain with templates
    extendFMWDomain()
    #
    # Section 3: Create Domain Datasources
    createDatasources()
    #
    # Section 4: Create UnixMachines, Clusters and Managed Servers
    createMachinesClustersAndServers()
    #
    # Section 5: Add Servers to ServerGroups.
    addFMWServersToGroups()
    #
    print('Updating the domain.')
    updateDomain()
    print('Closing the domain.')
    closeDomain();
    #
    # Section 6: Create boot properties files.
    createBootPropertiesForServers()
    #
    # Checks
    print('\n7. Checks')
    print(lineSeperator)    
    listServerGroups(domainHome)
    #
    print('\nExiting...')
    exit()
  except NameError, e:
    print 'Apparently properties not set.'
    print "Please check the property: ", sys.exc_info()[0], sys.exc_info()[1]
    usage()
  except:
    apply(traceback.print_exception, sys.exc_info())
    stopEdit('y')
    exit(exitcode=1)

All the sections I moved to separate sub-functions. I added an extra section for checks and validations. One check I added is to list the server groups of the domain servers, but I may envision other validations later.

Policy Manager


Another thing is that in the method addFMWServersToGroups() I changed the script so that it complies with the topology suggestions from Oracle Product Management. An important aspect here is that for SOA, OSB and BAM you need to determine whether you want a domain with only one of these products, or a combined domain. By default these products will have the Oracle Web Services Manager Policy Manager targeted to the particular cluster or server. However, you should have only one Policy Manager per domain. So, if you want a combined domain with both SOA and OSB, you need to create a separate WSM_PM cluster. This is done using the wsmEnabled property in the fmw.properties file. Based on this same property the server groups are added:
#
# Add a FMW server to the appropriate group depending on if a separate WSM PM Cluster is added.
def addFMWServerToGroups(server, svrGrpOnly, svrGrpComb):
  if wsmEnabled == 'true':
    print 'WSM Enabled: add server group(s) '+",".join(svrGrpOnly)+' to '+server
    setServerGroups(server, svrGrpOnly)
  else:
    print 'WSM Disabled: add server group(s) '+",".join(svrGrpComb)+' to '+server
    setServerGroups(server, svrGrpComb)
#
# 5. Set Server Groups to the Domain Servers
def addFMWServersToGroups():
  print('\n5. Add Servers to ServerGroups')
  print(lineSeperator)
  cd('/')
  #print 'Add server groups '+adminSvrGrpDesc+ ' to '+adminServerName
  #setServerGroups(adminServerName, adminSvrGrp)     
  if osbEnabled == 'true':
    addFMWServerToGroups(osbSvr1, osbSvrOnlyGrp, osbSvrCombGrp)
    if osbSvr2Enabled == 'true': 
      addFMWServerToGroups(osbSvr2, osbSvrOnlyGrp, osbSvrCombGrp)
  if soaEnabled == 'true':
    addFMWServerToGroups(soaSvr1, soaSvrOnlyGrp, soaSvrCombGrp)
    if soaSvr2Enabled == 'true': 
      addFMWServerToGroups(soaSvr2, soaSvrOnlyGrp, soaSvrCombGrp)
  if bamEnabled == 'true':
    addFMWServerToGroups(bamSvr1, bamSvrOnlyGrp, bamSvrCombGrp)
    if bamSvr2Enabled == 'true':  
      addFMWServerToGroups(bamSvr2, bamSvrOnlyGrp, bamSvrCombGrp)
  if wsmEnabled == 'true':
    print 'Add server group(s) '+",".join(wsmSvrGrp)+' to '+wsmSvr1+' and possibly '+wsmSvr2
    setServerGroups(wsmSvr1, wsmSvrGrp)
    if wsmSvr2Enabled == 'true': 
      setServerGroups(wsmSvr2, wsmSvrGrp)
  if wcpEnabled == 'true':
    print 'Add server group(s) '+",".join(wcpSvrGrp)+' to '+wcpSvr1+' and possibly '+wcpSvr2
    setServerGroups(wcpSvr1, wcpSvrGrp)
    if wcpSvr2Enabled == 'true': 
      setServerGroups(wcpSvr2, wcpSvrGrp)
  print('Finished ServerGroups.')

The groups are declared at the top:
# ServerGroup definitions
# See also: https://blogs.oracle.com/soa/soa-suite-12c%3a-topology-suggestions
#adminSvrGrp=["JRF-MAN-SVR"]
osbSvrOnlyGrp=["OSB-MGD-SVRS-ONLY"]
osbSvrCombGrp=["OSB-MGD-SVRS-COMBINED"]
soaSvrOnlyGrp=["SOA-MGD-SVRS-ONLY"]
soaSvrCombGrp=["SOA-MGD-SVRS"]
bamSvrOnlyGrp=["BAM12-MGD-SVRS-ONLY"]
bamSvrCombGrp=["BAM12-MGD-SVRS"]
wsmSvrGrp=["WSMPM-MAN-SVR", "JRF-MAN-SVR", "WSM-CACHE-SVR"]
wcpSvrGrp=["SPACES-MGD-SVRS","PRODUCER_APPS-MGD-SVRS","AS-MGD-SVRS","DISCUSSIONS-MGD-SVRS"]
wccSvrGrp=["UCM-MGD-SVR"]

For SOA, OSB and BAM you see that there is a default or "combined" server group, and a "server only" group. If wsmEnabled is false, the combined group is used and the Policy Manager is added to the managed server or cluster. If it is true, the "only" group is used.
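As a sketch, the relevant flags in fmw.properties could look like this (property names taken from the script above; the values and exact layout are assumptions, not the actual file):

# Products to extend the domain with
soaEnabled=true
osbEnabled=true
bamEnabled=false
wcpEnabled=false
# Second managed server per cluster
soaSvr2Enabled=true
osbSvr2Enabled=true
# Separate WSM-PM cluster; needed when combining SOA and OSB in one domain
wsmEnabled=true
wsmSvr2Enabled=true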

Other Remarks

An earlier project I did failed when creating the domain; somehow I had to run it twice to get the domain created. Somehow that is solved in this project.

In my scripts I still use the 12.2.1.3 zips. But the scripts are quite easily adaptable to 12.2.1.4; hopefully I'll do that in the near future. My current customer still uses this version though, so I started from here.

The project also adapts the nodemanager properties, creates a nodemanager Linux service, and copies start/stop scripts. However, I missed setting the listener port and type (plain or SSL) of the nodemanager in the Machine definition. So starting the domain needs a little bit of tweaking.

And for my project I need 3 environments. So I need to downsize the database and the managed servers so that I can run an environment in 6GB, and can have 2 VMs on my 16GB laptop.

And I need to add a bridged network adapter to the Vagrant project, so that I can have the environments connect to each other.

Wednesday, 13 May 2020

Vagrant Oracle Linux and the Vagrant user: use a password

Last year and earlier this year I have been struggling to create a new Vagrant box based on an installation of the Oracle base box. I had some extra requirements, for instance having a GUI in my server to get to a proper desktop when that comes in handy. In the end I found that it might be more convenient to create a base box myself. I also tried using an ssh key to have the vagrant user connect to the box to do the provisioning. But whatever I did, I got "Cannot allocate memory" errors in some stage of the provisioning, for instance when upgrading the guest additions:


Using an ssh key is actually the recommended approach. Read my previous blog article on the matter for instructions on how to do it.

It struck me: why couldn't I get an Oracle Linux 7U7 box working as a base for new VMs, and why would I get these nasty memory allocation errors?
I upgraded from Vagrant 2.2.6 to 2.2.9, and VirtualBox from 6.1.4 to 6.1.6, but it wasn't related to these versions.

And just now I realized that the one thing I do differently with this box, compared to my OL7U5 box, is using the vagrant user's ssh key instead of the password. So, I made sure that the vagrant user can log on using an ssh password, for instance by reviewing the file /etc/ssh/sshd_config and specifically the option PasswordAuthentication:
...
# To disable tunneled clear text passwords, change to no here!
PasswordAuthentication yes
#PermitEmptyPasswords no
#PasswordAuthentication no
...
Make sure it's set to yes.


Then I repackaged the box:

d:\Projects\vagrant>vagrant package --base OL7U7 --output  d:\Projects\vagrant\boxes\ol77GUIv1.1.box
==> OL7U7: Exporting VM...
==> OL7U7: Compressing package to: d:/Projects/vagrant/boxes/ol77GUIv1.1.box


And removed the old box:
d:\Projects\vagrant\ol77_gui>vagrant box list
ol75        (virtualbox, 0)
ol77        (virtualbox, 0)
ol77GUIv1.0 (virtualbox, 0)
ol77GUIv1.1 (virtualbox, 0)

d:\Projects\vagrant\ol77_gui>vagrant box remove ol77GUIv1.0
Removing box 'ol77GUIv1.0' (v0) with provider 'virtualbox'...

d:\Projects\vagrant\ol77_gui>vagrant box list
ol75        (virtualbox, 0)
ol77        (virtualbox, 0)
ol77GUIv1.1 (virtualbox, 0)

I found that it might be useful to check if there are Vagrant processes currently running, since I got an exception from Vagrant saying that the box was locked:
d:\Projects\vagrant\ol77_gui>vagrant global-status
id       name   provider state  directory
--------------------------------------------------------------------
There are no active Vagrant environments on this computer! Or,
you haven't destroyed and recreated Vagrant environments that were
started with an older version of Vagrant.

If your box is running it could say something like:
d:\Projects\vagrant\ol77_gui>vagrant  global-status
id       name   provider   state   directory
-----------------------------------------------------------------------
42cbd44  darwin virtualbox running d:/Projects/vagrant/ol77_gui

The above shows information about all known Vagrant environments
on this machine. This data is cached and may not be completely
up-to-date (use "vagrant global-status --prune" to prune invalid
entries). To interact with any of the machines, you can go to that
directory and run Vagrant, or you can use the ID directly with
Vagrant commands from any directory. For example:
"vagrant destroy 1a2b3c4d"

You could also do a prune of invalid entries:
d:\Projects\vagrant\ol77_gui>vagrant  global-status --prune
id       name   provider   state   directory
-----------------------------------------------------------------------
42cbd44  darwin virtualbox running d:/Projects/vagrant/ol77_gui
...

In the Vagrantfile I set the ssh username and password:
Vagrant.configure("2") do |config|
  # The most common configuration options are documented and commented below.
  # For a complete reference, please see the online documentation at
  # https://docs.vagrantup.com.

  # Every Vagrant development environment requires a box. You can search for
  # boxes at https://vagrantcloud.com/search.
  config.vm.box = BOX_NAME
  config.vm.box_url=BOX_URL
  config.vm.define "darwin"
  config.vm.provider :virtualbox do |vb|
    vb.name = VM_NAME
    vb.gui = true
    vb.memory = VM_MEMORY
    vb.cpus = VM_CPUS
    # Set clipboard and drag&drop bidirectional
    vb.customize ["modifyvm", :id, "--clipboard-mode", "bidirectional"]
    vb.customize ["modifyvm", :id, "--draganddrop", "bidirectional"]
...
  end
  #config.ssh.username="darwin"
  config.ssh.username="vagrant"
  config.ssh.password="vagrant"

It is common to have the vagrant user's password be "vagrant". Lastly I "upped" my VM, and this all seemed to solve my memory allocation problems.

Apparently, we can't use the ssh-key to provision the box.

Wednesday, 29 April 2020

SOA Suite: SOAP Faults in BPEL and Mediator

In the past few months, at our current customer, we have been running a "robustness project" to improve our SOA Suite implementation. We had a lot of duplication, and it turned out that we had a lot of WSDLs in our composite projects. Many of those are a result of BPEL projects migrated from 10g. But some of them couldn't be moved, because that would break the project.

The first projects where I encountered the problem were projects with Mediators. After moving the WSDLs to MDS, most of our SoapUI/ReadyAPI unit tests worked, except for those simulating a SOAP Fault. It seemed that the Mediator could not map the SOAP Fault. I "searched myself an accident", as we would say in Holland, but without any luck.

Actually, I can't find any documentation that talks about catching SOAP Faults in SOA Suite. Which is a weird thing, because in BPM Suite, which shares the same soa-infra and process engine, SOAP Faults do get special treatment: BPM can react with specific exception transitions on SOAP Faults.

So what is this weird behavior? Well, SOA Suite, apparently both BPEL and Mediator, interprets SOAP Faults as Remote Faults! So, in BPEL you can't catch it as a SOAP Fault, and Mediator can't route it the way you would expect from the UI.

However, just now I found a solution. That is, I found it earlier for Mediator, but couldn't explain it. Since the same behavior can be seen in BPEL as well, I can now write down my story.

Normally, if you add a reference to your composite, it looks something like the following in the composite.xml source:
  <reference name="ManagedFileService"
             ui:wsdlLocation="oramds:/apps/Generiek/WSDLs/ManagedFileUtilProcess.wsdl">
    <interface.wsdl interface="http://xmlns.darwin-it.nl/soa/wsdl/Generiek/ManagedFileService/ManagedFileProcess#wsdl.interface(managedfile_ptt)"/>
    <binding.ws port="http://xmlns.darwin-it.nl/soa/wsdl/Generiek/ManagedFileService/ManagedFileProcess#wsdl.endpoint(managedfile_ptt/managedfile_pttPort)"
                location="oramds:/apps/Generiek/WSDLs/ManagedFileUtilProcess.wsdl" soapVersion="1.1">
      <property name="endpointURI">http://soa.hostname:soa.port/soa-infra/services/default/ManagedFileService/managedfileprocess_client_ep</property>
      <property name="weblogic.wsee.wsat.transaction.flowOption" type="xs:string" many="false">WSDLDriven</property>
    </binding.ws>
  </reference>

What you see here is a ui:wsdlLocation, which should point to a WSDL in the MDS. Under binding.ws there is a location attribute that at many customers would point to your concrete WSDL. At my current customer we work with an endpointURI property that is overwritten using the config plan. Either way, the service element of the WSDL is in the MDS or on the remote server, if you refer to an external service.

If the external service raises a SOAP Fault, it can't be caught other than through a Catch All:




This also makes it hard to interact with the fault in the correct way, to interpret the underlying problem. This service should rename or move a file on the filesystem, and in this case the file couldn't be found. But the remote fault would suggest something else.

But there is a really easy workaround. I wouldn't call it a solution, since I think SOA Suite should just handle SOAP Faults correctly.

In the composite's WSDLs folder, make a copy of the concrete WSDL and strip it down as follows:
<?xml version= '1.0' encoding= 'UTF-8' ?>
<wsdl:definitions name="ManagedFileProcess"
                  targetNamespace="http://xmlns.darwin-it.nl/soa/wsdl/Generiek/ManagedFileService/ManagedFileProcess"
                  xmlns:tns="http://xmlns.darwin-it.nl/soa/wsdl/Generiek/ManagedFileService/ManagedFileProcess"
                  xmlns:mfs="http://xmlns.darwin-it.nl/soa/xsd/Generiek/ManagedFileService/ManagedFileProcess"
                  xmlns:plnk="http://docs.oasis-open.org/wsbpel/2.0/plnktype"
                  xmlns:wsdl="http://schemas.xmlsoap.org/wsdl/" xmlns:soap="http://schemas.xmlsoap.org/wsdl/soap/">
  <wsdl:import location="oramds:/apps/Generiek/WSDLs/ManagedFileUtilProcess.wsdl"
               namespace="http://xmlns.darwin-it.nl/soa/wsdl/Generiek/ManagedFileService/ManagedFileProcess"/>
  <wsdl:service name="managedfile_ptt">
    <wsdl:port name="managedfile_pttPort" binding="tns:managedfile_pttSOAP11Binding">
      <soap:address location="http://soa.hostname:soa.port/soa-infra/services/default/ManagedFileService/managedfileprocess_client_ep"/>
    </wsdl:port>
  </wsdl:service>
</wsdl:definitions>
In the service element there is a reference to the SOAP endpoint of the service, in this case simply a local SOA Suite service.

In the composite you need to change the reference:
  <reference name="ManagedFileService"
             ui:wsdlLocation="oramds:/apps/Generiek/WSDLs/ManagedFileUtilProcess.wsdl">
    <interface.wsdl interface="http://xmlns.darwin-it.nl/soa/wsdl/Generiek/ManagedFileService/ManagedFileProcess#wsdl.interface(managedfile_ptt)"/>
    <binding.ws port="http://xmlns.darwin-it.nl/soa/wsdl/Generiek/ManagedFileService/ManagedFileProcess#wsdl.endpoint(managedfile_ptt/managedfile_pttPort)"
                location="WSDLs/ManagedFileUtilProcess.wsdl" soapVersion="1.1">
      <property name="endpointURI">http://soa.hostname:soa.port/soa-infra/services/default/ManagedFileService/managedfileprocess_client_ep</property>
      <property name="weblogic.wsee.wsat.transaction.flowOption" type="xs:string" many="false">WSDLDriven</property>
    </binding.ws>
  </reference>
Here you change the binding.ws location to refer to the local stripped WSDL. The endpointURI property does not make much sense anymore, but it does not get in the way.

You also need to change your config plan to contain the following WSDL replacement:
 <wsdlAndSchema name="*">
  <searchReplace>
   <search>http://soa.hostname:soa.port/soa-infra/services/default/ManagedFileService/managedfileprocess_client_ep</search>
   <replace>http://soasuite12c.soa.darwin-it.nl:8001/soa-infra/services/default/ManagedFileService/managedfileprocess_client_ep</replace>
  </searchReplace>
 </wsdlAndSchema>

This will replace the endpoint in the stripped WSDL with the one that should actually be used.


If you deploy this using the config plan, then, amazingly, SOAP Faults are correctly interpreted:


Now we get a neat SOAP Fault, caught by a specific catch based on the fault declared in the WSDL of the partner link.
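For reference, the difference in the BPEL fault handlers is roughly this (a sketch; the fault and variable names are assumptions for this example, not taken from the actual WSDL):

<faultHandlers>
  <!-- With the stripped local WSDL, the fault declared in the partner link WSDL
       can be caught specifically -->
  <catch faultName="mfs:ManagedFileFault" faultVariable="ManagedFileFault">
    <!-- react to the functional problem, e.g. the file that couldn't be found -->
  </catch>
  <!-- Without it, the SOAP Fault surfaces as a remote fault and only a Catch All handles it -->
  <catchAll>
    <!-- generic, less specific error handling -->
  </catchAll>
</faultHandlers>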

Again, this works similarly for Mediator.


Wednesday, 11 March 2020

SOA Composite Sensors and the ORA-01461: can bind a LONG value only for insert into a LONG column exception

Last year I wrote about SOA Composite Sensors and how they can be a good alternative to the BPEL indexes of 10g. This week I was confronted with the "ORA-01461: can bind a LONG value only for insert into a LONG column" exception in one of our composites. It occurred in a process that is triggered to do some message archiving.

A bit about BPEL Sensors

The funny thing is that this archiving process is triggered by a BPEL sensor. To recap: you can create a BPEL sensor by clicking the monitor icon in your BPEL process:
It's the heart-beat-monitor icon in the button area at the top right of the panel. It then shows the BPEL process in a layered mode; you can't edit the process any more, but you can add, remove and edit sensors. Sensors are indicated with little antenna icons on an activity. You can put them on any kind of activity, even Empty activities, which adds an extra potential reason to use an Empty activity.

If you click an antenna icon you can define a series of sensors; editing one will bring up the following dialog:

It allows you to add variables, and possibly expressions on elements within those variables, to a sensor. And also to add one or more sensor actions that are triggered at the trigger moment (Evaluation Time), which can be set as well.

A Sensor action can be set as:


In 11g we used the JMS Adapter, but in 12c that apparently no longer worked the way it used to, so we changed it to JMS Queues. As with composite sensors, in the BPEL folder, together with the BPEL process, you get two files: YourBPELProcess_sensor.xml containing the sensor definitions and YourBPELProcess_sensorAction.xml containing the sensor action definitions.

When the sensor is activated, a JMS message is produced on the queue, with an XML payload following a predefined XSD. In that XML you will find info about the triggering BPEL instance, like name and instance id, and a list of variable data. Each of the variables defined in the sensor is in that list, in the order in which they are defined in the sensor.
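A strongly simplified sketch of such a message, showing only the elements relevant for this story (the actual XSD contains more, and the exact structure around the instance info is an assumption here):

<imp1:actionData xmlns:imp1="http://xmlns.oracle.com/bpel/sensor">
  <!-- ... elements with info about the triggering instance (process name, instance id, sensor) ... -->
  <imp1:payload>
    <!-- one variableData element per variable defined in the sensor, in order of definition -->
    <imp1:variableData>
      <imp1:data>12345</imp1:data>  <!-- e.g. the outgoing message number -->
    </imp1:variableData>
    <imp1:variableData>
      <imp1:data>...the possibly very large message to archive...</imp1:data>
    </imp1:variableData>
  </imp1:payload>
</imp1:actionData>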

By the way, BPEL sensors have been part of the product since before 10g...

The actual error case

In our case this message archiving process was triggered from another BPEL using a sensor. The archiving process listens to the queue as defined in the Sensor Action, picking up messages from certain sensors using a message selector based on the sensor name.

On the JMS interface (Exposed Service) of the message archiving process, I defined a set of Composite Sensors, to be able to search on them. This helps in finding the archiving instance that belongs to the triggering process: since sensors work asynchronously, the two are not tied together in a Flow Trace.

In some cases, we got the following exception in the Diagnostic log:
[2020-03-11T09:19:50.855+01:00] [DWN_SOA_01] [WARNING] [] [oracle.soa.adapter.jms.inbound] [tid: DaemonWorkThread: '639' of WorkManager: 'default_Adapters'] [userId: myadmin] [ecid: c8e2b75e-7aed-4305-84c5-9ef5cf928c7b-0bb833b1,0:11:9] [APP: soa-infra] [partition-name: DOMAIN] [tenant-name: GLOBAL] [oracle.soa.tracking.FlowId: 463993] [oracle.soa.tracking.InstanceId: 762213] [oracle.soa.tracking.SCAEntityId: 381353] [oracle.soa.tracking.FaultId: 400440] [FlowId: 0000N38eGGo5aaC5rFK6yY1UNay100012j]  [composite_name: MyComposite] [composite_version: 1.0] [endpoint_name: DWN_MyCompositeInterface_WS] JmsConsumer_runInbound: [destination = jms/DWN_OUTGOING, subscriber = null] : weblogic.transaction.RollbackException: Unexpected exception in beforeCompletion: sync=org.eclipse.persistence.transaction.JTASynchronizationListener@2d7a86a9[[

Internal Exception: java.sql.BatchUpdateException: ORA-01461: can bind a LONG value only for insert into a LONG column

Error Code: 1461 javax.resource.ResourceException: weblogic.transaction.RollbackException: Unexpected exception in beforeCompletion: sync=org.eclipse.persistence.transaction.JTASynchronizationListener@2d7a86a9

Internal Exception: java.sql.BatchUpdateException: ORA-01461: can bind a LONG value only for insert into a LONG column

Error Code: 1461
        at oracle.tip.adapter.jms.inbound.JmsConsumer.afterDelivery(JmsConsumer.java:321)
        at oracle.tip.adapter.jms.inbound.JmsConsumer.runInbound(JmsConsumer.java:982)
        at oracle.tip.adapter.jms.inbound.JmsConsumer.run(JmsConsumer.java:893)
        at oracle.integration.platform.blocks.executor.WorkManagerExecutor$1.run(WorkManagerExecutor.java:184)
        at weblogic.work.j2ee.J2EEWorkManager$WorkWithListener.run(J2EEWorkManager.java:209)
        at weblogic.invocation.ComponentInvocationContextManager._runAs(ComponentInvocationContextManager.java:352)
        at weblogic.invocation.ComponentInvocationContextManager.runAs(ComponentInvocationContextManager.java:337)
        at weblogic.work.LivePartitionUtility.doRunWorkUnderContext(LivePartitionUtility.java:57)
        at weblogic.work.PartitionUtility.runWorkUnderContext(PartitionUtility.java:41)
        at weblogic.work.SelfTuningWorkManagerImpl.runWorkUnderContext(SelfTuningWorkManagerImpl.java:644)
        at weblogic.work.SelfTuningWorkManagerImpl.runWorkUnderContext(SelfTuningWorkManagerImpl.java:622)
        at weblogic.work.DaemonWorkThread.run(DaemonWorkThread.java:39)
Caused by: javax.resource.ResourceException: weblogic.transaction.RollbackException: Unexpected exception in beforeCompletion: sync=org.eclipse.persistence.transaction.JTASynchronizationListener@2d7a86a9

Internal Exception: java.sql.BatchUpdateException: ORA-01461: can bind a LONG value only for insert into a LONG column

Error Code: 1461
        at oracle.tip.adapter.fw.jca.messageinflow.MessageEndpointImpl.afterDelivery(MessageEndpointImpl.java:379)
        at oracle.tip.adapter.jms.inbound.JmsConsumer.afterDelivery(JmsConsumer.java:306)
        ... 11 more
Caused by: weblogic.transaction.RollbackException: Unexpected exception in beforeCompletion: sync=org.eclipse.persistence.transaction.JTASynchronizationListener@2d7a86a9

Internal Exception: java.sql.BatchUpdateException: ORA-01461: can bind a LONG value only for insert into a LONG column
...

Of course the process instance failed. It took me some time to figure out what went wrong. It was suggested that it was due to the composite sensors, but I waved that away initially, since I had introduced them earlier (although a colleague had removed them for no apparent reason and I re-introduced them). I couldn't see how these were the problem, because it ran through the unit tests and in most cases they weren't a problem.

But the error indicates a triggered interface, [endpoint_name: DWN_MyCompositeInterface_WS], and in this case a destination, [destination = jms/DWN_OUTGOING, subscriber = null].

Since the process is triggered from the queue with messages from BPEL sensors, these Composite Sensors were defined on variableData elements from the BPEL sensor XML. And as said above, the variables appear in the XML in the order they're defined in the BPEL sensor.

One of the Composite Sensors was defined as:
<sensor sensorName="UitgaandBerichtNummer" kind="service" target="undefined" filter="" xmlns:imp1="http://xmlns.oracle.com/bpel/sensor">
    <serviceConfig service="DWN_MessageArchivingBeginExchange_WS" expression="$in.actionData/imp1:actionData/imp1:payload/imp1:variableData/imp1:data" operation="ArchiverenBeginUitwisseling" outputDataType="string" outputNamespace="http://www.w3.org/2001/XMLSchema"/>
</sensor>

With the expression: $in.actionData/imp1:actionData/imp1:payload/imp1:variableData/imp1:data.
Because variableData is a list, there can be more than one occurrence, and without an index the expression selects all of them. If, for instance, one of them contains the actual message to archive, and that message is quite large, then the resulting value becomes too large. And that results in the error above.

All I had to do was select the proper occurrence of the message id, as shown in the Sensor dialog above. The expression had to be: $in.actionData/imp1:actionData/imp1:payload/imp1:variableData[2]/imp1:data

Conclusion

This solved the error. I wanted to log this for future reference, but also to show how to track down this seemingly obscure error.

Friday, 28 February 2020

Vagrant box with Oracle Linux 77 basebox - additional fiddlings

Last year, on the way home from the UK OUG TechFest 19, I wrote about creating a Vagrant box from the Oracle-provided base box in this article.

Lately I wanted to use it, but I stumbled upon some nasty pitfalls.

Failed to load SELinux policy

For starters, as described in that article, I added the 'Server with GUI' package and packaged the box into a new base box. This is handy, because the creation of the GUI box is quite time-consuming and requires an intermediate restart. But if I use the new Server-with-GUI base box, the new VM fails to start with the message: "Systemd: Failed to load SELinux policy. Freezing.".

This I could solve using support document 2314747.1. I still have to add it to my provisioning scripts, but before packaging the box you need to edit the file /etc/selinux/config:
# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
# enforcing - SELinux security policy is enforced.
# permissive - SELinux prints warnings instead of enforcing.
# disabled - No SELinux policy is loaded.


SELINUX=permissive

# SELINUXTYPE= can take one of three values:
# targeted - Targeted processes are protected,
# minimum - Modification of targeted policy. Only selected processes are protected.
# mls - Multi Level Security protection.

SELINUXTYPE=targeted

The option SELINUX turned out to be set to enforcing.
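To have this set before packaging, a minimal sketch of the lines I could add to a provisioning script (assuming the stock /etc/selinux/config layout shown above):
# Switch SELinux from enforcing to permissive in the config, keeping a backup
sudo cp /etc/selinux/config /etc/selinux/config.org
sudo sed -i 's/^SELINUX=enforcing/SELINUX=permissive/' /etc/selinux/config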


Vagrant insecure keypair

When you first start your VM, you'll probably see repeated messages indicating that Vagrant cannot authenticate to the box over SSH. How this works is described in the Vagrant documentation about creating a base box, under the chapter "vagrant" User. I think when I started with Vagrant, I did not fully grasp this part. Maybe the documentation changed. Basically, you need to download the Vagrant insecure keypair from GitHub. Then, in the VM, you'll need to update the file authorized_keys in the .ssh folder of the vagrant user:
[vagrant@localhost ~]$ cd .ssh/
[vagrant@localhost .ssh]$ ls
authorized_keys
[vagrant@localhost .ssh]$ pwd
/home/vagrant/.ssh
[vagrant@localhost .ssh]$

The contents look like:
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDGn8m1kC2mHfPx0dno+HNNYfhgXUZHn8Rt7orIm2Hlc7g4JkvCN6bO7mrYhUbdN2qjy2TziPdlndTAI0E1HK2GbwRM8+N02CNzBg5zvJosMQhweU7EXsDZjYRNJ/SAgVlU5EqIPzmznFjp08uzvBAe2u+L4dZ9kIZ23z/GVWupNpTJmem6LsqS3xg/h0qKf2LFv55SqtLVLlC1sAxL4fvBi3fFIsR9+NLf0fxb+tV/xrprn3yYXT1GyRPVtYAbiOzE3gUOWLKQZVkCXN8R69JeY8P5YgPGx9gSLCiNyLLmqCdF4oLIBMg82lZ0a3/BXG7AoAHVxh7caOoWJrFAjVK9 vagrant

This is a generated public key that matches a newly generated private key, stored as the private_key file in the .vagrant\machines\darwin\virtualbox\ folder of my project.
If you update the authorized_keys file of the vagrant user with the public key of the Vagrant insecure keypair, then you need to remove that private_key file. On the next start Vagrant will notice the insecure key and replace the private_key file with a newly generated one. By the way, I noticed that sometimes Vagrant won't remove the insecure public key from authorized_keys. That means that someone could log in to your box using the insecure keypair. You might not want that, so remove that public key from the file afterwards.
For convenience, the insecure public key is:
ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA6NF8iallvQVp22WDkTkyrtvp9eWW6A8YVr+kz4TjGYe7gHzIw+niNltGEFHzD8+v1I2YJ6oXevct1YeS0o9HZyN1Q9qgCgzUFtdOKLv6IedplqoPkcmF0aYet2PkEDo3MlTBckFXPITAMzF8dJSIFo9D8HfdOV0IAdx4O7PtixWKn5y2hMNG0zQPyUecp4pzC6kivAIhyfHilFR61RGL+GPXQ2MWZWFYbAGjyiYJnAmCP3NOTd0jMZEnDkbUvxhMmBYSdETk1rRgm+R4LOzFUGaHqHDLKLX+FIPKcF96hrucXzcWyLbIbEgE98OHlnVYCzRdK8jlqm8tehUc9c9WhQ== vagrant

It is the same key that is published in the Vagrant repository on GitHub.
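A minimal sketch of putting it back, run inside the VM as the vagrant user (paste the full public key listed above; it is abbreviated here):
# Overwrite the generated key with the Vagrant insecure public key, so a freshly
# packaged box can be reached with 'vagrant up' again
echo 'ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA6NF8i...== vagrant' > /home/vagrant/.ssh/authorized_keys
chmod 700 /home/vagrant/.ssh
chmod 600 /home/vagrant/.ssh/authorized_keys
Don't forget to also remove the private_key file from your .vagrant folder, as described above.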

Oracle user

For my installations I always use an oracle user. And it is quite safe to say I always use the password 'welcome1', for demo and training boxes that is (fieeewww).

But I found out that I could not log on to that user using ssh with a simple password.
That is because in the Oracle vagrant basebox password authentication is disabled. To solve it, edit the file /etc/ssh/sshd_config and find the option PasswordAuthentication:
...
# To disable tunneled clear text passwords, change to no here!
PasswordAuthentication yes
#PermitEmptyPasswords no
#PasswordAuthentication no
...

Comment out the line with value no and uncomment the one with yes; the excerpt above already shows the result.

You can add this to your script to enable it:
echo 'Allow PasswordAuthentication'
sudo cp /etc/ssh/sshd_config /etc/ssh/sshd_config.org
sudo sed -i 's/PasswordAuthentication no/#PasswordAuthentication no/g' /etc/ssh/sshd_config
sudo sed -i 's/#PasswordAuthentication yes/PasswordAuthentication yes/g' /etc/ssh/sshd_config
sudo service sshd restart

You need to restart sshd, as shown in the last line, for this to take effect.
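As a quick check from the host, you could force a password login; this sketch assumes the default Vagrant port forward of guest port 22 to host port 2222:
# Force password authentication to verify the sshd_config change took effect
ssh -p 2222 -o PreferredAuthentications=password -o PubkeyAuthentication=no oracle@127.0.0.1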

Conclusion

I'll need to add the changes above to my Vagrant scripts, at least to the one creating the box based on Oracle's basebox. And now I need to look into the file systems created in the Oracle box, to be able to extend them with my own... But that might be input for another story.

Thursday, 27 February 2020

My first node program: get all the named complexTypes from an xsd file

Lately I've been working on some scripting for scanning SOA projects for several queries, somewhat in line with my script to scan JCA files. I found that ANT is very helpful in selecting the particular files to process. In another script I also found it very useful to use JavaScript within ANT.

In my JCA scan example, and in my other scripts, at some point I need to read and interpret the found XML document to get the information from it in ANT and save it to a file. For that I used XSLT to transform the particular document, so I could address the particular elements as properties in ANT.

In my latest fiddlings I need to gather all references to elements from a large base xsd in XSDs, WSDLs, BPELs, XSLTs and composite.xml files. I quickly found that transforming a wsdl or xsd using XSLT is hard, if not nearly impossible. For instance, I needed to get all the type attributes referencing an element or type within the target namespace of the referenced base xsd. And although mostly the same namespace prefix is used, I can't rely on that. So in the end I used a few JavaScript functions to parse the document as a string.

Now, at this point I wanted to get all the named xsd:complexTypes, and I found it fun to try that in a Node.js script. You might be surprised, but I hadn't done this before, although I do some JavaScript once in a while. I might have done some demo Node.js try-outs, but those don't count.

So I came up with this script:
const fs = require('fs');
var myArgs = process.argv.slice(2);
const xsdFile=myArgs[0];
const complexTypeFile = myArgs[1];
//
const complexTypeStartTag="<xsd:complexType"
// Log arguments
console.log('myArgs: ', myArgs);
console.log('xsd: ', xsdFile);
//
// Extract an attribute value from an element
function getAttributeValue(element, attributeName){
   var attribute =""
   var attributePos=element.indexOf(attributeName);
   if (attributePos>-1){
     attribute = element.substring(attributePos);
     attributePos=attribute.indexOf("=")+1;
     attribute=attribute.substring(attributePos).trim();
     var enclosingChar=attribute.substring(0,1);
     attribute=attribute.substring(1,attribute.indexOf(enclosingChar,1)); 
   }
   return attribute;
}
// Create complexType Output file.
fs.writeFile(complexTypeFile,'ComplexType\n', function(err){
    if(err) throw err;
});
// Read and process the xsdFile
fs.readFile(xsdFile, 'utf8', function(err, contents){
  //console.log(contents);
  var posStartComplexType = contents.indexOf(complexTypeStartTag);
  while  (posStartComplexType > -1){
   // Abstract complexType declaration
   var posEndComplexType= contents.indexOf(">", posStartComplexType);
   console.log("Pos: ".concat(posStartComplexType, "-", posEndComplexType));
   var complexType= contents.substring(posStartComplexType, posEndComplexType+1);
   // Log the complexType
   console.log("Complex type: [".concat(complexType,"]"));
   var typeName = getAttributeValue(complexType, "name")
   if (typeName==""){
       typeName="embedded";
   }
   console.log(typeName);
   fs.appendFileSync(complexTypeFile, typeName.concat("\n"));
   //Move on to find next possible complexType
   contents=contents.substring(posEndComplexType+1);
   posStartComplexType = contents.indexOf(complexTypeStartTag);
  }
});
console.log('Done with '+xsdFile);

It parses the arguments, expecting as the first argument a reference to the XSD file to parse, and as the second argument the filename to write all the names to.

The function getAttributeValue() finds an attribute in the provided element, based on the attributeName, and returns its value if found; otherwise it returns an empty string.

The main script first writes a header row to the output csv file. It then reads the xsd file asynchronously (which is why the 'Done' message is shown before the console logs from the processing of the file), and finds every occurrence of the xsd:complexType start tag in the contents. For each occurrence it finds the end of the start tag declaration and, within it, the name attribute. This name attribute is then appended (synchronously) to the csv file.
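For example, a hypothetical invocation (the script and file names are just placeholders):
node getComplexTypes.js MyBaseTypes.xsd complexTypes.csv
This writes one named complexType per line to complexTypes.csv, after the ComplexType header row.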

How to read a file I found here. Appending to a file, here on StackOverflow.

Tuesday, 25 February 2020

Get XML Document from SOA Infra table

Today I'm investigating a problem in an interaction between Siebel and SOA Suite. I needed to find a set of correlated messages, where BPEL expects only one message but gets two from Siebel.

I have a query like:
SELECT 
  dmr.message_guid,
  dmr.document_id,
  dmr.part_name,
  dmr.document_type,
  dmr.dlv_partition_date,
  xdc.document_type,
  xdc.document,
  GET_XML_DOCUMENT(xdc.document,to_clob(' ')) doc_PAYLOAD,
  xdc.document_binary_format,
  dmg.conv_id ,
  dmg.conv_type,
  dmg.properties msg_properties
FROM
  document_dlv_msg_ref dmr
  join xml_document xdc on xdc.document_id = dmr.document_id
  join dlv_message dmg on dmg.message_guid = dmr.message_guid
  where dmg.cikey  in (select cikey from cube_instance where flow_id = 4537505 or flow_id = 4537504);

This gets all the messages related to two flows that run in parallel, based on the same message exchange.
The thing is that, of course, you want to see the contents of the message in the xml_document table. The document column is a BLOB that contains the document as parsed by the Oracle XML classes, and you need those classes to serialize it back to a String representation of the document. I found this nice solution from Michael Heyn.

In 12c this did not work right away. First I had to rename the class to SOAXMLDocument, because I got a Java compilation error complaining that XMLDocument was already in use. I think it conflicts with the imported oracle.xml.parser.v2.XMLDocument class. Renaming it was the simple solution.

This resulted in the following adapted version:

set define off;
CREATE OR REPLACE AND COMPILE JAVA SOURCE NAMED "SOAXMLDocument" as
// Title:   Oracle Java Class to Decode XML_DOCUMENT.DOCUMENT Content
  // Author:  Michael Heyn, Martien van den Akker
  // Created: 2015 05 08
  // Twitter: @TheHeynComplex
  // History:
  // 2020-02-25: Added GZIP Unzip and renamed class to SOAXMLDocument
  // Import all required classes
  import oracle.xml.parser.v2.XMLDOMImplementation;
  import java.io.ByteArrayOutputStream;
  import java.io.IOException;
  import oracle.xml.binxml.BinXMLStream;
  import oracle.xml.binxml.BinXMLDecoder;
  import oracle.xml.binxml.BinXMLException;
  import oracle.xml.binxml.BinXMLProcessor;
  import oracle.xml.scalable.InfosetReader;
  import oracle.xml.parser.v2.XMLDocument;
  import oracle.xml.binxml.BinXMLProcessorFactory;
  import java.util.zip.GZIPInputStream;

  // Import required sql classes
  import java.sql.Blob;
  import java.sql.Clob;
  import java.sql.SQLException;

  public class SOAXMLDocument{

      public static Clob GetDocument(Blob docBlob, Clob tempClob){
      XMLDOMImplementation xmlDom = new XMLDOMImplementation();
      BinXMLProcessor xmlProc = BinXMLProcessorFactory.createProcessor();
      ByteArrayOutputStream byteStream;
      String xml;
      try {
              // Create a GZIP InputStream from the Blob Object
              GZIPInputStream gzipInputStream = new GZIPInputStream(docBlob.getBinaryStream());
              // Create the Binary XML Stream from the GZIP InputStream
              BinXMLStream xmlStream = xmlProc.createBinXMLStream(gzipInputStream);
              // Decode the Binary XML Stream 
              BinXMLDecoder xmlDecode = xmlStream.getDecoder();
              InfosetReader xmlReader = xmlDecode.getReader();
              XMLDocument xmlDoc = (XMLDocument) xmlDom.createDocument(xmlReader);

              // Instantiate a Byte Stream Object
              byteStream = new ByteArrayOutputStream();

              // Load the Byte Stream Object
              xmlDoc.print(byteStream);

              // Get the string value of the Byte Stream Object as UTF8
              xml = byteStream.toString("UTF8");

              // Empty the temporary SQL Clob Object
              tempClob.truncate(0);

              // Load the temporary SQL Clob Object with the xml String
              tempClob.setString(1,xml);
              return tempClob;
      } 
      catch (BinXMLException ex) {
        return null;
      }
      catch (IOException e) {
        return null;
      }
      catch (SQLException se) {
        return null;
      }
      catch (Exception e){
        return null;
      }
    }
  }
/

Also, I needed to execute set define off before it. Another thing is that in SOA Suite 12c the documents are apparently stored as GZIP objects. Therefore I had to wrap the binary stream from the docBlob parameter in a GZIPInputStream, and feed that to xmlProc.createBinXMLStream().

Then create the following Function wrapper:
CREATE OR REPLACE FUNCTION GET_XML_DOCUMENT(p_blob BLOB
                                           ,p_clob CLOB) 
                    RETURN CLOB AS LANGUAGE JAVA
                      NAME 'SOAXMLDocument.GetDocument(java.sql.Blob, java.sql.Clob) return java.sql.Clob';

You can use it in a query as:
select * from (
  select xdc2.*, GET_XML_DOCUMENT(xdc2.document,to_clob(' ')) doc_PAYLOAD
  from
    (select * 
    from xml_document xdc
    where xdc.doc_partition_date > to_date('25-02-20 09:10:00', 'DD-MM-YY HH24:MI:SS') and xdc.doc_partition_date < to_date('25-02-20 09:20:00', 'DD-MM-YY HH24:MI:SS') 
    ) xdc2
)  xdc3
where xdc3.doc_payload like '%16720284%' or xdc3.doc_payload like  '%9F630D36DD24214EE053082D260AB792%'

In this example I scan over the documents in a certain period and filter on the contents of the blob. Notice that the database needs to deserialize the blob of every row to be able to filter on it, so you should not do this over the complete table.

Friday, 21 February 2020

My Weblogic on Kubernetes Cheatsheet, part 3.


In two previous parts I already wrote about my Kubernetes experiences and the important commands I learned. My way of learning and working is to put those commands in little scriptlets, one more useful than the other, but all with the goal of keeping them together.

It is time to write part 3, in which I will present some maintenance functions, mainly to connect to your pods.
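All scriptlets below start by sourcing oke_env.sh, which holds the environment-specific variables. As an illustration, a rough sketch of what mine contains; the values are just examples for the MedRec domain used in this series, so adjust them to your own cluster and domain:
#!/bin/bash
# Example values only; adjust to your own setup.
export K8S_NS=weblogic-operator-ns
export K8S_SA=weblogic-operator-sa
export WL_OPERATOR_NAME=weblogic-operator
export HELM_CHARTS_HOME=~/git/weblogic-kubernetes-operator
export WLS_DMN_NS=medrec-domain-ns
export WLS_DMN_UID=medrec-domain
export ADM_SVR=AdminServer
export MR_SVR1=medrec-server1
export ADM_POD=medrec-domain-adminserver
export MR1_POD=medrec-domain-medrec-server1
export DMN_HOME=/u01/oracle/user_projects/domains/medrec-domain
export MR_DB_CRED=medrec-db-credentials
export LCL_LOGS_HOME=~/logs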

Get node and pod info

getdmnpod-status.sh

In part 2 I ended with the script getdmnpods.sh. You can parse the output using awk to get just the status of the pods:

#!/bin/bash
SCRIPTPATH=$(dirname $0)
#
. $SCRIPTPATH/oke_env.sh
echo Get K8s pod statuses for $WLS_DMN_NS
kubectl -n $WLS_DMN_NS get pods -o wide| awk '{print $1 " - "  $3}'

getpods.sh

With getdmnpods.sh you can get the status of the pods running your domain. There's also a weblogic operator pod. To show this, use:
#!/bin/bash
SCRIPTPATH=$(dirname $0)
#
. $SCRIPTPATH/oke_env.sh
echo Get K8s pods for $K8S_NS
kubectl get po -n $K8S_NS

getstmpods.sh

The Kubernetes cluster infrastructure itself also consists of a set of pods. Show these using:
#!/bin/bash
SCRIPTPATH=$(dirname $0)
#
. $SCRIPTPATH/oke_env.sh
echo Get K8s pods for kube-system
kubectl -n kube-system get pods


getnodes.sh


On OCI your cluster runs on a set of nodes; these OCI instances are what actually run your system. You can show them, with their IPs and Kubernetes versions, using:
#!/bin/bash
SCRIPTPATH=$(dirname $0)
#
. $SCRIPTPATH/oke_env.sh
echo Get K8s nodes
kubectl get node

getdmnsitlogs.sh


Of course you want to see some logs, especially when something went wrong. Perhaps you want to see some specific log lines. For instance, this script shows the logs of the admin pod, grepping for lines related to the situational config:
#!/bin/bash
SCRIPTPATH=$(dirname $0)
#
. $SCRIPTPATH/oke_env.sh
echo Get situational config logs for $WLS_DMN_NS server $ADM_POD
kubectl -n $WLS_DMN_NS logs $ADM_POD | grep -i situational

Weblogic Operator

When I was busy getting the MedRec sample application deployed to Kubernetes, at one point I got stuck because, as I later learned, my Weblogic Operator version was behind.

list_wlop.sh 

I learned I could get Weblogic Operator information as follows:

#!/bin/bash
SCRIPTPATH=$(dirname $0)
#
. $SCRIPTPATH/oke_env.sh
echo List Weblogic Operator $WL_OPERATOR_NAME
cd $HELM_CHARTS_HOME
helm list $WL_OPERATOR_NAME
cd $SCRIPTPATH

delete_weblogic_operator.sh 

When you find that the operator needs an update, you can remove it with this script:

#!/bin/bash
SCRIPTPATH=$(dirname $0)
#
. $SCRIPTPATH/oke_env.sh
echo Delete Weblogic Operator $WL_OPERATOR_NAME
cd $HELM_CHARTS_HOME
helm del --purge $WL_OPERATOR_NAME 
cd $SCRIPTPATH

install_weblogic_operator.sh


Then, of course, you want to install it again properly. This can be done using:
#!/bin/bash
SCRIPTPATH=$(dirname $0)
#
. $SCRIPTPATH/oke_env.sh
echo Install Weblogic Operator $WL_OPERATOR_NAME
cd $HELM_CHARTS_HOME
helm install kubernetes/charts/weblogic-operator \
  --name $WL_OPERATOR_NAME \
  --namespace $K8S_NS \
  --set image=oracle/weblogic-kubernetes-operator:2.3.0 \
  --set serviceAccount=$K8S_SA \
  --set "domainNamespaces={}"
cd $SCRIPTPATH

Take note of the image named in this script and make sure it matches the latest operator version. In this script I apparently still use 2.3.0, but as of November 15th, 2019, 2.4.0 has been released.

upgrade_weblogic_operator.sh

Besides installing and deleting the operator chart, you can also upgrade the operator with Helm:
#!/bin/bash
SCRIPTPATH=$(dirname $0)
#
. $SCRIPTPATH/oke_env.sh
echo Upgrade Weblogic Operator $WL_OPERATOR_NAME with domainNamespace $WLS_DMN_NS
cd $HELM_CHARTS_HOME
helm upgrade \
  --reuse-values \
  --set "domainNamespaces={$WLS_DMN_NS}" \
  --wait \
  $WL_OPERATOR_NAME \
  kubernetes/charts/weblogic-operator
cd $SCRIPTPATH

Connect to the pods

The containers in the pods are running Linux (I know that is a rather blunt statement), so you might want to be able to connect to them. In the case of Weblogic, you might want to run wlst.sh to navigate the MBean tree, to investigate certain settings and find out why they don't work at runtime.

admbash.sh and mr1bash.sh

To get a shell in the AdminServer container you can run the script admbash.sh:
#!/bin/bash
SCRIPTPATH=$(dirname $0)
#
. $SCRIPTPATH/oke_env.sh

echo Start bash in $WLS_DMN_NS - $ADM_POD
kubectl exec -n $WLS_DMN_NS -it $ADM_POD /bin/bash

And for one of the managed servers a variant of mr1bash.sh:
#!/bin/bash
SCRIPTPATH=$(dirname $0)
#
. $SCRIPTPATH/oke_env.sh
echo Get K8s pods for $WLS_DMN_NS
kubectl -n $WLS_DMN_NS get pods -o wide
kubectl exec -n medrec-domain-ns -it medrec-domain-medrec-server1 /bin/bash

On the command line you can then run wlst.sh and connect to your AdminServer.
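For instance, a sketch of such a session inside the admin pod; it assumes the demo weblogic/welcome1 credentials, the default listen port 7001 and the standard FMW location of wlst.sh in the image, and the prompt will show your own domain name:
$ORACLE_HOME/oracle_common/common/bin/wlst.sh
wls:/offline> connect('weblogic', 'welcome1', 't3://localhost:7001')
wls:/medrec-domain/serverConfig/> ls()
wls:/medrec-domain/serverConfig/> exit()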

dwnldAdmLogs.sh and dwnldMr1Logs.sh


The previous scripts help to navigate through your container and inspect its contents. However, you'll find that the containers lack certain basic commands like vi. The cat command does exist, but it is not very convenient for investigating large log files. So very soon I felt the need to download the log files and investigate them with a proper editor. You can do this for the admin server using dwnldAdmLogs.sh:
#!/bin/bash
SCRIPTPATH=$(dirname $0)
#
. $SCRIPTPATH/oke_env.sh
#
LOG_FILE=$ADM_SVR.log
OUT_FILE=$ADM_SVR.out
#
echo From $WLS_DMN_NS/$ADM_POD download $DMN_HOME/servers/$ADM_SVR/logs/$LOG_FILE to $LCL_LOGS_HOME/$LOG_FILE
kubectl cp $WLS_DMN_NS/$ADM_POD:$DMN_HOME/servers/$ADM_SVR/logs/$LOG_FILE $LCL_LOGS_HOME/$LOG_FILE
echo From $WLS_DMN_NS/$ADM_POD download $DMN_HOME/servers/$ADM_SVR/logs/$OUT_FILE to $LCL_LOGS_HOME/$OUT_FILE
kubectl cp $WLS_DMN_NS/$ADM_POD:$DMN_HOME/servers/$ADM_SVR/logs/$OUT_FILE $LCL_LOGS_HOME/$OUT_FILE

And for one of the managed servers a variant of dwnldMr1Logs.sh:
#!/bin/bash
SCRIPTPATH=$(dirname $0)
#
. $SCRIPTPATH/oke_env.sh
#
LOG_FILE=$MR_SVR1.log
OUT_FILE=$MR_SVR1.out
#
echo From $WLS_DMN_NS/$MR1_POD download $DMN_HOME/servers/$MR_SVR1/logs/$LOG_FILE to $LCL_LOGS_HOME/$LOG_FILE
kubectl cp $WLS_DMN_NS/$MR1_POD:$DMN_HOME/servers/$MR_SVR1/logs/$LOG_FILE $LCL_LOGS_HOME/$LOG_FILE
echo From $WLS_DMN_NS/$MR1_POD download $DMN_HOME/servers/$MR_SVR1/logs/$OUT_FILE to $LCL_LOGS_HOME/$OUT_FILE
kubectl cp $WLS_DMN_NS/$MR1_POD:$DMN_HOME/servers/$MR_SVR1/logs/$OUT_FILE $LCL_LOGS_HOME/$OUT_FILE

I found these scripts very handy, because I can quickly and repeatedly download the particular log files.

Describe kube resources


Many resources in Kubernetes can be described. In my case I found it very useful when debugging the configuration overrides.

descjdbccm.sh


One subject in the Weblogic Operator tutorial workshop is to do configuration overrides, and one of the steps is to create a configuration map. This is one example of a resource that can be described:
#!/bin/bash
SCRIPTPATH=$(dirname $0)
#
. $SCRIPTPATH/oke_env.sh
echo Describe jdbc configuration map of $WLS_DMN_NS
kubectl describe cm jdbccm -n $WLS_DMN_NS

Useful to see what the latest override values are.

override_weblogic_domain.sh

To apply the Weblogic configuration override I use the following script:

#!/bin/bash
SCRIPTPATH=$(dirname $0)
#
. $SCRIPTPATH/oke_env.sh
echo Delete configuration map jdbccm for Domain $WLS_DMN_UID 
kubectl -n $WLS_DMN_NS delete cm jdbccm
#echo Override Weblogic Domain $WLS_DMN_UID using $SCRIPTPATH/medrec-domain/override
kubectl -n $WLS_DMN_NS create cm jdbccm --from-file $SCRIPTPATH/medrec-domain/override
kubectl -n $WLS_DMN_NS label cm jdbccm weblogic.domainUID=$WLS_DMN_UID

Obviously descjdbccm.sh is very useful in combination with this script.

descmrsecr.sh


Another part of the configuration overrides is the storage of the database credentials and connection URL. We store those in a secret that is referenced in the override files. This is smart, because now you only need to create or update the secret and then run the configuration override script. To describe the secret you can use:
#!/bin/bash
SCRIPTPATH=$(dirname $0)
#
. $SCRIPTPATH/oke_env.sh
echo Describe secret $MR_DB_CRED of namespace $WLS_DMN_NS
kubectl describe secret $MR_DB_CRED -n $WLS_DMN_NS

Since it is a secret, you can show the names of the attributes in the secret, but not their values.
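If you really need to check a value, you can pull the base64-encoded data out of the secret and decode it yourself, for example:
# Show the (base64 decoded) username stored in the secret
kubectl -n $WLS_DMN_NS get secret $MR_DB_CRED -o jsonpath='{.data.username}' | base64 --decode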

create_mrdbsecret.sh


You need to be able to create or update secrets. Apparently you need to delete a secret first to be able to (re)create it. This script does that for two secrets, for two datasources:
#!/bin/bash
SCRIPTPATH=$(dirname $0)
#
. $SCRIPTPATH/oke_env.sh
#
function prop {
    grep "${1}" $SCRIPTPATH/credentials.properties|cut -d'=' -f2
}
#
MR_DB_USER=$(prop 'db.medrec.username')
MR_DB_PWD=$(prop 'db.medrec.password')
MR_DB_URL=$(prop 'db.medrec.url')
#
echo Delete Medrec DB Secret $MR_DB_CRED for $WLS_DMN_NS
kubectl -n $WLS_DMN_NS delete secret $MR_DB_CRED
echo Create Medrec DB Secret $MR_DB_CRED for $MR_DB_USER and URL $MR_DB_URL
kubectl -n $WLS_DMN_NS create secret generic $MR_DB_CRED --from-literal=username=$MR_DB_USER --from-literal=password=$MR_DB_PWD --from-literal=url=$MR_DB_URL
kubectl -n $WLS_DMN_NS label secret $MR_DB_CRED weblogic.domainUID=$WLS_DMN_UID
#
SMPL_DB_CRED=dbsecret
echo Delete Medrec DB Secret $SMPL_DB_CRED for $WLS_DMN_NS
kubectl -n $WLS_DMN_NS delete secret $SMPL_DB_CRED
echo Create DB Secret dbsecret $SMPL_DB_CRED for  $WLS_DMN_NS
kubectl -n $WLS_DMN_NS create secret generic $SMPL_DB_CRED --from-literal=username=scott2 --from-literal=url=jdbc:oracle:thin:@test.db.example.com:1521/ORCLCDB
kubectl -n $WLS_DMN_NS label secret $SMPL_DB_CRED weblogic.domainUID=$WLS_DMN_UID

This script gets the MedRec database credentials from a property file. Obviously you need to store those values in a safe place, so you might figure that having them in a plain property file is not very secure. You could of course change the script to prompt for the particular password. And you might want to adapt it to load a different property file per target environment.
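As a sketch, prompting for the password instead of reading it from the property file could look like this:
# Ask for the password on the command line instead of storing it in a file
read -s -p "Password for ${MR_DB_USER}: " MR_DB_PWD
echo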

Can I?

The Kubernetes API of course has an authorization scheme. One of the first things in the Weblogic Operator tutorial is that, when you create your OKE cluster, you should make sure you are authorized to access your Kubernetes cluster using a system admin account.

To check if you're able to call the proper APIs for your setup, you can use the following scripts:

canideploy.sh

#!/bin/bash
SCRIPTPATH=$(dirname $0)
#
. $SCRIPTPATH/oke_env.sh
echo K8s Can I deploy?
kubectl auth can-i create deploy

canideployassystem.sh


#!/bin/bash
SCRIPTPATH=$(dirname $0)
#
. $SCRIPTPATH/oke_env.sh
echo K8s Can I deploy as system?
kubectl auth can-i create deploy --as system:serviceaccount:kube-system:default

Conclusion

At this point I have shown you my scriptlets so far. There is still a lot to investigate. For instance, there are examples to create your OKE cluster from scratch with Terraform, which is very promising as an alternative to the online wizards. Also, I would like to create some (micro)services to get data from the MedRec database and run them in pods side by side with the MedRec application, maybe even with an Oracle JET front end.