Tuesday, 19 May 2020

Honey, I shrunk the database!

For my current assignment I need to get 3 SOA/B2B environments running. I'm going to try out the multi-hop functionality, where the middle B2B environment works as a dispatcher. The idea is that the Dev, Test and Pre-Prod environments can send messages to the remote trading partner's test environment through this in-between environment. To the remote trading partner, that in-between environment should act as the local test environment, but it should then be able to dispatch the message to the actual Dev, Test or Pre-Prod environment.

I envision a solution where this in-between B2B environment acts as a dispatching B2B hop. So I need to have 3 VMs running, each with its own database (although I could have them share one database) and Fusion Middleware domain.

The Vagrant project that I wrote about earlier this week creates a database and then provisions all the FMW installations and a domain. That database is a default-installed 12cR1 database (which I could upgrade). In my setup it takes about 1.8 GB of memory. My laptop has 16 GB, so to have 2 VMs running on it and leave Windows some memory too, I want each VM to use at most 6.5 GB.
I need to run an AdminServer and a SOAServer, that I gave 1GB and GB respectively. And since they're not Docker containers, they both run a full Oracle Linux 7 OS too.

So, one of the main steps is to downsize the database to "very small".

My starting point is an article I wrote years ago about shrinking an 11g database to XE proportions.
As described in that article, I created a pfile as follows:
create pfile from spfile;

This creates an initorcl.ora in the $ORACLE_HOME/dbs folder.

I copied that file to initorcl.ora.small and edited it:
orcl.__oracle_base='/app/oracle'#ORACLE_BASE set from environment
*.dispatchers='(PROTOCOL=TCP) (SERVICE=orclXDB)'

The lines that I changed are kept, with the original values commented out. I downsized the db_cache_size, java_pool, large_pool and pga_aggregate_target, as well as the sga_target, shared_io_pool (leaving it auto-managed) and shared_pool. I needed to set the sga_target to at least 350M to get the database started.
SOA Suite needs at least 300 processes and open_cursors.
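To give an impression, here is a sketch of the kind of entries involved. Note that the values below are illustrative; they are not the exact figures from my initorcl.ora.small, so tune them for your own setup:

```ini
# Illustrative downsized parameters; values are examples, not my actual file.
*.sga_target=350m
*.db_cache_size=100m
*.java_pool_size=50m
*.large_pool_size=16m
*.shared_pool_size=150m
*.pga_aggregate_target=200m
# shared_io_pool_size left unset, to have it auto-managed
*.processes=300
*.open_cursors=300
```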

Now the script checks if the database is running. It is actually a copy of the startDB.sh script, also in my Vagrant project.

If it is running, it shuts down the database and creates a pfile as a backup. If the database isn't running, it only creates the pfile.

Then it copies initorcl.ora.small over initorcl.ora, creates an spfile from it, and starts the database again.

SCRIPTPATH=$(dirname $0)
. $SCRIPTPATH/../../install_env.sh
. $SCRIPTPATH/db12c_env.sh
db_num=`ps -ef|grep pmon |grep -v grep |awk 'END{print NR}'`

if [ $db_num -gt 0 ]; then
  echo "Database Already RUNNING."
  $ORACLE_HOME/bin/sqlplus "/ as sysdba" <<EOF
shutdown immediate;
prompt create new initorcl.ora.
create pfile from spfile;
EOF
  # With use of a pluggable database the following line needs to be added after the startup command:
  # startup pluggable database pdborcl;
  sleep 10
  echo "Database Services Successfully Stopped."
else
  echo "Database Not yet RUNNING."
  $ORACLE_HOME/bin/sqlplus "/ as sysdba" <<EOF
prompt create new initorcl.ora.
create pfile from spfile;
EOF
  sleep 10
fi
echo "Copy initorcl.ora.small to $ORACLE_HOME/dbs/initorcl.ora, with backup to $ORACLE_HOME/dbs/initorcl.ora.org"
mv $ORACLE_HOME/dbs/initorcl.ora $ORACLE_HOME/dbs/initorcl.ora.org
cp $SCRIPTPATH/initorcl.ora.small $ORACLE_HOME/dbs/initorcl.ora
echo "Starting Oracle Database again..."
$ORACLE_HOME/bin/sqlplus "/ as sysdba" <<EOF
create spfile from pfile;
startup
EOF

The scripts can be found here.

Oh, by the way: I must state here that I'm not a DBA. I'm not sure if those settings make sense together (I should have someone review them), so you should not rely on them for a serious environment. Not even a development one; my motto is that a development environment is a developer's production environment. For me this setup is meant for trying something out, and to show you the mechanism.

Friday, 15 May 2020

New FMW 12c Vagrant project


Several years ago I blogged about automatic creation of Fusion Middleware environments.
See for instance this article on installation, this one on the domain creation and these notes.

In between I wrote several articles on issues I got, start/stop scripts, etc.

Later I found out about Vagrant and have worked with that ever since, enhancing my setup through the years; for instance, nowadays I use different provisioners to set up my environment.

Until this week I struggled with an Oracle Linux 7 Update 7 box, as I wrote earlier this week.

For my current customer I needed to create a few B2B environments. So I went back to my Vagrant projects and scripts and built a Vagrant project that can create a SOA/BPM/OSB+B2B environment.

You can find it on GitHub in my ol77_soa12c project, with the scripts in this folder.

You'll need to get an Oracle Linux 7U7 Vagrant base box yourself. I tried to create one based on the simple base box of Oracle, as I wrote earlier this year. But in the end I created a simple base install of OL7U7, with one disk, a Server with GUI package, and a vagrant user (with password vagrant), as you can read in earlier articles.

Also, you'll need to download the installer zips from edelivery.oracle.com.


What I did with my scripts in this revision is split up the main method of the domain creation script:
def main():
  try:
    # Section 1: Base Domain + Admin Server
    # Section 2: Extend FMW Domain with templates
    # Section 3: Create Domain Datasources
    # Section 4: Create UnixMachines, Clusters and Managed Servers
    # Section 5: Add Servers to ServerGroups.
    print('Updating the domain.')
    print('Closing the domain.')
    # Section 6: Create boot properties files.
    # Section 7: Checks
    print('\n7. Checks')
  except NameError, e:
    print 'Apparently properties not set.'
    print "Please check the property: ", sys.exc_info()[0], sys.exc_info()[1]
    apply(traceback.print_exception, sys.exc_info())

All these sections I moved to separate sub-functions. I added an extra section for checks and validations; one check I added lists the server groups of the domain servers. I may add other validations later.

Policy Manager

Another thing is that in the method addFMWServersToGroups() I changed the script so that it complies with the topology suggestions from Oracle Product Management. An important aspect here is that for SOA, OSB and BAM you have to determine whether you want a domain with only one of these products, or a combined domain. By default these products have the Oracle Web Services Manager Policy Manager targeted to their particular cluster or server. However, you should have only one Policy Manager per domain. So, if you want a combined domain with both SOA and OSB, you need to create a separate WSM_PM cluster. This is controlled by the wsmEnabled property in the fmw.properties file. Based on this same property the server groups are added:
# Add a FMW server to the appropriate server group(s), depending on whether a separate WSM PM Cluster is added.
def addFMWServerToGroups(server, svrGrpOnly, svrGrpComb):
  if wsmEnabled == 'true':
    print 'WSM Enabled: add server group(s) '+",".join(svrGrpOnly)+' to '+server
    setServerGroups(server, svrGrpOnly)
  else:
    print 'WSM Disabled: add server group(s) '+",".join(svrGrpComb)+' to '+server
    setServerGroups(server, svrGrpComb)
# 5. Set Server Groups to the Domain Servers
def addFMWServersToGroups():
  print('\n5. Add Servers to ServerGroups')
  #print 'Add server groups '+adminSvrGrpDesc+ ' to '+adminServerName
  #setServerGroups(adminServerName, adminSvrGrp)     
  if osbEnabled == 'true':
    addFMWServerToGroups(osbSvr1, osbSvrOnlyGrp, osbSvrCombGrp)
    if osbSvr2Enabled == 'true': 
      addFMWServerToGroups(osbSvr2, osbSvrOnlyGrp, osbSvrCombGrp)
  if soaEnabled == 'true':
    addFMWServerToGroups(soaSvr1, soaSvrOnlyGrp, soaSvrCombGrp)
    if soaSvr2Enabled == 'true': 
      addFMWServerToGroups(soaSvr2, soaSvrOnlyGrp, soaSvrCombGrp)
  if bamEnabled == 'true':
    addFMWServerToGroups(bamSvr1, bamSvrOnlyGrp, bamSvrCombGrp)
    if bamSvr2Enabled == 'true':  
      addFMWServerToGroups(bamSvr2, bamSvrOnlyGrp, bamSvrCombGrp)
  if wsmEnabled == 'true':
    print 'Add server group(s) '+",".join(wsmSvrGrp)+' to '+wsmSvr1+' and possibly '+wsmSvr2
    setServerGroups(wsmSvr1, wsmSvrGrp)
    if wsmSvr2Enabled == 'true': 
      setServerGroups(wsmSvr2, wsmSvrGrp)
  if wcpEnabled == 'true':
    print 'Add server group(s) '+",".join(wcpSvrGrp)+' to '+wcpSvr1+' and possibly '+wcpSvr2
    setServerGroups(wcpSvr1, wcpSvrGrp)
    if wcpSvr2Enabled == 'true': 
      setServerGroups(wcpSvr2, wcpSvrGrp)
  print('Finished ServerGroups.')

The groups are declared at the top:
# ServerGroup definitions
# See also: https://blogs.oracle.com/soa/soa-suite-12c%3a-topology-suggestions

For SOA, OSB and BAM you see that there is a default or "combined" server group, and a "server only" group. If wsmEnabled is false, the combined group is used, and then the Policy Manager is added to the managed server or cluster. If it is true, the "only" group is used.
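For illustration, the declarations could look like the sketch below. The group names are my assumption of the standard FMW 12c template server groups, following the topology blog referenced above; verify them against your own domain templates:

```python
# ServerGroup declarations; the names are assumed examples and should be verified.
# "Only" groups: used when a separate WSM-PM cluster hosts the Policy Manager.
soaSvrOnlyGrp = ['SOA-MGD-SVRS-ONLY']
osbSvrOnlyGrp = ['OSB-MGD-SVRS-ONLY']
bamSvrOnlyGrp = ['BAM12-MGD-SVRS-ONLY']
# "Combined" groups: also target the WSM Policy Manager to the product's own cluster.
soaSvrCombGrp = ['SOA-MGD-SVRS']
osbSvrCombGrp = ['OSB-MGD-SVRS-COMBINED']
bamSvrCombGrp = ['BAM12-MGD-SVRS']
# Group for a dedicated WSM-PM cluster.
wsmSvrGrp = ['WSMPM-MAN-SVR']
```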

Other Remarks

An earlier version of this project failed when creating the domain: somehow I had to run it twice to get the domain created. That is somehow solved in this revision.

In my scripts I still use the zips. The scripts are quite easily adaptable, and hopefully I'll do that in the near future. But my current customer still uses this version, so I went from here.

The project also adapts the nodemanager properties, creates a nodemanager Linux service, and copies start/stop scripts. However, I missed setting the listener port and type (plain or SSL) of the nodemanager in the Machine definition. So starting the domain needs a little bit of tweaking.

And for my project I need at least 3 environments. So I need to downsize the database and the managed servers so that I can run an environment in 6 GB, and can have 2 VMs on my 16 GB laptop.

And I need to add a bridged network adapter to the Vagrant project, so that I can have the environments connect to each other.

Wednesday, 13 May 2020

Vagrant Oracle Linux and the Vagrant user: use a password

Last year and earlier this year I have been struggling to create a new Vagrant box based on an installation of the Oracle base box. I had some extra requirements, for instance having a GUI in my server to get to a proper desktop when that comes in handy. In the end I found it more convenient to create a base box myself. I also tried using an ssh key to have the vagrant user connect to the box to do the provisioning. But whatever I did, I got "Cannot allocate memory" errors in some stage of the provisioning, for instance when upgrading the guest additions:

Using an ssh key is actually the recommended approach. Read my previous blog article on the matter for instructions on how to do it.

It struck me that I couldn't get an Oracle Linux 7U7 box working as a base for new VMs, and I wondered why I would get these nasty memory allocation errors.
I upgraded from Vagrant 2.2.6 to 2.2.9, and VirtualBox from 6.1.4 to 6.1.6, but the problem wasn't related to these versions.

And just now I realized that the one thing I do differently with this box, compared to my OL7U5 box, is using the vagrant user's ssh key instead of the password. So, I made sure that the vagrant user can log on using an ssh password, for instance by reviewing the file /etc/ssh/sshd_config and specifically the option PasswordAuthentication:
# To disable tunneled clear text passwords, change to no here!
PasswordAuthentication yes
#PermitEmptyPasswords no
#PasswordAuthentication no
Make sure it's set to yes.

Then I repacked the box:

d:\Projects\vagrant>vagrant package --base OL7U7 --output  d:\Projects\vagrant\boxes\ol77GUIv1.1.box
==> OL7U7: Exporting VM...
==> OL7U7: Compressing package to: d:/Projects/vagrant/boxes/ol77GUIv1.1.box

And removed the old box:
d:\Projects\vagrant\ol77_gui>vagrant box list
ol75        (virtualbox, 0)
ol77        (virtualbox, 0)
ol77GUIv1.0 (virtualbox, 0)
ol77GUIv1.1 (virtualbox, 0)

d:\Projects\vagrant\ol77_gui>vagrant box remove ol77GUIv1.0
Removing box 'ol77GUIv1.0' (v0) with provider 'virtualbox'...

d:\Projects\vagrant\ol77_gui>vagrant box list
ol75        (virtualbox, 0)
ol77        (virtualbox, 0)
ol77GUIv1.1 (virtualbox, 0)

I found that it might be useful to check whether there are Vagrant processes currently running, since at some point I got an exception from Vagrant saying that the box was locked:
d:\Projects\vagrant\ol77_gui>vagrant global-status
id       name   provider state  directory
There are no active Vagrant environments on this computer! Or,
you haven't destroyed and recreated Vagrant environments that were
started with an older version of Vagrant.

If your box is running it could say something like:
d:\Projects\vagrant\ol77_gui>vagrant  global-status
id       name   provider   state   directory
42cbd44  darwin virtualbox running d:/Projects/vagrant/ol77_gui

The above shows information about all known Vagrant environments
on this machine. This data is cached and may not be completely
up-to-date (use "vagrant global-status --prune" to prune invalid
entries). To interact with any of the machines, you can go to that
directory and run Vagrant, or you can use the ID directly with
Vagrant commands from any directory. For example:
"vagrant destroy 1a2b3c4d"

You could also do a prune of invalid entries:
d:\Projects\vagrant\ol77_gui>vagrant  global-status --prune
id       name   provider   state   directory
42cbd44  darwin virtualbox running d:/Projects/vagrant/ol77_gui

In the Vagrantfile I set the ssh username and password:
Vagrant.configure("2") do |config|
  # The most common configuration options are documented and commented below.
  # For a complete reference, please see the online documentation at
  # https://docs.vagrantup.com.

  # Every Vagrant development environment requires a box. You can search for
  # boxes at https://vagrantcloud.com/search.
  config.vm.box = BOX_NAME
  config.vm.define "darwin"
  config.ssh.username = "vagrant"
  config.ssh.password = "vagrant"
  config.vm.provider :virtualbox do |vb|
    vb.name = VM_NAME
    vb.gui = true
    vb.memory = VM_MEMORY
    vb.cpus = VM_CPUS
    # Set clipboard and drag&drop bidirectional
    vb.customize ["modifyvm", :id, "--clipboard-mode", "bidirectional"]
    vb.customize ["modifyvm", :id, "--draganddrop", "bidirectional"]
  end
end
It is common to have the vagrant user's password be "vagrant". Lastly I "upped" my VM. And this all seemed to solve my memory allocation problems.

Apparently, we can't use the ssh-key to provision the box.

Wednesday, 29 April 2020

SOA Suite: SOAP Faults in BPEL and Mediator

In the past few months we have been running a "robustness project" at our current customer, to improve our SOA Suite implementation. We had a lot of duplication, and it turned out that we had a lot of WSDLs in our composite projects. Many of those are a result of BPEL projects from 10g. But some of them couldn't be moved, because that would break the project.

The first projects where I encountered the problem were projects with Mediators. After moving the WSDLs to MDS, most of our SoapUI/ReadyAPI unit tests worked, except for those simulating a SOAP Fault. It seemed that the Mediator could not map the SOAP Fault. I "searched myself an accident", as we would say in Holland, but without any luck.

Actually, I can't find any documentation that talks about catching SOAP Faults in SOA Suite. Which is a weird thing, because BPM Suite, sharing the same soa-infra and process engine, does cater for SOAP Faults: BPM can react to SOAP Faults with specific exception transitions.

So what is this weird behavior? Well, SOA Suite, apparently both BPEL and Mediator, interprets SOAP Faults as Remote Faults! So, in BPEL you can't catch one as a SOAP Fault, and a Mediator can't route it in the correct way, as the UI would suggest.

However, just now I found a solution. That is, I found it earlier for Mediator, but couldn't explain it. Since the same behavior can be seen in BPEL as well, I can now write down my story.

Normally, if you add a reference to your composite, it looks something like this in the composite.xml source:
  <reference name="ManagedFileService"
             ui:wsdlLocation="oramds:/apps/Generiek/WSDLs/ManagedFileUtilProcess.wsdl">
    <interface.wsdl interface="http://xmlns.darwin-it.nl/soa/wsdl/Generiek/ManagedFileService/ManagedFileProcess#wsdl.interface(managedfile_ptt)"/>
    <binding.ws port="http://xmlns.darwin-it.nl/soa/wsdl/Generiek/ManagedFileService/ManagedFileProcess#wsdl.endpoint(managedfile_ptt/managedfile_pttPort)"
                location="oramds:/apps/Generiek/WSDLs/ManagedFileUtilProcess.wsdl" soapVersion="1.1">
      <property name="endpointURI">http://soa.hostname:soa.port/soa-infra/services/default/ManagedFileService/managedfileprocess_client_ep</property>
      <property name="weblogic.wsee.wsat.transaction.flowOption" type="xs:string" many="false">WSDLDriven</property>
    </binding.ws>
  </reference>

What you see here is a ui:wsdlLocation attribute, which should point to a WSDL in the MDS. Under binding.ws there is a location attribute that, at many customers, would point to your concrete WSDL. At my current customer we work with an endpointURI property that is overwritten using the config plan. Either way, the service element of the WSDL is in the MDS, or on the remote server if you refer to an external service.

If the external service raises a SOAP Fault, it can't be caught other than through a Catch All:

This also makes it hard to interact with the fault in the correct way, to interpret the underlying problem. This service should rename or move a file on the filesystem, and in this case the file couldn't be found. But the Remote Fault would suggest something else.

But there is a really easy workaround. I wouldn't call it a solution, since I think SOA Suite should just handle SOAP Faults correctly.

In the composite's WSDLs folder, make a copy of the concrete WSDL and strip it down as follows:
<?xml version='1.0' encoding='UTF-8'?>
<wsdl:definitions name="ManagedFileProcess"
                  targetNamespace="http://xmlns.darwin-it.nl/soa/wsdl/Generiek/ManagedFileService/ManagedFileProcess"
                  xmlns:tns="http://xmlns.darwin-it.nl/soa/wsdl/Generiek/ManagedFileService/ManagedFileProcess"
                  xmlns:wsdl="http://schemas.xmlsoap.org/wsdl/" xmlns:soap="http://schemas.xmlsoap.org/wsdl/soap/">
  <wsdl:import namespace="http://xmlns.darwin-it.nl/soa/wsdl/Generiek/ManagedFileService/ManagedFileProcess"
               location="oramds:/apps/Generiek/WSDLs/ManagedFileUtilProcess.wsdl"/>
  <wsdl:service name="managedfile_ptt">
    <wsdl:port name="managedfile_pttPort" binding="tns:managedfile_pttSOAP11Binding">
      <soap:address location="http://soa.hostname:soa.port/soa-infra/services/default/ManagedFileService/managedfileprocess_client_ep"/>
    </wsdl:port>
  </wsdl:service>
</wsdl:definitions>
In the service element there is a reference to the SOAP endpoint of the service, in this case simply a local SOA Suite service.

In the composite you need to change the reference:
  <reference name="ManagedFileService"
             ui:wsdlLocation="oramds:/apps/Generiek/WSDLs/ManagedFileUtilProcess.wsdl">
    <interface.wsdl interface="http://xmlns.darwin-it.nl/soa/wsdl/Generiek/ManagedFileService/ManagedFileProcess#wsdl.interface(managedfile_ptt)"/>
    <binding.ws port="http://xmlns.darwin-it.nl/soa/wsdl/Generiek/ManagedFileService/ManagedFileProcess#wsdl.endpoint(managedfile_ptt/managedfile_pttPort)"
                location="WSDLs/ManagedFileUtilProcess.wsdl" soapVersion="1.1">
      <property name="endpointURI">http://soa.hostname:soa.port/soa-infra/services/default/ManagedFileService/managedfileprocess_client_ep</property>
      <property name="weblogic.wsee.wsat.transaction.flowOption" type="xs:string" many="false">WSDLDriven</property>
    </binding.ws>
  </reference>
Here you change the binding.ws location to refer to the local stripped WSDL. The endpointURI property does not make much sense anymore, but it does not get in the way.

You also need to change your config plan to contain a WSDL replacement along these lines (the search and replace values are examples; the replace value depends on your target environment):
 <wsdlAndSchema name="*">
   <searchReplace>
     <search>http://soa.hostname:soa.port</search>
     <replace>http://your-soa-host:your-soa-port</replace>
   </searchReplace>
 </wsdlAndSchema>

This will do a replacement of the endpoint in the WSDLs used.

If you deploy this using the config plan, then, amazingly, SOAP Faults are correctly interpreted:

Now we get a neat SOAP Fault caught by a specific catch, based on the fault in the WSDL of the partner link.

Again, this works similarly for Mediator.

Wednesday, 11 March 2020

SOA Composite Sensors and the ORA-01461: can bind a LONG value only for insert into a LONG column exception

Last year I wrote about SOA Composite Sensors and how they can be a good alternative to the BPEL indexes of 10g. This week I was confronted with the "ORA-01461: can bind a LONG value only for insert into a LONG column" exception in one of our composites. It occurred in a process that is triggered to do some message archiving.

A bit about BPEL Sensors

Funny thing is that this archiving process is triggered by a BPEL sensor. To recap: you can create a BPEL sensor by clicking the monitor icon in your BPEL process:
It's the heart-beat-monitor icon in the button area top right of the panel. It then shows the BPEL process in a layered mode; you can't edit the process any more, but you can add, remove and edit sensors. Sensors are indicated with little antenna icons on an activity. You can put them on any kind of activity, even Empty activities, which adds an extra potential reason to use an Empty activity.

If you click an antenna icon you can define a series of sensors; editing one brings up the following dialog:

It allows you to add variables, and possibly expressions on elements within those variables, to a sensor. You can also add one or more sensor actions, triggered at a trigger moment (Evaluation Time) that can be set as well.

A Sensor action can be set as:

In 11g we used the JMS Adapter, but apparently that doesn't work in 12c the way it used to. So we changed it to JMS Queues. As with composite sensors, in the BPEL folder together with the BPEL process you get two files: YourBPELProcess_sensor.xml containing the sensor definitions, and YourBPELProcess_sensorAction.xml containing the sensor action definitions.

When the sensor is activated, a JMS message is produced on the queue, with an XML payload following a predefined XSD. In that XML you will find info about the triggering BPEL instance, like name and instance ID, and a list of variable data. Each of the variables defined in the sensor is in the list, in the order they are defined in the sensor.
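To give an idea, here is a simplified, hand-written sketch of such a sensor message, derived from the sensor namespace and the actionData/payload/variableData/data path used in the sensor expressions further down. The real XSD has more elements, and the values here are made up:

```xml
<actionData xmlns="http://xmlns.oracle.com/bpel/sensor">
  <!-- info about the triggering BPEL instance (process name, instance id, ...) -->
  <payload>
    <!-- one variableData entry per sensor variable, in definition order -->
    <variableData>
      <data>12345</data> <!-- e.g. a message id -->
    </variableData>
    <variableData>
      <data>...a potentially large message payload...</data>
    </variableData>
  </payload>
</actionData>
```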

By the way, BPEL sensors are part of the product since before 10g...

The actual error case

In our case this message archiving process was triggered from another BPEL process using a sensor. The archiving process was listening to the queue as defined in the sensor action, picking up messages from certain sensors using a message selector based on the sensor name.

On the JMS interface (Exposed Service) of the message archiving process, I defined a set of composite sensors, to be able to search for instances. This helps in finding the archiving instance that belongs to the triggering process, since sensors work asynchronously and the two instances are not tied together in a Flow Trace.

In some cases, we got the following exception in the Diagnostic log:
[2020-03-11T09:19:50.855+01:00] [DWN_SOA_01] [WARNING] [] [oracle.soa.adapter.jms.inbound] [tid: DaemonWorkThread: '639' of WorkManager: 'default_Adapters'] [userId: myadmin] [ecid: c8e2b75e-7aed-4305-84c5-9ef5cf928c7b-0bb833b1,0:11:9] [APP: soa-infra] [partition-name: DOMAIN] [tenant-name: GLOBAL] [oracle.soa.tracking.FlowId: 463993] [oracle.soa.tracking.InstanceId: 762213] [oracle.soa.tracking.SCAEntityId: 381353] [oracle.soa.tracking.FaultId: 400440] [FlowId: 0000N38eGGo5aaC5rFK6yY1UNay100012j]  [composite_name: MyComposite] [composite_version: 1.0] [endpoint_name: DWN_MyCompositeInterface_WS] JmsConsumer_runInbound: [destination = jms/DWN_OUTGOING, subscriber = null] : weblogic.transaction.RollbackException: Unexpected exception in beforeCompletion: sync=org.eclipse.persistence.transaction.JTASynchronizationListener@2d7a86a9[[

Internal Exception: java.sql.BatchUpdateException: ORA-01461: can bind a LONG value only for insert into a LONG column

Error Code: 1461 javax.resource.ResourceException: weblogic.transaction.RollbackException: Unexpected exception in beforeCompletion: sync=org.eclipse.persistence.transaction.JTASynchronizationListener@2d7a86a9

Internal Exception: java.sql.BatchUpdateException: ORA-01461: can bind a LONG value only for insert into a LONG column

Error Code: 1461
        at oracle.tip.adapter.jms.inbound.JmsConsumer.afterDelivery(JmsConsumer.java:321)
        at oracle.tip.adapter.jms.inbound.JmsConsumer.runInbound(JmsConsumer.java:982)
        at oracle.tip.adapter.jms.inbound.JmsConsumer.run(JmsConsumer.java:893)
        at oracle.integration.platform.blocks.executor.WorkManagerExecutor$1.run(WorkManagerExecutor.java:184)
        at weblogic.work.j2ee.J2EEWorkManager$WorkWithListener.run(J2EEWorkManager.java:209)
        at weblogic.invocation.ComponentInvocationContextManager._runAs(ComponentInvocationContextManager.java:352)
        at weblogic.invocation.ComponentInvocationContextManager.runAs(ComponentInvocationContextManager.java:337)
        at weblogic.work.LivePartitionUtility.doRunWorkUnderContext(LivePartitionUtility.java:57)
        at weblogic.work.PartitionUtility.runWorkUnderContext(PartitionUtility.java:41)
        at weblogic.work.SelfTuningWorkManagerImpl.runWorkUnderContext(SelfTuningWorkManagerImpl.java:644)
        at weblogic.work.SelfTuningWorkManagerImpl.runWorkUnderContext(SelfTuningWorkManagerImpl.java:622)
        at weblogic.work.DaemonWorkThread.run(DaemonWorkThread.java:39)
Caused by: javax.resource.ResourceException: weblogic.transaction.RollbackException: Unexpected exception in beforeCompletion: sync=org.eclipse.persistence.transaction.JTASynchronizationListener@2d7a86a9

Internal Exception: java.sql.BatchUpdateException: ORA-01461: can bind a LONG value only for insert into a LONG column

Error Code: 1461
        at oracle.tip.adapter.fw.jca.messageinflow.MessageEndpointImpl.afterDelivery(MessageEndpointImpl.java:379)
        at oracle.tip.adapter.jms.inbound.JmsConsumer.afterDelivery(JmsConsumer.java:306)
        ... 11 more
Caused by: weblogic.transaction.RollbackException: Unexpected exception in beforeCompletion: sync=org.eclipse.persistence.transaction.JTASynchronizationListener@2d7a86a9

Internal Exception: java.sql.BatchUpdateException: ORA-01461: can bind a LONG value only for insert into a LONG column

Of course the process instance failed. It took me some time to figure out what went wrong. It was suggested that it was due to the composite sensors, but I waved that away initially, since I had introduced them earlier (although a colleague had removed them for no apparent reason and I re-introduced them). I couldn't see how these were the problem, because everything ran through the unit tests, and in most cases they weren't a problem.

But the error indicates a triggered interface: [endpoint_name: DWN_MyCompositeInterface_WS], and in this case a destination: [destination = jms/DWN_OUTGOING, subscriber = null].

Since the process is triggered from the queue with messages from BPEL Sensors these Composite Sensors were defined on variableData from the BPEL Sensor XML. And as said above, the variables appear in the XML in the order they're defined in the BPEL Sensor.

One of the composite sensors was defined as:
<sensor sensorName="UitgaandBerichtNummer" kind="service" target="undefined" filter="" xmlns:imp1="http://xmlns.oracle.com/bpel/sensor">
    <serviceConfig service="DWN_MessageArchivingBeginExchange_WS" expression="$in.actionData/imp1:actionData/imp1:payload/imp1:variableData/imp1:data" operation="ArchiverenBeginUitwisseling" outputDataType="string" outputNamespace="http://www.w3.org/2001/XMLSchema"/>
</sensor>

With the expression: $in.actionData/imp1:actionData/imp1:payload/imp1:variableData/imp1:data.
Because it is a list, there can be more than one variableData occurrence. And without an index, the expression selects all of them. If, for instance, one of them contains the actual message to archive, and that message is quite large, then the resulting value becomes too large. And that results in the error above.

All I had to do was select the proper occurrence of the message id, as shown in the Sensor dialog above. The expression had to be: $in.actionData/imp1:actionData/imp1:payload/imp1:variableData[2]/imp1:data
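The corrected composite sensor definition then becomes identical to the one shown above, with only the expression changed so that it indexes the second variableData occurrence:

```xml
<sensor sensorName="UitgaandBerichtNummer" kind="service" target="undefined" filter="" xmlns:imp1="http://xmlns.oracle.com/bpel/sensor">
    <serviceConfig service="DWN_MessageArchivingBeginExchange_WS" expression="$in.actionData/imp1:actionData/imp1:payload/imp1:variableData[2]/imp1:data" operation="ArchiverenBeginUitwisseling" outputDataType="string" outputNamespace="http://www.w3.org/2001/XMLSchema"/>
</sensor>
```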


This solved the error. I wanted to log this for future reference, but also to show how to track down this seemingly obscure error.

Friday, 28 February 2020

Vagrant box with Oracle Linux 7.7 basebox - additional fiddlings

Last year on the way home from the UK OUG TechFest 19, I wrote about creating a Vagrant box from the Oracle provided basebox in this article.

Lately I wanted to use it but I stumbled upon some nasty pitfalls.

Failed to load SELinux policy

For starters, as described in the article, I added the 'Server with GUI' package and packaged the box as a new base box. This is handy, because the creation of the GUI box is quite time-consuming and requires an intermediate restart. But if I use the new Server-with-GUI base box, the new VM fails to start with the message: "Systemd: Failed to load SELinux policy. Freezing.".

This I could solve using support document 2314747.1. I still have to add it to my provision scripts, but before packaging the box you need to review the file /etc/selinux/config:
# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
# enforcing - SELinux security policy is enforced.
# permissive - SELinux prints warnings instead of enforcing.
# disabled - No SELinux policy is loaded.
SELINUX=enforcing

# SELINUXTYPE= can take one of three values:
# targeted - Targeted processes are protected,
# minimum - Modification of targeted policy. Only selected processes are protected.
# mls - Multi Level Security protection.

The option SELINUX turned out to be set to enforcing.

Vagrant unsecure keypair

When  you first start your VM, you'll probably see messages like:
The way this works is described in the Vagrant documentation about creating a base box, under the chapter "vagrant" User. I think when I started with Vagrant, I did not fully grasp this part. Maybe the documentation changed. Basically you need to download the Vagrant insecure keypair from GitHub. Then, in the VM, you'll need to update the file authorized_keys in the .ssh folder of the vagrant user:
[vagrant@localhost ~]$ cd .ssh/
[vagrant@localhost .ssh]$ ls
authorized_keys
[vagrant@localhost .ssh]$ pwd
/home/vagrant/.ssh
[vagrant@localhost .ssh]$

The contents look like:
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDGn8m1kC2mHfPx0dno+HNNYfhgXUZHn8Rt7orIm2Hlc7g4JkvCN6bO7mrYhUbdN2qjy2TziPdlndTAI0E1HK2GbwRM8+N02CNzBg5zvJosMQhweU7EXsDZjYRNJ/SAgVlU5EqIPzmznFjp08uzvBAe2u+L4dZ9kIZ23z/GVWupNpTJmem6LsqS3xg/h0qKf2LFv55SqtLVLlC1sAxL4fvBi3fFIsR9+NLf0fxb+tV/xrprn3yYXT1GyRPVtYAbiOzE3gUOWLKQZVkCXN8R69JeY8P5YgPGx9gSLCiNyLLmqCdF4oLIBMg82lZ0a3/BXG7AoAHVxh7caOoWJrFAjVK9 vagrant

This is a generated public key, matching a newly generated private key that Vagrant stores in my .vagrant folder:
As shown, it is the private_key file in the .vagrant\machines\darwin\virtualbox\ folder.
If you update the authorized_keys file of the vagrant user with the public key of the Vagrant insecure keypair, then you need to remove the private_key file. Vagrant will notice that it finds the insecure key and will replace it with a newly generated private one. By the way, I noticed that sometimes Vagrant won't remove the insecure public key from authorized_keys. That means that someone could log in to your box using the insecure keypair. You might not want that, so remove that public key from the file.
For convenience, the insecure public key is:
ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA6NF8iallvQVp22WDkTkyrtvp9eWW6A8YVr+kz4TjGYe7gHzIw+niNltGEFHzD8+v1I2YJ6oXevct1YeS0o9HZyN1Q9qgCgzUFtdOKLv6IedplqoPkcmF0aYet2PkEDo3MlTBckFXPITAMzF8dJSIFo9D8HfdOV0IAdx4O7PtixWKn5y2hMNG0zQPyUecp4pzC6kivAIhyfHilFR61RGL+GPXQ2MWZWFYbAGjyiYJnAmCP3NOTd0jMZEnDkbUvxhMmBYSdETk1rRgm+R4LOzFUGaHqHDLKLX+FIPKcF96hrucXzcWyLbIbEgE98OHlnVYCzRdK8jlqm8tehUc9c9WhQ== vagrant

It's this file on GitHub:

Oracle user

For my installations I always use an Oracle user. And it is quite safe to say I always use the password 'welcome1', for demo and training boxes that is (fieeewww).

But I found out that I could not log on to that user using ssh with a simple password.
That is because in the Oracle vagrant base box the PasswordAuthentication option is set to no. To solve it, edit the file /etc/ssh/sshd_config and find the PasswordAuthentication option:
# To disable tunneled clear text passwords, change to no here!
PasswordAuthentication yes
#PermitEmptyPasswords no
#PasswordAuthentication no

Comment out the line with value no and uncomment the one with yes.

You can add this to your script to enable it:
echo 'Allow PasswordAuthentication'
sudo cp /etc/ssh/sshd_config /etc/ssh/sshd_config.org
sudo sed -i 's/PasswordAuthentication no/#PasswordAuthentication no/g' /etc/ssh/sshd_config
sudo sed -i 's/#PasswordAuthentication yes/PasswordAuthentication yes/g' /etc/ssh/sshd_config
sudo service sshd restart

You need to restart the sshd as shown in the last line, to have this take effect.
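If you want to check the effect of those sed substitutions before touching the real config, you can try them on a scratch copy first. A small sketch, using a made-up minimal sshd_config fragment (not the real file):

```shell
# Dry run: apply the same sed substitutions to a scratch file and inspect the result.
CFG=$(mktemp)
cat > "$CFG" <<'EOF'
# To disable tunneled clear text passwords, change to no here!
#PasswordAuthentication yes
#PermitEmptyPasswords no
PasswordAuthentication no
EOF
# Comment out the 'no' line, then uncomment the 'yes' line
sed -i 's/PasswordAuthentication no/#PasswordAuthentication no/g' "$CFG"
sed -i 's/#PasswordAuthentication yes/PasswordAuthentication yes/g' "$CFG"
grep PasswordAuthentication "$CFG"
```

After running it, only the yes variant is active; the no line is commented out.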


I'll need to add the changes above to my Vagrant scripts, at least the one creating the box based on the one from Oracle. And now I need to look into the file systems created in the Oracle box, to be able to extend them with mine... But that might be input for another story.

Thursday, 27 February 2020

My first node program: get all the named complexTypes from an xsd file

Lately I'm working on some scripting for scanning SOA projects for several queries, some more in line with my script to scan JCA files. I found that ANT is very helpful in selecting the particular files to process. In another script I also found it very useful to use JavaScript within ANT.

In my JCA scan example, and my other scripts, at some points I need to read and interpret the XML document found, to get the information from it into ANT and save it to a file. For that I used XSL to transform the particular document, to be able to address the particular elements as properties in ANT.

In my latest fiddlings I need to gather all the references to elements from a large base xsd in XSDs, WSDLs, BPELs, XSLTs and composite.xml files. I quickly found that transforming a wsdl or xsd using XSLT is hard, if not near to impossible. For instance, I needed to get all the type attributes referencing an element or type within the target namespace of the referenced base xsd. And although mostly the same namespace prefix is used, I can't rely on that. So in the end I used a few JavaScript functions to parse the document as a string.

Now, at this point I wanted to get all the named xsd:complexTypes, and I thought it would be fun to try that in a Node.js script. You might be surprised, but I haven't done this before, although I did some JavaScript once in a while. I might have done some demo Node.js try-outs, but don't count those.

So I came up with this script:
const fs = require('fs');
var myArgs = process.argv.slice(2);
const xsdFile = myArgs[0];
const complexTypeFile = myArgs[1];
const complexTypeStartTag = "<xsd:complexType";
// Log arguments
console.log('myArgs: ', myArgs);
console.log('xsd: ', xsdFile);
// Extract an attribute value from an element
function getAttributeValue(element, attributeName) {
  var attribute = "";
  var attributePos = element.indexOf(attributeName);
  if (attributePos > -1) {
    // Skip past the attribute name and the '='
    attribute = element.substring(attributePos + attributeName.length + 1);
    // The first character is the enclosing quote; read up to its closing counterpart
    var enclosingChar = attribute.substring(0, 1);
    attribute = attribute.substring(1, attribute.indexOf(enclosingChar, 1));
  }
  return attribute;
}
// Create complexType output file with a header row (synchronously,
// so the header is in place before the appends below)
fs.writeFileSync(complexTypeFile, 'ComplexType\n');
// Read and process the xsdFile
fs.readFile(xsdFile, 'utf8', function (err, contents) {
  if (err) throw err;
  var posStartComplexType = contents.indexOf(complexTypeStartTag);
  while (posStartComplexType > -1) {
    // Extract the complexType start tag declaration
    var posEndComplexType = contents.indexOf(">", posStartComplexType);
    console.log("Pos: ".concat(posStartComplexType, "-", posEndComplexType));
    var complexType = contents.substring(posStartComplexType, posEndComplexType + 1);
    // Log the complexType
    console.log("Complex type: [".concat(complexType, "]"));
    var typeName = getAttributeValue(complexType, "name");
    if (typeName != "") {
      fs.appendFileSync(complexTypeFile, typeName.concat("\n"));
    }
    // Move on to find the next possible complexType
    posStartComplexType = contents.indexOf(complexTypeStartTag, posEndComplexType);
  }
});
console.log('Done with ' + xsdFile);

It parses the arguments, where it expects the XSD file to parse as the first argument, and the name of the file to write the complexType names to as the second.

The function getAttributeValue() finds an attribute in the provided element, based on the attributeName, and returns its value if found. Otherwise it returns an empty string.

The main script first writes a header row to the output csv file. It then reads the xsd file asynchronously (which is why the done message is shown before the console logs from the processing of the file), and finds every occurrence of the xsd:complexType start tag in the contents. For each occurrence it finds the end of the start tag declaration and, within it, the name attribute. This name attribute is then appended (synchronously) to the csv file.
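To illustrate the approach, here is a small standalone sketch that does the same indexOf-based scanning on an inline XSD snippet (a made-up example, not the actual base xsd), collecting the names in an array instead of a file:

```javascript
// Standalone sketch: find named xsd:complexType declarations in an XSD snippet
// using the same indexOf-based string scanning as the script above.
const contents = [
  '<xsd:schema xmlns:xsd="http://www.w3.org/2001/XMLSchema">',
  '  <xsd:complexType name="AddressType"><xsd:sequence/></xsd:complexType>',
  '  <xsd:complexType name="PersonType"><xsd:sequence/></xsd:complexType>',
  '</xsd:schema>'
].join('\n');
const startTag = '<xsd:complexType';
const names = [];
let pos = contents.indexOf(startTag);
while (pos > -1) {
  // Isolate the start tag declaration up to and including its '>'
  const end = contents.indexOf('>', pos);
  const decl = contents.substring(pos, end + 1);
  // Pick the value of the name attribute out of the declaration
  const namePos = decl.indexOf('name="');
  if (namePos > -1) {
    names.push(decl.substring(namePos + 6, decl.indexOf('"', namePos + 6)));
  }
  // Continue searching after this declaration
  pos = contents.indexOf(startTag, end);
}
console.log(names); // prints [ 'AddressType', 'PersonType' ]
```

Note that the closing tags (`</xsd:complexType>`) don't match the start tag search string, because of the slash, so only the declarations are found.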

I found how to read a file here, and how to append to a file here on StackOverflow.