Thursday, 19 April 2018

Garbage First in JDeveloper

At my current customer we work with VDIs: Virtual Desktop Images, that several times a day become very, very slow. So slow, even, that the machine more or less stalls for a minute or two.

JDeveloper is not known as the Ferrari among IDEs. One of the causes is that the default heap settings are very poor: 128M-800M. Especially when you use the SOA or BPM Quickstart, the heap will need to grow several times at startup. And soon after you start working in it, you'll run into out-of-memory errors.

Because of the VDIs I made several changes to try to improve performance.
The main thing is to set Xms and Xmx both to 2048M. To this day I haven't found that I need more.

But I also found that using the Garbage First collector gives me slightly better performance.

To set it, together with the heap, add/change the following options in the ide.conf in ${JDEV_HOME}\jdeveloper\ide\bin\:
# Set the default memory options for the Java VM which apply to both 32 and 64-bit VM's.
# These values can be overridden in the user .conf file, see the comment at the top of this file.
#AddVMOption  -Xms128M
#AddVMOption  -Xmx800M
AddVMOption  -Xms2048M
AddVMOption  -Xmx2048M
AddVMOption  -XX:+UseG1GC 
AddVMOption  -XX:MaxGCPauseMillis=200

Find more on the command line options in this G1GC tutorial.

You can also use the ParNew collector in combination with the ParOld or ConcMarkSweep collector, as suggested in this blog. But from Java 9 onwards G1GC is the default, and I expect it to better fit the behavior of JDeveloper, as well as of SOA Suite and OSB installations.
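
For reference, a sketch of what those alternative collector options would look like in ide.conf (these are the standard pre-Java 9 HotSpot flags; I stick with G1 myself):
AddVMOption  -XX:+UseParNewGC
AddVMOption  -XX:+UseConcMarkSweepGC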

Wednesday, 11 April 2018

The vagrant way of provisioning - an introduction

Around 2003, I guess, I was introduced to VMware by a colleague. I was hooked right away.
Since then I have made numerous VMs for as many purposes. I played around with several VMware products, but since Oracle acquired Sun, I have stuck with VirtualBox.

A few years ago the tool Vagrant was mentioned to me. But I did not see the advantage of it, since everything I needed to do I could do using VirtualBox directly.

However, over the years I found that maintaining VMs is a tedious job. Often I create and use a VM, but then shut it down for months. And when I need it again, I don't know its state anymore. Although you can use snapshots, it's nice to be able to start with a fresh install again.

In the meantime, Oracle may have come up with another (minor) version of Fusion Middleware. Oracle Linux may have a new update. There's a new patch set. Then you want to do a re-install of the software. And I find it nice that I can drop a VM and recreate it from scratch.

For those purposes Vagrant comes in handy. It allows you to define a box, based on a template (a base box), and configure it: set CPU and memory settings, add disks, and then, after the first boot, provision it.
So if I want to adapt a VM with a slightly different setting, or I need an extra disk, I destroy my box, adapt my Vagrant project file, and boot my box up again.

So, let's see what Vagrant actually is. Then in a follow-up, I'll explain how I set up my Vagrant project.


Vagrant is an open-source software product for building and maintaining portable virtual software development environments. It helps you create and build Virtual Machines, especially in situations where you need to do that regularly and distribute them.

It simplifies software configuration management of virtualizations, to increase development productivity. Vagrant automates both the creation of VMs and the provisioning of created VMs. It does this by abstracting the configuration of the virtualization component and the installation/setup of the software within the VM, via a project file.

The architecture distinguishes two building blocks: Providers and Provisioners.
Providers are services to set up and create VMs, for instance:
  • VirtualBox
  • Docker
  • VMware
  • AWS
Provisioners are tools to customize the configuration of a VM, for example to configure the guest OS and install software within the VM. Possible provisioners are: 
  • Shell
  • Ansible
  • Puppet
  • Chef
I haven't made myself familiar with Ansible or Puppet yet (still on my list), so I work with the default provisioner: Shell.

A Vagrant project is in fact a folder with a Vagrantfile in it. The Vagrantfile contains all the configuration of the resulting Vagrant box, the actually created VM.
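
To give an idea, a minimal sketch of such a Vagrantfile (the box name, sizing and script path are made up for the example; the Ruby syntax is Vagrant's own):

Vagrant.configure("2") do |config|
  # the base box to start from; 'ol74Base' is a hypothetical local box
  config.vm.box = "ol74Base"
  # provider-specific sizing for VirtualBox
  config.vm.provider "virtualbox" do |vb|
    vb.name   = "fmw-dev"
    vb.memory = 8192
    vb.cpus   = 2
  end
  # provision with the default Shell provisioner after the first boot
  config.vm.provision "shell", path: "scripts/install.sh"
end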

A Vagrant project is always based on a base box. Often, a downloadable box from a Vagrant repository is used. In fact, if you don't specify a URL, but only a name, it will try to find it in the Vagrant repository. A popular one is hashicorp/precise64, used in many examples. However, I prefer to use my own local box, for two main reasons:
  • I then know what's in it.
  • It's local, so I don't have to download it.
To be able to be used by Vagrant, a box has the following requirements:
  • It contains an actual VM, with OS installed in it.
  • A vagrant user is defined, with sudo rights, and an insecure key (downloadable from Vagrant's GitHub, but it will be replaced by a generated secure key at first startup); alternatively you can specify a password (as I do; see the sketch after this list).
  • A NAT network adapter as the first NIC.
  • An SSH daemon running.
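A minimal sketch of that vagrant user setup inside the base VM (run as root; I left out the insecure-key route since I use a password):
useradd vagrant
echo 'vagrant:vagrant' | chpasswd
echo 'vagrant ALL=(ALL) NOPASSWD: ALL' > /etc/sudoers.d/vagrant
chmod 0440 /etc/sudoers.d/vagrant
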
There is a tool, called Packer, that is able to create a box with an OS installed. I haven't tried it. Instead, I created a very simple VM in VirtualBox, installed Oracle Linux 7 Update 4 with the 'server with gui' option as a next-next-finish install, and defined the vagrant user in it as mentioned above. Then, with the vagrant package command, I got the particular box. It took a few iterations to get it as I wanted it. But once you get it right, you should not need to touch it, unless another Linux update comes along.
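
Packaging and registering such a box then comes down to something like this (the VM and box names are hypothetical):
vagrant package --base OL7U4Base --output ol74Base.box
vagrant box add ol74Base ol74Base.box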

Now I have only one simple base box, and I only need to define the different Vagrant projects and a stage folder with the latest and greatest software downloads. A simple vagrant up command will create my VM anew and install all the software in it.

Last year, at the NLOUG's Tech Experience '17, together with my colleague Rob, I spoke about how to script a complete Oracle Fusion Middleware environment. It was the result of a series of projects we did leading up to the event, in which we tried to automate the environment creation as much as possible. See my series of blogs on the matter. In the upcoming period, I plan to write about how to leverage those scripts with Vagrant to set up a complete VM, with the latest and greatest FMW in it.

So stay tuned.




Tuesday, 10 April 2018

PaaSForum and the talk with the two ladies...

It's already been about three weeks since I attended the excellent PaaSForum '18 in Budapest.
Much has already been said and written about it: about the talks and the breakout sessions.
Seeing and hearing about the state of the art of the Oracle PaaS products: every year I have a good time with Oracle friends from around Europe and beyond.

It was nice to play around with API Management, Dynamic Processes, Oracle JET and ChatBots. And, ..., to do a few runs to the Donau and back. I hadn't run for about half a year because of my relocation and the remodelling of our new home.

But besides the great talks with Product Management and other Oracle friends, the thing that may have excited me most was the talk with two ladies, Mary Beth and Liza: a review session about the user experience of the Oracle PaaS products.

My major concern about the PaaS products, or maybe the Oracle Cloud products in general, is that they're very 'siloed'. Looking at ICS, PCS, VBCS (now bundled in OIC), API Management, Chatbots, CX, etc., they all have a very different history of birth.
Created by different teams. And they all have a different user experience, although Oracle did work on creating a uniform UI definition.
If I want to start a project with different services, I need to create and provision those services separately. They all have different URLs, etc. And if I want to move my artifacts to pre-production or production, I need to create new services, with their own authentication and authorisation schemas.
And I need to do the release/install of those artifacts to the different environments myself.


In the feedback session, Liza presented me with two personas (Mary Beth was taking notes): a Development Manager and a Developer.
The Development Manager will be able to log in to the PaaS environment landing page and create a new project. He will be able to select the components himself or base the project on a template, which then selects the particular components, or project features, so to speak. You could compare it with creating an application in JDeveloper, where you get to choose between a Java application, Service Bus, SOA Suite or BPM Suite, and one or more appropriate projects. Creating such a PaaS project will provision the necessary Cloud services, as indicated by the chosen components. The Development Manager can also invite project members, who get an invite with a URL via email, Slack, etc.

The Developer can follow the link in the invite, log on to the unified PaaS environment, or 'One PaaS', and from a palette select a component he/she wants to create and work on. I suggested that the palette should be restricted to the components selected at creation by the Development Manager. Cloud services cost subscription fees, which press on the budget. So, when a developer finds he needs to be able to create a certain type of component, the Development Manager should approve the addition of that component type, to solve the particular problem. Maybe the tiles of component types for which the project does not have a Cloud service could be grayed out.

Another suggested addition is environment management. They foresee a kind of DevOps administration page, where you can see the dev, test, pre-prod and production environments, with the artifact versions mapped onto them. So you can see which version of which artifact is on which environment/service. I suggested that it would be nice, in my opinion, to define releases or configurations of artifacts. Some artifacts are related to each other; for instance, a certain version of a VBCS screen depends on a version of a REST service in OIC and/or API Management. So you want to combine those in a release, to make sure they're released/installed to the next environment together.

Of course I can't show you any screens or inside information. Not least because I only saw mock-up screens myself.

I got a nice present, a small Bluetooth speaker with a surprisingly great sound. But, and be assured, I don't have Oracle shares (anymore): the biggest takeaway for me, the thing that made me enthusiastic, was the knowledge and the assurance that Oracle is really putting much effort into this. It is important, and I believe this is going to make the big difference in the PaaS offering. Although the different offerings on their own are promising, a unified UI and a unified development and management experience are going to make it actually usable. As a developer I do want to create UIs, processes or integrations, but I do not want to be bothered with the URLs to use for each environment. And I want to be helped in promoting my artifacts in a uniform way to the next environment. I should not have to export artifacts in different ways and import them one by one into a target cloud service.

The other day I also had an introductory meeting with one of the directors of UI/UX design. And that stressed the importance of UX in the unified PaaS initiatives.

As you'll understand: I'm very curious and looking forward to seeing new developments. When they reach me, I'll keep you posted (as far as I'm allowed, of course).


Thursday, 22 March 2018

SQLDeveloper: User Defined Extensions and Foreign Key query revised

It was such fun: yesterday I wrote a small article on creating a query on foreign keys referring to a certain table. A post with content that I have made up dozens of times in my Oracle career. And right away I got two good comments. One was on the blog itself.
 

And of course Anonymous is absolutely right. So I added 'U' as a constraint type option.

The other comment was from my much appreciated colleague Erik. He brought this to another level, by pointing out to me how to add this as a User Defined Extension in SQL Developer.

I must say I was already quite pleased with the Snippets in SQLDeveloper. So I already added the query as a snippet:
But the tip of Erik is much cooler.
He referred to a tip by Sue Harper that explains this (what, it's been in there since 2007?!).

Now, what to do? First create an XML file, for instance referred_by_fks.xml, with the following content:
<items>
    <item type="editor" node="TableNode" vertical="true">
    <title><![CDATA[FK References]]></title>
    <query>
        <sql>
            <![CDATA[select fk.owner,
fk.table_name,
fk.constraint_name,
fk.status
from all_constraints  fk
join all_constraints rpk on rpk.constraint_name = fk.r_constraint_name 
where fk.constraint_type='R'
and rpk.constraint_type in('P','U')
and rpk.table_name = :OBJECT_NAME
and rpk.owner = :OBJECT_OWNER
order by fk.table_name, fk.constraint_name;]]>
        </sql>
    </query>
    </item>
</items>

Note that I updated my query a bit.


Then to add the extension to SQL Developer:
  • Open the preferences via: Tools > Preferences
  • Navigate to Database > User Defined Extensions
  • Click "Add Row" button
  • In Type choose "EDITOR"; Location is where you saved the XML file above
  • Click "Ok", then restart SQL Developer

Now, if you click on a table in the navigator, you will have an extra tab on your table editor:

Cool stuff! And it's been there for ages!

Wednesday, 21 March 2018

Which tables have foreign keys referring to a particular table?

OK, this time a quick, not-so-exciting post. Actually, I found myself recreating a query that I have created many times in my career. So, why not post it?

Last year, I published my Darwin Object Type Accelerator (Dotacc). It allows you to generate objects from a data model. What it also does is create collection types for tables that refer to the table you want to generate an object for. For some you want that, but for others you don't, simply because you don't need them to be queried along. Therefore, I added functionality to disable those.

But then comes the question: which are the tables, with their foreign key constraints, that refer to this particular table?

The answer is in the ALL_CONSTRAINTS view (with its DBA_% and USER_% variants).
There are several types of constraints:
  • C: Check constraints
  • R: Referential -> the particular foreign keys
  • P: Primary Key
  • U: Unique Key
I'm interested in the foreign keys, thus those where constraint_type = 'R'. But those refer not to a table, but to another constraint. So, I need to get the primary or unique key, constraint_type 'P' or 'U', of the table that I want to query, and join those together.

That gets me:
select fk.* 
from all_constraints  fk
join all_constraints rpk on rpk.constraint_name = fk.r_constraint_name 
where fk.constraint_type='R'
and rpk.constraint_type in ('P', 'U')
and rpk.table_name = 'DWN_MY_TABLE';
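
If you're only interested in your own schema, the same query can be done against USER_CONSTRAINTS (a sketch; constraint names are unique within a schema, so the join stays the same):
select fk.*
from user_constraints fk
join user_constraints rpk on rpk.constraint_name = fk.r_constraint_name
where fk.constraint_type = 'R'
and rpk.constraint_type in ('P', 'U')
and rpk.table_name = 'DWN_MY_TABLE';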

Thursday, 8 March 2018

Set the minimum password length on your default authenticator in Weblogic

At the end of last year I wrote about how to create a demo community of users in Weblogic using WLST.
Using these scripts I wanted to do the same at my current customer: create test users in the DefaultAuthenticator. However, I found that the minimum password length was 8, and one of the users failed creation, because the password was the same as the username, and only 5 characters long.

So I needed to change the password validator, preferably using WLST (of course). Now, the password validator of the authenticator can also be found through the console. However, the Weblogic realm also has a system password validator. Both have a default minimum length of 8.

Let me show you some snippets (that you can add to the create-users script, or use for your own purposes) on how to change the minimum password length.

First a method to get the default realm:
#
#
def getRealm(name=None):
  cd("/")
  if name == None:
    realm = cmo.getSecurityConfiguration().getDefaultRealm()
  else:
    realm = cmo.getSecurityConfiguration().lookupRealm(name)
  return realm

With that you can get the authenticator:
#
#
def getAuthenticator(realm, name=None):
  if name == None:
    authenticator = realm.lookupAuthenticationProvider("DefaultAuthenticator")
  else:
    authenticator = realm.lookupAuthenticationProvider(name)
  return authenticator

With a realm and an authenticator, we can change the password length:
#
#
def setMinPasswordLengthOnDftAuth(minPasswordLength):
  try:
    edit()
    startEdit()
    # Get Realm and Authenticator
    realm = getRealm()
    authenticator = getAuthenticator(realm)
    authenticator.setMinimumPasswordLength(int(minPasswordLength))
    passwordValidator=realm.lookupPasswordValidator('SystemPasswordValidator')
    passwordValidator.setMinPasswordLength(int(minPasswordLength))    
    save()
    activate(block='true')
    print('Successfully set minimum password length to '+minPasswordLength+' on '+authenticator.getRealm().getName()+'.')
    print('For '+ authenticator.getName() +': '+str(authenticator.getMinimumPasswordLength()))
    print('For SystemPasswordValidator of '+getRealm().getName()+': '+ str(passwordValidator.getMinPasswordLength()))
  except WLSTException:
    stopEdit('y')
    message="Failed to update minimum password length!"
    print (message)
    raise Exception(message)

The minimum password length of the authenticator can be set directly. From the realm, this function looks up the SystemPasswordValidator, and on that it sets the minimum password length.

This function goes to edit mode, then saves and activates the changes. But if you want to add users afterwards, you need to get WLST into domainConfig() mode.
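
For example, a minimal sketch of creating a user after the change (the user name and password are made up; createUser() takes a name, a password and a description):
domainConfig()
realm = cmo.getSecurityConfiguration().getDefaultRealm()
authenticator = realm.lookupAuthenticationProvider('DefaultAuthenticator')
# a 5-character password now passes the minimum-length check
authenticator.createUser('hugo', 'welc1', 'Demo user Hugo')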

Other password validator property setters are:
  • setMinPasswordLength()
  • setMaxPasswordLength()
  • setMaxConsecutiveCharacters()
  • setMaxInstancesOfAnyCharacter()
  • setMinAlphabeticCharacters()
  • setMinNumericCharacters()
  • setMinLowercaseCharacters()
  • setMinUppercaseCharacters()
  • setMinNonAlphanumericCharacters()
  • setMinNumericOrSpecialCharacters()
  • setRejectEqualOrContainUsername(true)
  • setRejectEqualOrContainReverseUsername(true) 
See the docs for more.

Friday, 9 February 2018

Weblogic 12c + SAML2: publish your metadata over a URL

This week I got to do a SAML2 implementation again, for APEX against ADFS. Actually the same setup as last year. One pitfall I fell into with open eyes was the Redirect URI on the 'Web SSO Partner Provider'. I entered /ords/f*, but it had to be without the wildcard: /ords/f. But that aside.

One step in the setup of a SAML2 configuration is that you have to publish the metadata, by clicking a button. Some SAML2-capable middleware solutions can publish the metadata over a URL. ADFS does support fetching the metadata from the Service Provider, being Weblogic 12c servicing your application, over a URL. That saves you from handing over the XML file every time you change/update your configuration, for instance because of expired certificates. How nice would it be if Weblogic supported this?

Well, actually, you can! Sort of... Weblogic does support serving a document folder, like the htdocs folder of Apache. To do so, you need to create a war file with only a weblogic.xml file that couples a context root to a certain folder. And apparently GlassFish can do so too!

When you install ORDS on Weblogic, following the steps, you generate an i.war that is actually the example for this post. You could extract that file and adapt it for this purpose. But I wanted to be able to generate it, so that I could reuse this for several other purposes if I ever need to.

So I started with a new Saml2MetaData project folder and created a src folder, with a WEB-INF folder beneath it.
Then I copied the three deployment descriptors:
  • sun-web.xml
  • web.xml
  • weblogic.xml
The sun-web.xml (not being the travel company):
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE sun-web-app PUBLIC "-//Sun Microsystems, Inc.//DTD GlassFish Application Server 3.0 Servlet 3.0//EN" "http://www.sun.com/software/appserver/dtds/sun-web-app_3_0-0.dtd">
<sun-web-app>
 <!-- This element specifies the context path the static resources are served from --> 
 <context-root>${samlMetaData.contextRoot}</context-root>
 <!-- This element specifies the location on disk where the static resources are located -->
 <property name="alternatedocroot_1" value="from=/* dir=${samlMetaData.home}"/>
</sun-web-app>

As you can see, I placed the ${samlMetaData.contextRoot} property in the context-root tag, and the property named alternatedocroot_1 got the directory reference containing ${samlMetaData.home}.

The web.xml is there for completeness, but does not contain a directory reference:
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE web-app PUBLIC
"-//Sun Microsystems, Inc.//DTD Web Application 2.3//EN"
"http://java.sun.com/j2ee/dtds/web-app_2_3.dtd">
<web-app>
 <!-- This Web-App leverages the alternate doc-root functionality in WebLogic and GlassFish to serve static content 
      For WebLogic refer to the weblogic.xml file in this folder
      For GlassFish refer to the sun-web.xml file in this folder
  -->
</web-app>

And then the weblogic.xml, containing the same properties referencing the context root and folder:
<weblogic-web-app xmlns="http://www.bea.com/ns/weblogic/weblogic-web-app">
 <!-- This element specifies the context path the static resources are served from -->
 <context-root>${samlMetaData.contextRoot}</context-root>
 <virtual-directory-mapping>
  <!-- This element specifies the location on disk where the static resources are located -->
  <local-path>${samlMetaData.home}</local-path>
  <url-pattern>/*</url-pattern>
 </virtual-directory-mapping>
</weblogic-web-app>

Then I need an ANT build file that copies these files while replacing the properties. I would have done it with WLST if I had found a way to wrap the lot into a war file that quickly. But ANT does the job well. First I need a build.properties file that denotes the property values:
build.dir=${basedir}/build
dist.dir=${basedir}/dist
src.dir=${basedir}/src
samlMetaData.home=c:\\certs\\saml2
samlMetaData.contextRoot=/samlMetaData


And then the ANT build.xml file:
<?xml version="1.0" encoding="UTF-8"?>
<project name="SamlMetaData" basedir="." default="build">
   <property file="build.properties" />
   <!-- Clean & Init -->
   <target name="clean">
      <echo>Delete build and dist folder</echo>
      <delete dir="${build.dir}" />
      <delete dir="${dist.dir}" />
   </target>
   <target name="init" depends="clean">
      <echo>Create build and dist folder</echo>
      <mkdir dir="${build.dir}" />
      <mkdir dir="${dist.dir}" />
   </target>
   <!-- war the project -->
   <target name="war">
      <property name="war.dir" value="${dist.dir}/${ant.project.name}" />
      <property name="war.file" value="${war.dir}/${ant.project.name}.war" />
      <echo>Create war file ${war.file} from ${build.dir}</echo>
      <mkdir dir="${war.dir}" />
      <jar destfile="${war.file}" basedir="${build.dir}">
         <manifest />
      </jar>
   </target>
   <!-- Build the war file -->
   <target name="build" depends="init">
      <echo>Copy ${src.dir} to  ${build.dir}, expanding properties</echo>
      <copy todir="${build.dir}">
         <fileset dir="${src.dir}" />
         <filterchain>
            <expandproperties />
         </filterchain>
      </copy>
      <ant target="war" />
   </target>
</project>

Run this with ANT and it will create a build and a dist folder with the war file.
This can be deployed to Weblogic, which results in a context root as configured in build.properties. Everything placed in the folder configured as samlMetaData.home can then be fetched through Weblogic.
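
Deploying can be done through the console, or with a few lines of WLST, for instance (the URL, credentials and target are made up):
connect('weblogic', 'welcome1', 't3://localhost:7001')
# deploy the generated war from the dist folder
deploy('SamlMetaData', 'dist/SamlMetaData/SamlMetaData.war', targets='AdminServer')
disconnect()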

So just publish your metadata to that folder and the IdentityProvider can get it auto-magically.
And of course you can use this for any other static file provisioning through Weblogic.

How to install the Notepad++ 64-bit plugin manager

I've been a Notepad++ fan for years. And as soon as a 64-bit version arose, I adopted it.
But a few months ago I got a new laptop, and I apparently didn't get the plugin manager with the fresh install. Only now did I take the opportunity to sort it out and write about it.

I found that the plugin manager has been available on GitHub since April 2017, version 1.49. I downloaded the zip from the mentioned location, choosing the _x64 version:
Then unzipped it into my Notepad++ folder:

Then I started Notepad++, and the plugin manager appeared:

So now I can format my XML files again...

Tuesday, 23 January 2018

SoapUI: validate a date field in response with current date

Once in a while you need to validate a service that has dates in the response. Although SoapUI has XPath and XQuery match assertions, validating dates against strings is quite difficult. How do you do a date comparison against, for instance, the current date?

You can do it with a script assertion:
And the content of it can be:
def groovyUtils = new com.eviware.soapui.support.GroovyUtils(context)
// Set Namespaces
def holder = groovyUtils.getXmlHolder(messageExchange.responseContent)
//holder.namespaces["soapenv"] = "http://schemas.xmlsoap.org/soap/envelope/"
def dateFoundStr = holder.getNodeValue("/Results/ResultSet/Row[1]/DATE_FOUND")
def dateFound = new Date().parse("yyyy-MM-dd hh:mm:ss", dateFoundStr)
dateFoundStr = dateFound.format("yyyy-MM-dd")
//Current Date
def date = new Date()
def currentDate=date.format("yyyy-MM-dd")
//
assert dateFoundStr == currentDate

First we fetch and parse the response content into the holder variable, by parsing messageExchange.responseContent using groovyUtils.getXmlHolder.

Second, the particular date is fetched; here dateFound comes from the DATE_FOUND field of a JDBC response. A JDBC response does not have namespaces, but for a SOAP response it helps to declare namespaces. For an example, see the commented line for holder.namespaces["soapenv"].

Third, I parse the found date, which is a string as fetched from the XML, to a date-time, and then format it back to a string to get only the date part. This could be done simply using substring methods, but I wanted to try it this way. Then I get and format the currentDate as a string as well.
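
For comparison, the substring variant would be a one-liner (a sketch, assuming the response date always starts with the yyyy-MM-dd part):
// take the first 10 characters: the yyyy-MM-dd part
def dateFoundDateOnly = dateFoundStr.substring(0, 10)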

In the end, just do an assert with a comparison of both values.

There you go.

Monday, 22 January 2018

Modify your nodemanager.properties with WLST

In 2016 I did several posts on automated installs of Fusion Middleware, including domain creation using WLST.

With Weblogic 12c you automatically get a pre-configured per-domain node manager. But you might find that the configuration does not completely suit your wishes.

It would be nice to update the nodemanager.properties file with your own properties in the same script.

Today I started upgrading our Weblogic Tuning and Troubleshooting training to 12c, and one of the steps is to adapt the domain creation script. In the old script, the AdminServer is started right away, to add the managed server to the domain. In my before-mentioned script, I do that offline. But since I also like to be able to update the nodemanager.properties file, I figured that out as well.

Earlier, I created a function that just writes a new property file:
#
# Create a NodeManager properties file.
def createNodeManagerPropertiesFile(javaHome, nodeMgrHome, nodeMgrType, nodeMgrListenAddress, nodeMgrListenPort):
  print ('Create Nodemanager Properties File for home: '+nodeMgrHome)
  print (lineSeperator)
  nmProps=nodeMgrHome+'/nodemanager.properties'
  fileNew=open(nmProps, 'w')
  fileNew.write('#Node manager properties\n')
  fileNew.write('#%s\n' % str(datetime.now()))
  fileNew.write('DomainsFile=%s/%s\n' % (nodeMgrHome,'nodemanager.domains'))
  fileNew.write('LogLimit=0\n')
  fileNew.write('PropertiesVersion=12.2.1\n')
  fileNew.write('AuthenticationEnabled=true\n')
  fileNew.write('NodeManagerHome=%s\n' % nodeMgrHome)
  fileNew.write('JavaHome=%s\n' % javaHome)
  fileNew.write('LogLevel=INFO\n')
  fileNew.write('DomainsFileEnabled=true\n')
  fileNew.write('ListenAddress=%s\n' % nodeMgrListenAddress)
  fileNew.write('NativeVersionEnabled=true\n')
  fileNew.write('ListenPort=%s\n' % nodeMgrListenPort)
  fileNew.write('LogToStderr=true\n')
  fileNew.write('weblogic.StartScriptName=startWebLogic.sh\n')
  if nodeMgrType == 'ssl':
    fileNew.write('SecureListener=true\n')
  else:
    fileNew.write('SecureListener=false\n')
  fileNew.write('LogCount=1\n')
  fileNew.write('QuitEnabled=true\n')
  fileNew.write('LogAppend=true\n')
  fileNew.write('weblogic.StopScriptEnabled=true\n')
  fileNew.write('StateCheckInterval=500\n')
  fileNew.write('CrashRecoveryEnabled=false\n')
  fileNew.write('weblogic.StartScriptEnabled=true\n')
  fileNew.write('LogFile=%s/%s\n' % (nodeMgrHome,'nodemanager.log'))
  fileNew.write('LogFormatter=weblogic.nodemanager.server.LogFormatter\n')
  fileNew.write('ListenBacklog=50\n')
  fileNew.flush()
  fileNew.close()

But this one just rewrites the file, so I would need to determine the values for properties like DomainsFile, JavaHome, etc., which are already set correctly in the original file. I only want to update the ListenAddress and ListenPort, and possibly the SecureListener property, based on the node manager type. Besides that, I want to back up the original file as well.

So, I adapted this function to:
#
# Update the Nodemanager Properties
def updateNMProps(nmPropertyFile, nodeMgrListenAddress, nodeMgrListenPort, nodeMgrType):
  nmProps = ''
  print ('Read Nodemanager properties file%s: ' % nmPropertyFile)
  f = open(nmPropertyFile)
  for line in f.readlines():
    if line.strip().startswith('ListenPort'):
      line = 'ListenPort=%s\n' % nodeMgrListenPort
    elif line.strip().startswith('ListenAddress'):
      line = 'ListenAddress=%s\n' % nodeMgrListenAddress
    elif line.strip().startswith('SecureListener'):
      if nodeMgrType == 'ssl':
        line = 'SecureListener=true\n'
      else:
        line = 'SecureListener=false\n'
    # making sure these properties are set to true:
    elif line.strip().startswith('QuitEnabled'):
      line = 'QuitEnabled=%s\n' % 'true'
    elif line.strip().startswith('CrashRecoveryEnabled'):
      line = 'CrashRecoveryEnabled=%s\n' % 'true'
    elif line.strip().startswith('weblogic.StartScriptEnabled'):
      line = 'weblogic.StartScriptEnabled=%s\n' % 'true'
    elif line.strip().startswith('weblogic.StopScriptEnabled'):
      line = 'weblogic.StopScriptEnabled=%s\n' % 'true'         
    nmProps = nmProps + line
  # Close the input file before renaming it
  f.close()
  # Backup file
  print nmProps
  nmPropertyFileOrg=nmPropertyFile+'.org'
  print ('Rename File %s to %s ' % (nmPropertyFile, nmPropertyFileOrg))
  os.rename(nmPropertyFile, nmPropertyFileOrg)  
  # Save New File
  print ('\nNow save the changed property file to %s' % nmPropertyFile)
  fileNew=open(nmPropertyFile, 'w')
  fileNew.write(nmProps)
  fileNew.flush()
  fileNew.close()
It first reads the property file, denoted by nmPropertyFile, line by line.
If a line starts with a particular property that I want to set specifically, the line is replaced. Each line is then appended to the nmProps variable. For completeness and validation I print the resulting variable.
Then I rename the original file to nmPropertyFile+'.org' using os.rename(). And lastly, I write the contents of nmProps to the original file in one go.
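
A call to this function could then look like this (the node manager home path and listen address are made up):
# example invocation for a per-domain node manager home
nmHome = '/u01/app/work/domains/my_domain/nodemanager'
updateNMProps(nmHome + '/nodemanager.properties', 'localhost', '5556', 'ssl')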


This brings me again one step closer to a completely scripted domain.