Tuesday, 13 December 2011

SoapUI Tip 3

It took a little searching through the SoapUI javadocs, but if you want to log the result of a mockService or mockResponse step in a testCase from a script, that can be done using:
log.info("Response: "+mockResponse.getMockResult().getResponseContent())
Another nice tip I found at another blog: "SoapUI limitations and workarounds : mock response test step"
If you want to use mockServices in your testcase or testsuite, but don't want to implement mockResponse steps for them, you can (of course) create mockServices for them. They can handle multiple responses and use XPath expressions to choose the right response for a particular request. But when running the testcase you'll need to start the mockServices that the test uses. This can be done in the "Setup Script" of the testCase:
def project = testCase.getTestSuite().getProject();
def mockService = project.getMockServiceByName("Name of the MockService");
mockService.start();
After running the testcase/testsuite the mockservices should be stopped neatly. This can be done in the "TearDown Script":
def project = testCase.getTestSuite().getProject();
def mockService = project.getMockServiceByName("Name of the MockService");
def mockRunner = mockService.getMockRunner();
mockRunner.stop();
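If a test needs several mock services, the same calls can be wrapped in a small loop. A minimal sketch (the mock service names below are just placeholders for your own); the Setup Script:
def project = testCase.getTestSuite().getProject()
["CreditCheck MockService", "AddressCheck MockService"].each { name ->
    project.getMockServiceByName(name).start()
}
And the matching TearDown Script:
def project = testCase.getTestSuite().getProject()
["CreditCheck MockService", "AddressCheck MockService"].each { name ->
    def mockRunner = project.getMockServiceByName(name).getMockRunner()
    if (mockRunner != null) {   // only stop it if it is actually running
        mockRunner.stop()
    }
}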

Monday, 12 December 2011

SoapUI: Properties from test Suite

In my previous post I showed how to get a file from a script in SoapUI. One of the parts in building the file path of the file to load was a property from the mockService. The same thing is possible from a test suite: there are several places at which you can define properties, and amongst them is the TestSuite.

To read the properties of a test suite, you first have to get hold of that test suite. Getting the test suite on which you defined the properties goes as follows:

def project = mockResponse.mockOperation.mockService.project
def testSuite = project.testSuites["TestSupport"]

The property can then be fetched with:

def filePath = testSuite.getPropertyValue( "responseFilePath")


Test cases are part of a testSuite and can be fetched by name, in the same manner as getting the testSuite itself:

def testCase = project.testSuites["TestSuite 1"].testCases["TestCase 1"]
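Putting the pieces together, a mockResponse script could for instance combine a test-suite property with a test-case property. A minimal sketch (the suite, case and property names are placeholders):
def project = mockResponse.mockOperation.mockService.project
def filePath = project.testSuites["TestSupport"].getPropertyValue("responseFilePath")
def testCase = project.testSuites["TestSuite 1"].testCases["TestCase 1"]
def fileName = testCase.getPropertyValue("responseFileName")
log.info("Response file: " + filePath + "/" + fileName)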

Expanding SoapUI possibilities...

Thursday, 8 December 2011

Load Response Message from File System in SoapUI

Today I figured out how to get a response from the filesystem in SoapUI. I found that I could not dynamically select from multiple responses in a mockResponse TestStep in a TestSuite, but loading from a file within a Groovy script does the job. An advantage is also that you can have as many response files as you like: you just put a value in the filename that you select using XPath from the request message. The folder in which you store the messages can be put in a property on the mockService. I don't have the time to explain the whole lot, but the response in the mockService should look like:
<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/">
   <soapenv:Header>
      <DarwinHeader xmlns="http://www.darwin-it.nl/XMLHeader/10">
         <HeaderVersion>${headerVersion}</HeaderVersion>
         <MessageId>${messageId}</MessageId>
         <ServiceRequestorDomain>${serviceRequestorDomain}</ServiceRequestorDomain>
         <ServiceRequestorId>${serviceRequestorId}</ServiceRequestorId>
         <ServiceProviderDomain>${serviceProviderDomain}</ServiceProviderDomain>
         <ServiceId>${serviceId}</ServiceId>
         <ServiceVersion>${serviceVersion}</ServiceVersion>
         <FaultIndication>${faultIndication}</FaultIndication>
         <MessageTimestamp>${messageTimestamp}</MessageTimestamp>
      </DarwinHeader>
   </soapenv:Header>
   <soapenv:Body>${responseBody}</soapenv:Body>
</soapenv:Envelope>
Here you see that the SOAP envelope with the header is given, but the different values are properties, as is the responseBody. The script to get these from the request is as follows:
def method = "ChangeServiceRequest.Response 1.Script"
log.info("Start "+method)

log.info(mockRequest.requestContent)

def groovyUtils = new com.eviware.soapui.support.GroovyUtils(context)
// Set Namespaces
def holder = groovyUtils.getXmlHolder(mockRequest.requestContent)
holder.namespaces["soapenv"] = "http://schemas.xmlsoap.org/soap/envelope/"
holder.namespaces["ns"] = "http://www.darwin-it.nl/XMLHeader/10"
holder.namespaces["rpy"] = "http://www.darwin-it.nl/ChangeServiceRequest/2/Rpy"


log.info("Get Header Properties")
context.messageId=Math.random()
log.info("messageId: "+context.messageId)
context.headerVersion= holder.getNodeValue("//ns:DarwinHeader/ns:HeaderVersion")
log.info("headerVersion: "+context.headerVersion)
context.serviceRequestorDomain= holder.getNodeValue("//ns:DarwinHeader/ns:ServiceRequestorDomain")
log.info("ServiceRequestorDomain: "+context.serviceRequestorDomain)
context.serviceRequestorId= holder.getNodeValue("//ns:DarwinHeader/ns:ServiceRequestorId")
log.info("serviceRequestorId: "+context.serviceRequestorId)
context.serviceProviderDomain= holder.getNodeValue("//ns:DarwinHeader/ns:ServiceProviderDomain")
log.info("serviceProviderDomain: "+context.serviceProviderDomain)
context.serviceId= holder.getNodeValue("//ns:DarwinHeader/ns:ServiceId")
log.info("serviceId: "+context.serviceId)
context.serviceVersion= holder.getNodeValue("//ns:DarwinHeader/ns:ServiceVersion")
log.info("serviceVersion: "+context.serviceVersion)
context.faultIndication= holder.getNodeValue("//ns:DarwinHeader/ns:FaultIndication")
log.info("faultIndication: "+context.faultIndication)
context.messageTimestamp= holder.getNodeValue("//ns:DarwinHeader/ns:MessageTimestamp")
log.info("messageTimestamp: "+context.messageTimestamp)

def serviceRequestNr= holder.getNodeValue("//req:ChangeServiceRequest_Req/req:ServiceRequestData/req:ServiceRequestNumber")
log.info("ServiceRequestNr: "+serviceRequestNr)

def mockRunner = context.getMockRunner()
def mockService = mockRunner.mockService
def filePath = mockService.getPropertyValue( "responseFilePath")+"/FT01_ChangeServiceRequest_"+serviceRequestNr+".xml"
log.info("FileName: "+ filePath)
def File file = new File( filePath )
def fileLength = (int) file.length();
def buffer = new char[fileLength];
def inputReader = new FileReader(file);
def numChar = inputReader.read(buffer);
requestContext.responseBody = new String(buffer);
log.info("End "+method)
With "def holder = groovyUtils.getXmlHolder(mockRequest.requestContent)" you get an XML object of the request. The lines "holder.namespaces["ns"] = "http://www.darwin-it.nl/XMLHeader/10" set the namespace-abbreviations, needed to select the values from the request.
Then with "def serviceRequestNr= holder.getNodeValue("//req:ChangeServiceRequest_Req/req:ServiceRequestData/req:ServiceRequestNumber")", you can perform an xpath query, to get the serviceRequestNr from the request message in this case.

The lines
def mockRunner = context.getMockRunner()
def mockService = mockRunner.mockService
def filePath = mockService.getPropertyValue( "responseFilePath")

show how to read a property from the mockService.

The last lines determine the actual file path using this property and the queried serviceRequestNr, and then read the file.

By putting the file content in the requestContext using "requestContext.responseBody = new String(buffer);", it can be inserted into the response through the property "${responseBody}", as in the response skeleton above. You might notice that the script is a mixture of Java and Groovy. I'm not that familiar with Groovy yet, but I found that if it works in Java it is easy to get it working in Groovy; Groovy is just more loosely typed. I hope this is clear enough to get you going. Good luck.
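As a side note, the Java-style file reading above can be written a lot shorter in idiomatic Groovy. A minimal sketch, reusing the filePath variable from the script:
// Groovy adds a 'text' property to java.io.File that reads the whole file into a String
requestContext.responseBody = new File(filePath).text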

Wednesday, 19 October 2011

Drag and drop in Gmail

Well, I'm probably one of the last Gmail users to notice it, but a moment ago I discovered that Gmail supports drag and drop. One of the things I really missed: when I wanted to move mails to a label I had to do it using the 'move to' poplist. But apparently sometime in the past months Google implemented the possibility to grab one or more mails/conversations and drag them onto a label. Great. With this the frontend mimics a rich mail client like Thunderbird, so that handling mail in Gmail is much more convenient. I love empty mailboxes...

Thursday, 22 September 2011

Shared Folders in VirtualBox

Shared Folders are a big advantage in both VirtualBox and VMware Player (>3.0.x). However, I notice that in my direct surroundings there is some unfamiliarity with them and with how to use them. That is, it turns out that a newly created shared folder is not mounted right away in a started VM. So to help out, a little how-to.

To create a Shared Folder you have to go to the SharedFolders node in the VM Settings:

There you can click on the add folder button. Of course you can remove or edit existing ones.



Give here the path to the folder on your host. From the layout of the screen you can guess that my host is Windows 7. You can also give the shared folder a name.
The other options (check boxes; I really need to reinstall VirtualBox in English, installing it in Dutch was not a conscious choice) are:
  • Read only
  • Auto Mount
  • Make permanent
I check the latter two.
Although Auto Mount is checked, this does not take effect right away in a running Linux guest. For that you have to do a restart.

Shared Folders are automounted in linux as /media/sf_<sharedfolder name>

So as root you have to:
  • create the folder
  • change its owner to root:vboxsf with chown
  • make it group-writable with chmod
  • then mount the folder
So:
makker@makker-lnx:~> cd /media
makker@makker-lnx:/media> ls -l
total 92
drwxrwx--- 1 root vboxsf 4096 Aug 25 17:07 sf_Data
drwxrwx--- 1 root vboxsf 8192 Sep 21 13:45 sf_Documents
dr-xr-xr-x 1 root root 16384 Sep 22 09:59 sf_Downloads
drwxrwx--- 1 root vboxsf 65536 Aug 23 14:26 sf_Music
makker@makker-lnx:/media> sudo mkdir sf_Projects
root's password:
makker@makker-lnx:/media> ls -l
total 96
drwxrwx--- 1 root vboxsf 4096 Aug 25 17:07 sf_Data
drwxrwx--- 1 root vboxsf 8192 Sep 21 13:45 sf_Documents
dr-xr-xr-x 1 root root 16384 Sep 22 09:59 sf_Downloads
drwxrwx--- 1 root vboxsf 65536 Aug 23 14:26 sf_Music
drwxr-xr-x 2 root root 4096 Sep 22 12:06 sf_Projects

makker@makker-lnx:/media> sudo chown root:vboxsf sf_Projects
makker@makker-lnx:/media> sudo chmod g+w sf_Projects
makker@makker-lnx:/media> ls -l
total 96
drwxrwx--- 1 root vboxsf 4096 Aug 25 17:07 sf_Data
drwxrwx--- 1 root vboxsf 8192 Sep 21 13:45 sf_Documents
dr-xr-xr-x 1 root root 16384 Sep 22 09:59 sf_Downloads
drwxrwx--- 1 root vboxsf 65536 Aug 23 14:26 sf_Music
drwxrwxr-x 2 root vboxsf 4096 Sep 22 12:06 sf_Projects


To mount the folder use the command mount -t vboxsf <sharedfolder name> /media/sf_<sharedfolder name>
So:
makker@makker-lnx:/media> sudo mount -t vboxsf Projects /media/sf_Projects
makker@makker-lnx:/media> ls -l sf_Projects/
total 94
-rwxrwxrwx 1 root root 9776 Oct 12 2010 AIA Project in Amsterdam
....

Now, as you'll notice, to use the shared folder as a user other than root, you'll need to add that user to the vboxsf group.

The easiest way to do that is to use the user-administration tool of your linux distribution.

To add it on the command line, first list the groups you already have:

makker@makker-lnx:/media> id
uid=1000(makker) gid=100(users) groups=100(users),6(disk),17(audio),20(cdrom),33(video),49(ftp)

Or:

makker@makker-lnx:/media> groups
users disk audio cdrom video ftp

Then use usermod (as root or via sudo) to add the group:

makker@makker-lnx:/media> sudo /usr/sbin/usermod -g users -G disk,audio,cdrom,video,ftp,vboxsf makker
makker@makker-lnx:/media> groups
users disk audio cdrom video ftp vboxsf

Mind that you always have one primary group, indicated with lowercase '-g', and possibly several secondary groups, indicated with capital '-G' and a comma-separated list (don't use spaces).
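Also mind that '-G' replaces the complete list of secondary groups, so you must repeat all existing ones. If your usermod supports the append option, adding just the new group is simpler; a sketch (assuming the vboxsf group was already created by the Guest Additions):

sudo /usr/sbin/usermod -a -G vboxsf makker

Log out and back in (or start a new session) afterwards so the new group membership takes effect.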

That's it.

Thursday, 15 September 2011

Decomposing Virtual Machines

It's been a while since my last post, but here's a new one.
This summer I got a virtual machine to do a Webcenter workshop. The VM turned out to be 30GB in size. This 30GB is basically one big virtual disk of logically 50GB; that is, it's dynamically allocated and can grow to 50GB.
Although it is a VirtualBox VM, the disk uses the VMDK format, which is the VMware virtual disk format.
The disk contained the OS (of course), the install/setup files of the database, Weblogic Suite, Repository Creation Utility and Webcenter Suite, and the installation itself.
This means that after installing the OS, the setup files are copied to the VM into a staging directory, from which the installation is done. The installation is put in a /u01/app folder, but physically on the same disk, after the setup files. So deleting the setup files won't shrink the VM a bit.

I found it a nice project to try to split up the VM and get it to a more suitable size.

The project turned out to be about the following steps:
  1. Add new virtual disks for the Oracle/Webcenter installation and the staging-area (the setup files)
  2. Partition, format and mount the disks
  3. Copy/Move the staging and installation folder to their particular disks
  4. Detach the staging disk
  5. Transform the OS disk from VMDK to VDI
  6. Shrink/Compact the OS disk
1. Add new virtual disks
Adding a disk to the VM is pretty simple: go to the VM settings and the Storage node. I'm sorry the screendumps are in Dutch; somehow I ended up with a Dutch VirtualBox installation.


Click the add disk Icon on the Sata Controller node.
Choose "create new disk":



Choose VDI as disk format. VDI is the "Virtual Disk Image" format of VirtualBox. VMDK is the "Virtual Machine Disk" format of VMware. Later I'll show how to create a VDI image out of a VMDK, but it's better to choose the right format right away. Especially because VirtualBox can't compact a VMDK.



For "transportable" VM's choose the option "Dynamically Allocated". That way the Disk Image is only the size of the needed space.


Then choose a location and preferably a sensible name. Then set the (maximum) size on an appropriate value.


Now, you could add all the disks you need right away. In my case I need two extra disks: one for the installation (disk2; disk1 is for the OS: the root disk) and one for staging (named the stage disk here). But it is important to know which disk gets which device name. I think you can assume that the disk that is first in the list gets "/dev/sda", the second "/dev/sdb", and so this staging disk "/dev/sdc". But I don't take the risk and add and format the disks one by one (and thus do a restart for each new disk).

2. Partition, Format and mount disks
This can in fact be done in two ways:
  • The raw way: partition, format and mount the raw disk.
  • Use Logical Volume Manager to create a Logical Volume in a Volume Group
2.1 The raw way
First check the disk devices with fdisk -l:
[root@server1 ~]# fdisk -l

Disk /dev/sda: 53.6 GB, 53687091200 bytes
255 heads, 63 sectors/track, 6527 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Device Boot Start End Blocks Id System
/dev/sda1 * 1 13 104391 83 Linux
/dev/sda2 14 6527 52323705 8e Linux LVM

Disk /dev/sdb: 21.4 GB, 21474836480 bytes
255 heads, 63 sectors/track, 2610 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Device Boot Start End Blocks Id System
/dev/sdb1 1 2611 20971519+ 8e Linux LVM

Disk /dev/sdc: 10.7 GB, 10737418240 bytes
255 heads, 63 sectors/track, 1305 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Disk /dev/sdc doesn't contain a valid partition table

Disk /dev/dm-0: 47.7 GB, 47747956736 bytes
255 heads, 63 sectors/track, 5805 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Disk /dev/dm-0 doesn't contain a valid partition table

Disk /dev/dm-1: 5804 MB, 5804916736 bytes
255 heads, 63 sectors/track, 705 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Disk /dev/dm-1 doesn't contain a valid partition table

Here you see that /dev/sdc does not contain a partition table (ignore /dev/dm-0 and /dev/dm-1).
Then partition it with fdisk /dev/sdc (the device that you want to partition) and enter the following commands:
  • n (new partition)
  • p (for primary)
  • 1 (first partition)
  • 2 x enter, accepting the defaults for first and last cylinder
  • w (write partition table to disk)

[root@server1 ~]# fdisk /dev/sdc
Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel
Building a new DOS disklabel. Changes will remain in memory only,
until you decide to write them. After that, of course, the previous
content won't be recoverable.


The number of cylinders for this disk is set to 1305.
There is nothing wrong with that, but this is larger than 1024,
and could in certain setups cause problems with:
1) software that runs at boot time (e.g., old versions of LILO)
2) booting and partitioning software from other OSs
(e.g., DOS FDISK, OS/2 FDISK)
Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)

Command (m for help): n
Command action
e extended
p primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-1305, default 1):
Using default value 1
Last cylinder or +size or +sizeM or +sizeK (1-1305, default 1305):
Using default value 1305

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.
Then you can create a filesystem on it (format it). Depending on the version of the Linux distro you can choose ext3 or ext4 (older systems have ext2). You can do this by issuing the command mkfs.ext4 and of course choosing y(es) at the question whether to proceed anyway:
[root@server1 ~]# mkfs.ext4 /dev/sdc
mke4fs 1.41.12 (17-May-2010)
/dev/sdc is entire device, not just one partition!
Proceed anyway? (y,n) y
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
655360 inodes, 2621440 blocks
131072 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=2684354560
80 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632

Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 37 mounts or
180 days, whichever comes first. Use tune4fs -c or -i to override.


Then the disk is usable, but you need to mount it. Since in this case it is a staging disk I want to mount it as such, so I create a directory /stage in the filesystem root:
[root@server1 /]# mkdir stage
[root@server1 /]# ls -l
total 186
...
drwxr-xr-x 2 root root 4096 Sep 9 17:43 stage
...


Then create an entry in the /etc/fstab:
...
/dev/sdc /stage ext4 defaults 1 2
...

Of course it is most convenient to copy an existing row and edit it in the above way. Take care to set the filesystem type correctly (ext4 in this case).
You can mount the disk with mount /stage.
Then the owner is automatically root:root. To enable another user (e.g. oracle) to use it, you can create a directory oracle in it and change the owner of that directory to oracle:
[root@server1 ~]# cd /stage
[root@server1 stage]# mkdir oracle
[root@server1 stage]# chown oracle:oinstall oracle
[root@server1 stage]# ls -l
total 20
drwx------ 2 root root 16384 Sep 15 17:51 lost+found
drwxr-xr-x 2 oracle oinstall 4096 Sep 15 18:05 oracle

2.2 Logical Volume Manager

Using a Volume Group with a Logical Volume gives you a little more flexibility in extending your disk. Also, using the Logical Volume Manager is a little simpler, since there is a nice GUI for it in Gnome.
Start it using the administration menu:

You need to provide the root password.
Then you get in the following screen:

Here you see an uninitialized disk. Actually it is the disk that I used in my previous example, so it is (in my case at least) already partitioned and formatted. That would not be the case with a new disk, of course.
So first you have to initialize it, and then it asks to partition it.


It gives an error and suggests a reboot. That I did, and I just re-initialized it, which went OK.
After it is initialized you'll find it under the Unallocated Volumes node. You can add it to an existing volume group. That is handy if you want to extend a volume, especially in an enterprise environment that needs to give a volume more space (for a growing database or something).


In this case we choose to create a new Volume Group.


Give it an appropriate name. You can choose an extent size. You probably don't want to make it too small, since that would cause extends quite often.


Then you can create a Logical Volume in it:


Choose Ext4 for the filesystem. Click on "Use Remaining" to span the complete partition.
You can check "Mount" and "Mount when rebooted".
After pressing "Ok" you get the following dialog:


Of course you choose "Yes". If you check the /etc/fstab you'll find an entry like:
/dev/VolGroup02/LogVol00                /stage          ext4    defaults        1 2
3. Copy/Move the files to the new disk
Well, I assume I don't have to explain this. Fortunately, in my case the installation is done in /u01/app/oracle, so I could name my mount point /u01. It's important to choose the mount point to be exactly the first directory of the directory path (see step 2). To do so, you'd probably rename the directory with the installation first. The result should be that the mount point plus the rest of the directory structure matches the original path; otherwise the installation would break and the whole exercise would be pointless. A sketch of how that could look is shown below.
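A minimal sketch for the installation disk (the paths are those of this VM; adjust them to your own situation):

mv /u01 /u01_orig          # move the original directory out of the way
mkdir /u01
mount /u01                 # assumes the /etc/fstab entry created in step 2
cp -a /u01_orig/. /u01/    # -a preserves ownership, permissions and symlinks
# check that the installation still starts before cleaning up:
# rm -rf /u01_orig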

4. Detach the staging disk
You probably don't want to ship the staging disk with the VM. In this decomposition project we created it to get the staging files out of the root disk. Normally, when creating a VM, you create an empty staging disk and copy the setup files into it. After doing the installation you don't need the stage disk anymore, but it can be handy for refining an installation later on or redoing an installation in another VM. In those cases you'll want to back up the disk.

But anyway, at this point you can detach it from your VM. Before you do that it is extremely important to remove or comment out the corresponding line (see the line above) from /etc/fstab. If you don't, you won't be able to start the VM again, because Linux will try to mount a disk that can no longer be found once it is removed from the storage.

Then you can shutdown the machine, go to the storage settings of the machine and remove the disk from the sata controller.

If you removed the disk from /etc/fstab then you can safely start up the VM again.

5. Transform the OS disk from VMDK to VDI
To be able to compact the disk, you'll need to transform it to VDI. VirtualBox can't convert it in one go: you'll need to convert from VMDK to raw format first and then from raw to VDI.

5.1 Convert from VMDK to Raw
Open a command window or terminal/console session and cd to your VirtualBox installation. My host is a Windows 7 laptop and my VirtualBox installation is in c:\Program Files\Oracle\VirtualBox. In that directory you'll find the VBoxManage tool. To convert the disk to raw you issue the command:

VboxManage internalcommands converttoraw  -format vmdk <path to vmdk disk> <path to new raw disk>

Spaces in folder names are not so convenient, since they can break the option list on the command line. Enclose the paths to the input (vmdk) and output (raw) disk in quotes.
For example:
c:\Program Files\Oracle\VirtualBox>VboxManage internalcommands converttoraw -format vmdk "c:\Users\makker\VirtualBox VMs\PTS_WebCenter_PS4\PTS_DWN_Webcenter_PS4-disk1.vmdk" "d:\VirtualMachines\VirtualBox\Webcenter11g\PTS_Webcenter_PS4-disk1.raw"
Converting image "c:\Users\makker\VirtualBox VMs\PTS_WebCenter_PS4\PTS_DWN_Webcenter_PS4-disk1.vmdk" with size 536870912
00 bytes (51200MB) to raw...


It should be needless to say that it is not so wise to do this while the Virtual Machine is running. Make sure it is down before running the conversion.

5.2 Convert from Raw to VDI
The command to convert it from raw to VDI is quite similar. Remarkably, the conversion to raw is apparently an "internal command" to VirtualBox, but the conversion from raw to another disk format is not. The command is:
VboxManage convertfromraw <path to raw disk> <path to (new) VDI disk> --format vdi --variant Standard
For example:
c:\Program Files\Oracle\VirtualBox>VboxManage convertfromraw d:\VirtualMachines\VirtualBox\Webcenter11g\PTS_Webcenter_PS4-disk1.raw "c:\Users\makker\VirtualBox VMs\PTS_WebCenter_PS4\PTS_Webcenter_PS4-disk1.vdi" --format vdi --variant Standard
Converting from raw image file="d:\VirtualMachines\VirtualBox\Webcenter11g\PTS_Webcenter_PS4-disk1.raw" to file="c:\Users\makker\VirtualBox VMs\PTS_WebCenter_PS4\PTS_Webcenter_PS4-disk1.vdi"...
Creating dynamic image with size 53687091200 bytes (51200MB)...

For a big disk, this can take quite an amount of time. In my case it took about 20 minutes per conversion.

Having done this, you can remove the VMDK from the storage in the Virtual Machine settings and add the VDI disk. You can do that in the same way as shown above for creating a new disk, except that you choose "existing disk" instead.
Make sure you put the disk on the correct SATA port. For the root disk this should be port 0: the root disk is of course the first one to be found by Linux and should get /dev/sda as device name. If it is your second disk it should get port 1 and thus be named /dev/sdb by Linux, and so on.

6. Shrink/Compact the OS disk
Now the disks are VDI, and the root disk only contains the OS, since we moved the staging and installation files in step 3. However, the disk is still huge: 30GB, while the OS only takes about 5 to 6 GB (OS plus swap space). So it needs to shrink.
But to the compact tool, the remaining 25GB of space has been written to and thus counts as data. We first have to zero out that space. There is a tool for that: zerofree. Unfortunately it is not in the Oracle Enterprise Linux distribution, but OEL 5 is compatible with RedHat 5, so I found my rpm here. Be sure you download the correct one (i386 or x86_64). Install it as root with
rpm -ihv zerofree-1.0.1-5.el5.i386.rpm
Well, and then we have a problem: zerofree only works on a read-only filesystem.
So we have to mount the root disk read-only. That's not so easy, since there are several processes that run against the root disk.
Most of them are ruled out when booting the system in single user mode.
To do so login as root and issue:
[root@server1 ~]# telinit 1

This brings the system into single user mode, terminating most of the services in the process. You'll find yourself logged on as root.
However, there will probably still be some services running.
To check them out:
sh-3.2# service --status-all|grep running

This will list all possibly running (or explicitly stated as "not running") services.
You can stop services by:
sh-3.2# service iscsid stop

Stop all running services and kill any remaining processes that keep files on the root filesystem open.
Then you can try to remount the root volume read-only:
sh-3.2# mount -n -o remount,ro -t ext3 /dev/VolGroup00/LogVol00 /

If that succeeds then you can zerofree the filesystem by:
zerofree /dev/VolGroup00/LogVol00
After this we're ready to compact the disk. You can shut down the system by issuing the command:
sh-3.2# init 0

Go to the VirtualBox install directory again in a command window.
To compact the disk, VirtualBox has the command:
VboxManage modifyhd <path to the VDI disk file> --compact

For example:
c:\Program Files\Oracle\VirtualBox>VboxManage modifyhd "c:\Users\makker\VirtualBox VMs\PTS_WebCenter_PS4\PTS_Webcenter_PS4-disk1.vdi" --compact

After a while the file is probably a lot smaller.

Conclusion
A first blog in a long time, and it became quite a story. It feels like a little VirtualBox and Oracle Enterprise Linux administration course.

The last few months I created several VMs with environments for doing courses (Virtual Course Environments, I call them), and several of these tasks helped me with that. In fact, see this post as a toolbox for VCE creation. And the 30GB VM? Exported to an OVA file it is only 11GB now...

Thursday, 30 June 2011

BPM 11g Advanced Workshop

Last week I was on a trip to Lisbon for a BPM 11g Advanced Workshop. At the same event there were also advanced courses on ADF and Webcenter.

It was a pleasant week, since there were showers all over the place in the Netherlands; since the Pentecost weekend there had hardly been a dry moment over here. Portugal was cloudless and the ocean was really refreshing.

The BPM workshop was actually about building a POC on the process of handling a request to increase the credit limit on a credit card of a fictitious bank. With a team of 3 or 4 members you had to create an application involving:
  • ADF Web application for initiating the process using either Webservice or (better) EDN.
  • Advanced Business Rules for determining the allowed increase
  • ADF Human Task for approval screens
  • Implementing Conversation/Correlation constructs using BPEL to be able to have cancel or fraud-detection messages influencing the running process. BPM currently lacks functionality to handle multiple conversations/correlations.
  • BAM, BAM, BAM (seen as highly convincing in a POC)
Although it was not required, I was stubborn enough to insist on creating a custom ADF Human Task screen to initiate the BPM process, since I had run into that problem in an earlier BPM11g try-out. I also started an investigation into what it takes to rebuild a BPM10gR3 project in BPM11g.

BPM11g is a really new product. The way I see it, there is little left from the old BPM10 product. You can find a significant part of BPM10 in the modeller (Studio), but the engine is replaced, using the engine of BPEL Process Manager as a base. On top of that, all the parts needed to run BPM processes are added.
BPM10 projects tend to have loads of PBL code: snippets of program code, or sometimes larger amounts, to implement a specific automatic activity. Mostly it is used to build up request messages to call (web)services and to process the response messages. But it is not unusual to have script tasks that calculate outcomes of business rules, or other business values. Often this is done in methods of objects in the object model in the Business Category. It is the part of BPM10 that surprised me the most, since you would not find this in BPEL (if you leave out embedded Java tasks) and it should not be needed in a tool that is targeted at business users. In BPM11g it is not needed, because you can use XSLT and XPath (or similar) expressions to process messages, and ADF BC to display the corresponding data in your Human Task screens.
I learned that Oracle is going to support scripting in the BPM12c release, but it will be deprecated upon introduction, since you would not need it and should not use it; it is only there to be able to migrate your projects to 12c. From there onwards you should replace your code with data associations and transformations. Also, as a best practice: only keep the data in your project that is really needed, for example primary/foreign key values. All related data needed for display in the screens you can query using ADF BC. And that's basically what I wanted to learn about.

Another neat subject we got presented was the upcoming feature pack that is scheduled for the summer. It is explicitly stated as not being a Patch Set, because it is not primarily focused on bug fixing but on adding new features. To name a few:
  • Business Rules made simpler
  • Support for conversation/correlation on message exchanges with multiple external systems, as you can and should do now with BPEL
  • Migration from Oracle Workflow!
The latter I found very surprising. It has been stated for years that Oracle will end Oracle Workflow with the end of support of the 10g product line. A very big pity if you ask me; I would very much support the idea that Oracle gives Oracle Workflow to the open source community. But that aside. Since Collaxa was acquired in 2004 because of BPEL PM, BPEL was thought to be the replacement for OWF, although until SOA Suite 11g no alternative was foreseen for OWF's Business Event System. With SOA Suite 11g there is an alternative in the form of EDN (Event Delivery Network). But because of this migration tool, apparently BPM is now rightly seen as the replacement of OWF. Although it is kind of late, because I suppose there are not that many OWF customers left; Oracle should already have convinced them all to move to BPEL. Particularly Oracle E-Business Suite customers could still have complex custom workflows. For those customers the migration to BPM11g would be very interesting.

All in all it was a very good and recommendable course, having worked, talked and dined with Oracle SOA/BPM specialists from all over Europe and the people from Oracle Product Management. I hope I have been able to give you a glimpse of the week and whet your appetite for another round of this event. I hope I can get into the ADF or Webcenter class... Not only because Lisbon is nice...

And now we wait ... , for
  • The sun breaking through in Holland
  • A nice vacation in France
  • ehm, oh yes: the SOA/BPM feature pack to be released this summer

Wednesday, 29 June 2011

Oracle Fusion Applications 11g Release 1 (11.1.1.5.0)

Since June 7, Oracle has quietly made Fusion Applications v1.0 available.

It can be downloaded from http://edelivery.oracle.com at this moment, from its own Fusion Applications section.

If you have 64-bit Linux available you may even install it, without too many hurdles:
http://onlineappsdba.com/index.php/2011/06/15/install-oracle-fusion-applications-in-10-steps/

Official information about the available 'modules' can be found here http://www.oracle.com/us/products/applications/fusion/index.html

The official pricing was revealed on June 27th; see
http://www.cio.com/article/685156/Oracle_Fusion_Applications_Pricing_Revealed
for a discussion.

An overview can be found on http://www.oracleappshub.com/fusion/oracle-fusion-and-oracle-fusion-applications-overview/ where you can also find a short description of Fusion Applications Supply Chain, Procurement and Project Portfolio Management.

The documentation can also be downloaded from edelivery.oracle.com, or viewed more easily online; see for example http://www.orastudy.com/oradoc/selfstu/fusion/sysint_role.htm for the documentation for the System Implementer/Integrator job role.

The first Fusion Applications book, by Richard Bingham, can be previewed (and ordered) from McGraw-Hill.

Of course I will probably not be able to avoid discussing this this afternoon, while delivering another Oracle Product Portfolio Overview training.

Wednesday, 15 June 2011

Webcenter 11g VM: Add Spaces

As mentioned in my earlier posts, there is an Oracle VirtualBox VM for Webcenter, but it is a VM with Webcenter Portal (11gPS3: 11.1.1.4). It turns out that Webcenter Spaces is not installed. And since I needed just that for my course preparations, I went looking for a VM that contains Spaces. I was directed to the Pre-built Appliances page for the Spaces VM, but it turns out that the links to download the files were removed, because "in the near future" a new version will be made available.


So that leaves me with the Webcenter Portal VM. On OTN there is only one download to install Webcenter, so the Webcenter software, including Spaces, is on the VM. The database just does not contain the repository schemas for Spaces. So what to do? Well, apparently the following:
  1. Run RCU (Repository Creation Utility) for Spaces against the database
  2. Extend the webcenter weblogic domain
  3. Fiddle around somewhat to make things actually work (I found that necessary)
Run the RCU
To be able to run the RCU you have to download the right version (11.1.1.4) and unzip it. You can download it from here; Yannick Ogena's blog was a good starting point here, by the way. Expand the nodes under "Prerequisites & Recommended Install Process" of the 11.1.1.4.0 version, look for "4 Repository Creation Utility (11.1.1.4.0) for Linux" and download that one. Unzip it somewhere so that you can reach it from inside the VM; a shared folder would be helpful. You could of course FTP/SCP it to the VM, but I don't like that idea since it will unnecessarily expand the virtual disks.
Then start "rcu" from the bin folder.
If you shrunk the database like I did following my previous post on this, you'll hit the error that your processes parameter is too small. It has to be at least 200.
If so, increase it like this:
SQL> alter system set processes=200 scope=spfile;
System altered.
SQL> shutdown immediate;
...
SQL> startup

In the "Database Connection Details" screen enter:
  • Hostname: localhost
  • Port: 1521
  • Service name: orcl
  • username: sys
  • password: welcome1 (all database passwords in the Webcenter Portal VM are "welcome1" so it might be handy to use that for all other schema's too)
Then you'll see a screen like:

Check the boxes like above. It will, however, suggest creating a new prefix "DEV1"; instead select the existing prefix "DEV". Then you'll notice that some of the schemas already exist and won't be created. That's fine. Then finish the wizard keeping the defaults and confirm with "OK" after the checks. It will ask to create non-existent tablespaces. Just confirm with OK.

 At the end you can press close to close the RCU.

Extend the domain
Now the repository is ready, it is time to extend the domain. To do so, start the domain configurator. You can find it in "/u01/app/oracle/product/Middleware/wlserver_10.3/common/bin". Start the script "config.sh".

It starts with the screen:
Choose Extend existing Weblogic domain and look for the webcenter domain. It is the folder "/u01/app/oracle/product/Middleware/user_projects/domains/webcenter":


When choosing next you'll be able to select the options to add to the domain. Check the Webcenter options following the next example screendumps:


Then you'll have to add the connection properties of the different repository schemas. What you can do is select every schema that does not have orcl.localdomain as service and localhost as host name. Then enter orcl.localdomain as the service in the fields at the top of the screen (just "orcl" is not enough, you have to add "localdomain" as the domain name). Since all the schemas have "welcome1" as a password, you can enter that in the Schema password field. Do not touch the Schema owner field; that won't be changed.


Then finish the wizard. After finishing the wizard the domain has some new Managed Servers added. You can start the Admin Server using the "startWeblogic.sh" command in "$DOMAIN_HOME/bin" (in our case: "/u01/app/oracle/product/Middleware/user_projects/domains/webcenter/bin").
After having started the admin server you can browse to http://localhost:7001/console. Log in as:
  • user name: weblogic
  • password: welcome1
At the left you'll see a portlet called "Domain Structure". Open up the "Environment" node and click on Servers.
A table with the managed servers is shown:
Name                  State           Listen Port
AdminServer(admin)    RUNNING (OK)    7001
UCM_server1           SHUTDOWN        16200
WC_Collaboration      SHUTDOWN        8890
WC_CustomPortal       SHUTDOWN        8892
WC_Portlet            SHUTDOWN        8889
WC_Spaces             SHUTDOWN        8888
WC_Utilities          SHUTDOWN        8891

Except for "WC_Collaboration" all the WC_% servers are added. Each with it's own port number.

A managed server can be started with the "startManagedWebLogic.sh" command, with the name of the managed server as an extra parameter, like:
startManagedWebLogic.sh WC_Spaces
You can start just the managed servers you'll need.

Tuning the domain
However, when you start the added managed servers, you'll find the error "<Getting boot identity from user.>" in the log. It seems you can enter the weblogic user credentials there, but it will fail.
I tried to add the info using the WebLogic Admin Server console, but that did not work.

When you start the managed server for the first time, it will add a folder with the name of the managed server in the servers folder within the domain folder, like:
[oel50wc oracle /u01/app/oracle/product/Middleware/user_projects/domains/webcenter/servers]$ ls
AdminServer     domain_bak   WC_Collaboration  WC_Portlet  WC_Utilities
AdminServerTag  UCM_server1  WC_CustomPortal   WC_Spaces

That is: if a managed server has not been started yet, its folder will not be there.
To solve the boot-identity problem, add a folder called "security" to the managed-server folder. In that folder a file called "boot.properties" is expected. You can copy the security folder from for example the "UCM_server1" or "WC_CustomPortal" managed servers. Edit the boot.properties file with the following values:
username=weblogic
password=welcome1
At startup of the managed server both values will be encrypted.
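For example, from the command line (WC_Spaces as the example managed server; the domain path is the one used in this VM):

cd /u01/app/oracle/product/Middleware/user_projects/domains/webcenter/servers/WC_Spaces
mkdir -p security
cat > security/boot.properties <<EOF
username=weblogic
password=welcome1
EOF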

To start the managed servers you can adapt the script vmctl.sh that is provided in the VM. I did not like that, since it has lots of duplicated code; I like a more modular approach, something like the sketch below.
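A minimal sketch of such a script (the domain path is the VM's; which managed servers to start is just an example):

#!/bin/bash
# start the WebCenter admin server and a selection of managed servers
DOMAIN_HOME=/u01/app/oracle/product/Middleware/user_projects/domains/webcenter

start_managed() {
  # start one managed server in the background, logging to /tmp
  nohup "$DOMAIN_HOME/bin/startManagedWebLogic.sh" "$1" > "/tmp/$1.out" 2>&1 &
}

nohup "$DOMAIN_HOME/bin/startWebLogic.sh" > /tmp/AdminServer.out 2>&1 &
# in practice, wait until the admin server is up before starting the managed servers
for srv in WC_Spaces WC_Portlet WC_Utilities; do
  start_managed "$srv"
done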

When you start all of the managed servers, after some time the VM will be very busy with ... swapping! Each of the managed servers is started in its own JVM, and it turns out that each JVM gets an initial heap of 512MB and a max of 1024MB. I think that is somewhat oversized for a demo VM. You can see it in the output of the startup script, since the first thing it does is log the memory settings:
JAVA Memory arguments: -Xms512m -Xmx1024m -XX:CompileThreshold=8000 -XX:PermSize=128m  -XX:MaxPermSize=512m 
The settings can be adapted in the "setDomainEnv.sh" script in the bin folder of the webcenter domain.
There you'll find the following values:
XMS_SUN_64BIT="256"
export XMS_SUN_64BIT
XMS_SUN_32BIT="512"
export XMS_SUN_32BIT
XMX_SUN_64BIT="512"
export XMX_SUN_64BIT
XMX_SUN_32BIT="1024"
export XMX_SUN_32BIT
XMS_JROCKIT_64BIT="256"
export XMS_JROCKIT_64BIT
XMS_JROCKIT_32BIT="256"
export XMS_JROCKIT_32BIT

The VM uses 32-bit Linux and the Sun JVM, so the values to change are "XMS_SUN_32BIT" and "XMX_SUN_32BIT" for the min and max heap size respectively. I changed them to "256" and "512". Since each managed server, including the admin server, uses the same script, these values are the same for each server. If you need to adapt them for just one managed server, you probably have to copy the scripts specifically for that particular managed server. Or install a node manager...

Together with a downsized database, this should make the VM run better.

Thursday, 9 June 2011

Webcenter 11g VM: JDeveloper project location

The Webcenter VM on OTN contains a tutorial. The tutorial lets you do some exercises with JDeveloper. I put my projects under /home/oracle/Jdeveloper/mywork, but then it lets you unzip a package into $JDEV_USER_DIR/mywork. It turns out that the environment variable points to /u01/JDevApps; I found that it is set in the ".bashrc" script.
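To check or change it (a sketch; pick whatever location you prefer):

grep JDEV_USER_DIR ~/.bashrc
# points to /u01/JDevApps in this VM; adapt the export if you want another location, e.g.:
# export JDEV_USER_DIR=/home/oracle/Jdeveloper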

Webcenter 11g VM: downsizing the database

This week I started with Webcenter 11g, using the Oracle VirtualBox VM that can be downloaded here.
One of the tips upfront was that the NAT network adapter should be changed to "Internal Network" or "Host Only", because otherwise the UCM part of the hands-on would not work.

I also changed the memory setting, to 3GB at first, but that caused my Windows 7 host to stutter. Apparently Windows has become very memory hungry. On my Linux (openSUSE) host it would not have been too much of a problem to raise the VM to 3GB. An 8GB laptop would be nice. So I brought the setting back to a more modest 2.5 GB.

But then I found that the install of Oracle DB 11g was pretty basic, and that means a memory consumption of some 700MB for the database alone. I remembered my earlier post on tuning Oracle DB 11g together with SOA Suite 10g on an OEL5 VM.

It was basically about resizing the memory. What I did was start up an XE database and look at its basic memory settings. For convenience I created a plain init.ora.

For the non-DBAs among you: you can do that by logging on as sysdba with:
sqlplus "/ as sysdba"
having set the ORACLE_HOME and ORACLE_SID:
ORACLE_HOME=/u01/app/oracle/product/11.1.0/db_1
ORACLE_SID=orcl
Once logged on, you can create an init.ora (also called a pfile) with:
create pfile from spfile;
Then you'll find an init.ora in the $ORACLE_HOME/dbs folder.

For an Oracle XE database the most interesting settings I found were:

java_pool_size=4194304
    large_pool_size=4194304
    shared_pool_size=67108864
    open_cursors=300
    sessions=20
    pga_aggregate_target=70M
    sga_target=210M

The sga_max_size was not set.
So I changed the 11g database with these settings, created an spfile from the pfile again (create spfile from pfile) and started it again.
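The full cycle in SQL*Plus looks roughly like this (a sketch of the steps described above):

SQL> create pfile from spfile;
-- edit $ORACLE_HOME/dbs/initorcl.ora with the settings below
SQL> create spfile from pfile;
SQL> shutdown immediate;
SQL> startup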

My initorcl.ora:
#orcl.__db_cache_size=222298112
#orcl.__java_pool_size=12582912
orcl.__java_pool_size=10M
orcl.__large_pool_size=4194304
....
#orcl.__pga_aggregate_target=159383552
orcl.__pga_aggregate_target=70M
#orcl.__sga_target=478150656
orcl.__sga_target=210M
orcl.__shared_io_pool_size=0
#orcl.__shared_pool_size=234881024
orcl.__shared_pool_size=100M
orcl.__streams_pool_size=0
...
#*.memory_target=635437056
*.open_cursors=300
#*.processes=150
*.sessions=20
...
*.sga_max_size=250
...

Note that I unset the db_cache_size and memory_target. I also replaced the processes parameter with the sessions parameter set to 20; these two parameters relate to each other, one being computed from the other.

I found that I had a database of 145MB! But that's somewhat too small, especially with the shared_pool_size being about 64M while the sga_max_size was 145M.

I changed the sga_max_size explicitly to 250M and the shared_pool_size to 100M:
SQL> alter system set sga_max_size=250M scope=spfile;
System altered.
SQL> alter system set shared_pool_size=100M scope=spfile;
System altered.
Then restarting the database resulted in a database of 250M:
Total System Global Area 263639040 bytes
Fixed Size 1299284 bytes
Variable Size 209718444 bytes
Database Buffers 50331648 bytes
Redo Buffers 2289664 bytes

That looks better to me. And having it posted again refreshes it for me.

Update: I now see that in the initorcl.ora I had an sga_max_size of 250, but that should be 250M... Maybe that is what caused the shared_pool_size and sga_max_size to end up too small.