Friday 28 December 2007

Oracle Designer 10gR2: Time to say goodbye

For the customer I am currently working for, I did the migration of the development environment.
The toolstack we started with was the following:

- development environment:
Designer 6i
Forms/Reports 6i
Database 10gR2
Windows Client/Server Runtime environment
Headstart Utilities
Several of our own scripts/utilities based on the Repository API

- deployment environment:
Oracle Forms/Reports 10gR2
Oracle Database 10gR2
Oracle Application Server 10gR2
Repository Object Browser

The target was to get rid of Designer 6i as support stops at the end of this year.
The fact that the deployment environment was already on Forms 10gR2 made life much easier: when preparing a new release, the forms (and reports) are already converted to the 10gR2 toolstack.
First I installed Oracle Developer Suite 10gR2 (client software) with some additional patches.
Then I created a new database which was a copy of the development database.

In this database I simply pressed [Upgrade] in the Repository Administration Utility to do the actual migration.
In less than an hour the migration was finished!
After this I tested all related software and everything worked fine.
Compared to the migration from Oracle Designer 1.3.2 to Oracle Designer 6i this migration went extremely fast and smooth.
I became a little suspicious, so I did some research to discover the differences between the two Oracle Designer versions. I compared the two database schemas and
found only some minor changes (10 changed objects).
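
As a quick illustration, a comparison like this can be done with a simple set query against the data dictionary; the repository owner REPOS_OWNER and the database link OLD_DESIGNER below are made-up names for the sketch, not from the actual environment:

-- objects that exist in the new repository schema but not in the old one
-- (swap the two queries to see it the other way around)
select object_name, object_type
from   dba_objects
where  owner = 'REPOS_OWNER'
minus
select object_name, object_type
from   dba_objects@old_designer
where  owner = 'REPOS_OWNER';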

After this I compared the client software. On this side there were also only small changes; they even kept the same old directory names
(cgenf61, repos6i, etc.).
Next I searched Metalink for a document describing the changes between the two versions, but I did not find anything.

The main difference that I at least noticed is that Oracle Forms 10g is generated instead of Oracle Forms 6i.

My conclusion is that Oracle Designer 6i was the last version of Oracle Designer with really new functionality, and that all later releases are simple upgrades. It is really time to say goodbye to the tool that has been my friend for a very long time...

Wednesday 19 December 2007

B2B Tips using VMware

To test certain functionality in B2B I use VMware. This is very convenient because you do one install in your VMware image, bring it down, zip it for archiving, and copy it to get a second instance.
Doing so, I have two VMware images with Integration B2B in them. When you have a 2 GB computer and set the memory of each guest OS to 850 MB, you can start up the two instances on your computer together. I agree it is not fast, and you have to accept swapping on your host and guest OS, but it works. Of course it is very convenient if you have 4 GB in your computer.

There are however some things you have to arrange to get it working properly.

1. Network settings
Take care that both VMs have a host-only adapter. When you start the guest OS after the copy, VMware will ask you whether to create a new system identifier or keep the old one. Let it create a new one, since it is in fact a new computer. If you're using Linux as a guest, you'll have to probe it for a new MAC address. If you have RHEL 4.0 with Gnome, you can do that via the application menu > System Settings > Network. Double-click the device (it is probably inactive) and choose the Hardware Device tab. You'll see a "Bind to MAC address" field with a Probe button to the right of it. Click the Probe button and after that you can activate your network device.

Determine the IP addresses of your guest OSes. If you're using Windows, you can do that with the ipconfig command in a command window. Under Linux you can do it as root in a terminal window with the command ifconfig. Be sure that you use the right IP address: the one that is coupled to your VMware host-only network. It is convenient to add the IP addresses to the /etc/hosts file of the guest OSes as well as of your host. Under Windows you can find it in c:\windows\system32\drivers\etc; under Linux of course it is /etc/hosts.
Choose a smart host name, for example rhel40-b2b-acme and rhel40-b2b-globalchips.
Having added them to your host's hosts file, you can use these names to connect to your B2B servers. When you use Firefox (or, if you really insist, Internet Explorer 7.0) you can then open the UIs of both servers in separate tabs.
Check whether the machines can reach each other by doing a ping.
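
For example, the hosts entries could look like this (the host-only addresses below are made up; use the addresses that ipconfig/ifconfig reports for your own host-only network):

# /etc/hosts on Linux, c:\windows\system32\drivers\etc\hosts on Windows
192.168.148.101   rhel40-b2b-acme
192.168.148.102   rhel40-b2b-globalchips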

2. Change the server name of your Apache
When you install the Oracle Application Server, the installer fills in the hostname of the server at install time as the ServerName in httpd.conf. This is inconvenient: when you connect to your server using your new hostname (as filled in in your hosts file), Apache will respond with that original ServerName in the URL, and Firefox will then do subsequent requests to the original server name, which may not exist in your hosts file. You can solve this by changing the server name in the httpd.conf file, which is found in /Apache/Apache/conf. Look for the ServerName directive and change it to the name you've given the server in your hosts files.
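
Using the example host name from above, the relevant directive in httpd.conf would then look something like this:

# httpd.conf of the Oracle Application Server (under the Oracle home in /Apache/Apache/conf)
ServerName rhel40-b2b-acme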

3. Copy your configuration.
The above steps you'll do if you want to use two B2B instances to test the internet-based protocols like ebMS. In that case you need two servers (or you'll have to use some kind of driver Java application). When you build up an agreement with quite a lot of business actions, you probably don't look forward to doing this twice: on the first B2B you do the configuration, but you need a complementary setup on the other B2B, since the Initiating capability on the first B2B needs the complementary Receiving capability on the other. And of course you'll have to set up the agreement and all the business actions and document types. To at least avoid creating the business actions and document types twice, you can do the setup at one site and export and import it at the other site. There you can delete all the supported business actions and agreements (but of course not the business actions and document types) and then add them correctly again, in the complementary way.

But I figured: let's try it the "nasty way". Do the configuration at one site and import it into the other. Open up a SQL tool and connect as the b2b user to the infrastructure database of the second B2B install. Query the table tip_party_t; you'll find the parties that you configured. You'll also see that one party has the ISHOSTED column set to 'Y', while the other ones have it empty or set to 'N'. Update the two parties that you want to switch: set the one that has to become the host to 'Y', and the other one to 'N'. Then click on the Trading Partners tab in the B2B UI and you'll see that the other party has become the hosted one.
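
A sketch of that switch in SQL; the party names are made up, and I assume here that the party name column in tip_party_t is called NAME (check the actual column names in your own repository first):

-- make GlobalChips the hosted party instead of Acme (hypothetical party names)
update tip_party_t set ishosted = 'N' where name = 'Acme';
update tip_party_t set ishosted = 'Y' where name = 'GlobalChips';
commit;
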
The pitfall here is that you are probably logged on as admin. This user is coupled to the party that used to be the hosted party but now is not. If you purge the repository and log off, you will not be able to log on again, because this user is deleted.
So in the UI go to the new hosted trading partner and then to the Users tab. Create a new user, name it for example "admin-host" (something other than "admin"), and add the Administrator role to it.
Now you can log off and log on again with the new user. Then you can delete the admin user of the original trading partner.

It should not be necessary to mention that switching host trading partners like this is not supported. It is just old-school hacking your way through the B2B repository. But hey, you did copy/archive your virtual machine, right?


Friday 14 December 2007

High Availability Architecture using Integration B2B


Integration B2B can be used for connecting with chain partners using the ebXML adapter. ebXML is actually a set of standards that includes ebMS, the messaging service standard supported by Integration B2B. Basically it is a SOAP-based standard over the internet, so the coupling with the chain partners is based on HTTP. That's why the B2B server needs an HTTP server to receive the messages. In this case (and for the other internet-based B2B protocols) that is also particularly handy for our high-availability architecture, because a simple load balancer can dispatch the requests over the two (or more) B2B servers. The B2B servers share the same infrastructure database, so you have to do the configuration only once.

Since they're stateless, there is not much more to do than simply install two Application Servers with Integration B2B that both use the same database. There is actually no use for clustering or Web Cache. Clustering might be convenient for maintenance using Grid Control, but not really for run time. Since we do not use the file adapter, we can have this in an active-active setup.

When using the file adapter you have to be aware that files may only be processed by one of the two instances. There is a tip.properties property that puts the two in an active-passive configuration, but I learned that there are problems with that. Also, B2B has particular requirements about the naming of the files that might not be supported by your trading partners. So you might need a script or a BPEL PM installation in front of B2B that takes care of these issues.

Since our configuration has to communicate with chain partners over a WAN or the internet, we don't want the HTTP servers directly connected to the WAN/internet. Of course there is a firewall in front of it, but it is good practice to define a demilitarized zone (DMZ) using two firewalls, to make it even harder to break in. The Oracle documentation suggests putting the HTTP server of the Integration B2B application server into the DMZ, or placing another HTTP server there that connects directly to the B2B server (bypassing its own HTTP server). That coupling is then based on AJP (the Apache JServ Protocol). The problem with that is that it is a little hard to monitor and maintain, since you need to have several ports open in the internal firewall. We therefore chose to have the Integration B2B server use its own HTTP server on its own port. We installed an Oracle HTTP Server 2.0 (OHS) from the Oracle Application Server companion CD (10.1.3). We particularly chose the 2.0 version because it contains additional features needed by SSO, which we might need in the future when publishing portals to the chain partners.


The HTTP Server 2.0 is configured as a regular Apache 2.0 server, so that it acts as both a forward and a reverse proxy server.
An example of the proxy configuration is below:
# mod_proxy itself must also be loaded; uncomment this line if it is not loaded elsewhere in your httpd.conf
#LoadModule proxy_module modules/mod_proxy.so
LoadModule proxy_connect_module modules/mod_proxy_connect.so
#LoadModule proxy_ftp_module modules/mod_proxy_ftp.so
LoadModule proxy_http_module modules/mod_proxy_http.so

# forward proxy for outgoing requests to the trading partners
Listen 8085
NameVirtualHost 192.168.248.148:8085
<VirtualHost 192.168.248.148:8085>
   ProxyRequests On
   ProxyVia On

   <Proxy *>
      Allow from all
   </Proxy>

   ErrorLog logs/error_proxyfwd8085.log
   CustomLog logs/access_proxyfwd8085.log common
</VirtualHost>

# reverse proxy for incoming requests, forwarded to the B2B server
Listen 4080
NameVirtualHost 192.168.248.148:4080
<VirtualHost 192.168.248.148:4080>
   ProxyRequests Off

   ProxyPass / http://nllvm12c:1234/
   ProxyPassReverse / http://nllvm12c:1234/

   ErrorLog logs/error_proxyrev4080.log
   CustomLog logs/access_proxyrev4080.log common
</VirtualHost>



You can simply put this snippet in a proxy.conf file that you include from the httpd.conf of the OHS.
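
For example, at the end of the httpd.conf of the OHS:

# pull in the forward/reverse proxy configuration shown above
Include conf/proxy.conf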

All the incoming traffic comes in on port 4080. This is the default port that Axway Cyclone (another B2B gateway product) uses. It is convenient to standardize on this so that all the chain partners use the same port, particularly when you use a hop in the middle: the hop then only has to configure its firewall to open up port 4080. Internally we use other ports, so in this example the incoming traffic on port 4080 is rerouted to the B2B server (nllvm12c) on port 1234.

In the tip.properties file of the Integration B2B server you need to put in the IP address of the load balancer or proxy server (in the picture the right one). The proxy server above is listening on port 8085 for these requests, so that is the port you need to fill in in tip.properties as the proxy port.

It is very important to use different ports for the internal traffic and the external (incoming) requests: the internal firewall should have other ports open than the external firewall to keep the DMZ secure. I think many security officers would also prefer to use other protocols internally, like the AJP protocol in the architecture suggested by Oracle. But for ease of configuration and maintainability we chose the configuration above.

Referential data in a multilingual application

The application described in the previous post consists of a lot of tables.
Some of them are populated by the developers and are not maintainable by end users.
Most of these tables have columns that have to be translated. We found a generic solution for this kind of table.

In the application all tables have an artificial primary key that is populated by a sequence.
We created a table [column_translations] with the following structure:
create table COLUMN_TRANSLATIONS
( ID          NUMBER(10)     not null
, TABLE_NAME  VARCHAR2(30)   not null
, COLUMN_NAME VARCHAR2(30)   not null
, RECORD_ID   NUMBER(14)     not null
, LANGUAGE    VARCHAR2(2)    not null
, TRANSLATION VARCHAR2(4000) not null
);
Given a table name, a column name and a record id, the translation is stored in this table. Around the table a generic function is created that retrieves the translation for a given language:
function get_translation
( p_table_name  in column_translations.table_name%type
, p_column_name in column_translations.column_name%type
, p_record_id   in column_translations.record_id%type
, p_language    in column_translations.language%type
) return column_translations.translation%type;
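
A minimal sketch of what the body of this function could look like (the original post does not show the implementation; caching and error handling are left out):

function get_translation
( p_table_name  in column_translations.table_name%type
, p_column_name in column_translations.column_name%type
, p_record_id   in column_translations.record_id%type
, p_language    in column_translations.language%type
) return column_translations.translation%type
is
  l_translation column_translations.translation%type;
begin
  select clt.translation
  into   l_translation
  from   column_translations clt
  where  clt.table_name  = p_table_name
  and    clt.column_name = p_column_name
  and    clt.record_id   = p_record_id
  and    clt.language    = p_language;
  return l_translation;
exception
  when no_data_found then
    return null; -- the calling view falls back to the untranslated column via nvl
end get_translation;
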
Now every table gets a separate view that contains translations for the columns that need to be translated.
For example:
create or replace view general_messages_vw
(id, code, message, .....)
as
select osc.id
,      osc.code
,      nvl( package.get_translation
           ( 'GENERAL_MESSAGES'
           , 'MESSAGE'
           , osc.id
           , package.get_language -- this function returns the current language
           )
          , ....
          )
from   general_messages osc -- the original table behind the view
Now you have to redirect all calls to the original table to the newly created view. You can do this by shuffling around synonyms and grants (see the sketch below).
We decided instead to redirect the calls explicitly, by changing the calls in the database code and in the Oracle Forms and Oracle Reports modules.
We used this mechanism only for referential tables that are not maintained by end users, but you could use it for other referential tables as well.
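
For completeness, a sketch of the synonym approach mentioned above; the schema names APP_OWNER and APP_USER are made up for this example:

-- let the application see the view under the name of the original table
grant select on app_owner.general_messages_vw to app_user;
create or replace synonym app_user.general_messages for app_owner.general_messages_vw;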

Friday 7 December 2007

Multilingual Oracle Forms application

Yesterday I did a session for the developers of the project I'm currently working on.
It was about the introduction of a multilingual version of their application and the consequences of this version for their work.

The application is an Oracle Forms 10gR2 application with about 800 forms. About half of the forms are 100% generated from Oracle Designer.
The other half of the forms were hand-built in earlier versions of Oracle Forms (starting with Oracle Forms 4.5) and migrated several times.

The data model used for the multilingual enabling is relatively straightforward. The first table contains all modules in the application.
A detail table of this table contains all user interface items of a certain type that exist in a module, for instance windows, text items, tab pages, checkboxes, etc. (all objects in a form). The last table is a detail of the user-interface-items table and contains the actual translations of the items. Columns in this last table are for instance language and text_item_type (a text item, for example, has both a prompt and a hint text).
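
A sketch of that three-table model; the table and column names below are made up for the example, they are not the project's actual names:

create table ml_modules
( id          number(10)   not null primary key
, module_name varchar2(30) not null
);

create table ml_ui_items
( id        number(10)   not null primary key
, module_id number(10)   not null references ml_modules
, item_type varchar2(30) not null -- WINDOW, TEXT ITEM, TAB PAGE, CHECK BOX, ...
, item_name varchar2(61) not null
);

create table ml_ui_item_translations
( id             number(10)     not null primary key
, ui_item_id     number(10)     not null references ml_ui_items
, language       varchar2(2)    not null
, text_item_type varchar2(30)   not null -- e.g. PROMPT or HINT for a text item
, translation    varchar2(4000) not null
);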

In the pre-form trigger of every module a call is made to a central program unit (the application is generated/built using Headstart) that reads the contents of these tables for that specific module and performs the actual translation. All items are translated using standard built-ins like set_item_property.
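
A minimal sketch of the idea behind that central program unit, using the hypothetical tables from the sketch above (this is not the project's actual Headstart code):

procedure translate_module (p_module in varchar2, p_language in varchar2) is
  cursor c_items is
    select itm.item_name, trl.text_item_type, trl.translation
    from   ml_modules mdl, ml_ui_items itm, ml_ui_item_translations trl
    where  itm.module_id  = mdl.id
    and    trl.ui_item_id = itm.id
    and    mdl.module_name = p_module
    and    trl.language    = p_language;
begin
  for r in c_items loop
    if r.text_item_type = 'PROMPT' then
      set_item_property(r.item_name, prompt_text, r.translation);
    elsif r.text_item_type = 'HINT' then
      set_item_property(r.item_name, hint_text, r.translation);
    end if;
  end loop;
end translate_module;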

The main problem in this project is how the metadata tables are populated. You could write logic that extracts this information from Oracle Designer; however, half of the forms are not generated, so that approach does not work.

We decided to use a third-party tool called FormsAPI Master (http://www.orcl-toolbox.com/fapimaster.asp).
This tool uses the Oracle Forms Open API to programmatically query, modify and create FMB/MMB/PLL/OLB modules. It has a scripting language (similar to PL/SQL) that gives you full access to and flexibility over the Oracle Forms API. We created a script that extracts all the relevant information from the Oracle Forms modules and writes it to scripts, which are then used to populate the metadata tables.

Another problem we were facing was the occurrence of boilerplate text. Unfortunately there is no built-in like set_boilerplate_text, so all boilerplate text has to be removed from the forms and replaced by alternatives. The older forms, for instance, contained items with boilerplate prompts (in Forms 4.5 the prompt was not a property of an item), so all occurrences of these prompts have to be transferred to the actual items: a boring and error-prone job. Fortunately FormsAPI Master helps again; in the scripts corner of http://www.orcl-toolbox.com/fapimaster.asp I found a demo script that automates this job.

The information extracted from an FMB is not the only information that has to be translated. Almost every application uses referential data that is not maintainable by end users: think of error messages, domain information, module information, etc.
I will post on this subject at a later moment.