Darwin-IT (Martien van den Akker): Darwin-IT professionals do ICT-projects based on a broad range of Oracle products and technologies. We write about our experiences and share our thoughts and tips.
2020-09-29: Logging in SOA Suite BPEL<p>This article feels like one I should have written years ago. As it is, I haven't, so let's do it now anyway.</p><p>A few weeks ago a <a href="https://community.oracle.com/thread/4342003">(somewhat older) question on community.oracle.com</a> caught my attention. It was about how to do logging in Oracle SOA Suite. </p><p>This is very much possible, and it can simply be done using an Embedded Java activity. However, if you want multiple log statements in a larger BPEL process, or have multiple BPEL components in a composite with log statements scattered all over them, then Embedded Java activities aren't that practical.</p><p>So I developed a slightly more sophisticated solution. For just one log statement it is a bit overdone, but with multiple log statements I find it more practical.</p><p>It starts with a quite simple <a href="https://github.com/makker-nl/blog/blob/master/SOALibraries/SOALogging/src/nl/darwinit/soautils/logging/Log.java" target="_blank">Log wrapper class</a>, which I added to GitHub. It is a wrapper around Java Util Logging that helps with instantiating a Logger instance. One of the constructors takes a compositeName and a componentName:</p>
<pre class="brush:java">public class Log {
    private static final String BASE_PACKAGE = "oracle.soa.bpel";
    private static Logger log;
    private String className;
    ...
    public Log(Class loggingClass) {
        super();
        setClassName(loggingClass.getName());
        log = Logger.getLogger(getClassName());
    }

    public Log(String loggingClass) {
        super();
        setClassName(loggingClass);
        log = Logger.getLogger(getClassName());
    }

    public Log(String compositeName, String componentName) {
        super();
        String loggingClass = BASE_PACKAGE + "." + compositeName + "." + componentName;
        setClassName(loggingClass);
        log = Logger.getLogger(getClassName());
    }
</pre><p>An important aspect here is the static variable <i>BASE_PACKAGE</i>, which is set to "oracle.soa.bpel"; I'll get back to that in a minute. The constructor uses this, together with the compositeName and componentName, to build up a sort of class name, prefixed with the <i>BASE_PACKAGE</i>.</p><p>It also has some logging methods that require a methodName, which it uses as an extra identifier for the logging, added to the full class name. <br /></p><p>I 'deployed' this class to a <a href="https://github.com/makker-nl/blog/blob/master/SOALibraries/SOALogging/dist/SOALogging.jar" target="_blank">jar file</a>. This makes it reusable in multiple composites, while the source is versioned only once.<br /></p><p> Add it to the <i>SCA-INF/lib</i> folder of your composite:</p><p></p><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi7ZFdycucdCfoIi2uQg6hjRkoCkPu5bv5DQnzW8zBFymqjzuG2_1YJMzKUCxPFILhe05rC2VrfuYnGmHjj95lzFHVlorhbwXlQrO2444uowmMX9BytxVehAWSKKNfYG5rtCQcmC5Xf59me/s665/sca-inf-lib.png" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="100" data-original-width="665" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi7ZFdycucdCfoIi2uQg6hjRkoCkPu5bv5DQnzW8zBFymqjzuG2_1YJMzKUCxPFILhe05rC2VrfuYnGmHjj95lzFHVlorhbwXlQrO2444uowmMX9BytxVehAWSKKNfYG5rtCQcmC5Xf59me/s320/sca-inf-lib.png" width="320" /></a></div>But you could probably also add it to the <i>oracle.soa.ext_11.1.1</i> folder in your <i>$MW_HOME/soa/soa/modules</i> folder and run the Ant script there: <p></p><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgPH1TMpNWGhgWOj1qwMAyjbTrt75oxq3aZ1Gv7Usnsb4ewtwyOIrRVnlehyomKYHQxlyMtdIhNBgwPa9qRg8Nyb5J7eeK-x-NVl6npSDL8folJm2lB6rL5DoiGe4TQnkdAMV-00BEIEUwM/s1266/oracle.soa.ext_11.1.1-folder.png" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="300" data-original-width="1266" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgPH1TMpNWGhgWOj1qwMAyjbTrt75oxq3aZ1Gv7Usnsb4ewtwyOIrRVnlehyomKYHQxlyMtdIhNBgwPa9qRg8Nyb5J7eeK-x-NVl6npSDL8folJm2lB6rL5DoiGe4TQnkdAMV-00BEIEUwM/s320/oracle.soa.ext_11.1.1-folder.png" width="320" /></a></div><p>After running Ant in that folder, you should restart the server. The Ant script adds all the jar files in that folder, including yours, to the manifest file of the <i>oracle.soa.ext.jar</i> file there. Doing so appends them to the classpath of SOA Suite.</p><p>To use this class in your BPEL process, add the following import at the beginning:</p>
<pre class="brush:xml"><import location="nl.darwinit.soautils.logging.Log" importType="http://schemas.oracle.com/bpel/extension/java"/></pre><p>Like this:</p><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgsPUiiCoGjSYt5WaielsDdg1_j5aOcr0TRWMitOWkr6gmu561LwC3nb9T3VsG9uquDjgSh3gOuXQ6EFv0UiYrO0s6XyiRZF9UAWnDP56vU1QIGuYRo7ERjAC5mt36yDofUIpoa4fRAg5J4/s1059/import-Log-class.png" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="256" data-original-width="1059" height="154" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgsPUiiCoGjSYt5WaielsDdg1_j5aOcr0TRWMitOWkr6gmu561LwC3nb9T3VsG9uquDjgSh3gOuXQ6EFv0UiYrO0s6XyiRZF9UAWnDP56vU1QIGuYRo7ERjAC5mt36yDofUIpoa4fRAg5J4/w640-h154/import-Log-class.png" width="640" /></a></div><p></p><p>Having done that, you can use the Log class in an Embedded Java activity. To begin with, I find it useful to add the Embedded Java activity to a scope that contains simple xsd:string-based variables. Using an Assign you can easily assign proper values to the local variables:</p><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi9CmbhO4ZSWBQrRqLvaZkNoFDP1ntYvGRtIhSgMBuU0bnl5Eodmq55YWx8DF0Fj6kV946ZyugvOg9b7xRAE-UhPCC1R78m478TIoTWRKgIdjrLCbPMVjWrkOQDVDEJT9CTwpK2idzDICkm/s1503/AssignLogAttributes.png" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="793" data-original-width="1503" height="211" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi9CmbhO4ZSWBQrRqLvaZkNoFDP1ntYvGRtIhSgMBuU0bnl5Eodmq55YWx8DF0Fj6kV946ZyugvOg9b7xRAE-UhPCC1R78m478TIoTWRKgIdjrLCbPMVjWrkOQDVDEJT9CTwpK2idzDICkm/w400-h211/AssignLogAttributes.png" width="400" /></a></div><p>The <i>compositeName</i> and <i>componentName</i> variables can be filled with <i>ora:getCompositeName()</i> and <i>ora:getComponentName()</i> respectively. 
Doing so makes it easier to access these values in the Embedded Java activity. The Embedded Java snippet in my example project is:</p>
<pre class="brush:java">String compositeName = (String) getVariableData("compositeName");
String componentName = (String) getVariableData("componentName");
String text = (String) getVariableData("text");
String methodName = (String) getVariableData("methodName");
Log log = new Log(compositeName,componentName);
String message="**** BPEL "+methodName +" " + text +" ****";
log.info(methodName, message);
addAuditTrailEntry(message);</pre><p>The <i>addAuditTrailEntry()</i> shown in this snippet is an API that also adds the message to the flow trace:</p><p></p><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjau5YTZ20oa6_6b9zundhhyphenhyphenBlfIW3ypybCk-w8iKw_Y7-50sRtagzG4TAGUBciPvQepo6H98DVfSzwr-v-WQDacnLkbfSM0-mqcjxaydDCUe0k-Wh39qkss2mIwWJggYOSgCB6SJLSqaiy/s665/AddAuditTrail.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="107" data-original-width="665" height="64" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjau5YTZ20oa6_6b9zundhhyphenhyphenBlfIW3ypybCk-w8iKw_Y7-50sRtagzG4TAGUBciPvQepo6H98DVfSzwr-v-WQDacnLkbfSM0-mqcjxaydDCUe0k-Wh39qkss2mIwWJggYOSgCB6SJLSqaiy/w400-h64/AddAuditTrail.png" width="400" /></a></div><br />This is not necessary for logging, and not specifically in scope of this article, but good to mention.<br />The message built up in this snippet is the concatenation <i>"**** BPEL "+methodName +" " + text +" ****"</i>. This may be handy in the audit trail, but in the log you may want to show just the text, like <i>log.info(methodName, text)</i>.<br /><p></p><p>Earlier I mentioned the <i>BASE_PACKAGE</i> variable in the Log class. This refers to the <i>oracle.soa.bpel</i> logger. 
This can be configured in the soa-infra Log Configuration:</p><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjxo64gcXGdA0FLRsCzvr4xdHqJSUbL92nU76SsykNK9n6V0o9KlXr3D7dMsAmnLvQYOK3GazfANm44kqjQMPW0cRhQEV_dXhp6qSjyxQHPveiKpQVD6qjOk0IyIw3lDO02QFrRMG_qk9NJ/s664/Menu-LogConfiguration.png" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="664" data-original-width="536" height="320" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjxo64gcXGdA0FLRsCzvr4xdHqJSUbL92nU76SsykNK9n6V0o9KlXr3D7dMsAmnLvQYOK3GazfANm44kqjQMPW0cRhQEV_dXhp6qSjyxQHPveiKpQVD6qjOk0IyIw3lDO02QFrRMG_qk9NJ/s320/Menu-LogConfiguration.png" /></a></div><br /><p>And then:</p><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhaYmz6vOImFz_MwdC-p4ZvkQHoEIETeO14eufdKRm5Nsc4JS5Cuzzl3tftqcQcTieOTPirrEBgJYGJvr1flU7wSTDTSr-zKcmSOjjT5I9NRu2USuwZzNXGhzbvwCAzOhhzXPU89FK7suu5/s1117/LogConfigurationOracleSoaBPEL.png" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="720" data-original-width="1117" height="413" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhaYmz6vOImFz_MwdC-p4ZvkQHoEIETeO14eufdKRm5Nsc4JS5Cuzzl3tftqcQcTieOTPirrEBgJYGJvr1flU7wSTDTSr-zKcmSOjjT5I9NRu2USuwZzNXGhzbvwCAzOhhzXPU89FK7suu5/w640-h413/LogConfigurationOracleSoaBPEL.png" width="640" /></a></div><p>You could add a custom logger, but it is easier to use an existing one, and to me it makes sense to use the <i>oracle.soa.bpel</i> logger. If you choose to use another logger, you need to change the BASE_PACKAGE variable in the class.<br /></p><p>Make sure it has a severity or log level low enough to cater for your logging. Set it on the Runtime Logger; for persistence purposes you would probably need to add it to the "Loggers with Persistent Log Level State" as well. 
For changing the Runtime Logger, you do not need to restart the server. You do need to make sure that the "minimum severity to log" on the server in the WebLogic console is set low enough as well.<br /></p><p>Before you test, it can be handy to "tail" the diagnostic log as follows:</p>
<pre class="brush:bash">[oracle@ol7vm logs]$ tail -f DefaultServer-diagnostic.log |grep oracle.soa.bpel
</pre><p>I also added my <a href="https://github.com/makker-nl/blog/tree/master/SOALoggingDemo/SOALoggingBPEL" target="_blank">demo BPEL process to GitHub</a>. If you test it, the output will be as follows:</p><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhqaTXsnq7zZj9fb1Yb9XA_j1V4w2z19IDP1A1E7W2LWLOaOdhy2BSHfIAFkzg05dfh-sKvYsak9Tp1s8k1VG7Gu4g3wWo4H_6NANP7z67GT4l-ytElRh0S1JZKHJPX2_UjrFU7gtHJyb0M/s1597/diagnostic-log.png" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="205" data-original-width="1597" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhqaTXsnq7zZj9fb1Yb9XA_j1V4w2z19IDP1A1E7W2LWLOaOdhy2BSHfIAFkzg05dfh-sKvYsak9Tp1s8k1VG7Gu4g3wWo4H_6NANP7z67GT4l-ytElRh0S1JZKHJPX2_UjrFU7gtHJyb0M/s16000/diagnostic-log.png" /></a></div><br /><p>So that works! Easy, right?</p><p>Now, this works for one simple log statement in a BPEL process. But what if you want to trace the flow using multiple log statements, and maybe even log particular errors in fault handlers? 
The scope I introduced can be converted to a subprocess:</p><p></p><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjT9MeroBv2th6lvoBP1yA1UYGHNepbb32tDoCJb5_MkMmFGMqW4Av5AaasZyVssPN2pX88kRWBWTDlW7ITcshD3X3xNi9Pw0K_vEy8-RJfffDKQWaz8jzl_DXbMAO3Dz-3EqvvM9r864c1/s514/ConvertToSubProcess.png" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="468" data-original-width="514" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjT9MeroBv2th6lvoBP1yA1UYGHNepbb32tDoCJb5_MkMmFGMqW4Av5AaasZyVssPN2pX88kRWBWTDlW7ITcshD3X3xNi9Pw0K_vEy8-RJfffDKQWaz8jzl_DXbMAO3Dz-3EqvvM9r864c1/s320/ConvertToSubProcess.png" width="320" /></a></div><br /> I renamed the subprocess to <i>Log</i>, and then you can remove the Assign:<p></p><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjRPJU4J4hh-D8I3Qn7CYv1yB93Jvp02u66Kee7FFG-V6q4ugirSTWx5OJLx_yBlDELT4mQNeALqYDwlQNMupueoBD-oZYJPA8SU6blCzfMyUtEvIbdt9HOnY9NhdZtNfA0FN7V0YtjnITe/s632/LogSubProcess.png" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="368" data-original-width="632" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjRPJU4J4hh-D8I3Qn7CYv1yB93Jvp02u66Kee7FFG-V6q4ugirSTWx5OJLx_yBlDELT4mQNeALqYDwlQNMupueoBD-oZYJPA8SU6blCzfMyUtEvIbdt9HOnY9NhdZtNfA0FN7V0YtjnITe/s320/LogSubProcess.png" width="320" /></a></div><br /><p>The scope is replaced with a Call activity, which can be renamed. 
The scope variables now function as call arguments:</p><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjeW9aArF8NvBRAGn2vHKhyBILTgGd1gIM9zSFyohcGt-8gjQrG7G2x08vkmaH5TRnCBxbaHjXu1z2wUBeFfEQeuilL7yLBriyAXUM9U1EsEG2_VL7Ghoo4P0gdG1DbMazYAjwwuQuf59PA/s532/LogStart.png" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="530" data-original-width="532" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjeW9aArF8NvBRAGn2vHKhyBILTgGd1gIM9zSFyohcGt-8gjQrG7G2x08vkmaH5TRnCBxbaHjXu1z2wUBeFfEQeuilL7yLBriyAXUM9U1EsEG2_VL7Ghoo4P0gdG1DbMazYAjwwuQuf59PA/s320/LogStart.png" width="320" /></a></div><p>You can copy&paste this and rename it to reflect for instance a LogEnd activity:<br /></p><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEin9sfzW9dD643d_XQifeB_Jy5xgkTx4DecYxzC-kHNQ9OH9cEtxwbFxI-mM05mfZG-bJcotFhcTNeHsychNQpFfAsW2QUmmW-DuiDO3-4tCx6XPN1VrSPlvPfsfQhlEje6PxfbVAF1DA47/s891/LogStartLogEndBPEL.png" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="687" data-original-width="891" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEin9sfzW9dD643d_XQifeB_Jy5xgkTx4DecYxzC-kHNQ9OH9cEtxwbFxI-mM05mfZG-bJcotFhcTNeHsychNQpFfAsW2QUmmW-DuiDO3-4tCx6XPN1VrSPlvPfsfQhlEje6PxfbVAF1DA47/s320/LogStartLogEndBPEL.png" width="320" /></a></div><br /><p>Testing this, gives the following output:</p><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjlvrRUINFCzXbyPwVIyNonIVJ7ySF0TwC6S_9S_y4nU78nECs2e-7znRbl5mA3iIxhLZ0GRVwMXj_WYMy29RrDrOyWYlNufqJYmhI11fkzBJMzniPcaHwOsXlrbzVmjyADR2m1RIe8HqvA/s1574/LogStartLogEndOutput.png" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="217" data-original-width="1574" 
src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjlvrRUINFCzXbyPwVIyNonIVJ7ySF0TwC6S_9S_y4nU78nECs2e-7znRbl5mA3iIxhLZ0GRVwMXj_WYMy29RrDrOyWYlNufqJYmhI11fkzBJMzniPcaHwOsXlrbzVmjyADR2m1RIe8HqvA/s16000/LogStartLogEndOutput.png" /></a></div><p></p><p>As can be seen, a simple log entry is now created using a simple Call activity. In this example the Log subprocess is within the same BPEL process, so you could move the setting of the componentName and compositeName variables into an Assign in the subprocess, reinstating the original Assign.</p><p>However, you could of course move the embedded subprocess to a reusable subprocess. And then it might be useful to be able to provide at least the componentName as an argument.</p><p>I think it is not so useful to put it in a separate BPEL process that can be called from external composites. In that case you would need to do an Invoke with an accompanying Assign and variable declarations for each log statement. So I would prefer to define either an embedded or a reusable subprocess for each composite/BPEL process that you want to do logging in.</p><p>Although this article comes a few years late, I hope it helps. <br /></p>Martien van den Akker, 2020-09-25: My boxes in the Vagrant Cloud<p>Last year I wrote about how I created a seamless desktop using <a href="https://blog.darwin-it.nl/2019/03/my-seemless-linux-desktop-using.html">Vagrant, VirtualBox and MobaXterm</a>.</p><p>This week I was busy creating a new box with Oracle Linux, later switching to CentOS, and installing several IDEs in it, as well as Docker.</p><p>Next week a big change is due for me, and for that I'll be switching laptops. Others are also going to use my Vagrant projects. Up till now I used local file-based boxes. 
So if you wanted to use the projects that I posted on <a href="https://github.com/makker-nl/vagrant">GitHub</a>, you not only had to have the install binaries in a certain folder structure, but also the particular box downloaded into the particular boxes folder. </p><p>This morning I decided to figure out how to publish them on the Vagrant Cloud. And it is surprisingly easy, of course! Why didn't I do that before? Well, actually, I started this by preparing a workshop for colleagues, and having every participant download the same box simultaneously did not seem a good idea. So I distributed the Vagrant project with all the installers, including the box, on a USB stick.</p><p>But now, preparing for my laptop switch and distributing it to my colleagues, it does seem a good idea.</p><p>I found <a href="https://blog.ycshao.com/2017/09/16/how-to-upload-vagrant-box-to-vagrant-cloud/" target="_blank">this step-by-step article</a> that guided me through the process. But let me go through the process myself.</p><p>First you'll need an account on the <a href="https://app.vagrantup.com/boxes/search">Vagrant Cloud</a>. 
You can get there from the main page of <a href="https://www.vagrantup.com/">vagrantup.com</a>. Then click on the <i>Find Boxes</i> button:</p><p> </p><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjA4rpqBGMAECqPs_DECMVbktHQO9H1FpDXpy1n-tVD39QZX6yW89awxhj3zkHuTJJSnoS0VT8oEwKuAvcR_kYfnFjQ0oJjdDaxvtWrxIyvbSLns7MR7JWLZsm73ekiaH260q9tqjvczODW/s656/VagrantFrontPage.png" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="395" data-original-width="656" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjA4rpqBGMAECqPs_DECMVbktHQO9H1FpDXpy1n-tVD39QZX6yW89awxhj3zkHuTJJSnoS0VT8oEwKuAvcR_kYfnFjQ0oJjdDaxvtWrxIyvbSLns7MR7JWLZsm73ekiaH260q9tqjvczODW/s320/VagrantFrontPage.png" width="320" /> </a> <br /></div><p></p><p>Create a new account, or log in if you already have one.</p><p>You'll land on the <i>Search</i> page:</p><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhOxsC0dld7jkTp-5-rfSGm2KQZ9lS2L1LhusqPmn5uDNsy_as4-hHx2b5lz4u-lzqCrVjmoEMPAjCI9WsD0kXlYvGMM0H9HVUQIawW00HLSJE4YNSq1YjByML4NHTQ27_SQjOB6yQP_QI2/s1192/VagrantSearch.png" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="563" data-original-width="1192" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhOxsC0dld7jkTp-5-rfSGm2KQZ9lS2L1LhusqPmn5uDNsy_as4-hHx2b5lz4u-lzqCrVjmoEMPAjCI9WsD0kXlYvGMM0H9HVUQIawW00HLSJE4YNSq1YjByML4NHTQ27_SQjOB6yQP_QI2/s320/VagrantSearch.png" width="320" /></a></div><br />
There you can search for existing boxes. But to create and upload your own, click on the Dashboard tab: <br />
<div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi2i1eB2NbEyrtE6zfuK1MR84y08nSeBmdVXcaNmwRV5aFq0pDdH5h5XLpqFGU5uGkSMJxUAXhzfWorGzBea9B9wkngsIWYfO3_8MlDfHrE3HyAcOjqcHHSijcGoG19ikkTPXGebcSp2ych/s1185/VagrantDashboard.png" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="587" data-original-width="1185" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi2i1eB2NbEyrtE6zfuK1MR84y08nSeBmdVXcaNmwRV5aFq0pDdH5h5XLpqFGU5uGkSMJxUAXhzfWorGzBea9B9wkngsIWYfO3_8MlDfHrE3HyAcOjqcHHSijcGoG19ikkTPXGebcSp2ych/s320/VagrantDashboard.png" width="320" /></a></div><p></p><p>There, click the "New Vagrant Box" button:</p><p></p><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjcFvn_XZkTPW3WUnH386GjY49AifE1e2GZgoMIl1dew_sWicjuvVKbPaPjS1OuD6UcMMhyphenhyphenQeh2rDzCSN-F6QURJLHZ5aErGc6Vb_o9eSI2l_UQ1lwx1KN8lAdqhhZvUMVOvpLBcUjfdYxz/s1285/VagrantNewBox.png" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="634" data-original-width="1285" height="317" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjcFvn_XZkTPW3WUnH386GjY49AifE1e2GZgoMIl1dew_sWicjuvVKbPaPjS1OuD6UcMMhyphenhyphenQeh2rDzCSN-F6QURJLHZ5aErGc6Vb_o9eSI2l_UQ1lwx1KN8lAdqhhZvUMVOvpLBcUjfdYxz/w640-h317/VagrantNewBox.png" width="640" /></a></div><br />Here, give the box a name and a short description. My first boxes had a version number in the name, but I found that a bit overdone, because later on you get to define box versions. Click on the <i>Create box</i> button. 
I would urge you to provide a description that gives some basic, identifiable information about the box.<br /><p></p><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg0ftoa-HQbJ07irENmCGFVJ_StYSpSjoXmeY9uGICBia-ZhOi_a4Z1mb1nSxQloR2xogS6EDKpz3KGlaAajKSFKnjQUB7fKjGkcYuQhAJvfzzsivZ3Qv3H22E0JF84WdRu2uoeG2PSKIti/s1166/VagrantNewBoxVersion.png" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="610" data-original-width="1166" height="334" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg0ftoa-HQbJ07irENmCGFVJ_StYSpSjoXmeY9uGICBia-ZhOi_a4Z1mb1nSxQloR2xogS6EDKpz3KGlaAajKSFKnjQUB7fKjGkcYuQhAJvfzzsivZ3Qv3H22E0JF84WdRu2uoeG2PSKIti/w640-h334/VagrantNewBoxVersion.png" width="640" /></a></div><br /><p>Provide a version (it's smart to start with 1; it will check this) and possibly a description. Although I find a good base description important, I'm not sure what to write as a description for a first version. For subsequent versions, it seems good to fill in as well, like with Git/Subversion commit messages.</p><p> Within the version, I was looking for an upload button, but you first get to define a provider. So click on the provider button:<br /></p>
<div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjGbbxWuJ9OiViQYS-eucoCNDHnkld8n-wWSqMvmJAVaRjbkQTn9NAnpAB-B0PJkWq9qSIEimoQvTv7AOZoI-2E5mvIFmdleAz1FVONV57z70ujJu7-PCQwb2HBpvBBI_ZxvZlFKl4-nDBV/s1184/VagrantNewBoxAddProvider.png" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="785" data-original-width="1184" height="424" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjGbbxWuJ9OiViQYS-eucoCNDHnkld8n-wWSqMvmJAVaRjbkQTn9NAnpAB-B0PJkWq9qSIEimoQvTv7AOZoI-2E5mvIFmdleAz1FVONV57z70ujJu7-PCQwb2HBpvBBI_ZxvZlFKl4-nDBV/w640-h424/VagrantNewBoxAddProvider.png" width="640" /></a></div><br />
On the following page you get to define a provider. Provide <i>virtualbox</i> as the provider name; Vagrant needs to be able to recognize and use it. There is no pick list, though, just a free-text field.
<br /><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgsW6Wxa9bca_J6jYy__jfdwPEfHZcdxNS7xoAlzBE_iRaJN7o877E6__56-D9C6en-1vH6OTzPDuvrAReZoNjVhfWsbGjC5AMNev_Oy4-0P9Ro28gH6QBwu_SD50ZJXzHuctQY_mmZTLNr/s1203/VagrantNewBoxAddProviderVirtualBox.png" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="656" data-original-width="1203" height="348" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgsW6Wxa9bca_J6jYy__jfdwPEfHZcdxNS7xoAlzBE_iRaJN7o877E6__56-D9C6en-1vH6OTzPDuvrAReZoNjVhfWsbGjC5AMNev_Oy4-0P9Ro28gH6QBwu_SD50ZJXzHuctQY_mmZTLNr/w640-h348/VagrantNewBoxAddProviderVirtualBox.png" width="640" /></a></div><p>I want to upload to the Vagrant Cloud, so the default will suffice. Click on the <i>Continue to upload</i> button: </p><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiuINUIFDZXyL85i3-Z72PsOwfaLYsHBJCv_ZX6K7FOQnZLQPKXm5SzFUQNlksMZZop50asAyJRuRfvq9AZ5i_jyuRs2mmmeN3R-ttrEIxLRvP22R5VGeMyYYPIgyuqNn8Y60hMu8izoFix/s1178/VagrantNewBoxAddProviderUpload.png" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="690" data-original-width="1178" height="374" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiuINUIFDZXyL85i3-Z72PsOwfaLYsHBJCv_ZX6K7FOQnZLQPKXm5SzFUQNlksMZZop50asAyJRuRfvq9AZ5i_jyuRs2mmmeN3R-ttrEIxLRvP22R5VGeMyYYPIgyuqNn8Y60hMu8izoFix/w640-h374/VagrantNewBoxAddProviderUpload.png" width="640" /></a></div>
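<p>As an aside: instead of clicking through these pages, more recent Vagrant versions also include a <i>vagrant cloud</i> CLI that can create the box, the version and the provider, and upload the file in one go. A sketch, reusing the box name and file path from this example; check <i>vagrant cloud publish -h</i> for the exact options of your Vagrant version:</p>

```shell
# Authenticate against Vagrant Cloud first (prompts for credentials or a token)
vagrant cloud auth login

# Create box "makker/CO78SwGUI", version 1, provider "virtualbox",
# upload the local box file, and release it in one command.
# The box file path is the one from this example; adjust it to your own.
vagrant cloud publish makker/CO78SwGUI 1 virtualbox ../boxes/CO78SwGUIv1.0.box \
  --version-description "First release" --release
```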
<p>Using the Browse button, browse to your Vagrant box file and upload it.</p><p>Now, to be able to use the box, and for others to discover it, you'll need to <i>release</i> it. So go to the <i>versions</i> sub tab, and click on the <i>Release </i>button for the <i>v1</i> version:</p><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi7m9k9TsHZg3XR5ofLnnYnGU2ax673yCzwiVyOju6A91f79cnoi42RkgMu_LATEej11LmBTrnM4WncYZXuuAhxeVgBjT5I-eERy2GxtY0dH7q5X9jVP5F949tjyptzfjF4l9WvO_D_Jg0t/s1154/VagrantNewBoxVersionRelease.png" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="827" data-original-width="1154" height="458" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi7m9k9TsHZg3XR5ofLnnYnGU2ax673yCzwiVyOju6A91f79cnoi42RkgMu_LATEej11LmBTrnM4WncYZXuuAhxeVgBjT5I-eERy2GxtY0dH7q5X9jVP5F949tjyptzfjF4l9WvO_D_Jg0t/w640-h458/VagrantNewBoxVersionRelease.png" width="640" /></a></div><br />On the following page, click on the release button:<br />
<div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhVNvDV8vfvtH17qCMwHHjhbDtpjmL78lRG6mFrizR9tEFRJSYtxke5Fv8_yQhm3Vm6GQPAxbGpLwb-R3hAWSK9f7Vkwkx1rQB_5qESNX5oI0TZmWF5XrK4mIxrq3mE_8BWzBMLYJwc0aP5/s1194/VagrantNewBoxVersionRelease-2.png" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="715" data-original-width="1194" height="384" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhVNvDV8vfvtH17qCMwHHjhbDtpjmL78lRG6mFrizR9tEFRJSYtxke5Fv8_yQhm3Vm6GQPAxbGpLwb-R3hAWSK9f7Vkwkx1rQB_5qESNX5oI0TZmWF5XrK4mIxrq3mE_8BWzBMLYJwc0aP5/w640-h384/VagrantNewBoxVersionRelease-2.png" width="640" /></a></div><p>Now my boxes are searchable:</p><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiZcv2vdjyu_9Lge4dOarMYiWycRUzVx77Tjg3p6VtxTlHAjoEfCNvZuH4N04Ei7Zp-R8MkB_r_uWlvQ6_qmZxcb8uvS1hMZSv_jyM3eXqPdnUk4gVG2voAWNhgZ5KhScCdwLhFZe_zBzdo/s1185/VagrantSearchMakker.png" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="683" data-original-width="1185" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiZcv2vdjyu_9Lge4dOarMYiWycRUzVx77Tjg3p6VtxTlHAjoEfCNvZuH4N04Ei7Zp-R8MkB_r_uWlvQ6_qmZxcb8uvS1hMZSv_jyM3eXqPdnUk4gVG2voAWNhgZ5KhScCdwLhFZe_zBzdo/s320/VagrantSearchMakker.png" width="320" /></a></div><br /><p>To use a box, you can create a Vagrant file with the following reference to the box:</p><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEizMX015VE_bncwwwvLqNADGzxcy-eq28pnmeVNK4fD2bfYFRfhKEMVgthjK8PIvSSsoJ7WP5gAcOHEa1-lT-JIKM6CtkdpqpsC5TlLAyObPhJPCC4Egu3QuBxHvXl0a7W2w1_8gwbtFkKL/s560/VagrantFile.png" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="169" data-original-width="560" 
src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEizMX015VE_bncwwwvLqNADGzxcy-eq28pnmeVNK4fD2bfYFRfhKEMVgthjK8PIvSSsoJ7WP5gAcOHEa1-lT-JIKM6CtkdpqpsC5TlLAyObPhJPCC4Egu3QuBxHvXl0a7W2w1_8gwbtFkKL/s320/VagrantFile.png" width="320" /></a></div><br /><p>Or create a new box in a new folder using a command like <i>vagrant init makker/CO78SwGUI --box-version 1</i>, followed by <i>vagrant up</i>:</p>
<pre class="brush:plain">d:\Projects\vagrant\co78>vagrant init makker/CO78SwGUI --box-version 1
A `Vagrantfile` has been placed in this directory. You are now
ready to `vagrant up` your first virtual environment! Please read
the comments in the Vagrantfile as well as documentation on
`vagrantup.com` for more information on using Vagrant.
d:\Projects\vagrant\co78>vagrant up
Bringing machine 'default' up with 'virtualbox' provider...
==> default: Box 'makker/CO78SwGUI' could not be found. Attempting to find and install...
default: Box Provider: virtualbox
default: Box Version: 1
==> default: Loading metadata for box 'makker/CO78SwGUI'
default: URL: https://vagrantcloud.com/makker/CO78SwGUI
==> default: Adding box 'makker/CO78SwGUI' (v1) for provider: virtualbox
default: Downloading: https://vagrantcloud.com/makker/boxes/CO78SwGUI/versions/1/providers/virtualbox.box
==> default: Waiting for cleanup before exiting...
Download redirected to host: vagrantcloud-files-production.s3.amazonaws.com</pre><p>You can list boxes with the (sub)command <i>vagrant box list</i>:</p>
<pre class="brush:plain">d:\Projects\vagrant\co78>vagrant box list
CO77GUIv1.1 (virtualbox, 0)
makker/ol77SwGUIv1.1 (virtualbox, 1)
</pre>
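<p>A nice bonus of using boxes from the Vagrant Cloud is that Vagrant can check for and fetch newer released versions. A sketch with the standard <i>vagrant box</i> subcommands, using the box name from this example:</p>

```shell
# Check whether the box of the current environment has a newer released version
vagrant box outdated

# Download the latest released version of a specific box
vagrant box update --box makker/CO78SwGUI

# Remove old, superseded versions of installed boxes
vagrant box prune
```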
<p>Remove a box with <i>vagrant box remove CO77GUIv1.1</i>:</p><pre class="brush:plain">d:\Projects\vagrant\co78>vagrant box remove CO77GUIv1.1
Box 'CO77GUIv1.1' (v0) with provider 'virtualbox' appears
to still be in use by at least one Vagrant environment. Removing
the box could corrupt the environment. We recommend destroying
these environments first:
rhfuse (ID: ca219fa1fe0b4984bf77aa7807c0feb2)
Are you sure you want to remove this box? [y/N] y
Removing box 'CO77GUIv1.1' (v0) with provider 'virtualbox'...
</pre><br /><p>But you can also add the freshly created box directly, using the <i>vagrant box add</i> command:<br /></p>
<pre class="brush:plain">d:\Projects\vagrant\co78>vagrant box add makker/CO78SwGUI --box-version 1
==> box: Loading metadata for box 'makker/CO78SwGUI'
box: URL: https://vagrantcloud.com/makker/CO78SwGUI
==> box: Adding box 'makker/CO78SwGUI' (v1) for provider: virtualbox
box: Downloading: https://vagrantcloud.com/makker/boxes/CO78SwGUI/versions/1/providers/virtualbox.box
==> box: Box download is resuming from prior download progress
Download redirected to host: vagrantcloud-files-production.s3.amazonaws.com
Progress: 3% (Rate: 10.5M/s, Estimated time remaining: 0:03:29)</pre><br />
<p>As can be seen, it mentions that the download was started earlier: I broke off the earlier <i>vagrant up</i> download, and it apparently resumes that download.<br /></p><p>My current Vagrantfiles have the following declaration of the Vagrant box:<br /></p>
<pre class="brush:plain">...
BOX_NAME="CO78GUIv1.1"
BOX_URL="file://../boxes/CO78SwGUIv1.0.box"
VM_MEMORY = 12288 # 12*1024 MB
...
Vagrant.configure("2") do |config|
# The most common configuration options are documented and commented below.
# For a complete reference, please see the online documentation at
# https://docs.vagrantup.com.
# Every Vagrant development environment requires a box. You can search for
# boxes at https://vagrantcloud.com/search.
config.vm.box=BOX_NAME
config.vm.box_url=BOX_URL
config.vm.hostname=VM_HOST_NAME
config.vm.define VM_MACHINE
config.vm.provider :virtualbox do |vb|
vb.name=VM_NAME
vb.gui=VM_GUI
vb.memory=VM_MEMORY
vb.cpus=VM_CPUS
...</pre><p>Based on the suggestion of the Vagrant Cloud:</p><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEizMX015VE_bncwwwvLqNADGzxcy-eq28pnmeVNK4fD2bfYFRfhKEMVgthjK8PIvSSsoJ7WP5gAcOHEa1-lT-JIKM6CtkdpqpsC5TlLAyObPhJPCC4Egu3QuBxHvXl0a7W2w1_8gwbtFkKL/s560/VagrantFile.png" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="169" data-original-width="560" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEizMX015VE_bncwwwvLqNADGzxcy-eq28pnmeVNK4fD2bfYFRfhKEMVgthjK8PIvSSsoJ7WP5gAcOHEa1-lT-JIKM6CtkdpqpsC5TlLAyObPhJPCC4Egu3QuBxHvXl0a7W2w1_8gwbtFkKL/s320/VagrantFile.png" width="320" /></a></div><p><br />I adapted this as follows:</p>
<pre class="brush:plain">...
BOX_NAME="makker/CO78SwGUI"
BOX_VERSION = "1"
#BOX_URL="file://../boxes/CO78SwGUIv1.0.box"
VM_MEMORY = 12288 # 12*1024 MB
...
Vagrant.configure("2") do |config|
# The most common configuration options are documented and commented below.
# For a complete reference, please see the online documentation at
# https://docs.vagrantup.com.
# Every Vagrant development environment requires a box. You can search for
# boxes at https://vagrantcloud.com/search.
config.vm.box = BOX_NAME
config.vm.box_version = BOX_VERSION
# config.vm.box_url=BOX_URL
config.vm.hostname=VM_HOST_NAME
config.vm.define VM_MACHINE
config.vm.provider :virtualbox do |vb|
vb.name=VM_NAME
vb.gui=VM_GUI
vb.memory=VM_MEMORY
vb.cpus=VM_CPUS
...</pre>
<p>I commented out the <i>BOX_URL</i> variable and the corresponding <i>config.vm.box_url</i> line, and added the <i>BOX_VERSION</i> and <i>config.vm.box_version</i> lines. Most importantly, I changed the <i>BOX_NAME</i> variable to <i>makker/CO78SwGUI</i>.</p><p>With these changes, Vagrant will download my Cloud boxes without me needing to distribute them separately.</p><p>Happy Upping!<br />
</p><p><br />
</p>Martien van den Akkerhttp://www.blogger.com/profile/05183907832966359401noreply@blogger.com0tag:blogger.com,1999:blog-4533777417600103698.post-21161736364376630002020-09-08T17:13:00.005+02:002020-09-09T09:44:01.622+02:00Silent install of SQL Developer<div class="separator" style="clear: both; text-align: center;"><a href="https://1.bp.blogspot.com/-lHWsq9x_lBs/X1ebN9C-qwI/AAAAAAAADpE/qTdqZ-iwRTYONwf7HqZTzGnFGPO4lIDTQCNcBGAsYHQ/s908/langfr-800px-Oracle_SQL_Developer_logo.svg.png" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img border="0" data-original-height="908" data-original-width="800" height="256" src="https://1.bp.blogspot.com/-lHWsq9x_lBs/X1ebN9C-qwI/AAAAAAAADpE/qTdqZ-iwRTYONwf7HqZTzGnFGPO4lIDTQCNcBGAsYHQ/w226-h256/langfr-800px-Oracle_SQL_Developer_logo.svg.png" width="226" /></a></div><p>Last week I provided a script to automatically <a href="https://blog.darwin-it.nl/2020/08/silently-install-soa-quickstart-revised.html">install the SOA or BPM Quickstart</a>.</p><p>Today I'll provide a script to install SQL Developer on Windows. I always use the "zip-with-no-jre" file. Therefore, installing it is simply a matter of unzipping it.</p><p>For unzipping, I use the Java <i>jar</i> tool. This is convenient, because if you want to use SQL Developer you need a JDK (unless you choose to use the installer with JRE). And if you have a JDK, you have the <i>jar</i> tool. The script mentioned in the previous article takes care of installing Java. So, if you want to do that as well, you could add it to this script.</p><p>One disadvantage of the <i>jar</i> tool is that it can't extract to a folder other than the current one. So you have to <i>CD</i> to the folder into which you want to unzip it. The script therefore saves the current folder, and CD's to the unzip folder. After installation it CD's back.</p><p>The script unzips into a subfolder under <i>C:\Oracle\SQLDeveloper</i>. I like to keep my Oracle IDE's together, but grouped. 
Within the zip file there is a <i>sqldeveloper</i> folder, which is renamed to the name of the zip.</p><p>With SQL Developer 20.2 I found that it required the <i>msvcr100.dll</i> in the $JDK\jre\bin folder. Apparently it isn't included anymore in the latest JDK 8 update (261), which I used when creating this script. I found it in <i>c:\Windows\System32</i> on my system, so I copied it from there to the $JDK\jre\bin folder. A colleague, however, didn't find it there.</p><p>Another step in the script is that it copies a <i>UserSnippets.xml</i> file. At my customer I created several handy maintenance queries that I saved as snippets. When you do so, you find those saved in the <i>UserSnippets.xml</i> file in the <i>%USERPROFILE%\AppData\Roaming\SQL Developer</i> folder, where <i>%USERPROFILE%</i> usually points to the <i>C:\users\%{your windows username}</i> folder.</p><p>If you want to share a copy of that with the users installing the tool using this script, you can save it in the same folder as this script. We keep it in SVN.</p><p></p>
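The save-directory, extract, change-back pattern described above can be sketched in shell terms as follows. This is a minimal sketch; the folder and file names are made-up demo values, not the ones from the script:

```shell
# Sketch of the cd-and-extract pattern the script uses: 'jar -xf' always
# extracts into the current directory, so we save the current dir, change
# to the target, extract, and change back. Paths are hypothetical demo
# values, not the ones from the batch script.
set -e
TARGET="/tmp/sqldev_demo"            # stand-in for %SQLDEV_BASE%
SAVED_DIR="$PWD"                     # stand-in for %CURRENT_DIR%
mkdir -p "$TARGET"
cd "$TARGET"
touch sqldeveloper_extracted         # stand-in for: jar -xf "$SQLDEV_ZIP"
cd "$SAVED_DIR"                      # back to where we started
```

The batch script below follows exactly this shape, with %CD% playing the role of $PWD.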
<pre class="brush:plain">@echo off
set CMD_LOC=%~dp0
set CURRENT_DIR=%CD%
SETLOCAL
set SOFTWARE_HOME=x:\SOFTWARE\Software
set SQLDEV_INSTALL_HOME=%SOFTWARE_HOME%\SQL Developer
set SQLDEV_NAME=sqldeveloper-20.2.0.175.1842-no-jre
set SQLDEV_ZIP=%SQLDEV_INSTALL_HOME%\%SQLDEV_NAME%.zip
set SQLDEV_BASE=c:\Oracle\SQLDeveloper
set SQLDEV_HOME=%SQLDEV_BASE%\%SQLDEV_NAME%
set SQLDEV_USERDIR=%USERPROFILE%\AppData\Roaming\SQL Developer
set CMD_LIB=%CMD_LOC%\ext
rem Install SqlDeveloper
if not exist "%SQLDEV_HOME%" (
echo SqlDeveloper does not yet exist in "%SQLDEV_HOME%".
if exist "%SQLDEV_ZIP%" (
echo Install SqlDeveloper in %SQLDEV_HOME%.
if not exist "%SQLDEV_BASE%" (
echo Create folder %SQLDEV_BASE%
mkdir %SQLDEV_BASE%
)
cd %SQLDEV_BASE%
echo Unzip SqlDeveloper "%SQLDEV_ZIP%" into %SQLDEV_BASE%
"%JAVA_HOME%"\bin\jar.exe -xf "%SQLDEV_ZIP%"
echo Rename unzipped folder "sqldeveloper" to %SQLDEV_NAME%
rename sqldeveloper %SQLDEV_NAME%
rem This library is expected in the Java home, but apparently is no longer included by default.
if not exist "%JAVA_HOME%\jre\bin\msvcr100.dll" (
echo Copy msvcr100.dll from c:\Windows\System32\ to "%JAVA_HOME%\jre\bin"
copy c:\Windows\System32\msvcr100.dll "%JAVA_HOME%\jre\bin"
) else (
echo Library "%JAVA_HOME%\jre\bin\msvcr100.dll" already exists.
)
if not exist "%SQLDEV_USERDIR%" (
echo Create folder "%SQLDEV_USERDIR%"
mkdir "%SQLDEV_USERDIR%"
)
if not exist "%SQLDEV_USERDIR%\UserSnippets.xml" (
echo Copy "%CMD_LOC%\UserSnippets.xml" to "%SQLDEV_USERDIR%"
copy "%CMD_LOC%\UserSnippets.xml" "%SQLDEV_USERDIR%" /Y
) else (
echo User Snippets "%SQLDEV_USERDIR%\UserSnippets.xml" already exists.
)
cd %CURRENT_DIR%
) else (
echo SqlDeveloper zip "%SQLDEV_ZIP%" does not exist!
)
) else (
echo SqlDeveloper already installed in %SQLDEV_HOME%.
)
echo Done.
ENDLOCAL
</pre><br/><i>Update 2020-09-09: in the line with mkdir "%SQLDEV_USERDIR%", there should be quotes around the folder, since there is a space in it.</i> <br/>The folder structure "%USERPROFILE%\AppData\Roaming\SQL Developer" is taken from an existing installation. This is where SQL Developer expects the user data.Anonymousnoreply@blogger.com0tag:blogger.com,1999:blog-4533777417600103698.post-86539410651422606072020-08-31T15:39:00.001+02:002020-08-31T15:41:12.166+02:00Silently Install SOA QuickStart Revised<p></p><div class="separator" style="clear: both; text-align: center;"><a href="https://1.bp.blogspot.com/-Avd7SMjyl9M/X0z94mgwjaI/AAAAAAAADok/EtwFqbrKOxcnBf9KxpwXjvwDZDl1fc0EQCNcBGAsYHQ/s606/JdeveloperSplash.png" imageanchor="1" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img border="0" data-original-height="294" data-original-width="606" height="150" src="https://1.bp.blogspot.com/-Avd7SMjyl9M/X0z94mgwjaI/AAAAAAAADok/EtwFqbrKOxcnBf9KxpwXjvwDZDl1fc0EQCNcBGAsYHQ/w310-h150/JdeveloperSplash.png" width="310" /></a></div><br />Earlier I wrote a script to silently install the SOA QuickStart and wrote about it <a href="https://blog.darwin-it.nl/2015/12/silent-install-soabpm-12c-quickstart.html">here</a>. <p></p><p>Several customer projects and script iterations later, I recently revised this script again, because I'm leaving this customer in a few weeks and want to help my successors build up their development PCs in a comfortable and standard way.</p><p>You may have noticed that over the years I've grown fond of scripting stuff, especially building up environments. At my current customer every developer installed the several IDEs, the test tooling and TortoiseSVN by hand, so everyone has the tooling in a different folder structure. They also checked out the Subversion repositories by hand, and therefore in a different structure as well. </p><p>So, scripting these things helps in having the tooling in the same folder structure for everyone. 
And that reduces the chance of problems and misconfigurations, especially preventing the infamous phrase 'It works on my machine...' when problems arise.</p><p>One of the revisions is to use nested if-else structures in the script, which is more readable than the conditional gotos we used to use in Windows <i>.bat</i> files.</p><p>Another important improvement was to keep the install binaries in a separate fileserver repository. This makes it possible to keep the scripts and their supporting files in a Subversion repository.</p><p>The improved script <i>installSoaQS.bat</i> is as follows:</p><pre class="brush:plain">@echo off
rem Part 1: Settings
rem set JAVA_HOME=c:\Oracle\Java\jdk8
set JAVA_HOME=c:\Program Files\Java\jdk1.8.0_261
set SOFTWARE_HOME=Z:\Software
set JDK8_INSTALL_HOME=%SOFTWARE_HOME%\Java\JDK8
set JAVA_INSTALLER=%JDK8_INSTALL_HOME%\jdk-8u261-windows-x64.exe
rem set FMW_HOME=C:\oracle\JDeveloper\12213_SOAQS
set QS_INSTALL_HOME=%SOFTWARE_HOME%\Oracle\SOAQuickStart12.2.1.3
set QS_EXTRACT_HOME=%TEMP%\Oracle\SOAQuickStart12.2.1.3
set FMW_HOME=C:\oracle\JDeveloper\12213_SOAQS
set QS_RSP=soaqs1221_silentInstall.rsp
set QS_RSP_TPL=%QS_RSP%.tpl
set QS_JAR=fmw_12.2.1.3.0_soa_quickstart.jar
set QS_ZIP=%QS_INSTALL_HOME%\fmw_12.2.1.3.0_soaqs_Disk1_1of2.zip
set QS_JAR2=fmw_12.2.1.3.0_soa_quickstart2.jar
set QS_ZIP2=%QS_INSTALL_HOME%\fmw_12.2.1.3.0_soaqs_Disk1_2of2.zip
set QS_USER_DIR=c:\Data\JDeveloper\SOA
set CMD_LOC=%~dp0
set CURRENT_DIR=%CD%
rem Part 2: Install Java
rem Set JAVA_HOME
echo setx -m JAVA_HOME "%JAVA_HOME%"
setx -m JAVA_HOME "%JAVA_HOME%"
echo JAVA_HOME=%JAVA_HOME%
rem Check Java
if not exist "%JAVA_HOME%" (
if exist "%JAVA_INSTALLER%" (
echo Install %JAVA_HOME%
%JAVA_INSTALLER% /s INSTALLDIR="%JAVA_HOME%"
if exist "%JAVA_HOME%" (
echo Java Installer %JAVA_INSTALLER% succeeded.
) else (
echo Java Installer %JAVA_INSTALLER% apparently failed.
)
) else (
echo Java Installer %JAVA_INSTALLER% does not exist.
)
) else (
echo JAVA_HOME %JAVA_HOME% exists
)
rem Part 3: Check the QuickStart Installer Files
rem check SOA12.2 QS
if exist "%JAVA_HOME%" (
if not exist "%FMW_HOME%" (
echo Quickstart Installer %QS_JAR% not installed yet.
echo Let's try to install it in %FMW_HOME%
if not exist %QS_EXTRACT_HOME% (
echo Temp folder %QS_EXTRACT_HOME% does not exist, create it.
mkdir %QS_EXTRACT_HOME%
) else (
echo Temp folder %QS_EXTRACT_HOME% already exists.
)
echo Change to %QS_EXTRACT_HOME% for installation.
cd %QS_EXTRACT_HOME%
rem Check Quickstart is unzipped
echo Check if QuickStart Installer is unzipped.
rem Check QS_JAR
if not exist "%QS_JAR%" (
echo QuickStart Jar part 1 %QS_JAR% does not exist yet.
if exist "%QS_ZIP%" (
echo Unzip QuickStart Part 1 %QS_ZIP%
"%JAVA_HOME%"\bin\jar.exe -xf %QS_ZIP%
if exist "%QS_JAR%" (
echo QuickStart Jar part 1 %QS_JAR% now exists.
) else (
echo QuickStart Jar part 1 %QS_JAR% still does not exist.
)
) else (
echo QuickStart ZIP part 1 %QS_ZIP% does not exist.
)
) else (
echo QuickStart Jar part 1 %QS_JAR% exists.
)
rem Check QS_JAR2
if exist "%QS_JAR%" (
if not exist "%QS_JAR2%" (
echo QuickStart Jar part 2 %QS_JAR2% does not exist yet.
if exist "%QS_ZIP2%" (
echo Unzip QuickStart Part 2 %QS_ZIP2%
"%JAVA_HOME%"\bin\jar.exe -xf %QS_ZIP2%
if exist "%QS_JAR2%" (
echo QuickStart Jar part 2 %QS_JAR2% now exists.
) else (
echo QuickStart Jar part 2 %QS_JAR2% still does not exist.
)
) else (
echo QuickStart ZIP part 2 %QS_ZIP2% does not exist.
)
) else (
echo QuickStart Jar part 2 %QS_JAR2% exists.
)
)
rem Part 4: Install the QuickStart
echo Install %FMW_HOME%
echo Expand Response File Template %CMD_LOC%\%QS_RSP_TPL% to %CMD_LOC%\%QS_RSP%
powershell -Command "(Get-Content %CMD_LOC%\%QS_RSP_TPL%) -replace '\$\{ORACLE_HOME\}', '%FMW_HOME%' | Out-File -encoding ASCII %CMD_LOC%\%QS_RSP%"
echo Silent install SOA QuickStart, using response file: %CMD_LOC%\%QS_RSP%
"%JAVA_HOME%\bin\java.exe" -jar %QS_JAR% -silent -responseFile %CMD_LOC%\%QS_RSP% -nowait
echo Change back to %CURRENT_DIR%.
cd %CURRENT_DIR%
if exist "%FMW_HOME%" (
echo FMW_HOME %FMW_HOME% exists
rem Part 5: update the JDeveloper User Home location.
echo Set the JDeveloper user home settings
if not exist %QS_USER_DIR% mkdir %QS_USER_DIR%
echo set JDEV_USER_DIR_SOA and JDEV_USER_HOME_SOA as %QS_USER_DIR%
setx -m JDEV_USER_DIR_SOA %QS_USER_DIR%
setx -m JDEV_USER_HOME_SOA %QS_USER_DIR%
echo copy %CMD_LOC%\jdev.boot to "%FMW_HOME%\jdeveloper\jdev\bin"
copy "%FMW_HOME%\jdeveloper\jdev\bin\jdev.boot" "%FMW_HOME%\jdeveloper\jdev\bin\jdev.boot.org" /Y
copy %CMD_LOC%\jdev.boot "%FMW_HOME%\jdeveloper\jdev\bin" /Y
echo copy %CMD_LOC%\ide.conf to "%FMW_HOME%\jdeveloper\ide\bin"
copy "%FMW_HOME%\jdeveloper\ide\bin\ide.conf" "%FMW_HOME%\jdeveloper\ide\bin\ide.conf.org" /Y
copy %CMD_LOC%\ide.conf "%FMW_HOME%\jdeveloper\ide\bin" /Y
) else (
echo Quickstart Installer %QS_JAR% apparently failed.
)
) else (
echo Quickstart Installer %QS_JAR% already installed in %FMW_HOME%.
)
) else (
echo %JAVA_HOME% doesn't exist so can't install SOA Quick Start.
)
echo Done
</pre>
It first installs Oracle JDK 8 Update 261. Of course you can split this script to do only the Java install. <br /> Then it checks the existence of the QuickStart install files as zip files. It will create an <i>Oracle\SOAQuickStart12.2.1.3</i> folder in the Windows %TEMP% folder. After saving the current folder, it changes directory to that temp folder to unzip the installer zip files into it. After the installation of the QuickStart it changes back to the saved folder. <div><br /></div><div>Mind that the <i>%TEMP%\Oracle\SOAQuickStart12.2.1.3</i> folder is not removed afterwards.<br /><p>The script expects the following files:</p><table border="1" cellpadding="5" cellspacing="0" style="background-color: white; color: black; font-family: tahoma, sans-serif; font-size: 13px;"><tbody><tr bgcolor="darkred"><th valign="center"><div align="center"><span style="color: white;">File</span> </div></th><th valign="top"><div align="center"><span style="color: white;">Location</span> </div></th></tr><tr><td>jdk-8u261-windows-x64.exe</td><td>Z:\Software\Java\JDK8<span> </span></td></tr><tr><td>fmw_12.2.1.3.0_soaqs_Disk1_1of2.zip</td><td>Z:\Software\Oracle\SOAQuickStart12.2.1.3</td></tr><tr><td>fmw_12.2.1.3.0_soaqs_Disk1_2of2.zip</td><td>Z:\Software\Oracle\SOAQuickStart12.2.1.3</td></tr><tr><td>fmw_12.2.1.3.0_soa_quickstart.jar</td><td>Extracted into %TEMP%\Oracle\SOAQuickStart12.2.1.3</td></tr><tr><td>fmw_12.2.1.3.0_soa_quickstart2.jar</td><td>Extracted into %TEMP%\Oracle\SOAQuickStart12.2.1.3</td></tr><tr><td>soaqs1221_silentInstall.rsp.tpl</td><td>Same folder as the script</td></tr>
<tr><td>jdev.boot</td><td>Same folder as the script</td></tr><tr><td>ide.conf</td><td>Same folder as the script</td></tr>
</tbody></table><p></p>These files are set in the variables at the top of the script. As you can see, it will install the 12.2.1.3 version of the SOA QuickStart, because that is the version we currently use. But if you want to use 12.2.1.4, as I would recommend, just change the relevant variables at the top. The same goes if you want to use the BPM QuickStart: just change the relevant variables accordingly.<br />It will install the QuickStart into the folder C:\oracle\JDeveloper\12213_SOAQS. I like to have an Oracle Home folder that shows not only the version but also the type of the product. I dislike Oracle's default: <i>C:\Oracle\Middleware</i>.<br /><br /></div><div>The install script expects a file <i>soaqs1221_silentInstall.rsp.tpl</i>, which is the template for the response file:<br />
<pre class="brush:plain">[ENGINE]
#DO NOT CHANGE THIS.
Response File Version=1.0.0.0.0
[GENERIC]
#Set this to true if you wish to skip software updates
DECLINE_AUTO_UPDATES=true
#My Oracle Support User Name
MOS_USERNAME=
#My Oracle Support Password
MOS_PASSWORD=<SECURE VALUE>
#If the Software updates are already downloaded and available on your local system, then specify the path to the directory where these patches are available and set SPECIFY_DOWNLOAD_LOCATION to true
AUTO_UPDATES_LOCATION=
#Proxy Server Name to connect to My Oracle Support
SOFTWARE_UPDATES_PROXY_SERVER=
#Proxy Server Port
SOFTWARE_UPDATES_PROXY_PORT=
#Proxy Server Username
SOFTWARE_UPDATES_PROXY_USER=
#Proxy Server Password
SOFTWARE_UPDATES_PROXY_PASSWORD=<SECURE VALUE>
#The oracle home location. This can be an existing Oracle Home or a new Oracle Home
ORACLE_HOME=${ORACLE_HOME}
</pre>
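The script expands the <i>${ORACLE_HOME}</i> placeholder in this template with a PowerShell <i>-replace</i> (the <i>powershell -Command</i> line in the script). The same substitution can be sketched with <i>sed</i>; the target home and file locations below are illustrative only, not the real ones:

```shell
# Expand the ${ORACLE_HOME} placeholder in the response-file template, as
# the batch script does with PowerShell. The target home and temp file
# locations are made up for the demo; the real script uses %FMW_HOME%.
set -e
FMW_HOME="/opt/oracle/soaqs"
printf 'ORACLE_HOME=${ORACLE_HOME}\n' > /tmp/soaqs_silentInstall.rsp.tpl
sed "s|\${ORACLE_HOME}|$FMW_HOME|g" /tmp/soaqs_silentInstall.rsp.tpl \
  > /tmp/soaqs_silentInstall.rsp
cat /tmp/soaqs_silentInstall.rsp    # prints: ORACLE_HOME=/opt/oracle/soaqs
```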
<p>When the install was successful, it will also copy the file <i>ide.conf</i> to the corresponding folder in the JDeveloper home, to set proper heap sizes, since the default heap size of JDeveloper is quite sparing. It also copies the <i>jdev.boot</i> to the proper folder, to have the JDeveloper user dir set to <i>C:\Data\Jdeveloper\SOA</i>, which can be changed at the top as well. The rationale for this is to have the JDeveloper user dir outside the Windows user profile, and thus more accessible. It also allows for having another JDeveloper installation of the same base version, but without the SOA/BPM QuickStart add-ons, for instance for plain Java ADF development.</p><p>The used <i>ide.conf</i> is as follows:</p><pre class="brush:plain">#-----------------------------------------------------------------------------
#
# ide.conf - IDE configuration file for Oracle FCP IDE.
#
# Copyright (c) 2000, 2016, Oracle and/or its affiliates. All rights reserved.
#
#-----------------------------------------------------------------------------
#
# Relative paths are resolved against the parent directory of this file.
#
# The format of this file is:
#
# "Directive Value" (with one or more spaces and/or tab characters
# between the directive and the value) This file can be in either UNIX
# or DOS format for end of line terminators. Any path seperators must be
# UNIX style forward slashes '/', even on Windows.
#
# This configuration file is not intended to be modified by the user. Doing so
# may cause the product to become unstable or unusable. If options need to be
# modified or added, the user may do so by modifying the custom configuration files
# located in the user's home directory. The location of these files is dependent
# on the product name and host platform, but may be found according to the
# following guidelines:
#
# Windows Platforms:
# The location of user/product files are often configured during installation,
# but may be found in:
# %APPDATA%\<product-name>\<product-version>\product.conf
# %APPDATA%\<product-name>\<product-version>\jdev.conf
#
# Unix/Linux/Mac/Solaris:
# $HOME/.<product-name>/<product-version>/product.conf
# $HOME/.<product-name>/<product-version>/jdev.conf
#
# In particular, the directives to set the initial and maximum Java memory
# and the SetJavaHome directive to specify the JDK location can be overridden
# in that file instead of modifying this file.
#
#-----------------------------------------------------------------------------
IncludeConfFile ../../ide/bin/jdk.conf
AddJavaLibFile ../../ide/lib/ide-boot.jar
# All required Netbeans jars for running Netbinox
AddJavaLibFile ../../netbeans/platform/lib/boot.jar
AddJavaLibFile ../../netbeans/platform/lib/org-openide-util-ui.jar
AddJavaLibFile ../../netbeans/platform/lib/org-openide-util.jar
AddJavaLibFile ../../netbeans/platform/lib/org-openide-util-lookup.jar
AddJavaLibFile ../../netbeans/platform/lib/org-openide-modules.jar
# Oracle IDE boot jar
AddJavaLibFile ../../ide/lib/fcpboot.jar
SetMainClass oracle.ide.osgi.boot.OracleIdeLauncher
# System properties expected by the Netbinox-Oracle IDE bridge
AddVMOption -Dnetbeans.home=../../netbeans/platform/
AddVMOption -Dnetbeans.logger.console=true
AddVMOption -Dexcluded.modules=org.eclipse.osgi
AddVMOption -Dide.cluster.dirs=../../netbeans/fcpbridge/:../../netbeans/ide/:../../netbeans/../
# Turn off verifications since the included classes are already verified
# by the compiler. This will reduce startup time significantly. On
# some Linux Systems, using -Xverify:none will cause a SIGABRT, if you
# get this, try removing this option.
#
AddVMOption -Xverify:none
# With OSGI, the LAZY (ondemand) extension loading mode is the default,
# to turn it off, use any other words, ie EAGER
#
AddVMOption -Doracle.ide.extension.HooksProcessingMode=LAZY
#
# Other OSGi configuration options for locating bundles and boot delegation.
#
AddVMOption -Dorg.eclipse.equinox.simpleconfigurator.configUrl=file:bundles.info
AddVMOption -Dosgi.bundles.defaultStartLevel=1
AddVMOption -Dosgi.configuration.cascaded=false
AddVMOption -Dosgi.noShutdown=true
AddVMOption -Dorg.osgi.framework.bootdelegation=*
AddVMOption -Dosgi.parentClassloader=app
AddVMOption -Dosgi.locking=none
AddVMOption -Dosgi.contextClassLoaderParent=app
# Needed for PL/SQL debugging
#
# To be disabled when we allow running on JDK9
AddVMOption -Xbootclasspath/p:../../rdbms/jlib/ojdi.jar
# To be enabled when we allow running on JDK9
#AddVM8Option -Xbootclasspath/p:../../rdbms/jlib/ojdi.jar
#AddJava9OrHigherLibFile ../../rdbms/jlib/ojdi.jar
# Needed to avoid possible deadlocks due to Eclipse bug 121737, which in turn is tied to Sun bug 4670071
AddVMOption -Dosgi.classloader.type=parallel
# Needed for performance as the default bundle file limit is 100
AddVMOption -Dosgi.bundlefile.limit=500
# Controls the allowed number of IDE processes. Default is 10, so if a higher limit is needed, uncomment this
# and set to the new limit. The limit can be any positive integer; setting it to 0 or a negative integer will
# result in setting the limit back to 10.
# AddVMOption -Doracle.ide.maxNumberOfProcesses=10
# Configure location of feedback server (Oracle internal use only)
AddVMOption -Dide.feedback-server=ide.us.oracle.com
# For the transformation factory we take a slightly different tack as we need to be able to
# switch the transformation factory in certain cases
#
AddJavaLibFile ../../ide/lib/xml-factory.jar
AddVMOption -Djavax.xml.transform.TransformerFactory=oracle.ide.xml.switchable.SwitchableTransformerFactory
# Override the JDK or XDK XML Transformer used by the SwitchableTransformerFactory
# AddVMOption -Doracle.ide.xml.SwitchableTransformer.jdk=...
# Pull parser configurations
AddJavaLibFile ../../ide/lib/woodstox-core-asl-4.2.0.jar
AddJavaLibFile ../../ide/lib/stax2-api-3.1.1.jar
AddVMOption -Djavax.xml.stream.XMLInputFactory=com.ctc.wstx.stax.WstxInputFactory
AddVMOption -Djavax.xml.stream.util.XMLEventAllocator=oracle.ideimpl.xml.stream.XMLEventAllocatorImpl
# Enable logging of violations of Swings single threaded rule. Valid arguments: bug,console
# Exceptions to the rule (not common) can be added to the exceptions file
AddVMOption -Doracle.ide.reportEDTViolations=bug
AddVMOption -Doracle.ide.reportEDTViolations.exceptionsfile=./swing-thread-violations.conf
# Set the default memory options for the Java VM which apply to both 32 and 64-bit VM's.
# These values can be overridden in the user .conf file, see the comment at the top of this file.
#AddVMOption -Xms128M
#AddVMOption -Xmx800M
AddVMOption -Xms2048M
AddVMOption -Xmx2048M
AddVMOption -XX:+UseG1GC
AddVMOption -XX:MaxGCPauseMillis=200
# Shows heap memory indicator in the status bar.
AddVMOption -DMainWindow.MemoryMonitorOn=true
#
# This option controls the log level at which we must halt execution on
# start-up. It can be set to either a string, like 'SEVERE' or 'WARNING',
# or an integer equivalent of the desired log level.
#
# AddVMOption -Doracle.ide.extension.InterruptibleExecutionLogHandler.interruptLogLevel=OFF
#
# This define keeps track of command line options that are handled by the IDE itself.
# For options that take arguments (-option:<arguments>), add the fixed prefix of
# the the option, e.g. -role:.
#
AddVMOption -Doracle.ide.IdeFrameworkCommandLineOptions=-clean,-console,-debugmode,-migrate,-migrate:,-nomigrate,-nonag,-nondebugmode,-noreopen,-nosplash,-role:,-su
</pre><p>The used <i>jdev.boot</i> is as follows:</p><pre class="brush:plain">#--------------------------------------------------------------------------
#
# Oracle JDeveloper Boot Configuration File
# Copyright 2000-2012 Oracle Corporation.
# All Rights Reserved.
#
#--------------------------------------------------------------------------
include ../../ide/bin/ide.boot
#
# The extension ID of the extension that has the <product-hook>
# with the IDE product's branding information. Users of JDeveloper
# should not change this property.
#
ide.product = oracle.jdeveloper
#
# Fallback list of extension IDs that represent the different
# product editions. Users of JDeveloper should not change this
# property.
#
ide.editions = oracle.studio, oracle.j2ee, oracle.jdeveloper
#
# The image file for the splash screen. This should generally not
# be changed by end users.
#
ide.splash.screen = splash.png
#
# The image file for the initial hidden frame icon. This should generally not
# be changed by end users.
#
hidden.frame.icon=jdev_icon.gif
#
# Copyright start is the first copyright displayed. Users of JDeveloper
# should not change this property.
#
copyright.year.start = 1997
#
# Copyright end is the second copyright displayed. Users of JDeveloper
# should not change this property.
#
copyright.year.end = 2014
#
# The ide.user.dir.var specifies the name of the environment variable
# that points to the root directory for user files. The system and
# mywork directories will be created there. If not defined, the IDE
# product will use its base directory as the user directory.
#
#ide.user.dir.var = JDEV_USER_HOME,JDEV_USER_DIR
ide.user.dir.var = JDEV_USER_HOME_SOA,JDEV_USER_DIR_SOA
#
# This will enable a "virtual" file system feature within JDeveloper.
# This can help performance for projects with a lot of files,
# particularly under source control. For non-Windows platforms however,
# any file changes made outside of JDeveloper, or by deployment for
# example, may not be picked by the "virtual" file system feature. Do
# not enable this for example, on a Linux OS if you use an external editor.
#
#VFS_ENABLE = true
#
# If set to true, prevent laucher from checking/setting the shell
# integration mechanism. Shell integration on Windows associates
# files with JDeveloper.
#
# The shell integration feature is enabled by default
#
#no.shell.integration = true
#
# Text buffer deadlock detection setting (OFF by default.) Uncomment
# out the following option if encountering deadlocks that you suspect
# buffer deadlocks that may be due to locks not being released properly.
#
#buffer.deadlock.detection = true
#
# This option controls the parser delay (i.e., for Java error underlining)
# for "small" Java files (<20k). The delay is in milliseconds. Files
# between the "small" (<20k) and "large" (>100k) range will scale the
# parser delay accordingly between the two delay numbers.
#
# The minimum value of this delay is 100 (ms), the default is 300 (ms).
#
ceditor.java.parse.small = 300
#
# This option controls the parser delay (i.e., for Java error underlining)
# for "large" Java files (>100k). The delay is in milliseconds.
#
# The minimum value for this delay is 500 (ms), the default is 1500 (ms).
#
ceditor.java.parse.large = 1500
#
# This option is to pass additional vm arguments to the out-of-process
# java compiler used to build the project(s). The arguments
# are used for both Ojc & Javac.
#
compiler.vmargs = -Xmx512m
#
# Additional (product specific) places to look for extension jars.
#
ide.extension.search.path=jdev/extensions:sqldeveloper/extensions
#
# Additional (product specific) places to look for roles.
#
ide.extension.role.search.path=jdev/roles
#
# Tell code insight to suppress @hidden elements
#
insight.suppresshidden=true
#
# Disable Feedback Manager. The feedback manager is for internal use
# only.
#
feedbackmanager.disable=false
#
# Prevents the product from showing translations for languages other
# than english (en) and japanese (ja). The IDE core is translated into
# other languages, but other parts of JDeveloper are not. To avoid
# partial translations, we throttle all locales other than en and ja.
#
ide.throttleLocale=true
#
# Specifies the locales that we support translations for when
# ide.throttleLocale is true. This is a comma separated list of
# languages. The default value is en,ja.
#
ide.supportedLocales=en,ja
#
# Specifies the maximum number of JAR file handles that will be kept
# open by the IDE class loader. A lower number keeps JDeveloper from
# opening too many file handles, but can reduce performance.
#
ide.max.jar.handles=500
#
# Specifies the classloading layer as OSGi. In the transition period
# to OSGi this flag can be used to check if JDev is running in OSGi
# mode.
#
oracle.ide.classload.layer=osgi
</pre><br /><br /><br /></div>Anonymousnoreply@blogger.com0tag:blogger.com,1999:blog-4533777417600103698.post-14590008042406656402020-08-27T16:36:00.000+02:002020-08-27T16:36:01.835+02:00Finally created an Oracle Linux 8.2 myself<p></p><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEivCs6euqZVecOUy1qLhIDMEzUOxV446kZdCL_OBkZazPBoIx55TBbDDwzsHqwNbpC2I5hpQ_ihaXSTT5rAOa_AfdeBKk2uQVwU9ua677rVwR1lp9svX3CBDl0mrv2r7KvOJIcPwDnfaUwT/s1065/2020-08-27-OL8U2+%255BRunning%255D+-+Oracle+VM+VirtualBox.png" imageanchor="1" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img border="0" data-original-height="843" data-original-width="1065" height="260" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEivCs6euqZVecOUy1qLhIDMEzUOxV446kZdCL_OBkZazPBoIx55TBbDDwzsHqwNbpC2I5hpQ_ihaXSTT5rAOa_AfdeBKk2uQVwU9ua677rVwR1lp9svX3CBDl0mrv2r7KvOJIcPwDnfaUwT/w328-h260/2020-08-27-OL8U2+%255BRunning%255D+-+Oracle+VM+VirtualBox.png" width="328" /></a></div><br />I'm certainly not the first one to do a fresh Oracle Linux 8 installation. For instance, the great Tim Hall <a href="https://oracle-base.com/articles/linux/oracle-linux-8-installation">already wrote about it</a>. My setup is quite similar, apart from:<p></p><ul style="text-align: left;"><li>I use 8.2, which is the latest and greatest at the moment. <br /></li><li>For my Vagrant projects I want a base box with the Server with GUI topology. So I used that, which was actually the default in the wizard.</li><li>I use a NAT network adapter for my Vagrant projects, so I skipped the network setting Tim Hall mentions.</li></ul><p>Now, I use this as a base box for my Vagrant projects, and therefore I don't do this installation on a daily basis. I have an Oracle Linux 7.7 box, and haven't had many problems with it.</p><p>However, I did have trouble installing the Guest Additions this time. 
It didn't have the <i>kernel-devel</i> and <i>kernel-headers</i> packages installed, which is quite normal, so I installed them using <i>yum</i>. However, I kept getting the annoying message that it couldn't get the <i>5.4.17-2011.5.3.el8uek.x86_64</i> version of the kernel headers. And the Guest Additions still wouldn't install. </p><p>It kept me busy for some time, until I realized that by default the box boots with the 5.4.x UEK kernel, while it could only install the kernel packages and headers for the <i>4.18.0.x </i>version.</p><p>So I found out how to start up with the correct kernel (correct in the sense that it is the kernel that allows me to use the Guest Additions...). This can be done as follows: <br /></p>
<pre class="brush:bash">sudo grubby --info=ALL
</pre>
<br />This lists the currently installed kernels. However, I found out that it is more convenient to check out the <i>/boot</i> folder:
<pre class="brush:bash">sudo ls /boot//vmlinuz-*
/boot//vmlinuz-0-rescue-fddb3eeab19e4a928d6bfa04e0f91830
/boot//vmlinuz-4.18.0-193.14.3.el8_2.x86_64
/boot//vmlinuz-4.18.0-193.el8.x86_64
/boot//vmlinuz-5.4.17-2011.5.3.el8uek.x86_64
</pre><br />This is merely because, for setting the default kernel, I need to provide the path to the image, again with a grubby command:
<pre class="brush:bash">sudo grubby --set-default /boot/vmlinuz-4.18.0-193.14.3.el8_2.x86_64
</pre>
<br />Now, I can nicely install the necessary packages for the Guest Additions:
<pre class="brush:bash">sudo dnf install kernel-devel kernel-headers gcc make perl
</pre>
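<p>Note that the version suffix of the kernel image picked with grubby is exactly the version the <i>kernel-devel</i> and <i>kernel-headers</i> packages have to match. As a tiny sketch (using the image path from the listing above), the matching package name can be derived like this:</p>

```shell
# Derive the kernel version from a /boot image path
# (example path taken from the listing above)
img=/boot/vmlinuz-4.18.0-193.14.3.el8_2.x86_64
ver=${img#/boot/vmlinuz-}    # strip the path prefix, keeping only the version
echo "kernel-devel-$ver"     # the devel package that must match the booted kernel
```

<p>If <i>uname -r</i> reports a different version than this, the Guest Additions build will not find its headers.</p>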
<p>Next stop: boxing it into a Vagrant box.<br /></p>Martien van den Akkerhttp://www.blogger.com/profile/05183907832966359401noreply@blogger.com0tag:blogger.com,1999:blog-4533777417600103698.post-12435770106130002702020-08-27T12:46:00.006+02:002020-08-27T12:46:42.354+02:00Requeue expired JMS-AQ Messages<p>At my current customer we use JMS queues that are implemented with AQ queues based on <i>sys.aq$_jms_text_message</i>. In Weblogic you can create a so-called Foreign Server that is able to interact with these queues over a datasource. For a Weblogic application, like SOA Suite or OSB, it is as if it is a regular Weblogic JMS queue. Pretty smart, because unlike with a JDBC based Weblogic JMS Server, you can use the <i>sys.aq$_jms_text_message</i> type to <a href="https://blog.darwin-it.nl/2018/11/how-to-query-your-jms-over-aq-queues.html">query the AQ table, as I described earlier</a>. Not only that: you can also use the AQ PL/SQL APIs to enqueue and dequeue these messages.</p><p>This can come in handy when you need to purge the tables, to remove the expired messages. But this morning there was a hiccup in OSB, so that it couldn't process these messages successfully. Because of the persistent rollbacks, the messages were moved to the exception queue by AQ with the reason 'MAX_RETRY_EXCEEDED'. After I investigated the issue and had some interaction with our admins, OSB was restarted, which solved the problem.</p><p></p><p>But the earlier expired messages were still in the exception queue and processes were waiting for the response. So I thought it would be fun to have my own script to re-enqueue the expired messages. </p><p>Although the admins turned out to have scripts for this, I would like to have my own. Theirs may be smarter, or at least they had more time to develop theirs.</p><p>This script is at least publishable and might be a good starting point if you have to do something with AQ.<br /></p><p></p>
<pre class="brush:sql">declare
l_except_queue varchar2(30) := 'AQ$_DWN_OUTBOUND_TABLE_E';
l_dest_queue varchar2(30) := 'DWN_OUTBOUND';
l_message_type varchar2(30) := 'registersomethingmessage';
cursor c_qtb
is select qtb.queue_table
, qtb.queue
, qtb.msg_id
, qtb.corr_id correlation_id
, qtb.msg_state
, qtb.enq_timestamp
, qtb.user_data
, qtb.user_data.header.replyto
, qtb.user_data.header.type type
, qtb.user_data.header.userid userid
, qtb.user_data.header.appid appid
, qtb.user_data.header.groupid groupid
, qtb.user_data.header.groupseq groupseq
, qtb.user_data.header.properties properties
, (select str_value from table (qtb.user_data.header.properties) prp where prp.name = 'JMSCorrelationID') JMSCorrelationID
, (select str_value from table (qtb.user_data.header.properties) prp where prp.name = 'JMSMessageID') JMSMsgID
, (select str_value from table (qtb.user_data.header.properties) prp where prp.name = 'tracking_compositeInstanceId') tracking_compositeInstanceId
, (select str_value from table (qtb.user_data.header.properties) prp where prp.name = 'JMS_OracleDeliveryMode') JMS_OracleDeliveryMode
, (select str_value from table (qtb.user_data.header.properties) prp where prp.name = 'tracking_ecid') tracking_ecid
, (select num_value from table (qtb.user_data.header.properties) prp where prp.name = 'JMS_OracleTimestamp') JMS_OracleTimestamp
, (select str_value from table (qtb.user_data.header.properties) prp where prp.name = 'tracking_parentComponentInstanceId') tracking_prtCptInstanceId
, (select str_value from table (qtb.user_data.header.properties) prp where prp.name = 'tracking_conversationId') tracking_conversationId
, (select str_value from table (qtb.user_data.header.properties) prp where prp.name = 'BPEL_SENSOR_NAME') bpel_sensor_name
, (select str_value from table (qtb.user_data.header.properties) prp where prp.name = 'BPEL_PROCESS_NAME') bpel_process_name
, (select str_value from table (qtb.user_data.header.properties) prp where prp.name = 'BPEL_PROCESS_REVISION') bpel_process_rev
, (select str_value from table (qtb.user_data.header.properties) prp where prp.name = 'BPEL_DOMAIN') bpel_domain
, (select str_value from table (qtb.user_data.header.properties) prp where prp.name = 'SBLCorrelationID') SBLCorrelationID
, qtb.user_data.header
, qtb.user_data.text_lob text_lob
, qtb.user_data.text_vc text_vc
, qtb.expiration_reason
--, qtb.*
from (
select 'DWN_OUTBOUND_TABLE' queue_table
, qtb.*
from AQ$DWN_OUTBOUND_TABLE qtb
) qtb
where qtb.user_data.text_vc like '<'||l_message_type||'%'
and qtb.msg_state = 'EXPIRED'
and qtb.expiration_reason = 'MAX_RETRY_EXCEEDED'
order by queue_table, enq_timestamp asc;
l_payload SYS.AQ$_JMS_TEXT_MESSAGE;
l_sbl_correlation_id varchar2(100);
l_parentComponentInstanceId varchar2(100);
l_jms_type varchar2(100);
--
function get_jms_property(p_payload in SYS.AQ$_JMS_TEXT_MESSAGE, p_property_name in varchar2)
return varchar2
as
l_property varchar2(32767);
begin
select str_value into l_property from table (p_payload.header.properties) prp where prp.name = p_property_name;
return l_property;
exception
when no_data_found then
return null;
end get_jms_property;
--
procedure dequeue_msg(p_queue in varchar2, p_msg_id in raw)
is
l_dequeue_options dbms_aq.DEQUEUE_OPTIONS_T ;
l_payload SYS.AQ$_JMS_TEXT_MESSAGE;
l_message_properties dbms_aq.message_properties_t ;
l_msg_id raw(32);
begin
--l_dequeue_options.visibility := dbms_aq.immediate;
l_dequeue_options.visibility := dbms_aq.on_commit;
l_dequeue_options.msgid := p_msg_id;
DBMS_AQ.DEQUEUE (
queue_name => p_queue,
dequeue_options => l_dequeue_options,
message_properties => l_message_properties,
payload => l_payload,
msgid => l_msg_id);
end dequeue_msg;
--
procedure enqueue_msg(p_queue in varchar2, p_payload SYS.AQ$_JMS_TEXT_MESSAGE)
is
l_enqueue_options dbms_aq.ENQUEUE_OPTIONS_T ;
l_message_properties dbms_aq.message_properties_t ;
l_msg_id raw(32);
begin
--l_enqueue_options.visibility := dbms_aq.immediate;
l_enqueue_options.visibility := dbms_aq.on_commit;
DBMS_AQ.ENQUEUE (
queue_name => p_queue,
enqueue_options => l_enqueue_options,
message_properties => l_message_properties,
payload => p_payload,
msgid => l_msg_id);
end enqueue_msg;
--
begin
for r_qtb in c_qtb loop
l_payload := r_qtb.user_data;
l_jms_type := r_qtb.user_data.header.type;
l_sbl_correlation_id := get_jms_property(l_payload, 'SBLCorrelationID');
l_parentComponentInstanceId := get_jms_property(l_payload, 'tracking_parentComponentInstanceId');
dbms_output.put_line(r_qtb.queue||' - '||' - '||l_jms_type||' - '||r_qtb.msg_id||' - '||l_sbl_correlation_id||' - '||l_parentComponentInstanceId);
enqueue_msg(l_dest_queue , l_payload);
dequeue_msg(l_except_queue , r_qtb.msg_id);
end loop;
end;
</pre><p>This script starts with a cursor that is based on the query described <a href="https://blog.darwin-it.nl/2018/11/how-to-query-your-jms-over-aq-queues.html">in the post mentioned above</a>. It selects only the expired messages, where the root tag starts with a concatenation of '<' and the message type declared at the top. If there were a JMS type, you could also select on the <i>userdata.header.type</i> attribute.<br /></p><p>It logs a few attributes, merely for me to check whether the base of the script worked, without the dequeue and the enqueue. The selection of the particular JMS properties is taken from the earlier script; they are an example of properties that you could use to determine more granularly whether a message is eligible to be re-enqueued.</p><p>Each found message is enqueued and then dequeued, both with <i>visibility</i> set to <i>on_commit</i>. This ensures that the enqueue and dequeue are done within the same transaction. You should hit the commit button in SQL Developer (or your other favorite database IDE).<br /></p><p></p><p>The from clause construct:
</p><pre class="brush:sql"> from (
select 'DWN_OUTBOUND_TABLE' queue_table
, qtb.*
from AQ$DWN_OUTBOUND_TABLE qtb
) qtb
</pre><p>
is from a script I created at the customer to query over all the available queue tables, by doing a union all over all the queue tables. That's why the first column names the queue table that is the source of the record. </p><p>This script can be made more dynamic by putting it in a package and making a pipelined function for the query, so that you can provide the queue table to query from as a parameter. You could even loop over all the <i>user_queue_tables</i> to dynamically select all the messages from all the tables without having to do <i>union all</i>s over the familiar queue tables. See my <a href="https://blog.darwin-it.nl/2016/06/object-oriented-plsql.html">Object Oriented Pl/Sql article</a> for more info and inspiration. <br /></p><p></p><p>You might even have fun with <a href="https://blog.bar-solutions.com/?p=820">Polymorphic Table Functions</a>; Patrick of Bar Solutions, ACE Director, is an expert on that.</p><p></p><p><br /></p>Anonymousnoreply@blogger.com0tag:blogger.com,1999:blog-4533777417600103698.post-10793548052418768142020-08-11T09:58:00.006+02:002020-08-11T09:58:46.480+02:00The magic of CorrelationSets<p>CorrelationSets in BPEL are as old as the road to Rome. I wrote about it before: </p><ul style="text-align: left;"><li><a href="https://blog.darwin-it.nl/2017/07/pcs-and-correlations-next-big-thing.html" target="_blank">PCS and Correlations: the next big thing cavemen already used...</a></li><li><a href="https://blog.darwin-it.nl/2020/06/use-of-correlation-sets-in-soa-suite.html" target="_blank">Use of correlation sets in SOA Suite</a></li></ul><p>Although it was in the BPEL project from the very beginning, when Oracle acquired it in 2004, you might not have dealt with it before. And maybe you haven't even realized that you can use it in Oracle Integration Cloud, with structured processes. 
<br /></p><p>In the first week of June I got to do a presentation about this subject, in a series of <a href="#" id="https://eventreg.oracle.com/profile/web/index.cfm?PKwebID=0x742084abcd&source=ACMK200511P00019:em:lw:ie:pt:SEV400056775" name="https://eventreg.oracle.com/profile/web/index.cfm?PKwebID=0x742084abcd&source=ACMK200511P00019:em:lw:ie:pt:SEV400056775">Virtual Meetups.</a></p><p>If you weren't able to attend but would like to watch it, then you're in luck: it got recorded by <a href="https://blog.mp3monster.org/2020/07/30/meetup-magic-of-correlations-in-soa-bpm-and-oracle-integration-cloud/">Phil Wilkins</a>: <br /></p><p></p><p><br /></p><div class="separator" style="clear: both; text-align: center;"><iframe allowfullscreen="" class="BLOG_video_class" height="266" src="https://www.youtube.com/embed/IX5BhdqWsEY" width="320" youtube-src-id="IX5BhdqWsEY"></iframe></div><p><br /></p><p>In my presentation I start with a simple demo based on a BPEL process. I have put the resulting code on GitHub: <a href="https://github.com/makker-nl/blog/tree/master/CorrelationDemo">https://github.com/makker-nl/blog/tree/master/CorrelationDemo</a>. </p><p>Then I move on to a more complicated situation in OIC. I created an export for that project and placed it on GitHub too:
<a href="https://github.com/makker-nl/blog/tree/master/CorrelationDemoOIC">https://github.com/makker-nl/blog/tree/master/CorrelationDemoOIC</a><br /></p><p>This allows you to inspect it and try to recreate it yourself.</p><p>My sincere apologies for this late sharing. <br /></p>Martien van den Akkerhttp://www.blogger.com/profile/05183907832966359401noreply@blogger.com0tag:blogger.com,1999:blog-4533777417600103698.post-65815049001372222232020-07-15T15:15:00.002+02:002020-07-15T15:15:50.171+02:00Receive and send WSA Properties in BPEL 2.0Last week I had the honour to present on CorrelationSets in a Virtual Meetup, a feature that relates to the WS-Addressing support of SOA Suite.<br />
<br />
At my current customer, I had to rebuild a BPEL Process from 1.1 to 2.0, to be able to split it up using embedded and reusable subprocesses.<br />
<br />
One requirement is to receive the wsa.action property and send it back in the reply, concatenated with 'Response'.<br />
<br />
Since it implements a WSDL with 3 operations, I need a Pick-OnMessage construction.<br />
<br />
To receive properties you can open the activity, in my case the OnMessage:<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://1.bp.blogspot.com/-12H1G00mM0o/Xw7-e5SbrWI/AAAAAAAADmM/0_1OQekU1k4n9XjWcXXia33O2qgaGZqmQCNcBGAsYHQ/s1600/2020-07-15%2BOnMessageProperties.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="494" data-original-width="621" height="254" src="https://1.bp.blogspot.com/-12H1G00mM0o/Xw7-e5SbrWI/AAAAAAAADmM/0_1OQekU1k4n9XjWcXXia33O2qgaGZqmQCNcBGAsYHQ/s320/2020-07-15%2BOnMessageProperties.png" width="320" /></a></div>
In the source this looks like the following:<br />
<pre class="brush:xml"><onMessage partnerLink="MyService_WS" portType="ns1:myService" operation="myOperation"
variable="MyService_InputVariable">
<bpelx:fromProperties>
<bpelx:fromProperty name="wsa.action" variable="wsaAction"/>
</bpelx:fromProperties></pre>
The <i>wsaAction</i> variable here is based on <i>xsd:string</i>.<br />
<br />
However, this turns out not to work: the <i>wsaAction </i>variable stays empty.<br />
This turns out to be a bug that should have been solved since 11.1.1.6, but apparently is still there. Read more about it in support document <a href="https://support.oracle.com/epmos/faces/DocumentDisplay?id=1345071.1">1345071.1</a>.<br />
<br />
Solution is simple: just remove the <i>wsa.</i> prefix:<br />
<pre class="brush:xml"><onMessage partnerLink="MyService_WS" portType="ns1:myService" operation="myOperation"
variable="MyService_InputVariable">
<bpelx:fromProperties>
<bpelx:fromProperty name="action" variable="wsaAction"/>
</bpelx:fromProperties></pre>
For <i>invoke,</i> <i>reply, receive</i> and other activities it works the same.<br />
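<p>For example, setting a custom action on an invoke could look like this (a hand-written sketch; the partner link, port type and operation names are made up):</p>

```xml
<!-- Sketch: setting the action property on an invoke; all names hypothetical -->
<invoke name="InvokeOtherService" partnerLink="OtherService_WS"
        portType="ns2:otherService" operation="doSomething"
        inputVariable="OtherService_InputVariable">
  <bpelx:toProperties>
    <bpelx:toProperty name="action">string('http://www.darwin-it.nl/my/doSomething')</bpelx:toProperty>
  </bpelx:toProperties>
</invoke>
```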
<br />
As said, in my case I need to reply with a wsa.action that is a concatenation of the received action with 'Response'. This can be done using an expression:<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://1.bp.blogspot.com/-cvUJARQdphI/Xw8AXqBtlVI/AAAAAAAADmY/yWDTE_mern4r7nvi_r_yXh4WTEGxQKD7QCNcBGAsYHQ/s1600/2020-07-15%2BReplyProperties.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="542" data-original-width="516" height="320" src="https://1.bp.blogspot.com/-cvUJARQdphI/Xw8AXqBtlVI/AAAAAAAADmY/yWDTE_mern4r7nvi_r_yXh4WTEGxQKD7QCNcBGAsYHQ/s320/2020-07-15%2BReplyProperties.png" width="304" /></a></div>
Again, first choose <i>wsa.action</i> and then in the source remove the <i>wsa.</i> prefix:<br />
<pre class="brush:xml"><reply name="ReplyMyService" partnerLink="MyService_WS" portType="ns1:myService"
variable="MyService_OutputVariable" operation="myOperation">
<bpelx:toProperties>
<bpelx:toProperty name="action">concat($wsaAction, 'Response')</bpelx:toProperty>
</bpelx:toProperties>
<bpelx:property name="action" variable="WSAction"/>
</reply></pre>
Testing this in SoapUI or ReadyAPI will show:<br />
<pre class="brush:xml"><env:Envelope xmlns:env="http://schemas.xmlsoap.org/soap/envelope/" xmlns:wsa="http://www.w3.org/2005/08/addressing">
<env:Header>
<wsa:Action>http://www.darwin-it.nl/my/myServiceResponse</wsa:Action>
<wsa:MessageID>urn:1694440c-c69a-11ea-bc81-0050569796a9</wsa:MessageID>
<wsa:ReplyTo>
...</pre>
<br />
For more info on setting properties, see the <a href="https://docs.oracle.com/middleware/12212/soasuite/develop/GUID-33A38C1A-38A6-473B-9FEA-D3164AD7A118.htm#SOASE20757">docs</a>.<br />
<br />
<br />Anonymousnoreply@blogger.com0tag:blogger.com,1999:blog-4533777417600103698.post-38720893218414792892020-06-30T14:32:00.001+02:002020-06-30T14:32:44.677+02:00A little bit of insight in SOA Suite future<br />
A few weeks ago I was made aware of a few announcements, which I think make sense and which I want to pass on to my followers, seasoned with a bit of my own perspective.<br />
<h3>
Containerized SOA</h3>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://1.bp.blogspot.com/-eYVq54UmRsA/Xk_cUFWIwRI/AAAAAAAADUk/btoF3z2jHHo6U7sja9bMm4a9SqEZXsjTQCPcBGAYYCw/s1600/OracleKubernetes.png" imageanchor="1" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img border="0" data-original-height="413" data-original-width="617" height="133" src="https://1.bp.blogspot.com/-eYVq54UmRsA/Xk_cUFWIwRI/AAAAAAAADUk/btoF3z2jHHo6U7sja9bMm4a9SqEZXsjTQCPcBGAYYCw/s200/OracleKubernetes.png" width="200" /></a></div>
Last year I had made myself familiar with the Oracle Weblogic Kubernetes Operator. See for instance my <a href="https://blog.darwin-it.nl/2020/02/my-weblogic-on-kubernetes-cheatsheet.html">Cheat Sheet Series</a>. I also had the honour to talk about it during the Tech Summit at OUK in December '19. Weblogic under Kubernetes is apparently the way to go for Weblogic, and with that, for the Fusion Middleware stack as well. However, until now only 'plain' Weblogic is supported under Kubernetes, on all cloud platforms, as well as on your own on-premises Kubernetes platform.<br />
<br />
It was no surprise that SOA Suite would follow, and in March an <a href="https://blogs.oracle.com/integration/announcing-early-access-of-soa-suite-for-kubernetes">early access for SOA Suite on Kubernetes</a> was announced.<br />
<br />
In the announcement it is stated that Oracle will provide container images for SOA Suite, including OSB, that are also certified for deployment on production Kubernetes environments, along with documentation, support files, deployment scripts and samples.<br />
<br />
Later on, other components will be certified. This is good news, because it will allow SOA Suite to run in co-existence with cloud native applications and be part of a more heterogeneous application platform. To me this makes sense. It makes High Availability and Disaster Recovery easier, and although the application landscape will be diverse and heterogeneous, it makes the maintenance, installation, deployment and upgrade of FMW within that landscape more uniformly aligned with other application components like web applications, possibly microservices, etc.<br />
<h3>
Paid Market Place offering</h3>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://1.bp.blogspot.com/-zKiACBZF43A/XvssECbH8kI/AAAAAAAADlc/CWlBDDWwhR8v7RxcYiD0pCScdJNpMMbpgCNcBGAsYHQ/s1600/2020-06-30-SOA-Marketplace.png" imageanchor="1" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img border="0" data-original-height="673" data-original-width="1147" height="187" src="https://1.bp.blogspot.com/-zKiACBZF43A/XvssECbH8kI/AAAAAAAADlc/CWlBDDWwhR8v7RxcYiD0pCScdJNpMMbpgCNcBGAsYHQ/s320/2020-06-30-SOA-Marketplace.png" width="320" /></a></div>
Another announcement I got recently is about the release of a <a href="https://docs.oracle.com/en/cloud/paas/soa-cloud/soa-marketplace/whats-new-soa-suite-marketplace.html">"Paid" listing of Oracle SOA Suite for Oracle Cloud Infrastructure</a> on the Oracle Marketplace. There was already a Bring Your Own License offering, which allowed you to use your universal cloud credits to host your SOA Suite instance in the cloud, provided you purchased a separate license. But now you can also use the Universal Cloud Credits to have a paid, licensed instance of SOA Suite in the cloud, without the need to purchase a license.<br />
<br />
And so there are two new offerings in the market place: <br /><ul>
<li>Oracle SOA Suite on Oracle Cloud Infrastructure (PAID) </li>
<li>Oracle SOA Suite with B2B EDI Adapter on Oracle Cloud Infrastructure (PAID)</li>
</ul>
These offerings include:<br /><ul>
<li>SOA with Service Bus & B2B Cluster, with additional leverage of the B2B EDI Adapter.</li>
<li>MFT Cluster</li>
<li>BAM</li>
</ul>
This will provide better options for deploying SOA Suite on OCI, to:<br />
<ul>
<li>Provision SOA instances using OCI </li>
<li>Manage instances using OCI</li>
<li>Scale up/down/in/out using OCI</li>
<li>Backup/restore using OCI. </li>
</ul>
Oracle's focus is on delivering SOA Suite from the Marketplace. It is expected that current SOA Cloud Service customers will migrate to this offering. The Marketplace SOA Suite will be enhanced and improved with new capabilities and functions that will not necessarily be added to SOA CS. <br />
Probably this will give Oracle a better and more uniform way to improve and deliver new versions of SOA Suite. It also makes sense in relationship to the SOA Suite on Containers announcement.<br />
<br />
For new customers the Marketplace is the way to get SOA Suite. Existing customers can use the BYOL offering, but might need to move to the new offering when contract renewal is opportune.<br />
<h3>
What about Oracle Integration Cloud (OIC)?</h3>
This is still Oracle's prime offering for integrations and process modelling. You should first look at OIC for new projects. Only if you're an existing SOA Suite customer and/or have specific requirements that drive the choice towards SOA Suite and related components should you consider the Marketplace SOA Suite offering.<br />
<br />
This makes the choices a bit clearer, I think.Anonymousnoreply@blogger.com0tag:blogger.com,1999:blog-4533777417600103698.post-90824000701471993552020-06-19T16:42:00.000+02:002020-06-20T15:10:14.480+02:00Use of correlation sets in SOA SuiteYears ago, I had plans to write a book about BPEL or at least a series of articles to be bundled as a BPEL Course. I got stranded after only one <a href="https://blog.darwin-it.nl/2016/05/bpel-chapter-1-hello-world-bpel-project.html">Hello World article</a>.<br />
<br />
This year, I came up with the idea of doing something around Correlation Sets: preparing a series of articles and a talk. And therefore, let's start with an article on Correlation Sets in BPEL. Maybe later on I could pick up those earlier plans again.<br />
<br />
You may have read "BPEL" and be tempted to skip this article. But wait: if you use BPM Suite, the Oracle BPM Process Engine is the exact same thing as the BPEL Process Engine! And if you use the Processes module of Oracle Integration Cloud: it can use Correlation Sets too. Surprise: again it uses the exact same process engine as Oracle SOA Suite BPEL and Oracle BPM Suite.<br />
<h3>
Why Correlation Sets?</h3>
Now, why Correlation Sets and what are those? You may be familiar with OSB or maybe Mulesoft, or other integration tools.<br />
OSB is a <i>stateless </i>engine. What comes in is executed at once, until it is done. So, services in OSB are inherently synchronous and short-lived. You may argue that you can do asynchronous services in OSB. But those are in fact "synchronous" one-way services: Fire & Forget, if you will. They are executed right away (hence the quoted "synchronous"), until they are done. But the calling application does not expect a result (and thus asynchronous in the sense that the caller won't wait).<br />
<br />
You could, and actually I have done it, create asynchronous request-response services in OSB. Asynchronous request-response services are actually two complementary one-way fire & forget services. In such a WSDL both services are defined in different port types: one for the actual service consumer, and one callback service for the service provider. Using WS-Addressing header elements, the calling service provides a <i>ReplyTo</i> callback endpoint and a <i>MessageId</i> to be returned by the responding service as a <i>RelatesTo MessageId</i>.<br />
<br />
This <i>RelatesTo MessageId</i> serves as a correlation id that maps to the initiating <i>MessageId</i>. <a href="https://docs.oracle.com/html/E13734_06/wsaddressing.htm">WS-Addressing</a> is a webservice standard that describes the SOAP header elements to use. As said, you can do this in OSB; OSB even has the WS-Addressing namespaces already defined. However, you have to code the determination and the setting of the <i>MessageId</i> and <i>ReplyTo</i> address yourself.<br />
<br />
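<p>To illustrate the mechanism (a hand-written sketch, not captured from an actual exchange; the endpoint and ids are made up): the initiating request carries a <i>MessageID</i> and a <i>ReplyTo</i> endpoint, and the callback refers back to that id.</p>

```xml
<!-- Request header from the caller (sketch, made-up values) -->
<soap:Header xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/"
             xmlns:wsa="http://www.w3.org/2005/08/addressing">
  <wsa:MessageID>urn:uuid:0001-example</wsa:MessageID>
  <wsa:ReplyTo>
    <wsa:Address>http://caller.example.com/callback</wsa:Address>
  </wsa:ReplyTo>
</soap:Header>

<!-- Callback header from the provider, correlating back to the request -->
<soap:Header xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/"
             xmlns:wsa="http://www.w3.org/2005/08/addressing">
  <wsa:RelatesTo>urn:uuid:0001-example</wsa:RelatesTo>
</soap:Header>
```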
Because of the inherently stateless foundation of OSB, its services are short-lived, and that's why OSB is not suitable for long running processes. The Oracle SOA Suite BPEL engine, on the other hand, is designed to orchestrate services (webservices originally, but from 12c onwards REST services as well) in a stateful way. This makes BPEL suitable for long running transactions as well. Because of that, after the acquisition of Collaxa, the company that created the BPEL engine, Oracle decided to replace its own database product Oracle Workflow (OWF) with BPEL. SOA Suite and its BPEL engine natively support WS-Addressing. Based upon an async request/response WSDL, it will make sure it adds the proper WS-Addressing elements and provides a SOAP endpoint to catch response messages. Based upon the <i>RelatesTo</i> message id in the response, it will correlate the incoming response with the proper BPEL process instance that waits for that message. <br />
<br />
A BPEL process may run from a few seconds to several minutes, days, months, or potentially even years. Experience taught us, though, that we wouldn't recommend BPEL processes to run for longer than a few days. For really long running processes you should choose BPM Suite or Oracle Integration Cloud/Process.<br />
<br />
WS-Addressing helps in correlating response messages to requests that were sent out previously. But it does not correlate <i>ad-hoc</i> messages. When a process runs for more than a few minutes, chances are that the information stored within the process is changed externally. A customer waiting for some process may have relocated or even died. So you may need to interact with a running process. You want to be able to send a message with the changed info to the running process instance. And you want to be sure that the engine correlates the message to the correct instance. Correlation Sets help with these ad-hoc messages that may or may not be sent at any time during the running of the process.<br />
<h3>
An example BPEL process</h3>
Let's make a simple customer processing process that reads an xml file, processes it and writes it back to an xml file.<br />
My composite looks like:<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://1.bp.blogspot.com/--eQGwsjV4qc/Xun2WD1NDEI/AAAAAAAADgY/Y5-WZ27Cp90cg8ieY3QubvRJdMr_GvRYACNcBGAsYHQ/s1600/CorrelationDemoComposite.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="457" data-original-width="1223" height="148" src="https://1.bp.blogspot.com/--eQGwsjV4qc/Xun2WD1NDEI/AAAAAAAADgY/Y5-WZ27Cp90cg8ieY3QubvRJdMr_GvRYACNcBGAsYHQ/s400/CorrelationDemoComposite.png" width="400" /></a></div>
It has two File Adapter definitions: an exposed service that polls the <i>/tmp/In</i> folder for <i>customer*.xml</i> files, and a reference service that writes an xml file into the <i>/tmp/Out</i> folder as <i>customer%SEQ%_%yyMMddHHmmss%.xml</i>. I'm not going to explain how to set up the File adapters; that would be another course chapter.<br />
<br />
For both adapters I created the following XSD:<br />
<pre class="brush:xml"><?xml version="1.0" encoding="UTF-8" ?>
<xsd:schema xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:cmr="http://xmlns.darwin-it.nl/xsd/demo/Customer"
targetNamespace="http://xmlns.darwin-it.nl/xsd/demo/Customer" elementFormDefault="qualified">
<xsd:element name="customer" type="cmr:CustomerType">
<xsd:annotation>
<xsd:documentation>A customer</xsd:documentation>
</xsd:annotation>
</xsd:element>
<xsd:complexType name="CustomerType">
<xsd:sequence>
<xsd:element name="id" maxOccurs="1" type="xsd:string"/>
<xsd:element name="firstName" maxOccurs="1" type="xsd:string"/>
<xsd:element name="lastName" maxOccurs="1" type="xsd:string"/>
<xsd:element name="lastNamePrefixes" maxOccurs="1" type="xsd:string" minOccurs="0"/>
<xsd:element name="gender" maxOccurs="1" type="xsd:string"/>
<xsd:element name="streetName" maxOccurs="1" type="xsd:string"/>
<xsd:element name="houseNumber" maxOccurs="1" type="xsd:string"/>
<xsd:element name="country" maxOccurs="1" type="xsd:string"/>
</xsd:sequence>
</xsd:complexType>
</xsd:schema>
</pre>
(Just when finishing this article, I noticed that I missed a <i>city</i> element. It does not matter for the story, but in the rest of the example I use the <i>country</i> field for <i>city</i>.)<br />
<br />
The first iteration of the BPEL process just receives the file from the <i>customerIn</i> adapter, assigns it to the input variable of the invoke of the <i>customerOut</i> adapter and invokes it:<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://1.bp.blogspot.com/-ONvybEPu5jo/Xun3TcyLFbI/AAAAAAAADgg/pWM7hwthlo8gvPdyohXGwLaZ5jlCtgvUwCNcBGAsYHQ/s1600/ProcessCustomerBpel.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="414" data-original-width="1160" height="142" src="https://1.bp.blogspot.com/-ONvybEPu5jo/Xun3TcyLFbI/AAAAAAAADgg/pWM7hwthlo8gvPdyohXGwLaZ5jlCtgvUwCNcBGAsYHQ/s400/ProcessCustomerBpel.png" width="400" /></a></div>
<br />
Deploy it to the SOA Server and test it:<br />
<br />
<pre class="brush:bash">[oracle@darlin-ind In]$ ls ../TestFiles/
customer1.xml customer2.xml
[oracle@darlin-ind In]$ cp ../TestFiles/customer1.xml .
[oracle@darlin-ind In]$ ls
customer1.xml
[oracle@darlin-ind In]$ ls
customer1.xml
[oracle@darlin-ind In]$ ls
customer1.xml
[oracle@darlin-ind In]$ ls
[oracle@darlin-ind In]$ ls ../Out/
customer2_200617125051.xml
[oracle@darlin-ind In]$
</pre>
The output customer hasn't changed and is just like the input:<br />
<pre class="brush:bash">[oracle@darlin-ind In]$ cat ../Out/customer2_200617125051.xml
<?xml version="1.0" encoding="UTF-8" ?><customer xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://xmlns.darwin-it.nl/xsd/demo/Customer ../Schemas/Customer.xsd" xmlns="http://xmlns.darwin-it.nl/xsd/demo/Customer">
<id>1001</id>
<firstName>Jean-Michel</firstName>
<lastName>Jarre</lastName>
<gender>M</gender>
<streetName>Rue d'Oxygene</streetName>
<houseNumber>4</houseNumber>
<country>Paris</country>
</customer></pre>
<pre class="brush:bash">[oracle@darlin-ind In]$</pre>
<br />
This process is now rather short-lived and doesn't do much except for moving the contents of the file. Now, let's say that this processing of the file takes quite some time, and during the processing the customer may have relocated, died or otherwise changed its information.<br />
<br />
I expanded my composite with a SOAP Service, based on a One-Way WSDL that uses the same xsd:<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://1.bp.blogspot.com/-mF5FrdXiSKU/XutaPv9d0cI/AAAAAAAADg0/BL7c6OUj0GA0GUe8pGOVpz-douHR7eWkQCNcBGAsYHQ/s1600/CorrelationDemoComposite_UpdateCustomer.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="221" data-original-width="1012" height="69" src="https://1.bp.blogspot.com/-mF5FrdXiSKU/XutaPv9d0cI/AAAAAAAADg0/BL7c6OUj0GA0GUe8pGOVpz-douHR7eWkQCNcBGAsYHQ/s320/CorrelationDemoComposite_UpdateCustomer.png" width="320" /></a></div>
And this is how I changed the BPEL:<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://1.bp.blogspot.com/-XuA1nrzaKeQ/Xuy2ZHNKQLI/AAAAAAAADhs/0FDdY3dDqXE_QHHeO2VRGbBBiCDReU08ACNcBGAsYHQ/s1600/ProcessCustomerBpel_UpdateCustomer.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="878" data-original-width="1101" height="508" src="https://1.bp.blogspot.com/-XuA1nrzaKeQ/Xuy2ZHNKQLI/AAAAAAAADhs/0FDdY3dDqXE_QHHeO2VRGbBBiCDReU08ACNcBGAsYHQ/s640/ProcessCustomerBpel_UpdateCustomer.png" width="640" /></a></div>
<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
</div>
<br />
In this example, after setting the customer to the <i>customerOut </i>variable, there is a long-running "customer processing" sequence that takes "about" 5 minutes.<br />
<br />
But in parallel it now also listens to the <i>UpdateCustomer </i>partnerlink using a Receive. This could be done in a loop, to receive further follow-up messages as well.<br />
<br />
This might look a bit unnecessarily complex, with the throw and catch combination. But the thing with the Flow activity is that it only completes when all its branches are completed. So you need a means to "kill" the <i>Receive_UpdateCustomer</i> activity, and adding a Throw activity does this nicely. Although the activity is colored red, this is not an actual fault situation: I use it purely as a flow-control activity. It just has a simple fault name, which I found easiest to enter in the source:<br />
<pre class="brush:xml"><throw name="ThrowFinished" faultName="client:Finished"/>
</pre>
<br />
This is because in the source you can just use the <i>client</i> namespace prefix, while in the designer you would have to provide a complete namespace URI:
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://1.bp.blogspot.com/-Zc7UZNpQFKI/Xuy0CBJqSiI/AAAAAAAADhY/siqhVZCYOcUF0LGwTTxG9plMFQYlNZfWgCNcBGAsYHQ/s1600/ThrowFinised.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="530" data-original-width="528" height="320" src="https://1.bp.blogspot.com/-Zc7UZNpQFKI/Xuy0CBJqSiI/AAAAAAAADhY/siqhVZCYOcUF0LGwTTxG9plMFQYlNZfWgCNcBGAsYHQ/s320/ThrowFinised.png" width="318" /></a></div>
<br />
The same goes for the Catch: after creating one, it's easier to add the fault name with the namespace prefix from the source:
<br />
<pre class="brush:xml"> <catch faultName="client:Finished">
<assign name="AssignInputCustomer">
<copy>
<from>$ReceiveCustomer_Read_InputVariable.body</from>
<to expressionLanguage="urn:oasis:names:tc:wsbpel:2.0:sublang:xpath1.0">$Invoke_WriteCustomerOut_Write_InputVariable.body</to>
</copy>
</assign>
</catch>
</pre>
<br />
Side-note: did you know that if you click on an activity or scope/sequence in the Designer and switch to the source, the cursor is moved to the definition of the activity you selected? To me this often comes in handy with larger BPELs.<br />
<br />
By throwing the <i>Finished</i> exception the Flow activity is left, which closes all its unfinished branches, and thereby quits the Receive too.<br />
<br />
When you get a SOAP message in the BPEL example above, the process would still wait for the processing branch to finish. You probably also need to notify the customer-processing branch that the data has changed. That can be done in the same way, by throwing a custom exception. <br />
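Put together in the BPEL source, the pattern looks roughly like this. A minimal sketch: the scope and sequence names are illustrative, and the long-running processing is simulated with a Wait, as in the example:<br />
<pre class="brush:xml"><scope name="ProcessCustomerScope">
  <faultHandlers>
    <!-- Catching client:Finished ends the Flow, and thereby the pending Receive -->
    <catch faultName="client:Finished">
      <empty name="Finished"/>
    </catch>
  </faultHandlers>
  <flow name="FlowProcessCustomer">
    <sequence name="CustomerProcessingSequence">
      <!-- Long running "customer processing", here simulated with a Wait -->
      <wait name="WaitProcessing">
        <for>'PT5M'</for>
      </wait>
      <throw name="ThrowFinished" faultName="client:Finished"/>
    </sequence>
    <sequence name="UpdateCustomerSequence">
      <receive name="Receive_UpdateCustomer" partnerLink="UpdateCustomer"
               operation="updateCustomer" variable="ReceiveUpdateCustomer_InputVariable"/>
      <!-- Copy the updated customer over the outbound variable here -->
    </sequence>
  </flow>
</scope></pre>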
<h3>
How to define Correlation Sets </h3>
The example above won't work as is, because how does BPEL know to which process instance the message has to be delivered? We need to create a Correlation Set. And to do so, we need to define how we can correlate the UpdateCustomer message to the customerIn message. Luckily there is a <i>Customer.id</i> field. For this example that will do. But keep in mind: you can have multiple processes running for a customer, so in practice you should add something that identifies the particular instance.<br />
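In the BPEL source, a Correlation Set is declared next to the variables and refers to one or more properties. A minimal sketch, using the <i>customerId</i> property from the generated properties WSDL shown further below (the set name is illustrative):<br />
<pre class="brush:xml"><correlationSets>
  <!-- One property: the customer id, defined in ProcessCustomer_properties.wsdl -->
  <correlationSet name="CustomerCS" properties="cor:customerId"/>
</correlationSets></pre>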
<br />
You can add and edit correlation sets on the invoke, receive, and pick/onMessage activities. But also from the BPEL menu:<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://1.bp.blogspot.com/-2zyKWcQApGM/Xuy4rWQzeeI/AAAAAAAADh4/jxBKasxv6yMHl-neaP4aU-0QYFnW4epSQCNcBGAsYHQ/s1600/BPELMenu-CorrelationSets.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="247" data-original-width="467" height="169" src="https://1.bp.blogspot.com/-2zyKWcQApGM/Xuy4rWQzeeI/AAAAAAAADh4/jxBKasxv6yMHl-neaP4aU-0QYFnW4epSQCNcBGAsYHQ/s320/BPELMenu-CorrelationSets.png" width="320" /></a></div>
<br />
<br />
<br />
<br />
Then you can define a Correlation Set:<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://1.bp.blogspot.com/-0mctdU4RfuE/Xuy92RLlQII/AAAAAAAADiQ/QkFlcLT7UUEpkiJeBvRBXTmC-7mWoBoaQCNcBGAsYHQ/s1600/CustomerCorrelationSet.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="675" data-original-width="931" height="232" src="https://1.bp.blogspot.com/-0mctdU4RfuE/Xuy92RLlQII/AAAAAAAADiQ/QkFlcLT7UUEpkiJeBvRBXTmC-7mWoBoaQCNcBGAsYHQ/s320/CustomerCorrelationSet.png" width="320" /></a></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://1.bp.blogspot.com/-d2CD_q8lnK0/Xuy92QQ_vuI/AAAAAAAADiU/oNfnG6YG5Gg-jG9KGOUctq7A6cZam5ipQCNcBGAsYHQ/s1600/FirstReceiveCorrelationSet.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="289" data-original-width="608" height="152" src="https://1.bp.blogspot.com/-d2CD_q8lnK0/Xuy92QQ_vuI/AAAAAAAADiU/oNfnG6YG5Gg-jG9KGOUctq7A6cZam5ipQCNcBGAsYHQ/s320/FirstReceiveCorrelationSet.png" width="320" /></a></div>
<br />
As you can see, you can create multiple Correlation Sets, each with one or more properties. In the last window, create a property, then select it for the Correlation Set and click OK, back up through to the first dialog.<br />
<br />
<br />
You'll see that the Correlation Set isn't valid yet. What is missing, because I didn't provide it in the last dialog, are the property aliases: we need to map the properties to the messages.<br />
I find it convenient to do that on the activities, since we also need to couple the Correlation Sets to particular Invoke, Receive, and/or Pick/OnMessage activities. Let's begin with the first receive:<br />
<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://1.bp.blogspot.com/-g4Lltwo0XZ4/Xuy-Cx0CgWI/AAAAAAAADiY/sTTzflAC1dwY0vwwtCwdsduzrt1EvwWOQCNcBGAsYHQ/s1600/FirstReceiveCorrelationSet.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="289" data-original-width="608" height="152" src="https://1.bp.blogspot.com/-g4Lltwo0XZ4/Xuy-Cx0CgWI/AAAAAAAADiY/sTTzflAC1dwY0vwwtCwdsduzrt1EvwWOQCNcBGAsYHQ/s320/FirstReceiveCorrelationSet.png" width="320" /></a></div>
Select the Correlations tab, and add the Correlation Set. Since this is the activity where the Customer Id first appears in a message in the BPEL process, we need to initiate the Correlation Set here. This can also be done on an invoke, when calling a process that may cause multiple ad-hoc follow-up messages. So, set the Initiate property to yes.<br />
Note that here, too, you can have multiple Correlation Sets on one activity. <br />
<br />
Then click the edit (pencil) button to edit the Correlation Set. And add a property alias:<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://1.bp.blogspot.com/-pqAwzJqG4Go/XuzAVjMWzRI/AAAAAAAADio/7lNbCfCAPboOuo8FnUJ6wz6wH4Fy3f4kwCNcBGAsYHQ/s1600/CustomerCorrelationSetAddAliases.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="850" data-original-width="1156" height="293" src="https://1.bp.blogspot.com/-pqAwzJqG4Go/XuzAVjMWzRI/AAAAAAAADio/7lNbCfCAPboOuo8FnUJ6wz6wH4Fy3f4kwCNcBGAsYHQ/s400/CustomerCorrelationSetAddAliases.png" width="400" /></a></div>
<br />
To find the proper message type, I find it convenient to go through the partnerlink node and then select the proper WSDL. From that WSDL choose the proper message type. Now, you would think you could simply select the particular element. Unfortunately, it is slightly less user-friendly: after choosing the proper message type in the particular WSDL, click in the query field and type Ctrl-Space. A balloon will pop up with the possible fields, and when a field has child elements, a follow-up balloon will pop up. Finish your XPath this way, and click OK as many times as needed to close all the dialogs properly.<br />
<br />
Another side-note: this Ctrl-Space way of working with balloons also works in the expression builder when creating Assign copy rules. Sometimes I get the balloons unasked for, which I actually find a bit annoying.<br />
<br />
Do the same for the customer update Receive:<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://1.bp.blogspot.com/-UkBvKDrqWv4/XuzCZWSRl4I/AAAAAAAADi4/RN722MMhP9Mj2JzmjdbaiXJ7InZjtgixgCNcBGAsYHQ/s1600/CorrelationDemoComposite_UpdateCustomer.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="290" data-original-width="960" height="96" src="https://1.bp.blogspot.com/-UkBvKDrqWv4/XuzCZWSRl4I/AAAAAAAADi4/RN722MMhP9Mj2JzmjdbaiXJ7InZjtgixgCNcBGAsYHQ/s320/CorrelationDemoComposite_UpdateCustomer.png" width="320" /></a></div>
Here it is important to select <i>No</i> for the Initiate property: we now adhere to the already initiated Correlation Set.<br />
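In the source, the two Receives then differ only in the <i>initiate</i> attribute of their correlation entries. A sketch, again with an illustrative set name:<br />
<pre class="brush:xml"><!-- First receive: initiates the correlation set -->
<receive name="ReceiveCustomer" partnerLink="customerIn" operation="Read"
         variable="ReceiveCustomer_Read_InputVariable" createInstance="yes">
  <correlations>
    <correlation set="CustomerCS" initiate="yes"/>
  </correlations>
</receive>
...
<!-- Update receive: correlates on the already initiated set -->
<receive name="Receive_UpdateCustomer" partnerLink="UpdateCustomer"
         operation="updateCustomer" variable="ReceiveUpdateCustomer_InputVariable">
  <correlations>
    <correlation set="CustomerCS" initiate="no"/>
  </correlations>
</receive></pre>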
<br />
Wrap this up, deploy the composite and test.<br />
<h3>
Test Correlations</h3>
As in the first version, copy an xml file to the <i>/tmp/In</i> folder. This results in the following BPEL Flow:<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://1.bp.blogspot.com/-R_tjQmiJbjg/XuzG0SqH7cI/AAAAAAAADjE/JPj43efqfP0w5UdHTmvBq2Dgf5BB7GLewCNcBGAsYHQ/s1600/ReceiveCustomerFlow.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="611" data-original-width="753" height="323" src="https://1.bp.blogspot.com/-R_tjQmiJbjg/XuzG0SqH7cI/AAAAAAAADjE/JPj43efqfP0w5UdHTmvBq2Dgf5BB7GLewCNcBGAsYHQ/s400/ReceiveCustomerFlow.png" width="400" /></a></div>
<br />
The yellow-highlighted activities are now active. So, apparently it waits for a Receive and for the processing (Wait activity).<br />
<br />
From the flow trace you can click on the composite name, next to the instance id, and then click on the Test button:<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://1.bp.blogspot.com/-at65QOd9qVI/XuzHKDa3G9I/AAAAAAAADjM/gVg6aNqZQB0AkZOuL97pHf5rJ1oaXqIOACNcBGAsYHQ/s1600/TestUpdateCustomer.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="364" data-original-width="880" height="165" src="https://1.bp.blogspot.com/-at65QOd9qVI/XuzHKDa3G9I/AAAAAAAADjM/gVg6aNqZQB0AkZOuL97pHf5rJ1oaXqIOACNcBGAsYHQ/s400/TestUpdateCustomer.png" width="400" /></a></div>
And enter new values for your customer:<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://1.bp.blogspot.com/-c1baHoy15lA/XuzHQNj_1EI/AAAAAAAADjQ/b03QNTjU_2w1-RfJjolpq35KzorLYny2wCNcBGAsYHQ/s1600/UpdateCustomerRequest.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="696" data-original-width="707" height="393" src="https://1.bp.blogspot.com/-c1baHoy15lA/XuzHQNj_1EI/AAAAAAAADjQ/b03QNTjU_2w1-RfJjolpq35KzorLYny2wCNcBGAsYHQ/s400/UpdateCustomerRequest.png" width="400" /></a></div>
<br />
<br />
In the bottom right corner you can click the "Test Web Service" button, and on the resulting Response tab you can click "Launch Flow Trace".<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://1.bp.blogspot.com/-6zEYm84ENDQ/XuzIWEktAMI/AAAAAAAADjg/tDTaXMDV6VAjPYlGkNz9eoRcXKTyY3GFgCNcBGAsYHQ/s1600/UpdateCustomerFlow2.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="651" data-original-width="761" height="341" src="https://1.bp.blogspot.com/-6zEYm84ENDQ/XuzIWEktAMI/AAAAAAAADjg/tDTaXMDV6VAjPYlGkNz9eoRcXKTyY3GFgCNcBGAsYHQ/s400/UpdateCustomerFlow2.png" width="400" /></a></div>
<br />
You'll find that the Receive has been done, and the Assign after that as well. Now, only the Wait activity is active.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://1.bp.blogspot.com/-wMWMIy0xzeA/XuzI7Soq_KI/AAAAAAAADjo/7H8Q6_FVMqkKk9k_USqRg-MgE5oMmrp1QCNcBGAsYHQ/s1600/CustomerProcessingFinishedFlow.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="808" data-original-width="448" height="400" src="https://1.bp.blogspot.com/-wMWMIy0xzeA/XuzI7Soq_KI/AAAAAAAADjo/7H8Q6_FVMqkKk9k_USqRg-MgE5oMmrp1QCNcBGAsYHQ/s400/CustomerProcessingFinishedFlow.png" width="221" /></a></div>
<br />
After the processing is done, the flow throws the <i>Finished </i>exception and finishes the BPEL Flow.<br />
In this case the Receive came in earlier than the finish of the Wait activity. So, in this particular flow the throw is unnecessary; but when the message isn't received, the throw is needed to end the Flow.<br />
<br />
Looking in the <i>/tmp/Out</i> folder, we see that the file is updated neatly from the UpdateCustomer request:<br />
<pre class="brush:plain">[oracle@darlin-ind In]$ ls ../Out/
customer2_200617125051.xml customer3_200619160921.xml
[oracle@darlin-ind In]$ cat ../Out/customer3_200619160921.xml
<?xml version="1.0" encoding="UTF-8" ?><ns1:customer xmlns:ns1="http://xmlns.darwin-it.nl/xsd/demo/Customer">
<ns1:id>1001</ns1:id>
<ns1:firstName>Jean-Michel</ns1:firstName>
<ns1:lastName>Jarre</ns1:lastName>
<ns1:lastNamePrefixes/>
<ns1:gender>M</ns1:gender>
<ns1:streetName>Equinoxelane</ns1:streetName>
<ns1:houseNumber>7</ns1:houseNumber>
<ns1:country>Paris</ns1:country>
</ns1:customer>[oracle@darlin-ind In]$
</pre>
<br />
<h3>
A bit of techie-candy</h3>
Where is all this beautiful stuff registered?<br />
First of all, for the correlation properties, you will find that a new WSDL has appeared:<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://1.bp.blogspot.com/-Vn21JTVeN7g/XuzKojObibI/AAAAAAAADj0/b6enmTiWr4shJXqtwbV3aT8856TV0N6ngCNcBGAsYHQ/s1600/Properties.wsdl.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="352" data-original-width="1208" height="116" src="https://1.bp.blogspot.com/-Vn21JTVeN7g/XuzKojObibI/AAAAAAAADj0/b6enmTiWr4shJXqtwbV3aT8856TV0N6ngCNcBGAsYHQ/s400/Properties.wsdl.png" width="400" /></a></div>
<br />
At the top of the source of the BPEL you'll find the following snippet:<br />
<pre class="brush:xml"> <bpelx:annotation>
<bpelx:analysis>
<bpelx:property name="propertiesFile">
<![CDATA[../WSDLs/<b><span style="color: red;">ProcessCustomer_properties.wsdl</span></b>]]>
</bpelx:property>
</bpelx:analysis>
</bpelx:annotation>
<import namespace="http://xmlns.oracle.com/pcbpel/adapter/file/CorrelationDemo/CorrelationDemo/customerIn"
location="../WSDLs/<span style="color: red;"><b>customerIn.wsdl</b></span>" importType="http://schemas.xmlsoap.org/wsdl/" ui:processWSDL="true"/>
</pre>
<br />
Here you see a reference to the properties WSDL, and also an import of the customerIn.wsdl. Let's take a look in there:
<br />
<pre class="brush:xml"><?xml version= '1.0' encoding= 'UTF-8' ?>
<wsdl:definitions
name="customerIn"
targetNamespace="http://xmlns.oracle.com/pcbpel/adapter/file/CorrelationDemo/CorrelationDemo/customerIn"
xmlns:tns="http://xmlns.oracle.com/pcbpel/adapter/file/CorrelationDemo/CorrelationDemo/customerIn"
xmlns:jca="http://xmlns.oracle.com/pcbpel/wsdl/jca/"
xmlns:plt="http://schemas.xmlsoap.org/ws/2003/05/partner-link/"
xmlns:pc="http://xmlns.oracle.com/pcbpel/"
xmlns:imp1="http://xmlns.darwin-it.nl/xsd/demo/Customer"
xmlns:wsdl="http://schemas.xmlsoap.org/wsdl/"
xmlns:cor="http://xmlns.oracle.com/CorrelationDemo/CorrelationDemo/ProcessCustomer/correlationset"
xmlns:bpel="http://docs.oasis-open.org/wsbpel/2.0/process/executable"
<b><span style="color: red;">xmlns:vprop="http://docs.oasis-open.org/wsbpel/2.0/varprop"</span></b>
xmlns:ns="http://oracle.com/sca/soapservice/CorrelationDemo/CorrelationDemo/Customer"
>
<plt:partnerLinkType name="Read_plt">
<plt:role name="Read_role">
<plt:portType name="tns:Read_ptt"/>
</plt:role>
</plt:partnerLinkType>
<b><span style="color: red;"> <vprop:propertyAlias propertyName="cor:customerId" xmlns:tns="http://xmlns.oracle.com/pcbpel/adapter/file/CorrelationDemo/CorrelationDemo/customerIn"
messageType="tns:Read_msg" part="body">
<vprop:query>imp1:id</vprop:query>
</vprop:propertyAlias>
<vprop:propertyAlias propertyName="cor:customerId" xmlns:ns13="http://oracle.com/sca/soapservice/CorrelationDemo/CorrelationDemo/Customer"
messageType="ns13:requestMessage" part="part1">
<vprop:query>imp1:id</vprop:query>
</vprop:propertyAlias></span></b>
<wsdl:import namespace="http://oracle.com/sca/soapservice/CorrelationDemo/CorrelationDemo/Customer"
location="Customer.wsdl"/>
<b><span style="color: red;"><wsdl:import namespace="http://xmlns.oracle.com/CorrelationDemo/CorrelationDemo/ProcessCustomer/correlationset"
location="ProcessCustomer_properties.wsdl"/></span></b>
</pre>
<br />
Below the <i>partnerLinkType</i> you find the propertyAliases.
<br />
Especially with older, migrated processes, this might be a bit tricky, because you might get the property aliases in a different WSDL than you want. Then you need to register the proper WSDL in the BPEL process and move the property aliases to that other WSDL, together with the <i>vprop</i> namespace declaration.<br />
When you move the WSDL to the MDS for reuse, move the property aliases to a separate wrapper WSDL; you shouldn't move them to the MDS with it. They belong to the process and shouldn't be shared, and besides that, it would make it impossible to change them from the designer. I'm not sure if it even would work from the MDS; probably it does, but you should not want that.<br />
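Such a wrapper WSDL could look like the following minimal sketch, modelled after the generated customerIn.wsdl above. It imports the reusable WSDL from the MDS and keeps the propertyAliases local to the process; the wrapper name and the <i>oramds</i> path are assumptions for illustration:<br />
<pre class="brush:xml"><?xml version="1.0" encoding="UTF-8"?>
<wsdl:definitions name="customerInWrapper"
    targetNamespace="http://xmlns.oracle.com/pcbpel/adapter/file/CorrelationDemo/CorrelationDemo/customerIn"
    xmlns:wsdl="http://schemas.xmlsoap.org/wsdl/"
    xmlns:tns="http://xmlns.oracle.com/pcbpel/adapter/file/CorrelationDemo/CorrelationDemo/customerIn"
    xmlns:imp1="http://xmlns.darwin-it.nl/xsd/demo/Customer"
    xmlns:cor="http://xmlns.oracle.com/CorrelationDemo/CorrelationDemo/ProcessCustomer/correlationset"
    xmlns:vprop="http://docs.oasis-open.org/wsbpel/2.0/varprop">
  <!-- The reusable WSDL stays in the MDS; the oramds path is illustrative -->
  <wsdl:import namespace="http://xmlns.oracle.com/pcbpel/adapter/file/CorrelationDemo/CorrelationDemo/customerIn"
               location="oramds:/apps/CorrelationDemo/WSDLs/customerIn.wsdl"/>
  <wsdl:import namespace="http://xmlns.oracle.com/CorrelationDemo/CorrelationDemo/ProcessCustomer/correlationset"
               location="ProcessCustomer_properties.wsdl"/>
  <!-- Process-specific property aliases stay local to the process -->
  <vprop:propertyAlias propertyName="cor:customerId" messageType="tns:Read_msg" part="body">
    <vprop:query>imp1:id</vprop:query>
  </vprop:propertyAlias>
</wsdl:definitions></pre>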
<br />
As I mentioned before, you can have multiple Correlation Sets in your
BPEL (or BPMN) process and even on an activity. In complex interactions
this may make perfectly sense. For instance when there is overlap. You
may have initiated one Correlation Set on an earlier Invoke or Receive,
and use that to correlate to another message in a Receive. But that
message may have another identifying field that can be used to correlate
with other interactions. And so you may have a non-initiating
Correlation Set on an activity that initiates another one. Maybe even
based on different property-aliases on the same message.<br />
<h3>
Pitfalls</h3>
Per Correlation Set you can have multiple properties; they are concatenated into one string. Don't use too many properties to make up the correlation set, preferably only one, and use short scalar elements for the properties. In the past the maximum length was around 1000 characters; I've no idea what it is now. But multiple properties and property aliases make it error-prone: during the concatenation, a different formatting may occur, and it becomes harder to check and validate that the correlation elements in the messages conform with each other. <br />
<br />
In the example above I used the customer id for the correlation property. This results in an initiated correlation set that the UpdateCustomer Receive is listening for. If you would initiate another process instance for the same customer, the process engine would find at the UpdateCustomer Receive that there already is a (same) Receive with the same Correlation Set, and it would fail. The process engine identifies the particular activity in the process definition, and the combination of process, activity and Correlation Set must be unique. A uniqueness violation at this point results in a runtime fault. <br />
<br />
It doesn't matter if the message arrives before or after the Receive is activated. If you would be so fast as to issue an UpdateCustomer request before the process instance has activated the Receive, then the message is stored in a table, and picked up when the Receive activity is reached.<br />
<h3>
Conclusion</h3>
This may be new to you and sound very sophisticated. Or not, of course, if you were already familiar with it. If this is new to you: it was already in the product when Oracle acquired it in 2004!<br />
And not only that: you can use it in OIC Processes as well, and that for years already. I wrote about <a href="https://blog.darwin-it.nl/2017/07/pcs-and-correlations-next-big-thing.html">that earlier</a>.<br />
<br />
More on correlation sets, check out the <a href="https://docs.oracle.com/en/middleware/soa-suite/soa/12.2.1.3/develop/using-correlation-sets-and-message-aggregation.html#GUID-786B1444-55F1-45D5-A0F1-49F3EDE7E163">docs</a>.Anonymousnoreply@blogger.com0tag:blogger.com,1999:blog-4533777417600103698.post-85194689627016294122020-05-19T16:21:00.000+02:002020-05-19T16:43:44.647+02:00Honey, I shrunk the database!<div class="separator" style="clear: both; text-align: center;">
<a href="https://www.oracle.com/database/" imageanchor="1" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img border="0" data-original-height="218" data-original-width="580" height="120" src="https://1.bp.blogspot.com/-_Z3HPWXtZiI/XsPqQJyEp5I/AAAAAAAADeU/0Qd4OLALUA0eIM0l42yuWshBYH17kAc-gCNcBGAsYHQ/s320/OracleDB-logo.png" width="320" /></a></div>
For my current assignment I need to get 3 SOA/B2B environments running. I'm going to try out the multi-hop functionality, where the middle B2B environment should work as a dispatcher. The idea is that the Dev, Test and Pre-Prod environments can send messages to the remote trading partner's test environment through the in-between environment. To the remote trading partner, that in-between environment should act as the local test environment, but it should be able to dispatch the message to the actual Dev, Test or Pre-Prod environment.<br />
<br />
I envision a solution where this in-between B2B environment acts as this dispatching B2B hop. So I need to have 3 VMs running, each with its own database (although I could have them share one database) and Fusion Middleware domain.<br />
<br />
The Vagrant project that <a href="https://blog.darwin-it.nl/2020/05/new-fmw-12c-vagrant-project.html">I wrote about earlier this week</a> creates a database and then provisions all the FMW installations and a domain. That database is a 12cR1 database (that I could upgrade) installed with default settings. In my setup it takes about 1.8GB of memory. My laptop has 16GB, so to have 2 VMs running on it and leave Windows some memory too, I want a VM of at most 6.5 GB.<br />
I need to run an AdminServer and a SOAServer, that I gave 1GB and GB respectively. And since they're not Docker containers, they both run an Oracle Linux 7 OS too.<br />
<br />
So, one of the main steps is to downsize the database to "very small".<br />
<br />
My starting point is an article I wrote years ago about <a href="https://blog.darwin-it.nl/2011/06/webcenter-11g-vm.html">shrinking an 11g database to XE proportions</a>.
<br />
As described in that article, I created a <i>pfile </i>as follows:<br />
<pre class="brush:sql">create pfile from spfile;</pre>
<br />
This creates an <i>initorcl.ora </i>in the $ORACLE_HOME/dbs folder.
<br />
<br />
I copied that file to <i>initorcl.ora.small </i>and edited it:
<br />
<pre class="brush:plain">orcl.__data_transfer_cache_size=0
#orcl.__db_cache_size=1291845632
orcl.__db_cache_size=222298112
#orcl.__java_pool_size=16777216
orcl.__java_pool_size=10M
#orcl.__large_pool_size=33554432
orcl.__large_pool_size=4194304
orcl.__oracle_base='/app/oracle'#ORACLE_BASE set from environment
#orcl.__pga_aggregate_target=620756992
orcl.__pga_aggregate_target=70M
#orcl.__sga_target=1828716544
orcl.__sga_target=210M
#orcl.__shared_io_pool_size=83886080
orcl.__shared_io_pool_size=0
#orcl.__shared_pool_size=385875968
orcl.__shared_pool_size=100M
orcl.__streams_pool_size=0
*.audit_file_dest='/app/oracle/admin/orcl/adump'
*.audit_trail='db'
*.compatible='12.1.0.2.0'
*.control_files='/app/oracle/oradata/orcl/control01.ctl','/app/oracle/fast_recovery_area/orcl/control02.ctl'
*.db_block_size=8192
*.db_domain=''
*.db_name='orcl'
*.db_recovery_file_dest='/app/oracle/fast_recovery_area'
*.db_recovery_file_dest_size=4560m
*.diagnostic_dest='/app/oracle'
*.dispatchers='(PROTOCOL=TCP) (SERVICE=orclXDB)'
*.open_cursors=300
*.pga_aggregate_target=578m
*.processes=300
*.remote_login_passwordfile='EXCLUSIVE'
#*.sga_target=1734m
*.sga_target=350M
*.undo_tablespace='UNDOTBS1'</pre>
<br />
For the lines that I changed, the original values are kept, commented out. So I downsized the db_cache_size, java_pool, large_pool and pga_aggregate_target, as well as the sga_target, shared_io_pool (set to 0 to have it auto-managed) and shared_pool. I needed to set the sga_target to at least 350M to get the database started.<br />
SOA Suite needs at least 300 for both processes and open_cursors.<br />
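After the restart you can verify what the instance actually applied. A quick check from SQL*Plus (a sketch; run as sysdba):<br />
<pre class="brush:sql">-- Check the effective memory and process settings after the restart
show parameter sga_target
show parameter pga_aggregate_target
show parameter processes
show parameter open_cursors
-- Or look at the actual SGA component sizes
select component, current_size/1024/1024 as size_mb
from   v$sga_dynamic_components
where  current_size > 0;</pre>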
<br />
Now the script checks if the database is running. It is actually based on a copy of the <a href="https://github.com/makker-nl/vagrant/blob/master/Stage/commonScripts/db/12.1/StartStop/startDB.sh">startDB.sh</a> script, also in my Vagrant project.<br />
<br />
If it is running, it shuts down the database. It then creates a <i>pfile </i>as a backup. If the database isn't running, it only creates the <i>pfile</i>.<br />
<br />
Then it copies the <i>initorcl.ora.small </i>file over <i>initorcl.ora </i>(keeping a backup of the original), creates an <i>spfile </i>from it, and then starts the database again.<br />
<br />
<pre class="brush:bash">#!/bin/bash
SCRIPTPATH=$(dirname $0)
#
. $SCRIPTPATH/../../install_env.sh
. $SCRIPTPATH/db12c_env.sh
#
db_num=`ps -ef|grep pmon |grep -v grep |awk 'END{print NR}'`
if [ $db_num -gt 0 ]
then
echo "Database Already RUNNING."
$ORACLE_HOME/bin/sqlplus "/ as sysdba" <<EOF
shutdown immediate;
prompt create new initorcl.ora.
create pfile from spfile;
exit;
EOF
#
# With use of a plugable database the following line needs to be added after the startup command
# startup pluggable database pdborcl;
#
sleep 10
echo "Database Services Successfully Stopped. "
else
echo "Database Not yet RUNNING."
$ORACLE_HOME/bin/sqlplus "/ as sysdba" <<EOF
prompt create new initorcl.ora.
create pfile from spfile;
exit;
EOF
sleep 10
fi
#
echo Copy initorcl.ora.small to $ORACLE_HOME/dbs/initorcl.ora, with backup to $ORACLE_HOME/dbs/initorcl.ora.org
mv $ORACLE_HOME/dbs/initorcl.ora $ORACLE_HOME/dbs/initorcl.ora.org
cp $SCRIPTPATH/initorcl.ora.small $ORACLE_HOME/dbs/initorcl.ora
#
echo "Starting Oracle Database again..."
$ORACLE_HOME/bin/sqlplus "/ as sysdba" <<EOF
create spfile from pfile;
startup;
exit;
EOF
</pre>
<br />
The scripts can be found <a href="https://github.com/makker-nl/vagrant/tree/master/Stage/commonScripts/db/12.1">here</a>.<br />
<br />
Oh, by the way: I must state here that I'm not a DBA, and I'm not sure if those settings all make sense together (I should have someone review them). So you should not rely on them for a serious environment, not even a development one: my motto is that a development environment is a developer's production environment. For me this setup is just to be able to try something out, and to show the mechanism to you.<br />
<br />
<br />
<br />
<br />Anonymousnoreply@blogger.com0tag:blogger.com,1999:blog-4533777417600103698.post-5538171548211639942020-05-15T17:12:00.001+02:002020-05-15T17:15:28.295+02:00New FMW 12c Vagrant project<h3>
Introduction </h3>
Several years ago I blogged about automatic creation of Fusion Middleware environments.<br />
See for instance <a href="https://blog.darwin-it.nl/2016/05/automatic-install-of-soa-suite-and.html">this article on installation</a>, <a href="https://blog.darwin-it.nl/2016/06/scripted-domain-creation-for-soabpm-osb.html">this one on the domain creation</a> and <a href="https://blog.darwin-it.nl/2016/06/a-couple-of-notes-of-automatic.html">these notes</a>.<br />
<br />
In between, I wrote several articles on issues I ran into, <a href="https://blog.darwin-it.nl/2017/04/new-and-improved-re-start-and-stop.html">start/stop scripts</a>, etc.<br />
<br />
Later I found out <a href="https://blog.darwin-it.nl/2018/04/the-vagrant-way-of-provisioning.html">about Vagrant</a>, and since then I've worked with that. I enhanced this through the years; for instance, nowadays I use <a href="https://blog.darwin-it.nl/2019/04/split-your-vagrant-provisioners.html">different provisioners</a> to set up my environment.<br />
<br />
Until this week I struggled with an Oracle Linux 7 Update 7 box, as <a href="https://blog.darwin-it.nl/2020/05/vagrant-oracle-linux-and-vagrant-user.html">I wrote earlier this week</a>.<br />
<br />
For my current customer I needed to create a few B2B environments. So I got back to my Vagrant projects and scripts and built a Vagrant project that can create a SOA/BPM/OSB+B2B environment.<br />
<br />
You can find it on GitHub in my <a href="https://github.com/makker-nl/vagrant/tree/master/ol77_soa12c">ol77_soa12c project</a>, with the scripts in <a href="https://github.com/makker-nl/vagrant/tree/master/Stage/commonScripts">this folder</a>.<br />
<br />
You'll need to get an Oracle Linux 7U7 Vagrant base box yourself. I tried to create one based on the simple base box of Oracle, as I wrote earlier this year. But in the end I created a simple base install of OL7U7, with one disk, a Server with GUI package, and a vagrant user (with password vagrant), as you can read in earlier articles.<br />
<br />
Also you'll need to download the installer zips from <a href="http://edelivery.oracle.com/">edelivery.oracle.com</a>.<br />
<h3>
Modularisation</h3>
What I did with my scripts in this revision is split up the main method of the <a href="https://github.com/makker-nl/vagrant/blob/master/Stage/commonScripts/fmw/12.2.1.3/soabpm1221_domain/createFMWDomain.py">domain creation script</a>:<br />
<pre class="brush:python">#
def main():
    try:
        #
        # Section 1: Base Domain + Admin Server
        createBaseDomain()
        #
        # Section 2: Extend FMW Domain with templates
        extendFMWDomain()
        #
        # Section 3: Create Domain Datasources
        createDatasources()
        #
        # Section 4: Create UnixMachines, Clusters and Managed Servers
        createMachinesClustersAndServers()
        #
        # Section 5: Add Servers to ServerGroups.
        addFMWServersToGroups()
        #
        print('Updating the domain.')
        updateDomain()
        print('Closing the domain.')
        closeDomain();
        #
        # Section 6: Create boot properties files.
        createBootPropertiesForServers()
        #
        # Checks
        print('\n7. Checks')
        print(lineSeperator)
        listServerGroups(domainHome)
        #
        print('\nExiting...')
        exit()
    except NameError, e:
        print 'Apparently properties not set.'
        print "Please check the property: ", sys.exc_info()[0], sys.exc_info()[1]
        usage()
    except:
        apply(traceback.print_exception, sys.exc_info())
        stopEdit('y')
        exit(exitcode=1)</pre>
<br />
All the sections I moved into separate sub-functions. I also added an extra section for checks and validations. One check I added is to list the server groups of the domain servers, but I may add other validations later.<br />
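Such a server-group listing can be a small WLST function around the offline <i>getServerGroups()</i> command. A minimal sketch; the server name list here is illustrative, in the real script it would be derived from the same properties:<br />
<pre class="brush:python">#
# Check: list the server groups per managed server.
def listServerGroups(domainHome):
    readDomain(domainHome)
    # Illustrative list; derive these names from the fmw.properties settings.
    for serverName in [soaSvr1, osbSvr1]:
        serverGroups = getServerGroups(serverName)
        print serverName, ':', serverGroups
    closeDomain()</pre>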
<br />
<h3>
Policy Manager</h3>
<br />
Another thing is that in the method <i>addFMWServersToGroups()</i> I changed the script so that it complies with the <a href="https://blogs.oracle.com/soa/soa-suite-12c%3a-topology-suggestions">topology suggestions from Oracle Product Management</a>. An important aspect here is that for SOA, OSB and BAM you need to determine if you want a domain with only one of these products, or a combined domain. By default each of these products will have the Oracle Web Services Management Policy Manager targeted into its particular cluster or server. However, you should have only one Policy Manager per domain. So, if you want a combined domain with both SOA and OSB, you need to create a separate WSM_PM cluster. This is done using the <i>wsmEnabled </i>property in the <a href="https://github.com/makker-nl/vagrant/blob/master/Stage/commonScripts/fmw/12.2.1.3/soabpm1221_domain/fmw.properties">fmw.properties</a> file. Based on this same property the server groups are added:<br />
<pre class="brush:python">#
# Add a FMW server to the appropriate group, depending on whether a separate WSM PM Cluster is added.
def addFMWServerToGroups(server, svrGrpOnly, svrGrpComb):
  if wsmEnabled == 'true':
    print 'WSM Enabled: add server group(s) '+",".join(svrGrpOnly)+' to '+server
    setServerGroups(server, svrGrpOnly)
  else:
    print 'WSM Disabled: add server group(s) '+",".join(svrGrpComb)+' to '+server
    setServerGroups(server, svrGrpComb)
#
# 5. Set Server Groups to the Domain Servers
def addFMWServersToGroups():
  print('\n5. Add Servers to ServerGroups')
  print(lineSeperator)
  cd('/')
  #print 'Add server groups '+adminSvrGrpDesc+ ' to '+adminServerName
  #setServerGroups(adminServerName, adminSvrGrp)
  if osbEnabled == 'true':
    addFMWServerToGroups(osbSvr1, osbSvrOnlyGrp, osbSvrCombGrp)
    if osbSvr2Enabled == 'true':
      addFMWServerToGroups(osbSvr2, osbSvrOnlyGrp, osbSvrCombGrp)
  if soaEnabled == 'true':
    addFMWServerToGroups(soaSvr1, soaSvrOnlyGrp, soaSvrCombGrp)
    if soaSvr2Enabled == 'true':
      addFMWServerToGroups(soaSvr2, soaSvrOnlyGrp, soaSvrCombGrp)
  if bamEnabled == 'true':
    addFMWServerToGroups(bamSvr1, bamSvrOnlyGrp, bamSvrCombGrp)
    if bamSvr2Enabled == 'true':
      addFMWServerToGroups(bamSvr2, bamSvrOnlyGrp, bamSvrCombGrp)
  if wsmEnabled == 'true':
    print 'Add server group(s) '+",".join(wsmSvrGrp)+' to '+wsmSvr1+' and possibly '+wsmSvr2
    setServerGroups(wsmSvr1, wsmSvrGrp)
    if wsmSvr2Enabled == 'true':
      setServerGroups(wsmSvr2, wsmSvrGrp)
  if wcpEnabled == 'true':
    print 'Add server group(s) '+",".join(wcpSvrGrp)+' to '+wcpSvr1+' and possibly '+wcpSvr2
    setServerGroups(wcpSvr1, wcpSvrGrp)
    if wcpSvr2Enabled == 'true':
      setServerGroups(wcpSvr2, wcpSvrGrp)
  print('Finished ServerGroups.')</pre>
<br />
The groups are declared at the top:<br />
<pre class="brush:python"># ServerGroup definitions
# See also: https://blogs.oracle.com/soa/soa-suite-12c%3a-topology-suggestions
#adminSvrGrp=["JRF-MAN-SVR"]
osbSvrOnlyGrp=["OSB-MGD-SVRS-ONLY"]
osbSvrCombGrp=["OSB-MGD-SVRS-COMBINED"]
soaSvrOnlyGrp=["SOA-MGD-SVRS-ONLY"]
soaSvrCombGrp=["SOA-MGD-SVRS"]
bamSvrOnlyGrp=["BAM12-MGD-SVRS-ONLY"]
bamSvrCombGrp=["BAM12-MGD-SVRS"]
wsmSvrGrp=["WSMPM-MAN-SVR", "JRF-MAN-SVR", "WSM-CACHE-SVR"]
wcpSvrGrp=["SPACES-MGD-SVRS","PRODUCER_APPS-MGD-SVRS","AS-MGD-SVRS","DISCUSSIONS-MGD-SVRS"]
wccSvrGrp=["UCM-MGD-SVR"]</pre>
<br />
For SOA, OSB and BAM you see that there is a default or "combined" server group, and a "server only" group. If <i>wsmEnabled</i> is false, the combined group is used, which targets the Policy Manager to the managed server or cluster. If it is true, the "only" group is used.
<br />
<h3>
Other Remarks</h3>
An earlier version of the project failed when creating the domain: somehow I had to run the script twice to get the domain created. That problem is solved now.<br />
<br />
In my scripts I still use the 12.2.1.3 zips, but they are quite easily adaptable for 12.2.1.4. I hope to do that in the near future, but since my current customer still uses 12.2.1.3, I started from there. <br />
<br />
The project also adapts the nodemanager properties, creates a nodemanager Linux service, and copies start/stop scripts. However, I missed the bit that sets the listener port and type (plain or SSL) of the nodemanager in the Machine definition. So starting the domain still needs a little bit of tweaking.<br />
<br />
And for my project I need at least 3 environments. So I need to downsize the database and the managed servers so that I can run an environment in 6GB, and have 2 VMs on my 16GB laptop. <br />
<br />
And I need to add a bridged network adapter to the Vagrant project, so that I can have the environments connect to each other.Anonymousnoreply@blogger.com1tag:blogger.com,1999:blog-4533777417600103698.post-21788767971680981972020-05-13T15:54:00.001+02:002020-05-13T15:54:41.381+02:00Vagrant Oracle Linux and the Vagrant user: use a password<a href="https://blog.darwin-it.nl/2019/12/create-vagrant-box-with-oracle-linux-7.html">Last year </a>and <a href="https://blog.darwin-it.nl/2020/02/vagrant-box-with-oracle-linux-77.html">earlier this year</a> I struggled to create a new Vagrant box based on an installation of the Oracle base box. I had some extra requirements, for instance having a GUI in my server to get to a proper desktop when that comes in handy. In the end I found that it might be more convenient to create a base box myself. I also tried using an ssh-key to have the vagrant user connect to the box to do the provisioning. But whatever I did, I got "Cannot allocate memory" errors at any stage of the provisioning. For instance, when upgrading the guest additions:<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://1.bp.blogspot.com/-8mMyUdyEXLU/Xrv26Uff8MI/AAAAAAAADd8/xxGs_FCjLssp8Tm33mKLJbDv22LsyINuACNcBGAsYHQ/s1600/2020-05-13-vagrant-cannot-allocate-memory.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="512" data-original-width="1083" height="151" src="https://1.bp.blogspot.com/-8mMyUdyEXLU/Xrv26Uff8MI/AAAAAAAADd8/xxGs_FCjLssp8Tm33mKLJbDv22LsyINuACNcBGAsYHQ/s320/2020-05-13-vagrant-cannot-allocate-memory.png" width="320" /></a></div>
<br />
Using a ssh-key is actually the recommended approach. Read my <a href="https://blog.darwin-it.nl/2020/02/vagrant-box-with-oracle-linux-77.html">previous blog article</a> on the matter for instructions on how to do it.<br />
<br />
It struck me: why couldn't I get an Oracle Linux 7U7 box working as a base for new VMs? And why would I get these nasty memory allocation errors?<br />
I upgraded from Vagrant 2.2.6 to 2.2.9, and VirtualBox from 6.1.4 to 6.1.6, but the problem wasn't related to these versions.<br />
<br />
And just now I realized that the one thing I do differently with this box, compared to my OL7U5 box, is using an ssh-key for the vagrant user instead of a password. So, I made sure that the vagrant user can log on using an ssh password, by reviewing the file <i>/etc/ssh/sshd_config</i> and specifically the option <i>PasswordAuthentication</i>:<br />
<pre class="brush:plain">...
# To disable tunneled clear text passwords, change to no here!
PasswordAuthentication yes
#PermitEmptyPasswords no
#PasswordAuthentication no
...</pre>
Make sure it's set to <i>yes</i>.<br />
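Note that sshd uses the first uncommented occurrence of an option, which is why the commented-out <i>#PasswordAuthentication no</i> further down is harmless. A small Python sketch of that first-match rule (a hypothetical helper, just to illustrate, not part of any provisioning script):

```python
def effective_sshd_option(config_text, option, default='yes'):
    """Return the value of the first uncommented occurrence of an
    sshd_config option; sshd ignores later occurrences of the same option."""
    for line in config_text.splitlines():
        stripped = line.strip()
        if not stripped or stripped.startswith('#'):
            continue  # skip blanks and comments
        parts = stripped.split(None, 1)
        if len(parts) == 2 and parts[0].lower() == option.lower():
            return parts[1]
    return default  # PasswordAuthentication defaults to 'yes'

sshd_config = """
# To disable tunneled clear text passwords, change to no here!
PasswordAuthentication yes
#PermitEmptyPasswords no
#PasswordAuthentication no
"""
print(effective_sshd_option(sshd_config, 'PasswordAuthentication'))  # yes
```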
<br />
<br />
Then I repacked the box:
<br />
<pre class="brush:plain">d:\Projects\vagrant>vagrant package --base OL7U7 --output d:\Projects\vagrant\boxes\ol77GUIv1.1.box
==> OL7U7: Exporting VM...
==> OL7U7: Compressing package to: d:/Projects/vagrant/boxes/ol77GUIv1.1.box</pre>
<br />
<br />
And removed the old box:
<br />
<pre class="brush:plain">d:\Projects\vagrant\ol77_gui>vagrant box list
ol75 (virtualbox, 0)
ol77 (virtualbox, 0)
ol77GUIv1.0 (virtualbox, 0)
ol77GUIv1.1 (virtualbox, 0)
d:\Projects\vagrant\ol77_gui>vagrant box remove ol77GUIv1.0
Removing box 'ol77GUIv1.0' (v0) with provider 'virtualbox'...
d:\Projects\vagrant\ol77_gui>vagrant box list
ol75 (virtualbox, 0)
ol77 (virtualbox, 0)
ol77GUIv1.1 (virtualbox, 0)
</pre>
<br />
I found that it can be useful to check whether there are Vagrant processes currently running, since at one point I got an exception where Vagrant said that the box was locked:
<br />
<pre class="brush:plain">d:\Projects\vagrant\ol77_gui>vagrant global-status
id name provider state directory
--------------------------------------------------------------------
There are no active Vagrant environments on this computer! Or,
you haven't destroyed and recreated Vagrant environments that were
started with an older version of Vagrant.
</pre>
<br />
If your box is running it could say something like:
<br />
<pre class="brush:plain">d:\Projects\vagrant\ol77_gui>vagrant global-status
id name provider state directory
-----------------------------------------------------------------------
42cbd44 darwin virtualbox running d:/Projects/vagrant/ol77_gui
The above shows information about all known Vagrant environments
on this machine. This data is cached and may not be completely
up-to-date (use "vagrant global-status --prune" to prune invalid
entries). To interact with any of the machines, you can go to that
directory and run Vagrant, or you can use the ID directly with
Vagrant commands from any directory. For example:
"vagrant destroy 1a2b3c4d"
</pre>
<br />
You could also do a prune of invalid entries:
<br />
<pre class="brush:plain">d:\Projects\vagrant\ol77_gui>vagrant global-status --prune
id name provider state directory
-----------------------------------------------------------------------
42cbd44 darwin virtualbox running d:/Projects/vagrant/ol77_gui
...
</pre>
<br />
In the Vagrantfile I set the ssh username and password:
<br />
<pre class="brush:ruby">Vagrant.configure("2") do |config|
# The most common configuration options are documented and commented below.
# For a complete reference, please see the online documentation at
# https://docs.vagrantup.com.
# Every Vagrant development environment requires a box. You can search for
# boxes at https://vagrantcloud.com/search.
config.vm.box = BOX_NAME
config.vm.box_url=BOX_URL
config.vm.define "darwin"
config.vm.provider :virtualbox do |vb|
vb.name = VM_NAME
vb.gui = true
vb.memory = VM_MEMORY
vb.cpus = VM_CPUS
# Set clipboard and drag&drop bidirectional
vb.customize ["modifyvm", :id, "--clipboard-mode", "bidirectional"]
vb.customize ["modifyvm", :id, "--draganddrop", "bidirectional"]
...
end
#config.ssh.username="darwin"
config.ssh.username="vagrant"
config.ssh.password="vagrant"
</pre>
<br />
It is common to set the vagrant user's password to "vagrant".
Lastly I "upped" my VM, and this all seemed to solve my memory allocation problems.<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://1.bp.blogspot.com/-RLZXxZcRgUc/Xrv7r1QtQCI/AAAAAAAADeI/rO6C4NRGTJUVwyhCy-y3aP1PYASeTjLaQCNcBGAsYHQ/s1600/2020-05-13-vagrant-cannot-allocate-memory-solved.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="512" data-original-width="1083" height="151" src="https://1.bp.blogspot.com/-RLZXxZcRgUc/Xrv7r1QtQCI/AAAAAAAADeI/rO6C4NRGTJUVwyhCy-y3aP1PYASeTjLaQCNcBGAsYHQ/s320/2020-05-13-vagrant-cannot-allocate-memory-solved.png" width="320" /></a></div>
<br />
Apparently, we can't use the ssh-key to provision the box.<br />
<br />
Anonymousnoreply@blogger.com0tag:blogger.com,1999:blog-4533777417600103698.post-44958409144443912452020-04-29T17:29:00.002+02:002020-04-29T17:29:16.387+02:00SOA Suite: SOAP Faults in BPEL and MediatorIn the past few months, at our current customer we have been running a "robustness project" to improve our SOA Suite implementation. It turned out that we had a lot of duplication: many WSDLs were kept inside the composite projects themselves, often as leftovers of BPEL projects from 10g. But some of them couldn't be moved because doing so would break the project.<br />
<br />
The first projects where I encountered the problem were projects with Mediators. After moving the WSDLs to the MDS, most of our SoapUI/ReadyAPI unit tests worked, except for those simulating a SOAP Fault: it seemed that the Mediator could not map the SOAP Fault. I searched "myself an accident", as we would say in Holland, but without any luck.<br />
<br />
Actually, I can't find any document that talks about catching SOAP Faults in SOA Suite. Which is a weird thing, because BPM Suite, sharing the same soa-infra and process engine, does treat SOAP Faults specially: BPM can react with specific exception transitions on SOAP Faults.<br />
<br />
So what is this weird behavior? Well, SOA Suite, apparently both BPEL and Mediator, interprets SOAP Faults as <i>Remote Faults</i>! So, in BPEL you can't catch them as SOAP Faults, and the Mediator can't route them in the way the UI would suggest.<br />
<br />
However, just now I found a solution. That is, I found it earlier for Mediator, but couldn't explain it. Since the same behavior can be seen in BPEL as well, I can write down my story.<br />
<br />
Normally, if you add a reference to your composite, it looks something like this in the composite.xml source:<br />
<pre class="brush:xml"> <reference name="ManagedFileService"
ui:wsdlLocation="oramds:/apps/Generiek/WSDLs/ManagedFileUtilProcess.wsdl">
<interface.wsdl interface="http://xmlns.darwin-it.nl/soa/wsdl/Generiek/ManagedFileService/ManagedFileProcess#wsdl.interface(managedfile_ptt)"/>
<binding.ws port="http://xmlns.darwin-it.nl/soa/wsdl/Generiek/ManagedFileService/ManagedFileProcess#wsdl.endpoint(managedfile_ptt/managedfile_pttPort)"
location="oramds:/apps/Generiek/WSDLs/ManagedFileUtilProcess.wsdl" soapVersion="1.1">
<property name="endpointURI">http://soa.hostname:soa.port/soa-infra/services/default/ManagedFileService/managedfileprocess_client_ep</property>
<property name="weblogic.wsee.wsat.transaction.flowOption" type="xs:string" many="false">WSDLDriven</property>
</binding.ws>
</reference></pre>
<br />
What you see here is a <i>ui:wsdlLocation</i>, which should point to a WSDL in the MDS. Under <i>binding.ws</i> there is a <i>location</i> attribute that at many customers would point to your concrete WSDL. At my current customer we work with an <i>endpointURI</i> property that is overwritten using the config plan. Either way, the <i>service</i> element of the WSDL is in the MDS, or on the remote server if you refer to an external service.<br />
<br />
If the external service raises a SOAP Fault, it can't be caught other than through a Catch All:<br />
<div class="separator" style="clear: both; text-align: center;">
</div>
<div class="separator" style="clear: both; text-align: center;">
</div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://1.bp.blogspot.com/-YstMV3kTFtc/Xqmcy6ijQ3I/AAAAAAAADdQ/M8ezLv2NLms5vSjOu9ypAx_DDtfXLmImACNcBGAsYHQ/s1600/2020-04-29%2BMoveFile%2BCatch.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="555" data-original-width="955" height="231" src="https://1.bp.blogspot.com/-YstMV3kTFtc/Xqmcy6ijQ3I/AAAAAAAADdQ/M8ezLv2NLms5vSjOu9ypAx_DDtfXLmImACNcBGAsYHQ/s400/2020-04-29%2BMoveFile%2BCatch.png" width="400" /></a></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
</div>
<br />
<br />
This also makes it hard to interact with the fault in the correct way, to interpret the underlying problem. This service should rename or move a file on the filesystem, and in this case the file couldn't be found. But the Remote Fault would suggest something else.<br />
<br />
But, there is a real easy workaround. I wouldn't call it a solution, since I think SOA Suite should just handle SOAP Faults correctly.<br />
<br />
In the composite's <i>WSDLs</i> folder, make a copy of the concrete WSDL and strip it down as follows:<br />
<pre class="brush:xml"><?xml version= '1.0' encoding= 'UTF-8' ?>
<wsdl:definitions name="ManagedFileProcess"
targetNamespace="http://xmlns.darwin-it.nl/soa/wsdl/Generiek/ManagedFileService/ManagedFileProcess"
xmlns:tns="http://xmlns.darwin-it.nl/soa/wsdl/Generiek/ManagedFileService/ManagedFileProcess"
xmlns:mfs="http://xmlns.darwin-it.nl/soa/xsd/Generiek/ManagedFileService/ManagedFileProcess"
xmlns:plnk="http://docs.oasis-open.org/wsbpel/2.0/plnktype"
xmlns:wsdl="http://schemas.xmlsoap.org/wsdl/" xmlns:soap="http://schemas.xmlsoap.org/wsdl/soap/">
<wsdl:import location="oramds:/apps/Generiek/WSDLs/ManagedFileUtilProcess.wsdl"
namespace="http://xmlns.darwin-it.nl/soa/wsdl/Generiek/ManagedFileService/ManagedFileProcess"/>
<wsdl:service name="managedfile_ptt">
<wsdl:port name="managedfile_pttPort" binding="tns:managedfile_pttSOAP11Binding">
<soap:address location="http://soa.hostname:soa.port/soa-infra/services/default/ManagedFileService/managedfileprocess_client_ep"/>
</wsdl:port>
</wsdl:service>
</wsdl:definitions>
</pre>
In the service element there is a reference to the SOAP endpoint of the service, in this case simply a local SOA Suite service.
<br />
<br />
In the composite you need to change the reference:
<br />
<pre class="brush:xml"> <reference name="ManagedFileService"
ui:wsdlLocation="oramds:/apps/Generiek/WSDLs/ManagedFileUtilProcess.wsdl">
<interface.wsdl interface="http://xmlns.darwin-it.nl/soa/wsdl/Generiek/ManagedFileService/ManagedFileProcess#wsdl.interface(managedfile_ptt)"/>
<binding.ws port="http://xmlns.darwin-it.nl/soa/wsdl/Generiek/ManagedFileService/ManagedFileProcess#wsdl.endpoint(managedfile_ptt/managedfile_pttPort)"
location="WSDLs/ManagedFileUtilProcess.wsdl" soapVersion="1.1">
<property name="endpointURI">http://soa.hostname:soa.port/soa-infra/services/default/ManagedFileService/managedfileprocess_client_ep</property>
<property name="weblogic.wsee.wsat.transaction.flowOption" type="xs:string" many="false">WSDLDriven</property>
</binding.ws>
</reference></pre>
Here you change the <i>binding.ws location</i> to refer to the local, stripped WSDL. The endpointURI property does not make much sense anymore, but it does not get in the way.<br />
<br />
You also need to change your config plan to contain the following WSDL Replacement:<br />
<pre class="brush:xml"> <wsdlAndSchema name="*">
<searchReplace>
<search>http://soa.hostname:soa.port/soa-infra/services/default/ManagedFileService/managedfileprocess_client_ep</search>
<replace>http://soasuite12c.soa.darwin-it.nl:8001/soa-infra/services/default/ManagedFileService/managedfileprocess_client_ep</replace>
</searchReplace>
</wsdlAndSchema></pre>
<br />
On deployment this replaces the placeholder endpoint in the WSDL with the one that should actually be used.
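The searchReplace in a config plan is essentially a plain text substitution applied to every WSDL and schema in the composite (hence <i>name="*"</i>). A minimal Python sketch of the idea, for illustration only, since the deployment tooling does this for you:

```python
def apply_search_replace(document, replacements):
    """Apply config-plan style searchReplace pairs to a WSDL/XSD document."""
    for search, replace in replacements:
        document = document.replace(search, replace)
    return document

wsdl = ('<soap:address location="http://soa.hostname:soa.port/soa-infra/'
        'services/default/ManagedFileService/managedfileprocess_client_ep"/>')
replacements = [
    ('http://soa.hostname:soa.port/soa-infra/services/default/'
     'ManagedFileService/managedfileprocess_client_ep',
     'http://soasuite12c.soa.darwin-it.nl:8001/soa-infra/services/default/'
     'ManagedFileService/managedfileprocess_client_ep'),
]
print(apply_search_replace(wsdl, replacements))
```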
<br />
<br />
<br />
If you deploy this, using the config plan, then amazingly, SOAP Faults <i>are</i> correctly interpreted:<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://1.bp.blogspot.com/-YstMV3kTFtc/Xqmcy6ijQ3I/AAAAAAAADdQ/qO5_vE863BUSNY7BD5rY29I0qF1zQC7YACEwYBhgL/s1600/2020-04-29%2BMoveFile%2BCatch.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="555" data-original-width="955" height="231" src="https://1.bp.blogspot.com/-YstMV3kTFtc/Xqmcy6ijQ3I/AAAAAAAADdQ/qO5_vE863BUSNY7BD5rY29I0qF1zQC7YACEwYBhgL/s400/2020-04-29%2BMoveFile%2BCatch.png" width="400" /></a></div>
<br />
<br />
Now we get a neat SOAP Fault, caught by a specific catch based on the fault in the WSDL of the partner link.<br />
<br />
Again, this works similarly for Mediator.<br />
<br />
<br />
<br />Anonymousnoreply@blogger.com2tag:blogger.com,1999:blog-4533777417600103698.post-86294024399213919872020-03-11T11:03:00.003+01:002020-03-11T11:12:04.554+01:00SOA Composite Sensors and the ORA-01461: can bind a LONG value only for insert into a LONG column exceptionLast year I wrote about <a href="https://blog.darwin-it.nl/2019/08/soasuite-composite-sensors-why-and-how.html">SOA Composite Sensors</a> and how they can be a good alternative for the BPEL Indexes in 10g. This week I was confronted with the "ORA-01461: can bind a LONG value only for insert into a LONG column" exception in one of our composites. It was about a process that is triggered to do some message archiving.<br />
<h3>
A bit about BPEL Sensors</h3>
Funny thing is that this archiving process is triggered by a BPEL sensor. To recap: you can create a BPEL Sensor by clicking the monitor icon in your BPEL process:<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://1.bp.blogspot.com/-mW2eWP3-QhI/XmiwM3lW0bI/AAAAAAAADZM/c2eR2sxXcXMrkhNyWdzncgCkC3Tr4uJ8QCEwYBhgLMPniovMF/s1600/BPELSensors.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="317" data-original-width="872" height="116" src="https://1.bp.blogspot.com/-mW2eWP3-QhI/XmiwM3lW0bI/AAAAAAAADZM/c2eR2sxXcXMrkhNyWdzncgCkC3Tr4uJ8QCEwYBhgLMPniovMF/s320/BPELSensors.png" width="320" /></a></div>
It's the heart-beat-monitor icon in the button area at the top right of the panel. The BPEL process is then shown in a layered mode: you can't edit the process any more, but you can add, remove and edit sensors. Sensors are indicated with little antenna icons on an activity. You can put them on any kind of activity, even Empty activities, which adds an extra potential reason to use an Empty activity.<br />
<br />
If you click an antenna icon you can define a series of sensors; editing one brings up the following dialog:<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://1.bp.blogspot.com/-tUV0cGp-azo/XmiwM81_6yI/AAAAAAAADZQ/rwIzd0jBJqg2wYs1iDD_MG3ZWPdUEOeswCNcBGAsYHQ/s1600/BPELSensors2.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="513" data-original-width="586" height="280" src="https://1.bp.blogspot.com/-tUV0cGp-azo/XmiwM81_6yI/AAAAAAAADZQ/rwIzd0jBJqg2wYs1iDD_MG3ZWPdUEOeswCNcBGAsYHQ/s320/BPELSensors2.png" width="320" /></a></div>
<br />
It allows you to add variables, possibly with expressions selecting elements within those variables, to a sensor. You can also add one or more sensor actions, and set the trigger moment (Evaluation Time) at which they fire.
<br />
<br />
A Sensor action can be set as:<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://1.bp.blogspot.com/-MLTwwFf8fAE/XmiwM8vcNcI/AAAAAAAADZI/A4wl9OhFJMw_uWNHcmfrDSXzb7ZqOfvcwCNcBGAsYHQ/s1600/BPELSensors3.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="243" data-original-width="336" height="231" src="https://1.bp.blogspot.com/-MLTwwFf8fAE/XmiwM8vcNcI/AAAAAAAADZI/A4wl9OhFJMw_uWNHcmfrDSXzb7ZqOfvcwCNcBGAsYHQ/s320/BPELSensors3.png" width="320" /></a></div>
<br />
In 11g we used the JMS Adapter, but apparently that didn't work the same way anymore in 12c, so we changed it to JMS Queues. As with composite sensors, in the BPEL folder, together with the BPEL process, you get two files: <i>YourBPELProcess_sensor.xml</i> containing the sensor definitions and <i>YourBPELProcess_sensorAction.xml</i> containing the sensor action definitions.<br />
<br />
When the sensor is activated, a JMS message is produced on the queue, with an XML payload following a predefined XSD. In that XML you will find info about the triggering BPEL instance, like its name and instance ID, and a list of variable data. Each of the variables defined in the sensor is in the list, in the order in which they are defined in the sensor.<br />
<br />
By the way, BPEL sensors have been part of the product since before 10g...<br />
<br />
<h3>
The actual error case</h3>
In our case this message archiving process was triggered from another BPEL process using a sensor. The archiving process listens to the queue as defined in the Sensor Action, picking up messages from certain sensors using a message selector based on the sensor name.<br />
<br />
On the JMS interface (Exposed Service) of the message archiving process, I defined a set of Composite Sensors, to be able to search for instances. This helps in finding the archiving instance that belongs to the triggering process: since sensors work asynchronously, the two instances are not tied together in a Flow Trace.<br />
<br />
In some cases, we got the following exception in the Diagnostic log:<br />
<pre class="brush:plain">[2020-03-11T09:19:50.855+01:00] [DWN_SOA_01] [WARNING] [] [oracle.soa.adapter.jms.inbound] [tid: DaemonWorkThread: '639' of WorkManager: 'default_Adapters'] [userId: myadmin] [ecid: c8e2b75e-7aed-4305-84c5-9ef5cf928c7b-0bb833b1,0:11:9] [APP: soa-infra] [partition-name: DOMAIN] [tenant-name: GLOBAL] [oracle.soa.tracking.FlowId: 463993] [oracle.soa.tracking.InstanceId: 762213] [oracle.soa.tracking.SCAEntityId: 381353] [oracle.soa.tracking.FaultId: 400440] [FlowId: 0000N38eGGo5aaC5rFK6yY1UNay100012j] [composite_name: MyComposite] [composite_version: 1.0] [endpoint_name: DWN_MyCompositeInterface_WS] JmsConsumer_runInbound: [destination = jms/DWN_OUTGOING, subscriber = null] : weblogic.transaction.RollbackException: Unexpected exception in beforeCompletion: sync=org.eclipse.persistence.transaction.JTASynchronizationListener@2d7a86a9[[
Internal Exception: java.sql.BatchUpdateException: ORA-01461: can bind a LONG value only for insert into a LONG column
Error Code: 1461 javax.resource.ResourceException: weblogic.transaction.RollbackException: Unexpected exception in beforeCompletion: sync=org.eclipse.persistence.transaction.JTASynchronizationListener@2d7a86a9
Internal Exception: java.sql.BatchUpdateException: ORA-01461: can bind a LONG value only for insert into a LONG column
Error Code: 1461
at oracle.tip.adapter.jms.inbound.JmsConsumer.afterDelivery(JmsConsumer.java:321)
at oracle.tip.adapter.jms.inbound.JmsConsumer.runInbound(JmsConsumer.java:982)
at oracle.tip.adapter.jms.inbound.JmsConsumer.run(JmsConsumer.java:893)
at oracle.integration.platform.blocks.executor.WorkManagerExecutor$1.run(WorkManagerExecutor.java:184)
at weblogic.work.j2ee.J2EEWorkManager$WorkWithListener.run(J2EEWorkManager.java:209)
at weblogic.invocation.ComponentInvocationContextManager._runAs(ComponentInvocationContextManager.java:352)
at weblogic.invocation.ComponentInvocationContextManager.runAs(ComponentInvocationContextManager.java:337)
at weblogic.work.LivePartitionUtility.doRunWorkUnderContext(LivePartitionUtility.java:57)
at weblogic.work.PartitionUtility.runWorkUnderContext(PartitionUtility.java:41)
at weblogic.work.SelfTuningWorkManagerImpl.runWorkUnderContext(SelfTuningWorkManagerImpl.java:644)
at weblogic.work.SelfTuningWorkManagerImpl.runWorkUnderContext(SelfTuningWorkManagerImpl.java:622)
at weblogic.work.DaemonWorkThread.run(DaemonWorkThread.java:39)
Caused by: javax.resource.ResourceException: weblogic.transaction.RollbackException: Unexpected exception in beforeCompletion: sync=org.eclipse.persistence.transaction.JTASynchronizationListener@2d7a86a9
Internal Exception: java.sql.BatchUpdateException: ORA-01461: can bind a LONG value only for insert into a LONG column
Error Code: 1461
at oracle.tip.adapter.fw.jca.messageinflow.MessageEndpointImpl.afterDelivery(MessageEndpointImpl.java:379)
at oracle.tip.adapter.jms.inbound.JmsConsumer.afterDelivery(JmsConsumer.java:306)
... 11 more
Caused by: weblogic.transaction.RollbackException: Unexpected exception in beforeCompletion: sync=org.eclipse.persistence.transaction.JTASynchronizationListener@2d7a86a9
Internal Exception: java.sql.BatchUpdateException: ORA-01461: can bind a LONG value only for insert into a LONG column
...
</pre>
<br />
Of course the process instance failed. It took me some time to figure out what went wrong. It was suggested that it was due to the composite sensors, but I waved that away initially, since I had introduced them earlier (although a colleague had removed them for no apparent reason and I re-introduced them). I couldn't see that these were the problem, because the composite ran through the unit tests and in most cases the sensors weren't a problem.
<br />
<br />
But the error indicates a triggered interface, <i>[endpoint_name: DWN_MyCompositeInterface_WS]</i>, and in this case a destination: <i>[destination = jms/DWN_OUTGOING, subscriber = null]</i>.<br />
<br />
Since the process is triggered from the queue with messages from <i>BPEL Sensors</i>, these <i>Composite Sensors</i> were defined on variableData elements from the BPEL sensor XML. And as said above, the variables appear in that XML in the order in which they're defined in the BPEL sensor.<br />
<br />
One of the Composite Sensors was defined as:<br />
<pre class="brush:xml"><sensor sensorName="UitgaandBerichtNummer" kind="service" target="undefined" filter="" xmlns:imp1="http://xmlns.oracle.com/bpel/sensor">
<serviceConfig service="DWN_MessageArchivingBeginExchange_WS" expression="$in.actionData/imp1:actionData/imp1:payload/imp1:variableData/imp1:data" operation="ArchiverenBeginUitwisseling" outputDataType="string" outputNamespace="http://www.w3.org/2001/XMLSchema"/>
</sensor>
</pre>
<br />
With the expression: <i>$in.actionData/imp1:actionData/imp1:payload/imp1:variableData/imp1:data</i>.<br />
Because it is a list, there can be more than one <i>variableData</i> occurrence, and without an index the expression selects all of them. If, for instance, one of them contains the actual message to archive, and that message is quite large, then the resulting value becomes too large. And that results in the error above.<br />
<br />
All I had to do was select the proper occurrence of the message id, as shown in the Sensor dialog above. The expression had to be: <i>$in.actionData/imp1:actionData/imp1:payload/imp1:variableData[2]/imp1:data</i><br />
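The effect of the missing index is easy to demonstrate with Python's ElementTree. This is illustration only: the payload below is a hypothetical, simplified sensor XML, and SOA Suite evaluates the expression with its own XPath engine; the sketch only shows the node-selection semantics of a positional predicate:

```python
import xml.etree.ElementTree as ET

# Hypothetical, simplified sensor payload with two variableData entries:
# first the (large) archived message, second the message id we want.
xml = """<payload>
  <variableData><data>...very large archived message...</data></variableData>
  <variableData><data>MSG-4711</data></variableData>
</payload>"""
root = ET.fromstring(xml)

# Without an index: every variableData/data node matches the path.
all_data = [d.text for d in root.findall('variableData/data')]
print(len(all_data))                          # 2

# With a positional predicate, only the second occurrence is selected.
msg_id = root.find('variableData[2]/data').text
print(msg_id)                                 # MSG-4711
```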
<br />
<h3>
Conclusion</h3>
This solved the error. I wanted to log this for future reference, but also to show how to track down this seemingly obscure error.<br />
<br />Anonymousnoreply@blogger.com0tag:blogger.com,1999:blog-4533777417600103698.post-20520547528730476622020-02-28T13:30:00.001+01:002020-02-28T14:45:11.442+01:00Vagrant box with Oracle Linux 77 basebox - additional fiddlingsLast year on the way home from the UK OUG TechFest 19, I wrote about creating a Vagrant box from the Oracle provided basebox in <a href="https://blog.darwin-it.nl/2019/12/create-vagrant-box-with-oracle-linux-7.html">this article</a>.<br />
<br />
Lately I wanted to use it but I stumbled upon some nasty pitfalls.<br />
<br />
<h4>
Failed to load SELinux policy</h4>
For starters, as described in that article, I added the 'Server with GUI' package group and packaged the box in a new base box. This is handy, because the creation of the GUI box is quite time-consuming and requires an intermediate restart. But when I used the new Server-with-GUI base box, the new VM failed to start with the message "Systemd: Failed to load SELinux policy. Freezing.".<br />
<br />
I could solve this using support document <a href="https://support.oracle.com/epmos/faces/DocumentDisplay?id=2314747.1">2314747.1</a>. I still have to add it to my provisioning scripts, but it comes down to this: before packaging the box, edit the file <i>/etc/selinux/config</i>:<br />
<pre class="brush:plain"># This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
# enforcing - SELinux security policy is enforced.
# permissive - SELinux prints warnings instead of enforcing.
# disabled - No SELinux policy is loaded.
SELINUX=permissive
# SELINUXTYPE= can take one of three values:
# targeted - Targeted processes are protected,
# minimum - Modification of targeted policy. Only selected processes are protected.
# mls - Multi Level Security protection.
SELINUXTYPE=targeted</pre>
<br />
The option <i>SELINUX</i> turned out to be set to <i>enforcing</i>.<br />
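For the provisioning scripts this edit is easy to automate. A hedged Python sketch of the transformation; the function only rewrites the text, applying it to <i>/etc/selinux/config</i> and re-packaging is up to you:

```python
import re

def set_selinux_mode(config_text, mode='permissive'):
    """Rewrite the active SELINUX= line; comment lines and the
    SELINUXTYPE= line are left untouched."""
    return re.sub(r'^SELINUX=\w+', 'SELINUX=' + mode,
                  config_text, flags=re.MULTILINE)

config = ("# disabled - No SELinux policy is loaded.\n"
          "SELINUX=enforcing\n"
          "SELINUXTYPE=targeted\n")
print(set_selinux_mode(config))
```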
<br />
<i></i>
<br />
<h3>
Vagrant unsecure keypair</h3>
When you first start your VM, you'll probably see messages like:<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://1.bp.blogspot.com/-MQ16DdzTay0/XlkCOB26DqI/AAAAAAAADVY/VQtEVuvWT-MYbqdjFK2u_pmuCIAcC7XqwCNcBGAsYHQ/s1600/2020-02-28%2BVagrant%2BInsecure%2BKeypair.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="159" data-original-width="634" height="100" src="https://1.bp.blogspot.com/-MQ16DdzTay0/XlkCOB26DqI/AAAAAAAADVY/VQtEVuvWT-MYbqdjFK2u_pmuCIAcC7XqwCNcBGAsYHQ/s400/2020-02-28%2BVagrant%2BInsecure%2BKeypair.png" width="400" /></a></div>
The working of this is described in the Vagrant documentation about creating a base box under the chapter <a href="https://www.vagrantup.com/docs/boxes/base.html">"vagrant" User</a>. I think when I started with Vagrant, I did not fully grasp this part. Maybe the documentation changed. Basically you need to download the <a href="https://github.com/hashicorp/vagrant/tree/master/keys">Vagrant insecure keypair</a> from GitHub. Then in the VM, you'll need to update the file <i>authorized_keys</i> in the <i>.ssh</i> folder of the <i>vagrant</i> user:<br />
<pre class="brush:bash">[vagrant@localhost ~]$ cd .ssh/
[vagrant@localhost .ssh]$ ls
authorized_keys
[vagrant@localhost .ssh]$ pwd
/home/vagrant/.ssh
[vagrant@localhost .ssh]$
</pre>
<br />
The contents look like:
<br />
<pre class="brush:plain">ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDGn8m1kC2mHfPx0dno+HNNYfhgXUZHn8Rt7orIm2Hlc7g4JkvCN6bO7mrYhUbdN2qjy2TziPdlndTAI0E1HK2GbwRM8+N02CNzBg5zvJosMQhweU7EXsDZjYRNJ/SAgVlU5EqIPzmznFjp08uzvBAe2u+L4dZ9kIZ23z/GVWupNpTJmem6LsqS3xg/h0qKf2LFv55SqtLVLlC1sAxL4fvBi3fFIsR9+NLf0fxb+tV/xrprn3yYXT1GyRPVtYAbiOzE3gUOWLKQZVkCXN8R69JeY8P5YgPGx9gSLCiNyLLmqCdF4oLIBMg82lZ0a3/BXG7AoAHVxh7caOoWJrFAjVK9 vagrant
</pre>
<br />
This is a generated public key that matches the newly generated private key, stored in this file in my .vagrant folder:
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://2.bp.blogspot.com/-GVQYCHVtw58/XlkEN8jwkxI/AAAAAAAADVk/ZoA2HcsWbu8xDV5chKf5f7TREa7pWaAZwCNcBGAsYHQ/s1600/2020-02-28%2BVagrant%2Bprivate%2Bkey.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="219" data-original-width="946" height="92" src="https://2.bp.blogspot.com/-GVQYCHVtw58/XlkEN8jwkxI/AAAAAAAADVk/ZoA2HcsWbu8xDV5chKf5f7TREa7pWaAZwCNcBGAsYHQ/s400/2020-02-28%2BVagrant%2Bprivate%2Bkey.png" width="400" /></a></div>
As shown, it is the <i>private_key</i> file in the <i>.vagrant\machines\darwin\virtualbox\</i> folder.<br />
If you update the <i>authorized_keys</i> file of the vagrant user with the public key of the Vagrant insecure keypair, you also need to remove the <i>private_key</i> file. On the next start, Vagrant will notice the insecure key and replace it with a newly generated private one. By the way, I noticed that sometimes Vagrant won't remove the insecure public key. That means that someone could log in to your box using the insecure keypair. You might not want that, so remove that public key from the file.<br />
For convenience, the insecure public key is:
<br />
<pre class="brush:plain">ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA6NF8iallvQVp22WDkTkyrtvp9eWW6A8YVr+kz4TjGYe7gHzIw+niNltGEFHzD8+v1I2YJ6oXevct1YeS0o9HZyN1Q9qgCgzUFtdOKLv6IedplqoPkcmF0aYet2PkEDo3MlTBckFXPITAMzF8dJSIFo9D8HfdOV0IAdx4O7PtixWKn5y2hMNG0zQPyUecp4pzC6kivAIhyfHilFR61RGL+GPXQ2MWZWFYbAGjyiYJnAmCP3NOTd0jMZEnDkbUvxhMmBYSdETk1rRgm+R4LOzFUGaHqHDLKLX+FIPKcF96hrucXzcWyLbIbEgE98OHlnVYCzRdK8jlqm8tehUc9c9WhQ== vagrant
</pre>
<br />
It's this file on GitHub:<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://1.bp.blogspot.com/-f0ZQM3Bh-go/XlkFznOugQI/AAAAAAAADVw/tKBr5I2e1F0-wZmKYRxU5SOwy6EKP9c5gCNcBGAsYHQ/s1600/2020-02-28%2BVagrant%2Binsecure%2Bkeypair%2Bgithub.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="359" data-original-width="1025" height="112" src="https://1.bp.blogspot.com/-f0ZQM3Bh-go/XlkFznOugQI/AAAAAAAADVw/tKBr5I2e1F0-wZmKYRxU5SOwy6EKP9c5gCNcBGAsYHQ/s320/2020-02-28%2BVagrant%2Binsecure%2Bkeypair%2Bgithub.png" width="320" /></a></div>
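Scripting that reset of <i>authorized_keys</i> could look like the sketch below. The function and the scratch-directory demo are my own; it uses the insecure public key quoted above, and on the real box you would point it at <i>/home/vagrant/.ssh</i>:<br />

```shell
# Sketch: reset the vagrant user's authorized_keys to the Vagrant insecure public key,
# so that the next 'vagrant up' can log in and swap in a freshly generated keypair.
INSECURE_PUB='ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA6NF8iallvQVp22WDkTkyrtvp9eWW6A8YVr+kz4TjGYe7gHzIw+niNltGEFHzD8+v1I2YJ6oXevct1YeS0o9HZyN1Q9qgCgzUFtdOKLv6IedplqoPkcmF0aYet2PkEDo3MlTBckFXPITAMzF8dJSIFo9D8HfdOV0IAdx4O7PtixWKn5y2hMNG0zQPyUecp4pzC6kivAIhyfHilFR61RGL+GPXQ2MWZWFYbAGjyiYJnAmCP3NOTd0jMZEnDkbUvxhMmBYSdETk1rRgm+R4LOzFUGaHqHDLKLX+FIPKcF96hrucXzcWyLbIbEgE98OHlnVYCzRdK8jlqm8tehUc9c9WhQ== vagrant'
reset_authorized_keys() {
  local sshdir="$1"
  mkdir -p "$sshdir" && chmod 700 "$sshdir"
  # Overwrite rather than append, so no other key lingers in the file
  printf '%s\n' "$INSECURE_PUB" > "$sshdir/authorized_keys"
  chmod 600 "$sshdir/authorized_keys"
}

# Demo against a scratch directory instead of the real /home/vagrant/.ssh:
demo=$(mktemp -d)
reset_authorized_keys "$demo/.ssh"
head -c 7 "$demo/.ssh/authorized_keys"   # prints: ssh-rsa
```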
<br />
<h3>
Oracle user</h3>
For my installations I always use an Oracle user. And it is quite safe to say I always use the password 'welcome1', for demo and training boxes that is (fieeewww).<br />
<br />
But I found out that I could not log on to that user using ssh with a simple password.<br />
That is because in the Oracle Vagrant base box password authentication is disabled. To solve it, edit the file <i>/etc/ssh/sshd_config</i> and find the option <i>PasswordAuthentication</i>:<br />
<pre class="brush:plain">...
# To disable tunneled clear text passwords, change to no here!
PasswordAuthentication yes
#PermitEmptyPasswords no
#PasswordAuthentication no
...
</pre>
<br />
Comment the line with value <i>no</i> and uncomment the one with <i>yes</i>.
<br />
<br />
You can add this to your script to enable it:
<pre class="brush:plain">echo 'Allow PasswordAuthentication'
sudo cp /etc/ssh/sshd_config /etc/ssh/sshd_config.org
sudo sed -i 's/PasswordAuthentication no/#PasswordAuthentication no/g' /etc/ssh/sshd_config
sudo sed -i 's/#PasswordAuthentication yes/PasswordAuthentication yes/g' /etc/ssh/sshd_config
sudo service sshd restart
</pre>
<br/>You need to restart sshd, as shown in the last line, for this to take effect.
<h3>
Conclusion</h3>
I'll need to add the changes above to my Vagrant scripts, at least the one creating the box based on the one from Oracle. And now I need to look into the file systems created in the Oracle box, to be able to extend them with mine... But that might be input for another story.<br />
<br />Anonymousnoreply@blogger.com0tag:blogger.com,1999:blog-4533777417600103698.post-55912557334608919582020-02-27T12:23:00.001+01:002020-02-27T12:23:16.017+01:00My first node program: get all the named complexTypes from an xsd file<div class="separator" style="clear: both; text-align: center;">
<a href="https://1.bp.blogspot.com/-mDImqusMqFo/XlejTLEfL8I/AAAAAAAADVM/2xnx5ot5Q-w3MT7eeg45ie567-c7KiRUgCNcBGAsYHQ/s1600/nodejs-image.png" imageanchor="1" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img alt="Node JS Logo" border="0" data-original-height="900" data-original-width="900" height="200" src="https://1.bp.blogspot.com/-mDImqusMqFo/XlejTLEfL8I/AAAAAAAADVM/2xnx5ot5Q-w3MT7eeg45ie567-c7KiRUgCNcBGAsYHQ/s200/nodejs-image.png" title="Node JS Logo" width="200" /></a></div>
Lately I'm working on some scripting for scanning SOA Projects for several queries, some more in line with my <a href="https://blog.darwin-it.nl/2018/11/using-ant-to-investigate-jca-adapters.html">script to scan JCA files</a>. I found that ANT is very helpful in selecting the particular files to process. Also, in another script I found it very useful to use JavaScript within ANT.<br />
<br />
In my JCA scan example, and my other scripts, at some points I need to read and interpret the found xml document to get the information from it in ANT and save it to a file. For that I used XSL to transform the particular document to be able to address the particular elements as properties in ANT.<br />
<br />
In my latest fiddlings I need to gather all the references to elements from a large base xsd in XSDs, WSDLs, BPELs, XSLTs and composite.xml. I quickly found that transforming a wsdl or xsd using XSLT is hard, if not nearly impossible. For instance, I needed to get all the type attributes referencing an element or type within the target namespace of the referenced base xsd. And although mostly the same namespace prefix is used, I can't rely on that. So in the end I used a few JavaScript functions to parse the document as a string.<br />
<br />
Now, at this point I wanted to get all the named xsd:complexTypes, and I found it fun to try that in a Node.js script. You might be surprised, but I haven't done this before, although I did some JavaScript once in a while. I might have done some demo Node.js try-outs, but don't count those.<br />
<br />
So I came up with this script:<br />
<pre class="brush:jscript">const fs = require('fs');
var myArgs = process.argv.slice(2);
const xsdFile = myArgs[0];
const complexTypeFile = myArgs[1];
//
const complexTypeStartTag = "<xsd:complexType";
// Log arguments
console.log('myArgs: ', myArgs);
console.log('xsd: ', xsdFile);
//
// Extract an attribute value from an element
function getAttributeValue(element, attributeName) {
  var attribute = "";
  var attributePos = element.indexOf(attributeName);
  if (attributePos > -1) {
    attribute = element.substring(attributePos);
    attributePos = attribute.indexOf("=") + 1;
    attribute = attribute.substring(attributePos).trim();
    var enclosingChar = attribute.substring(0, 1);
    attribute = attribute.substring(1, attribute.indexOf(enclosingChar, 1));
  }
  return attribute;
}
// Create complexType Output file.
fs.writeFile(complexTypeFile, 'ComplexType\n', function(err) {
  if (err) throw err;
});
// Read and process the xsdFile
fs.readFile(xsdFile, 'utf8', function(err, contents) {
  //console.log(contents);
  var posStartComplexType = contents.indexOf(complexTypeStartTag);
  while (posStartComplexType > -1) {
    // Extract the complexType declaration
    var posEndComplexType = contents.indexOf(">", posStartComplexType);
    console.log("Pos: ".concat(posStartComplexType, "-", posEndComplexType));
    var complexType = contents.substring(posStartComplexType, posEndComplexType + 1);
    // Log the complexType
    console.log("Complex type: [".concat(complexType, "]"));
    var typeName = getAttributeValue(complexType, "name");
    if (typeName == "") {
      typeName = "embedded";
    }
    console.log(typeName);
    fs.appendFileSync(complexTypeFile, typeName.concat("\n"));
    // Move on to find the next possible complexType
    contents = contents.substring(posEndComplexType + 1);
    posStartComplexType = contents.indexOf(complexTypeStartTag);
  }
});
console.log('Done with ' + xsdFile);</pre>
<br />
It parses the arguments: it expects as the first argument a reference to the XSD file to parse, and as the second the filename to write all the names to.
<br />
<br />
The function <i>getAttributeValue()</i> finds an attribute from the provided <i>element</i>, based on the <i>attributeName </i>and returns its value if found. Otherwise it will return an empty string.<br />
<br />
The main script first writes a header row to the output csv file. Then it reads the xsd file asynchronously (therefore the done message will be shown before the console logs from the processing of the file), and finds every occurrence of <i>xsd:complexType </i>in the contents. For each occurrence, it locates the end of the start tag declaration and extracts the name attribute from it. This name attribute is then appended (synchronously) to the csv file.<br />
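As a side note: for simple cases the same name extraction can be sketched in plain shell too. This grep/sed variant is my own and assumes the name attribute sits on the same line as the start tag, enclosed in double quotes:<br />

```shell
# Sketch: list the name attributes of all xsd:complexType declarations in an XSD.
# Assumes the name attribute is on the same line as the start tag, in double quotes.
list_complex_types() {
  grep -o '<xsd:complexType[^>]*name="[^"]*"' "$1" \
    | sed 's/.*name="\([^"]*\)".*/\1/'
}

# Demo with a tiny inline XSD:
xsd=$(mktemp)
cat > "$xsd" <<'EOF'
<xsd:schema xmlns:xsd="http://www.w3.org/2001/XMLSchema">
  <xsd:complexType name="PersonType"/>
  <xsd:complexType name="AddressType"/>
</xsd:schema>
EOF
list_complex_types "$xsd"
# prints:
# PersonType
# AddressType
```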
<br />
How to read a file I found <a href="https://code-maven.com/reading-a-file-with-nodejs">here</a>; how to append to a file <a href="https://stackoverflow.com/questions/3459476/how-to-append-to-a-file-in-node">on stackoverflow</a>.<br />
<br />Anonymousnoreply@blogger.com0tag:blogger.com,1999:blog-4533777417600103698.post-34884456361117911842020-02-25T15:35:00.002+01:002020-02-25T15:39:19.267+01:00Get XML Document from SOA Infra tableToday I'm investigating a problem in an interaction between Siebel and SOASuite. I needed to find a set of correlated messages, where BPEL expects only one message but gets 2 from Siebel.<br />
<br />
I have a query like:
<br />
<pre class="brush:sql">SELECT
  dmr.message_guid,
  dmr.document_id,
  dmr.part_name,
  dmr.document_type,
  dmr.dlv_partition_date,
  xdc.document_type,
  xdc.document,
  GET_XML_DOCUMENT(xdc.document,to_clob(' ')) doc_payload,
  xdc.document_binary_format,
  dmg.conv_id,
  dmg.conv_type,
  dmg.properties msg_properties
FROM document_dlv_msg_ref dmr
join xml_document xdc on xdc.document_id = dmr.document_id
join dlv_message dmg on dmg.message_guid = dmr.message_guid
where dmg.cikey in (select cikey from cube_instance where flow_id = 4537505 or flow_id = 4537504);
</pre>
<br />
This gets all the messages related to two flows that run in parallel, based on the same message exchange.
<br />
The thing is that of course you want to see the contents of the message in the xml_document. This attribute is a BLOB that contains the parsed document from oracle xml classes. You need the oracle classes to serialize it to a String representation of the document. I found this nice <a href="https://blog.heyn.me/post/118494390965/how-to-convert-xmldocumentdocument-to-clob-using">solution from Michael Heyn</a>.<br />
<br />
In 12c this did not work right away. First I had to rename the class to SOAXMLDocument, because I got a Java compilation error complaining that XMLDocument was already in use. I think it conflicts with the imported <i>oracle.xml.parser.v2.XMLDocument</i> class. Renaming it was the simple solution.<br />
<br />
<br />
<pre class="brush:java">set define off;
CREATE OR REPLACE AND COMPILE JAVA SOURCE NAMED "SOAXMLDocument" as
// Title: Oracle Java Class to Decode XML_DOCUMENT.DOCUMENT Content
// Author: Michael Heyn, Martien van den Akker
// Created: 2015 05 08
// Twitter: @TheHeynComplex
// History:
// 2020-02-25: Added GZIP Unzip and renamed class to SOAXMLDocument
// Import all required classes
import oracle.xml.parser.v2.XMLDOMImplementation;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import oracle.xml.binxml.BinXMLStream;
import oracle.xml.binxml.BinXMLDecoder;
import oracle.xml.binxml.BinXMLException;
import oracle.xml.binxml.BinXMLProcessor;
import oracle.xml.scalable.InfosetReader;
import oracle.xml.parser.v2.XMLDocument;
import oracle.xml.binxml.BinXMLProcessorFactory;
import java.util.zip.GZIPInputStream;
// Import required sql classes
import java.sql.Blob;
import java.sql.Clob;
import java.sql.SQLException;
public class SOAXMLDocument {
  public static Clob GetDocument(Blob docBlob, Clob tempClob) {
    XMLDOMImplementation xmlDom = new XMLDOMImplementation();
    BinXMLProcessor xmlProc = BinXMLProcessorFactory.createProcessor();
    ByteArrayOutputStream byteStream;
    String xml;
    try {
      // Create a GZIP InputStream from the Blob Object
      GZIPInputStream gzipInputStream = new GZIPInputStream(docBlob.getBinaryStream());
      // Create the Binary XML Stream from the GZIP InputStream
      BinXMLStream xmlStream = xmlProc.createBinXMLStream(gzipInputStream);
      // Decode the Binary XML Stream
      BinXMLDecoder xmlDecode = xmlStream.getDecoder();
      InfosetReader xmlReader = xmlDecode.getReader();
      XMLDocument xmlDoc = (XMLDocument) xmlDom.createDocument(xmlReader);
      // Instantiate a Byte Stream Object
      byteStream = new ByteArrayOutputStream();
      // Load the Byte Stream Object
      xmlDoc.print(byteStream);
      // Get the string value of the Byte Stream Object as UTF8
      xml = byteStream.toString("UTF8");
      // Empty the temporary SQL Clob Object
      tempClob.truncate(0);
      // Load the temporary SQL Clob Object with the xml String
      tempClob.setString(1, xml);
      return tempClob;
    } catch (BinXMLException ex) {
      return null;
    } catch (IOException e) {
      return null;
    } catch (SQLException se) {
      return null;
    } catch (Exception e) {
      return null;
    }
  }
}
/</pre>
<br />Also, I needed to execute <i>set define off</i> before it.
Another thing is that in SOA Suite 12c the documents are apparently stored as GZIP objects. Therefore I had to put the binary stream from the <i>docBlob</i> parameter into a <i>GZIPInputStream</i>, and feed that to <i>xmlProc.createBinXMLStream()</i>.
<br />
<br />
Then create the following Function wrapper:<br />
<pre class="brush:sql">CREATE OR REPLACE FUNCTION GET_XML_DOCUMENT(p_blob BLOB
,p_clob CLOB)
RETURN CLOB AS LANGUAGE JAVA
NAME 'SOAXMLDocument.GetDocument(java.sql.Blob, java.sql.Clob) return java.sql.Clob';
</pre>
<br />
You can use it in a query as:
<br />
<pre class="brush:sql">select * from (
select xdc2.*, GET_XML_DOCUMENT(xdc2.document,to_clob(' ')) doc_PAYLOAD
from
(select *
from xml_document xdc
where xdc.doc_partition_date > to_date('25-02-20 09:10:00', 'DD-MM-YY HH24:MI:SS') and xdc.doc_partition_date < to_date('25-02-20 09:20:00', 'DD-MM-YY HH24:MI:SS')
) xdc2
) xdc3
where xdc3.doc_payload like '%16720284%' or xdc3.doc_payload like '%9F630D36DD24214EE053082D260AB792%'
</pre>
<br />
In this example I do a scan over documents in a certain period, where I filter on the contents of the blob. Notice that the database needs to decode the blob of every row to be able to filter on it; you should not do this over the complete table.
Anonymousnoreply@blogger.com0tag:blogger.com,1999:blog-4533777417600103698.post-82262367668996214502020-02-21T14:35:00.000+01:002020-02-21T14:35:54.362+01:00My Weblogic on Kubernetes Cheatsheet, part 3.<div class="separator" style="clear: both; text-align: center;">
</div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://1.bp.blogspot.com/-eYVq54UmRsA/Xk_cUFWIwRI/AAAAAAAADUg/3wlXuIWHQJIxPdoSL3rThefPfNeks8TOQCNcBGAsYHQ/s1600/OracleKubernetes.png" imageanchor="1" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img alt="Oracle Kubernetes" border="0" data-original-height="413" data-original-width="617" height="213" src="https://1.bp.blogspot.com/-eYVq54UmRsA/Xk_cUFWIwRI/AAAAAAAADUg/3wlXuIWHQJIxPdoSL3rThefPfNeks8TOQCNcBGAsYHQ/s320/OracleKubernetes.png" title="Oracle Kubernetes" width="320" /></a></div>
<br />
In two previous parts I already wrote about my Kubernetes experiences and the important commands I learned:<br />
<ul>
<li><a href="https://blog.darwin-it.nl/2020/01/my-weblogic-on-kubernetes-cheatsheet.html">My Weblogic on Kubernetes Cheatsheet, part 1</a></li>
<li><a href="https://blog.darwin-it.nl/2020/01/my-weblogic-on-kubernetes-cheatsheet_24.html">My Weblogic on Kubernetes Cheatsheet, part 2.</a> </li>
</ul>
My way of learning and working is to put those commands in little scriptlets, one more useful than the other, but all with the goal of keeping them together. <br />
<br />
It is time to write part 3, in which I will present some maintenance functions, mainly to connect with your pods.<br />
<h3>
Get node and pod info</h3>
<h4>
getdmnpod-status.sh
</h4>
In part 2 I ended with the script <i>getdmnpods.sh</i>. You can parse the output using awk to get just the status of the pods:<br />
<br />
<pre class="brush:bash">#!/bin/bash
SCRIPTPATH=$(dirname $0)
#
. $SCRIPTPATH/oke_env.sh
echo Get K8s pod statuses for $WLS_DMN_NS
kubectl -n $WLS_DMN_NS get pods -o wide| awk '{print $1 " - " $3}'
</pre>
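The awk part can be checked without a live cluster by feeding it a canned <i>kubectl get pods -o wide</i> output; the sample pod listing below is made up:<br />

```shell
# Sketch: the same awk column selection, applied to a canned kubectl output,
# so the filter can be verified without a running cluster. Pod names are made up.
sample='NAME                           READY   STATUS    RESTARTS   AGE
medrec-domain-adminserver      1/1     Running   0          5d
medrec-domain-medrec-server1   1/1     Running   0          5d'
printf '%s\n' "$sample" | awk '{print $1 " - " $3}'
# prints:
# NAME - STATUS
# medrec-domain-adminserver - Running
# medrec-domain-medrec-server1 - Running
```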
<br />
<h4>
getpods.sh</h4>
With <i>getdmnpods.sh</i> you can get the status of the pods running your domain. There's also a weblogic operator pod. To show this, use:<br />
<pre class="brush:bash">#!/bin/bash
SCRIPTPATH=$(dirname $0)
#
. $SCRIPTPATH/oke_env.sh
echo Get K8s pods for $K8S_NS
kubectl get po -n $K8S_NS
</pre>
<br />
<h4>
getstmpods.sh</h4>
Then also the kubernetes cluster infrastructure consist of a set of pods. Show these using:<br />
<pre class="brush:bash">#!/bin/bash
SCRIPTPATH=$(dirname $0)
#
. $SCRIPTPATH/oke_env.sh
echo Get K8s pods for kube-system
kubectl -n kube-system get pods
</pre>
<br />
<br />
<h4>
getnodes.sh</h4>
<br />
On OCI your cluster is running on a set of nodes. These OCI Instances are actually running your system. You can show those, with their IP's and Kubernetes versions using:
<br />
<pre class="brush:bash">#!/bin/bash
SCRIPTPATH=$(dirname $0)
#
. $SCRIPTPATH/oke_env.sh
echo Get K8s nodes
kubectl get node
</pre>
<h4>
getdmnsitlogs.sh</h4>
<br />
Of course you want to see some logs, especially when something went wrong. Perhaps you want to see some specific loggings. For instance, this script shows the logs of the admin pod, grepping for entries related to the situational config:
<br />
<pre class="brush:bash">#!/bin/bash
SCRIPTPATH=$(dirname $0)
#
. $SCRIPTPATH/oke_env.sh
echo Get situational config logs for $WLS_DMN_NS server $ADM_POD
kubectl -n $WLS_DMN_NS logs $ADM_POD | grep -i situational
</pre>
<br />
<h3>
Weblogic Operator</h3>
When I was busy with getting the MedRec Sample application deployed to Kubernetes, at one point I got stuck because, as I later learned, my Weblogic Operator's version was behind. <br />
<h4>
list_wlop.sh </h4>
I learned I could get Weblogic Operator information as follows:<br />
<br />
<pre class="brush:bash">#!/bin/bash
SCRIPTPATH=$(dirname $0)
#
. $SCRIPTPATH/oke_env.sh
echo List Weblogic Operator $WL_OPERATOR_NAME
cd $HELM_CHARTS_HOME
helm list $WL_OPERATOR_NAME
cd $SCRIPTPATH
</pre>
<br />
<h4>
delete_weblogic_operator.sh </h4>
When you find that the operator needs an update, you can remove it with this script:<br />
<br />
<pre class="brush:bash">#!/bin/bash
SCRIPTPATH=$(dirname $0)
#
. $SCRIPTPATH/oke_env.sh
echo Delete Weblogic Operator $WL_OPERATOR_NAME
cd $HELM_CHARTS_HOME
helm del --purge $WL_OPERATOR_NAME
cd $SCRIPTPATH
</pre>
<br />
<h4>
install_weblogic_operator.sh
</h4>
<br />
Then of course, you want to install it with the proper function. This can be done using:
<br />
<pre class="brush:bash">#!/bin/bash
SCRIPTPATH=$(dirname $0)
#
. $SCRIPTPATH/oke_env.sh
echo Install Weblogic Operator $WL_OPERATOR_NAME
cd $HELM_CHARTS_HOME
helm install kubernetes/charts/weblogic-operator \
--name $WL_OPERATOR_NAME \
--namespace $K8S_NS \
--set image=oracle/weblogic-kubernetes-operator:2.3.0 \
--set serviceAccount=$K8S_SA \
--set "domainNamespaces={}"
cd $SCRIPTPATH
</pre>
<br />
Take note of the image named in this script. Make sure that it matches the image with the latest-greatest operator version. In this script I apparently still use 2.3.0, but as of <a href="https://oracle.github.io/weblogic-kubernetes-operator/">November 15th, 2019, 2.4.0 has been released</a>.
<br />
<h4>
upgrade_weblogic_operator.sh</h4>
Besides an install and delete chart, there is also an operator upgrade Helm chart:
<br />
<pre class="brush:bash">#!/bin/bash
SCRIPTPATH=$(dirname $0)
#
. $SCRIPTPATH/oke_env.sh
echo Upgrade Weblogic Operator $WL_OPERATOR_NAME with domainNamespace $WLS_DMN_NS
cd $HELM_CHARTS_HOME
helm upgrade \
--reuse-values \
--set "domainNamespaces={$WLS_DMN_NS}" \
--wait \
$WL_OPERATOR_NAME \
kubernetes/charts/weblogic-operator
cd $SCRIPTPATH</pre>
<br />
<h3>
Connect to the pods</h3>
The containers in the pods are running Linux (I know this is a quite blunt statement). So you might want to be able to connect to them. In case of Weblogic, you might want to be able to run <i>wlst.sh</i> to navigate to the MBean tree to investigate certain settings and find out why certain settings won't work in runtime. <br />
<h4>
admbash.sh and mr1bash.sh</h4>
To get to the console of the container you can run for the AdminServer the script <i>admbash.sh</i>:<br />
<pre class="brush:bash">#!/bin/bash
SCRIPTPATH=$(dirname $0)
#
. $SCRIPTPATH/oke_env.sh
echo Start bash in $WLS_DMN_NS - $ADM_POD
kubectl exec -n $WLS_DMN_NS -it $ADM_POD /bin/bash
</pre>
<br />
And for one of the managed servers a variant of <i>mr1bash.sh</i>:
<br />
<pre class="brush:bash">#!/bin/bash
SCRIPTPATH=$(dirname $0)
#
. $SCRIPTPATH/oke_env.sh
echo Get K8s pods for $WLS_DMN_NS
kubectl -n $WLS_DMN_NS get pods -o wide
kubectl exec -n medrec-domain-ns -it medrec-domain-medrec-server1 /bin/bash
</pre>
<br />
On the commandline you can then run <i>wlst.sh</i> and connect to your AdminServer.
<br />
<h4>
dwnldAdmLogs.sh and dwnldMr1Logs.sh</h4>
<br />
The previous scripts can help you navigate through your container and find the contents. However, you'll find that the containers lack certain basic bash commands like <i>vi</i>. The <i>cat</i> command does exist, but it is not very convenient for investigating large log files. So, very soon I felt the desire to download the log files and investigate them with a proper editor. You can do it for the admin server using <i>dwnldAdmLogs.sh</i>:
<br />
<pre class="brush:bash">#!/bin/bash
SCRIPTPATH=$(dirname $0)
#
. $SCRIPTPATH/oke_env.sh
#
LOG_FILE=$ADM_SVR.log
OUT_FILE=$ADM_SVR.out
#
echo From $WLS_DMN_NS/$ADM_POD download $DMN_HOME/servers/$ADM_SVR/logs/$LOG_FILE to $LCL_LOGS_HOME/$LOG_FILE
kubectl cp $WLS_DMN_NS/$ADM_POD:$DMN_HOME/servers/$ADM_SVR/logs/$LOG_FILE $LCL_LOGS_HOME/$LOG_FILE
echo From $WLS_DMN_NS/$ADM_POD download $DMN_HOME/servers/$ADM_SVR/logs/$OUT_FILE to $LCL_LOGS_HOME/$OUT_FILE
kubectl cp $WLS_DMN_NS/$ADM_POD:$DMN_HOME/servers/$ADM_SVR/logs/$OUT_FILE $LCL_LOGS_HOME/$OUT_FILE
</pre>
<br />
And for one of the managed servers a variant of <i>dwnldMr1Logs.sh</i>:
<br />
<pre class="brush:bash">#!/bin/bash
SCRIPTPATH=$(dirname $0)
#
. $SCRIPTPATH/oke_env.sh
#
LOG_FILE=$MR_SVR1.log
OUT_FILE=$MR_SVR1.out
#
echo From $WLS_DMN_NS/$MR1_POD download $DMN_HOME/servers/$MR_SVR1/logs/$LOG_FILE to $LCL_LOGS_HOME/$LOG_FILE
kubectl cp $WLS_DMN_NS/$MR1_POD:$DMN_HOME/servers/$MR_SVR1/logs/$LOG_FILE $LCL_LOGS_HOME/$LOG_FILE
echo From $WLS_DMN_NS/$MR1_POD download $DMN_HOME/servers/$MR_SVR1/logs/$OUT_FILE to $LCL_LOGS_HOME/$OUT_FILE
kubectl cp $WLS_DMN_NS/$MR1_POD:$DMN_HOME/servers/$MR_SVR1/logs/$OUT_FILE $LCL_LOGS_HOME/$OUT_FILE
</pre>
<br />
I found these scripts very handy, because I can quickly and repeatedly download the particular log files. <br />
<br />
<h3>
Describe kube resources</h3>
<br />
Many resources in Kubernetes can be described. In my case I found it very useful when debugging the configuration overrides.
<br />
<h4>
descjdbccm.sh</h4>
<br />
One subject in the <a href="https://github.com/nagypeter/weblogic-operator-tutorial/blob/master/tutorials/domain-home-in-image.md">Weblogic Operator tutorial</a> workshop is to do configuration overrides, and one of the steps is to create a configuration map. This is one example of a resource that can be described:
<br />
<pre class="brush:bash">#!/bin/bash
SCRIPTPATH=$(dirname $0)
#
. $SCRIPTPATH/oke_env.sh
echo Describe jdbc configuration map of $WLS_DMN_NS
kubectl describe cm jdbccm -n $WLS_DMN_NS</pre>
<br />
Useful to see what the latest override values are.
<br />
<h4>
override_weblogic_domain.sh</h4>
To perform the weblogic override I use the following script:<br />
<br />
<pre class="brush:bash">#!/bin/bash
SCRIPTPATH=$(dirname $0)
#
. $SCRIPTPATH/oke_env.sh
echo Delete configuration map jdbccm for Domain $WLS_DMN_UID
kubectl -n $WLS_DMN_NS delete cm jdbccm
#echo Override Weblogic Domain $WLS_DMN_UID using $SCRIPTPATH/medrec-domain/override
kubectl -n $WLS_DMN_NS create cm jdbccm --from-file $SCRIPTPATH/medrec-domain/override
kubectl -n $WLS_DMN_NS label cm jdbccm weblogic.domainUID=$WLS_DMN_UID
</pre>
<br />
Obviously the <i>descjdbccm.sh</i> script is very useful in combination with this script.
<br />
<h4>
descmrsecr.sh</h4>
<br />
Another part in the configuration overrides is the storage of the database credentials and connection URL. We store those in a secret that is referenced in the override files. This is smart, because you now only need to create or update the secret and then run the configuration override script. To describe the secret you can use:
<br />
<pre class="brush:bash">#!/bin/bash
SCRIPTPATH=$(dirname $0)
#
. $SCRIPTPATH/oke_env.sh
echo Describe secret $MR_DB_CRED of namespace $WLS_DMN_NS
kubectl describe secret $MR_DB_CRED -n $WLS_DMN_NS
</pre>
<br />
Since it is a secret, you can show the names of the attributes in the secret, but not their values.
<br />
<h4>
create_mrdbsecret.sh</h4>
<br />
You need to create or update secrets. Apparently you need to delete it first to be able to (re)create it. This script does it for two secrets, for two datasources:
<br />
<pre class="brush:bash">#!/bin/bash
SCRIPTPATH=$(dirname $0)
#
. $SCRIPTPATH/oke_env.sh
#
function prop {
grep "${1}" $SCRIPTPATH/credentials.properties|cut -d'=' -f2
}
#
MR_DB_USER=$(prop 'db.medrec.username')
MR_DB_PWD=$(prop 'db.medrec.password')
MR_DB_URL=$(prop 'db.medrec.url')
#
echo Delete Medrec DB Secret $MR_DB_CRED for $WLS_DMN_NS
kubectl -n $WLS_DMN_NS delete secret $MR_DB_CRED
echo Create Medrec DB Secret $MR_DB_CRED for $MR_DB_USER and URL $MR_DB_URL
kubectl -n $WLS_DMN_NS create secret generic $MR_DB_CRED --from-literal=username=$MR_DB_USER --from-literal=password=$MR_DB_PWD --from-literal=url=$MR_DB_URL
kubectl -n $WLS_DMN_NS label secret $MR_DB_CRED weblogic.domainUID=$WLS_DMN_UID
#
SMPL_DB_CRED=dbsecret
echo Delete Medrec DB Secret $SMPL_DB_CRED for $WLS_DMN_NS
kubectl -n $WLS_DMN_NS delete secret $SMPL_DB_CRED
echo Create DB Secret dbsecret $SMPL_DB_CRED for $WLS_DMN_NS
kubectl -n $WLS_DMN_NS create secret generic $SMPL_DB_CRED --from-literal=username=scott2 --from-literal=url=jdbc:oracle:thin:@test.db.example.com:1521/ORCLCDB
kubectl -n $WLS_DMN_NS label secret $SMPL_DB_CRED weblogic.domainUID=$WLS_DMN_UID
</pre>
<br />
This script gets the MedRec database credentials from a property file. Obviously you need to store those values in a safe place, so you might figure that having them in a property file is not very safe. You could of course change the script to ask for the particular password, and you might want to adapt it to load different property files per target environment.
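The <i>prop</i> lookup itself is easy to test stand-alone; the sketch below uses a scratch property file with made-up values:<br />

```shell
# Sketch: the grep/cut property lookup from the script above, against a scratch
# property file with made-up values.
propfile=$(mktemp)
cat > "$propfile" <<'EOF'
db.medrec.username=medrec
db.medrec.password=welcome1
db.medrec.url=jdbc:oracle:thin:@db:1521/ORCLPDB
EOF
#
prop() {
  grep "${1}" "$propfile" | cut -d'=' -f2
}
#
prop 'db.medrec.username'   # prints: medrec
# Note: grep matches the pattern anywhere in the file, so keep the
# property names unambiguous.
```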
<br />
<h3>
Can I?</h3>
The Kubernetes API has of course an <a href="https://kubernetes.io/docs/reference/access-authn-authz/authorization/">authorization schema</a>. One of the first things in the Weblogic Operator tutorial is that when you create your OKE Cluster you should make sure that you have the authorization to access your Kubernetes cluster using a system admin account.<br />
<br />
To check if you're able to call the proper API's for your setup you can use the following scripts:<br />
<h4>
canideploy.sh</h4>
<pre class="brush:bash">#!/bin/bash
SCRIPTPATH=$(dirname $0)
#
. $SCRIPTPATH/oke_env.sh
echo K8s Can I deploy?
kubectl auth can-i create deploy</pre>
<br />
<h4>
canideployassystem.sh</h4>
<br />
<pre class="brush:bash">#!/bin/bash
SCRIPTPATH=$(dirname $0)
#
. $SCRIPTPATH/oke_env.sh
echo K8s Can I deploy as system?
kubectl auth can-i create deploy --as system:serviceaccount:kube-system:default
</pre>
<br />
<h3>
Conclusion</h3>
At this point I showed you my scriptlets up to now. There is still a lot to investigate. For instance, there are examples to create your OKE cluster from scratch with Terraform. This is very promising as an alternative to the on-line wizards. Also I would like to create some (micro-)services to get data from the MedRec database and run them in pods side by side with the MedRec application. Maybe even with an Oracle JET front end.Anonymousnoreply@blogger.com0tag:blogger.com,1999:blog-4533777417600103698.post-13981809569736554452020-02-02T14:33:00.002+01:002020-02-02T15:17:52.111+01:00Virtualbox 6.1.2 and Vagrant 2.2.7 - the working combination<div class="separator" style="clear: both; text-align: center;">
<a href="https://1.bp.blogspot.com/-mANXILe2EJU/XjbM0LIHOZI/AAAAAAAADSQ/qdkVhpJDwCooc3Mr7QAps1lxHmE11LotgCNcBGAsYHQ/s1600/VagrantLogo.png" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" data-original-height="60" data-original-width="207" src="https://1.bp.blogspot.com/-mANXILe2EJU/XjbM0LIHOZI/AAAAAAAADSQ/qdkVhpJDwCooc3Mr7QAps1lxHmE11LotgCNcBGAsYHQ/s1600/VagrantLogo.png" /></a></div>
<br />
<a href="https://1.bp.blogspot.com/-lgaPBpYY93k/XjbM0dfpWHI/AAAAAAAADSU/xeWAHakDhWg-3mhya5nV0uMSLiBGa0SRgCNcBGAsYHQ/s1600/vbox_logo.png" imageanchor="1" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img border="0" data-original-height="180" data-original-width="140" src="https://1.bp.blogspot.com/-lgaPBpYY93k/XjbM0dfpWHI/AAAAAAAADSU/xeWAHakDhWg-3mhya5nV0uMSLiBGa0SRgCNcBGAsYHQ/s1600/vbox_logo.png" /></a><br />
<br />
<br />
<br />
<br />
Today I found out that <a href="https://www.vagrantup.com/downloads.html">Vagrant 2.2.7 </a>has been released. A few weeks ago, <a href="https://www.virtualbox.org/">Oracle VirtualBox</a> celebrated the release of 6.1.2. The thing with VirtualBox 6.1.2 was that it wasn't compatible with Vagrant 2.2.6, since that version of Vagrant lacked support for the VirtualBox 6.1 base release. It was solvable, as described by <a href="https://oracle-base.com/blog/2020/01/16/virtualbox-6-1-2/">Tim Hall</a>, with a solution from <a href="https://blogs.oracle.com/author/simoncoter">Simon Coter</a>. Happily, as expected, Vagrant 2.2.7 supports 6.1.x now. So, I was eager to try that out. And it works indeed.<br />
<br />
However, the first time I 'upped' a Vagrant project, I hit the error:<br />
<pre class="brush:plain">VBoxManage.exe: error: Unknown option: --clipboard
</pre>
<br />
Sadly this was due to the following lines in my <i>Vagrantfile</i>:<br />
<pre class="brush:ruby"> # Set clipboard and drag&drop bidirectional
#vb.customize ["modifyvm", :id, "--clipboard", "bidirectional"]
#vb.customize ["modifyvm", :id, "--draganddrop", "bidirectional"]</pre>
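Since VirtualBox 6.1 renamed the flag to <i>--clipboard-mode</i> (as the <i>vboxmanage</i> help further below shows), a Vagrantfile that still uses the old flag can be migrated with <i>sed</i>; this sketch of mine works on a scratch copy:<br />

```shell
# Sketch: migrate the renamed clipboard flag in a Vagrantfile.
# Works on a scratch copy; point it at your real Vagrantfile at your own discretion.
vf=$(mktemp)
cat > "$vf" <<'EOF'
    vb.customize ["modifyvm", :id, "--clipboard", "bidirectional"]
    vb.customize ["modifyvm", :id, "--draganddrop", "bidirectional"]
EOF
# Only --clipboard was renamed; --draganddrop kept its name in the help output.
sed -i 's/--clipboard"/--clipboard-mode"/' "$vf"
grep -- '--clipboard' "$vf"   # now shows the --clipboard-mode line
```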
I did not try the <i>--draganddrop</i> option, but assumed that it would fail too. Commenting those out (as in the example) got my Vagrantfile ok again.<br />
I use this to have a bi-directional clipboard and drag-and-drop, which is off by default. So, I have to figure out why this is.
<br />
After startup of the new VM, I tested the clipboard functionality and although these lines are commented out, it worked as such. Apparently I don't need those lines anymore.
<br />
<br />Since it kept nagging me, I tried:
<pre class="brush:plain">C:\Program Files\Oracle\VirtualBox>vboxmanage modifyvm
Usage:
VBoxManage modifyvm <uuid|vmname>
[--name <name>]
[--groups <group>, ...]
[--description <desc>]
...
[--clipboard-mode disabled|hosttoguest|guesttohost|
bidirectional]
[--draganddrop disabled|hosttoguest|guesttohost|
bidirectional]
</pre><br/>Apparently the option was renamed to <i>--clipboard-mode</i>.Anonymousnoreply@blogger.com0tag:blogger.com,1999:blog-4533777417600103698.post-49431414606079743832020-01-24T15:43:00.003+01:002020-01-24T15:50:31.131+01:00Configure Weblogic Policies and Actions using WLST<div class="separator" style="clear: both; text-align: center;">
<a href="https://1.bp.blogspot.com/-f1qImBdCTDE/Xirt4r_nFjI/AAAAAAAADRs/_wQS0WDFIPwxbDFNvW4N5bU04S8gyL1MACNcBGAsYHQ/s1600/2020-01-24-WeblogicConsoleMonitoringDashboard-Graph.png" imageanchor="1" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img border="0" data-original-height="720" data-original-width="1370" height="168" src="https://1.bp.blogspot.com/-f1qImBdCTDE/Xirt4r_nFjI/AAAAAAAADRs/_wQS0WDFIPwxbDFNvW4N5bU04S8gyL1MACNcBGAsYHQ/s320/2020-01-24-WeblogicConsoleMonitoringDashboard-Graph.png" width="320" /></a></div>
Fairly regularly I give a training on Weblogic Tuning and Troubleshooting, where I talk about JVMs, Garbage Collections, and some subsystems of Weblogic, JMS and JDBC for instance, and how to tune and troubleshoot them.<br />
<br />
One of the larger parts of the training is the Weblogic Diagnostic Framework (WLDF). I find it quite interesting, but also pretty complex, and maybe therefore hardly used in everyday Weblogic administration. That might be a pity, because it can be quite powerful. You can find it used in Fusion Middleware, with preconfigured policies and actions. But I guess that other tooling for Weblogic diagnostics and monitoring, like <a href="https://wlsdm.com/">WLSDM</a>, also relies on it (although I don't know for sure).<br />
<br />
Configuring WLDF might be quite hard, and during the last run of the training, I figured that it might help to turn the solution of the workshop into a script. At least to show what you're doing when executing the labs. But, certainly also to show how you can put your configurations into a script that you can extend and reuse over different environments.<br />
<br />
This week I got a question on the Oracle community, <a href="https://community.oracle.com/thread/4310953">To be notify of les warning Logs</a>, that made me remember this script. Maybe it's not exactly the answer, but I think it can at least be a starting point. And I realized that I had not written about it yet.<br />
<h3>
11g vs 12c</h3>
I stumbled upon a nice <a href="https://www.oracle.com/technical-resources/articles/cico-wldf.html">11g blog about this subject</a>. In 12c Oracle renamed this part of the WLDF: where in 11g it was called "Watches and Notifications", it is now called "Policies and Actions". When working with the console, you'll find that the console follows the new naming. But in WLST the APIs still have the old 11g naming. So keep in mind that <i>Policies</i> are <i>Watches</i> and <i>Actions</i> are <i>Notifications</i>.<br />
<br />
Documentation about Configuring Policies and Actions can be found <a href="https://docs.oracle.com/middleware/1221/wls/WLDFC/config_watch_notif.htm#WLDFC188">here</a>.<br />
<br />
I'm not going to explain all the concepts of the subject, but will go through my base script step by step, and then conclude with some remarks and ideas.<br />
<h3>
Diagnostic Module</h3>
Just like JMS resources, the Diagnostic Framework combines its resources into WLDFSystemResource modules. A Diagnostic Module is in essence an administrative unit that bundles the resources. A diagnostic module can be created by the following WLST function:<br />
<pre class="brush:python">#
def createDiagnosticModule(diagModuleName, targetServerName):
  module=getMBean('/WLDFSystemResources/'+diagModuleName)
  if module==None:
    print 'Create new Diagnostic Module '+diagModuleName
    edit()
    startEdit()
    cd('/')
    module = cmo.createWLDFSystemResource(diagModuleName)
    targetServer=getMServer(targetServerName)
    module.addTarget(targetServer)
    # Activate changes
    save()
    activate(block='true')
    print 'Diagnostic Module created successfully.'
  else:
    print 'Diagnostic Module '+diagModuleName+' already exists!'
  return module</pre>
<br />
The function first checks if the Diagnostic Module already exists; you'll see that all the functions in this article work like this. This also helps when using them in Weblogic-under-Kubernetes environments. Diagnostic modules are registered under '<i>/WLDFSystemResources</i>' and created with the <i>createWLDFSystemResource()</i> method. Also, like <i>JMSModules</i>, you need to target them. This function does so based on the <i>targetServerName</i> and uses the <i>getMServer()</i> function to get the MBean to target:<br />
<pre class="brush:python">#
def getMServer(serverName):
  server=getMBean('/Servers/'+serverName)
  return server</pre>
<br />
Many Weblogic resources need to be targeted. I notice that I do this in many different ways in different scripts all over this blog and in my work. Maybe I need to write a more generic, smarter way of doing this. In this case I simply target a single server, but it could be a list of servers and/or clusters.
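As a sketch of what such a generic approach could look like: describe the targets in one comma-separated string and parse it into (type, name) pairs, which a targeting function could then resolve to the MBeans under '/Servers/' or '/Clusters/'. The 'server:'/'cluster:' notation below is my own invention, not a WLST convention:

```python
def parseTargets(targetsCsv):
    # Parse e.g. 'server:AdminServer,cluster:TTCluster' into (type, name) tuples.
    # Entries without a prefix default to type 'server'.
    targets = []
    for entry in targetsCsv.split(','):
        entry = entry.strip()
        if ':' in entry:
            targetType, targetName = entry.split(':', 1)
        else:
            targetType, targetName = 'server', entry
        targets.append((targetType, targetName))
    return targets
```

A generic targeting function could then loop over this list and do getMBean('/Servers/'+name) or getMBean('/Clusters/'+name) accordingly.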
<br />
The function is called from a main function as follows:
<br />
<pre class="brush:python">import os,sys, traceback
#
adminHost=os.environ["ADM_HOST"]
adminPort=os.environ["ADM_PORT"]
admServerUrl = 't3://'+adminHost+':'+adminPort
#
adminUser='weblogic'
adminPwd='welcome1'
ttServerName=os.environ["TTSVR_NAME"]
diagModuleName='TTDiagnostics'
#
...
def main():
  try:
    print 'Connect to '+admServerUrl
    connect(adminUser,adminPwd,admServerUrl)
    createDiagnosticModule(diagModuleName, ttServerName)
    ...
</pre>
<br />
<h3>
Collectors</h3>
Collectors are also called Harvesters. They monitor MBean attributes and regularly store the data in the Harvested Data Archive of the targeted Managed Server. You can find it under Diagnostics->Log Files in the Weblogic Console. The Weblogic console also includes a Monitoring Dashboard, which can be reached via the console Home page using the 'Monitoring Dashboard' link.<br />
<br />
Without collectors, you can only view MBean attributes in the graphs from the moment you start a graph. It will collect the values from that moment onwards, until you pause/stop the collection.<br />
However, using a collector you can also view the attribute values from the past.<br />
<br />
A Collector is created within a diagnostic module. You need to define a metricType: the MBean Type, for instance 'JDBCDataSourceRuntimeMBean'. Then a namespace, in this case the ServerRuntime. You can specify a set of instances of the particular MBean Type, or provide <i>None</i> to watch all the instances of the particular type. And from the instances you specify a comma separated list of attributes you want to harvest.<br />
<br />
This leads to the following function:<br />
<br />
<pre class="brush:python">#
def createCollector(diagModuleName, metricType, namespace, harvestedInstances,attributesCsv):
  harvesterName='/WLDFSystemResources/'+diagModuleName+'/WLDFResource/'+diagModuleName+'/Harvester/'+diagModuleName
  harvestedTypesPath=harvesterName+'/HarvestedTypes/'
  print 'Check Collector '+harvestedTypesPath+metricType
  collector=getMBean(harvestedTypesPath+metricType)
  if collector==None:
    print 'Create new Collector for '+metricType+' in '+diagModuleName
    edit()
    startEdit()
    cd(harvestedTypesPath)
    collector=cmo.createHarvestedType(metricType)
    cd(harvestedTypesPath+metricType)
    attributeArray=jarray.array([String(x.strip()) for x in attributesCsv.split(',')], String)
    collector.setHarvestedAttributes(attributeArray)
    collector.setHarvestedInstances(harvestedInstances)
    collector.setNamespace(namespace)
    # Activate changes
    save()
    activate(block='true')
    print 'Collector created successfully.'
  else:
    print 'Collector '+metricType+' in '+diagModuleName+' already exists!'
  return collector</pre>
<br />
This creates the Collector using createHarvestedType() for the MBean Type (metricType). The list of attributes is provided as a comma-separated string, but the setter on the collector (setHarvestedAttributes(attributeArray)) expects a <i>jarray</i>, so the csv-list needs to be translated. It is created by a Python construct that seemed a bit peculiar to me when I first saw it:
<br />
<pre class="brush:python"> attributeArray=jarray.array([String(x.strip()) for x in attributesCsv.split(',')], String)</pre>
<br />
It splits the csv string on the comma separator and loops over the resulting values. For each value it constructs a trimmed String. The resulting String values are fed to a String-based jarray.array factory.
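Outside WLST the core of the construct can be tried in plain Python; in this illustration the java.lang.String wrapping and the jarray factory are replaced by an ordinary list:

```python
def splitAndTrim(attributesCsv):
    # Mirrors the comprehension in createCollector(), minus the jarray/String parts
    return [x.strip() for x in attributesCsv.split(',')]

print(splitAndTrim('ActiveConnectionsCurrentCount, CurrCapacity ,LeakedConnectionCount'))
# ['ActiveConnectionsCurrentCount', 'CurrCapacity', 'LeakedConnectionCount']
```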
<br />
<br />
The following line added to the main function will call the function, when you want to watch all instances:<br />
<pre class="brush:python">createCollector(diagModuleName, 'weblogic.management.runtime.JDBCDataSourceRuntimeMBean','ServerRuntime', None, 'ActiveConnectionsCurrentCount,CurrCapacity,LeakedConnectionCount')
</pre>
<br />
In the case you do want to select a specific set of instances, you need to do that as follows:<br />
<pre class="brush:python"> harvestedInstancesList=[]
harvestedInstancesList.append('com.bea:ApplicationRuntime=medrec,Name=TTServer_/medrec,ServerRuntime=TTServer,Type=WebAppComponentRuntime')
harvestedInstances=jarray.array([String(x.strip()) for x in harvestedInstancesList], String)
createCollector(diagModuleName, 'weblogic.management.runtime.WebAppComponentRuntimeMBean','ServerRuntime', harvestedInstances,'OpenSessionsCurrentCount')
</pre>
<br />
The thing in this case is that the instances themselves are described in an expression that uses commas, so you can't simply split on commas. You could construct these expressions from properties, of course, and then use the construct above to add them to a jarray.
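For example, a hypothetical helper (the function and parameter names are mine) could compose the WebAppComponentRuntime instance expression above from its parts:

```python
def webAppComponentRuntimeInstance(application, server, contextRoot):
    # Composes the com.bea ObjectName-style instance expression from its parts
    return ('com.bea:ApplicationRuntime=%s,Name=%s_%s,ServerRuntime=%s,'
            'Type=WebAppComponentRuntime' % (application, server, contextRoot, server))
```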
<br />
<h3>
Actions</h3>
<br />
When you want WLDF to take action upon a certain condition, you need to create an <i>Action</i> for it. A simple one is to create a message on a JMS queue. But according to the documentation you could have the following types:<br />
<ul>
<li>Java Management Extensions (JMX)</li>
<li>Java Message Service (JMS)</li>
<li>Simple Network Management Protocol (SNMP)</li>
<li>Simple Mail Transfer Protocol (SMTP)</li>
<li>Diagnostic image capture</li>
<li>Elasticity framework (scaling your dynamic cluster)</li>
<li>REST</li>
<li>WebLogic logging system</li>
<li>WebLogic Scripting Tool (WLST)</li>
</ul>
I created a script for a JMS Action by recording the configuration in the console and transforming it into the following script:<br />
<br />
<pre class="brush:python">#
def createJmsNotificationAction(diagModuleName, actionName, destination, connectionFactory):
  policiesActionsPath='/WLDFSystemResources/'+diagModuleName+'/WLDFResource/'+diagModuleName+'/WatchNotification/'+diagModuleName
  jmsNotificationPath=policiesActionsPath+'/JMSNotifications/'
  print 'Check notification action '+jmsNotificationPath+actionName
  jmsNtfAction=getMBean(jmsNotificationPath+actionName)
  if jmsNtfAction==None:
    print 'Create new JMS NotificationAction '+actionName+' in '+diagModuleName
    edit()
    startEdit()
    cd(policiesActionsPath)
    jmsNtfAction=cmo.createJMSNotification(actionName)
    jmsNtfAction.setEnabled(true)
    jmsNtfAction.setTimeout(0)
    jmsNtfAction.setDestinationJNDIName(destination)
    jmsNtfAction.setConnectionFactoryJNDIName(connectionFactory)
    # Activate changes
    save()
    activate(block='true')
    print 'JMS NotificationAction created successfully.'
  else:
    print 'JMS NotificationAction '+actionName+' in '+diagModuleName+' already exists!'
  return jmsNtfAction
</pre>
<br />
For other types, just click on the record link in the console and perform the configuration. Then transform it into a function as above.<br />
<br />
I think this function does not need much explanation. It can be called as follows, using the JNDI names of the destination and a connection factory:
<br />
<pre class="brush:python">createJmsNotificationAction(diagModuleName, 'JMSAction', 'com.tt.jms.WLDFNotificationQueue', 'weblogic.jms.ConnectionFactory')
</pre>
<br />
<h3>
Policies</h3>
A Policy identifies a situation to trap for monitoring or diagnostic purposes. It consists of an expression that identifies the situation and one or more actions to follow up on it when the expression evaluates to true. The default language for the expression is the WLDF Query Language, but it is deprecated and superseded by the Java Expression Language (EL).<br />
<br />
Another aspect of the policy is the alarm. When an event in Weblogic is fired that correlates to the policy, you might not want the actions executed every time it occurs. If, for instance, a JMS queue hits a high count and you define a policy with an email action, you might not want an email for every new message posted on the queue: then not only the queue is flooded, but your inbox as well. In the next function the alarm type is set to 'AutomaticReset', with a reset period of 300 seconds (setAlarmResetPeriod takes milliseconds). When fired, the policy is disabled for the given amount of time, and then automatically enabled again. <br />
<pre class="brush:python">#
def createPolicy(diagModuleName, policyName, ruleType, ruleExpression, actions):
  policiesActionsPath='/WLDFSystemResources/'+diagModuleName+'/WLDFResource/'+diagModuleName+'/WatchNotification/'+diagModuleName
  policiesPath=policiesActionsPath+'/Watches/'
  print 'Check Policy '+policiesPath+policyName
  policy=getMBean(policiesPath+policyName)
  if policy==None:
    print 'Create new Policy '+policyName+' in '+diagModuleName
    edit()
    startEdit()
    cd(policiesActionsPath)
    policy=cmo.createWatch(policyName)
    policy.setEnabled(true)
    policy.setExpressionLanguage('EL')
    policy.setRuleType(ruleType)
    policy.setRuleExpression(ruleExpression)
    policy.setAlarmType('AutomaticReset')
    policy.setAlarmResetPeriod(300000)
    cd(policiesPath+policyName)
    set('Notifications', actions)
    schedule=getMBean(policiesPath+policyName+'/Schedule/'+policyName)
    schedule.setMinute('*')
    schedule.setSecond('*/15')
    # Activate changes
    save()
    activate(block='true')
    print 'Policy created successfully.'
  else:
    print 'Policy '+policyName+' in '+diagModuleName+' already exists!'
  return policy</pre>
<br />
A policy can drive multiple actions. Therefore they must also be provided as a <i>jarray</i>. For that, the following lines are added to the main function:<br />
<br />
<pre class="brush:python"> actionsList=[]
actionsList.append('com.bea:Name=JMSAction,Type=weblogic.diagnostics.descriptor.WLDFJMSNotificationBean,Parent=[TTDomain]/WLDFSystemResources[TTDiagnostics],Path=WLDFResource[TTDiagnostics]/WatchNotification[TTDiagnostics]/JMSNotifications[JMSAction]')
actions=jarray.array([ObjectName(action.strip()) for action in actionsList], ObjectName)
createPolicy(diagModuleName,'HiStuckThreads', 'Harvester', 'wls:ServerHighStuckThreads(\"30 seconds\",\"10 minutes\",5)', actions)
</pre>
<br />
As you can see, the <i>JMSAction</i> created earlier is coded as an expression and added to the list. As mentioned earlier with the Harvested Instances, you could wrap this into a separate function to build up the expression based on properties. In the example above, the rule is defined as: <i>'wls:ServerHighStuckThreads(\"30 seconds\",\"10 minutes\",5)'</i>, and added as a hardcoded parameter to the call to the <i>createPolicy()</i> function.<br />
<br />
Another example, is:<br />
<pre class="brush:python"> ruleExpression='wls:ServerGenericMetricRule(\"com.bea:Name=MedRecGlobalDataSourceXA,ServerRuntime=TTServer,Type=JDBCDataSourceRuntime\",\"WaitingForConnectionHighCount\",\">\",0,\"30 seconds\",\"10 minutes\")'
createPolicy(diagModuleName,'OverloadedDS', 'Harvester', ruleExpression, actions)
</pre>
In this example the rule is quite long, which would make the line that creates the policy unwieldy. But again, this could be abstracted into a function that builds the expression.<br />
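Such a builder could look like the sketch below (function and parameter names are my own); it reproduces the expression above from its parts:

```python
def serverGenericMetricRule(instance, attribute, operator, value, period, duration):
    # Composes a wls:ServerGenericMetricRule smart-rule expression from its parts
    return ('wls:ServerGenericMetricRule("%s","%s","%s",%s,"%s","%s")'
            % (instance, attribute, operator, value, period, duration))

ruleExpression = serverGenericMetricRule(
    'com.bea:Name=MedRecGlobalDataSourceXA,ServerRuntime=TTServer,Type=JDBCDataSourceRuntime',
    'WaitingForConnectionHighCount', '>', 0, '30 seconds', '10 minutes')
```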
<h3>
Conclusion</h3>
I put the complete script on <a href="https://github.com/makker-nl/WeblogicScripts/blob/master/WLDF/createDiagnosticModule.py">github</a>. It is a starting point showing how to set up collectors, policies and actions using WLST. It could be extended with functions that create the different expressions based on properties. This would make the scripting more robust, because you would not need to formulate your expressions for every purpose when you want different values.<br />
<br />
When I started with this script during the training, I imagined that you could define a library for several types of collectors, actions and policies. You could drive those with a smart property or xml-configuration file that defines all the policies that you want to add to the environment. You could even create different property files for different kinds of environments. You could have different weblogic domains for particular applications, but also for OSB, SOA, BI Publisher, etc. Based on the kind of environment you may want different sets of weblogic resources monitored.<br />
<br />
If you make sure that all your functions are re-entrant, you could easily add them to the scripts run from your Docker files, to build up and start your Kubernetes Weblogic Operator domain. See my earlier posts about my cheat sheet, <a href="https://blog.darwin-it.nl/2020/01/my-weblogic-on-kubernetes-cheatsheet.html">part 1</a> and <a href="https://blog.darwin-it.nl/2020/01/my-weblogic-on-kubernetes-cheatsheet_24.html">part 2</a>.Anonymousnoreply@blogger.com1tag:blogger.com,1999:blog-4533777417600103698.post-69529622349325451942020-01-24T12:52:00.001+01:002020-01-24T12:52:31.331+01:00My Weblogic on Kubernetes Cheatsheet, part 2.<a href="https://1.bp.blogspot.com/-kWL-lJ3GB_M/XMxbFCxt2AI/AAAAAAAAC3Y/Px9tM9w6psUUXaANiSVxcSMI5NyuWCMlQCPcBGAYYCw/s1600/KubernetesLogo.png" imageanchor="1" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img border="0" data-original-height="113" data-original-width="115" src="https://1.bp.blogspot.com/-kWL-lJ3GB_M/XMxbFCxt2AI/AAAAAAAAC3Y/Px9tM9w6psUUXaANiSVxcSMI5NyuWCMlQCPcBGAYYCw/s1600/KubernetesLogo.png" /></a>In my <a href="https://blog.darwin-it.nl/2020/01/my-weblogic-on-kubernetes-cheatsheet.html">previous </a>blog-post I published the first part of my Kubernetes cheatsheet, following the <a href="https://github.com/nagypeter/weblogic-operator-tutorial/blob/master/tutorials/domain-home-in-image.md">Weblogic Operator tutorial</a>. In this part 2, I'll publish the scripts I created for the next few chapters in the tutorial.<br />
<br />
<h3>
Install Traefik Software Loadbalancer</h3>
The fourth part of the tutorial is about installing the Traefik software loadbalancer service. It is described in this part: <a href="https://github.com/nagypeter/weblogic-operator-tutorial/blob/master/tutorials/install.traefik.md">3. Install Traefik Software Loadbalancer</a>, and also uses Helm to install the service.<br />
<h4>
install_traefik.sh</h4>
Installing Traefik is just a quest of (nice Dutch idiom 😉) installing the proper Helm chart.
<br />
<pre class="brush:bash">#!/bin/bash
SCRIPTPATH=$(dirname $0)
#
. $SCRIPTPATH/oke_env.sh
echo Install traefik
cd $HELM_CHARTS_HOME
helm install stable/traefik \
  --name traefik-operator \
  --namespace traefik \
  --values kubernetes/samples/charts/traefik/values.yaml \
  --set "kubernetes.namespaces={traefik}" \
  --set "serviceType=LoadBalancer"
cd $SCRIPTPATH
</pre>
<br />
<h4>
getsvclbr.sh</h4>
A simple script to check the Loadbalancer Service:<br />
<pre class="brush:bash">#!/bin/bash
SCRIPTPATH=$(dirname $0)
#
. $SCRIPTPATH/oke_env.sh
echo Get service traefik
kubectl get service -n traefik
</pre>
<br />
I did not change the name or the namespace of the Traefik loadbalancer, but I could do so easily by moving them to <i>oke_env.sh</i>. I haven't had the need yet, but I can imagine it could arise.<br />
<h4>
getpublicip.sh</h4>
The scriptlet above will show all the Traefik information. To show only the public IP that you need to reach your application, you can use:<br />
<pre class="brush:bash">#!/bin/bash
SCRIPTPATH=$(dirname $0)
#
. $SCRIPTPATH/oke_env.sh
echo Get traefik public IP
kubectl describe svc traefik-operator --namespace traefik | grep Ingress | awk '{print $3}'
</pre>
<br />
Again, I could move the service name and namespace to <i>oke_env.sh</i>.
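The grep/awk part of that pipeline simply picks the third whitespace-separated field of the Ingress line. A stand-alone illustration (the sample line mimics the kubectl describe output; the IP is made up):

```shell
#!/bin/bash
# Feed a fabricated 'kubectl describe svc' line through the same filter
line='LoadBalancer Ingress:     132.145.66.197'
echo "$line" | grep Ingress | awk '{print $3}'   # prints: 132.145.66.197
```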
<br />
When the Weblogic domain is created, there will be some additional commands, and therefore scriptlets, that update the Traefik configuration.
<br />
<br />
<h3>
Deploy WebLogic Domain</h3>
Now it gets really interesting: we get to create the actual Weblogic pods that will run the Weblogic Domain. This part is described in the <a href="https://github.com/nagypeter/weblogic-operator-tutorial/blob/master/tutorials/deploy.weblogic_short.md">fifth part: Deploy Weblogic Domain</a>.<br />
<h4>
create_wlsdmnaccount.sh</h4>
This script will do a sequence of things. You could argue that several of the scripts described earlier could have been combined, like this.<br />
Anyway, this one does the following:<br />
<ul>
<li>Create a Weblogic Domain Namespace</li>
<li>Create and label a Kubernetes secret within that namespace for the Admin boot credentials.</li>
<li>Create a secret for the Docker registry on the Oracle infrastructure (Oracle Container Image Repository).</li>
</ul>
<pre class="brush:bash">#!/bin/bash
SCRIPTPATH=$(dirname $0)
#
. $SCRIPTPATH/oke_env.sh
#
function prop {
  grep "${1}" $SCRIPTPATH/credentials.properties|cut -d'=' -f2
}
#
echo Create weblogic domain namespace $WLS_DMN_NS
kubectl create namespace $WLS_DMN_NS
WLS_USER=$(prop 'weblogic.user')
WLS_PWD=$(prop 'weblogic.password')
echo Create a Kubernetes secret $WLS_DMN_CRED in namespace $WLS_DMN_NS containing the Administration Server boot credentials for user $WLS_USER
kubectl -n $WLS_DMN_NS create secret generic $WLS_DMN_CRED \
  --from-literal=username=$WLS_USER \
  --from-literal=password=$WLS_PWD
echo Label the $WLS_DMN_CRED in namespace $WLS_DMN_NS secret with domainUID $WLS_DMN_NAME
kubectl label secret $WLS_DMN_CRED \
  -n $WLS_DMN_NS \
  weblogic.domainUID=$WLS_DMN_NAME \
  weblogic.domainName=$WLS_DMN_NAME \
  --overwrite=true
echo Create secret for oci image repository $OCIR_CRED
OCIR_USER=$(prop 'ocir.user')
OCIR_PWD=$(prop 'ocir.password')
OCIR_EMAIL=$(prop 'ocir.email')
OCI_TEN=$(prop 'oci.tenancy')
OCI_REG=$(prop 'oci.region')
kubectl create secret docker-registry $OCIR_CRED \
  -n $K8S_NS \
  --docker-server=${OCI_REG}.ocir.io \
  --docker-username="${OCI_TEN}/${OCIR_USER}" \
  --docker-password="${OCIR_PWD}" \
  --docker-email="${OCIR_EMAIL}"
</pre>
<br />
The script starts with a function to read credential properties. It relies on the <i>credentials.properties</i> file:
<br />
<pre class="brush:plain">weblogic.user=weblogic
weblogic.password=welcome1
ocir.user=that.would.be.me
ocir.password=something difficult
ocir.email=my.email.adres@darwin-it.nl
oci.tenancy=ours
oci.region=fra
db.medrec.username=MEDREC_OWNER
db.medrec.password=Medrec_Password$83!
db.medrec.url=jdbc:oracle:thin:@10.0.10.6:1521/pdb1.sub50abc021b.medrecokecluste.oraclevcn.com
</pre>
<br />
I like the way this works with the <i>prop</i> function, because you can abstract these properties without the need to have them as an environment variable. Especially credentials you would not want in an environment variable. Actually, passwords should be stored even more securely than in a plain property file, of course. But I think I would move the OCIDs used in setting up the OKE cluster to the <i>credentials.properties</i> in a next iteration.<br />
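The pattern is easy to try in isolation. The sketch below writes a throw-away properties file and reads a value back; note that I anchored the grep with '^key=' here, which is slightly safer than the plain substring match above when one key is a prefix of another:

```shell
#!/bin/bash
# Stand-alone demo of the prop pattern against a temporary properties file
PROPS=$(mktemp)
cat > "$PROPS" <<EOF
weblogic.user=weblogic
weblogic.password=welcome1
EOF
function prop {
  # ^...= anchors the key, so it cannot accidentally match other keys
  grep "^${1}=" "$PROPS" | cut -d'=' -f2
}
prop 'weblogic.user'   # prints: weblogic
rm -f "$PROPS"
```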
<br />
<h4>
upgrade_traefik.sh
</h4>
When the Weblogic domain namespace is created, you can add it to the Traefik configuration:<br />
<br />
<pre class="brush:bash">#!/bin/bash
SCRIPTPATH=$(dirname $0)
#
. $SCRIPTPATH/oke_env.sh
echo Upgrade traefik with namespace $WLS_DMN_NS
cd $HELM_CHARTS_HOME
helm upgrade \
  --reuse-values \
  --set "kubernetes.namespaces={traefik,$WLS_DMN_NS}" \
  --wait \
  traefik-operator \
  stable/traefik
cd $SCRIPTPATH
</pre>
<br />
<h4>
start_dmn.sh</h4>
The next step in the tutorial is to create the Weblogic Domain by starting the pods. To do so, you need to create a <i>domain.yaml</i>, as in this <a href="https://github.com/nagypeter/weblogic-operator-tutorial/blob/master/k8s/domain.yaml">example</a>. You'll need to update the properties according to your environment, specifying things like the project domain container image, names, credential sets, etc.<br />
For this script, one property is of special interest:<br />
<pre class="brush:plain"> # - "NEVER" will not start any server in the domain
# - "ADMIN_ONLY" will start up only the administration server (no managed servers will be started)
# - "IF_NEEDED" will start all non-clustered servers, including the administration server and clustered servers up to the replica count
serverStartPolicy: "IF_NEEDED"
</pre>
<br />
The property <i>serverStartPolicy</i> defines whether the Weblogic pods should start or stop. To have them started, you set the property to "IF_NEEDED"; to stop them, to "NEVER". I found that I had to change this property several times and then run kubectl apply on the yaml. Soon I figured this could be more convenient. So I created a copy of the domain.yaml, called <i>domain.yaml.tpl</i>. In that file I changed the property to:<br />
<pre class="brush:plain"> # - "NEVER" will not start any server in the domain
# - "ADMIN_ONLY" will start up only the administration server (no managed servers will be started)
# - "IF_NEEDED" will start all non-clustered servers, including the administration server and clustered servers up to the replica count
serverStartPolicy: "$SVR_STRT_POLICY"
#serverStartPolicy: "NEVER"</pre>
<br />
Now, using a script, I can create a copy of the <i>domain.yaml.tpl</i> as <i>domain.yaml</i> while replacing the environment variable with the desired value. A few years ago I found the Linux command <i>envsubst</i>, which reads a file from <i>stdin</i>, replaces all the environment variables with their corresponding values and writes the result to <i>stdout</i>.<br />
<br />
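A quick stand-alone illustration of that behaviour (assuming <i>envsubst</i>, from the gettext package, is installed):

```shell
#!/bin/bash
# envsubst replaces $VARIABLES read from stdin with their environment values
export SVR_STRT_POLICY="IF_NEEDED"
printf 'serverStartPolicy: "$SVR_STRT_POLICY"\n' | envsubst
# prints: serverStartPolicy: "IF_NEEDED"
```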
So to start the domain, I would use:<br />
<pre class="brush:bash">#!/bin/bash
SCRIPTPATH=$(dirname $0)
#
. $SCRIPTPATH/oke_env.sh
echo Start Domain $K8S_DMN_NAME
export SVR_STRT_POLICY="IF_NEEDED"
WLS_DMN_YAML_TPL=${WLS_DMN_YAML}.tpl
envsubst < $WLS_DMN_YAML_TPL > $WLS_DMN_YAML
kubectl apply -f $WLS_DMN_YAML</pre>
<br />
It sets SVR_STRT_POLICY to "IF_NEEDED", and then streams the <i>domain.yaml.tpl</i> through <i>envsubst</i> into the <i>domain.yaml</i>. Then it performs kubectl apply on the resulting yaml.
<br />
Obviously, you can also use this to abstract properties such as cluster replicas and Java options out of the yaml file and keep them in a property file or oke_env.sh.
<br />
<h4>
stop_dmn.sh</h4>
<br />
And of course, stopping the domain works accordingly, but using the value "NEVER".
<br />
<pre class="brush:bash">#!/bin/bash
SCRIPTPATH=$(dirname $0)
#
. $SCRIPTPATH/oke_env.sh
echo Stop Domain $K8S_DMN_NAME
export SVR_STRT_POLICY="NEVER"
WLS_DMN_YAML_TPL=${WLS_DMN_YAML}.tpl
envsubst < $WLS_DMN_YAML_TPL > $WLS_DMN_YAML
kubectl apply -f $WLS_DMN_YAML</pre>
<br />
<h4>
upgrade_traefik_routing.sh</h4>
<br />
When your domain is started, you probably want to be able to reach it from the internet. To do so, you need to create an Ingress rule. This is done as follows:
<br />
<pre class="brush:bash">#!/bin/bash
SCRIPTPATH=$(dirname $0)
#
. $SCRIPTPATH/oke_env.sh
echo Upgrade traefik with namespace $WLS_DMN_NS
cat << EOF | kubectl apply -f -
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: traefik-pathrouting-1
  namespace: $WLS_DMN_NS
  annotations:
    kubernetes.io/ingress.class: traefik
spec:
  rules:
  - host:
    http:
      paths:
      - path: /
        backend:
          serviceName: medrec-domain-cluster-medrec-cluster
          servicePort: 8001
      - path: /console
        backend:
          serviceName: medrec-domain-adminserver
          servicePort: 7001
EOF</pre>
<br />
The thing with this script is that it does not leverage the <i>oke_env.sh</i> script, except for the namespace. There are still a few hardcoded names for the services and the Ingress. But, just like the namespace, it is easy to move those to the <i>oke_env.sh</i> script.
<br />
<h4>
getdmnpods.sh</h4>
<br />
When you have started your domain, you will want to check the pods. This can be done with this script:
<br />
<pre class="brush:bash">#!/bin/bash
SCRIPTPATH=$(dirname $0)
#
. $SCRIPTPATH/oke_env.sh
echo Get K8s pods for $WLS_DMN_NS
kubectl -n $WLS_DMN_NS get pods -o wide
</pre>
<br />
Initially, you would see 3 pods: one for the Admin Server and two for the Managed Servers. But after changing the <i>domain.yaml.tpl</i> following the <a href="https://github.com/nagypeter/weblogic-operator-tutorial/blob/master/tutorials/scale.weblogic.md">sixth part: Scaling WebLogic Cluster</a>, you would see more (or fewer) pods. Mind that if you use my start and stop scripts, you should not change the <i>domain.yaml </i>file itself but the <i>domain.yaml.tpl</i> file.<br />
<h3>
Conclusion</h3>
These scripts helped me a lot in documenting the setup. You can just follow the tutorial, which leaves you with a working environment. But it's just a recipe that you need to follow over and over again. Also, I found at a few points that it is tricky to fill in the correct value.<br />
<br />
Lately, I started to look into Terraform for AWS and OCI. I figure that the tutorial that I followed with these scripts can be broken down into a few parts that are in fact setting up OCI resources (instances, services like Traefik, etc.) and provisioning OKE. So it would be interesting to see where we can use Terraform to set up the OCI resources and services, leaving only the scripts to set up OKE itself. <br />
<br />
<br />
<br />
<br />
<br />Anonymousnoreply@blogger.com0tag:blogger.com,1999:blog-4533777417600103698.post-68801360281738480492020-01-15T11:37:00.000+01:002020-01-15T11:37:49.079+01:00Javascript in ANTEarlier I wrote about an ANT script to scan <a href="https://blog.darwin-it.nl/2018/11/using-ant-to-investigate-jca-adapters.html">JCA adapter files</a> in your project home, Subversion working copy or local Git repo.<br />
<br />
In my current project we use sensors to kick off message-archiving processes, without cluttering the BPEL process. I'm not sure I would do it like that on a new project, but technically the idea is interesting. Unfortunately, we did not build a registry of which BPEL processes make use of it and how. So I thought about how I could easily scan for that, and found that, based on the script to scan JCA files, I could easily scan all the BPEL sensor files. Once you have found the project folders, as I did in the JCA scan script, you can search for the <i>*_sensor.xml</i> files.<br />
<br />
So in a few hours I had a basic script. Now, in a second iteration, I would like to know what sensorActions the sensors trigger. For that I need to interpret the accompanying <i>*_sensorAction.xml</i> file. Therefore, based on the found sensor filename, I need to determine the name of the sensor action file.<br />
<br />
The first step to that is to figure out how to do a substring in ANT. With a <a href="https://lmgtfy.com/?q=ant+property+substring&s=l">quick google on "ant property substring"</a>, I found a <a href="https://stackoverflow.com/questions/945374/how-to-pull-out-a-substring-in-ant">nice stackoverflow thread</a>, with a nice example of an <a href="https://ant.apache.org/manual/Tasks/scriptdef.html">ANT script definition</a> based on Javascript:<br />
<pre class="brush:xml"> <scriptdef name="substring" language="javascript">
<attribute name="text"/>
<attribute name="start"/>
<attribute name="end"/>
<attribute name="property"/>
<![CDATA[
var text = attributes.get("text");
var start = attributes.get("start");
var end = attributes.get("end") || text.length();
project.setProperty(attributes.get("property"), text.substring(start, end));
]]>
</scriptdef></pre>
<br />
And that can be called like:
<br />
<pre class="brush:xml"> <substring text="${sensor.file.name}" start="0" end="20" property="sensorAction.file.name"/>
    <echo message="Sensor Action file: ${sensorAction.file.name}"></echo></pre>
<br />
The JavaScript <i>substring()</i> function is zero-based, so the first character is at index 0.
<br />
Not every sensor file name has the same length: the file is named after the BPEL file that it is tied to. So to get the base name, the part without the "_sensor.xml" postfix, we need to determine the length of the filename. A script that determines that can easily be extracted from the script above:
<br />
<pre class="brush:xml"> <scriptdef name="getlength" language="javascript">
<attribute name="text"/>
<attribute name="property"/>
<![CDATA[
var text = attributes.get("text");
var length = text.length();
project.setProperty(attributes.get("property"), length);
]]>
</scriptdef></pre>
<br />
Perfect! Using this I could create the logic in ANT to determine the sensorAction file name. However, I thought it would be easier to determine the filename in JavaScript all the way, using the strength of the proper language at hand:<br />
<pre class="brush:xml"> <!-- Script to get the sensorAction filename based on the sensor filename.
1. Cut the extension "_sensor.xml" from the filename.
2. Add "_sensorAction.xml" to the base filename.
-->
<scriptdef name="getsensoractionfilename" language="javascript">
<attribute name="sensorfilename"/>
<attribute name="property"/>
<![CDATA[
var sensorFilename = attributes.get("sensorfilename");
var sensorFilenameLength = sensorFilename.length();
var postfixLength = "_sensor.xml".length();
var sensorFilenameBaseLength=sensorFilenameLength-postfixLength;
var sensorActionFilename=sensorFilename.substring(0, sensorFilenameBaseLength)+"_sensorAction.xml";
project.setProperty(attributes.get("property"), sensorActionFilename);
]]>
</scriptdef></pre>
And then I can get the sensorAction filename as follows:<br />
<pre class="brush:xml"> <getsensoractionfilename sensorfilename="${sensor.file.name}" property="sensorAction.file.name"/>
<echo message="Sensor Action file: ${sensorAction.file.name}"></echo></pre>
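The derivation is plain string manipulation, so it can also be sanity-checked outside ANT. A minimal shell sketch of the same logic (the filename is just a hypothetical example):

```shell
# Derive the sensorAction filename from a sensor filename:
# strip the "_sensor.xml" postfix, then append "_sensorAction.xml".
sensor_file="MyProcess_sensor.xml"   # hypothetical example filename
base="${sensor_file%_sensor.xml}"    # bash suffix removal
echo "${base}_sensorAction.xml"      # MyProcess_sensorAction.xml
```

Which prints <i>MyProcess_sensorAction.xml</i>, matching what the scriptdef above produces.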
<br />Superb! I found ANT a powerful language/tool already, but with a few simple JavaScript snippets you can extend it easily.
<br />Notice, by the way, the use of XSLT in the <a href="https://blog.darwin-it.nl/2018/11/using-ant-to-investigate-jca-adapters.html">Scan JCA adapters files article</a>. You can read XML files as properties, but to do that conveniently you need to transform a file like the sensors.xml in a way that lets you easily reference the properties following the element hierarchy. That is also explained in that article.
<br />I'll go further with my sensors scan script. Maybe I'll write about it when done.Anonymousnoreply@blogger.com0tag:blogger.com,1999:blog-4533777417600103698.post-26072252681114246362020-01-10T14:59:00.003+01:002020-01-10T16:31:39.089+01:00My Weblogic on Kubernetes Cheatsheet, part 1.Last week I had the honour to present at the <a href="https://www.ukougconferences.org.uk/ukoug/frontend/reg/absViewDocumentFE.csp?documentID=1590&eventID=9">UKOUG TechFest 19</a>, together with my 'partner in crime', I think I can say now: <a href="https://www.ukougconferences.org.uk/ukoug/frontend/reg/absViewDocumentFE.csp?documentID=1674&eventID=9">Simon Haslam</a>. We combined our sessions into a <a href="https://ukoug.org/resource/collection/904331A4-A794-4730-AC6B-6AB8B1DD1E36/20191201-Kubernetes_Managed_Weblogic_Revival_-.pdf">part 1</a> and a <a href="https://ukoug.org/resource/collection/904331A4-A794-4730-AC6B-6AB8B1DD1E36/20191201-Kubernetes_Managed_Weblogic_Revival_-.pdf">part 2</a>.<br />
<br />
For me this presentation is the result of having done a workshop at the <a href="https://blog.darwin-it.nl/2019/07/weblogic-under-kubernetes-weblogic.html">PaaSForum in Mallorca</a>, and then reworking that into a setup where I was able to run the MedRec Weblogic sample application against a managed database under Kubernetes.<br />
<br />
<h3>
Kubernetes Weblogic Operator Tutorial</h3>
I already wrote a blog about my workshop at the PaaSForum this year, but Marc Lameriks from Amis did a <a href="https://technology.amis.nl/2019/09/28/deploying-an-oracle-weblogic-domain-on-a-kubernetes-cluster-using-oracle-weblogic-server-kubernetes-operator/">walkthrough on the workshop</a>. It basically comes down to this <a href="https://github.com/nagypeter/weblogic-operator-tutorial/blob/master/tutorials/domain-home-in-image.md">tutorial</a>, which you can do self-paced. Or check out a Meetup in your neighbourhood. If you're in the Netherlands, we'll be happy to organize one, or if you like I could come over to your place and we could set something up. See also the links at the end of <a href="https://ukoug.org/resource/collection/904331A4-A794-4730-AC6B-6AB8B1DD1E36/20191201-Kubernetes_Managed_Weblogic_Revival_-.pdf">part 2</a> of our presentations for more info on the tutorial.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://1.bp.blogspot.com/-nm0AV28TRkw/XfO8Ckk6KhI/AAAAAAAADPc/-mB5kt16KP0pOK04xcyUtPGr3dMr420DQCNcBGAsYHQ/s1600/2019-12-13%2BMedrecHome.png" imageanchor="1" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img border="0" data-original-height="991" data-original-width="1572" height="201" src="https://1.bp.blogspot.com/-nm0AV28TRkw/XfO8Ckk6KhI/AAAAAAAADPc/-mB5kt16KP0pOK04xcyUtPGr3dMr420DQCNcBGAsYHQ/s320/2019-12-13%2BMedrecHome.png" width="320" /></a></div>
I did the tutorial more or less three times now: once at the PaaSForum, then I re-did it, but deliberately changed namespace names, domain name, etc., just to see where the dependencies are, and actually where the pitfalls are. It's based on my method to get to know an unfamiliar city: deliberately get lost in it. Two years ago we moved to another part of Amersfoort. To get to know my new neighbourhood, I often took another way home than the one I took when I left. And this is basically what I did with the tutorial too.<br />
<br />
The last time I did it was to try to run a more real-life application with an actual database. And therefore I set up a new OKE cluster, this time in a compartment of our own company cloud subscription. Interesting in that is that you work within a compartment just like in a normal customer subscription. Another form of a deliberate D-Tour. But also to set up a database and see that configuration overrides to change your runtime datasource connection pool actually work.<br />
<br />
<br />
<table><tbody>
<tr><td><div class="separator" style="clear: both; text-align: center;">
<a href="https://1.bp.blogspot.com/-IUSzMcKd-HM/XfO8K_HgldI/AAAAAAAADPg/N62GrlZBPTYi3pAJ7oPsDpPvxYCIC0ipgCNcBGAsYHQ/s1600/2019-12-13%2B-Patient%2BProfile.png" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" data-original-height="894" data-original-width="1432" height="199" src="https://1.bp.blogspot.com/-IUSzMcKd-HM/XfO8K_HgldI/AAAAAAAADPg/N62GrlZBPTYi3pAJ7oPsDpPvxYCIC0ipgCNcBGAsYHQ/s320/2019-12-13%2B-Patient%2BProfile.png" width="320" /></a></div>
</td><td><div class="separator" style="clear: both; text-align: center;">
<a href="https://1.bp.blogspot.com/-ldmsWOAMWpc/XfO8K5bn2BI/AAAAAAAADPo/-5G2ZzpclCoL_ghBWCiCF713_2JNOy_cwCNcBGAsYHQ/s1600/2019-12-13%2BMedRecPations-Record%2BPrescriptions.png" imageanchor="1" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img border="0" data-original-height="832" data-original-width="1393" height="191" src="https://1.bp.blogspot.com/-ldmsWOAMWpc/XfO8K5bn2BI/AAAAAAAADPo/-5G2ZzpclCoL_ghBWCiCF713_2JNOy_cwCNcBGAsYHQ/s320/2019-12-13%2BMedRecPations-Record%2BPrescriptions.png" width="320" /></a></div>
</td></tr>
<tr><td colspan="2"><div class="separator" style="clear: both; text-align: center;">
<a href="https://1.bp.blogspot.com/-GYu9owMcQ5M/XfO8KyuiSJI/AAAAAAAADPk/bS4-RzT3-D0U4jME1NlY4T6EAXthUWikgCNcBGAsYHQ/s1600/2019-12-13%2BMedRecPations-Record%2BSummary.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="792" data-original-width="1362" height="186" src="https://1.bp.blogspot.com/-GYu9owMcQ5M/XfO8KyuiSJI/AAAAAAAADPk/bS4-RzT3-D0U4jME1NlY4T6EAXthUWikgCNcBGAsYHQ/s320/2019-12-13%2BMedRecPations-Record%2BSummary.png" width="320" /></a></div>
</td></tr>
</tbody></table>
<br />
<br />
<br />
<h3>
Cheatsheet</h3>
When doing the tutorial, you'll find that besides all the configuration on the Cloud pages to set up your OKE cluster and configure Oracle Pipelines, you'll have to enter a lot of command-line commands. Most of them are <i>kubectl</i> commands, some <i>helm</i>, and a bit of OCI command-line interface. Doing it the first time I soon got lost in their meaning and what I was doing with them. Also, most <i>kubectl</i> commands work with namespaces, where your Weblogic domain has another namespace than the Weblogic Operator. And as is my habit nowadays, I soon put the commands in smart but simple scripts. And those I want to share with you. Maybe not all, but at least enough so you'll get the idea.<br />
<br />
I also found the official <a href="https://kubernetes.io/docs/reference/kubectl/cheatsheet/">kubernetes.io kubectl cheat sheet</a> and this <a href="https://github.com/RehanSaeed/Kubernetes-Cheat-Sheet">one on GitHub</a>. But those are more explanations of the particular commands.<br />
<br />
I found it helpful to set up this cheatsheet following the <a href="https://github.com/nagypeter/weblogic-operator-tutorial/blob/master/tutorials/domain-home-in-image.md">tutorial</a>. I guess this helps in relating the commands to what they're meant for. 
<h4>
Shell vs. Alias</h4>
At the UKOUG TechFest, someone pointed out that you could use aliases too. Of course. You could define an alias like<br />
<pre class="brush:bash">alias k=kubectl</pre>
<br />
However, you'll still need to extend every command with the proper namespace, pod naming, etc.
<br />
Therefore, I used the approach of creating an <i>oke_env.sh</i> script that I can include in every script, and a property file to store the credentials to put in secrets. I then call (source) the <i>oke_env.sh</i> script in every other script.<br />
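To illustrate that pattern in isolation (a self-contained sketch with a made-up file under /tmp): sourcing a script with the dot operator runs it in the current shell, so its exported variables are available to every command that follows.

```shell
# Write a minimal environment script to a temporary location.
cat > /tmp/oke_env_demo.sh << 'EOF'
export K8S_NS="medrec-weblogic-operator-ns"
EOF
# Source it: the dot runs the script in the current shell,
# so K8S_NS is now set here, not in a child process.
. /tmp/oke_env_demo.sh
echo "$K8S_NS"   # medrec-weblogic-operator-ns
```
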
<h3>
Setup Oracle Kubernetes Engine instance on Oracle Cloud Infrastructure</h3>
These scripts refer to the first part of the tutorial: <a href="https://github.com/nagypeter/weblogic-operator-tutorial/blob/master/tutorials/setup.oke.md">0. Setup Oracle Kubernetes Engine instance on Oracle Cloud Infrastructure</a>.<br />
<h4>
oke_env.sh</h4>
It all starts with my <i>oke_env.sh</i>. Here you'll find all the variables that are used in most other scripts. I think in a next iteration I would move the OCID_USER, OCID_TENANCY and OCID_CLUSTERID variables to my credential properties file. But I introduced that later on, during my experiments.<br />
<br />
<pre class="brush:bash">#!/bin/bash
echo Set OKE Environment
export OCID_USER="ocid1.user.oc1..{here goes that long string of characters}"
export OCID_TENANCY="ocid1.tenancy.oc1..{here goes that other long string of characters}"
export OCID_CLUSTERID="ocid1.cluster.oc1.eu-frankfurt-1.{yet another long string of characters}"
export REGION="eu-frankfurt-1" # or your other region
export CLR_ADM_BND=makker-cluster-admin-binding
export K8S_NS="medrec-weblogic-operator-ns"
export K8S_SA="medrec-weblogic-operator-sa"
export HELM_CHARTS_HOME=/u01/content/weblogic-kubernetes-operator
export WL_OPERATOR_NAME="medrec-weblogic-operator"
export WLS_DMN_NS=medrec-domain-ns
export WLS_USER=weblogic
export WLS_DMN_NAME=medrec-domain
export WLS_DMN_CRED=medrec-domain-weblogic-credentials
export OCIR_CRED=ocirsecret
export WLS_DMN_YAML=/u01/content/github/weblogic-operator-medrec-admin/setup/medrec-domain/domain.yaml
export WLS_DMN_UID=medrec-domain
export MR_DB_CRED=mrdbsecret
export ADM_POD=medrec-domain-adminserver
export MR1_POD=medrec-domain-medrec-server1
export MR2_POD=medrec-domain-medrec-server2
export MR3_POD=medrec-domain-medrec-server3
export DMN_HOME=/u01/oracle/user_projects/domains/medrec-domain
export LCL_LOGS_HOME=/u01/content/logs
export ADM_SVR=AdminServer
export MR_SVR1=medrec-server1
export MR_SVR2=medrec-server2
export MR_SVR3=medrec-server3</pre>
<br />
<h4>
credentials.properties</h4>
<br />
This stores the most important credentials. That allows me to abstract those from the scripts. However, as mentioned, I should move the OCID_USER, OCID_TENANCY and OCID_CLUSTERID variables to this file.
<br />
<pre class="brush:bash">weblogic.user=weblogic
weblogic.password=welcome1
ocir.user=my.email@address.nl
ocir.password=my;difficult!pa$$w0rd
ocir.email=my.email@address.nl
oci.tenancy=ourtenancy
oci.region=fra
db.medrec.username=MEDREC_OWNER
db.medrec.password=MEDREC_PASSWORD
db.medrec.url=jdbc:oracle:thin:@10.11.12.13:1521/pdb1.subsomecode.medrecokeclstr.oraclevcn.com</pre>
<br />
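Since the scripts are bash and this is a Java-style properties file, a little parsing is needed to use the values in a script. A hypothetical helper (the function name and the temporary file below are my own invention, not from the original setup) could look like:

```shell
# Read a value from a Java-style properties file:
# $1 = property name, $2 = properties file.
get_prop() {
  grep "^${1}=" "${2}" | head -1 | cut -d'=' -f2-
}

# Example against a minimal properties file:
echo "weblogic.user=weblogic" > /tmp/credentials.properties
get_prop weblogic.user /tmp/credentials.properties   # weblogic
```

The <i>-f2-</i> keeps everything after the first '=', so passwords that themselves contain '=' characters survive intact.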
<h4>
create_kubeconfig.sh</h4>
After having set up the OKE cluster in OCI and configured your OCI CLI, the first actual command you issue is to create a Kube Config file, using the OCI CLI. This one is executed only once, normally, for every setup. So this script is merely there to document my commands:<br />
<br />
<pre class="brush:bash">#!/bin/bash
SCRIPTPATH=$(dirname $0)
#
. $SCRIPTPATH/oke_env.sh
echo "Create Kubeconfig -> Copy command from Access Kube Config from cluster"
mkdir -p $HOME/.kube
oci ce cluster create-kubeconfig --cluster-id $OCID_CLUSTERID --file $HOME/.kube/config --region $REGION --token-version 2.0.0
</pre>
<br />
The SCRIPTPATH variable declaration is a trick to be able to refer to other scripts relative to that variable. Then, as you will see in all my subsequent scripts, I source the <i>oke_env.sh</i> script here. Doing so I can refer to the particular variables in the oci command. Therefore, as described in the tutorial, you should note down your <i>OCID_CLUSTERID</i> and update that in the <i>oke_env.sh</i> file, as well as the <i>REGION</i> variable.<br />
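To make the trick concrete: <i>dirname</i> strips the last path component, so applied to <i>$0</i> (the path the script was invoked with) it yields the script's own folder, wherever you call it from. For example (with a made-up path):

```shell
# dirname drops the last path component, leaving the containing folder.
SCRIPTPATH=$(dirname "/u01/content/scripts/create_kubeconfig.sh")
echo "$SCRIPTPATH"   # /u01/content/scripts
```
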
<br />
Note by the way, that recently Oracle Kubernetes Engine upgraded to only support the Kubeconfig token version 2.0.0. See also this <a href="https://docs.cloud.oracle.com/iaas/Content/ContEng/Tasks/contengdownloadkubeconfigfile.htm">document</a>.<br />
<h4>
getnodes.sh</h4>
This one is a bit dumb, and could as easily be created by an alias:<br />
<pre class="brush:bash">#!/bin/bash
SCRIPTPATH=$(dirname $0)
#
. $SCRIPTPATH/oke_env.sh
echo Get K8s nodes
kubectl get node</pre>
<br />
Even the call to <i>oke_env.sh</i> doesn't really add anything, but it is a base for the other scripts, and when namespaces need to be added it makes sense.
<br />
<br />
<h4>
create_clr_rolebinding.sh</h4>
The last part of setting up the OKE cluster is to create a role binding. This is done with:<br />
<pre class="brush:bash">#!/bin/bash
SCRIPTPATH=$(dirname $0)
#
. $SCRIPTPATH/oke_env.sh
echo Create cluster role binding
echo kubectl create clusterrolebinding $CLR_ADM_BND --clusterrole=cluster-admin --user=$OCID_USER
kubectl create clusterrolebinding $CLR_ADM_BND --clusterrole=cluster-admin --user=$OCID_USER
</pre>
<br />
<h3>
Install WebLogic Operator</h3>
The second part of the tutorial is about setting up your project environment with GitHub and having Oracle Pipelines build your project's image. This is not particularly related to K8S, so no relevant scripts there. <br />
The next part of the tutorial is about installing the operator: <a href="https://github.com/nagypeter/weblogic-operator-tutorial/blob/master/tutorials/domain-home-in-image.md">2. Install WebLogic Operator</a>. <br />
<h4>
create_kubeaccount.sh</h4>
Installing the Weblogic Operator is done using Helm. As far as I have understood, Helm is a sort of package manager for Kubernetes. A funny thing in the naming is that where Kubernetes is Greek for the steering officer on a ship, a helm is the steering device of a ship. It makes use of Tiller, the server-side part of Helm. A tiller is the "steering stick" or lever that operates the helm. (To be honest, to me it feels a bit the other way around; I guess I would have named the server side Helm and the client Tiller.)<br />
<br />
The first step is to create a Helm cluster-admin role binding, a Kubernetes namespace for the Weblogic Operator and a service account within this namespace. To do so, the <i>create_kubeaccount.sh</i> script does the following:<br />
<pre class="brush:bash">#!/bin/bash
SCRIPTPATH=$(dirname $0)
#
. $SCRIPTPATH/oke_env.sh
echo Create helm-user-cluster-admin-role
cat << EOF | kubectl apply -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: helm-user-cluster-admin-role
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: default
  namespace: kube-system
EOF
echo Create namespace $K8S_NS
kubectl create namespace $K8S_NS
echo kubectl create serviceaccount -n $K8S_NS $K8S_SA
kubectl create serviceaccount -n $K8S_NS $K8S_SA
</pre>
<br />
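The <i>cat &lt;&lt; EOF | kubectl apply -f -</i> construct feeds the inline manifest to kubectl through a pipe; the final <i>-f -</i> tells kubectl to read the resource definition from standard input. The heredoc mechanics can be seen with plain <i>cat</i>, without needing a cluster:

```shell
# A heredoc pipes the literal lines between the EOF markers
# into the stdin of the command it is attached to.
cat << EOF
kind: ClusterRoleBinding
name: helm-user-cluster-admin-role
EOF
```
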
<h4>
install_weblogic_operator.sh</h4>
<br />
Installing the Weblogic Operator is done with this script. Notice that you need to execute the helm command within the folder in which you checked out the Weblogic Operator GitHub repository.
<br />
<pre class="brush:bash">#!/bin/bash
SCRIPTPATH=$(dirname $0)
#
. $SCRIPTPATH/oke_env.sh
echo Install Weblogic Operator $WL_OPERATOR_NAME
cd $HELM_CHARTS_HOME
helm install kubernetes/charts/weblogic-operator \
--name $WL_OPERATOR_NAME \
--namespace $K8S_NS \
--set image=oracle/weblogic-kubernetes-operator:2.3.0 \
--set serviceAccount=$K8S_SA \
--set "domainNamespaces={}"
cd $SCRIPTPATH</pre>
<br />
The script <i>cd</i>'s to the Weblogic Operator local repository and executes <i>helm</i>. At the beginning of the script, the script's own folder is saved in <i>SCRIPTPATH</i>; after running the <i>helm</i> command, it does a <i>cd</i> back to it.
<br />
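An alternative to saving the folder and <i>cd</i>'ing back is to run the directory-sensitive command in a subshell: the parentheses confine the <i>cd</i> to the subshell, so no explicit return is needed. A sketch, with a temporary folder standing in for the Helm charts home:

```shell
# The cd only affects the subshell between the parentheses;
# the calling shell stays where it was.
mkdir -p /tmp/helm-charts-demo
start_dir=$(pwd)
( cd /tmp/helm-charts-demo && echo "would run helm here, in: $(pwd)" )
# Back here, the working directory is unchanged.
```
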
<h4>
delete_weblogic_operator.sh</h4>
During my investigations the Weblogic Operator was upgraded. If you take a closer look at the command in the <a href="https://github.com/nagypeter/weblogic-operator-tutorial/blob/master/tutorials/install.operator.md">tutorial</a>, you'll notice that the image used there is <i>oracle/weblogic-kubernetes-operator:<b>2.0</b></i>, but I used <i>oracle/weblogic-kubernetes-operator:<b>2.3.0</b></i> in the script above.<br />
I found it useful to be able to delete the operator so I can re-install it again. To delete the Weblogic Operator, run the <i>delete_weblogic_operator.sh</i> script:<br />
<br />
<pre class="brush:bash">#!/bin/bash
SCRIPTPATH=$(dirname $0)
#
. $SCRIPTPATH/oke_env.sh
echo Delete Weblogic Operator $WL_OPERATOR_NAME
cd $HELM_CHARTS_HOME
helm del --purge $WL_OPERATOR_NAME
cd $SCRIPTPATH
</pre>
<br />
Again, in this script the <i>helm</i> command is surrounded by a cd to the helm charts folder of the Weblogic Operator local GitHub repository, and back again to the original folder.<br />
<br />
<h4>
getpods.sh</h4>
After having installed the Weblogic Operator, you can list the pods of the kubernetes namespace it runs in, using this script:<br />
<pre class="brush:bash">#!/bin/bash
SCRIPTPATH=$(dirname $0)
#
. $SCRIPTPATH/oke_env.sh
echo Get K8s pods for $K8S_NS
kubectl get po -n $K8S_NS
</pre>
<br />
<h4>
list_wlop.sh</h4>
You can check the Weblogic Operator installation by performing a <i>helm list</i> of the Weblogic Operator charts. I wrapped that into this script:<br />
<pre class="brush:bash">#!/bin/bash
SCRIPTPATH=$(dirname $0)
#
. $SCRIPTPATH/oke_env.sh
echo List Weblogic Operator $WL_OPERATOR_NAME
cd $HELM_CHARTS_HOME
helm list $WL_OPERATOR_NAME
cd $SCRIPTPATH
</pre>
<br />
<h3>
Conclusion</h3>
If you have followed the workshop, and maybe used my scripts, up till now you have installed the Weblogic Operator. Let's not make this article too long; let's call this Part 1, and quickly move on to Part 2, to install, configure and monitor the rest of the setup. Maybe at the end I'll move these contents to an easily navigable set of articles.<br />
<br />
<br />Anonymousnoreply@blogger.com0tag:blogger.com,1999:blog-4533777417600103698.post-91513941365751439742019-12-02T19:31:00.000+01:002019-12-03T17:46:16.772+01:00Create a Vagrant box with Oracle Linux 7 Update 7 Server with GUI<div class="separator" style="clear: both; text-align: center;">
<a href="https://1.bp.blogspot.com/-S_Dw0LOgPCo/XeVZGSY1buI/AAAAAAAADOU/B4Y_GR2V1G8pX8bECMTQacR4sJQB1-tfwCNcBGAsYHQ/s1600/Penguin.png" imageanchor="1" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img border="0" data-original-height="280" data-original-width="320" height="175" src="https://1.bp.blogspot.com/-S_Dw0LOgPCo/XeVZGSY1buI/AAAAAAAADOU/B4Y_GR2V1G8pX8bECMTQacR4sJQB1-tfwCNcBGAsYHQ/s200/Penguin.png" width="200" /></a></div>
Yesterday and today I have been attending the UKOUG TechFest '19 in Brighton. And it got me eager to try things out, for instance with the new Oracle DB 19c features. For that I should update my Vagrant boxes to be able to install one. But I realized my base box is still on Oracle Linux 7U5, and so I wanted to have a fresh, latest OL 7U7 box.<br />
<h3>
Use Oracle's base box</h3>
Now, last year I wrote about how to create your own Vagrant base box: <a href="https://blog.darwin-it.nl/2018/04/oracle-linux-7-update-5-is-out-time-to.html">Oracle Linux 7 Update 5 is out: time to create a new Vagrant Base Box</a>. So I could create my own, but quite some time ago I found out that Oracle supplies those base boxes itself.<br />
<br />
They're made available at <a href="https://yum.oracle.com/boxes">https://yum.oracle.com/boxes</a>, and there are boxes for OL6, OL7 and even OL8. I want to use OL 7U7, and thus I got started with that one. It's neatly described at the mentioned link, and it all comes down to:<br />
<br />
<pre class="brush:bash">$ vagrant box add --name <name> <url>
$ vagrant init <name>
$ vagrant up
$ vagrant ssh</pre>
<br />
And in my case:<br />
<br />
<pre class="brush:bash">$ vagrant box add --name ol77 https://yum.oracle.com/boxes/oraclelinux/ol77/ol77.box
$ vagrant init ol77
$ vagrant up
$ vagrant ssh</pre>
<br />
Before you do that <i>vagrant up</i>, you might want to edit your Vagrantfile to add a name for your VM:<br />
<pre class="brush:ruby">BOX_NAME="ol77"
VM_NAME="ol77"
# All Vagrant configuration is done below. The "2" in Vagrant.configure
# configures the configuration version (we support older styles for
# backwards compatibility). Please don't change it unless you know what
# you're doing.
Vagrant.configure("2") do |config|
# The most common configuration options are documented and commented below.
# For a complete reference, please see the online documentation at
# https://docs.vagrantup.com.
# Every Vagrant development environment requires a box. You can search for
# boxes at https://vagrantcloud.com/search.
config.vm.box = BOX_NAME
...
# Provider-specific configuration so you can fine-tune various
# backing providers for Vagrant. These expose provider-specific options.
# Example for VirtualBox:
#
config.vm.provider "virtualbox" do |vb|
vb.name = VM_NAME
# # Display the VirtualBox GUI when booting the machine
# vb.gui = true
#
# # Customize the amount of memory on the VM:
# vb.memory = "1024"
end
#
...
</pre>
<br />
Otherwise your VM name in VirtualBox would be something like ol7_default_1235897983: something cryptic with a random number.<br />
<br />
If you do a <i>vagrant up</i> now it will boot up nicely.<br />
<br />
<h3>
VirtualBox Guest Additions</h3>
The VirtualBox Guest Additions in the box are from version 6.12, while my VirtualBox installation already has 6.14. I found it handy to have a plugin that auto-updates them. My co-Oracle-ACE Maarten Smeets wrote about that <a href="https://technology.amis.nl/2019/03/23/6-tips-to-make-your-life-with-vagrant-even-better/">earlier</a>. It comes down to executing the following on a command line:<br />
<pre class="brush:plain">vagrant plugin install vagrant-vbguest</pre>
<br />
If you do a <i>vagrant up</i> now, it will update the Guest Additions. However, to be able to do so, it needs to install all kinds of kernel packages to compile the drivers. So be aware that this might take some time, and you'll need an internet connection.<br />
<h3>
Server with GUI</h3>
The downloaded box is a Linux server install, <i>without</i> a UI. This is probably fine for most of the installations you do. But I like to be able to log on to the desktop from time to time, and I want to be able to connect to it using MobaXterm and run a UI-based installer or application. A bit of X support is handy. I found how to do that at this <a href="https://linuxconfig.org/install-gnome-gui-on-rhel-7-linux-server">link</a>.<br />
<br />
GUI support is one of the package groups supported by Oracle Linux 7, and this works exactly the same as on RHEL7 (wonder why that is?).<br />
<br />
To list the available package groups, you can do:<br />
<br />
<pre class="brush:bash">[vagrant@localhost ~]$ sudo yum group list
There is no installed groups file.
Maybe run: yum groups mark convert (see man yum)
Available Environment Groups:
Minimal Install
Infrastructure Server
File and Print Server
Cinnamon Desktop
MATE Desktop
Basic Web Server
Virtualization Host
Server with GUI
Available Groups:
Backup Client
Base
Cinnamon
Compatibility Libraries
Console internet tools
Development tools
E-mail server
Educational Software
Electronic Lab
Fedora Packager
Fonts
General Purpose Desktop
Graphical Administration Tools
Graphics Creation Tools
Hardware monitoring utilities
Haskell
Input Methods
Internet Applications
KDE Desktop
Legacy UNIX Compatibility
MATE
Milkymist
Network Infrastructure Server
Networking Tools
Office Suite and Productivity
Performance Tools
Scientific support
Security Tools
Smart card support
System Management
System administration tools
Technical Writing
TurboGears application framework
Web Server
Web Servlet Engine
Xfce
Done
</pre>
<br />
(After having executed <i>vagrant ssh</i>.)<br />
You'll find '<i>Server with GUI</i>' as one of the options. This will install all the necessary packages to run Gnome. But if you want <i>KDE</i>, there's also a package group for that.<br />
<br />
To install it you would run:<br />
<pre class="brush:bash">[vagrant@localhost ~]$ sudo yum groupinstall 'Server with GUI'
There is no installed groups file.
Maybe run: yum groups mark convert (see man yum)
Resolving Dependencies
--> Running transaction check
---> Package ModemManager.x86_64 0:1.6.10-3.el7_6 will be installed
--> Processing Dependency: ModemManager-glib(x86-64) = 1.6.10-3.el7_6 for package: ModemManager-1.6.10-3.el7_6.x86_64
--> Processing Dependency: libmbim-utils for package: ModemManager-1.6.10-3.el7_6.x86_64
--> Processing Dependency: libqmi-utils for package: ModemManager-1.6.10-3.el7_6.x86_64
--> Processing Dependency: libqmi-glib.so.5()(64bit) for package: ModemManager-1.6.10-3.el7_6.x86_64
....
....
python-firewall noarch 0.6.3-2.0.1.el7_7.2 ol7_latest 352 k
systemd x86_64 219-67.0.1.el7_7.2 ol7_latest 5.1 M
systemd-libs x86_64 219-67.0.1.el7_7.2 ol7_latest 411 k
systemd-sysv x86_64 219-67.0.1.el7_7.2 ol7_latest 88 k
Transaction Summary
========================================================================================================================
Install 303 Packages (+770 Dependent packages)
Upgrade ( 7 Dependent packages)
Total download size: 821 M
Is this ok [y/d/N]:
</pre>
<br />
It will list a whole bunch of packages with dependencies that it will install. If you're up to it, at this point you confirm with 'y'. Notice that a bit over a thousand packages will be installed, so it will be busy with that for a while.<br />
This is because it will install the complete Gnome desktop environment.
<br />
You could also do:<br />
<pre class="brush:bash">[vagrant@localhost ~]$ sudo yum groupinstall 'X Window System' 'GNOME'</pre>
<br />
That will install only the minimum necessary packages to run Gnome. I did not try that yet.
<br />
When it has finished installing all the packages, the one thing left is to change the default runlevel, since obviously you want to start in the GUI by default. In most cases, at least, I think.<br />
This is done by:
<br />
<pre class="brush:bash">[vagrant@localhost ~]$ sudo systemctl set-default graphical.target</pre>
<br />
I could have put that in a provision script, like I've done before. And maybe I will do that.<br />
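Such a provision step could be a small inline shell provisioner in the Vagrantfile. A hypothetical, untested sketch (it assumes the guest can reach the Oracle Linux yum repos):

```ruby
  # Hypothetical provisioner: install the GUI group and boot into it by default.
  config.vm.provision "shell", inline: <<-SHELL
    yum -y groupinstall 'Server with GUI'
    systemctl set-default graphical.target
  SHELL
```
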
<h3>
Package the box</h3>
You will have noticed that it takes quite some time to update the kernel packages for installing the latest Guest Additions and the GUI desktop. To prevent us from doing that over and over again, I thought it wise to package the box into an <i>ol77SwGUI</i> box (Server with GUI). I described that in my previous article last year:
<br />
<pre class="brush:plain">vagrant package --base ol77_default_1575298630482_71883 --output d:\Projects\vagrant\boxes\OL77SwGUIv1.0.box</pre>
<br />
<h3>
The result</h3>
<div>
This will deliver you a Vagrant Box/VirtualBox image with:</div>
<div>
<ul>
<li>Provider: VirtualBox</li>
<li>64 bit</li>
<li>2 vCPUs</li>
<li>2048 MB RAM</li>
<li>Minimal package set installed</li>
<li>32 GiB root volume</li>
<li>4 GiB swap</li>
<li>XFS root filesystem</li>
<li>Extra 16GiB VirtualBox disk image attached, dynamically allocated</li>
<li>Guest additions installed</li>
<li>Yum configured for Oracle Linux yum server. _latest and _addons repos enabled as well as _optional_latest, _developer, _developer_EPEL where available.</li>
<li>And as an extra addon: <i>Server with GUI installed</i>.</li>
</ul>
Or basically more or less what I have in my own base box. What I'm less happy with is the 16 GiB extra disk image attached. I want a bigger disk for my installations, or at least for the data. I'll need to figure out what I want to do with that. Maybe I'll add an extra disk and reformat the lot into a disk-spanning Logical Volume based filesystem.<br />
<br />
<h3>
Update</h3>
I found that the box from Oracle lacks video memory to run the VM in GUI mode. Because the GUI wasn't in the VM, 8 MB was sufficient.<br />
I added the following to change the video memory:<br />
<pre class="brush:ruby"> config.vm.provider "virtualbox" do |vb|
vb.name = VM_NAME
# # Display the VirtualBox GUI when booting the machine
# vb.gui = true
#
# # Customize the amount of memory on the VM:
# vb.memory = "1024"
#
# https://stackoverflow.com/questions/24231620/how-to-set-vagrant-virtualbox-video-memory
vb.customize ["modifyvm", :id, "--vram", "128"]
end</pre>
<br />
Based on the hint found <a href="https://stackoverflow.com/questions/24231620/how-to-set-vagrant-virtualbox-video-memory">here on StackOverflow</a>.<br />
I also added this on my <a href="https://github.com/makker-nl/vagrant/tree/master/ol77">GitHub vagrant project</a>.<br />
<br />
<br /></div>
Anonymousnoreply@blogger.com0