Wednesday, June 15, 2011

OSB with JMS Queues

I don't get to play with Oracle Service Bus much anymore, but I always enjoy the chance when it comes along. Right now that chance has arrived in the form of JMS messaging.
OSB makes message management easy: whether you are working with queues or topics, messages can be enqueued or dequeued with simple services. Let me show you how.

Here is the use case:
  1. Dequeue a message on a JMS queue
  2. Write message payload to a report
  3. Enqueue a new message on another queue, possibly after transformation, for another service/system to collect, and,
  4. Make the write action call-able via a Web Service interface.
The only gotcha with OSB is this simple rule.

Business Services only Enqueue messages
Proxy Services only Dequeue messages

Understanding, remembering and applying this principle makes the whole thing easy.

I have created two WebLogic Server JMS queues which I will use in this example: andrewq1 and andrewq2.

Let's begin.
1. Log in to the OSB console and create whatever folder structure you like.
2. Create a Business Service to write (enqueue) the message payload using the JMS transport adapter. Make sure you use the jms format, e.g. jms://192.168.13.136:8011/jms.andrewconnfactory/jms.andrewq2
I named mine writeQ.
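
The JMS endpoint follows the pattern jms://host:port/connection-factory-JNDI-name/destination-JNDI-name. A quick sketch of how the endpoint above is assembled (the host, port and JNDI names are from my environment; substitute your own):

```shell
# Build an OSB JMS endpoint URI: jms://<host>:<port>/<conn-factory-jndi>/<queue-jndi>
# Host, port and JNDI names below are from this example's environment.
HOST=192.168.13.136
PORT=8011
CONN_FACTORY=jms.andrewconnfactory
QUEUE=jms.andrewq2

ENDPOINT="jms://${HOST}:${PORT}/${CONN_FACTORY}/${QUEUE}"
echo "$ENDPOINT"
# -> jms://192.168.13.136:8011/jms.andrewconnfactory/jms.andrewq2
```

The same pattern, with andrewq1 as the destination, gives the Proxy Service endpoint used below.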


3. Enable the Operational Settings for Monitoring and Tracing. Save and Submit.
4. Create a Proxy Service now to read (dequeue) the JMS messages.
5. Much the same as the Business Service, configure the message payload to use the JMS transport adapter. Make sure you use the jms format, e.g. jms://192.168.13.136:8011/jms.andrewconnfactory/jms.andrewq1
I named mine readQ.




6. Enable the Operational Settings for Monitoring and Tracing. Save and Submit. 
7. Now we need to modify the Message Flow so the readQ Proxy Service knows what to do and writes out a message report so we can view the payload via the OSB operations dashboard.
8. Create a Pipeline pair and add a stage as shown.


9. Modify the stage actions to write out the dequeued payload to a report, and then route the payload to the queue andrewq2 using the writeQ business service. We could also handle some transformation etc. here, but that's for another post :-)


10. That's it for OSB. Save and Submit your changes. Let's start testing!
11. We can cheat for the moment and use the WebLogic JMS message queue utility available from the Queue Monitoring tab. We could also write a Business Service to put a message on this queue (we have created one like it already), but this way adds another option to this example.
12. From the Monitoring tab, select Show Messages on the outbound queue.


13. Select the New button. You can also see any queued messages sitting here, though there won't be any: OSB is now polling this queue and will dequeue each message as soon as it is written.


14. Create a new message and make sure you set the message type to be TextMessage.




15. That will create a message on the first queue; OSB will collect it, write the payload out to a report, and write the message to the second queue. To view the report, select Message Reports from the Operations Dashboard.




16. Selecting this message will show you the payload details and the transaction information.


17. When you select Details you will see the XML payload.


18. Now go to the second queue (mine was called andrewq2) and you will see some messages queued, awaiting collection.




19. And again you can select a message to see the contents. No surprise here, the payload shows what I typed in earlier.




And that's it. Well, there is a lot more that could and probably should be done, but this is just a taster, and you can extend what you have created as you please.

Enjoy, and don't forget to look in sometimes on WebCenter's very capable integration cousin - OSB.

Thursday, May 19, 2011

Creating Business Mashups using a web service in WebCenter PS3

One key capability in WebCenter PS3 is the ability to create Data Controls and Task Flows at runtime. You can create a data control from SQL or a web service, and wire that data control into a custom task flow, also created at runtime, which uses a Mashup style to render it. We could build different data controls and mash them up via task flows by leveraging the underlying WebCenter schemas.

Let’s build a data control from a web service and render it on a WebCenter Spaces page.

In this example we will create a simple currency converter task flow: we will consume a web service, make a data control out of it, and display it on a Spaces page.

The service url is: http://www.webservicex.net/CurrencyConvertor.asmx?WSDL

Here are the Steps :


1. Logon to your Space as an admin user.
2. From your Space, navigate to Manage > All Settings



3. Go to the Space's Resource Management tab and navigate to Data Controls



4. Then create a Data Control by selecting the web service option: select Create > Web Service. Provide the details for the data control: the Name (Currency Converter), the Description, and the WSDL details. Provide proxy details as well if needed, and click Connect.



5. Select the method to be consumed, then click Create to finish



6. Click Edit and select the "show" attribute so the Data Control is visible from the Composer/Business Dictionary
7. Create a Taskflow with a Blank Mashup style



8. Edit the task flow by selecting Edit > Edit
9. Select Add Content and Select the Mash-Ups folder




10. Select Data Controls




11. Select Currency Converter




12. Select Add > ADF Button




13. Select Conversion Rate to drill into the Currency Converter Data Control

14. Select Add > ADF Output Formatted w/Label




15. Select ToCurrency, then Add > ADF Input Text w/Label

16. Select FromCurrency, then Add > ADF Input Text w/Label

17. Then select Close

18. Select View and choose Source




19. Select the InputText: ConversionRate_ToCurrency element and select Edit
20. Change the Input Text Label to “To Currency” then OK




21. Repeat the process again for the FromCurrency InputText element and change the Label to “From Currency” then OK

22. Now select the panelLabelandReturn text element and change the Label to “Exchange rate” then OK




23. Finally select the commandButton and change the Text to “Get Exchange Rate” then OK

24. Select the Box element then select Edit and order the “To Currency” Input text element to the top then OK




25. Save and Close the Task Flow editor

26. Select Edit > Show to display the Task Flow in the Business Catalogue

27. Now we need to create a new page to display this task flow

28. Select Pages > Create Page, name it MashUp, and select a blank template
29. Select Add Content > Mash-Ups > Task Flows and choose to Add the CurrencyConverterTF task flow to the page.

30. Select Save and Close

31. Run a test: enter USD in the To Currency field and AUD in the From Currency field, then press the Get Exchange Rate button




Gotta love the value of the Australian dollar right now. Time to go shopping :-)

That was a quick tutorial on how to add web service capabilities to your WebCenter deployment.

Many thanks to Vijaykumar Yenne for his original viewlet.

Cheers
Andrew Rosson

Perth, Western Australia

JVM settings: Handy hints

The Java Virtual Machine (JVM) is a virtual “execution engine” instance that executes the bytecodes in compiled Java class files. Java programs are compiled into a form called Java bytecodes. To the JVM, a stream of bytecodes is a sequence of instructions.

Tuning the JVM to achieve optimal application performance is one of the most critical aspects of WebLogic Server performance.

Oracle recommends using:
• Oracle JRockit JVM for production servers
• Sun HotSpot JVM for development servers and for running other WLS utilities

Setting Weblogic Server JVM Arguments:

Using a different JVM after domain creation is possible by setting some WLS JVM arguments.

Start the JVM with custom settings:

export JAVA_VENDOR="Oracle"
export USER_MEM_ARGS="-Xms512m -Xmx1g"
./startWebLogic.sh

  • “Oracle” indicates that you are using the JRockit SDK. It is valid only on platforms that support JRockit.
  • “Sun” indicates that you are using the Sun SDK.
  • “HP” and “IBM” indicate that you are using SDKs that Hewlett Packard or IBM have provided. These values are valid only on platforms that support HP or IBM SDKs.

Basic Sun JVM Arguments :

  • -XX:NewSize (default 2 MB): Default size of new generation (in bytes)
  • -XX:MaxNewSize: Maximum size of new generation (in bytes). Since 1.4, MaxNewSize is computed as a function of NewRatio.
  • -XX:NewRatio (default = 2): Ratio of new to old generation sizes
  • -XX:SurvivorRatio (default = 8): Ratio of Eden size to one survivor space size.
  • -XX:TargetSurvivorRatio (default = 50%): Desired percentage of survivor space used after scavenge
  • -XX:MaxPermSize: Size of the permanent generation
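
Following the same pattern as the startup example above, these Sun flags can be appended to USER_MEM_ARGS; a minimal sketch (the sizes and ratios here are illustrative examples, not tuning recommendations):

```shell
# Sun HotSpot variant of the custom startup settings shown earlier.
# The sizes/ratios are examples only - tune them for your own workload.
export JAVA_VENDOR="Sun"
export USER_MEM_ARGS="-Xms512m -Xmx1g -XX:NewRatio=2 -XX:SurvivorRatio=8 -XX:MaxPermSize=256m"
echo "$USER_MEM_ARGS"
# ./startWebLogic.sh   # then start WebLogic as before
```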

Basic JRockit JVM Arguments:

  • -Xms The initial amount of heap allocated to the JVM
  • -Xmx The maximum amount of heap that this JVM can allocate
  • -Xns Size of the nursery generation in the heap
  • -XgcPrio A priority level that helps to determine which GC algorithms the JVM will use at run time:
      – throughput: Maximize application throughput
      – pausetime: Minimize how long GC runs
      – deterministic: Consistent response times
  • -XXcompactRatio The percentage of the heap that is compacted during each old-space garbage collection

Common JVM Issues:

Out of Memory: JVMs trigger java.lang.OutOfMemoryError when there is insufficient memory to perform some task. An out-of-memory condition can also occur when there is free memory available in the heap but it is too fragmented, and not contiguously located, to store the object being allocated or moved (as part of a garbage collection cycle).

Memory Leaks: A common cause of out-of-memory errors; they can occur because of excessive caching.

JVM Crash: We can identify and troubleshoot a JVM crash using the diagnostic files generated by the JVM. A snapshot is created that captures the state of the JVM process at the time of the error.

This binary file contains information about the entire JVM process and needs to be opened using debugging tools.
The gdb debugging tool, popular on Linux, can extract useful information from core files. The Dr. Watson tool on Windows provides similar capabilities.

On the Sun JVM, the log file is named hs_err_pid<pid>.log, where <pid> is the process ID of the process. JRockit refers to this error log as a “dump” file, named jrockit.<pid>.dump.

Basic JVM Tools:
– Stack Trace
– Thread Dump
– Verbose GC
– Sun Profiler Agent
– Sun Diagnostic Tools
– JVisualVM
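
As a quick example of the first three tools on Unix, a thread dump can be requested from a running server with SIGQUIT, and verbose GC is just another startup flag. A sketch (the pid is hypothetical, and the commands are echoed rather than executed here):

```shell
# Request a thread dump from a running JVM without stopping it.
# SIGQUIT (signal 3) makes HotSpot and JRockit print all thread stacks to stdout.
PID=12345   # hypothetical pid - find yours with ps or jps
DUMP_CMD="kill -3 $PID"
echo "$DUMP_CMD"

# Verbose GC is enabled at startup; with WebLogic, append it to USER_MEM_ARGS:
export USER_MEM_ARGS="-Xms512m -Xmx1g -verbose:gc"
echo "$USER_MEM_ARGS"
```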

Sunday, October 10, 2010

Local jDev deployments to running WebLogic AMI instances.

I have been running some Oracle Fusion Middleware workshops lately, using a mixture of Amazon (AMI) instances and a local install of Oracle jDeveloper, and experienced many problems attempting to deploy a local SOA composite to the remote AMI instance.

I try to deploy to a machine with an external address like ec2-184-73-150-246.compute-1.amazonaws.com, but the SCA deployment is rejected against the internal IP, which is different (e.g. 10.112.31.184).

jDeveloper can connect to the server OK and commence the deployment, but I guess the internal IP is tricking soa-infra into rejecting the deploy activity.

The jDeveloper log looks like this...

[12:23:42 PM] Deploying sca_POProcessing_rev3.2.jar to partition "default" on server AdminServer [10.112.31.184:7001]
[12:23:42 PM] Processing sar=/C:/Temp/Downloads/POProcessing/POProcessing/deploy/sca_POProcessing_rev3.2.jar
[12:23:42 PM] Adding sar file - C:\Temp\Downloads\POProcessing\POProcessing\deploy\sca_POProcessing_rev3.2.jar
[12:23:42 PM] Preparing to send HTTP request for deployment
[12:23:42 PM] Creating HTTP connection to host:10.112.31.184, port:7001
[12:23:42 PM] Sending internal deployment descriptor
[12:23:42 PM] Sending archive - sca_POProcessing_rev3.2.jar
[12:24:03 PM] Error sending deployment request to server AdminServer [10.112.31.184:7001] java.net.ConnectException: Connection timed out: connect
[12:24:03 PM] #### Deployment incomplete. ####

Well, to fix this you could muck about with the hostname, or perhaps map the internal IP address to an Elastic IP, but I am pleased to say I eventually found an easier way.

I added the external listen address to the admin server. Easy hey?

Here’s how...
  1. Fire up WebLogic Console in your browser
  2. Create an edit session if necessary
  3. Go to “your domain name” > Environment > Servers and select your SOA deployment server (mine is just AdminServer in the screenshot)
  4. On the Configuration, General tab select the “Advanced” section at the bottom of your page
  5. Enter your AMI address, e.g. ec2-184-73-150-246.compute-1.amazonaws.com, in the “External Listen Address” field
  6. Save your work and yes, I am afraid so, restart your server(s)
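
Before restarting, it is worth confirming that the external address is actually reachable from your workstation on the listen port. A minimal sketch, assuming nc (netcat) is available, using the host and port from this example:

```shell
# Quick reachability check for the deployment target before retrying from jDeveloper.
# Host and port are from this example; substitute your own AMI address.
HOST=ec2-184-73-150-246.compute-1.amazonaws.com
PORT=7001

if nc -z -w 5 "$HOST" "$PORT" 2>/dev/null; then
  STATUS="reachable"
else
  STATUS="not reachable - check the External Listen Address and your EC2 security group"
fi
echo "$HOST:$PORT is $STATUS"
```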

That did the trick and now deployments from my local jDeveloper work fine.

[01:07:14 PM] Preparing to send HTTP request for deployment
[01:07:14 PM] Creating HTTP connection to host:ec2-184-73-150-246.compute-1.amazonaws.com, port:7001
[01:07:14 PM] Sending internal deployment descriptor
[01:07:14 PM] Sending archive - sca_POProcessing_rev3.2.jar
[01:07:36 PM] Received HTTP response from the server, response code=200
[01:07:36 PM] Successfully deployed archive sca_POProcessing_rev3.2.jar to partition "default" on server AdminServer [ec2-184-73-150-246.compute-1.amazonaws.com:7001]
[01:07:36 PM] Elapsed time for deployment: 35 seconds
[01:07:36 PM] ---- Deployment finished. ----

Hope this small tip helps you continue to enjoy the benefits of developing locally in jDeveloper while using WebLogic with SOA, BPM and OSB in the cloud.

Monday, August 17, 2009

WebLogic Pack and Unpack Commands

One small WebLogic utility which I think doesn't get the recognition it deserves is the pack and unpack command.

Sure, you could just Copy + Paste your domain from node to node, but why not achieve the same thing in a more elegant and tidy way? And good luck pasting to your remote EC2 or cloud servers, by the way!

What do the pack and unpack commands do?

The pack and unpack commands provide a quick alternative way to package up your existing WLS domain for transportation and distribution across members of your WLS cluster.

They can be used to do more as well, like creating domains and templates, but we are just going to focus on the features associated with copying existing domains, and specifically WLS Managed Servers.

This is the best way to move domain information, with Managed Server configs etc., to other EC2 machines or to servers in your cloud infrastructure.

The pack command creates a template archive (.jar) file that contains a snapshot of a domain, while the unpack command is used to create a Managed Server domain directory hierarchy on a remote machine.

Pack syntax

pack -domain=domain -template=template -template_name="template_name"
     [-template_author="author"] [-template_desc="description"]
     [-managed={true|false}] [-log=log_file] [-log_priority=log_priority]

You create a Managed Server template by executing the pack command on an existing domain that already includes the definition of one or more Managed Servers. The domain must contain Managed Server definitions in its config.xml file - the ones you specified when you completed the Domain Config Wizard.

You can create the template with the -managed=true option, which should copy additional files to the template, including .cmd, .sh, .xml, .ini and .properties files, but I have found that the default, -managed=false, copies all these files faithfully anyway. Include or exclude this parameter as you wish; your experience may differ from mine, but -managed=false is not a required parameter.

From the machine that contains the Administration Server and the Managed Server definitions, navigate to /bea/wlserver_10.3/common/bin and run:

pack -domain=domain -template=template.jar -template_name="template_name"

Where,
  • domain: The path to the domain from which the template is to be created.
  • template.jar: The path to the template, and the filename of the template to be created.
  • template_name: Descriptive name for the template enclosed in quotes.
For example, executing the following command creates a Managed Server template named mydomain_managed.jar from a domain named mydomain.

pack -domain=/bea/user_projects/domains/mydomain \
     -template=/bea/user_templates/mydomain_managed.jar \
     -template_name="My Managed Server Domain"

Unpack syntax

unpack -template=template -domain=domain [-user_name=username]
       [-password=password] [-app_dir=application_directory]
       [-java_home=java_home_directory] [-server_start_mode={dev|prod}]
       [-log=log_file] [-log_priority=log_priority]

Now this looks a little trickier, but it is not.

Remember you must have installed WebLogic Server onto each machine you want to host a Managed Server, and all WebLogic Server instances within your domain must run the same version of the WebLogic Server software.

You need to establish a session with your destination machine, e.g. over SSH, create a directory /bea/user_templates, and copy the .jar file over to that server directory. Remember your file permissions, and most importantly, the destination server's IP and port information must match what you entered in the Domain Config Wizard when you initially created the domain template.

On the destination machine, navigate to /bea/wlserver_10.3/common/bin and run:

unpack -domain=domain -template=template.jar

Where,
  • domain: The path to the domain to be created.
  • template.jar: The path to the template from which the domain is to be created.
For example, executing the following command using a template named mydomain_managed.jar creates a domain named myManagedDomain.

unpack -domain=/bea/user_projects/domains/myManagedDomain \
       -template=/bea/user_templates/mydomain_managed.jar
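
Putting the two halves together, the whole round trip is only three commands. A sketch with the commands echoed rather than executed, so the sequence is visible (the paths and template name come from the examples above; the remote host name is hypothetical):

```shell
# End-to-end sketch: pack on the admin machine, copy, unpack on the target.
# Commands are echoed, not executed; paths and template name come from the
# examples above, and TARGET is a hypothetical remote cluster member.
DOMAIN=/bea/user_projects/domains/mydomain
TEMPLATE=/bea/user_templates/mydomain_managed.jar
TARGET=managed-host-1

PACK="pack -domain=$DOMAIN -template=$TEMPLATE -template_name=\"My Managed Server Domain\""
COPY="scp $TEMPLATE $TARGET:/bea/user_templates/"
UNPACK="ssh $TARGET unpack -domain=/bea/user_projects/domains/myManagedDomain -template=$TEMPLATE"

printf '%s\n' "$PACK" "$COPY" "$UNPACK"
```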

Conclusion

That's it. You will then be rewarded with a nice new directory structure, created for you with all the important domain pieces in place. Start it up via Node Manager, or with your startManagedWebLogic command.

A nice, simple and quick way to distribute all your managed servers across your domain cluster members, and easier than Copy + Paste.

Well I think so anyway :-)

Thursday, May 1, 2008

Gotta start somewhere.

Hi, this is my first post, just to get the ball rolling.
Have a nice day.