Your Guide to Portal Clustering in WebSphere Portal Server 5.1


We must now tell our new Portal node to talk to the same configuration database that the Portal on WAS1 is using. To do this, we need to edit the wpconfig.properties file on WAS2. We want the database-related parameters to match what we configured on WAS1. Refer to the InfoCenter for the list of relevant parameters.
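
To make this concrete, here is a rough sketch of what the database section on WAS2 might contain for a remote DB2 configuration. The property names and values below are illustrative assumptions only; the exact parameter list depends on your database type and Portal release, so verify it against the InfoCenter and copy the actual values from the wpconfig.properties file on WAS1.

# Hypothetical database settings -- these must match the values already in use on WAS1
DbType=db2
DbName=wpsdb
DbUrl=jdbc:db2:wpsdb
DbUser=db2admin
DbPassword=ReplaceWithRealPassword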

Once the database parameters are configured in the wpconfig.properties file, put them into effect by executing:
WPS_HOME\config\WPSconfig.bat connect-database
This command will hook us up so that both Portals are reading the same configuration information.

At this point we have two Portals installed, both secured using LDAP, both using the same remote database to store the config data, and both ready to be clustered.

Cluster Time
The Portal cluster is defined using the Deployment Manager Admin Console. Log into the console and navigate to the Servers > Clusters section of the navigation tree.
  1. Click New.
     a. Define the cluster name.
     b. Check the box Prefer local enabled.
     c. Check the box Create Replication Domain for this cluster.
     d. Check the box for “Select an existing server to add to this cluster” and then choose server WebSphere_Portal on node WAS1 from the list.
     e. Check the box Create Replication Entry in this Server.
     f. Click Next.
  2. Create the second cluster member.
     a. Define the name of the cluster member. Make sure it’s different from the name of the first cluster member above.
     b. Select node WAS2.
     c. Uncheck the box Generate Unique HTTP Ports.
     d. Check the box Create Replication Entry in this Server.
     e. Click Apply and then click Next to view the summary.
  3. To create the new cluster, click Finish.
  4. Save the changes.
The cluster is created. See, the actual cluster creation is quick and easy.

When we create the second cluster member, the Deployment Manager is actually copying portlets and other configuration files from WAS1 to WAS2. It’s synchronizing the nodes so they both contain the same application information. This is why we didn’t have to install portlets during the Portal install on WAS2.

You may be wondering why we created replication entries in this step. One of the chief benefits of the Network Deployment topology is that it lets our cluster members on WAS1 and WAS2 share information, both session information and dynacache information. With this step, all members in the defined Replication Domain that have Replication Entries will be able to share this valuable information.

If you’re load-balancing these Portals, having the user session shared by all the cluster members would be part of a high-availability configuration. If one Portal goes down, the other still has your session data active in memory.

Tidy Up
But of course it couldn’t be THAT simple. There are a few other tasks to do to ensure our cluster operates smoothly.

First, to be able to deploy portlets properly, we must edit a file located on each of our Portal servers. The file is called DeploymentService.properties and it is located in WPS_HOME\shared\app\config\services.

Open this file and set the wps.appserver.name property to the name of the cluster you defined in step 1a above. Save and close this file. We’re done with it.
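
For example, if the cluster were named PortalCluster (a placeholder name; use whatever you entered in step 1a), the relevant line in DeploymentService.properties would read:

# Deploy portlets to the cluster rather than to a single server (PortalCluster is a placeholder)
wps.appserver.name=PortalCluster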

Next we must update the wpconfig.properties file on each Portal node with the correct cluster member information. The server name is no longer what it was when we installed the Portal. Each cluster member has a specific name that we need to use. So open up the wpconfig.properties file and edit the ServerName property. Set this to match the cluster member name used in the Deployment Manager. To determine the cluster member name, click Servers > Cluster Topology in the Deployment Manager Admin Console and expand the cluster to view the cluster members.
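
As an illustration, if the Cluster Topology view showed the member on WAS1 as WebSphere_Portal_WAS1 (a made-up name; yours will differ), the entry in that node's wpconfig.properties would be:

# Cluster member name exactly as it appears in the Deployment Manager (placeholder value)
ServerName=WebSphere_Portal_WAS1
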
Last, we must enable our Portals to accept the dynamic cache replication that we enabled when we created the cluster. In a normal Web application you wouldn’t have to do this, but this isn’t a normal Web application.

If this step isn’t completed, situations could arise in which users have different views or different access rights, depending on which cluster member handles the user’s request.

On WAS1, execute the following:
WPS_HOME\config\WPSconfig.bat action-set-dynacache -DserverName=cluster_member -DReplicatorName=replicator_name
In this syntax, the value of cluster_member is the name of the cluster member to update. In this case it’s the cluster member on WAS1. The value of replicator_name is the name of the cluster member with which to replicate, in this case the cluster member on WAS2.

Be sure to run the same command (with the values reversed) on the WAS2 node.
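
To make "the values reversed" concrete, here is a sketch using the same hypothetical member names as above; substitute the real names from your Cluster Topology view.

REM On WAS1: this member replicates with the member on WAS2
WPS_HOME\config\WPSconfig.bat action-set-dynacache -DserverName=WebSphere_Portal_WAS1 -DReplicatorName=WebSphere_Portal_WAS2

REM On WAS2: the same command with the values reversed
WPS_HOME\config\WPSconfig.bat action-set-dynacache -DserverName=WebSphere_Portal_WAS2 -DReplicatorName=WebSphere_Portal_WAS1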

After making the configuration changes detailed above, it would be a good idea to do a Full Synchronization of all the changes. This will instruct the Deployment Manager to copy any and all changes out to the two federated nodes via the nodeagents.

In the Admin Console, select System Administration > Nodes, select the two nodes from the list, and click Full Resynchronize. The Admin Console will display a message indicating that a request for full synchronization has succeeded. Be sure to check the status messages in the Runtime Messages panel at the bottom of the screen to confirm that the request completed successfully.

We haven't really talked about a remote HTTP server to handle static content, or the accompanying installation and configuration of the WebSphere Plugin, but that's really incidental to clustering the Portal. Suffice it to say that the Plugin located on the remote Web servers will contain information about the two Portal cluster members and route traffic to them based on the policies set in the cluster administration area of the Admin Console.

HTTP Session Replication
When dealing with a cluster, once a user creates a session with a portal by logging in, the user is returned to that WebSphere Portal cluster member for the rest of his session. There’s a portlet in the global settings section of the Portal Administration panel that will tell you which cluster member is handling the current session.

The cluster member currently handling the session is referred to as the session owner. If this cluster member fails, then the Plugin on the Web servers will route the next request to another cluster member. The new cluster member either retrieves the session from a server that has the backup copy of the session or it retrieves the session from its own backup copy table. This server then becomes the owner of the session and affinity is maintained with this server.

Whenever a session is modified in any of the cluster members, that session data is replicated to each of the other members of that cluster. So in our example, WAS1 would replicate the session to WAS2. By default, in a cluster, sessions are replicated to each of the cluster members that use the same replication domain (which was defined when the cluster was created).

However, because this gets defined during the creation of the cluster and not the cluster member (AppServer) itself, we must go back and tell the Web container on each cluster member’s AppServer that it should use this replication domain to store session data.

To enable this “memory-to-memory” session replication:
  • Click Application Servers > WebSphere_Portal > Web Container > Session Management
  • Click Distributed Environment Settings under Additional Properties.
  • Select Memory to Memory Replication.
  • Click Apply.
  • Repeat these steps for each cluster member.
Save the changes to the Deployment Manager master config and synchronize the changes. Restart the cluster members so replication can take effect. You are all set. No need to worry about losing a session.

Clustered!
Now that you have a functioning cluster of Portal servers, you may be tempted to believe that nothing will ever go wrong and that your life as a Portal admin is worry-free. You have every reason to believe this. Your Portal cluster is highly available, it runs on the industry-leading application server, and it is extremely powerful and flexible. But there are some inherent fragilities to be aware of, and we'll discuss them next month!

Hopefully you’ll come back to hear about some practical techniques for managing real-world portal clusters. This is stuff that you probably won’t find in the InfoCenter.

Speaking of the InfoCenter, there’s good information about establishing a Portal cluster at http://publib.boulder.ibm.com/pvc/wp/510/ent/en/InfoCenter/wpf/clus_inst...

More Stories By Chris Lockhart

Chris Lockhart is a senior technical resource at Perficient, a firm recognized for its expertise around IBM technologies. Chris has worked with IBM's WebSphere, Tivoli and Lotus Software platforms for more than 6 years. For more information, please visit www.perficient.com
