In the previous section, we looked at the different throttling mechanisms WebLogic JMS provides to help offset temporary spikes in message production. In general, you need to design your messaging applications so that the consumers can keep up with the producers over long periods of time. If messaging applications tend to produce more messages than they can consume, eventually the application will fall so far behind that it runs into physical resource constraints such as running out of memory or disk space. In this section, we will discuss some design considerations for messaging applications.
Choosing a Destination Type
When designing a JMS application, a commonly asked question is whether to use queues or topics. Trying to think in terms of destination type often leads to confusion. Instead, you should think about what type of messaging your application requires. In general, point-to-point style messaging should use queues while publish-and-subscribe style messaging should use topics. Of course, there is no hard and fast rule, and it is possible to use either type of destination with most applications if you are willing to do enough work. When using queues, some things to remember are
Each message will be processed by one consumer.
Messages will remain on the queue until they are consumed or expire.
Persistent messages are always persisted.
Using message selectors becomes expensive as the number of messages in the queue gets large.
When using topics, some things to remember are
Each message can be processed by every consumer.
Unless using durable subscriptions, messages will be processed only if at least one consumer is listening at the time the message is sent.
Persistent messages are persisted only when durable subscriptions exist.
Using message selectors with topics becomes expensive as the number of consumers gets large.
Best Practice | Choose your destination type based on the type of messaging. Point-to-point messaging implies queues, and publish-and-subscribe messaging implies topics. |
Locating Destinations
In WebLogic JMS, a destination physically resides on a single server. JMS provides two ways for an application to obtain a reference to a destination. You can look up the destination in JNDI and cast it to the appropriate destination type, Queue or Topic, or use the QueueSession.createQueue(destinationName) or TopicSession.createTopic(destinationName) methods to locate an existing queue or topic. These create methods require an application to pass destination names using a vendor-specific syntax. For WebLogic JMS, the syntax is jms-server-name/jms-destination-name; for example, to obtain a reference to a destination named queue1 that resides in the JMS server named JMSServer1, you would use a destination name of JMSServer1/queue1. When referring to a distributed destination, the JMS server name and forward slash should be omitted because a distributed destination spans JMS servers. We will talk more about distributed destinations later.
Given that the createQueue() and createTopic() methods require specifying the WebLogic JMS server name, we feel that these methods are of limited value because we generally want to hide the location of the destination from the application. Any advantages that might be gained by using these methods rather than JNDI are likely to be offset by requiring the application to understand what JMS server the destination lives in. As a result, we recommend using JNDI to obtain references to JMS destinations. JNDI lookups, though, are relatively expensive so applications should attempt to look up JMS destinations once and cache them for reuse throughout the life of the application.
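A minimal sketch of this lookup-and-cache pattern might look like the following. The JNDI name jms/OrderQueue is an assumed configuration value, not one from the text, and error handling is kept to a minimum:

```java
import javax.jms.Queue;
import javax.naming.InitialContext;
import javax.naming.NamingException;

// Sketch: look up a destination once and reuse it for the life of the application.
public class DestinationCache {
    private static Queue orderQueue;  // cached after the first lookup

    public static synchronized Queue getOrderQueue() throws NamingException {
        if (orderQueue == null) {
            InitialContext ctx = new InitialContext();
            // "jms/OrderQueue" is a hypothetical JNDI name bound in the console
            orderQueue = (Queue) ctx.lookup("jms/OrderQueue");
        }
        return orderQueue;
    }
}
```

Because Queue objects are thread-safe, a single cached reference can be shared by all producers and consumers in the application.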
Best Practice | Use JNDI to locate destinations. Caching and sharing JMS destinations throughout the application will help to minimize the impact of the JNDI lookup overhead. |
Choosing the Appropriate Message Type
Choosing a message type is the second design choice you face when designing JMS applications. TextMessage is one of the more commonly used message types simply because of the type of data typically exchanged. As the popularity of XML increases, TextMessage popularity increases because the JMS specification does not explicitly define a message type for exchanging XML documents. Unfortunately, serializing a string is more CPU intensive than serializing other Java primitive types. Using strings as the message payload often implies that the receiver must parse the message in order to extract the data encoded in the string. WebLogic JMS also provides an XMLMessage type. The primary advantage of the XMLMessage type is the built-in support for running XPATH-style message selectors on the body of the message.

Let's take a minute to talk more about XML messages. Exchanging XML messages via JMS makes it easy to think about solving many age-old application integration problems. While JMS uses Java and Java already provides a platform-independent way of exchanging data, not all messaging applications are written in Java. Fortunately, many popular legacy messaging systems, such as IBM's WebSphere MQ, offer a JMS API in addition to their other language bindings. BEA is also providing a C API to WebLogic JMS as alpha code on its developer's Web site at http://dev2dev.bea.com/resourcelibrary/utilitiestools/environment.jsp. We hope that this will become a supported part of the product in the future. This solves the message exchange part of the problem.

XML solves the data exchange part of the problem by providing a portable, language-neutral format with which to represent structured data. As a result, it is not surprising to see many JMS applications using XML messages as their payload. Of course, the portability and flexibility of XML do not come without a cost.
Not only are XML messages generally sent using TextMessage objects, which makes their serialization more costly, but they also generally require parsing the data in order to convert it into language object representations that the application can manipulate more easily. All of this requires the receivers to do more work just to get the message into a form where it can be processed.
This is not to say that you should avoid XML messages completely. XML is the format of choice for messages that cross application and/or organizational boundaries. When talking about applications here, we define an application as a program or set of programs that are designed, built, tested, and more importantly, deployed together as a single unit. What we want to caution you on is using XML message formats everywhere—even within the boundaries of a single application—just to be using XML. Use XML where it makes sense, and use other binary representations where XML is not required.
Best Practice | Use XML messages for inter-application messaging and binary messages for intra-application messaging. |
When choosing the message type for an application, there are several things you should consider. A well-defined message should have just enough information for the receiver to process it. You should consider how often the application will need to send the message and whether the message needs to be persistent. Message size can have a considerable impact on network bandwidth, memory requirements, and disk I/O. Keep messages as small as possible without removing information needed to process the message efficiently. Once you decide on the information to pass, use the simplest message type that can do the job. If possible, use a message format that is directly usable by the receiver, such as a MapMessage or ObjectMessage. If you need to pass a string message, converting it to a byte array and sending it using a BytesMessage may perform better than sending it as a string in a TextMessage, especially for large strings. If you have only a few primitive types to send in a message, try using a MapMessage instead of an ObjectMessage for better performance. As the number of fields gets larger, however, the mapping code itself can add complexity. You might find that the performance benefit is outweighed by the additional maintenance burden. MapMessage also provides some extensibility in the sense that you can add new name-value pairs without breaking existing consumers. When you are using an ObjectMessage, passing objects that implement Externalizable rather than Serializable can improve the marshalling performance, at the added cost of requiring you to provide implementations of the readExternal() and writeExternal() methods.

We should caution you about the implications of using an ObjectMessage to pass data across application boundaries. ObjectMessage uses Java serialization, which relies on the sender and the receiver having exactly the same version of the class available.
This can lead to tightly coupled producers and consumers—even though you are using asynchronous messaging for communication. It might be possible to escape some of this coupling by using Externalizable objects, but this just means that your externalization code has to deal with object versioning.
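As a sketch of the BytesMessage approach described above, the following shows a string payload sent as UTF-8 bytes and reassembled on the receiving side. The session, sender, receiver, and payload variables are assumed to already exist, and checked-exception handling is omitted for brevity:

```java
// Producer side: send the string as raw UTF-8 bytes.
BytesMessage msg = session.createBytesMessage();
msg.writeBytes(payload.getBytes("UTF-8"));
sender.send(msg);

// Receiver side: drain the byte stream and reconstruct the string.
BytesMessage reply = (BytesMessage) receiver.receive();
ByteArrayOutputStream bos = new ByteArrayOutputStream();
byte[] chunk = new byte[1024];
int n;
while ((n = reply.readBytes(chunk)) != -1) {
    if (n > 0) {
        bos.write(chunk, 0, n);
    }
}
String text = new String(bos.toByteArray(), "UTF-8");
```

Whether this outperforms a plain TextMessage depends on the string size and your serialization costs, so it is worth measuring with your own payloads.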
Compressing Large Messages
XML messages tend to be larger than their binary counterparts. As you can imagine, larger messages will require more network bandwidth, more memory, and more importantly, more storage space and disk I/O to persist. If the messages are infrequent, this may not be an issue, but as the message frequency increases, the overhead will compound and start affecting the overall health of your messaging system. One way to reduce this impact is to compress large messages that carry strings as their payload. Compressed XML messages may actually provide a more compact representation of the data than any binary format.

The java.util.zip package provides everything you need to support message compression and decompression. The performance impact of message compression, however, is not clear-cut and needs to be evaluated on a case-by-case basis. When considering the use of compression, there are a number of things to weigh.

First, are your messages big enough to warrant compression? Small messages do not generally compress very well, so using compression can, in some cases, actually increase the size of your message.

Second, will the extra overhead of compressing and decompressing every message prevent your applications from meeting your performance and scalability requirements? Compressing messages can be thought of as a crude form of throttling because the compression step will slow down your message producers. Of course, this isn't necessarily a good thing because your consumers will also have to decompress the message before processing it.

Finally, will compression significantly reduce your application's network and memory resource requirements? If the producers and consumers are not running inside the same server as the destination, compressing the messages can reduce the network transfer time for messages. It can also reduce the memory requirements for your WebLogic JMS server.
If the messages are persistent, it can also reduce the amount of disk I/O for saving and retrieving the messages. When the persistent store type is JDBC, it can reduce the network traffic between the WebLogic Server and the database. If the producers, consumers, and JMS server are all running inside the same WebLogic Server instance, many of these benefits may be outweighed by the additional CPU and memory overhead of compression and decompression.
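A compression helper along these lines can be built entirely from java.util.zip. This sketch round-trips a string payload through GZIP; the JMS send and receive calls are omitted, and the resulting byte array would typically be placed in a BytesMessage:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.util.zip.GZIPInputStream;
import java.util.zip.GZIPOutputStream;

// Sketch: GZIP helpers for compressing a string payload before sending it.
public class MessageCompression {

    public static byte[] compress(String text) {
        try {
            ByteArrayOutputStream bos = new ByteArrayOutputStream();
            GZIPOutputStream gz = new GZIPOutputStream(bos);
            gz.write(text.getBytes(StandardCharsets.UTF_8));
            gz.close();  // close() finishes the GZIP trailer
            return bos.toByteArray();
        } catch (IOException e) {
            throw new RuntimeException(e);  // in-memory streams should not fail
        }
    }

    public static String decompress(byte[] data) {
        try {
            GZIPInputStream gz =
                new GZIPInputStream(new ByteArrayInputStream(data));
            ByteArrayOutputStream bos = new ByteArrayOutputStream();
            byte[] buf = new byte[4096];
            int n;
            while ((n = gz.read(buf)) != -1) {
                bos.write(buf, 0, n);
            }
            return new String(bos.toByteArray(), StandardCharsets.UTF_8);
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
    }
}
```

Note that for small payloads the GZIP header and trailer overhead can make the compressed form larger than the original, which is exactly the trade-off discussed above.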
Best Practice | The decision on whether to use compression is something that needs to be carefully considered. If the producers, consumers, and destinations are collocated inside a WebLogic Server instance, it is generally better not to use compression. |
Selecting a Message Acknowledgment Strategy
WebLogic JMS retains each message until the consumer acknowledges that it has received the message. Only at this point can WebLogic JMS remove the message from the server. Committing a transaction is one way for an application to acknowledge that a message has been received. If transactions are not being used, an application uses message acknowledgments to acknowledge that a message, or set of messages, has been received. Message acknowledgments and transactions are mutually exclusive. If you specify both transactions and acknowledgments, WebLogic JMS will use transactions and ignore the acknowledgment mode.

Your application's message acknowledgment strategy can have a significant impact on performance and scalability. WebLogic JMS defaults to using AUTO_ACKNOWLEDGE mode. This means that WebLogic JMS will automatically acknowledge each message after the receiver processes it successfully. Using AUTO_ACKNOWLEDGE mode can reduce the chance of duplicate messages; however, it comes at a cost because the receiver's run time must send an acknowledgment message to the JMS server after each message to tell the server to remove the message.

If your application can tolerate duplicate messages, JMS defines the DUPS_OK_ACKNOWLEDGE mode to allow the receiver's run time to acknowledge the messages lazily. WebLogic Server 8.1 does not currently do anything special for DUPS_OK_ACKNOWLEDGE mode; this mode behaves exactly like AUTO_ACKNOWLEDGE mode. Another technique that gives you a little more control is using CLIENT_ACKNOWLEDGE mode to explicitly acknowledge groups of messages rather than each message individually. While message duplication is still possible, it typically occurs only because of a failure where your receiver has already processed some messages but had not acknowledged them.
You could imagine building a strategy that tries to detect duplicate messages when starting up and/or recovering from a failure condition.

For stand-alone, nondurable subscribers, WebLogic JMS caches local copies of the message in the client JVM to optimize network overhead. Because this caching strategy also includes message acknowledgment optimizations, these subscribers really will not benefit from using the aforementioned CLIENT_ACKNOWLEDGE strategy; AUTO_ACKNOWLEDGE should perform equally well.

In addition to the standard JMS message acknowledgment modes, WebLogic JMS provides two additional acknowledgment modes through the weblogic.jms.WLSession interface:

NO_ACKNOWLEDGE. This mode tells WebLogic JMS not to worry about message acknowledgments and simply provide a best-effort delivery of messages. In this mode, WebLogic JMS will immediately delete messages after they have been delivered, which can lead to both lost and duplicate messages. Applications that want to maximize performance and scalability and can tolerate both lost and duplicate messages should use this acknowledgment mode.

MULTICAST_NO_ACKNOWLEDGE. This mode tells WebLogic JMS to use IP multicast to deliver messages to consumers. As the name implies, this mode has similar semantics to the NO_ACKNOWLEDGE mode.

We will talk more about using multicast sessions later in this section.
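A CLIENT_ACKNOWLEDGE consumer that acknowledges in groups might be sketched as follows. The queueConnection and queue objects, the running flag, the batch size of 10, and the process() method are all assumed application code:

```java
// Sketch: acknowledge every 10th message instead of every message.
QueueSession session =
    queueConnection.createQueueSession(false, Session.CLIENT_ACKNOWLEDGE);
QueueReceiver receiver = session.createReceiver(queue);

int count = 0;
Message msg = null;
while (running) {
    msg = receiver.receive(5000);  // wait up to 5 seconds for a message
    if (msg == null) {
        break;
    }
    process(msg);  // application-defined processing
    if (++count % 10 == 0) {
        // acknowledge() covers all messages consumed so far on this session
        msg.acknowledge();
    }
}
if (msg != null) {
    msg.acknowledge();  // acknowledge any trailing, unacknowledged messages
}
```

Remember that per the JMS specification, acknowledge() applies to every consumed-but-unacknowledged message on the session, not just the message it is called on.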
Designing Message Selectors
As we discussed earlier in the “JMS Key Concepts” section, message selectors allow consumers to further specify the set of messages they want to receive. Consumers specify a logical statement using an SQL WHERE clause-like syntax that the JMS provider evaluates against each message’s headers and/or properties to determine whether the consumer should receive the message. WebLogic JMS adds another type of selector for use with the WebLogic JMS XMLMessage type. With this message type, you can specify XPATH expressions that evaluate against the XML body of the message.

In WebLogic JMS, all selector evaluation and filtering takes place on the JMS server, with the exception of multicast subscribers, which we will discuss later. For topics, WebLogic JMS evaluates each subscriber’s message selector against every message published to the topic to determine whether to deliver the message to the subscriber. For queues, the evaluation process is more complex. A message is always delivered to a queue, and WebLogic JMS will evaluate the message against each receiver’s message selector until it finds a match. If no consumer’s message selector matches the message, the message will remain in the queue. When a new consumer associates itself with the queue, WebLogic JMS will have to evaluate its message selector against each message in the queue. Because of this, message selectors generally perform better with topics than with queues. As with any such general statement, your mileage may vary depending on the exact circumstances in your application.

It is often better to split a destination into multiple destinations and eliminate the need for message selectors. For example, imagine an application that sends messages to the trade queue. If the application’s consumers use message selectors to select only buy or sell orders, we can split the trade queue into buy and sell queues and eliminate the need for a message selector.
When a producer sends a buy message, it sends the message directly to the buy queue, and only buy consumers need to listen to that queue. Partitioning the application in this way has other advantages besides performance. With this architecture, you can monitor each message type individually in each queue, and if performance does become an issue down the road, you can even separate the queues onto separate servers.
Best Practice | Always evaluate the advantages and disadvantages of partitioning your application before deciding to use message selectors. Favor splitting destinations over the use of message selectors when there is a clear separation of message types. |
Of course, there will be situations when using a message selector is unavoidable. In these situations, there are several things to keep in mind when designing your message selector strategy. First, what fields does your selector need to reference? Message header fields such as JMSMessageID, JMSTimestamp, JMSRedelivered, JMSCorrelationID, JMSType, JMSPriority, JMSExpiration, and JMSDeliveryTime and application-defined message properties are the fastest to access. Examining an XMLMessage message body adds a significant amount of overhead and, therefore, will be much slower.
Suppose that, in the previous example, the producer sends a message in XML format like the one shown here:
<order type="buy">
<symbol>beas</symbol>
<quantity>5000</quantity>
</order>
The consumers can use an XPATH message selector like this:
"JMS_BEA_SELECT('xpath', '/order/attribute::type') = 'buy'"
These XPATH message selectors are the most expensive expressions to evaluate because they involve parsing at least some portion of the XML document. Of course, XPATH selectors are convenient to use, but you should realize that they are expensive and plan your use of them accordingly.

Second, what type of operators do you need to use? In general, you should strive to keep selectors as simple as possible. Avoid complex operators such as like, in, or between in favor of primitive operators such as =, >, or <. The more complex the selector is, the slower its evaluation will be. In general, an XPATH expression will be the most expensive because it has to scan the XML body of the message looking for the element or attribute value to compare.
Best Practice | Keep message selectors as simple as possible. Try to avoid more complex operators such as like, in, or between. |
Third, what data type do you need to use for the selector? Where possible, avoid the use of strings in message properties, especially if they are large. Strings are more expensive to serialize and more expensive to compare than other primitive types. In our previous example, if we decided to use message selectors to distinguish between buy and sell messages, we would be better off performance-wise defining the message property as an integer (for example, 1 = buy, 2 = sell) rather than using the strings buy or sell.

WebLogic Server 7.0 SP2 and 8.1 introduce the concept of indexed selectors for applications that need to use strings to differentiate messages. With an indexed selector, an application uses message property names, rather than the value of a particular message property, to distinguish the messages. For example, the application would set the buy message property to any value on all buy messages and the sell message property to any value on all sell messages. Consumers interested only in sell messages would use the expression sell IS NOT NULL as their message selector. Since the message property names are indexed, it is generally faster to use an indexed selector than it is to use the strings buy and sell as the value for the trade_type message property.
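The indexed selector technique might be sketched like this; the session, queue, sender, and orderXml variables are assumed to exist:

```java
// Producer side: flag sell orders by setting a property named "sell".
// Only the property *name* matters to an indexed selector; the value is irrelevant.
TextMessage msg = session.createTextMessage(orderXml);
msg.setBooleanProperty("sell", true);
sender.send(msg);

// Consumer side: select on the presence of the property, not its value.
QueueReceiver sellReceiver =
    session.createReceiver(queue, "sell IS NOT NULL");
```

Compare this with the unindexed alternative, trade_type = 'sell', which forces a string comparison on every evaluation.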
Fourth, when using compound selectors, which elements are most efficient to process and which elements are most selective? If a selector involves both message header fields and message property fields, place the message header field to the left of the expression. It is less expensive to use the expression JMSPriority > 5 AND (trade_type = 'buy') than it is to use the expression (trade_type = 'buy') AND JMSPriority > 5. When a selector involves multiple evaluation criteria and one field is much more selective than the other, it may make sense to put that field first to reduce the number of evaluations necessary to rule out a particular message. For example, if you had a selector that did something like (trade_type = 'buy') AND (trade_num_shares > 100000), it would make more sense to reverse the order because presumably there are many more buy orders than there are orders that involve more than 100,000 shares. Selector evaluation is always left-to-right, except where parentheses explicitly preclude it.
Tip | With compound selectors, order matters. WebLogic JMS will short-circuit message selector evaluation once it determines the message does not match. Design your selectors to take advantage of this default left-to-right evaluation order. |
Finally, what type of messaging do you need to use? We certainly do not recommend that you choose the messaging model based on whether you need to use message selectors. You should know, though, that message selectors generally tend to be faster and more predictable with topics than with queues. Of course, this is not always the case. When using message selectors with queues, the performance will be very dependent on the consumers keeping up with the producers and on quick matching of a message to a consumer. In cases where the queue is typically empty and it is easy to find a match between a message and a consumer, queues will actually outperform topics. The problem is that when something happens to make the consumers fall behind, the performance of message selectors with queues will degrade much faster than with topics.

One last thing to mention before moving on to other design considerations is the interaction between message selectors and paging. WebLogic JMS maintains messages in memory whenever possible. Whenever paging is necessary, only the message body is paged out of memory. This means that WebLogic JMS can evaluate most selectors even if the message body itself is paged out. The exception to this would be XPATH selectors. Because WebLogic JMS evaluates the selectors on topics at the time the message is published, this is only a big concern for XPATH selectors used in conjunction with queues.
Choosing a Message Expiration Strategy
By default, JMS messages never expire. When your application is sending messages to queues or topics with durable subscribers, WebLogic JMS must retain the message until it is consumed. This is fine in most point-to-point messaging applications because consumers are constantly consuming messages. Any message sent to a queue will typically be consumed in a relatively short period of time. If the queue consumers get disconnected, they will typically reconnect as soon as possible and start processing any messages that might have built up in the queue. For durable subscribers to a topic, this is not necessarily true. The messaging system is forced to retain any message that has not been consumed by a durable subscriber, regardless of whether that durable subscriber will ever return. In this case, WebLogic JMS is at the mercy of the durable subscriber to unsubscribe when he or she no longer wishes to receive the messages. If the durable subscriber logic is flawed in such a way that the subscribers do not unsubscribe properly, then the messaging system will start to fill up with messages that may never be delivered. As you can imagine, this calls for real caution in your use of durable subscribers. Fortunately, there is another way to help deal with this problem.

Conventional wisdom suggests that time-sensitive messages should be sent only to nondurable subscribers, and that is true for the most part. There are situations when a producer may wish to publish time-sensitive messages even when a subscriber is not connected. For example, an employee portal application may wish to publish messages to a topic that represents all employee mailboxes. If each employee uses his or her login id as the durable subscription's client id, the employees can receive published messages every time they log in. Imagine that you want to send 401(k) enrollment information to all your employees.
The problem is that the JMS server must retain the message until every employee reads the message, which may never happen. Because the message is really irrelevant after the enrollment period ends, it is better to set an expiration time on the message so that WebLogic JMS can discard the message when it becomes unimportant, even if some employees never read it.

Message expiration can be set at the connection factory level or via any of the other override mechanisms discussed earlier. By using the Time To Live attribute or by explicitly calling the setTimeToLive() method on the QueueSender or TopicPublisher, you can specify the number of milliseconds that WebLogic JMS should retain an undelivered message after it is sent.
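A producer-side sketch of setting a time-to-live might look like the following; the session, topic, and message variables are assumed, and the 30-day value is purely illustrative for an enrollment-period scenario:

```java
// Sketch: expire enrollment announcements after roughly 30 days.
TopicPublisher publisher = session.createPublisher(topic);
long thirtyDaysMillis = 30L * 24 * 60 * 60 * 1000;
publisher.setTimeToLive(thirtyDaysMillis);  // applies to subsequent publishes
publisher.publish(message);
```

The producer's time-to-live applies to every message it subsequently sends, until it is changed again or overridden per send.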
Best Practice | For messages that become irrelevant after a certain time, set the Time To Live attribute or call setTimeToLive() on the producer to avoid message buildup. |
Active Expiration
Prior to WebLogic Server 8.1, WebLogic JMS used a lazy message expiration policy. This means that it would remove expired messages from the system only when it happened to discover them in its normal course of processing messages. If a destination was idle, it was possible for expired messages to accumulate and continue to consume system resources. This meant that, under certain conditions, it was possible for a new message to be rejected because of quota restrictions even though the destination or JMS server contained expired messages that, if removed, would have cleared the quota condition and allowed for the delivery of the message.
WebLogic Server 8.1 adds support for active message expiration, in addition to the lazy message expiration scheme. Active message expiration works by having each JMS server periodically scan all destinations for expired messages. The JMS server’s Expiration Scan Interval property controls the frequency of the scans. If a message expires at time t, the maximum length of time that the message will be retained is t + ExpirationScanInterval + s, where s is the time it takes to scan all message expiration times in the JMS server at the next scan interval. Some messages may be removed almost immediately, by lazy message expiration. Other messages may not be removed until the full amount of ExpirationScanInterval + s seconds has elapsed. Setting ExpirationScanInterval to 0 disables active message expiration. Even with active expiration disabled, messages will still expire and be removed during normal message processing by the lazy message expiration mechanism.
Warning | Setting ExpirationScanInterval to a very large value effectively disables active scanning for expired messages. Of course, expired messages will still be removed during normal message processing by the lazy expiration mechanism. |
Expiration Policies
Prior to WebLogic Server 8.1, WebLogic JMS simply discarded all expired messages it found. Although this is still the default behavior, WebLogic Server 8.1 supports the concept of an Expiration Policy on a destination. Expiration policies allow you to define the action that WebLogic JMS should take when it finds an expired message. Configure the Expiration Policy using the JMS template or destination's Expiration Policy Configuration tab in the WebLogic Console. The valid values are as follows:

(none). Same as the Discard policy; WebLogic JMS removes expired messages from the destination.

Discard. WebLogic JMS removes expired messages from the destination.

Log. WebLogic JMS removes the expired messages from the destination and writes an entry to the server log file indicating that the messages have been removed. The Expiration Logging Policy defines the actual information that is logged.

Redirect. WebLogic JMS moves the expired messages from their current destination to that destination's configured Error Destination, if defined.
By setting the Expiration Policy to Log, you are telling WebLogic JMS to write a log entry for every expired message it removes. Using the Expiration Logging Policy attribute, you can tell WebLogic JMS what information to log. WebLogic JMS will always write the JMSMessageID header; by default, this is the only information logged. You can add message headers or properties to the list by explicitly listing their names using a comma to separate the entries. WebLogic JMS also provides two wildcard values, %header% and %properties%, that will write all message headers or all message properties to the log, respectively.
Note | When the Expiration Policy is set to Log, WebLogic JMS always writes the JMSMessageID field to the log. If you forget to set the Expiration Logging Policy, then the log entry will contain only the message’s JMSMessageID value. |
Handling Poison Messages
At some point, most messaging applications encounter situations where they are unable to process a message successfully. There are multiple reasons that this might occur; for example, the message could contain bad data, or the business logic might require access to a back-end system that is temporarily unavailable. In these situations, the message consumer cannot successfully process the message and needs to do something with that message so that it can move on to do other useful work, if possible. For example, a message-driven bean (MDB) using transactional delivery might call setRollbackOnly() on its MessageDrivenContext object to prevent the transaction from committing, thus forcing the JMS provider to requeue the message.

Of course, the problem with our example is that the JMS provider will simply try to redeliver the message at some point in the future. If the redelivery occurs and the application is still unable to process the message, the application can end up in a deadly cycle of trying to process a poison message. When designing your messaging application, you need to understand in what situations your application might encounter poison messages and come up with strategies that make sense to reduce the burden on the underlying messaging system.
If your application accepts messages from another application, you might want to plan for unexpected or invalid message formats or data. While you could use WebLogic JMS’s support for dead-letter queues to handle this situation, it really doesn’t solve the problem. In this case, the problem is not a system-level problem with the actual delivery of the message, but an application-level problem with the expected contents of the message. As a result, asking the messaging system to try to redeliver the message is a waste of resources because the application will never be able to process the message. Furthermore, the offending message producer might continue to try to resend this message or, worse, all messages with this invalid message format or data. In this situation, you almost always want the application to reject the message, possibly by notifying the sending application that the message was rejected because of a bad message format or bad data. This means that you need the receiving application to divert the poison message to an error-handling process that will notify the sender of the problem rather than rejecting the message and forcing the messaging system to try to redeliver it.
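One way to sketch this application-level diversion is inside an MDB's onMessage() method. Here parseOrder(), handle(), InvalidMessageException, and errorSender are all hypothetical application pieces, with errorSender assumed to be a QueueSender attached to an application-defined error queue:

```java
// Sketch: divert an unprocessable message to an application error queue
// instead of rolling back and forcing the provider to redeliver it.
public void onMessage(Message msg) {
    try {
        Order order = parseOrder(msg);  // application-defined parsing
        handle(order);
    } catch (InvalidMessageException e) {
        // Bad format or bad data: redelivery can never succeed, so divert
        // the message to an error process that can notify the sender.
        try {
            errorSender.send(msg);
        } catch (JMSException jmse) {
            // log and escalate per application policy
        }
    }
}
```

Note that the error send succeeds as part of normal processing, so the original message is consumed and the messaging system is not asked to redeliver it.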
Best Practice | Use application-level error handling rather than redelivery and error destinations to handle errors in message content, including invalid formats and bad data. |
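To make this concrete, here is a minimal sketch of such a consumer in plain Java. The class and method names are hypothetical, not WebLogic APIs; in a real MDB you would divert the bad payload to an error-handling queue or notification service rather than to an in-memory list. The key point it illustrates is that content errors are consumed and diverted, never rolled back for redelivery.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of application-level poison-message handling. Instead of
// throwing or rolling back on bad content, the consumer routes the
// invalid payload to an error handler and treats the message as done.
public class ContentAwareConsumer {
    private final List<String> processed = new ArrayList<>();
    private final List<String> rejected  = new ArrayList<>();

    // Returns true when the message was consumed (either processed or
    // diverted); the messaging system never sees a rollback for bad data.
    public boolean onTextMessage(String payload) {
        if (payload == null || !payload.startsWith("ORDER:")) {
            // Application-level problem: divert, notify the sender, move on.
            rejected.add(payload);
            return true;
        }
        processed.add(payload);
        return true;
    }

    public int processedCount() { return processed.size(); }
    public int rejectedCount()  { return rejected.size(); }

    public static void main(String[] args) {
        ContentAwareConsumer c = new ContentAwareConsumer();
        c.onTextMessage("ORDER:1234");
        c.onTextMessage("garbage");
        System.out.println(c.processedCount() + " processed, "
            + c.rejectedCount() + " diverted");
    }
}
```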
Another common situation that occurs in message processing applications is that a back-end system becomes temporarily unavailable. Because the receiving application may require access to this back-end system to process the incoming messages, the application must somehow delay the processing of the messages until the back-end system becomes available. Ideally, you could somehow detect that the back-end system is unavailable and simply tell the application to stop trying to consume any messages until further notice. Unfortunately, today this means writing your application to support this behavior. For example, an application using MDBs to consume the messages would need to stop or undeploy the MDB to prevent it from continually trying to consume the messages, and restart or redeploy it only once the back-end system becomes available. For cases where you expect, or at least want to plan for, long periods of back-end system unavailability, you should carefully consider using a mechanism for stopping and restarting the consumption of messages. If your back-end systems are highly available and you never expect more than transient periods of unavailability, then you might want to rely on the JMS provider’s ability to redeliver the message at some point in the future. WebLogic JMS supports this through the use of Redelivery Delay, Redelivery Limit, and Error Destinations.
Tip | For receiving applications that require access to external systems that are known to be unavailable occasionally, you will want to use a mechanism to stop and restart your message consumers when the system becomes unavailable. |
Redelivery Delay
Redelivery Delay instructs WebLogic JMS to defer the redelivery of messages for a specified amount of time. A delayed message does not prevent the delivery of other messages behind it, so using a redelivery delay can alter message ordering. You can set the Default Redelivery Delay on your WebLogic JMS connection factory. From there, you can explicitly override the Redelivery Delay on the session by using the WebLogic JMS extensions:
((WLSession)queueSession).setRedeliveryDelay(redeliveryDelayMilliseconds);
You can also administratively override both the connection factory and session settings by setting the Redelivery Delay Override on a template or destination.
Redelivery Limit and Error Destination
Redelivery Limit controls the number of times that WebLogic JMS will attempt to deliver a message before declaring it undeliverable. When a message is determined to be undeliverable, WebLogic JMS will move the message from its current destination to the Error Destination associated with that destination. This Error Destination feature is sometimes known as a dead-letter queue. If an Error Destination is not configured, WebLogic JMS will silently delete the message. If the Error Destination has reached its quota, WebLogic JMS will drop the message and log an error message once every five minutes until the quota problem is resolved. For non-persistent messages, this means that the message is discarded; for persistent messages, the message will remain in the persistent store and will reappear in the original destination the next time the server starts.

Message producers can set the redelivery limit for messages they produce using the WebLogic JMS WLMessageProducer extension:
((WLMessageProducer)queueSender).setRedeliveryLimit(3);
If you pass -1 as the argument to setRedeliveryLimit(), it means that there is no limit unless it is overridden by the destination. Both the Redelivery Limit and Error Destination are configurable on a JMS template or destination. Setting the Redelivery Limit on a template or destination overrides any setting passed in from the producer; a value of -1 specifies that there is no override. An Error Destination can be a queue or a topic but must physically reside in the same JMS server as the associated destination.

When using error destinations, it is very important to incorporate the processing of messages from this destination into your application. One of the biggest challenges in doing this can be determining why a message was not successfully processed and what your application needs to do with it. We highly recommend that you do not let the error queue be used to handle application-level message content errors. If you handle those errors through a separate process, then you can treat every message in the error destination as a message that could not be processed due to a transient failure. One way of processing them is simply to resend them to their original destination once you know that the transient failure has subsided. If you are using an Expiration Policy of Redirect, you may also have to look at message expiration times to segregate the messages to retry from those that expired. This isn’t a big deal because it is easy to accomplish; trying to segregate application-level errors from transient system-level errors in the error queue is much more difficult.
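That segregation step can be sketched in a few lines of plain Java. The ErrorQueueEntry type below is a hypothetical stand-in for a JMS message (not the WebLogic API), carrying a JMSExpiration-style absolute expiration time in milliseconds, where 0 means "never expires":

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of draining an error destination: entries whose
// expiration has not passed are candidates to resend to the original
// destination; expired entries (redirected here by an Expiration Policy
// of Redirect) are simply logged or dropped.
public class ErrorQueueReprocessor {
    // Minimal stand-in for a JMS message: an id plus an absolute
    // expiration time in millis (0 means the message never expires).
    public record ErrorQueueEntry(String id, long expirationMillis) {}

    public static List<ErrorQueueEntry> selectForRetry(
            List<ErrorQueueEntry> drained, long nowMillis) {
        List<ErrorQueueEntry> retry = new ArrayList<>();
        for (ErrorQueueEntry e : drained) {
            boolean expired = e.expirationMillis() != 0
                    && e.expirationMillis() <= nowMillis;
            if (!expired) {
                retry.add(e); // would be resent to the original destination
            }
        }
        return retry;
    }
}
```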
Ordered Redelivery
To add to this complication, some systems depend on the order of the messages. While the JMS specification requires that consumers receive messages in the order in which the JMS provider received them, it does not define the ordering requirements for message redelivery. WebLogic JMS can support ordered redelivery of messages, but only when the consumer configuration meets certain constraints.

First, ordered redelivery requires that the destination have only one consumer. While this seems like a huge limitation at first, we will show you in the next section why this is necessary even for truly ordered delivery. Next, the destination sort order must be set such that a redelivered message will be placed at the top of the ordering; for example, sorting on message priority might cause a redelivered message to be placed behind a higher-priority message that just arrived. Finally, message selectors will cause ordered redelivery to apply only to the current message and any other messages that match the selector.
Tip | Ordered redelivery of messages requires you to use only one consumer for the destination. Destinations with custom sort orders and consumers using message selectors can affect and/or prevent ordered redelivery. |
Recall from our earlier discussion that, for asynchronous consumers, messages are pipelined and the MessagesMaximum attribute of the connection factory controls this pipeline size. This creates a problem when a message is redelivered. In previous versions of WebLogic Server, setting MessagesMaximum to 1 did not solve the problem because, in reality, it meant two messages were outstanding: one in the possession of the consumer and another in flight. If the consumer rolls back the first message, that message will be redelivered only after the in-flight message, which loses the desired message ordering.

WebLogic Server 8.1 redefines the behavior of the MessagesMaximum attribute, and a value of 1 now means that there will be no in-flight messages. Therefore, you must also create an application-specific connection factory and set its MessagesMaximum value to 1 to achieve ordered redelivery.
Warning | You must set your application’s connection factory’s MessagesMaximum attribute to 1 to get ordered redelivery of messages. In previous versions of WebLogic Server, even this setting did not guarantee that redelivery would maintain message order, even with only a single asynchronous consumer. |
For more details on these constraints, please refer to the WebLogic JMS documentation at http://edocs.bea.com/wls/docs81/jms/implementl. We will talk more about message ordering issues in the next section.
Handling Message Ordering Issues
Many applications require processing of messages in the order in which they were received. The problem is that the only way to guarantee that messages are processed in order is to have a single consumer processing one message at a time. For example, imagine that you have to send three messages to a queue in the following order: message1, message2, and message3. If you have two consumers, consumer1 and consumer2, processing messages concurrently from that queue, WebLogic JMS will pick up message1 and hand it to consumer1 and then pick up message2 and hand it to consumer2. From a WebLogic JMS perspective, it has delivered the messages in order; from an application perspective, the messages may or may not be processed in order, depending on the thread or process scheduling between the two consumers. It is a race condition at this point. As such, it is entirely possible for consumer2 to get more resources than consumer1 and complete the processing of message2 before consumer1 completes the processing of message1. Furthermore, it is also possible for consumer2 to go back to WebLogic JMS to get message3 and complete the processing of message3 before consumer1 ever finishes with message1. In short, as soon as you start parallel processing, message ordering across the threads or processes can no longer be guaranteed.

Clearly, you need a way to maintain ordering without creating a bottleneck in your application that can process only one message at a time. Unfortunately, there are no easy answers to this problem, and any messaging vendor that claims to have solved the problem is probably talking about a different problem. The typical way to handle message ordering issues is to define sets of messages that require ordering only within the set and to parallelize the processing by assigning different sets of messages to different threads or processes. WebLogic JMS supports ordered messaging for destinations that have a single consumer.
It is up to you, as the application architect, to determine your application’s ordering requirements and your options for addressing them.
Warning | Strict ordered processing of a set of messages is possible only if there is one consumer. Make sure you truly understand the application’s ordering requirements so that you can explore your options for partitioning the messages to achieve parallelism while still maintaining order where it counts. |
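The "sets of messages" partitioning idea can be sketched in plain Java. In this example (the partition count and ordering key are application design choices, not a WebLogic feature), each ordering key hashes to one single-threaded worker, so messages that share a key are processed strictly in arrival order while different keys run in parallel:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

// Sketch: per-key ordered processing. Each ordering key (for example,
// an order number) is hashed to one single-threaded executor, so work
// items sharing a key run one at a time, in submission order, while
// distinct keys are processed concurrently on other workers.
public class PartitionedProcessor {
    private final ExecutorService[] workers;

    public PartitionedProcessor(int partitions) {
        workers = new ExecutorService[partitions];
        for (int i = 0; i < partitions; i++) {
            workers[i] = Executors.newSingleThreadExecutor();
        }
    }

    public void submit(String orderingKey, Runnable work) {
        int idx = Math.floorMod(orderingKey.hashCode(), workers.length);
        workers[idx].execute(work);
    }

    public void shutdownAndWait() throws InterruptedException {
        for (ExecutorService w : workers) {
            w.shutdown();
            w.awaitTermination(10, TimeUnit.SECONDS);
        }
    }

    public static void main(String[] args) throws InterruptedException {
        PartitionedProcessor p = new PartitionedProcessor(4);
        List<Integer> seen = Collections.synchronizedList(new ArrayList<>());
        for (int i = 0; i < 5; i++) {
            final int n = i;
            p.submit("order-42", () -> seen.add(n)); // same key => in order
        }
        p.shutdownAndWait();
        System.out.println(seen); // [0, 1, 2, 3, 4]
    }
}
```

The single-threaded executor provides the same FIFO guarantee as a single JMS consumer on a destination, but only within one partition, which is exactly the trade-off described above.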
Using Transactions
Transactions are used when multiple operations need to be treated as a single atomic unit. As discussed earlier, the JMS specification introduces the concept of a transacted session to allow multiple JMS operations to be performed within the scope of a transaction. If your transactions involve multiple JMS operations only within a single session, you should use transacted sessions. For transactions that involve multiple JMS sessions and other resources, you can make your JMS session JTA-aware by enabling XA transaction support on your connection factory. Using the WebLogic Console, simply check the XA Connection Factory Enabled checkbox on the Transactions Configuration tab. This will make WebLogic JMS return a connection factory that implements the javax.jms.XAConnectionFactory interface whenever you look up the connection factory from JNDI.

If your transaction involves multiple resources, the WebLogic JTA transaction manager detects this and switches automatically to the two-phase commit (2PC) protocol. WebLogic JMS implements its own XA resource manager and therefore can participate in a 2PC transaction without requiring support from the underlying storage manager (for example, the JDBC driver for JDBC-based message stores). One side effect of this is that transactions involving JMS and any other database resource (even if JMS is using the same database as its message store) will always be 2PC transactions. Another side effect is that JMS JDBC-based message stores cannot use XA JDBC drivers. You must use the non-XA version of the driver for accessing the message store; WebLogic JMS will handle the global transaction’s commit or rollback on the underlying database.
Note | Do not use an XA JDBC driver to create JMS JDBC stores even when the store would participate in global transactions. |
For any other database work done by other components as part of the transaction, you need to be using a JTA-aware DataSource that refers to a JDBC connection pool set up with an XA-compliant JDBC driver. A JTA-aware DataSource means one that has the Honors Global Transactions attribute selected; in previous versions of WebLogic Server, this was known as a TxDataSource. It is possible for WebLogic JTA to involve one non-XA resource in a global transaction. For a DataSource that refers to a JDBC connection pool that is not using an XA-compliant driver, you can use the Emulate Two-Phase Commit for Non-XA Driver option. WebLogic JTA uses a variation of the last-agent commit algorithm where the non-XA resource is committed only after all of the XA resources have responded that they are prepared to commit. If the non-XA resource fails, WebLogic JTA tells the XA resources to roll back; otherwise, it tells them to commit. While this algorithm is the best that can be done with non-XA resources in a global transaction, it is always better not to involve non-XA resources in global transactions in order to minimize the risk of failures.

If you are going to be using global transactions that involve JMS, the most important thing to keep in mind is that WebLogic JTA will optimize global transaction coordination for collocated resources. Some of the ways that you can collocate resources are as follows:
If your transactions involve JMS and one or more EJBs, deploy all of your EJBs and JMS destinations on the same WebLogic Server instances. If you are using distributed destinations, deploy all of the EJBs to every server that hosts a member destination.
If your transactions involve JMS and JDBC resources, deploy your JDBC connection pools, JTA-aware DataSource objects, and JMS destinations on the same WebLogic Server instances. Again, for distributed destinations this means deploying them to every server that hosts a member destination.
If your transactions involve multiple JMS destinations, deploy all of the destinations on the same WebLogic Server instance. For applications accessing multiple distributed destinations, make sure to collocate the members of each distributed destination on the same WebLogic Server instances. It is even more efficient if you can deploy them in the same JMS servers.
Before moving on, it is worth noting that transacted sessions perform better than JTA-aware sessions for stand-alone client applications. JTA-aware sessions have to manage resources on both the client and the server hosting the JMS connection, while transacted sessions use a transaction delegation model in which the transaction scope is restricted to the JMS server. Transacted sessions are well suited for batching send and receive requests from stand-alone clients running in their own JVMs. JTA-aware sessions are well suited for server-side applications, which typically access other J2EE components within the same global transaction context.
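The atomic batching that makes transacted sessions attractive can be modeled in a few lines of plain Java. This sketch illustrates only the commit/rollback semantics, not the JMS API: sends buffered in the session become visible as one unit at commit() and vanish entirely at rollback().

```java
import java.util.ArrayList;
import java.util.List;

// Illustration of transacted-session semantics (not the JMS API):
// send() buffers locally; commit() makes the whole batch visible
// atomically; rollback() discards the pending batch.
public class TransactedBufferDemo {
    private final List<String> pending = new ArrayList<>();
    private final List<String> delivered = new ArrayList<>();

    public void send(String msg) { pending.add(msg); }

    public void commit() {
        delivered.addAll(pending); // the batch appears as one atomic unit
        pending.clear();
    }

    public void rollback() {
        pending.clear(); // the batch never becomes visible
    }

    public List<String> delivered() { return delivered; }
}
```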
Using Multicast Sessions
Multicast sessions are a WebLogic JMS extension that can improve performance dramatically, especially when your application needs to send individual messages to a large number of subscribers. When using IP multicast to transmit messages, the underlying network needs to carry only one copy of the message, regardless of the number of subscribers. Because of the inherently unreliable nature of the UDP protocol on which IP multicast is based, WebLogic JMS cannot guarantee delivery of messages sent using multicast sessions. Network congestion plays a big role in the quality of service. Clearly, applications that cannot tolerate message loss should not consider the use of multicast sessions.

Multicast also requires a tightly controlled network environment. Most routers and firewalls are not configured to allow multicast traffic to pass through them. While it is possible to configure them to do so, this implies that your subscribers and WebLogic JMS servers are all connected by a network that you control. Multicast messages use a time-to-live (TTL) concept that routers use to control the propagation of multicast messages. Each router that forwards a multicast packet decrements the packet’s TTL; once the TTL reaches zero, the packet will no longer be forwarded between network segments. Multicast uses a special class of IP addresses, known as Class D addresses, which range from 224.0.0.0 to 239.255.255.255. Typically, addresses in the 224.0.0.x range are reserved for multicast routing.

WebLogic JMS supports multicast sessions only for topics. This makes sense because the benefit of multicast is seen only when the same message is sent to a large number of consumers. To use multicast sessions, you need to configure the multicast information for your topics. When using multicast, we highly recommend that you select unique multicast address and port combinations for each topic that will be using multicast for message delivery.
Doing this will help segregate the traffic for a particular topic and will reduce the chances of message loss.
Once the topics are properly configured, you need to create a JMS session that uses the WebLogic JMS-specific MULTICAST_NO_ACKNOWLEDGE acknowledgment mode, as shown here. Note that multicast sessions cannot use transacted sessions or JTA transactions. Use the following code to create a multicast session:
TopicSession topicSession = topicConnection.createTopicSession(false,
WLSession.MULTICAST_NO_ACKNOWLEDGE);
Next, we create the TopicSubscriber as we normally would, as shown here. This call will fail if the topic is not configured to support multicast. Also note that multicast consumers cannot be durable subscribers.
TopicSubscriber topicSubscriber = topicSession.createSubscriber(topic);
Finally, we need to register our MessageListener and start the connection, if it is not already started. Multicast sessions must use asynchronous delivery via the MessageListener:
topicSubscriber.setMessageListener(new MyMessageListener());
topicConnection.start();
For multicast sessions, WebLogic JMS tracks the message sequence. A sequence gap occurs when messages are lost or received out of order. When WebLogic JMS detects a sequence gap, it will deliver a weblogic.jms.extensions.SequenceGapException to the multicast session’s ExceptionListener, if one is registered.
Tip | If your application cares about sequence gaps when using multicast delivery, you can register an ExceptionListener with the WebLogic JMS session to be notified when sequence gaps occur. |
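The bookkeeping behind gap detection is conceptually simple, and sketching it clarifies what the notification means. The plain-Java class below is an illustration of the idea only; in WebLogic JMS the notification actually arrives as a SequenceGapException on the session’s ExceptionListener rather than through a method like this:

```java
// Plain-Java illustration of sequence-gap detection: remember the last
// sequence number seen and flag any message that skips ahead or arrives
// out of order. This mirrors the condition that triggers WebLogic JMS's
// SequenceGapException for multicast sessions.
public class SequenceGapDetector {
    private long lastSeen = -1;

    // Returns true if this message reveals a gap: a skipped or
    // out-of-order sequence number.
    public boolean onMessage(long sequenceNumber) {
        boolean gap = lastSeen >= 0 && sequenceNumber != lastSeen + 1;
        if (sequenceNumber > lastSeen) {
            lastSeen = sequenceNumber;
        }
        return gap;
    }
}
```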
Handling Request/Reply Style Message Exchange
JMS is all about sending and receiving messages. Whenever an application sends a message to another application, it is not uncommon for the sending application to require a response message after its original message is processed. This pattern is so common that JMS explicitly supports it in several ways.

First, JMS supports the concept of a temporary destination, and the JMS message headers include a JMSReplyTo field for passing a reference to a reply-to destination as part of a message. While there is nothing that requires the reply-to destination to be a temporary one, this is a common pattern that clients use to avoid having to use message selectors to find their responses among the responses for other clients. An example of how you might use this is the following:
Queue responseQueue = queueSession.createTemporaryQueue();
QueueReceiver queueReceiver = queueSession.createReceiver(responseQueue);
queueReceiver.setMessageListener(new MyMessageListener());
textMessage.setText("My Request Message");
textMessage.setJMSReplyTo(responseQueue);
queueSender.send(textMessage);
// ... later, once the reply has been processed:
responseQueue.delete();
Now, let’s look at the consumer of the request message. In the consumer, we simply get the JMSReplyTo destination from the request message, generate our response message, and send the response message to the destination:
Queue replyQueue = (Queue)requestMessage.getJMSReplyTo();
queueSender.send(replyQueue, replyMessage);
In this example, our request producer is using the MessageListener to asynchronously receive the response that will be sent to the temporary destination. This is the recommended way of accomplishing the request/response pattern. Of course, applications sometimes want to block until the response comes back. You could achieve this using the synchronous receive() method:
Queue responseQueue = queueSession.createTemporaryQueue();
QueueReceiver queueReceiver = queueSession.createReceiver(responseQueue);
textMessage.setText("My Request Message");
textMessage.setJMSReplyTo(responseQueue);
queueSender.send(textMessage);
Message responseMessage = queueReceiver.receive();
responseQueue.delete();
Here, we used the no-args receive() method that blocks until a message arrives. There is also a version that accepts a time-out value, after which the method will return control to the application even if no message has arrived. Finally, there is a receiveNoWait() method that does not block and will return null if no message is waiting.
Warning | Use the receive(long timeout) or receiveNoWait() methods inside server applications that need to receive a response message synchronously from another application. Even in stand-alone JMS client applications, think twice before using the no-args receive() method, which can cause the application to block for an uncontrolled length of time. |
Notice that, in both cases, we call the delete() method on the temporary destination when we are through with it. Applications should try to reuse temporary destinations rather than continually creating and deleting them, wherever possible. WebLogic Server will automatically delete temporary destinations when the JMS connection is closed.

The JMS specification authors must have thought that this pattern was so common that they created an easier way to accomplish the same thing by using a Requestor object. The code shown here demonstrates how to accomplish the same synchronous request/response pattern using a temporary queue.
QueueRequestor queueRequestor =
new QueueRequestor(queueSession, requestQueue);
textMessage.setText("My Request Message");
Message responseMessage = queueRequestor.request(textMessage);
The QueueRequestor and TopicRequestor utility classes automatically create the temporary destination and block waiting for the response. Be forewarned that these classes do not allow you to make a nonblocking request or a blocking request with a time-out. You must use nontransacted sessions with these classes. As with all temporary destinations, the messages sent to them are non-persistent because temporary destinations, by definition, do not survive application restarts or failures.
Best Practice | When using request/response style messaging in a WebLogic Server, be very careful about calling blocking methods to receive the response. If you must call receive(), always use a relatively short time-out to prevent tying up WebLogic Server execute threads for extended periods of time. Wherever possible, use the asynchronous MessageListener to wait for the reply. |
The other major approach for supporting request/reply messaging is through the use of a correlation ID. Correlation IDs provide you with the ability to assign a unique identifier to a message and its reply. JMS doesn’t do anything with these correlation IDs; it is up to the application to use them to correlate requests with replies. By using correlation IDs, you have much more freedom about where and when you send the reply. Using correlation IDs can be useful even when used in conjunction with temporary destinations to help applications that can have multiple outstanding messages at any point in time.

To use correlation IDs, the first thing you need to decide is what unique identifier you are going to use to correlate the messages. The JMS provider creates a unique identifier for every JMS message that it stores in the JMSMessageID header. As a result, using the JMSMessageID as the correlation ID is a common practice. JMS messages also contain a JMSCorrelationID header that applications can use to set the correlation ID for a particular message. When using this scheme, the producer sending the message needs to call only the getJMSMessageID() method on the Message after the message is sent. It is important to wait until after the message is sent because WebLogic JMS does not actually set the message’s JMSMessageID header until the message is sent. The producer doesn’t actually need to set the JMSCorrelationID field in the request message because the consumer is going to associate the JMSMessageID of the request message with the JMSCorrelationID of the reply message:
replyMessage.setJMSCorrelationID(requestMessage.getJMSMessageID());
Of course, there is nothing preventing you from using your own correlation ID scheme. Simply set the correlation ID in the original request message, and have the consumer read the incoming request message’s correlation ID using the getJMSCorrelationID() method and then set it on the outgoing reply message.
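On the requesting side, correlating replies with outstanding requests typically boils down to a map of pending requests keyed by correlation ID. The plain-Java sketch below (the class name and string IDs are hypothetical, not a WebLogic facility) completes a waiting future when the matching reply arrives; a real application would set the ID on the outgoing JMS message and read getJMSCorrelationID() from the reply:

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

// Sketch of correlation-ID bookkeeping on the requestor side. Pending
// requests are keyed by correlation ID; the reply consumer looks up and
// completes the matching future, so many requests can be outstanding
// against one shared reply destination.
public class ReplyCorrelator {
    private final ConcurrentMap<String, CompletableFuture<String>> pending =
            new ConcurrentHashMap<>();

    // Called when a request is sent; returns a future for its reply.
    public CompletableFuture<String> register(String correlationId) {
        CompletableFuture<String> f = new CompletableFuture<>();
        pending.put(correlationId, f);
        return f;
    }

    // Called from the reply consumer's MessageListener.
    public void onReply(String correlationId, String body) {
        CompletableFuture<String> f = pending.remove(correlationId);
        if (f != null) {
            f.complete(body);
        } // else: reply for an abandoned or unknown request; drop or log
    }
}
```

Because the future can be awaited with a time-out or abandoned, this structure also fits the shared-reply-queue concerns discussed next.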
The last thing we need to discuss as it relates to correlation IDs is the use of a shared reply queue across all requests. A very common pattern occurs when you have a synchronous client, such as a Web application responding to a request from a browser, that needs to call a back-end system accessible only via a messaging system. Usually, the synchronous client wants to send a message and wait for the response. The client, however, will typically wait only so long and then give up on the response, possibly even resending the original request. This causes a problem when the back-end system is slow but still working: the shared reply queue may end up with reply messages that have already been abandoned by the requestor. Fortunately, there are several things that we can do to handle this problem.

First, we can use message expiration on the reply messages to prevent them from accumulating. For that matter, we might want to use expiration times on the request messages to try to prevent the back end from receiving messages that the client has given up on. Finally, you could just use temporary destinations that the client deletes when giving up on the reply. This will also give your back-end system some indication that the client has gone away when it gets an error trying to send the response to a temporary destination that no longer exists. Of course, none of these solutions really solves all of the application-level problems associated with this type of scenario, but at least they help keep the messaging system healthy. The preferred approach to dealing with this type of scenario is to try to separate the synchronous client request into two parts, one to submit the request and another to look for the response.