- my message is in an ActiveMQ queue or topic
- its processing fails because of some exception, e.g. a database access exception due to server unavailability
- since we get an exception, the message is not handled properly, so we may want to retry the processing attempt some time later
- of course, for the redelivery to happen we need the message to stay in the ActiveMQ queue - fetching messages from the queue will be stopped until the redelivery succeeds or expires (a sketch of a redelivery policy configuration follows this list)
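A minimal sketch of how such redelivery could be configured on the consumer's connection factory - the broker URL and the delay values are example assumptions, not settings from the post:

```java
import org.apache.activemq.ActiveMQConnectionFactory;
import org.apache.activemq.RedeliveryPolicy;

// assumed broker URL and example values - adjust to your setup
ActiveMQConnectionFactory factory =
        new ActiveMQConnectionFactory("tcp://localhost:61616");

RedeliveryPolicy policy = factory.getRedeliveryPolicy();
policy.setInitialRedeliveryDelay(5000);   // wait 5s before the first retry
policy.setUseExponentialBackOff(true);    // grow the delay between retries
policy.setMaximumRedeliveries(6);         // afterwards the message goes to the dead letter queue
```

Once the redeliveries are exhausted, ActiveMQ moves the message to the dead letter queue (ActiveMQ.DLQ by default), so it no longer blocks the consumer.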
Author: Marcin Cylke
Easier and nicer JMS
JMS seems like hostile ground. It has its quirks and strange behaviours: a couple of defining standards plus esoteric brokers, queues and topics.
At work we mainly use open source JMS solutions, namely Apache ActiveMQ. It usually comes bundled with Apache ServiceMix as the message broker for that ESB. As there are some minor caveats in this scenario, I'd like to describe some guidelines for getting JMS queues up and running.
Treat this post as a quick cheat sheet with the most common things about JMS I tend to forget :)
Minor glitches encountered while working with the embedded broker led to thoughts about switching to an external broker. This is how I configure ServiceMix and ActiveMQ.
Necessary steps:
- in apache-servicemix/conf/servicemix.properties, change activemq.port to something other than the standard port, for example 61626
- in apache-activemq/conf/activemq.xml apply these settings (a sketch of both changes follows below):
- change the port the service listens on
- set up a separate JMX instance
- change the port it listens on
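A rough sketch of those two changes, assuming the default configuration layout - the exact ports, connector names and surrounding elements may differ between versions:

```properties
# apache-servicemix/conf/servicemix.properties - move the embedded broker off the standard port
activemq.port = 61626
```

```xml
<!-- apache-activemq/conf/activemq.xml -->
<broker xmlns="http://activemq.apache.org/schema/core" brokerName="external-broker" useJmx="true">

    <!-- separate JMX instance on its own port -->
    <managementContext>
        <managementContext createConnector="true" connectorPort="1100"/>
    </managementContext>

    <!-- the port the external broker listens on -->
    <transportConnectors>
        <transportConnector name="openwire" uri="tcp://0.0.0.0:61616"/>
    </transportConnectors>
</broker>
```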
- the nicest tool I've found for browsing queues and topics is Hermes JMS; a sample config that connects Hermes to an ActiveMQ instance is shown in the picture below
- sending simple messages with Hermes is easy, but what if you need to set some headers, send bulk messages, etc.? Easy, just use Hermes' XML message format, which is rather self-explanatory
- since we use lots of Apache Camel to consume messages, here is a simple way to start a broker in your tests:
- start a broker
```java
BrokerService broker = new org.apache.activemq.broker.BrokerService();
broker.setBrokerName("AMQ-1");
broker.addConnector("tcp://localhost:51616");
broker.setPersistent(false);
broker.start();
```
Notice it has persistence disabled.
- initialize Camel's JMS component:
ctx.removeComponent("jms"); ctx.addComponent("jms", ActiveMQComponent.activeMQComponent("tcp://localhost:51616"));
- if you want to pass messages to reference endpoints (like ref:input), use this wrapper method:
```java
private JmsEndpoint createJmsEndpoint(String endpoint) throws JMSException {
    ActiveMQComponent amqc = (ActiveMQComponent) ctx.getComponent("jms");
    JmsEndpoint endp = JmsEndpoint.newInstance(new ActiveMQTopic(endpoint), amqc);
    return endp;
}

// usage:
createJmsEndpoint("ESB/XYZ")
```
These are all the tricks I've got for now! If you know other good tools that handle JMS, feel free to comment! Got more advice? Again, comment!
Schematron to the rescue!
In an ideal world all the standards fit well into their places. It is sufficient to use just one serious standard, because all the problems can be solved with it - the standardization process is there for a reason. But that happens only in an ideal world, which we're not living in.

How does it look?
The rules' file is titled "TouK Schematron test harness" and contains two patterns: "checking GetMigrationOffers", with the assertions "Report date.", "Unique offers allowed.", "Each offer has to have an @abc attribute", "Each offer has to have a tariff" and "Each offer has to have a promotion"; and "checking GetAllPhones", with the assertion "TACs should be unique. TAC: ..., handsetId: ..., offerId: ...".

Here we see two rules, one named getMigrationOffers and the other getAllPhones. The rules - mainly their asserts - seem pretty self-explanatory, but for the sake of completeness I'll describe the rules for getAllPhones. There is one rule, which checks the uniqueness of tac elements. It tries to ensure that each handset has a list of unique tac elements as its children; tac elements of the same value may still appear in different handset elements. Given an input XML in the form of:
(a sample getAllPhones.xml response with a few offers, in which the tac value 01216100 appears more than once under the same handset)

And passing those two files through the processing pipeline you get a report:
```
TACs should be unique. TAC: 01216100, handsetId: 95 offerId: 103021
TACs should be unique. TAC: 01216100, handsetId: 95 offerId: 103021
[...]
```

After running the validation, the report presents us with the result: there are indeed non-unique tacs. Unfortunately the rule itself is not optimal, as it is executed for each tac node. It would be better to create a rule operating only on groups of tacs - one rule per handset's tacs.
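For illustration, a minimal sketch of what the getAllPhones pattern could look like - the element names (tac, handsetId, offerId) and the exact XPath are assumptions based on the report above, not the original rules file:

```xml
<schema xmlns="http://purl.oclc.org/dsdl/schematron">
  <title>TouK Schematron test harness</title>
  <pattern>
    <title>checking GetAllPhones</title>
    <!-- fired once per tac element, which is the inefficiency mentioned above -->
    <rule context="tac">
      <assert test="count(../tac[. = current()]) = 1">
        TACs should be unique.
        TAC: <value-of select="."/>,
        handsetId: <value-of select="../handsetId"/>
        offerId: <value-of select="../../offerId"/>
      </assert>
    </rule>
  </pattern>
</schema>
```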
Performance consideration
As you may have seen, Schematron offers quite a lot of potential when it comes to building rules - maybe not the easiest to comprehend, since they are written with XPath, but good enough. However, with all the XML processing involved, executing such validations may take a considerable amount of time. For example, processing the rules for getMigrationOffers.xml takes about 2.296s - the file has 82 offer elements, which the rules operate on. Validating the other file, getAllPhones.xml, takes 5.324s, with 3113 tac elements and the rule iterating over all of them. This overhead is too much in most situations. That's why this solution is rather not for use in the normal execution pipeline - it would be unwise to have Schematron check each request and thus entangle it into my Web Services' normal flow. What may be more desirable is to deploy a continuous integration server with a project querying such a Web Service and checking the rules in this manner.

Conclusion
So, what's so great about having one XML generate another XML? Perhaps nothing; I think it would take just about a day to write a shell or Python script that would perform equally well (or even better). However, we would lose technology homogeneity and employ other environments not specific to our primary target platform, and that seems bad. Of course, using some powerful text processing tool to impose the same rules might be much more efficient, though less coherent. What is your approach to such situations? Have you used Schematron or any similar tool? Code for this example is available on GitHub - http://github.com/zygm0nt/schematron-example.

Complex flows with Apache Camel
- try to operate on XML messages, so you get the advantage of XPaths, XSL and other benefits,
- don't convert XML into Java classes back and forth, so you don't have to worry about conversion problems,
- try to get a simple flow of the process.


- receiving input
- setting an exchange property to the result of a given XPath, which effectively is the name of the first XML element in the input stream
- then the input data is sent to three different external services, each of them replying with some fictional data - notice routes a, b and c. The SimpleContentSetter processor just responds with a given text (a sketch of such a processor follows the list).
- the responses from all three services are processed by the RequestEnricher bean, which is described below
- eventually the exchange is logged in the specified category
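The post does not show SimpleContentSetter itself, so here is a minimal sketch of what such a processor could look like - the actual implementation, and the stub content passed to it in the routes below, are assumptions:

```java
import org.apache.camel.Exchange;
import org.apache.camel.Processor;

// replaces the exchange body with a fixed piece of text,
// simulating a reply from an external service
public class SimpleContentSetter implements Processor {

    private final String content;

    public SimpleContentSetter(String content) {
        this.content = content;
    }

    public void process(Exchange exchange) throws Exception {
        exchange.getIn().setBody(content);
    }
}
```

The test class with the router definition looks like this: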
```java
public class SimpleTest {

    public void setUp() throws Exception {
        TestRouter tr = new TestRouter();
        ctx.addRoutes(tr);
    }

    @Test
    public void shouldCheck() throws Exception {
        ctx.createProducerTemplate().send("direct:in", getInOut(""));
    }

    class TestRouter extends RouteBuilder {
        public void configure() throws Exception {
            ((ProcessorType) from("direct:in")
                .setProperty("operation").xpath("local-name(/*)", String.class)
                .multicast(new MergeAggregationStrategy())
                    .to("direct:a", "direct:b", "direct:c")
                .end()
                .setBody().simple("${in.body}"))
                .bean(RequestEnricher.class, "enrich")
                .to("log:pl.touk.debug");

            from("direct:a").process(new SimpleContentSetter(""));
            from("direct:b").process(new SimpleContentSetter(""));
            from("direct:c").process(new SimpleContentSetter(""));
        }
    }
}
```

What's unusual in this code is the fact that what Camel normally does when you write a piece of DSL like
.to("direct:a", "direct:b", "direct:c")is pass input to service a, than a's output gets passed to b, becomes it's input, than b's output becomes c's input. The problem being, you loose the output from a and b, not mentioning that you might want to send the same input to all three services. That's where a little tool called multicast() comes in handy. It offers you the ability to aggregate the outputs of those services. You may even create an AggregationStrategy that will do it the way you like. Below class, MergeAggregationStrategy does exactly that kind of work - it joins outputs from all three services. A lot of info about proper use of AggregationStrategy-ies can be found in this post by Torsten Mielke.
```java
public class MergeAggregationStrategy implements AggregationStrategy {

    public Exchange aggregate(Exchange oldExchange, Exchange newExchange) {
        if (oldExchange.isFailed()) {
            return oldExchange;
        }
        transformMessage(oldExchange.getIn(), newExchange.getIn());
        transformMessage(oldExchange.getOut(), newExchange.getOut());
        return newExchange;
    }

    private void transformMessage(Message oldM, Message newM) {
        String oldBody = oldM.getBody(String.class);
        String newBody = newM.getBody(String.class);
        newM.setBody(oldBody + newBody);
    }
}
```

However nice this may look (or not), what you're left with is a mix of multiple XMLs. Normally this won't do you much good; a better thing to do is to parse this output in some way. What we're using for this is Groovy :) which is great for the task of parsing XML and a lot less verbose than ordinary Java. Let's assume a scenario where the aggregated output is to be processed with the following steps in mind:
- use as the result element
- use attributes param1, param2, param3 from the element and add them to the result element
```groovy
public class RequestEnricher {

    public String enrich(@Property(name = "operation") String operation, Exchange ex) {
        use(DOMCategory) {
            def dhl = new groovy.xml.Namespace("http://example.com/common/dhl/schema", 'dhl')
            def pc = new groovy.xml.Namespace("http://example.com/pc/types", 'pc')
            def doc = new XmlParser().parseText(ex.in.body)
            def pcRequest = doc."aaaa"[0]

            ["param1", "param2", "param3"].each() {
                def node = doc.'**'[("" + it)][0]
                if (node) pcRequest['@' + it] = node.text()
            }
            gNodeListToString([pcRequest])
        }
    }

    String gNodeListToString(list) {
        StringBuilder sb = new StringBuilder();
        list.each { listItem ->
            StringWriter sw = new StringWriter();
            new XmlNodePrinter(new PrintWriter(sw)).print(listItem)
            sb.append(sw.toString());
        }
        return sb.toString();
    }
}
```

What we're doing here, especially in the last line of the enrich method, is the conversion to String - Camel has some problems if we spit out Groovy objects. The rest is just Groovy-specific ways of manipulating XML. Looking at the enrich method's parameters, the @Property annotation is used, which binds the property assigned earlier in the router code to one of the arguments. That is a really cool feature, and there are more such annotations:
- @XPath
- @Header
- @Headers and @Properties - give the whole map of headers or properties
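For illustration, a hypothetical bean showing a few of these bindings together - the header name and the XPath expression are made up for the example, and the annotation packages and attribute styles may differ slightly between Camel versions:

```java
// annotation packages as in Camel 2.x - they may vary in older versions
import java.util.Map;
import org.apache.camel.Header;
import org.apache.camel.Headers;
import org.apache.camel.language.XPath;

public class BindingShowcase {

    // Camel injects the named header, the result of the XPath expression
    // evaluated against the message body, and the full header map
    public String handle(@Header("JMSCorrelationID") String correlationId,
                         @XPath("local-name(/*)") String rootElement,
                         @Headers Map<String, Object> headers) {
        return rootElement + " (correlation: " + correlationId
                + ", headers: " + headers.size() + ")";
    }
}
```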
MeetBSD 2010
DAY 1
It took place on 2nd-3rd of July, 2010, so this review is rather dated :) However, I'd like to keep it as a reminder. I arrived at the conference site, located in the building of the Faculty of Mathematics and Computer Science, a few minutes after the official start. I had been traveling from Warsaw the same day, and the only train that would not require me to get up at some night hour arrived a bit too late. Oh well :) I grabbed a tea and some biscuits and entered the series of lectures.

The first thing to listen to was the Welcome intro - quite a nice one, conducted by a guy from Cisco (AFAIK). He was talking about the opportunities for Kraków and how it will become a Polish Silicon Valley in the near future, etc. I don't share his beliefs, but the talk was OK.

Then came Dru Lavigne with some insight into the BSD Certification program. Actually, does anybody use this? Come on. Do we really need another certification process? I for sure don't see the need, especially for the BSD community. However, the trend is good and may help popularize the BSDs among enterprise leaders - because if something is certified, then it can be used in big enterprises, right? :)

Sławek Żak talked about NoSQL. Although the talk gave a bit of info about what the idea is and how it compares to normal DBs, I did not find the presentation entertaining. In my opinion there was not enough emphasis on the difference in usage for such databases. The NoSQL talk I'd attended at Javarsovia was a lot better.

The next talk, presented by Attilio Rao, was very, very technical. It was about the "VFS/Vnode interface in FreeBSD". It was rather an API presentation and an introduction to implementing an FS in the FreeBSD infrastructure than a conference talk. This kind of presentation would be well suited for FreeBSD kernel developers, not sysadmins.

Jakub Klama's talk on the process of porting FreeBSD to a Da Vinci embedded board was interesting. It had some photos of the board, tackled a few technical corners, and caught my attention. Well done!

Our guy among the FreeBSD hackers - Paweł Jakub Dawidek - gave a speech about HAST - High Availability STorage. In other words, he implemented DRBD for FreeBSD. Sadly, for me this is just catching up with what Linux has had in mainline since 2.6.33 (it was working very well even before that). It's not as feature rich as DRBD, but the project is slowly maturing. Nevertheless, it's good to finally have this on board.

Then an inconspicuous guy from Bulgaria, Nikolay Aleksandrov, came onto the stage and gave a talk titled Developing high speed FreeBSD. The subject was astounding. He works for a major Bulgarian ISP, and for lack of cash to buy some serious networking gear he wrote a FreeBSD extension that sits between the network adapter and the kernel and does all the hard work like routing, VLANs and more. His goal was to make it lightning fast, and as far as his results showed, he succeeded. This talk was really amazing: he did, in his free time or at least as a pet project, what would normally take hundreds of thousands of dollars - in cash and skills.

DAY 2
Well, I skipped the first lecture of the day out of laziness ;) I had decided to pack up and arrive in time to listen to what FreeBSD can borrow from AIX. Jan Srzednicki talked about some nice tools from the AIX world. He proposed that adding an educational, console-based tool for conducting basic (and even not so basic) tasks would encourage people to learn the system. I think it would work. However, the rest of his ideas weren't good enough - at least not for me.

Next in line was The new USB stack - an interesting talk about the new USB stack development, conducted by Hans Petter Selasky. This guy was really passionate about USB things ;-)

Martin Matuska presented his set of shell scripts for creating mfsBSD - an in-memory FreeBSD install. Since I'm already doing this kind of thing with OpenBSD, the talk was entertaining.

Marko Zec spoke about network stack virtualization. This was about extending FreeBSD to be able to create lots of compartmentalized environments with their own network stacks. As noted in the presentation, the solution still has problems with graceful shutdown of the stack. Still not stable enough - but very promising.

The closing presentation was given by Warner Losh (the very knowledgeable guy behind bsdimp.blogspot.com) on using FreeBSD in commercial settings. The talk was not what I'd expected, but was nevertheless very interesting. It was about branching and merging back changes when using FreeBSD as a base for a commercial product - something that could easily apply to any other Open Source project. Warner described possible strategies for branching and performing merges, and noted the pros and cons of all the described solutions.

All in all, that was a fun time. Even though I don't use any BSD as my primary system at the moment and my BSD skills are a bit rusty, the talks were nice enough :) for a hobbyist like me.

Generic Enum converter for iBatis
- all the Enums in question are JAXB-generated objects, but any standard Java Enum should work
- the conversion is 1-to-1, with no special conditions or processing
@XmlType(name ="ServiceType") @XmlEnum public enum ServiceType { @XmlEnumValue("stationary") STATIONARY("stationary"), @XmlEnumValue("mobile") MOBILE("mobile"); private final String value; ServiceType(String v) { value = v; } public String value() { return value; } public static ServiceType fromValue(String v) { for (ServiceType c: ServiceType.values()) { if (c.value.equals(v)) { return c; } } throw new IllegalArgumentException(v); } }"No big deal", you say. I beg to differ. What I wanted to achieve was a simple construction which would look like this when used for another Enum (CommonEnumTypeHandler is the name of my generic converter):
```java
public class ServiceTypeHandler extends CommonEnumTypeHandler<ServiceType> { }
```

Unfortunately, due to the fact that Java does not have reified generics (which is described in multiple places), I had to stick with passing in the Class of my enum. So it looks like this:
```java
public class ServiceTypeHandler extends CommonEnumTypeHandler<ServiceType> {
    public ServiceTypeHandler() {
        super(ServiceType.class);
    }
}
```

The generic CommonEnumTypeHandler itself looks like the one below:
```java
import java.sql.SQLException;

import com.ibatis.sqlmap.client.extensions.ParameterSetter;
import com.ibatis.sqlmap.client.extensions.ResultGetter;
import com.ibatis.sqlmap.client.extensions.TypeHandlerCallback;

public abstract class CommonEnumTypeHandler<T extends Enum<T>> implements TypeHandlerCallback {

    Class<T> enumClass;

    public CommonEnumTypeHandler(Class<T> clazz) {
        this.enumClass = clazz;
    }

    public void setParameter(ParameterSetter ps, Object o) throws SQLException {
        if (o.getClass().isAssignableFrom(enumClass)) {
            ps.setString(((T) o).name().toUpperCase());
        } else {
            throw new SQLException("Expected " + enumClass + " object, got: " + o);
        }
    }

    public Object getResult(ResultGetter rs) throws SQLException {
        Object o = valueOf(rs.getString());
        if (o == null) {
            throw new SQLException("Unknown parameter type: " + rs.getString());
        }
        return o;
    }

    public Object valueOf(String s) {
        return Enum.valueOf(enumClass, s.toUpperCase());
    }
}
```
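A handler like this is typically registered in the iBATIS sqlMapConfig; a minimal sketch, with package names assumed for the example:

```xml
<!-- sqlmap-config.xml - register the concrete handler for the enum type -->
<typeHandler javaType="com.example.ServiceType"
             callback="com.example.ServiceTypeHandler"/>
```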
Javarsovia 2010
Eclipse ecosystem
- Swordfish - SOA solution, with BAM (Business Activity Monitoring) implemented
- XText - enables you to write simple (or not so simple) DSLs for your apps
- E4 - next gen Eclipse IDE, with many great ideas in it