Easier and nicer JMS

JMS can seem like hostile ground. It has its quirks and strange behaviours: a couple of defining standards, plus esoteric brokers, queues and topics.

At work we mainly use open source JMS solutions, namely Apache ActiveMQ. It usually comes bundled with Apache ServiceMix as the message broker for that ESB. As there are some minor caveats in this scenario, I'd like to describe some guidelines for getting JMS queues up and running.

Treat this post as a quick cheat sheet with the most common things about JMS I tend to forget :)

Minor glitches encountered while working with the embedded broker led to some thoughts about switching to an external broker. This is how I configure ServiceMix and ActiveMQ.

Necessary steps:

  • change activemq.port in apache-servicemix/conf/servicemix.properties to something other than the standard 61616, for example 61626
  • change apache-activemq/conf/activemq.xml with these settings:
    • change the port the service listens on, for example:
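      A minimal sketch of the relevant transportConnectors section, assuming the default OpenWire connector; the port below is only an example, pick one that does not clash with the embedded broker:

              <transportConnectors>
                  <!-- the port the external broker listens on -->
                  <transportConnector name="openwire" uri="tcp://0.0.0.0:61616"/>
              </transportConnectors>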
    • set up a separate JMX instance, for example:
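      A sketch of the managementContext element; the connector port below is an assumed example, the point being to give this broker its own JMX connector so it does not collide with the one ServiceMix already exposes:

              <managementContext>
                  <!-- dedicated JMX connector for this broker -->
                  <managementContext createConnector="true" connectorPort="1099"/>
              </managementContext>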
  • the nicest tool I've found for browsing queues and topics is Hermes JMS. A sample configuration that connects Hermes to an ActiveMQ instance is shown in the picture below: [HermesJMS to ActiveMQ connection config]
  • sending simple messages with Hermes is basic, but what if you need to set some headers, send bulk messages, etc.? Easy, just use Hermes' XML format. It looks like the snippet below and is rather self-explanatory:

    [Hermes XML message snippet: custom headers plus a CDATA-wrapped text payload carrying the values 105 and 1235]

  • since we use lots of Apache Camel to consume messages, here is a simple way to start a broker in your tests:
    • start a broker
              BrokerService broker = new org.apache.activemq.broker.BrokerService();
              broker.setBrokerName("AMQ-1");
              broker.addConnector("tcp://localhost:51616");
              broker.setPersistent(false);
              broker.start();
      

      Notice it has persistence disabled.

    • initialize Camel’s JMS component:
          ctx.removeComponent("jms");
          ctx.addComponent("jms", ActiveMQComponent.activeMQComponent("tcp://localhost:51616"));
      
    • if you want to pass messages to reference endpoints (like ref:input), use this wrapper method:
      private JmsEndpoint createJmsEndpoint(String endpoint) throws JMSException {
              ActiveMQComponent amqc = (ActiveMQComponent) ctx.getComponent("jms");
              JmsEndpoint endp = JmsEndpoint.newInstance(new ActiveMQTopic(endpoint), amqc);
              return endp;
      }
      
      createJmsEndpoint("ESB/XYZ")
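
      For instance, in a test you can pair it with a ProducerTemplate to push a message onto such an endpoint (the payload below is a made-up example):

      // send a test message to the topic wrapped above
      JmsEndpoint input = createJmsEndpoint("ESB/XYZ");
      ProducerTemplate template = ctx.createProducerTemplate();
      template.sendBody(input, "<test/>");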
      

These are all the tricks I've got for now! If you know some other good tools that handle JMS, feel free to comment! Got more advice? Again, comment!

You May Also Like

Recently at storm-users

I've been reading through the storm-users Google Group recently. This was heavily inspired by Adam Kawa's post "Football zero, Apache Pig hero". Since I've encountered a lot of insightful and very interesting information, I've decided to describe some of it in this post.

  • nimbus will work in HA mode - There's a pull request open for it already... but some recent work (distributing topology files via Bittorrent) will greatly simplify the implementation. Once the Bittorrent work is done we'll look at reworking the HA pull request. (storm’s pull request)

  • pig on storm - Pig on Trident would be a cool and welcome project. Join and groupBy have very clear semantics there, as those concepts exist directly in Trident. The extensions needed to Pig are the concept of incremental, persistent state across batches (mirroring those concepts in Trident). You can read a complete proposal.

  • implementing topologies in pure python with petrel looks like this:

class Bolt(storm.BasicBolt):
    def initialize(self, conf, context):
        ''' This method is executed only once '''
        storm.log('initializing bolt')

    def process(self, tup):
        ''' This method is executed every time a new tuple arrives '''
        msg = tup.values[0]
        storm.log('Got tuple %s' % msg)

if __name__ == "__main__":
    Bolt().run()
  • Fliptop is happy with storm - see their presentation here

  • topology metrics in 0.9.0: The new metrics feature allows you to collect arbitrary custom metrics over fixed windows. Those metrics are exported to a metrics stream that you can consume by implementing IMetricsConsumer and configuring it with Config.java#L473. Use TopologyContext#registerMetric to register new metrics.

  • storm vs flume - some users' point of view: I use Storm and Flume and find that they are better at different things - it really depends on your use case as to which one is better suited. First and foremost, they were originally designed to do different things: Flume is a reliable service for collecting, aggregating, and moving large amounts of data from source to destination (e.g. log data from many web servers to HDFS). Storm is more for real-time computation (e.g. streaming analytics) where you analyse data in flight and don't necessarily land it anywhere. Having said that, Storm is also fault-tolerant and can write to external data stores (e.g. HBase) and you can do real-time computation in Flume (using interceptors)

That's all for this day - however, I'll keep on reading through storm-users, so watch this space for more info on storm development.
