xquery4j in action

In my previous article, I introduced xquery4j (http://github.com/rafalrusin/xquery4j), a wrapper library for Saxon.
Here, I will explain how to use it to create an article generator in Java and XQuery for XHTML, called Article. It's a simple DSL for article generation, and you can download it here: http://github.com/rafalrusin/Article.

I think it is worth noticing, because the whole project took me only a short while to implement and has some interesting features:

  • embedded code syntax highlighting for many programming languages (using the external highlight program),
  • automatic href entries for links, so you don't need to type a URL twice,
  • native integration with XHTML constructs.

This is an example of the input it takes:

<a:article xmlns='http://www.w3.org/1999/xhtml' xmlns:a="urn:article">
<a:l>Some text</a:l>
<a:code lang="xml"><![CDATA[
<example>some embedded XML</example>
]]></a:code>
</a:article>

It generates XHTML output for it using the command:

./run <input.xml >output.xhtml

The interesting thing is that the XQuery expression for this transformation is very simple to write in Saxon. This is the complete code:

declare namespace a="urn:article";
declare default element namespace "http://www.w3.org/1999/xhtml";

declare function a:processLine($l) {
  for $i in $l/node()
  return
    typeswitch ($i)
      case element(a:link, xs:untyped) return <a href="{$i/text()}">{$i/text()}</a>
      default return $i
};

declare function a:articleItem($i) {
  typeswitch ($i)
    case element(a:l, xs:untyped) return
      (a:processLine($i), <br/>)
    case element(a:code, xs:untyped) return
      (a:highlight($i/text(), $i/@lang)/body/*, <br/>)
    default return "error;"
};

<html xmlns="http://www.w3.org/1999/xhtml">
  <head>
    <link rel="stylesheet" type="text/css" href="highlight.css"/>
  </head>
  <body>
  {
    for $i in a:article/*
    return a:articleItem($i)
  }
  </body>
</html>

Inside this expression, a bound Java function a:highlight is used. It takes two strings as input (the code and the language name) and returns a DOM Node containing the XHTML output of the highlight command.
Since manipulating the DOM with xquery4j is not much trouble, the a:highlight function can be as simple as this:

import java.io.IOException;
import java.io.OutputStream;

import org.apache.commons.io.IOUtils;
import org.apache.commons.lang.Validate;
import org.w3c.dom.Node;

// DOMUtils below is xquery4j's DOM helper class.

public static class Mod {
    public static Node highlight(final String code, String lang) throws Exception {
        Validate.notNull(lang);
        // Spawn the external highlight program, asking for XHTML output (-X).
        final Process p = new ProcessBuilder("highlight", "-X", "--syntax", lang).start();
        // Feed the source code into highlight's stdin from a separate thread,
        // so that its stdout can be drained concurrently (see the note below).
        Thread t = new Thread(new Runnable() {
            public void run() {
                try {
                    OutputStream out = p.getOutputStream();
                    IOUtils.write(code, out);
                    out.flush();
                    out.close();
                } catch (IOException e) {
                    throw new RuntimeException(e);
                }
            }
        });
        t.start();
        // Collect the highlighted XHTML while the feeder thread is writing.
        String result = IOUtils.toString(p.getInputStream());
        t.join();
        // Parse the XHTML string into a DOM node for the XQuery side.
        return DOMUtils.parse(result).getDocumentElement();
    }
}

Please note that the separate thread for feeding input into the highlight command is required: a spawned Process communicates over OS pipes with limited buffers, so writing all of the input before reading any output can lead to a deadlock once a buffer fills up. We therefore need to collect the output from the spawned Process concurrently.
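For contrast, here is a minimal sketch of the naive single-threaded variant and where it goes wrong (code and lang as in the highlight function above):

// Deadlock-prone variant -- do NOT use. Once highlight has produced enough
// output to fill the stdout pipe buffer, it blocks writing and stops reading
// its stdin; our write below then blocks too, and neither side makes progress.
Process p = new ProcessBuilder("highlight", "-X", "--syntax", lang).start();
OutputStream out = p.getOutputStream();
IOUtils.write(code, out);                             // may block on large inputs
out.close();
String result = IOUtils.toString(p.getInputStream()); // never reached in that case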
At the end, when we need to convert the resulting String into a DOM node, xquery4j's DOMUtils.parse(result) does the job in one call, so it's a very simple construct.
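To complete the picture, this is a rough sketch of how everything could be wired together behind the ./run script. The evaluator calls below (XQueryEvaluator, bindModule, evaluateExpression) and the article.xq resource name are assumptions standing in for the actual xquery4j API from the previous article, so treat this as a sketch of the wiring rather than exact usage:

import org.apache.commons.io.IOUtils;

public class Article {
    public static void main(String[] args) throws Exception {
        // Read the whole article source from stdin (matches ./run <input.xml).
        String input = IOUtils.toString(System.in);
        // Load the XQuery transformation shown above (hypothetical resource name).
        String expr = IOUtils.toString(Article.class.getResourceAsStream("/article.xq"));

        // Assumed xquery4j API: bind the Mod class into the urn:article
        // namespace and evaluate the expression against the input document.
        XQueryEvaluator evaluator = new XQueryEvaluator();
        evaluator.bindModule("urn:article", Mod.class);
        Object result = evaluator.evaluateExpression(expr, DOMUtils.parse(input));

        // Write the generated XHTML to stdout (matches >output.xhtml).
        System.out.println(result);
    }
}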
