Multi-module Gradle project with IDE support

This article is a short how-to about multi-module project setup with usage of the Gradle automation build tool.

Here’s how Rich Seller, a StackOverflow user, describes Gradle:

Gradle promises to hit the sweet spot between Ant and Maven. It uses Ivy’s approach for dependency resolution. It allows for convention over configuration but also includes Ant tasks as first class citizens. It also wisely allows you to use existing Maven/Ivy repositories.

So why would one use yet another JVM build tool such as Gradle? The answer is simple: to avoid the frustration caused by Ant or Maven.

Short story

I was fooling around with a fresh proof of concept and needed a build tool. I'm pretty familiar with Maven, so I created a project from an archetype and opened the build file, pom.xml, for further tuning.
I had been using Grails with its own build system (similar to Gradle, btw) for some time by then, so after quite a while without Maven I looked at the pom.xml and found it really repulsive.

Once again I felt clearly: XML is not for humans.

After some quick googling I found Gradle. It was still in beta (version 0.8) back then, but it's configured with a Groovy DSL, and that's what a human likes :)

Where are we

These days Ant is met only among IT guerrillas, Maven is still on top, and a couple of others, like Ivy, compete for the best position, while Gradle has smoothly entered its mature age. It's now available in version 1.3, released on the 20th of November 2012. I'm glad to recommend it to anyone looking for relief from XML-configured tools, or to anyone just looking for a simple, elastic and powerful build tool.

Let's build

I have already written about the basic project structure, so I'll skip the details here and only remind you of the layout:

<project root>
│
├── build.gradle
└── src
    ├── main
    │   ├── java
    │   └── groovy
    │
    └── test
        ├── java
        └── groovy

Have I just referred to myself for the first time? Achievement unlocked! ;)

Gradle, like most build tools, is run from the command line with parameters. The main parameter for Gradle is a 'task name'; for example, we can run the command gradle build.
There is no 'create project' task, so the directory structure has to be created by hand. This isn't a hassle though.
The java and groovy sub-folders aren't always mandatory; whether they are needed depends on which compile plugin is used.
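
As a side note, the default layout can also be overridden. The snippet below is only an illustrative sketch, not part of the article's build, showing how a module applying the java plugin could point at a non-standard source directory:

apply plugin: 'java'

// hypothetical override: keep production sources in src/java instead of src/main/java
sourceSets {
    main {
        java.srcDirs = ['src/java']
    }
}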

Parent project

Consider an example project 'the-app' consisting of three modules, let's say:

  1. database communication layer
  2. domain model and services layer
  3. web presentation layer

Our project directory tree will look like:

the-app
│
├── dao-layer
│   └── src
│
├── domain-model
│   └── src
│
├── web-frontend
│   └── src
│
├── build.gradle
└── settings.gradle

the-app itself has no src sub-folder, as its only purpose is to contain the sub-projects and the build configuration. If needed, it could have its own src as well.

To glue the modules together we fill the settings.gradle file under the-app directory with a single line specifying the module names:

include 'dao-layer', 'domain-model', 'web-frontend'
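
Just for illustration (the article keeps the modules flat under the root), include can also reference nested directories. A hypothetical variant with the modules kept under a modules/ sub-folder would look like this:

// hypothetical variant: sub-projects located under the-app/modules/<name>
include 'modules:dao-layer', 'modules:domain-model', 'modules:web-frontend'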

Now the gradle projects command can be executed, giving the following result:

:projects
------------------------------------------------------------
Root project
------------------------------------------------------------
Root project 'the-app'
+--- Project ':dao-layer'
+--- Project ':domain-model'
\--- Project ':web-frontend'

…so we know that Gradle has noticed the modules. However, the gradle build command won't run successfully yet, because the build.gradle file is still empty.

Sub project

As in Maven, we can create a separate build config file for each module. Let's say we start with the DAO layer.
Thus we create a new file, the-app/dao-layer/build.gradle, with one line of basic build info (notice that the new build.gradle is created under the sub-project directory):

apply plugin: 'java'

This single line of config for any of the modules is enough to execute the gradle build command under the-app directory with the following result:

:dao-layer:compileJava
:dao-layer:processResources UP-TO-DATE
:dao-layer:classes
:dao-layer:jar
:dao-layer:assemble
:dao-layer:compileTestJava UP-TO-DATE
:dao-layer:processTestResources UP-TO-DATE
:dao-layer:testClasses UP-TO-DATE
:dao-layer:test
:dao-layer:check
:dao-layer:build

BUILD SUCCESSFUL

Total time: 3.256 secs

To use the Groovy plugin, slightly more configuration is needed:

apply plugin: 'groovy'

repositories {
    mavenLocal()
    mavenCentral()
}

dependencies {
    groovy 'org.codehaus.groovy:groovy-all:2.0.5'
}

In lines 3 to 6 the Maven repositories are set. In line 9 a dependency on a specific Groovy library version is declared. Of course plugins like 'java', 'groovy' and many more can be mixed with each other.
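
To illustrate (this snippet is not part of the article's project), a single module's build file could mix a compile plugin with an IDE plugin:

// illustrative only: one module mixing the 'groovy' compile plugin with the 'eclipse' IDE plugin
apply plugin: 'groovy'
apply plugin: 'eclipse'

repositories {
    mavenCentral()
}

dependencies {
    groovy 'org.codehaus.groovy:groovy-all:2.0.5'
}

With that in place, gradle eclipse would generate Eclipse project files for the module alongside the regular build.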

If we have a settings.gradle file and a build.gradle file for each module, there is no need for a parent the-app/build.gradle file at all. That's true, but we can go another, better way.

One file to rule them all

Instead of creating many build.gradle config files, one per module, we can use only the parent's one and make it a bit more juicy. So let's move the-app/dao-layer/build.gradle one level up to the-app/build.gradle and fill it with new statements to achieve the full project configuration:

def langLevel = 1.7

allprojects {

    apply plugin: 'idea'

    group = 'com.tamashumi'
    version = '0.1'
}

subprojects {

    apply plugin: 'groovy'

    sourceCompatibility = langLevel
    targetCompatibility = langLevel

    repositories {
        mavenLocal()
        mavenCentral()
    }

    dependencies {
        groovy 'org.codehaus.groovy:groovy-all:2.0.5'
        testCompile 'org.spockframework:spock-core:0.7-groovy-2.0'
    }
}

project(':dao-layer') {
    dependencies {
        compile 'org.hibernate:hibernate-core:4.1.7.Final'
    }
}

project(':domain-model') {
    dependencies {
        compile project(':dao-layer')
    }
}

project(':web-frontend') {
    apply plugin: 'war'

    dependencies {
        compile project(':domain-model')
        compile 'org.springframework:spring-webmvc:3.1.2.RELEASE'
    }
}

idea {
    project {
        jdkName = langLevel
        languageLevel = langLevel
    }
}

At the beginning a simple variable, langLevel, is declared. It's worth knowing that we can use almost any Groovy code inside a build.gradle file: statements like if conditions, for/while loops, closures, switch-case, etc. Quite an advantage over inflexible XML, isn't it?
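
As a quick, hypothetical illustration of that flexibility (not part of the article's build file), plain Groovy control flow can drive the configuration:

// illustrative sketch only: ordinary Groovy logic inside the parent build.gradle
def langLevel = 1.7

subprojects {
    apply plugin: 'java'

    // hypothetical rule: keep one module on an older language level than the rest
    sourceCompatibility = (name == 'web-frontend') ? 1.6 : langLevel
    targetCompatibility = sourceCompatibility
}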

Next comes the allprojects block. Any configuration placed in it will influence, what a surprise, all projects, i.e. the parent itself and the sub-projects (modules). Inside the block we have the IDE (IntelliJ IDEA) plugin applied, which I wrote more about in a previous article (look under the "IDE Integration" heading). Suffice it to say that with this plugin applied here, the command gradle idea will generate IDEA project files with the module structure and dependencies. This works really well, and plugins for other IDEs are available too.
The remaining two lines in this block define the group and version for the project, similar to how it is done in Maven.

After that, the subprojects block appears. It applies to all modules but not to the parent project. Here the Groovy language plugin is applied, as all modules are assumed to be written in Groovy.
Below that, the source and target language levels are set.
After that come references to the standard Maven repositories.
At the end of the block come the dependencies on the Groovy version and on a test library, the Spock framework.

The following blocks, project(':module-name'), are responsible for each module's configuration. They may be omitted if allprojects or subprojects already configure what's necessary for a specific module. In the example, the per-module configuration goes as follows:

  • The dao-layer module has a dependency on an ORM library, Hibernate.
  • The domain-model module relies on dao-layer as a dependency. The project keyword is used here again, as a reference to another module.
  • The web-frontend module applies the 'war' plugin, which builds this module into a Java web archive. Besides that, it refers to the domain-model module and also uses a Spring MVC framework dependency (see the sketch after this list).
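
The war plugin brings a few extras of its own. The snippet below is only a speculative extension of the web-frontend block from the article; the providedCompile configuration is added by the war plugin, while the servlet-api coordinates and the archive name are illustrative:

project(':web-frontend') {
    apply plugin: 'war'

    dependencies {
        compile project(':domain-model')
        compile 'org.springframework:spring-webmvc:3.1.2.RELEASE'
        // provided by the servlet container, so kept out of the war's WEB-INF/lib
        providedCompile 'javax.servlet:servlet-api:2.5'
    }

    war {
        // hypothetical: give the archive a fixed name
        archiveName = 'the-app.war'
    }
}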

At the end, in the idea block, there is basic info for the IDE plugin. These are parameters corresponding to IDEA's general project settings, visible on the following screenshot.

jdkName should match the IDE's SDK name; otherwise it has to be set manually in the IDE after each (re)generation of the IDEA project files with the gradle idea command.
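
In practice the safest bet is to keep jdkName as a string matching the SDK name registered in IDEA. A minimal sketch, assuming the SDK is registered under the name '1.7':

idea {
    project {
        // must match the SDK name configured in IntelliJ IDEA (assumed here to be '1.7')
        jdkName = '1.7'
        languageLevel = '1.7'
    }
}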

Is that it?

In the matter of simplicity, yes. That's enough to automate a modular application build with custom configuration per module. Not rocket science, huh? Think about Maven's XML: it would take more effort to set up the same thing, and you would still end up with a less expressive configuration that is far from user-friendly.

Check the online user guide for the many configuration possibilities, or better, download Gradle and look at the sample projects.
As a tasty bait, take a look at this short selection of available plugins:

  • java
  • groovy
  • scala
  • cpp
  • eclipse
  • netbeans
  • idea
  • maven
  • osgi
  • war
  • ear
  • sonar
  • project-report
  • signing

and more, plus 3rd-party plugins…
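
Third-party plugins are wired in through the buildscript block; the sketch below only shows the mechanism, with made-up plugin coordinates and plugin id:

// illustrative only: the plugin's Maven coordinates and id are hypothetical
buildscript {
    repositories {
        mavenCentral()
    }
    dependencies {
        classpath 'com.example:example-gradle-plugin:1.0'
    }
}

apply plugin: 'example'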
