Introducing camel-drools component

Introduction

In this post I'll try to introduce an Apache Camel component for the Drools library – a great and widely used Business Rules Management System. When we decided to use Drools 5 inside ServiceMix for a big project, it turned out that there was no production-ready solution that would meet our requirements. The servicemix-drools component lacks several very important features, e.g.:
* StatefulSession database persistence for long-running processes,
* support for Complex Event Processing (event-based rules),
* Apache Camel based deployment to ease rules consequence processing,
* support for the Drools unit testing framework.

To satisfy those requirements, Maciek Próchniak created a set of utility classes which helped us run Drools inside a Camel route. Starting from this codebase, we did some refactoring, added a few new features (e.g. pluggable object persistence) and released the camel-drools component on the TouK Open Source Projects forge.

Example

To summarize the key features and show how to use the camel-drools component, let's try to implement an example taken from the Drools Flow documentation:

There is a kind of 'process' where first Task1 and Task2 are created and can be executed in parallel. Task3 needs to be executed after the completion of both Task1 and Task2.

Implementation

The Task class has two fields, a name and a completed flag; we also need an id for session serialization:

public class Task implements Serializable {
    private static final long serialVersionUID = -2964477958089238715L;    
    private String name;
    private boolean completed;

    public Task(String name) {
        this(name, false);
    }

    public Task(String name, boolean completed) {
        this.name = name;
        this.completed = completed;
    }

    public String getName() {
        return name;
    }

    public boolean isCompleted() {
        return completed;
    }

    public long getId() {
        return name.hashCode();
    }
}
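The rules below also rely on a State object whose listing is missing from the post. A minimal sketch, assuming a class analogous to Task (the single name field and the hashCode-based id are guesses modeled on Task):

```java
import java.io.Serializable;

// Hypothetical reconstruction of the State fact used by the rules:
// a simple named marker ("start", "end") kept in the Drools session.
public class State implements Serializable {
    private static final long serialVersionUID = 1L;
    private String name;

    public State(String name) {
        this.name = name;
    }

    public String getName() {
        return name;
    }

    // Mirror Task's id strategy so the session can serialize the fact.
    public long getId() {
        return name.hashCode();
    }
}
```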

We also define another class, State, representing the state of the process, needed to fire the rules in the correct order. Using that model, we can now implement our ruleset, defined in the task.drl file:

import org.apache.camel.component.drools.stateful.*

global org.apache.camel.component.drools.CamelDroolsHelper helper

rule "init"
salience 100
    when
        $s : State(name=="start")
    then
        insert(new Task("Task1"));
        insert(new Task("Task2"));
        retract($s);
end

rule "all tasks completed"
    when
        not(exists Task(completed==false))
        not(exists State(name=="end"))
    then
        insert(new Task("Task3"));
end

rule "Task3 completed"
salience 30
    when 
        Task(name=="Task3", completed==true)
    then
        insert(new State("end"));
        helper.send("direct:completed", "completed");
end

In the first rule, "init", we insert two tasks and then retract the state object from the session to avoid recursive execution of that rule. The rule "all tasks completed" shows the power of Drools – we just declare that the rule fires when "there are no incomplete tasks", without specifying which tasks. This is a 'declarative' rather than 'imperative' way of development – we get much more expressiveness than with step-by-step actions leading up to some situation.

CamelDroolsHelper is a wrapper for ProducerTemplate and can be used to send a message through another Camel route as a consequence of a rule. But how are Tasks marked as completed in the Drools session? The idea is to expose the session through a Camel endpoint, which allows inserting or updating objects passed as the body of exchanges:

public class TaskRouteBuilder extends RouteBuilder {
    @Override
    public void configure() throws Exception {
        from("direct:drools")
            .setHeader("drools.key", constant(new MultiKey(new String[] {
                "process-1"
            })))
            .to("drools:task.drl?stateful=true");
        from("direct:completed").to("log:test");
    }
}

The Drools endpoint is described by the "drools:task.drl?stateful=true" URI. It loads the rule definitions from the task.drl file and runs the endpoint in stateful mode (described in the next paragraph). When an object is passed to this endpoint, it is inserted into (or updated in) the session and the fireAllRules() method is called. Another important thing is the "drools.key" header – it is used to distinguish sessions between "processes". E.g. when we have some customer-oriented rules, we want to group facts and events in a session per customer – by some customer id. When "drools.key" is set to that id, sessions for different customers can be found and saved separately.
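The per-key session lookup can be illustrated with a plain-Java sketch (hypothetical code, not the component's implementation): each distinct key value maps to its own working-memory state, so facts inserted under one key are invisible under another:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical illustration of per-key session isolation, as done
// by the "drools.key" header: one fact store per key value.
public class KeyedSessions {
    private final Map<String, List<Object>> sessions = new HashMap<>();

    // Find (or lazily create) the session for the given key.
    public List<Object> forKey(String key) {
        return sessions.computeIfAbsent(key, k -> new ArrayList<>());
    }

    public static void main(String[] args) {
        KeyedSessions store = new KeyedSessions();
        store.forKey("customer-1").add("fact-A");
        store.forKey("customer-2").add("fact-B");
        // customer-1's session holds only its own fact
        System.out.println(store.forKey("customer-1").size()); // prints 1
    }
}
```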

Stateful session persistence

The camel-drools component can be used in two modes: stateful and stateless. The main difference between them is session persistence – only in stateful mode is the session stored in a database. Long-duration event rules are therefore handled correctly only in this mode – and this is what we use in this example. Let's look at the Spring context definition:

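The original XML listing did not survive extraction; a hedged sketch of what the sessionDAO wiring might look like (the bean class name, the dataSource reference and the table names are assumptions – only the sessionTable and objectTable properties and the need for an id sequence are described in the surrounding text):

```xml
<!-- Hypothetical reconstruction: only sessionTable/objectTable and the
     id-generation sequence are confirmed by the text below. -->
<bean id="sessionDAO" class="org.apache.camel.component.drools.stateful.JdbcSessionDAO">
    <property name="dataSource" ref="dataSource"/>
    <property name="sessionTable" value="DROOLS_SESSIONS"/>
    <property name="objectTable" value="DROOLS_OBJECTS"/>
</bean>
```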

As you can see, there are some requirements on database objects to handle session persistence correctly – two tables: one for the StatefulKnowledgeSession and one for object (fact and event) persistence. You can name them freely – just provide those names in the sessionTable and objectTable properties of sessionDAO. A sequence for id generation is also needed.

Route and rules testing

Here is an example test for TaskRouteBuilder:

@SuppressWarnings("unchecked")
public class TaskRouteBuilderTest extends TaskRouteBuilder {

    DefaultCamelContext ctx;
    ProducerTemplate tpl;
    MockSessionDAO dao;

    @Before
    public void makeContext() throws Exception {
        ctx = new DefaultCamelContext();
        ctx.addComponent("drools", new DroolsComponent(ctx));
        ApplicationContext appCtx = new ClassPathXmlApplicationContext(
            new String[] {
                "camel-drools-context.xml",
                "mock-dao-context.xml"
            });
        dao = (MockSessionDAO) appCtx.getBean("sessionDAO");
        ctx.setRegistry(new ApplicationContextRegistry(appCtx));
        ctx.addRoutes(this);
        ctx.addRoutes(new RouteBuilder() {
            @Override
            public void configure() throws Exception {
                from("direct:completed").to("mock:test");
            }
        });
        ctx.start();
        tpl = ctx.createProducerTemplate();
    }

    @Test
    public void testUpdate() throws Exception {
        Endpoint endpoint = ctx.getEndpoint("direct:drools");
        tpl.requestBody(endpoint, new State("start"));
        SessionWithIdentifier session = dao.getSession();
        Assert.assertEquals(2, session.getSession().getFactHandles().size());
        tpl.requestBody(endpoint, new Task("Task1", true));
        tpl.requestBody(endpoint, new Task("Task2", true));
        Assert.assertEquals(3, session.getSession().getFactHandles().size());
        tpl.requestBody(endpoint, new Task("Task3", true));

        MockEndpoint mock = MockEndpoint.resolve(ctx, "mock:test");
        mock.expectedMessageCount(1);
        mock.setResultWaitTime(5000L);
        mock.assertIsSatisfied();
    }
}

In the setup method some required initialization is done – the camel-drools-context.xml file is loaded and a MockSessionDAO created. The test first starts the process by passing a State object named "start" to the Drools session through the Camel route. This should add Task1 and Task2 to the session – which is verified by counting the factHandles in the session. Next, Task1 and Task2 are updated by marking them completed, which should result in Task3 being present in the session – another factHandle. The last step is to complete Task3 and check, by assertions on the MockEndpoint, that the last rule is executed. You can download the source code for this example and the whole component from here – this is the branch for the Camel 1.x version, which we use in our project.

You May Also Like

Multi module Gradle project with IDE support

This article is a short how-to on multi-module project setup using the Gradle build automation tool.

Here's how Rich Seller, a StackOverflow user, describes Gradle:
Gradle promises to hit the sweet spot between Ant and Maven. It uses Ivy's approach for dependency resolution. It allows for convention over configuration but also includes Ant tasks as first class citizens. It also wisely allows you to use existing Maven/Ivy repositories.
So why would one use yet another JVM build tool such as Gradle? The answer is simple: to avoid the frustration caused by Ant or Maven.

Short story

I was fooling around with some fresh proof of concept and needed a build tool. I'm pretty familiar with Maven, so I created a project from an artifact and opened the build file, pom.xml, for further tuning.
I had been using Grails with its own build system (similar to Gradle, btw) for some time by then, so after quite a while without Maven I looked at the pom.xml and found it really repulsive.

Once again I felt clearly: XML is not for humans.

After quick googling I found Gradle. It was still in beta (0.8 version) back then, but it's configured with Groovy DSL and that's what a human likes :)

Where are we

These days Ant is met only among IT guerrillas, Maven is still on top, and a couple of others, like Ivy, compete for the best position, while Gradle has smoothly entered its mature age. It's now available in version 1.3, released on the 20th of November 2012. I'm glad to recommend it to anyone looking for relief from XML-configured tools, or to anyone just looking for a simple, elastic and powerful build tool.

Lets build

I have already written about basic project structure, so I'll skip the details and only remind you what it looks like:
<project root>
├── build.gradle
└── src
    ├── main
    │   ├── java
    │   └── groovy
    └── test
        ├── java
        └── groovy
Have I just referred myself for the 1st time? Achievement unlocked! ;)

Gradle, like most build tools, is run from the command line with parameters. The main parameter for Gradle is a 'task name'; for example, we can run the command: gradle build.
There is no 'create project' task, so the directory structure has to be created by hand. This isn't a hassle though.
The java and groovy sub-folders aren't always mandatory; they depend on which compile plugin is used.

Parent project

Consider an example project 'the-app' with three modules, let's say:
  1. database communication layer
  2. domain model and services layer
  3. web presentation layer
Our project directory tree will look like:
the-app
├── dao-layer
│   └── src
├── domain-model
│   └── src
├── web-frontend
│   └── src
├── build.gradle
└── settings.gradle
the-app itself has no src sub-folder, as its purpose is only to contain sub-projects and the build configuration. If needed, it could have been given its own src though.
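Since there is no 'create project' task, the skeleton above has to be created by hand; a minimal sketch (assuming Groovy sources under the conventional src/main/groovy, which the tree above doesn't show explicitly):

```shell
# Create the multi-module skeleton by hand.
mkdir -p the-app/dao-layer/src/main/groovy \
         the-app/domain-model/src/main/groovy \
         the-app/web-frontend/src/main/groovy
# Empty build files, to be filled in below.
touch the-app/build.gradle the-app/settings.gradle
```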

To glue modules we need to fill settings.gradle file under the-app directory with a single line of content specifying module names:
include 'dao-layer', 'domain-model', 'web-frontend'
Now the gradle projects command can be executed to obtain such a result:
:projects

------------------------------------------------------------
Root project
------------------------------------------------------------

Root project 'the-app'
+--- Project ':dao-layer'
+--- Project ':domain-model'
\--- Project ':web-frontend'
...so we know that Gradle has noticed the modules. However, the gradle build command won't run successfully yet, because the build.gradle file is still empty.

Sub project

As in Maven, we can create a separate build config file for each module. Let's say we start with the DAO layer.
Thus we create a new file, the-app/dao-layer/build.gradle, with a line of basic build info (notice the new build.gradle is created under the sub-project directory):
apply plugin: 'java'
This single line of config in any of the modules is enough to execute the gradle build command under the-app directory, with the following result:
:dao-layer:compileJava
:dao-layer:processResources UP-TO-DATE
:dao-layer:classes
:dao-layer:jar
:dao-layer:assemble
:dao-layer:compileTestJava UP-TO-DATE
:dao-layer:processTestResources UP-TO-DATE
:dao-layer:testClasses UP-TO-DATE
:dao-layer:test
:dao-layer:check
:dao-layer:build

BUILD SUCCESSFUL

Total time: 3.256 secs
To use the Groovy plugin slightly more configuration is needed:
apply plugin: 'groovy'

repositories {
    mavenLocal()
    mavenCentral()
}

dependencies {
    groovy 'org.codehaus.groovy:groovy-all:2.0.5'
}
In the repositories block, Maven repositories are set; in the dependencies block, the Groovy library version is specified. Of course plugins such as 'java', 'groovy' and many more can be mixed with each other.

If we have a settings.gradle file and a build.gradle file for each module, there is no need for a parent the-app/build.gradle file at all. Sure, that's true, but we can go another, better way.

One file to rule them all

Instead of creating many build.gradle config files, one per module, we can use only the parent's one and make it a bit more juicy. So let's move the-app/dao-layer/build.gradle a level up to the-app/build.gradle and fill it with new statements to achieve a full project configuration:
def langLevel = 1.7

allprojects {

    apply plugin: 'idea'

    group = 'com.tamashumi'
    version = '0.1'
}

subprojects {

    apply plugin: 'groovy'

    sourceCompatibility = langLevel
    targetCompatibility = langLevel

    repositories {
        mavenLocal()
        mavenCentral()
    }

    dependencies {
        groovy 'org.codehaus.groovy:groovy-all:2.0.5'
        testCompile 'org.spockframework:spock-core:0.7-groovy-2.0'
    }
}

project(':dao-layer') {

    dependencies {
        compile 'org.hibernate:hibernate-core:4.1.7.Final'
    }
}

project(':domain-model') {

    dependencies {
        compile project(':dao-layer')
    }
}

project(':web-frontend') {

    apply plugin: 'war'

    dependencies {
        compile project(':domain-model')
        compile 'org.springframework:spring-webmvc:3.1.2.RELEASE'
    }
}

idea {
    project {
        jdkName = langLevel
        languageLevel = langLevel
    }
}
At the beginning a simple variable, langLevel, is declared. It's worth knowing that we can use almost any Groovy code inside a build.gradle file – statements like if conditions, for/while loops, closures, switch-case, etc. Quite an advantage over inflexible XML, isn't it?
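For instance, a hypothetical fragment (module names taken from this example) where ordinary Groovy control flow drives the configuration:

```groovy
// Hypothetical build.gradle fragment: a plain Groovy loop plus a condition.
def modules = [':dao-layer', ':domain-model', ':web-frontend']

modules.each { name ->
    project(name) {
        // apply an extra convention only to selected modules
        if (name.endsWith('-layer')) {
            version = '0.1-SNAPSHOT'
        }
    }
}
```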

Next comes the allprojects block. Any configuration placed in it will influence – what a surprise – all projects, i.e. the parent itself and the sub-projects (modules). Inside the block we apply the IDE (IntelliJ Idea) plugin, which I wrote more about in a previous article (look under the "IDE Integration" heading). Suffice it to say that with this plugin applied here, the command gradle idea will generate Idea's project files with the module structure and dependencies. This works really well, and plugins for other IDEs are available too.
The remaining two lines in this block define the group and version for the project, similar to how it's done in Maven.

After that the subprojects block appears. It relates to all modules but not the parent project. Here the Groovy language plugin is applied, as all modules are assumed to be written in Groovy.
Below it, the source and target language levels are set.
After that come references to the standard Maven repositories.
At the end of the block are dependencies on the Groovy version and a test library – the Spock framework.

The following blocks, project(':module-name'), are responsible for each module's configuration. They may be omitted when allprojects or subprojects already configure what's necessary for a specific module. In the example, the per-module configuration goes as follows:
  • The dao-layer module has a dependency on an ORM library – Hibernate.
  • The domain-model module relies on dao-layer as a dependency. The project keyword is used here again as a reference to another module.
  • The web-frontend module applies the 'war' plugin, which builds this module into a Java web archive. Besides that, it refers to the domain-model module and also uses the Spring MVC framework dependency.

At the end, the idea block contains basic info for the IDE plugin. These parameters correspond to Idea's general project settings visible on the following screenshot.


jdkName should match the IDE's SDK name, otherwise it has to be set manually in the IDE after each (re)generation of Idea's project files with the gradle idea command.

Is that it?

In the matter of simplicity – yes. That's enough to automate a modular application build with custom configuration per module. Not rocket science, huh? Think about Maven's XML: it would take more effort to set up the same thing, and you would still end up with a less expressive configuration, quite far from user-friendly.

Check the online user guide for the many configuration possibilities, or better, download Gradle and see the sample projects.
As a tasty bait, take a look at this short selection of available plugins:
  • java
  • groovy
  • scala
  • cpp
  • eclipse
  • netbeans
  • idea
  • maven
  • osgi
  • war
  • ear
  • sonar
  • project-report
  • signing
and more, plus 3rd-party plugins...
